I’m currently working through this exercise myself - we want to host on AWS, but aren’t sure how we’d set up Discourse for “higher availability” without a lot of faff.
The way I’d see it, you’d need to do something like this:
- RDS to host Postgres (with Multi-AZ enabled - this allows configuration changes, including resizing the RDS instance, without downtime, and gives you automatic failover if the DB seizes up)
- EC2 r4.large instance for Redis (or ElastiCache Redis, but beware of vendor lock-in)
- EC2 m5.large or any appropriate size for the actual Discourse instances
- Classic ELB to load-balance traffic between the Discourse “web workers”
- S3 for upload storage
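Wiring the split-out web workers up to those external services is mostly a matter of environment variables in the container definition. A rough sketch of the relevant `env` section of a `containers/app.yml` built from the `web_only` template in discourse_docker - all hostnames, bucket names and credentials below are placeholders:

```yaml
# containers/app.yml (sketch - hostnames, bucket names and keys are placeholders)
env:
  # External Postgres on RDS
  DISCOURSE_DB_HOST: mydb.xxxxxxxx.eu-west-1.rds.amazonaws.com
  DISCOURSE_DB_NAME: discourse
  DISCOURSE_DB_USERNAME: discourse
  DISCOURSE_DB_PASSWORD: "change-me"

  # External Redis (EC2 instance or ElastiCache endpoint)
  DISCOURSE_REDIS_HOST: redis.internal.example.com

  # Uploads go straight to S3
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: eu-west-1
  DISCOURSE_S3_BUCKET: my-discourse-uploads
  DISCOURSE_S3_ACCESS_KEY_ID: "change-me"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "change-me"
```

Because the container no longer holds any state, you can bake it into an AMI and run as many copies behind the load balancer as you like.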
As you can see, the majority of it involves breaking the Discourse stack up into its individual components and putting them onto separate machines. That doesn’t give you automatic scaling (you’d need AMIs, a Launch Configuration and Auto Scaling Groups for that), and you’d still have to manage upgrades across all instances, plus any DB migrations that come with them.
Perhaps in the future there could be a CloudFormation template for AWS, or a Packer script (for all providers), that does all this for you.
After you’ve gone through all that, you start to wonder whether it’d just be easier to run a single, appropriately-sized EC2 instance with daily backups going into S3. It’s not high availability, but it should suffice for all but the largest communities.
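For that single-box setup, Discourse can ship its own scheduled backups to S3 - no extra cron jobs needed. A minimal sketch, again with a placeholder bucket name (these map to the `backup_location`, `s3_backup_bucket` and `backup_frequency` site settings):

```yaml
# containers/app.yml (sketch - bucket name is a placeholder)
env:
  DISCOURSE_BACKUP_LOCATION: s3
  DISCOURSE_S3_BACKUP_BUCKET: my-discourse-backups/prod
  DISCOURSE_BACKUP_FREQUENCY: 1   # take a backup every day
```

Pair that with an S3 lifecycle rule to expire old backups and you have a cheap, reasonably robust disaster-recovery story.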
What I haven’t investigated yet (but will get to) is the possibility of using Kubernetes to provision/deploy Discourse. In theory, you could set up a Kubernetes cluster on AWS with kops or something similar, and delegate responsibility for the Docker containers to Kubernetes.
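To make that concrete, the Kubernetes side might look something like the manifest below: a Deployment running a stateless Discourse web image, fronted by a `LoadBalancer` Service (which provisions an ELB on AWS). The image name is hypothetical - there’s no official pre-built web-only image, so you’d build your own from discourse_docker - and the DB/Redis/S3 env vars would live in the referenced Secret:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discourse-web
spec:
  replicas: 2                     # scale web workers by bumping this
  selector:
    matchLabels:
      app: discourse-web
  template:
    metadata:
      labels:
        app: discourse-web
    spec:
      containers:
        - name: web
          image: registry.example.com/discourse-web:latest  # hypothetical, built from discourse_docker
          ports:
            - containerPort: 80
          envFrom:
            - secretRef:
                name: discourse-env   # DB/Redis/S3 settings, as in the env block above
---
apiVersion: v1
kind: Service
metadata:
  name: discourse-web
spec:
  type: LoadBalancer              # provisions an ELB on AWS
  selector:
    app: discourse-web
  ports:
    - port: 80
      targetPort: 80
```

Postgres and Redis would stay outside the cluster (RDS/ElastiCache), so Kubernetes only has to manage the stateless bits.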
If I make any progress, I’ll get back to you!