Redis::CommandError: CROSSSLOT Keys in request don’t hash to the same slot
I did find this post but couldn’t quite work out what @sam meant.
Is it ElastiCache that you can’t cluster?
TIA
Todd
PS I’m not interested in running a Bitnami-style solution where everything runs on one box. I want to make use of various AWS services. (I’m a little surprised this isn’t a more common use case, particularly using the free tier to have a play.)
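For context on that first error: CROSSSLOT comes from Redis running in cluster mode. Every key is hashed to one of 16,384 slots (CRC16 of the key, or of the `{...}` hash-tag portion if present, mod 16384), and a multi-key command is rejected unless all of its keys land in the same slot. A minimal sketch of the slot calculation (my own illustration, not Discourse code):

```python
def crc16(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the variant Redis Cluster uses for key hashing."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def key_slot(key: str) -> int:
    """Map a key to its Redis Cluster slot, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        # Per the cluster spec, only a non-empty {...} section is hashed.
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16(key.encode()) % 16384

# Keys sharing a hash tag always land in the same slot, so multi-key
# commands on them succeed; unrelated keys usually do not.
print(key_slot("{user1}:sessions") == key_slot("{user1}:profile"))
```

If the keys a command touches don’t share a slot, cluster-mode Redis raises exactly the CROSSSLOT error quoted above. Since Discourse (and Sidekiq) issue multi-key commands, the practical answer is to run ElastiCache with cluster mode disabled.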
It’s not common because a simple single Discourse instance 4 cpu / 8 GB RAM digital ocean droplet for around $40/month can scale to huge communities, just by following our install guide. You are opting into a whole lot of complexity here for “big company” levels of scalability. Why do you need that?
I started off with offloading Postgres to RDS and using ElastiCache. It definitely worked. However, I ended up running into a couple of things:
The costs really did start to add up. Your most cost-effective option is going to be a three-year, all-upfront Reserved Instance, and I wasn’t ready to make that commitment yet. In hindsight, I’m glad I didn’t.
I ran into an issue while importing 13 years of Mailman archives. It was all my fault, but I decided I needed to re-import what I had already imported to get rid of the [Listname] subject lines. Long story short: because I copied my app.yml file as a template for the importer, the importer was also using the same RDS instance, and that really threw me for a loop after I followed the instructions on how to re-import items without the script telling me “skipping already imported items”.
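For anyone attempting the same split, the wiring is just environment variables in `containers/app.yml`; a sketch with placeholder hostnames and credentials (you also need to remove the `templates/postgres.template.yml` and `templates/redis.template.yml` entries from the `templates:` list so the container doesn’t start its own copies):

```yaml
# containers/app.yml — point Discourse at external RDS and ElastiCache.
# All hostnames and credentials below are placeholders.
env:
  DISCOURSE_DB_HOST: mydb.example.us-east-1.rds.amazonaws.com
  DISCOURSE_DB_NAME: discourse
  DISCOURSE_DB_USERNAME: discourse
  DISCOURSE_DB_PASSWORD: "change-me"
  DISCOURSE_REDIS_HOST: myredis.example.use1.cache.amazonaws.com
  DISCOURSE_REDIS_PORT: 6379
```

Note that the ElastiCache endpoint has to be a cluster-mode-disabled replication group, or you’ll hit the CROSSSLOT error quoted at the top of this thread.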
Now, I’ve moved to just having a t2.micro with Postgres and Redis all in the container. Discourse gets updated often enough that I’m not worried about losing the automatic patching that RDS or ElastiCache provides.
I do make sure to back up my data to S3 periodically, and that is a very nice feature.
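For reference, the S3 backup piece is also just a handful of settings, which can be pinned as env vars in app.yml (bucket name, region, and keys here are placeholders):

```yaml
# app.yml env section — ship Discourse's automatic backups to S3.
# Bucket, region, and credentials below are placeholders.
  DISCOURSE_BACKUP_LOCATION: s3
  DISCOURSE_S3_BACKUP_BUCKET: my-discourse-backups
  DISCOURSE_S3_REGION: us-east-1
  DISCOURSE_S3_ACCESS_KEY_ID: "change-me"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "change-me"
```

The backup frequency and how many backups to keep are ordinary site settings in the admin UI; an S3 lifecycle rule on the bucket is a cheap way to expire old archives.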
TL;DR It is more complex and expensive to do it the “right” AWS way. Just run it all in the container and save backups to S3 and you’ll be fine. If you need more horsepower, shut down your instance and start it back up as an M5 or something.
I would dispute referring to it as the “right” way. AWS doesn’t expect customers to use those components if their implementations don’t fall within the scale of application they’re designed for. If you read through a lot of the AWS implementation whitepapers they pretty regularly try to deter people from using a howitzer for shelling peanuts.
The problem is that we techies see features and get excited about using them (aka play), even when it doesn’t necessarily make sense to do so.
Anything with mission-critical data or high ARPU would benefit from RDS, ElastiCache, Amazon’s container services, and all of the other stuff in AWS. Communities just don’t fall under that umbrella.