I know, questions like "how much is too much" or "is Emacs better than vi when there is nano" are hard, even impossible, to answer, but still.
I'm considering creating a new droplet at DigitalOcean just because of these AI things. So, which one gives the best money/benefit ratio for an otherwise low-traffic forum with very little money involved, when the target is 16 GB RAM:
basic, 112 € — 8 cores Intel or AMD
general, 126 € — 4 cores
CPU-optimized, 168 € — 8 cores regular Intel
memory-optimized, 84 € — 2 cores
(USD is almost the same as the euro nowadays)
Again, I don't know anything, but because Discourse is a client-side app or something, totally different from PHP-based WordPress, it doesn't need that much CPU power. Or am I totally lost? And do AI solutions change that playbook completely, needing both RAM and CPU?
And the actual, real question is, of course: what is the minimum cost if one wants, for example, the Related Topics block?
The main cost with the AI "Related Topics" feature is that you have to generate embeddings for all your existing topics. On large forums that takes a while, and it is the "expensive" part of the operation. However, you only need to run it once, so you can lean on hourly instances and pay just the bare minimum here.
After all those embeddings are generated, you only need to create new ones for new and edited topics, and for that you can probably get by with CPU-based inference.
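To make the idea concrete, here is a minimal sketch of how "related topics" can be looked up once embeddings exist: rank topics by cosine similarity between their vectors. This is an illustration only, not the actual Discourse AI implementation, and the tiny 3-dimensional vectors and topic titles are made up (real embeddings have hundreds of dimensions and are stored in PostgreSQL via pgvector).

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings standing in for the real per-topic vectors.
topic_embeddings = {
    "Choosing a droplet size": [0.9, 0.1, 0.2],
    "Self-hosting Discourse AI": [0.7, 0.5, 0.1],
    "Gardening tips": [0.1, 0.9, 0.7],
}

def related_topics(query_vec, embeddings, top_n=2):
    # Sort all topics by similarity to the query topic, highest first.
    scored = sorted(
        embeddings.items(),
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [title for title, _ in scored[:top_n]]

print(related_topics([0.85, 0.2, 0.15], topic_embeddings))
# → ['Choosing a droplet size', 'Self-hosting Discourse AI']
```

The point of the backfill discussed above is just producing those vectors once; after that, serving "related topics" is a cheap similarity search in the database.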
So let’s say you now have:
One droplet at Digital Ocean running Discourse
During the backfill you can have:
One droplet at Digital Ocean running Discourse
One droplet at Digital Ocean running PostgreSQL for storing the embeddings
One VPS at Vultr for computing embeddings fast
After the backfill you change it to:
One droplet at Digital Ocean running Discourse
One droplet at Digital Ocean running PostgreSQL for storing the embeddings and now also the embeddings service
As for the droplet size for #2, a small one with 4 GB RAM may be enough; I have to double-check how much RAM that embeddings service container uses.
We are actively working on it, and we will be making lots of changes in the coming weeks as we roll this plugin out to our Enterprise customers and gather feedback.
That said, spending less than $10 to give this a spin, offer the feature to your community, and provide early feedback sounds like a great deal to me, but it depends on your money and time constraints.
One thing we know will happen: at the moment we only use the OP for the related-topics embeddings, and we will be experimenting with passing the OP plus as many replies as fit, which means regenerating all the embeddings. That would cost you $3 and one hour of your time again.
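The back-of-envelope math behind renting an hourly instance for a one-off backfill can be sketched like this. The $3 figure above is the real-world reference point; the rates and throughput below are purely assumed for illustration, not actual provider prices or benchmarks.

```python
# Rough cost estimate for a one-off embeddings backfill on an hourly VPS.
# All numbers below are assumptions for illustration only.
hourly_rate_usd = 0.50    # assumed hourly price of a temporary compute VPS
topics = 20_000           # assumed number of existing topics to embed
topics_per_hour = 10_000  # assumed embedding throughput on that VPS

hours_needed = topics / topics_per_hour
backfill_cost = hours_needed * hourly_rate_usd
print(f"{hours_needed:.1f} h, ${backfill_cost:.2f}")
# → 2.0 h, $1.00
```

Because you destroy the instance right after the backfill, the one-off cost stays in single-digit dollars even if the assumptions are off by a factor of a few.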