Does Discourse require 2GB to work "Lag Free"?

In my opinion, we should be working on improving the performance of Discourse rather than adding more features. I find it ridiculous that for Discourse to work lag free we have to use a 2GB server, whereas something like SMF barely requires 512MB. Just my two cents.

2 Likes

I work pretty extensively on performance and am quite proud of where Discourse is at the moment.

At this point pushing memory requirements down would require either

  1. Building support for JRuby so we can get native threading, which would fragment our development. Not that I am against it, but it is very complex. Even then it is questionable whether JRuby would be able to work nicely with, say, 250MB total. I don’t know.
  2. Building native threading into MRI Ruby, which would take multiple years of full-time work for the entire team.
  3. Porting all of Discourse to another platform, say golang. Another multi-year project.
  4. Running a threaded web server like puma, which is only sort of threaded due to the GIL and would be more glitchy (see the sketch below).
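
To make option 4 concrete, here is a minimal standalone Ruby sketch (not Discourse code) of what “only sort of threaded due to the GIL” means: on MRI, spreading CPU-bound work across threads takes roughly as long as running it serially, so threads only help while requests are blocked on I/O.

```ruby
# Minimal sketch, not Discourse code: on MRI the GIL lets only one thread
# run Ruby at a time, so CPU-bound work does not get faster with threads.
require "benchmark"

def cpu_work
  200_000.times { |i| Math.sqrt(i) }
end

serial = Benchmark.realtime { 4.times { cpu_work } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { cpu_work } }.each(&:join)
end

# On MRI both timings come out roughly the same; threads only overlap
# while a request is blocked on I/O (database, network, disk).
puts format("serial: %.2fs  threaded: %.2fs", serial, threaded)
```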

None of the options sound too appealing to me, so the 1GB requirement is going to have to remain for now.

It’s the nature of the beast: if you want to build a big Ruby app, it will consume a lot of memory. We continue to work on reducing memory requirements, but it takes time and is not something we can all work on.

When it comes to performance, my immediate roadmap is:

  • Improve client side rendering of topics
  • Improve how Discourse works with gigantic databases.
11 Likes

That’s not what he said.

That is what he said, which is absolutely incorrect. So really your reply here is a waste of time, since it didn’t address what was actually said.

We have tons of clients on 1GB servers and they run fine for small and medium sites. No idea what you’re talking about.

I guess I was replying to:

Which is a stretch and is unlikely to happen for Discourse any time soon.

I don’t think it ever will, and I would be so bold as to say you shouldn’t aim for it. With time, as technology matures, it might naturally come along. Or VPSes with at least 1GB of memory will simply get cheaper.

3 Likes

@sam could you share some reasons why Discourse does not use puma?

From my experience, puma works better than unicorn even with MRI. It’s stable and requires less memory. Even with the GIL, puma still helps throughput a lot when requests block on I/O.

I haven’t tested it yet, but I think that on a 1GB Discourse instance, switching from 2 unicorn workers to 1 puma worker with a 0-16 thread setting should provide a similar amount of throughput while requiring less memory.
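
Roughly, the setup I’m proposing would look like this in a puma config. This is just a sketch, not an official Discourse configuration, and the worker/thread counts are the ones assumed above.

```ruby
# config/puma.rb -- sketch of the proposed setup, not an official
# Discourse configuration; the worker and thread counts are assumptions.
workers 1        # a single forked worker instead of 2 unicorn workers
threads 0, 16    # spin up to 16 threads per worker on demand

preload_app!     # load the app before forking so forked workers share memory
```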

Thanks

As I recall it has something to do with long polling, but I can’t recall the specifics. @sam would have to answer.

Puma is usable; in fact, it is in our Gemfile. But as it stands now you would need two processes, one for puma and one for sidekiq, and they would share no memory.

So that adds up to, say, 400MB RSS, and that is before you even count postgres and redis.

That said, puma would be more “laggy” due to the GIL and lack of out-of-band GC. You may get similar throughput, but time per request would vary a lot more.
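
For context, the memory sharing in question comes from unicorn forking its workers out of a master that has already loaded the app, so the workers share pages via copy-on-write. A simplified sketch of such a config (not the actual Discourse one):

```ruby
# config/unicorn.conf.rb -- simplified sketch, not the actual Discourse config.
# Workers forked from a preloaded master share the loaded app via
# copy-on-write; a separate puma process and a separate sidekiq process
# would each carry their own full copy instead.
worker_processes 2
preload_app true

before_fork do |_server, _worker|
  # close the master's DB connection so each forked worker opens its own
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |_server, _worker|
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end
```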

3 Likes

Is sidekiq replaceable?

I understand that for anyone serious about forums it isn’t that expensive to host Discourse, but lowering the requirements would increase adoption, make it more popular, and benefit its whole ecosystem in the end, if I understand it correctly.

Replaceable with what? There has to be some method of specifying background tasks that run for Discourse to function.
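
To be concrete, a minimal Sidekiq job looks roughly like this (a made-up example, not an actual Discourse job; NotifyUserJob and NotificationMailer are hypothetical names). Whatever replaced Sidekiq would still need an equivalent way to declare, enqueue, and run this kind of work outside the web request.

```ruby
# Hypothetical example -- not an actual Discourse job.
class NotifyUserJob
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)                    # hypothetical model lookup
    NotificationMailer.digest(user).deliver_now  # hypothetical mailer
  end
end

# enqueued from the web process, picked up later by the Sidekiq process
NotifyUserJob.perform_async(42)
```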

Anyway, 1GB of RAM works absolutely fine for most Discourse sites; we have literally dozens of $99 installs running on 1GB of RAM at Digital Ocean with zero issues whatsoever. You won’t need more than 1GB of RAM unless your Discourse is especially large or especially active, so I’m not sure what the benefit of replacing Sidekiq would be, exactly?

1 Like

If it works with Puma and cron jobs, would it run with 512MB?

Cron jobs would still require memory to boot the full Rails stack, and that memory would not be shared, so it would be even more inefficient.

Puma + threads for bg jobs could work in ultra low memory, but performance would suffer.
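
A rough, hypothetical sketch of that “Puma + threads for bg jobs” idea, just to show the shape of it (run_scheduled_work is a placeholder, and this is not how Discourse schedules jobs today); it would live in something like a Rails initializer:

```ruby
# Hypothetical sketch: run recurring work in a thread inside the web process
# instead of a separate sidekiq process. Saves booting a second Rails stack,
# but the work competes with web requests for the GIL and the same memory.
Thread.new do
  loop do
    begin
      run_scheduled_work                 # placeholder for the actual job
    rescue StandardError => e
      Rails.logger.warn("background job failed: #{e.message}")
    end
    sleep 60                             # run roughly once a minute
  end
end
```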

1 Like

What if a cron job made an HTTP request for the unicorns to process, using curl? Could that work?

Yeah, but then you would very quickly run out of workers, since unicorn is single-threaded. Maybe with puma, but you are describing a very complex change there.

1 Like

I posted a question on Stack Overflow, and someone suggested that sucker_punch is designed to help with this.

Do you think it’s worth investigating? Should it work?

I can tell you Sam hates Celluloid with the fire of a thousand suns, and probably for good reason, so learning that this “sucker punch” is based on Celluloid… I’m gonna go ahead and say :no_entry:

Better news is here: Sidekiq 4.0! And @sam will be looking at pulling in the newer Sidekiq soon.

5 Likes

That is a tad harsh; I find it a complicated abstraction that I have lots of trouble grokking.

But to be fair I have never really spent lots of time figuring it out. I am sure plenty of people are able to work with it successfully.

I am, however, super happy to see this big dependency removed from sidekiq. We get a simpler, faster, easier-to-reason-about sidekiq.

5 Likes

@sam by mentioning that this would be a multi-year project, I assume you have no intention of porting Discourse from Ruby to Go, but since you mentioned Go I’m curious if you think it would have been a better choice than Ruby. I’m wondering if you’ve done any small porting experiments. Or maybe you’re really just mentioning Go in passing. It’s hard to tell from your quick comment. Anyway, I’d be curious to hear more of your thoughts on this.

1 Like

I have not done any experiments with porting stuff; I mentioned Go because it is pretty expressive and fast and makes a pretty good platform for a JSON backend. That said, the plugin story becomes significantly more complicated.

We have no plans for exploratory work in the near to medium term.

I am happy with Ruby and what we have achieved with it. The Ruby 3x3 initiative makes me particularly happy; if Ruby had a better multithreading story, we could drastically reduce our memory footprint.

I am not enthused about throwing everything away and switching platforms, but it is not totally off the cards to introduce some microservices for read tracking and the message bus (which make up the lion’s share of the traffic we handle).

I think it would be super interesting to have Message Bus ported to Go.

8 Likes

I have two Discourse instances, and they both run fine. You can potentially stretch the capabilities of a 1GB VPS by using zram, which I have done; so far, so good.

With a 1GB VPS slice costing $10/month, is it really worth the effort to try to push Discourse down to 512MB? I mean, what would the expected savings be - $60 a year? If you have some actual traffic on your community, just add AdSense to help with the financials.

3 Likes