How does Discourse take advantage of multiple CPU cores on the server?

As far as I know, Ruby MRI does not run on more than one CPU core, despite the availability of others. How does Discourse deal with this?

The same way many other languages do: multiple different HTTP requests are handled by multiple different CPUs.


Let me explain:

Given the Global Interpreter Lock in Ruby MRI, how does Discourse translate the processing power of a multi-CPU-core system into real performance boosts?

More cores means more simultaneous HTTP requests being serviced. That’s all.

If you read further here you will see me advocating at great length for high clock rate, high single thread efficiency CPUs. There is a reason for that.


In short, can you tell me which core third-party or contributed libraries / Ruby gems are used to achieve that concurrency? Thank you.

In our particular case we use Unicorn, which allows multiple worker processes to service requests on a single port.
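A minimal Unicorn config sketch showing the idea (the worker count, port, and timeout here are illustrative, not Discourse's actual settings):

```ruby
# config/unicorn.conf.rb -- illustrative values only
worker_processes 4   # roughly one forked worker per CPU core
listen 3000          # all workers accept connections on the same port
preload_app true     # load the app once, then fork; copy-on-write saves memory
timeout 30           # reap workers stuck on a single request
```

Because each worker is a full OS process with its own Ruby interpreter, the GIL in one worker never blocks the others, so four workers can genuinely use four cores.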

Also keep in mind that the Global Interpreter Lock is released when we run JavaScript in mini_racer and when we run queries in the pg gem.
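A small illustration of what releasing the lock buys you within a single process. mini_racer and pg drop the lock inside their C extensions; `Kernel#sleep` also releases it, so it stands in here for a blocking C-level call (the thread count and durations are arbitrary):

```ruby
require "benchmark"

# Four threads each "block" for 0.2s. Because sleep releases the
# interpreter lock, the threads overlap: total wall time stays near
# 0.2s rather than the 0.8s a serialized run would take. A C extension
# that holds the lock for its whole call would show no such overlap.
elapsed = Benchmark.realtime do
  4.times.map { Thread.new { sleep 0.2 } }.each(&:join)
end

puts format("wall time: %.2fs", elapsed)
```

This is why a Unicorn worker isn't completely frozen while pg waits on PostgreSQL: other threads in that process (e.g. for monitoring) can still run.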