The Discourse Servers

We are hosted on MRI 2.0, heavily tuned with tcmalloc. I tried JRuby locally and was barely able to get it running; when I did, it was much slower than 1.9.3, mostly because it is missing native implementations of functionality we depend on. oj, for example, is pretty damn fast, and as a C extension it is off the table on JRuby.
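
To give a concrete sense of that gap, here is a minimal benchmark sketch comparing oj against the stdlib JSON parser; the payload and iteration count are made up for illustration:

```ruby
require 'benchmark'
require 'json'
require 'oj'

# Hypothetical payload, purely for illustration
payload = { "posts" => (1..100).map { |i| { "id" => i, "raw" => "post body #{i}" } } }
encoded = JSON.generate(payload)

Benchmark.bm(6) do |x|
  x.report("JSON") { 10_000.times { JSON.parse(encoded) } }
  x.report("Oj")   { 10_000.times { Oj.load(encoded) } }
end
```

On MRI both parsers drop into C; on JRuby oj is simply unavailable, so you fall back to slower pure-Java or pure-Ruby code paths.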

Considering GitHub, Shopify, and many other high-scale Rails outfits are sticking with MRI and improving it, I am comfortable with our decision here.

With regard to web servers, I intend to shift us to Unicorn with out-of-band GC (oobgc), and complement it with Thin for long polling. It complicates stuff a bit, so I have held off, but I will get to it. We run on Passenger, Thin, and Puma at the moment; some minor changes around Redis and forking are needed to run on Unicorn.
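
A minimal sketch of the kind of forking change involved, assuming a preloaded app and a global `$redis` connection (the variable name and connection details are placeholders, not our actual setup):

```ruby
# config/unicorn.conf.rb (sketch)
worker_processes 4
preload_app true

before_fork do |server, worker|
  # Sockets are not fork-safe; drop the master's connections so
  # they are never shared with workers.
  ActiveRecord::Base.connection.disconnect! if defined?(ActiveRecord::Base)
end

after_fork do |server, worker|
  # Each worker opens its own connections after forking.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
  $redis = Redis.new(host: "localhost", port: 6379) # placeholder details
end
```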

I do not really intend to use Puma, because our next biggest perf win is oobgc, something that is out of the question with Puma. Also, long polling is already implemented and works fine on Thin, so there is no urgency in moving it to a threaded model.
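
For context on why oobgc matters: Unicorn serves one request at a time per worker, so the GC can run between requests while the worker is idle instead of pausing mid-request. Puma's threads keep multiple requests in flight in the same process, so there is no such idle window. Unicorn ships a rack middleware for this; a sketch of wiring it up, with an illustrative interval rather than a tuned value:

```ruby
# config.ru (sketch)
require ::File.expand_path('../config/environment', __FILE__)
require 'unicorn/oob_gc'

# Run GC once every 5 requests, after the response is written,
# while the worker sits idle. 5 is an illustrative guess.
use Unicorn::OobGC, 5

run Rails.application
```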
