Raptor web server

(Ilya Kuchaev) #1


Any thoughts?

(Jakob Borg) #2

Mostly “why not just release the code and skip the hype website”…

(Sam Saffron) #3


Is this really the bottleneck?

So, you can serve the string “Hello World” 4 times faster from the raw web server.

  • What about the DB calls that are 40% of the request’s cost
  • Or Active Record overhead that is 20% of the request’s cost
  • Or crap omniauth adds that is 2% of the request’s cost
  • Or time in controller and models and other rack filters that adds up to all the rest

Congratulations, you can now run Discourse 0.5% faster.
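The arithmetic here is Amdahl's law: if only a small fraction of a request's cost is in the raw HTTP layer, making that layer 4x faster barely moves the total. A quick sketch (the 2% HTTP-layer share is an illustrative assumption, not a measured number):

```ruby
# Amdahl's law: overall speedup when only a fraction f of the work
# becomes k times faster. Numbers are illustrative, loosely based on
# the cost breakdown above.
def overall_speedup(fraction_sped_up, factor)
  1.0 / ((1.0 - fraction_sped_up) + fraction_sped_up / factor)
end

speedup = overall_speedup(0.02, 4.0)  # HTTP layer ~2% of request, 4x faster
puts "%.1f%% faster overall" % ((speedup - 1) * 100)  # → "1.5% faster overall"
```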

(Kane York) #4

And the client-side first render cost which is about 5x the server time on meta…

(Erlend Sogge Heggen) #5

Well, if anyone cares, it’s out now:

(Sam Saffron) #6

I have tremendous respect for @honglilai: he has done a huge amount to ease deployment of Rails and to improve Ruby and its ecosystem.

However, I do not think using Raptor would result in any noticeable performance increase in Discourse.

  • Our Docker install uses Unicorn with Out-Of-Band GC support. (Our mechanism is not even supported on passenger enterprise, there is no memory probing there.)

  • Our Docker install supports rolling deploys with zero downtime. (A passenger enterprise feature which costs $$)

  • We serve all static assets via NGINX and take advantage of various NGINX features such as caching and rate limiting.

  • We have a custom forker for Sidekiq that helps keep memory usage down, which is not yet supported in Passenger AFAIK.

  • We do not have a bottleneck around the HTTP parser or handler. The front page takes 50-200ms server side, and easily 99% (or even 99.5%) of that time is spent in Rack middleware, Rails and DB calls. The X-Runtime header is almost always the same as the time reported by rack-mini-profiler (which is itself Rack middleware).
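The X-Runtime comparison works because that header is set by Rack middleware wrapping the whole downstream stack, so its timing captures nearly all server-side work. A minimal sketch of such a middleware (along the lines of Rack's built-in Rack::Runtime; names here are illustrative):

```ruby
# Minimal X-Runtime-style Rack middleware: it wraps everything below it
# (other middleware, Rails, DB calls), so the elapsed time it records is
# essentially the full server-side cost of the request.
class RuntimeHeader
  def initialize(app)
    @app = app
  end

  def call(env)
    started = Process.clock_gettime(Process::CLOCK_MONOTONIC)
    status, headers, body = @app.call(env)
    elapsed = Process.clock_gettime(Process::CLOCK_MONOTONIC) - started
    headers["X-Runtime"] = format("%.6f", elapsed)
    [status, headers, body]
  end
end
```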

I applaud all performance work here, but am afraid it is not that relevant to us.

(Hongli Lai) #7

I have to agree. It looks like you guys have so much custom stuff in there that staying with Unicorn might make better sense.

(Bráulio Bhavamitra) #8

Nice information @sam. Some questions:

  • About NGINX rate limiting: could you please share your config?
  • I got really interested in your Unicorn OOB GC. How does it differ from Unicorn's OobGC?

btw, your app is pretty fast :slight_smile:

(Bráulio Bhavamitra) #9

@sam, I just copied and tried the unicorn_oobgc.rb in my app, and the throughput (req/s) with it is invariably worse than without it.

(Sam Saffron) #10

It's designed for Ruby 2.0 and may need tuning depending on the application profile. For Ruby 2.1+, see: Ruby 2.1: Out-of-Band GC · computer talk by @tmm1
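For reference, the out-of-band idea is to take GC pauses between requests rather than inside them. A much-simplified sketch of the pattern (not the actual unicorn_oobgc.rb; a production version would also check that no request is queued before collecting, and the interval of 5 is an arbitrary illustrative choice):

```ruby
# Simplified out-of-band GC middleware sketch: after every N responses,
# trigger a minor GC (Ruby 2.1+ keywords) so collection work happens
# between requests instead of adding to a request's latency.
class OobMinorGC
  def initialize(app, interval: 5)
    @app = app
    @interval = interval
    @seen = 0
  end

  def call(env)
    response = @app.call(env)
    @seen += 1
    if @seen >= @interval
      @seen = 0
      # full_mark: false requests a minor (young-generation) GC only
      GC.start(full_mark: false, immediate_sweep: true)
    end
    response
  end
end
```

The reported slowdown is plausible if the interval or GC tuning doesn't match the app's allocation profile, which is why results vary between applications.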