Using thin versus unicorn


(dougalcorn) #1

Continuing the discussion from With the official minimum requirement of the server, how many people can it withstand at the same time?:

Using unicorn changes all these numbers and I don’t have any experience with unicorn and discourse together. I see there’s a unicorn.sample.conf.rb, but the INSTALL-ubuntu.md doesn’t mention using it. There seems to be some hesitation about using unicorn, but maybe not. I’d love @sam or @codinghorror to talk about the virtues of using thin versus unicorn. Have there been problems running under unicorn? Are there issues running sidekiq under unicorn? From what I understand, unicorn is superior to thin in terms of raw requests per second handling.


(Anthony Sekatski) #2

What about Puma? I read that it’s a more memory-efficient server.


(Ben T) #3

There was this previous discussion about the use of unicorn. I haven’t been running discourse at any scale, but my environments aren’t bound by memory so much as by I/O, so unicorn is of interest to me.

http://meta.discourse.org/t/unbound-memory-usage/8137/5?u=trident


(Robin Ward) #4

@sam is the expert on this but he’s on vacation so I’ll try to answer :smile:

It comes down to our message bus feature, which allows us to send updates to the browser dynamically. It uses long polling to do this, which only works with a server that can handle thousands of connections, even if they’re mostly idle.

To do this, we make use of the rack hijack API. However, it doesn’t fully work on all servers right now. In particular, thin has a best-in-class implementation.

Having said that, I think other servers have caught up and with some work we can support them. In particular the edge Passenger should support it (their initial stab had an error).
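
For readers unfamiliar with rack hijack, here is a minimal sketch, assuming a server that advertises support via `env['rack.hijack?']`. It is not Discourse’s actual code, and the endpoint and JSON payload are made up, but it shows why hijacking suits long polling: the app takes over the raw socket and can hold it open without tying up a request worker.

```ruby
# Hedged sketch of "full" rack hijacking (not Discourse's implementation).
class HijackLongPollSketch
  def call(env)
    unless env['rack.hijack?']
      return [200, { 'Content-Type' => 'text/plain' }, ['hijack not supported']]
    end

    env['rack.hijack'].call              # take ownership of the client socket
    io = env['rack.hijack_io']

    Thread.new do
      # In a real message bus the app would block here until a message is
      # ready to deliver, instead of sleeping.
      sleep 30
      io.write("HTTP/1.1 200 OK\r\nContent-Type: application/json\r\nConnection: close\r\n\r\n")
      io.write('{"example":"payload"}')
      io.close
    end

    # After a full hijack the server ignores this response triplet.
    [200, {}, []]
  end
end
```

The handover of the raw socket is exactly the part that behaves differently from server to server, which is why support was uneven at the time.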


(Adam Baxter) #5

Any plans to support WebSocket in the long run? Wouldn’t that be more efficient than long polling?


(Robin Ward) #6

The first version of the message bus used web sockets. Unfortunately we had many more issues with it, so in the interest of shipping we reverted to long polling.

As for efficiency, long polling is really not that bad. All those connections are idle most of the time. Additionally, supporting web sockets in Rails has many of the same server compatibility issues that long polling does.

I wouldn’t be against adding them back once the message bus is stable and battle tested, but there’s no major advantage right now.


(dougalcorn) #7

I guess I’m a little confused. I believe “long polling” refers to a timeout on the client side that does an HTTP GET to see what data has changed since the last poll. Is that correct? How does that relate to the thin server? Does it leave the connection open between the client and server? How does thin handle that? I thought each thin server could only handle a single connection at a time.


(Sam Saffron) #8

Thin has supported a concept of “async” connections from the dawn of time.

More recently, this idea was ratified into the Rack spec as the rack hijack API, something that, funnily enough, thin does not support.
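
To illustrate the thin-style async flow mentioned above, here is a hedged sketch assuming thin (or another EventMachine-based server) that provides `env['async.callback']`, with a timer standing in for a real message arriving:

```ruby
require 'eventmachine'   # thin already runs inside EventMachine

# Rough sketch of thin's "async" convention (not Discourse/message_bus code).
class ThinAsyncSketch
  ASYNC_RESPONSE = [-1, {}, []].freeze

  def call(env)
    callback = env['async.callback']   # provided by thin
    unless callback
      return [200, { 'Content-Type' => 'text/plain' }, ['async not supported']]
    end

    # Pretend a message shows up after 30 seconds; a real message bus would
    # invoke the callback whenever it actually has something to deliver.
    EM.add_timer(30) do
      callback.call([200, { 'Content-Type' => 'application/json' }, ['{"example":"payload"}']])
    end

    ASYNC_RESPONSE   # status -1 tells thin the response will arrive later
  end
end
```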

Discourse uses the message_bus gem, which supports both thin async and rack hijack. I turned off hijack by default because Passenger has a very dangerous implementation at the moment.

Keep in mind that unicorn is a stickler about the version of rack it boots with and will not enable hijack unless it boots with rack 1.5 or above; Rails 3.2 is quite conservative about its rack version and is locked to 1.4.x.

So, bottom line: when we upgrade to Rails 4 we may be able to recommend unicorn. I hate recommending a server that only gives you a subset of the Discourse experience. At the moment I can easily recommend Thin or Puma (with Puma you will need to enable long polling).
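
For context, the message_bus gem’s server-side API boils down to subscribing to and publishing on named channels; the transport underneath (thin async, rack hijack, or plain polling) is the detail being discussed above. A minimal sketch, with a made-up channel name and payload:

```ruby
require 'message_bus'

# Server-side subscription: the block runs whenever a message arrives on the
# channel. The channel name is purely illustrative.
MessageBus.subscribe '/demo' do |msg|
  puts msg.data.inspect
end

# Publish a payload; subscribed clients (long polling or plain polling) and
# server-side subscribers receive it.
MessageBus.publish '/demo', 'example payload'
```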


(Sam Saffron) #9

To me WebSockets is a solution looking for a problem that introduces 100 new problems at the same time.


(Adam Baxter) #10

Thanks for the extra info, Sam; it makes more sense seeing the extra constraints.


(Robin Ward) #11

It sounds like you’re thinking of short polling. Short polling is where you make a request every x seconds for an update. Long polling involves making a connection to the server that is held in a pending state. It sits there doing nothing until the server has something to reply with. When the server has a message to send to the client, it sends it down the connection, and then the connection is closed. The client then immediately opens a new connection to wait for another message.

Thin can only handle one Rails request at a time per process, but the many connections in a pending state don’t count, since they’re sitting there doing nothing until they need a message.
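
To make the contrast concrete, here is a purely illustrative client-side sketch in Ruby; the /poll endpoint is hypothetical and stands in for whatever the server actually exposes:

```ruby
require 'net/http'

# Short polling: ask every few seconds whether anything has changed, mostly
# getting empty answers back.
def short_poll(uri)
  loop do
    body = Net::HTTP.get(uri)
    puts body unless body.empty?
    sleep 5                                   # idle between requests
  end
end

# Long polling: the request simply hangs until the server has something to
# say; once a message arrives and the connection closes, reconnect and wait.
def long_poll(uri)
  loop do
    Net::HTTP.start(uri.host, uri.port, read_timeout: 60) do |http|
      body = http.get(uri.path).body          # blocks until the server replies
      puts body unless body.to_s.empty?
    end
  end
end

# e.g. long_poll(URI('http://example.com/poll'))
```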


(Alex Egg) #12

Can you elaborate on this a little? I see that when you require message_bus in a Rails project it inserts itself as middleware; long polling doesn’t seem to work, but traditional polling does.


(Sam Saffron) #13

Puma should support rack hijack; I’m not sure why it would have any issues.