I wrote the message bus, so I should answer this. However, before I answer anything, let me explain the message bus.
(in Ruby code, on the server)
# publish the string norris to a channel called /chuck
MessageBus.publish('/chuck', 'norris')
# publish the string 'norris' to the /chuck channel, but only to these particular user_ids
MessageBus.publish('/chuck', 'norris', user_ids: [1, 2, 3])
# subscribe to the channel '/chuck' on the server
MessageBus.subscribe('/chuck') do |msg|
  # yay, I got a message on the /chuck channel
  data = msg.data
  site_id = msg.site_id
  channel = msg.channel
  user_ids = msg.user_ids
  # a global, ever-increasing id for the message
  global_id = msg.global_id
  # a unique id for this message within the channel
  message_id = msg.message_id
end
# give me all the messages after local message id 10 on the /chuck channel
messages = MessageBus.backlog('/chuck', 10)
# the last local id on the /chuck channel
id = MessageBus.last_id('/chuck')
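To make the backlog and last_id calls concrete, here is a hypothetical, in-memory sketch of the bookkeeping they imply (the channel name and data are made up, and the real implementation stores all of this in Redis):

```ruby
# Toy stand-in for the per-channel backlog: each channel keeps an ordered
# list of [id, data] pairs, with ids increasing within the channel.
backlogs = Hash.new { |h, k| h[k] = [] }

publish = lambda do |channel, data|
  log = backlogs[channel]
  id  = log.size + 1          # next local id for this channel
  log << [id, data]
  id
end

publish.call('/chuck', 'norris')     # id 1
publish.call('/chuck', 'roundhouse') # id 2
publish.call('/chuck', 'kick')       # id 3

# a client that last saw message id 1 asks for everything after it,
# like MessageBus.backlog('/chuck', 1)
missed = backlogs['/chuck'].select { |id, _| id > 1 }
# => [[2, "roundhouse"], [3, "kick"]]

# and the analogue of MessageBus.last_id('/chuck')
last_id = backlogs['/chuck'].last&.first
# => 3
```

Because the backlog belongs to the channel rather than to any client, a client only has to remember one integer (its last seen id) to catch up.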
On the client side you have (in JavaScript):
MessageBus.subscribe('/chuck', function(data){
  // called when the server publishes a message
});
MessageBus.unsubscribe('/chuck', fn)
Why not use Faye?
Faye is an awesome project; I have nothing bad to say about it. But my intent with Message Bus is both wider and narrower than Faye's.
Faye supports the full Bayeux protocol; it abstracts transport and storage, so you can plug in Redis and WebSockets if you wish. It has both a Node and a Ruby port.
Faye, by design, is a lot of things to a lot of people. Message Bus, on the other hand, has a much more specific use case, and a lot of the decisions I made reflect that.
- Message Bus is opinionated: it supports only the protocol it needs to drive Discourse. It supports only Redis for storage. Message Bus does not support WebSockets; it supports only polling and long polling.
- Message Bus is multi-host aware. We serve both http://meta.discourse.org and http://try.discourse.org from the same pool of processes, and Message Bus has smart enough routing to ensure only the correct site gets the messages targeted at it.
- Message Bus is efficient and stores no client state. Many storage strategies save up messages in per-client buckets (Faye does this). That means that when you distribute a message, you have to add it to every client bucket that cares about it, and you have to worry about expiring those buckets at some point. Message Bus instead stores messages in a per-channel backlog. This allows clients to recover from lost messages days later, as long as the messages are still in the channel's backlog.
- Message Bus is replayable: at any point you can request a backlog of all the messages on a channel (and you can control how big you allow the backlog to get).
- Message Bus is small: the entire implementation fits in a handful of files (see https://github.com/SamSaffron/message_bus/tree/master/lib/message_bus). Because it only supports a limited protocol, the code can be a lot smaller.
- Message Bus is robust: not many buses can pass a test like this: https://github.com/SamSaffron/message_bus/blob/master/spec/lib/multi_process_spec.rb
- Message Bus is used for intra-server comms: if you are running Discourse across three machines and need to expire a cache, you can use Message Bus for that.
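As a sketch of that cache-expiry pattern: the channel name below is hypothetical, and a trivial in-process dispatcher stands in for MessageBus.subscribe / MessageBus.publish so the example is self-contained (in real code, Redis fans the message out to every app process):

```ruby
# Minimal in-process stand-in for the bus, just to show the pattern.
subscribers = Hash.new { |h, k| h[k] = [] }

subscribe = ->(channel, handler) { subscribers[channel] << handler }
publish   = ->(channel, data)    { subscribers[channel].each { |h| h.call(data) } }

# each app process keeps a local cache...
cache = { 'site_settings' => { 'title' => 'Discourse' } }

# ...and registers, once at boot, a handler that drops stale entries
subscribe.call('/cache/expire', ->(key) { cache.delete(key) })

# any machine can now invalidate the cache fleet-wide with one publish
publish.call('/cache/expire', 'site_settings')
cache.key?('site_settings') # => false
```

The appeal is that the expiry logic rides on exactly the same publish/subscribe machinery as everything else; no separate coordination channel is needed.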
Historically, I originally did not use Faye because I wanted to use em-websocket; in fact, I even wrote integration bits to allow for em-websocket support in Thin. em-websocket is by far the most complete WebSocket implementation in Ruby, far more complete than Faye's. Since then I have changed my tune.
These days I don’t really believe the complexity added by WebSocket support buys you much over long polling. Additionally, if you really must have WebSockets reliably, you must be using HTTPS, and you have to have robust fallback logic just in case: the various rewriting proxies prevalent on mobile networks and in planes and hotels will muck with traffic and cause sockets to hang.
It was also critical for me that the same process that serves our web requests can serve the sockets on the same port (something I was not able to do with Faye's WebSockets). I wanted to ease deployment as much as I could.