Full site CDN acceleration for Discourse

Fastly, CloudFlare and a few other CDNs offer a mode where they accelerate dynamic content.

In a nutshell, you point your domain's DNS at the CDN, and the CDN intelligently decides how to handle each request.

  • Static content can be easily served from cache.
  • Dynamic content can be routed to the site.

This provides some advantages over only shipping static assets (which is covered in the CDN howto):

  • You can elect for “shielding” that protects your site from traffic spikes.
  • Dynamic content can be accelerated using techniques like Railgun. (Note: in general our payload fits in 1 RTT, so this has less of an impact.)
  • SSL negotiation can happen at the edge, cutting out expensive round trips for negotiation.

If you enable full site acceleration with a CDN, it is critical that you follow three rules:

  1. The “message bus” must be served from the origin.

  2. You need to set up X-Forwarded-For trust. For Cloudflare, add cloudflare.template.yml to your app.yml file.

  3. Be extra careful with techniques that apply optimisations to the site; features like Rocket Loader can stop Discourse from working. Discourse is already heavily optimised, so this is not needed.
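For rule 2, enabling the stock Cloudflare template is a one-line addition to the templates list in your app.yml (your existing list will differ; the other entries below are just a typical example):

```yaml
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  ## restores real client IPs from Cloudflare's forwarding headers
  - "templates/cloudflare.template.yml"
```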

To serve “long polling” requests from a different domain, set the Site Setting long polling base url to the origin server.

For example, if your site is at “http://forum.example.com” you should set up http://forum-direct.example.com/ and plug it into the site setting. If you don’t, your site will be broken.

If you are fronting Discourse with Varnish, you probably want to follow the same trick here and bypass Varnish for the message bus requests.
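A minimal sketch of that bypass, assuming Varnish 4+ VCL (the URL prefix is Discourse's message-bus path):

```vcl
# Send message-bus long polling straight through to the backend.
# Pipe mode turns the connection into a raw TCP relay, so Varnish
# neither caches nor times out the held-open request.
sub vcl_recv {
    if (req.url ~ "^/message-bus/") {
        return (pipe);
    }
}
```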

Boring technical notes:

Achieving a working message bus on a completely different domain is quite challenging. Our message bus is aware of which user is polling, but the other domain may have no cookies set up, so left untouched there are two issues. Firstly, you can’t even make standard ajax requests cross-domain without a huge CORS dance.

Secondly, we needed a mechanism to inform the other domain who the user is so we can poll for the correct information.

When long polling base url is changed, Discourse ships an extra meta tag that shares a “cross domain” auth token. This token is passed back to the message bus using a custom header. The token expires after 7 days, or as soon as the user logs off. In future we will probably amend it so the token has N uses and is automatically reissued once they are used up.

You can see most of the implementation here: FEATURE: allow long polling to go to a different url · discourse/discourse@aa9b3bb · GitHub


I don’t know what it means… fits in 1 RTT?

1 round trip. Read up about TCP congestion control, initial windows and so on.


UPDATE 2015-04-02: I just made the full example more complete.

I am confused with this definition.

But I think this should clarify it. Please don’t hesitate to tell me if I’m doing something wrong.

The confusing part is the “http://some-origin.com/” example. If you are behind Fastly, you have to use a CNAME entry, and therefore you have to use a subdomain rather than the top-level domain.

Background: in DNS, a top-level domain name (i.e. “some-origin.com”) can only have A records. Since Fastly requires we use a CNAME entry, we have no choice but to use a subdomain.

Let’s say that we will then use “http://discourse.some-origin.com/” to serve our Discourse forum so we can use Fastly.

Now there’s this thing called “long polling”, which is basically an HTTP request that is held open for a long time before anything is returned. If we use the Fastly or Varnish address, as Discourse would by default, Varnish will time out and “long polling” won’t work.

More background: Varnish has an option to bypass caching in known contexts through vcl_pipe, which is roughly a raw TCP socket. But Fastly doesn’t offer it because of the size of their setup.

Proposed setup

Let’s enable long polling and expose our site through Fastly. We’ll need two names: one pointing to Fastly, and the other to the IP addresses we enter in the service dashboard.

  1. discourse.some-origin.com: that’s our desired Discourse site domain name
  2. discoursepolling.some-origin.com (pick any name): the one we’ll configure in Discourse to reach our public-facing frontend web server directly

In my case, I generally have many web apps running that are only accessible from my internal network. I refer to them as “upstreams”, the same term NGINX uses in its config. Since the number of web apps you host on a site can fluctuate, you still want the public IP addresses to remain stable. That’s why I set up NGINX servers in front that proxy to the internal web app servers. I refer to those as “frontends”.
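The frontend/upstream split described here might look like this in NGINX (names and addresses are made up for illustration):

```nginx
# Internal "upstream" app servers, reachable only from the private network.
upstream discourse_app {
    server 10.0.0.10:80;
    server 10.0.0.11:80;
}

# Public-facing "frontend" that Fastly talks to.
server {
    listen      443 ssl;
    server_name discourse.some-origin.com;

    location / {
        proxy_pass       http://discourse_app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```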

Let’s say you have two public facing frontends running NGINX.

Those are the ones you set up in Fastly, like this.

Here we see two backends in the Fastly panel at Configure -> Hosts.

Notice that in this example I’m using port 443 because my backends are configured to communicate between Fastly and my frontends over TLS. But you don’t need to.

Quoting @sam again:

[quote=“sam, post:1, topic:21467”]
To serve “long polling” requests from a different domain, set the Site Setting long polling base url to the origin server.[/quote]

What this really means is that we would have to put one of those IP addresses in the Discourse settings.

What I’d recommend is to create a list of A entries for all your frontends.

In the end we need three things:

  1. What’s the public name that Fastly will serve
  2. Which IPs are the frontends
  3. Which hostname we want to use for long polling and we’ll add it to our VirtualHost

The zone file would look like this:

# The public facing URL
discourse.some-origin.com.  IN CNAME global.prod.fastly.net.

# The list of IP addresses you’d give to Fastly as origins/backends
frontends.some-origin.com.  IN A
frontends.some-origin.com.  IN A

# The long polling URL entry
discoursepolling.some-origin.com.  IN CNAME frontends.some-origin.com.

That way you can set up the “long polling base url” correctly without creating a single point of failure.

Then we can go into the Discourse admin area and set the “long polling base url” to our other domain name.

# /etc/nginx/sites-enabled/10-discourse

# Let’s redirect to SSL, in case somebody tries to access the direct IP with
# host header.
server {
    listen      80;
    server_name discoursepolling.some-origin.com discourse.some-origin.com;
    include     common_params;
    return      301 https://$server_name$request_uri;
}

server {
    listen      443 ssl;
    server_name discoursepolling.some-origin.com discourse.some-origin.com;
    # Rest of the NGINX server block goes here.
    # Also, I would add a condition for requests on discoursepolling that
    # aren't actually for polling paths.
    # TODO: find paths specific to polling
}
To see if it works, look in your web browser’s developer tools “Network” inspector for /poll calls to discoursepolling.some-origin.com, and check that they return a 200 OK status code.


To clarify something here: in a multisite configuration, should all sites use the same long polling url? It looks to me like this line is making that a requirement:


Edit: No wait, that doesn’t work.

base site: example.com
long polling url: origin.example.com

multisite 1: mysite.com

If mysite uses origin.example.com as the long polling address I get:

XMLHttpRequest cannot load https://origin.example.com/message-bus/634dd18187094c6c950c0bf14f74c239/poll. Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'https://example.com' that is not equal to the supplied origin. Origin 'https://mysite.com' is therefore not allowed access.

If mysite uses its own long polling origin as the domain I get this:

XMLHttpRequest cannot load https://origin.mysite.com/message-bus/b35c9c8e958f44f78d0d4773dc6d75f3/poll. Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'https://example.com' that is not equal to the supplied origin. Origin 'https://mysite.com' is therefore not allowed access.

Is this because of "Access-Control-Allow-Origin" => Discourse.base_url_no_prefix ?
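For illustration, the browser's preflight check can be modelled roughly like this (simplified; these names are not Discourse code):

```ruby
# Simplified model of the CORS preflight check the browser applies to
# cross-domain message-bus poll requests. Illustrative only.
def preflight_allowed?(allow_origin_header, request_origin)
  # The browser compares the Access-Control-Allow-Origin response header
  # against the requesting page's origin; any mismatch blocks the request.
  allow_origin_header == "*" || allow_origin_header == request_origin
end

# A single allowed origin derived from the base site passes for the base
# site but fails for a multisite member on another domain:
puts preflight_allowed?("https://example.com", "https://example.com") # true
puts preflight_allowed?("https://example.com", "https://mysite.com")  # false
```

This matches the errors above: with the header pinned to a single base URL, only that one origin can ever pass the check.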

I have noticed there is no “cloudfront.template.yml” in discourse_docker/templates/, so I am wondering:
can CloudFront work using the same techniques?

Also, can we use HTTP/2? Is the long polling stuff still needed when using HTTP/2?

If you’re using the supported Docker-based install, HTTP/2 should be working automatically! :sunny:

Long polling is still needed for notifications to appear live.


I think if you have CloudFront set up, it’s only delivering specific objects (images), rather than the site/application in its entirety with JS and so on.

So the only thing you need is the correct CloudFront URL for those images.

Some additional notes for anyone who decides to use HTTPS together with full site CDN acceleration:

  1. Discourse internally uses the value of SiteSetting.force_https to decide whether your access-control-allow-origin: is the HTTP or HTTPS version of your site. If while polling you see an error in the browser console along the lines of preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value http doesn't match https, double-check your force_https setting. Also note that the protocol (http|https) in DISCOURSE_CORS_ORIGIN in your container definition will be overridden by force_https.
    Don’t forget to add DISCOURSE_ENABLE_CORS: true in your container definition.

  2. If you were planning to only do HTTPS from your end users to your CDN, and then HTTP from your CDN to your actual Discourse web_only containers, lots of custom configuration will be required.

  3. If your CDN is serving your site over HTTPS, then whatever long polling URL you set up must also be HTTPS. So even if the CDN is handling your HTTPS, you must still set up HTTPS on your Discourse servers. If you run into an error about the Same-origin policy, double-check that you’re not trying to connect over HTTP instead of HTTPS.

    • If you use letsencrypt to generate your certificates, note that fullchain.pem => /shared/ssl/ssl.crt (ssl_certificate)
    • privkey.pem => /shared/ssl/ssl.key (ssl_certificate_key)
  4. You might use the following templates in your container definition:

  - "templates/web.template.yml"
  - "templates/web.ssl.template.yml"
  - "templates/fastly.template.yml"
  • Towards the end of the hook:ssl inside templates/web.ssl.template.yml you’ll see this block being added to your /etc/nginx/conf.d/discourse.conf:

if ($http_host != $$ENV_DISCOURSE_HOSTNAME) {
    rewrite (.*) https://$$ENV_DISCOURSE_HOSTNAME$1 permanent;
}

  • You’ll need to comment these lines out, otherwise your long polling attempts will always serve up 301 redirects back to your origin, instead of respecting whatever you set in SiteSetting.long_polling_base_url.

  5. The easiest way I’ve found to do this is to copy templates/web.ssl.template.yml to local.web.ssl.template.yml, remove those extra lines, and update your container reference to use your local template. If you go that route, you should periodically diff your local version against the original, because security improvements are regularly incorporated into this template.
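Putting note 1 above together, the CORS-related entries would land in the env section of the container definition, roughly like this (the hostname is a placeholder):

```yaml
env:
  DISCOURSE_HOSTNAME: discourse.example.com
  ## required for cross-domain message-bus polling
  DISCOURSE_ENABLE_CORS: true
  ## the protocol here is overridden by force_https
  DISCOURSE_CORS_ORIGIN: 'https://discourse.example.com'
```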

Some of the error messages you’ll run into until things are configured correctly:

Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://polling.example.com/message-bus/37c91c51e6cd4b0c95288b8fc29a0480/poll. (Reason: CORS header ‘Access-Control-Allow-Origin’ missing).

Reason: CORS header ‘Access-Control-Allow-Origin’ missing

Response to preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource

Failed to load https://polling.example.com/message-bus/8caefcec2cf94de3ae684c4b953a1084/poll: Response for preflight is invalid (redirect)


You can do this with a - replace in your app.yml, similar to how it is described in Setting up Let’s Encrypt with Multiple Domains.

    - replace:
        filename: "/etc/nginx/conf.d/discourse.conf"
        from: /return 301 https.+/
        to: |
          return 301 https://$host$request_uri;

    - replace:
        filename: "/etc/nginx/conf.d/discourse.conf"
        from: /gzip on;[^\}]+\}/m
        to: |
          gzip on;
          add_header Strict-Transport-Security 'max-age=31536000'; # remember the certificate for a year and automatically connect to HTTPS for this domain

@brahn, I was looking at using a pups replace line, but I couldn’t figure out how to do a multi-line match in pups…

Note that templates/web.ssl.template.yml is inside of the port 443 block, not the port 80 block.

Is there a way to use the pups replace command to match the entire multi-line string?

if ($http_host != www.example.com) {
   rewrite (.*) https://www.example.com$1 permanent;
}

The most direct way I thought of inside the container definition is an exec line running perl, awk or sed to do the multiline replace… but then you’ve got shell escaping along with your target language to disentangle before it will work…

The first replace takes care of the redirect from http to https for multisite. Perhaps that one is not relevant for you.

The second replace is multi-line. It replaces everything from line 33 to line 39 that was added by the web.ssl template.

It just removes that whole rewrite block. I could not figure out what purpose it serves, and it breaks multisite, so…

You could do this in your app.yml:

    - replace:
        filename: "/etc/nginx/conf.d/discourse.conf"
        from: /gzip on;[^\}]+\}/m
        to: |
          gzip on;
          add_header Strict-Transport-Security 'max-age=31536000'; # remember the certificate for a year and automatically connect to HTTPS for this domain
          if ($http_host != www.example.com) {
            rewrite (.*) https://www.example.com$1 permanent;
          }


I’m curious to know one thing.
E.g. my Discourse is hosted on forum.example.com.
Can I set the long polling base to poll.example.org, which points to the same server IP?
Will it have any impact considering CSP?


Hi there. Why do you guys recommend keeping away from Cloudflare when using Discourse?

I’m trying to install it on my new VPS (Debian on Hetzner) and I thought it could be useful to keep Cloudflare on for my little server.

Thanks for your time.

CloudFlare isn’t a conventional CDN; it’s a network proxy. Some of their performance features alter the code between client and server.

Leaving those features on can break Discourse in new and interesting ways. If you turn them off, you’re just adding extra network hops between the Discourse app in your browser and the server. More hops = a less responsive interface.


Well, I’m on a Hetzner VPS and I read that it could be good to use Cloudflare to keep my server safe (against eventual attacks). A CDN could be OK too, because I’m in another country (America, not Germany).

What do you think about it?

I’m not going to comment on whether you should be worried about attacks. You need to make that assessment yourself, but don’t fall prey to FUD.

If you leave their performance features enabled, we cannot support you here. As mentioned above, they interfere with the JavaScript in ways that do no good.

You may be able to make the basic asset caching work with all of the other performance features turned off.

Even then, if Cloudflare is active during installation and setup, certificate enrolment will fail. Let’s Encrypt isn’t supported behind a Cloudflare proxy for initial enrolment.


Thanks for your answer Stephen. I’m facing issues trying to install Discourse and I thought that could be related to Cloudflare.

So, I can’t use it even to manage DNS? How can I protect my server and keep Discourse without Cloudflare?

Press the orange cloud next to your hostname in the Cloudflare control panel so the cloud turns grey.
Then install Discourse. If you want to protect your server, press the grey cloud so it turns orange, but make sure to disable all performance features first.