UPDATE 2015-04-02: I just made the full example more complete.
I was confused by this definition, but I think the following should clarify it. Please don’t hesitate to tell me if I’m doing something wrong.
The confusing part is the “http://some-origin.com/” example. If you are behind Fastly, you have to use a CNAME record, which means you have to use a subdomain rather than the bare top-level domain.
Background: in DNS, a zone apex (e.g. “some-origin.com”) cannot carry a CNAME record, only records such as A. Since Fastly requires we use a CNAME, we have no choice but to use a subdomain.
Now there’s this thing called “long polling”, which is basically an HTTP request that the server holds open for a long time before returning anything. If we use the Fastly or Varnish address, as Discourse would by default, Varnish will time the request out and long polling won’t work.
More background: Varnish can bypass its caching layer in known contexts through vcl_pipe, which is roughly a raw TCP passthrough. Fastly doesn’t offer it because of the size of their setup.
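To make the timeout problem concrete, here is a minimal long-polling sketch in Python (my own illustration, not Discourse’s message-bus code): the server deliberately holds the request open before answering, and any proxy between client and server whose read timeout is shorter than the hold time would cut the request off instead of delivering the 200.

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

HOLD_SECONDS = 1.0  # a real poll may be held open for ~25-30s


class PollHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hold the connection open, simulating "no new messages yet".
        time.sleep(HOLD_SECONDS)
        body = b'{"messages": []}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the demo quiet
        pass


server = HTTPServer(("127.0.0.1", 0), PollHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

start = time.monotonic()
with urllib.request.urlopen(f"http://127.0.0.1:{server.server_port}/poll") as resp:
    status = resp.status
elapsed = time.monotonic() - start
server.shutdown()

# The request only completes after the server releases it; a proxy whose
# read timeout is shorter than HOLD_SECONDS would error out first.
print(status, round(elapsed, 1))
```

This is why the polling hostname must bypass the cache and hit the frontends directly.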
Proposed setup
Let’s enable long polling while exposing our site through Fastly. We’ll need two names: one pointing to Fastly, and the other pointing to the IP addresses we enter in the service dashboard.
discoursepolling.some-origin.com (pick any name), which we’ll configure in Discourse so that browsers reach our public-facing frontend web servers directly.
In my case, I generally have many web apps running that are only accessible from my internal network. I refer to them as “upstreams”, the same term NGINX uses in its config. Since the number of web apps you host can fluctuate, you’ll still want the public IP addresses to remain stable. That’s why I set up an NGINX server in front that proxies to the internal web app servers; I refer to those as “frontends”.
Let’s say you have two public facing frontends running NGINX.
Those are the ones you set up in Fastly, like this.
Here we see two backends in the Fastly panel at Configure -> Hosts.
Notice that in this example I’m using port 443, because my backends are configured so that Fastly and my frontends communicate over TLS. But you don’t need to.
[quote=“sam, post:1, topic:21467”]
To server “long polling” requests from a different domain, set the Site Setting long polling base url to the origin server.[/quote]
What this really means is that we would have to put one of those IP addresses in the Discourse settings.
What I’d recommend instead is to create a set of A records listing all your frontends.
In the end we need three things:
- the public name that Fastly will serve
- which IPs the frontends have
- which hostname we want to use for long polling (we’ll add it to our VirtualHost)
The zone file would look like this:
# The public facing URL
discourse.some-origin.com. IN CNAME global.prod.fastly.net.
# The list of IP addresses you’d give to Fastly as origins/backends
frontends.some-origin.com. IN A 8.8.8.113
frontends.some-origin.com. IN A 8.8.8.115
# The long polling URL entry
discoursepolling.some-origin.com. IN CNAME frontends.some-origin.com.
That way you can set the “long polling base url” correctly without creating a single point of failure.
Then we can go into the Discourse admin area and point the “long polling base url” at our other domain name.
# /etc/nginx/sites-enabled/10-discourse
# Let’s redirect to SSL, in case somebody tries to access the direct IP with
# host header.
server {
listen 80;
server_name discoursepolling.some-origin.com discourse.some-origin.com;
include common_params;
return 301 https://$server_name$request_uri;
}
server {
listen 443 ssl;
server_name discoursepolling.some-origin.com discourse.some-origin.com;
# Rest of NGINX server block
# Also, I would add a condition so that requests on discoursepolling
# only serve paths that are actually used for polling.
# TODO: find the paths specific to polling
}
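One way to flesh out that TODO is to give the polling hostname its own server block that only proxies the polling paths. This is a sketch: judging by the error messages later in this topic, the polling endpoints live under /message-bus/, but verify that against your Discourse version, and `discourse_upstream` here is a placeholder for whatever upstream the rest of your config uses.

```nginx
server {
    listen 443 ssl;
    server_name discoursepolling.some-origin.com;

    # Only the message-bus polling paths are served on this name.
    location /message-bus/ {
        proxy_pass http://discourse_upstream;  # same upstream as the main site
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Anything else on the polling hostname has no business being here.
    location / {
        return 404;
    }
}
```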
To see if it works, open your browser’s developer tools “Network” tab, look for /poll calls to discoursepolling.some-origin.com, and check that they return a 200 OK status code.
To clarify something here: in a multisite configuration, should all sites use the same long polling URL? It looks to me like this line is making that a requirement:
XMLHttpRequest cannot load https://origin.example.com/message-bus/634dd18187094c6c950c0bf14f74c239/poll. Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'https://example.com' that is not equal to the supplied origin. Origin 'https://mysite.com' is therefore not allowed access.
If mysite uses its own long polling origin as the domain, I get this:
XMLHttpRequest cannot load https://origin.mysite.com/message-bus/b35c9c8e958f44f78d0d4773dc6d75f3/poll. Response to preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value 'https://example.com' that is not equal to the supplied origin. Origin 'https://mysite.com' is therefore not allowed access.
Is this because of "Access-Control-Allow-Origin" => Discourse.base_url_no_prefix ?
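The failure above comes down to a string comparison: the server sends back a single allowed origin, and the browser rejects the response when it doesn’t exactly match the requesting page’s origin. A rough sketch of that check (my own illustration of the browser-side rule, not Discourse’s code):

```python
def preflight_allowed(request_origin: str, allow_origin_header: str) -> bool:
    """Mimic the browser's CORS check: the Access-Control-Allow-Origin
    response header must be '*' or exactly equal to the requesting origin."""
    return allow_origin_header == "*" or allow_origin_header == request_origin


# The multisite failure mode: Discourse derives the header from one
# site's base URL, so a request coming from a sibling site is refused.
print(preflight_allowed("https://example.com", "https://example.com"))
print(preflight_allowed("https://mysite.com", "https://example.com"))
```

Since there is only one header value per response, a single shared polling origin can’t satisfy two different requesting sites at once unless the server varies the header per request.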
I have noticed there is no “cloudfront.template.yml” in discourse_docker/templates/. So I am wondering:
Can CloudFront work using the same techniques?
I think if you have CloudFront set up, it’s only delivering specific objects (images), rather than the site/application in its entirety with JS and so on.
So the only thing you need is the correct CloudFront URL for those images.
Some additional notes for anyone who decides to use HTTPS together with full-site CDN acceleration:
Discourse internally uses the value of SiteSetting.force_https to decide if your access-control-allow-origin: is the HTTP or HTTPS version of your site. If while polling you see an error in the browser console along the lines of preflight request doesn't pass access control check: The 'Access-Control-Allow-Origin' header has a value http doesn't match https, double check your force_https setting. Also note the protocol in your DISCOURSE_CORS_ORIGIN in your container definition (http|https) will be overridden by force_https.
Don’t forget to add DISCOURSE_ENABLE_CORS: true in your container definition.
If you were planning to only do HTTPS from your end users to your CDN, and then HTTP from your CDN to your actual Discourse web_only containers, lots of custom configuration will be required.
If your CDN is serving your site over HTTPS, then whatever long polling URL you set up must also be HTTPS. So even if the CDN is handling HTTPS for visitors, you must still set up HTTPS on your Discourse servers. If you run into a same-origin policy error, double-check that you’re not trying to connect over HTTP instead of HTTPS.
If you use Let’s Encrypt to generate your certificates, note that fullchain.pem maps to /shared/ssl/ssl.crt (ssl_certificate).
Towards the end of the hook:ssl inside templates/web.ssl.template.yml you’ll see this block being added to your /etc/nginx/conf.d/discourse.conf.
if ($http_host != $$ENV_DISCOURSE_HOSTNAME) {
rewrite (.*) https://$$ENV_DISCOURSE_HOSTNAME$1 permanent;
}
You’ll need to comment these lines out; otherwise your long polling attempts will always be served 301 redirects back to your origin, instead of respecting whatever you set in SiteSetting.long_polling_base_url.
The easiest way I’ve found to do this is to copy templates/web.ssl.template.yml to local.web.ssl.template.yml, remove those extra lines, and update your container definition to reference your local template. If you go that route, you should periodically diff your local version against the original, because security improvements are regularly incorporated into this template.
Some of the error messages you’ll run into until things are configured correctly.
after_ssl:
- replace:
filename: "/etc/nginx/conf.d/discourse.conf"
from: /return 301 https.+/
to: |
return 301 https://$host$request_uri;
- replace:
filename: "/etc/nginx/conf.d/discourse.conf"
from: /gzip on;[^\}]+\}/m
to: |
gzip on;
add_header Strict-Transport-Security 'max-age=31536000'; # remember the certificate for a year and automatically connect to HTTPS for this domain
Is there a way to use the pups replace command to match an entire multi-line string?
if ($http_host != www.example.com) {
rewrite (.*) https://www.example.com$1 permanent;
}
The most direct way I thought of, inside the container definition, is an exec line running perl, awk or sed to do the multi-line replace… but then you’ve got shell escaping, on top of your target language, to disentangle before it will work.
The first replace takes care of the redirect from http to https for multisite. Perhaps that one is not relevant for you.
The second replace is multi-line: it replaces everything from line 33 to line 39 of the file that was added by the web.ssl template.
It just removes that whole rewrite block. I could not figure out what purpose it serves, and it breaks multisite, so…
You could do this in your app.yml:
after_ssl:
- replace:
filename: "/etc/nginx/conf.d/discourse.conf"
from: /gzip on;[^\}]+\}/m
to: |
gzip on;
add_header Strict-Transport-Security 'max-age=31536000'; # remember the certificate for a year and automatically connect to HTTPS for this domain
if ($http_host != www.example.com) {
rewrite (.*) https://www.example.com$1 permanent;
}
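To answer the multi-line question directly: the character class in /gzip on;[^\}]+\}/m already spans newlines on its own (a negated class matches any character except the excluded one, including \n), so no perl/sed escape gymnastics are needed. You can sanity-check the pattern against a sample of discourse.conf before baking it into app.yml; here is a Python sketch (Python’s regex behaves the same way for this pattern, and the sample conf text is abbreviated/assumed):

```python
import re

# A snippet shaped like the block web.ssl.template.yml adds to
# /etc/nginx/conf.d/discourse.conf (contents abbreviated for the demo).
conf = """gzip on;
if ($http_host != discourse.example.com) {
    rewrite (.*) https://discourse.example.com$1 permanent;
}
"""

replacement = """gzip on;
add_header Strict-Transport-Security 'max-age=31536000';
if ($http_host != www.example.com) {
    rewrite (.*) https://www.example.com$1 permanent;
}"""

# [^}]+ matches across newlines by itself, so the character-class trick
# gives multi-line matching even without a DOTALL-style flag.
result = re.sub(r"gzip on;[^}]+\}", replacement, conf)
print(result)
```

If the pattern matches your sample the way you expect here, the pups replace with the same from:/to: should behave identically inside the container.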
I’m curious to know one thing:
E.g. my Discourse is hosted on forum.example.com.
Can I set the long polling base to poll.example.org, which points to the same server IP?
Will it have any impact considering CSP?
CloudFlare isn’t a conventional CDN; it’s a network proxy. Some of their performance features alter the code between client and server.
Leaving those features on can break Discourse in new and interesting ways. If you turn them off, you’re just adding extra network hops between the Discourse app in your browser and the server. More hops mean a less responsive interface.
Well, I’m on a Hetzner VPS, and I read that it could be good to use Cloudflare to keep my server safe (against potential attacks). The CDN could be OK too, because I’m in another country (America, not Germany).
I’m not going to comment on whether you should be worried about attacks. You need to make that assessment yourself, but don’t fall prey to FUD.
If you leave their performance features enabled, we cannot support you here. As mentioned above, they interfere with the JavaScript in ways that do no good.
You may be able to make the basic asset caching work with all of the other performance features turned off.
Even then, if Cloudflare is active during installation and setup, certificate enrolment will fail. Let’s Encrypt isn’t supported behind a Cloudflare proxy for initial enrolment.
Press the orange cloud next to your hostname in the Cloudflare control panel so the cloud turns grey.
Then install Discourse. If you want to protect your server, press the grey cloud so it turns orange, but make sure to disable all performance features first.