How do I configure Discourse behind an AWS or Google Cloud Proxy with SSL

(Michael Coelho) #1

Hi everyone, let me get into this conversation…

I am using an AWS Load Balancer with an Amazon-issued certificate, which I cannot download onto my EC2 instance. When I access my forum over HTTPS it works fine, but when accessing it over plain HTTP (the default), it does not work.

On Route 53 I have an ALIAS record from my domain to the load balancer, so when you try to access the forum, Route 53 points to the load balancer.

So I created a copy of web.ssl.template.yml and replaced all of its content with:

  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         return 301 https://$$ENV_DISCOURSE_HOSTNAME$request_uri;
       }
       server {

But I received a "too many redirects" error. I also tried:

  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         rewrite ^/(.*) https://$$ENV_DISCOURSE_HOSTNAME$request_uri permanent;
       }
       server {

But with no success…

Can you guys help me? My forum is down…

I still have not forced HTTPS through the admin settings.

Thanks in advance.

Advanced Setup Only: Allowing SSL / HTTPS for your Discourse Docker setup
(Michael Coelho) #2

Could anyone help me with this?

(Jay Pfaffman) #3

This isn’t a Discourse issue; it’s an nginx and AWS one. Stack Exchange is more likely to yield results in under four hours.

If it’s an emergency and you want immediate help here you can post in #marketplace with a budget.

(Anil Gupta) #4

Check this out:

The problem is that you need to use the X-Forwarded-Proto header if your Discourse server is behind a load balancer.

Unfortunately, Discourse does not have a solution for this out of the box.
I am myself searching for a solution and have not found any, yet.
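The mechanics behind that header check can be modelled in a few lines (a sketch only; the hostname is hypothetical). A TLS-terminating balancer always talks plain HTTP to the backend, so an unconditional port-80 redirect loops forever, while keying the redirect off X-Forwarded-Proto breaks the cycle:

```python
def backend(path, x_forwarded_proto=None):
    """Toy model of the nginx vhost on port 80 behind a TLS-terminating LB."""
    if x_forwarded_proto != "https":
        # In the broken config this branch is taken for EVERY request,
        # because the balancer always forwards over plain HTTP.
        return 301, "https://forum.example.com" + path  # hypothetical hostname
    return 200, "page body"

# Without the header, each hop through the balancer gets another 301 -- a loop:
assert backend("/latest")[0] == 301

# With X-Forwarded-Proto set by the balancer, HTTPS requests are served directly:
assert backend("/latest", x_forwarded_proto="https")[0] == 200
```

The sketch also shows why a client sees "too many redirects": the browser follows the 301 back through the balancer, which again forwards it as plain HTTP.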

(Michael Coelho) #5

Hi @Anil_Gupta, I tried your example, but it gets redirected too many times.
Here is my Discourse config:

  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         if ($http_x_forwarded_proto != "https") {
           rewrite ^(.*)$ https://$server_name$request_uri permanent;
         }
       }
       server {

(Rafael dos Santos Silva) #6

Are you doing health checks in the ALB? Maybe you need to bypass that or accept 301 as a healthy response.
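One way to act on that suggestion, besides configuring the target group to accept 301 as healthy, is to carve the health-check path out of the redirect. A sketch (the /health path is an assumption to match your target group settings; Discourse also exposes /srv/status as a health endpoint):

```nginx
server {
  listen 80;

  # Answer the load balancer's health check directly so the 301 redirect
  # never marks the target unhealthy. (A static 200 does not verify app
  # health; proxying to Discourse's /srv/status would be more faithful.)
  location = /health {
    return 200 'ok';
  }

  # Everything else: redirect plain HTTP to HTTPS.
  location / {
    return 301 https://$host$request_uri;
  }
}
```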

(Anil Gupta) #7

As I already said, this does not work with Discourse at this time.
I was questioning the same in the thread that I shared earlier.

At this time, i do not have any solution that would make it work.
We use Google’s Load balance server (same as Amazon ELB).

(Sam Saffron) #8

You do realise that this very site you are replying on is serving all the traffic via an AWS alb with an auto scaling group?

(Anil Gupta) #9

No, I did not know that.

What’s the solution then?
Do we have it documented anywhere how to achieve the HTTP-to-HTTPS redirect using the X-Forwarded-Proto header?

I tried it and it just shows the default nginx page.

It would be helpful if somebody can list the steps to set it up.

(Anil Gupta) #11

Does meta.discourse run on HTTP behind the AWS ELB (which uses HTTPS)?

Or do both the Amazon ELB and the Discourse server have HTTPS?

The question is whether we can have the HTTPS config ONLY at the load balancer level. The Discourse install should use only HTTP and still be able to REWRITE/REDIRECT to the HTTPS version.

Is it possible?

(Sam Saffron) #12

Yes, this is how you would set it up: the ELB has the cert and handles SSL, and it forwards the traffic to the backend unencrypted.
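Under that layout (TLS terminated only at the balancer, plain HTTP to the container), the container's nginx just listens on 80 and keys any redirect off the header the balancer sets. A minimal sketch (the hostname is hypothetical; the rest of the Discourse vhost is elided):

```nginx
server {
  listen 80;
  server_name forum.example.com;  # hypothetical

  # The balancer terminates TLS and sets X-Forwarded-Proto on every request,
  # so redirect only the requests that genuinely arrived over plain HTTP.
  if ($http_x_forwarded_proto = "http") {
    return 301 https://$host$request_uri;
  }

  # ... normal Discourse proxying continues here ...
}
```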

(Anil Gupta) #13

Ya… but how does the redirection work at the Discourse server level then?
There has to be some redirection rule in the Discourse setup. In my case, the Google load balancer does not handle the redirection.

Google recommends adding this rule to Discourse's web server:

server {
      listen         80;
      if ($http_x_forwarded_proto != "https") {
          rewrite ^(.*)$ https://$server_name$request_uri permanent;
      }
}

The question is where should we add this rule?

I have tried adding it in app.yml (inside the after_web_config hook), but it does not work.
It starts showing the ‘Welcome to nginx’ screen after adding the above rule.

Do you have any suggestions as to how this rule can be configured in the Discourse setup?

(Sam Saffron) #14

The general process you follow is …

enter container, mess with nginx config file inside container, run sv restart nginx to have it take effect. Continue messing with nginx conf until you have something that works.

Only after that would you add it to the bootstrap process.
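Once a hand-edited config works inside the container, the same change can be baked into the bootstrap as a replace rule in app.yml. A sketch following the replace templates earlier in this thread (verify that the regex matches the first server block of your generated discourse.conf before relying on it):

```yaml
run:
  - replace:
      filename: "/etc/nginx/conf.d/discourse.conf"
      from: /server.+{/
      to: |
        server {
          listen 80;
          if ($http_x_forwarded_proto = "http") {
            return 301 https://$host$request_uri;
          }
        }
        server {
```

The trailing "server {" re-opens the original server block that the regex consumed, so the rest of the generated config stays intact.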

(Sam Saffron) #18

Prior to implementing IPv6, our proxy server listened on 80/443 and talked to Discourse instances listening on non-standard ports on various machines.

Proxying on a path is not going to work… not at all. I'm not even following why you would do this; you would enter a world of regex pain.

That said, this entire discussion is out of scope for the original howto, so it has been split off and moved to support.


A complete guide on this would be fantastic. There seem to be only bits and pieces (despite plenty of interest) around this topic of deploying on AWS in an HA environment with an ELB.

I acknowledge that such a document would compete against hosted options.

(Sam Saffron) #21

It is extremely unlikely we are going to publish any guidelines on “this is how you build a super mega enterprise Discourse setup”, for a book full of reasons.

If the community wants to share knowledge, be our guest.


Thanks @sam

I guess I’m not looking for a mega install, but an HA version that scales and is not going to wake me up in the middle of the night with issues.

I’m not interested in a hosted solution.



(Jeff Atwood) #23

Unless you are doing at least 20 million pageviews per month – and please do correct me if you are – you can trivially achieve “not going to wake me up” and “scales” just fine on a single Digital Ocean droplet.

Otherwise you have decided to opt yourself into pain. Which is entirely your prerogative, if that’s what you want.


Thanks @codinghorror

Just confirming: is a single droplet a single machine?

(Matt Palmer) #25

Yep, one droplet is one machine.