How do I configure Discourse behind an AWS or Google Cloud Proxy with SSL

Hi everyone, let me get into this conversation…

I am using an AWS Load Balancer with an Amazon-issued certificate, which I cannot download onto my EC2 instance. When I access my forum at https://www.capitool.com.br it works fine, but when I access capitool.com.br, which defaults to http, it does not work.

On Route 53 I have an ALIAS record from capitool.com.br to www.capitool.com.br, so when you try to access capitool.com.br, Route 53 points to www.capitool.com.br.

So I created a copy of web.ssl.template.yml and replaced all of its content with:

run:
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         return 301 https://$$ENV_DISCOURSE_HOSTNAME$request_uri;
       }
       server {

But I got a "too many redirects" error (a redirect loop). I also tried:

run:
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         server_name forum.com.br www.forum.com.br;
         rewrite ^/(.*) https://$$ENV_DISCOURSE_HOSTNAME$request_uri permanent;
       }
       server {

But with no success…

Can you guys help me? My forum is down…

I still have not forced HTTPS through the admin settings.

Thanks in advance.

Could anyone help me with this?

This isn't a Discourse issue; it's an nginx and AWS one. Stack Exchange is more likely to yield results in under four hours.

If it's an emergency and you want immediate help here, you can post in marketplace with a budget.

Check this out:

The problem is that you need to use the X-Forwarded-Proto header if your Discourse server is behind a load balancer.

Unfortunately, Discourse does not have a solution for this out of the box.
I am searching for a solution myself and have not found one yet.

Hi @Anil_Gupta, I tried your example, but it redirects too many times.
Here is my Discourse: https://www.capitool.com.br/.

run:
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         server_name www.capitool.com.br;
         if ($http_x_forwarded_proto != "https") {
           rewrite ^(.*)$ https://$server_name$request_uri permanent;
         }
       }
       server {

Are you doing health checks in the ALB? You may need to bypass the redirect for those, or accept 301 as a healthy response code.
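
For the redirect-exemption approach, here is a minimal sketch (my own illustration, not something from this thread) of what the rule could look like when placed inside the server block that actually serves Discourse on port 80. It assumes the ALB health check targets Discourse's /srv/status route; the $redirect_to_https variable name is arbitrary, so adjust the path and name to whatever you actually use:

# Flag plain-HTTP requests for redirect, based on the header the ALB adds.
set $redirect_to_https 0;
if ($http_x_forwarded_proto != "https") {
  set $redirect_to_https 1;
}
# Exempt the health-check path so the ALB sees the normal 200, not a 301.
if ($uri = /srv/status) {
  set $redirect_to_https 0;
}
if ($redirect_to_https) {
  return 301 https://$server_name$request_uri;
}

With something like this in place the health check keeps passing over plain HTTP, while every other HTTP request is redirected to HTTPS.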

As I already said, this does not work with Discourse at this time.
I was asking the same thing in the thread that I shared earlier.

At this time, I do not have any solution that makes it work.
We use Google's load balancer (the equivalent of Amazon's ELB).

You do realise that this very site you are replying on serves all of its traffic through an AWS ALB with an auto-scaling group?

No, I did not know that.

What's the solution, then?
Is it documented anywhere how to achieve the HTTP-to-HTTPS redirect using the X-Forwarded-Proto header?

I tried and it just shows the default NGINX page.

It would be helpful if somebody could list the steps to set it up.

Does meta.discourse.org run on HTTP behind the AWS ELB (with HTTPS at the ELB)?

Or do both the Amazon ELB and the Discourse server use HTTPS?

The question is whether the HTTPS configuration can live ONLY at the load balancer level. The Discourse install would use only HTTP and still be able to rewrite/redirect to the HTTPS version.

Is that possible?

Yes, this is how you would set it up. The ELB holds the certificate and handles SSL; the instance receives the traffic unencrypted.
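
To make that concrete, here is a conceptual sketch of the topology (this is not the actual discourse.conf; the hostname and upstream address are placeholders). The load balancer listens on 443 with the certificate and terminates TLS, then forwards to the instance's port 80 over plain HTTP, adding an X-Forwarded-Proto: https header. On the instance, nginx only ever listens on 80:

# Placeholder nginx on the instance (not the real discourse.conf).
server {
  listen 80;                           # plain HTTP only; no certificate here
  server_name www.example.com;         # placeholder hostname

  location / {
    proxy_pass http://127.0.0.1:3000;  # placeholder app upstream
    proxy_set_header Host $http_host;
    # Pass the original scheme through so the app knows the client used HTTPS.
    proxy_set_header X-Forwarded-Proto $http_x_forwarded_proto;
  }
}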

Yeah… but how does the redirection work at the Discourse server level, then?
There has to be some redirection rule in the Discourse setup. In my case, Google's load balancer does not handle the redirection.

Google recommends adding this rule to Discourse's web server:

server {
      listen         80;
      server_name    www.example.org;
      if ($http_x_forwarded_proto != "https") {
          rewrite ^(.*)$ https://$server_name$request_uri permanent;
      }
}

The question is: where should we add this rule?

I have tried adding it in app.yml (inside the after_web_config hook), but it does not work.
It starts showing the 'Welcome to nginx' screen after adding the above rule.

Do you have any suggestions on how this rule can be configured in a Discourse setup?

The general process you follow is…

Enter the container, adjust the nginx config file inside the container, and run sv restart nginx to make the change take effect. Keep iterating on the nginx config until you have something that works.

Only after that would you add it to the bootstrap process.
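
As a sketch of that last step (my own illustration, untested): once the hand-edited rule works inside the container, the same change can be recorded as a replace under the run: section of app.yml so a rebuild reproduces it. The hostname below is a placeholder and the regex is the same one used earlier in this thread:

run:
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /server.+{/
     to: |
       server {
         listen 80;
         server_name www.example.com;
         if ($http_x_forwarded_proto != "https") {
           return 301 https://$server_name$request_uri;
         }
       }
       server {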

Prior to implementing IPv6, our proxy server listened on 80/443 and talked to Discourse instances listening on non-80 ports on various machines.

Proxying on a path is not going to work… not at all. I don't even follow why you would do this; you would enter a world of regex pain.

That said, this entire discussion is out of scope for the original howto, so it has been split off and moved to support.

A complete guide on this would be fantastic. There are bits and pieces out there (and plenty of interest) around deploying on AWS in an HA environment with an ELB.

I acknowledge that such a document would compete against hosted options.

It is extremely unlikely that we will publish any guidelines on "this is how you build a super mega enterprise Discourse setup", for a book full of reasons.

If the community wants to share knowledge, be our guests.

Thanks @sam

I guess I'm not looking for a mega install, just an HA version that scales and is not going to wake me up in the middle of the night with issues.

I’m not interested in a hosted solution.

Cheers

Todd

Unless you are doing at least 20 million pageviews per month – and please do correct me if you are – you can trivially achieve “not going to wake me up” and “scales” just fine on a single Digital Ocean droplet.

Otherwise you have decided to opt yourself into pain, which is entirely your prerogative, if that's what you want.

Thanks @codinghorror

Just confirming: is a single droplet a single machine?

Yep, one droplet is one machine.
