What does log/production.log (in the running Docker container) say when the login fails (hence with force_https=true)?
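On a standard install under /var/discourse, one way to follow that log is sketched below (paths assume the stock layout):

```bash
# Follow the Rails production log inside the running container (stock /var/discourse layout assumed)
cd /var/discourse
./launcher enter app
tail -f /var/www/discourse/log/production.log
```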
@RGJ – I like your solution! Do I enable force_https along with the last suggestion (re: proxy_pass https)?
EDIT: I get a 502 if all I do is use proxy_pass https://192.168.86.108
EDIT2: I closed off port 80 on Discourse via app.yml and it was still broken. I just re-read your post: does the Discourse container run its own instance of Nginx? So, in effect, am I passing a proxy to a proxy? Sorry, I am really confused at this point.
@itsbhanusharma, do I comment out the 80:80 #http line and follow @RGJ’s advice to proxy_pass https?
Why aren’t you following the supported process for multisite here? You’re effectively reinventing a very fragile wheel.
So many links have been provided now, @Stephen, that I am at a loss to make sense of it all. I thought I was following supported processes. Are the comments above not supported?
Yes, because you weren’t using force_https, hence the unsupported-install tag above. If you deviate from a supportable track then you won’t receive much assistance.
Start here:
There’s a separate guide to handle SSL encapsulation with multisite.
So, does the standard container only run http? How is what I am trying to do multi-site? I’m not attempting to host multiple domains; I have a single domain. Can you please clarify how that addresses my issue?
What did you mean by:
I’ve set up Discourse servers on about 5 instances now and all seem to be exhibiting odd behaviour; I’m not sure if it is indeed a bug or if anyone else has experienced the same.
Independent infrastructures, not in any way connected to one another, but all with very similar settings as per above.
So why is nginx proxying the host at all? What else is happening on the machines?
We have several VMs that are not exposed externally; the traffic is routed through the proxy (which is exposed externally) to the Discourse VM internally. It’s a similar situation in each of the separate installs. Does that clarify?
Not really, but one way or another this is technical pain you’re just going to have to live with. It’s impossible to comment on whether there’s a better approach when the use case and topology are so ambiguous.
I believe the right mix of solutions was as follows:
As per @itsbhanusharma’s suggestion, I edited /var/discourse/containers/app.yml and amended the ports to custom values; I used:

- "8080:80" #http
- "4343:443" #https

then did a ./launcher rebuild app.

I then modified our externally accessible proxy to forward requests on 80/443 to http://internal_ip:8080.

After a sudo nginx -t and sudo systemctl restart nginx, I logged into the https://discourse.mobiusnode.io server (which was still exhibiting the issues above) and enabled force_https, at which point it appears all is working.
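For reference, a minimal sketch of what that front-end proxy block can look like. The certificate paths assume a standard Let's Encrypt layout, and the internal IP/port are just the values mentioned earlier in this topic, used as placeholders:

```nginx
# Externally facing nginx vhost (sketch only; paths, IP and hostname are examples)
server {
    listen 443 ssl;
    server_name discourse.mobiusnode.io;

    ssl_certificate     /etc/letsencrypt/live/discourse.mobiusnode.io/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/discourse.mobiusnode.io/privkey.pem;

    location / {
        proxy_pass http://192.168.86.108:8080;          # plain http to the container's remapped port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;       # so force_https sees the original scheme
    }
}

# Optional: redirect plain http to https
server {
    listen 80;
    server_name discourse.mobiusnode.io;
    return 301 https://$host$request_uri;
}
```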
I am now going to reproduce this on the remaining servers/infrastructures.
Just to be clear, this isn’t what I suggested.
I only asked you to expose port 80 on a local IP and terminate ssl on your reverse proxy.
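In app.yml terms, that could look something like the snippet below; the address is just the internal IP mentioned earlier, used purely as an illustration:

```yaml
## containers/app.yml -- sketch: publish the container's port 80 only on an internal address
expose:
  - "192.168.86.108:80:80"   # plain http inside the network; SSL terminates on the front proxy
```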
So, don’t use SSL on my externally facing proxy, but rather forward plain http requests on to the Discourse server and allow force_https to handle those requests? How does the cert then get passed? Via the first Nginx instance? This is where things break down for me.
Well, as long as it works and you can rebuild/upgrade cleanly. It looks like you’re already doing what @itsbhanusharma suggested in their most recent post.
If you’re sharing a single IP with multiple SSL connections you need a SAN cert on the front end of your proxy. If the network is secure then everything else behind it can be unencrypted.
Discourse needs force_https if the user connects via SSL, and you need to ensure the header flagged above is preserved and forwarded.
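Concretely, assuming the header flagged earlier is X-Forwarded-Proto, the front-end location block needs something along these lines (the upstream address is a placeholder):

```nginx
# Sketch: preserve the original scheme so force_https behaves correctly
location / {
    proxy_pass http://internal_ip:8080;                 # placeholder upstream
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;         # "https" when the client connected over SSL
}
```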
No, there is something called SNI.
I’m well aware, but as the certificates are all coming from Let’s Encrypt, what value is there in requesting separate certs? LE supports SAN, so why not use it?
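For illustration, a single Let's Encrypt certificate covering several hostnames can be requested by passing multiple -d flags to certbot; the second hostname below is a made-up placeholder:

```bash
# One certificate with multiple SANs via certbot (second hostname is a placeholder)
certbot certonly --nginx -d discourse.mobiusnode.io -d forum.example.com
```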