I have set up Discourse twice, once in a container and the current one in a VM, and on both installs Discourse is not reachable. I'm not sure what could be wrong.
How do I get it to use the HOST machine's IP?
I do not need another network in a network. I am happy for docker to use the HOST IP as this is the only service on this VM.
Any assistance would be greatly appreciated!
Is there an official method to install without Docker?
No, this machine is assigned a local IP and traffic gets routed to that local IP via my firewall. This is not the issue.
The public IP has an A record for the server and it gets routed correctly. forum.somedomain.com points to the correct server.
Yes, I have completed the install 100% (3 times), to the point where the container is running.
It gets past all the domain/DNS checks. It states valid.
No, this cannot be rate-limited, as the SSL certificate is issued via my reverse proxy. I have the certificate.
The install is 100% complete. The issue is that Docker is creating a new network (172.17.0.1), which is not needed, as I would like to use the HOST's local IP (192.xx.xx.xx).
The container is running, but on a different network. I am unable to get it onto the HOST IP.
The Docker host should be the IP of the host server (192.xxx.xxx.xxx) and not a new network. It is probably working, just on that network.
How do I tell the install to use my local IP and not 172.17.0.1?
You can’t use discourse-setup with a reverse proxy. You’ll have to edit the yml file yourself. There are some topics about running Discourse with other sites on the same machine.
You’ll need to remove the SSL and Let’s Encrypt templates if you’re using a reverse proxy that handles the SSL side.
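Roughly, the relevant parts of containers/app.yml end up looking something like this (a sketch only, not your exact file; 8080 is just an example port for the reverse proxy to point at):

templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  ## removed: templates/web.ssl.template.yml
  ## removed: templates/web.letsencrypt.ssl.template.yml

expose:
  - "8080:80"   # host port 8080 -> container port 80, the reverse proxy points here
  ## no 443 entry, the upstream proxy terminates SSL

After editing, rebuild with ./launcher rebuild app.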
That is quite trivial. You expose the HTTP port in app.yml that Nginx is sending traffic to, and SSL is disabled. Those two things are the only ones you have to fix. Of course you also have to pass the real IP through, but you must do that any time the backend is Discourse, Moodle, WordPress or whatever. UFW is there to limit access to just the traffic between frontend and backend, because there is no need to allow direct access to the backend.
If I recall right, there is a doc on how to set this up with Apache2. Nginx does the same thing, but in its own way of course.
The install completes successfully and then the issues start.
I cannot access the Docker container from the host because of the docker0 network. I can ping 172.17.0.2 and it is up and running, but from the host machine, 192.168.1.10:80/443 does not pass traffic to the container.
All I want is for the Docker container to use the host network, as the container has ports 80 & 443 exposed.
The first Nginx reverse proxy handles the traffic from the outside and passes it to the VM correctly. If it did not, then ./discourse-setup would not have picked up the domain name correctly and it would not have been able to retrieve SSL certificates for the container.
In the end, I know the container is working 100%; I just cannot access it due to the Docker network.
I can do that with Nginx, or Nginx+Varnish, to Discourse on the same VPS or on a VPS with a different IP. You don't say what you are actually doing with your Nginx acting as a reverse proxy. Your examples are a bit difficult to follow because there is no way to know whether those are examples or whether you are actually trying to use a private network.
But:
Of course not, because that takes care of incoming traffic. You must use a different port for the backend.
Something like this (it is actually used with Varnish, but the principle is exactly the same, and very much 101-level stuff):
I'm not sure how, because this Docker network is confusing.
Absolutely, that's why I'm getting frustrated with Docker lol
Below is exactly how WAN traffic is passed and routed from my Nginx reverse proxy to the correct host.
map $scheme $hsts_header {
    https "max-age=63072000;includeSubDomains; preload";
}

server {
    set $forward_scheme https;
    set $server "10.10.1.38";
    set $port 443;

    listen 80;
    listen [::]:80;
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name forum.domainname.com;

    # Let's Encrypt SSL
    include conf.d/include/letsencrypt-acme-challenge.conf;
    include conf.d/include/ssl-ciphers.conf;
    ssl_certificate /srv/ssl/domainname.pem;
    ssl_certificate_key /srv/ssl/domainname-ke.pem;

    # Asset Caching
    include conf.d/include/assets.conf;

    # Block Exploits
    include conf.d/include/block-exploits.conf;

    # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
    add_header Strict-Transport-Security $hsts_header always;

    # Force SSL
    include conf.d/include/force-ssl.conf;

    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $http_connection;
    proxy_http_version 1.1;

    access_log /var/logs/domainname-access.log proxy;
    error_log /var/logs/domainame_error.log warn;

    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $http_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    location / {
        # HSTS (ngx_http_headers_module is required) (63072000 seconds = 2 years)
        add_header Strict-Transport-Security $hsts_header always;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        proxy_http_version 1.1;

        # Proxy!
        include conf.d/include/proxy.conf;
    }
}
What is weird is that I once set up a Docker container for a client who wanted Nginx Proxy Manager, and it was extremely simple.
docker-compose up -d
That was it. The private IP 192.168.1.3 could reach the container's exposed 80/443, and outgoing traffic was routed correctly via 192.168.1.3.
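If I remember right, its docker-compose.yml did nothing clever at all, it just published the ports straight onto the host, roughly like this (from memory, not the exact file):

services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    ports:
      - '80:80'     # published on the host, so 192.168.1.3:80 reaches the container
      - '443:443'
      - '81:81'     # admin UI
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt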
It is confusing because it is a packaging system that plays in its own sandbox. That is basically all it is.
But understanding Docker is a different thing from using it (and now a bunch of devs started crying). Your reverse proxy is sending traffic to an IP through the firewall, and you have to tell it that IP and listening port. And you have Discourse, i.e. Docker, on that IP, on the port you set in app.yml. The inner Nginx that ships with Discourse itself takes care of the rest.
Discourse should not listen on 443 because you have already terminated SSL.
And you basically can't use caching on the reverse proxy. The backend, Discourse, is not a web page. It is a web app sending JavaScript and JSON.
That is something I can agree with. I would not say crying; it's just useless to sysadmins and devs who actually know their way around Linux. Creating an LXC or VM which is isolated, only to then let Docker create another isolated environment inside it, is redundant and pointless.
This is the part that is confusing: app.yml is exposing 80:80 and 443:443 on 172.17.0.2, which is on the Docker network 172.17.0.1/16, while the VM's IP is 10.10.1.38.
How do I get Discourse/Docker to pass all traffic coming to 10.10.1.38 through to 172.17.0.2, and to send all outgoing traffic back out via 10.10.1.38? That's all that is needed to solve this issue. Literally.
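Is it just a matter of changing the expose section in containers/app.yml so the published port is bound to the VM's IP, and dropping 443 since SSL is terminated upstream? Something like this is my guess:

expose:
  - "10.10.1.38:80:80"   # publish the container's port 80 on the VM's IP
  ## no 443, SSL is terminated at the reverse proxy

(and then ./launcher rebuild app to apply it?)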