EDIT: @pfaffman re-wrote this as a standalone topic from what @tophee had written earlier. It’s not yet tested by me, and I’ve moved around Chris’s words, so any mistakes are likely those of @pfaffman.
Are there any reasons not to use NGINX Proxy Manager instead of doing this manually as described in Run other websites on the same machine as Discourse?
I’m already using it. I’ve had it on my home server for a while, and when I migrated my Discourse instance to a new cloud server, I realized that I’d forgotten most of the NGINX reverse-proxy setup I did four years ago on the old server. So I thought: why not do it with NGINX Proxy Manager? To my surprise, I found that it has been mentioned rather rarely here on meta, so I started wondering whether there are some downsides that I might have missed…
Indeed, this required a bit of trial and error, but I got it to work as follows (no guarantee that this is the best way of doing it - in fact, I know that there must be a better way - so corrections and improvements are very welcome):
To start with, there are two ways of accessing your Discourse instance: 1. by exposing a port, 2. via a Unix socket. I believe I learned somewhere on this forum that the socket is faster and more efficient, so that’s what I’m using, but exposing a port should be a lot easier, so if you can’t get the socket to work, try exposing a port. To avoid confusion: to access Discourse via a port, follow steps 0, 1, 2, 3, 4, and 8 below. If you want to use the socket, follow steps 0, 1, 5, 6, 7, 8, and 9.
So let’s assume you have completed the standard 30-minute installation, and let’s assume that you didn’t let Discourse acquire a Let’s Encrypt cert yet, because you don’t need one when using a reverse proxy: NGINX Proxy Manager will take care of that. It doesn’t matter, though, if you already have a certificate; NGINX Proxy Manager will simply get a new one.
The next step is to install NGINX Proxy Manager, after which you will have two more Docker containers running (NGINX Proxy Manager itself and its database container).
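For reference, the stock NGINX Proxy Manager quick-start boils down to something like this. This is a sketch under my assumptions: the directory `/var/npm` is my own choice, and the `docker-compose.yml` is the one from the NPM documentation.

```shell
# Directory name is an assumption; put it wherever you keep compose files.
mkdir -p /var/npm && cd /var/npm
# Place the docker-compose.yml from the NPM docs here, then:
docker compose up -d
# You should now see the NPM app and db containers alongside Discourse's:
docker ps
```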
Next is the tricky part that you’re asking about.
The first obstacle is that Discourse runs on the default Docker `bridge` network, while NGINX Proxy Manager by default runs on a user-created network (called `npm_default` in my case), which means that NGINX Proxy Manager can’t see Discourse. So, as long as I don’t know if and how Discourse can be moved to a custom network, we have to move NGINX Proxy Manager onto the default `bridge` network instead. We can do this by adding `network_mode: bridge` to both NGINX Proxy Manager services in our docker-compose file.
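Concretely, the change looks something like this in the NPM `docker-compose.yml` (service and image names as in the stock NPM example; the rest of each service is elided):

```yaml
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    network_mode: bridge   # join the default bridge network instead of npm_default
    # ...
  db:
    image: 'jc21/mariadb-aria:latest'
    network_mode: bridge   # same for the database container
    # ...
```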
The next problem is that the standard docker-compose file won’t work any more once you move it to the `bridge` network: NGINX Proxy Manager won’t be able to find its database container. This is because internal DNS resolution for service names (which the docker-compose file relies on) is only available on user-created networks, not on the default Docker networks. So we have to resort to hard-coded IP addresses, which is why this is definitely not the optimal solution: it will break if your container IPs change. So you need to start the containers even though you know it won’t work, note the IP of the NGINX Proxy Manager database container, and replace `DB_MYSQL_HOST: "db"` in your docker-compose file with that IP (in my case `DB_MYSQL_HOST: "172.17.0.6"`, as in the final compose file below).
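One way to look up that IP without digging through logs is `docker inspect`. This is a sketch: the container name `npm-db-1` is my assumption, so check `docker ps` for the actual name on your machine.

```shell
# Print the IP address of the NPM database container on the bridge network.
# Replace npm-db-1 with the name shown by `docker ps`.
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' npm-db-1
```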
So now all containers should be on the default `bridge` network, so that NGINX Proxy Manager can see both Discourse and its database.
But “seeing” Discourse and being able to access it are not the same thing, so you need to make sure that Discourse will accept whatever traffic NGINX Proxy Manager forwards to it. If you don’t care about using the socket, I suppose you can just point NGINX Proxy Manager to port 80 (not 443) of your Discourse container’s IP. I haven’t tested this, though. As I mentioned, I’m using the socket setup, which requires some further steps. Note that the hostname/IP and port above will be ignored when you use the socket.
This part is explained in the OP, so I won’t go into it here.
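For completeness, the `app.yml` change from that guide is essentially the following (paths as in the standard install; treat the OP as authoritative, this is just my summary):

```yaml
## in /var/discourse/containers/app.yml
templates:
  # ... existing templates ...
  - "templates/web.socketed.template.yml"   # makes Discourse listen on the Unix socket

## and comment out the exposed ports, since the socket replaces them:
# expose:
#   - "80:80"
#   - "443:443"
```

Then rebuild with `./launcher rebuild app` for the change to take effect.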
We need to give NGINX Proxy Manager access to the socket by mounting it as a volume: `- /var/discourse/shared/standalone/nginx.http.sock:/var/discourse/shared/standalone/nginx.http.sock`. This is the final change to the default NGINX Proxy Manager docker-compose file, so here is the final version that works for me:
```yaml
version: '3'
services:
  app:
    image: 'jc21/nginx-proxy-manager:latest'
    restart: unless-stopped
    network_mode: bridge
    ports:
      - '80:80'
      - '81:81'
      - '443:443'
    environment:
      DB_MYSQL_HOST: "172.17.0.6"
      DB_MYSQL_PORT: 3306
      DB_MYSQL_USER: "npm"
      DB_MYSQL_PASSWORD: "my-super-safe-pwd"
      DB_MYSQL_NAME: "npm"
    volumes:
      - ./data:/data
      - ./letsencrypt:/etc/letsencrypt
      - /var/discourse/shared/standalone/nginx.http.sock:/var/discourse/shared/standalone/nginx.http.sock
  db:
    image: 'jc21/mariadb-aria:latest'
    restart: unless-stopped
    network_mode: bridge
    environment:
      MYSQL_ROOT_PASSWORD: 'my-super-safe-pwd'
      MYSQL_DATABASE: 'npm'
      MYSQL_USER: 'npm'
      MYSQL_PASSWORD: 'my-super-safe-pwd'
    volumes:
      - ./data/mysql:/var/lib/mysql
```
Last step: tell NGINX Proxy Manager to use the socket. As far as I remember, it wasn’t enough to just turn on “Websockets support”, so I copied the NGINX location block from the OP into the “Advanced” tab.
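If I recall the guide correctly, that location block is along these lines (socket path as in the standard install; double-check against the OP before pasting):

```nginx
location / {
    # Forward everything to Discourse's Unix socket instead of an IP:port.
    proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
    proxy_set_header Host $http_host;
    proxy_http_version 1.1;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
}
```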
I did not get this to work under the “Custom locations” tab.
I didn’t mention the SSL configuration in NGINX Proxy Manager because it seems pretty self-evident, and I don’t think it matters at which point in the process you activate it.
tl;dr: Whenever you restart the Discourse container, you also need to restart the main NGINX Proxy Manager container (no need to restart the db).
If you are accessing Discourse through the socket, you need to be aware that when you rebuild your Discourse container (as is required every couple of months to update the base image), the previous socket file is deleted and a new one created. As a consequence, NGINX Proxy Manager will lose contact with your Discourse instance and throw a 502 error. Maybe a future update of NGINX Proxy Manager will be able to pick up the new socket automatically, but currently (January 2022) NGINX Proxy Manager will not find your rebuilt Discourse container unless you restart NGINX Proxy Manager.
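In practice, restarting just the proxy container is enough. A sketch, assuming the NPM compose file lives in `/var/npm` and the service is named `app` as in the stock example:

```shell
# After `./launcher rebuild app` for Discourse, re-attach NPM to the new socket:
cd /var/npm
docker compose restart app   # only the proxy; the db container can keep running
```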
If you’re wondering why the above instructions combine the socket approach with all the port and network juggling, the simple reason is that, as I wrote this post, it suddenly occurred to me that NGINX Proxy Manager and Discourse probably don’t even need to be on the same Docker network when we use the socket. And when this was confirmed, nobody felt like completely rewriting the instructions.
This is the most fascinating aspect of support forums: doing a good job in describing your problem often leads you to the solution without even posting your question. And in this case, I was answering someone else’s question but also might have found the answer to my own.