Discourse working with jwilder/nginx-proxy & acme-companion

Hi guys,

I’ve been stuck on this for hours, so I’m requesting assistance as I cannot find a working solution.

Because I have multiple containers hosting web services, I need to run my own Nginx instance to proxy traffic to the correct container. (I could simply move Discourse to its own server, but that is a last resort if I cannot get this working.)

I’m currently using the following setup:

A good diagram of how this fits together can be found on the following page:

I have followed the tutorials below, which were helpful, but I have still not been successful:

  1. How to Install Multiple Discourse Forums on the Same Server
  2. Use Nginx-Proxy and LetsEncrypt Companion to Host Multiple Websites - Singular Aspect

Previous Discourse threads I have found (from 2+ years ago) include:

My discourse app.yml contains the following (domain and email redacted):

docker_args:
  - "--net nginx-proxy"

env:
  VIRTUAL_HOST: {domain}
  VIRTUAL_PORT: 80
  LETSENCRYPT_HOST: {domain}
  LETSENCRYPT_EMAIL: {email address}

I’ve commented out the exposed port in the example below. I have tried multiple variations and have seen many different suggestions, e.g. the comments on this post: Discourse with jwilder/nginx-proxy howtos? In some examples the port is exposed directly; in others it is commented out and instead specified as VIRTUAL_PORT in the docker variables. I’m a little confused by this part.

#expose:
#  - "80"
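For reference, the variations I’ve come across look roughly like this (a sketch with placeholder values, based on my reading of the tutorials; as far as I can tell you would use one approach or the other, not both):

```yaml
## Variant A: expose the port on the Discourse container
expose:
  - "80"

## Variant B: leave expose commented out and point nginx-proxy at the
## container port via the env section instead
env:
  VIRTUAL_HOST: forum.example.com
  VIRTUAL_PORT: 80
```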

All my containers are on the same network e.g.
docker network inspect nginx-proxy

[
    {
        "Name": "nginx-proxy",
        "Id": "d2715f513771f002711521838340b879bb9106ef50118fb29be6b0cf2d5f25e7",
        "Created": "2022-02-01T20:32:10.021632263Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0b6296ccf7e31b67caf0d31ccc5485e30bd6f1746ceee7e6ce29617b892178cd": {
                "Name": "app",
                "EndpointID": "1cea3782ef9dea3cf0066add142121f569682c74ff8faa6b19813035eff8998c",
                "MacAddress": "02:72:f8:ee:03:32",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "975ea34d56a65b722aa5e83b99f71039fd687b1aee4e8d4df3cb192b1d35c043": {
                "Name": "nginx-proxy",
                "EndpointID": "bc7a0b4602614be516ee0ceb1c4edf923b91bf7a6528c013157180a98d435807",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "e299a734584a765dbc10102a5bc9b0d0c98ac6096b6ae75dd1169105893e40af": {
                "Name": "letsencrypt-proxy",
                "EndpointID": "6b3588359ac4eec02618d3a7438d2750cb6e1c3d1cb6e1a8ce6378c568358a50",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {}
    }
]
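To make that wall of JSON easier to scan, the container-name-to-IP mapping can be pulled out with a short filter. This is a sketch: the sample JSON below mirrors the structure of the output above so the pipeline is self-contained; against a live host you would pipe `docker network inspect nginx-proxy` in instead.

```shell
# Sketch: extract container name -> IPv4 pairs from `docker network inspect`
# output. Replace `echo "$sample"` with `docker network inspect nginx-proxy`
# to run it for real.
sample='[{"Containers":{
  "0b6296cc":{"Name":"app","IPv4Address":"172.18.0.4/16"},
  "975ea34d":{"Name":"nginx-proxy","IPv4Address":"172.18.0.2/16"},
  "e299a734":{"Name":"letsencrypt-proxy","IPv4Address":"172.18.0.3/16"}}}]'
echo "$sample" | python3 -c '
import json, sys
for c in json.load(sys.stdin)[0]["Containers"].values():
    print(c["Name"], c["IPv4Address"])
'
```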

My certificate is generated successfully; however, I’m currently receiving a 502 error.

When trying to debug by looking at the nginx logs on the Discourse container, I can see the log files are created, but nothing is written to them:

root@ubuntu-vm-dev-app:/var/log/nginx# ls -lt
total 692
-rw-r--r-- 1 www-data www-data      0 Feb  4 07:23 access.letsencrypt.log
-rw-r--r-- 1 www-data www-data      0 Feb  4 07:23 error.letsencrypt.log
-rw-r--r-- 1 www-data www-data      0 Feb  4 07:23 error.log
-rw-r--r-- 1 www-data www-data      0 Feb  1 23:04 access.log
-rw-r--r-- 1 www-data www-data 700917 Jan 31 23:48 error.log.1
-rw-r--r-- 1 www-data www-data      0 Jan 31 22:55 access.letsencrypt.log.1
-rw-r--r-- 1 www-data www-data      0 Jan 31 22:55 error.letsencrypt.log.1

My docker logs show the following, which looks to be the most useful information so far (I’ve redacted my public IP and domain name):

nginx-proxy          | nginx.1     | 2022/02/04 08:35:04 [error] 206#206: *93 connect() failed (111: Connection refused) while connecting to upstream, client: {REDACTED_PUBLIC_IP}, server: {MY_FORUM_DOMAIN}.com, request: "GET / HTTP/2.0", upstream: "http://172.18.0.4:80/", host: "{MY_FORUM_DOMAIN}.com"
nginx-proxy          | nginx.1     | {MY_FORUM_DOMAIN}.com {REDACTED_PUBLIC_IP} - - [04/Feb/2022:08:35:04 +0000] "GET / HTTP/2.0" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" "172.18.0.4:80"
nginx-proxy          | nginx.1     | 2022/02/04 08:35:06 [error] 206#206: *93 connect() failed (111: Connection refused) while connecting to upstream, client:{REDACTED_PUBLIC_IP}, server: {MY_FORUM_DOMAIN}.com, request: "GET /service-worker.js HTTP/2.0", upstream: "http://172.18.0.4:80/service-worker.js", host: "{MY_FORUM_DOMAIN}.com", referrer: "https://{MY_FORUM_DOMAIN}.com/service-worker.js"
nginx-proxy          | nginx.1     | {MY_FORUM_DOMAIN}.com {REDACTED_PUBLIC_IP} - - [04/Feb/2022:08:35:06 +0000] "GET /service-worker.js HTTP/2.0" 502 559 "https://{MY_FORUM_DOMAIN}.com/service-worker.js" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" "172.18.0.4:80"
nginx-proxy          | nginx.1     | 2022/02/04 08:35:19 [error] 206#206: *93 connect() failed (111: Connection refused) while connecting to upstream, client:{REDACTED_PUBLIC_IP}, server: {MY_FORUM_DOMAIN}.com, request: "GET /service-worker.js HTTP/2.0", upstream: "http://172.18.0.4:80/service-worker.js", host: "{MY_FORUM_DOMAIN}.com", referrer: "https://{MY_FORUM_DOMAIN}.com/service-worker.js"
nginx-proxy          | nginx.1     | {MY_FORUM_DOMAIN}.com{REDACTED_PUBLIC_IP} - - [04/Feb/2022:08:35:19 +0000] "GET /service-worker.js HTTP/2.0" 502 559 "https://{MY_FORUM_DOMAIN}.com/service-worker.js" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36" "172.18.0.4:80"

I’ve confirmed that traffic can reach the Discourse container on the port specified in containers/app.yml. E.g. I logged into the Discourse container and listened with netcat on port 80 before making a request from my browser:

root@ubuntu-vm-dev-app:/var/log/nginx# netcat -lnvp 80
Ncat: Version 7.80 ( https://nmap.org/ncat )
Ncat: Listening on :::80
Ncat: Listening on 0.0.0.0:80
Ncat: Connection from 172.18.0.2.
Ncat: Connection from 172.18.0.2:48776.
GET / HTTP/1.1
Host: {REDACTED_HOST}.com
Connection: close
X-Real-IP: {REDACTED_PUBLIC-IP}
X-Forwarded-For: {REDACTED_PUBLIC-IP}
X-Forwarded-Proto: https
X-Forwarded-Ssl: on
X-Forwarded-Port: 443
cache-control: max-age=0
sec-ch-ua: " Not;A Brand";v="99", "Google Chrome";v="97", "Chromium";v="97"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"
upgrade-insecure-requests: 1
user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/97.0.4692.99 Safari/537.36
accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9
sec-fetch-site: none
sec-fetch-mode: navigate
sec-fetch-user: ?1
sec-fetch-dest: document
accept-encoding: gzip, deflate, br
accept-language: en-US,en;q=0.9
cookie: _t=azl2WkpyN1hnbENOd3pyZ1d6V3ZXUT....

root@ubuntu-vm-dev-app:/var/log/nginx# 

A sample of my generated Nginx configuration from the nginx-proxy container is as follows:

root@975ea34d56a6:/etc/nginx/conf.d# cat default.conf 
# nginx-proxy version : 0.10.0-11-g42c8b0c
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  ''      $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  ''      $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 '"$upstream_addr"';
access_log off;
		ssl_protocols TLSv1.2 TLSv1.3;
		ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
		ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
	server_name _; # This is just an invalid value which will never trigger on a real hostname.
	server_tokens off;
	listen 80;
	access_log /var/log/nginx/access.log vhost;
	return 503;
}
server {
	server_name _; # This is just an invalid value which will never trigger on a real hostname.
	server_tokens off;
	listen 443 ssl http2;
	access_log /var/log/nginx/access.log vhost;
	return 503;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/default.crt;
	ssl_certificate_key /etc/nginx/certs/default.key;
}
# {REDACTED_DOMAIN}.com
upstream {REDACTED_DOMAIN}.com {
	## Can be connected with "nginx-proxy" network
	# app
	server 172.18.0.4:80;
}
server {
	server_name {REDACTED_DOMAIN}.com;
	listen 80 ;
	access_log /var/log/nginx/access.log vhost;
	# Do not HTTPS redirect Let'sEncrypt ACME challenge
	location ^~ /.well-known/acme-challenge/ {
		auth_basic off;
		auth_request off;
		allow all;
		root /usr/share/nginx/html;
		try_files $uri =404;
		break;
	}
	location / {
		return 301 https://$host$request_uri;
	}
}
server {
	server_name {REDACTED_DOMAIN}.com;
	listen 443 ssl http2 ;
	access_log /var/log/nginx/access.log vhost;
	ssl_session_timeout 5m;
	ssl_session_cache shared:SSL:50m;
	ssl_session_tickets off;
	ssl_certificate /etc/nginx/certs/{REDACTED_DOMAIN}.com.crt;
	ssl_certificate_key /etc/nginx/certs/{REDACTED_DOMAIN}.com.key;
	ssl_dhparam /etc/nginx/certs/{REDACTED_DOMAIN}.com.dhparam.pem;
	ssl_stapling on;
	ssl_stapling_verify on;
	ssl_trusted_certificate /etc/nginx/certs/{REDACTED_DOMAIN}.com.chain.pem;
	add_header Strict-Transport-Security "max-age=31536000" always;
	include /etc/nginx/vhost.d/default;
	location / {
		proxy_pass http://{REDACTED_DOMAIN}.com;
	}
}

I’m just wondering: has anyone managed to get this working? Or does anyone have ideas on how to debug the 502 error I’m receiving? I know a 502 can have several causes, but if you’ve come across this while configuring Discourse or a similar setup, your solution may be able to assist.

Thanks.

Please have a look at how to debug a Discourse install with several proxy servers involved:
How to debug a sub-folder Discourse install

It looks like your proxy configuration is working, but Discourse isn’t. My guess is that it can’t access the database. You can look at (something like)

/var/discourse/shared/standalone/log/rails/production.log

"GET /service-worker.js HTTP/2.0" **502** […]
To me this looks like a 502 error on fetching a static file; the proxy does not seem to be working.

Oh. That’s from your proxy nginx, not the discourse nginx. Sorry.

I don’t see anything obviously wrong in what you’ve described, but there are lots of little pieces. Maybe discourse is on the wrong docker network (but it looked like you tried to address that).

For the past several months I’ve been meaning to figure out how to do just what you’re saying, but people keep paying me to do other things. Maybe in the next week or two, but I’ve been saying that for months.

[…] "https://… […]

Try configuring your outside proxy to connect to the HTTP URL on the Discourse nginx rather than HTTPS.


Hi @rrit, @pfaffman, thank you for the responses and suggestions.

I had another dig at this today; it took a good while, but I have finally got it working. I’ll provide my configuration and some methods of debugging in case it helps anyone else out there.

Note: please refer to my initial post for info on how the nginx-proxy, acme-companion, and Discourse containers glue together to automate certificates and reverse proxying.

If you don’t care about the debugging methods, jump to the end where my solution is posted (docker-compose file and app.yml file).

Main error identified from my configuration

It turns out one error was my own fault: blindly following multiple tutorials left things in a mess. If you look at my initial post, I was able to listen on port 80 of the Discourse app container using “netcat -lnvp 80”. That doesn’t make sense: if I want the reverse proxy to send traffic to a port the Discourse container is listening on, that port should not have been free for netcat to bind… it should already be ‘in use’.
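You can demonstrate the underlying point with nothing Discourse-specific: once a process holds a TCP port, a second attempt to bind it fails. (Port 18080 below is an arbitrary placeholder, not anything from my setup.)

```shell
# If something (e.g. nginx) were really listening on the port, a second
# bind -- which is effectively what `netcat -lnvp 80` attempts -- would
# be refused.
python3 - <<'EOF'
import socket

first = socket.socket()                 # stands in for nginx
first.bind(("127.0.0.1", 18080))
first.listen()

second = socket.socket()                # stands in for netcat
try:
    second.bind(("127.0.0.1", 18080))
    print("second bind succeeded")      # what I saw: the port was free
except OSError:
    print("second bind refused")        # what should happen if nginx listens
EOF
```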

The issue was that I was using the following template: web.socketed.template.yml
This template makes nginx inside the Discourse container listen on Unix sockets rather than TCP/IP ports. Here is the template and the changes it makes to the nginx configuration:

ubuntu@ubuntu-vm-dev:/var/discourse/templates$ cat web.socketed.template.yml 
run:
  - file:
     path: /etc/runit/1.d/remove-old-socket
     chmod: "+x"
     contents: |
        #!/bin/bash
        rm -f /shared/nginx.http*.sock
  - file:
     path: /etc/runit/3.d/remove-old-socket
     chmod: "+x"
     contents: |
        #!/bin/bash
        rm -rf /shared/nginx.http*.sock
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /listen 80;/
     to: |
       listen unix:/shared/nginx.http.sock;
       set_real_ip_from unix:;
  - replace:
     filename: "/etc/nginx/conf.d/discourse.conf"
     from: /listen 443 ssl http2;/
     to: |
       listen unix:/shared/nginx.https.sock ssl http2;
       set_real_ip_from unix:;

The issue was that nginx in my nginx-proxy container was successfully forwarding traffic to the Discourse container, but to port 80 rather than to the shared Unix socket.
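For reference, had I kept the socketed template, the outside nginx would have needed to proxy to the socket rather than a TCP port, i.e. something like the following. This is only a sketch, not what nginx-proxy generates, and it also assumes the socket path is volume-mounted into the proxy container:

```nginx
# Sketch only -- requires /shared/nginx.http.sock to be shared into
# the proxy container.
location / {
    proxy_pass http://unix:/shared/nginx.http.sock;
}
```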

You can see in the nginx-proxy documentation that the default port is 80 unless a VIRTUAL_PORT is specified.
Screenshot attachment from: https://github.com/nginx-proxy/nginx-proxy

Rather than messing around with the nginx-proxy configuration, I removed web.socketed.template.yml from my templates. I then removed any ports the Discourse container was exposing, as they are not required: as long as the Discourse container is on the correct docker network (the same network as nginx-proxy and acme-companion), it works fine.

Debugging Tips

1. Network debugging

1.1 Docker network
You can verify that all containers are attached to the same network with the following command (after launching your Discourse app and the nginx/acme containers):

ubuntu@ubuntu-vm-dev:/var/discourse$ docker network inspect nginx-proxy
 {
        "Name": "nginx-proxy",
        "Id": "d2715f513771f002711521838340b879bb9106ef50118fb29be6b0cf2d5f25e7",
        "Created": "2022-02-01T20:32:10.021632263Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "205772214b77a6bf49a76d082eef216aa5fc4ca15a39ae2c8fefc0b3eeb61e00": {
                "Name": "nginx-test_webserver2_1",
                "EndpointID": "ee3096663b1a90443906652ff3a192243fa1f60081e5bbee190a23f1d23393f7",
                "MacAddress": "02:42:ac:12:00:02",
                "IPv4Address": "172.18.0.2/16",
                "IPv6Address": ""
            },
            "3050831bf600ae02c18e5045549267b583ec732d6f1550f869ab651dfc7d8dca": {
                "Name": "nginx-test_nginx_1",
                "EndpointID": "a7f95f2a7589ca3823da01b9d28244bbd81815d6f5cc1f2f89804a488ef2148d",
                "MacAddress": "02:42:ac:12:00:03",
                "IPv4Address": "172.18.0.3/16",
                "IPv6Address": ""
            },
            "696ce8d129a62ce37ae8373ed4f603fd5d731f5f29166bd6fa889631ef0ae606": {
                "Name": "nginx-proxy",
                "EndpointID": "99d90539668b97652ffca7e3301aa83c40fe7b31ca8eea2a8a4d9bf7afe19ac0",
                "MacAddress": "02:42:ac:12:00:04",
                "IPv4Address": "172.18.0.4/16",
                "IPv6Address": ""
            },
            "d3faf6489ca6617dceda4f2907ee6c055a1d81e3590c3eab2768601dfc0b60d7": {
                "Name": "app",
                "EndpointID": "aed6bfcc61e439e2206615a1569355c529a63a21c356e54da378f630d44fd3bc",
                "MacAddress": "02:72:f8:ee:03:32",
                "IPv4Address": "172.18.0.6/16",
                "IPv6Address": ""
            },
            "e04f2125001d5f346255312f10cdd0ab176388cddb0819b0022c5988a23303eb": {
                "Name": "letsencrypt-proxy",
                "EndpointID": "0088bcd942aed41c9351b935ed7b115e49046cc3a8308ca5a8d3828a194ecfee",
                "MacAddress": "02:42:ac:12:00:05",
                "IPv4Address": "172.18.0.5/16",
                "IPv6Address": ""
            }
        },

Note: change the network name to the one you are using (unless you are also using the name nginx-proxy)

You should then see these docker-network IP addresses inside the generated nginx-proxy configuration, which you can view via:

ubuntu@ubuntu-vm-dev:/var/discourse$ docker exec -it nginx-proxy /bin/bash
root@696ce8d129a6:/app# cd /etc/nginx/conf.d/
root@696ce8d129a6:/etc/nginx/conf.d# cat default.conf
... NGINX Configuration will be here ... 

To test network communication, interact with an IP address listed in one of the upstream blocks. For example, from the nginx-proxy container, try to reach the Discourse container via its assigned docker-network IP address.

E.g. my configuration file contains:

upstream my_forum_domain.com {
	## Can be connected with "nginx-proxy" network
	# app
	server 172.18.0.6:80;
}

As the upstream IP for my Discourse container is 172.18.0.6, a curl from the nginx-proxy container against that address should return successful output:

root@696ce8d129a6:/etc/nginx/conf.d# curl 172.18.0.6
... Curl request output ... 

1.2 Network troubleshooting tools (netstat, lsof, netcat)
The following tools assist with troubleshooting: netstat, lsof, and netcat. The Discourse container does not come with them pre-installed, so install them via:

root@ubuntu-vm-dev-app:/# apt-get install lsof
root@ubuntu-vm-dev-app:/# apt-get install net-tools
root@ubuntu-vm-dev-app:/# apt-get install netcat

lsof
You should see your nginx process listening on port 80 via lsof.

root@ubuntu-vm-dev-app:/# lsof -i :80
COMMAND PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
nginx    55 root    6u  IPv4 254317      0t0  TCP *:http (LISTEN)
root@ubuntu-vm-dev-app:/# 

netstat
The following netstat command is also useful for checking that the correct processes and ports are in use. E.g. you can see the ports associated with HTTP (80), Redis (6379), PostgreSQL (5432), Unicorn (3000), etc.:

root@ubuntu-vm-dev-app:/# netstat -peanut
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       User       Inode      PID/Program name    
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      0          254317     55/nginx: master pr 
tcp        0      0 127.0.0.1:3000          0.0.0.0:*               LISTEN      1000       255560     -                   
tcp        0      0 0.0.0.0:5432            0.0.0.0:*               LISTEN      105        255137     -                   
tcp        0      0 127.0.0.11:46697        0.0.0.0:*               LISTEN      0          254194     -                   
tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      106        255126     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37600         ESTABLISHED 106        255583     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37608         ESTABLISHED 106        255610     -                   
tcp        0      0 127.0.0.1:37632         127.0.0.1:6379          ESTABLISHED 1000       254766     -                   
tcp        0      0 127.0.0.1:37626         127.0.0.1:6379          ESTABLISHED 1000       254745     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37634         ESTABLISHED 106        254773     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37650         ESTABLISHED 106        255950     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37668         ESTABLISHED 106        255965     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37612         ESTABLISHED 106        255618     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37602         ESTABLISHED 106        255585     -                   
tcp        0      0 127.0.0.1:37624         127.0.0.1:6379          ESTABLISHED 1000       254739     -                   
tcp        0      0 127.0.0.1:37594         127.0.0.1:6379          ESTABLISHED 1000       255578     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37594         ESTABLISHED 106        254527     -                   
tcp        0      0 127.0.0.1:37618         127.0.0.1:6379          ESTABLISHED 1000       255635     -                   
tcp        0      0 127.0.0.1:37586         127.0.0.1:6379          ESTABLISHED 1000       255168     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37622         ESTABLISHED 106        255645     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37624         ESTABLISHED 106        254740     -                   
tcp        0      0 127.0.0.1:37600         127.0.0.1:6379          ESTABLISHED 1000       254537     -                   
tcp        0      0 127.0.0.1:37610         127.0.0.1:6379          ESTABLISHED 1000       254563     -                   
tcp        0      0 127.0.0.1:3000          127.0.0.1:36678         TIME_WAIT   0          0          -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37620         ESTABLISHED 106        255642     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37636         ESTABLISHED 106        254779     -                   
tcp        0      0 127.0.0.1:37628         127.0.0.1:6379          ESTABLISHED 1000       254751     -                   
tcp        0      0 127.0.0.1:37590         127.0.0.1:6379          ESTABLISHED 1000       254361     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37626         ESTABLISHED 106        254746     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37586         ESTABLISHED 106        255169     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37590         ESTABLISHED 106        255173     -                   
tcp        0      0 127.0.0.1:37668         127.0.0.1:6379          ESTABLISHED 1000       254849     -                   
tcp        0      0 127.0.0.1:37622         127.0.0.1:6379          ESTABLISHED 1000       254582     -                   
tcp        0      0 127.0.0.1:37634         127.0.0.1:6379          ESTABLISHED 1000       254772     -                   
tcp        0      0 127.0.0.1:37602         127.0.0.1:6379          ESTABLISHED 1000       254541     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37630         ESTABLISHED 106        254760     -                   
tcp        0      0 127.0.0.1:37612         127.0.0.1:6379          ESTABLISHED 1000       255617     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37628         ESTABLISHED 106        254752     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37610         ESTABLISHED 106        255612     -                   
tcp        0      0 127.0.0.1:3000          127.0.0.1:36682         TIME_WAIT   0          0          -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37632         ESTABLISHED 106        254767     -                   
tcp        0      0 127.0.0.1:37650         127.0.0.1:6379          ESTABLISHED 1000       254826     -                   
tcp        0      0 127.0.0.1:37608         127.0.0.1:6379          ESTABLISHED 1000       254559     -                   
tcp        0      0 127.0.0.1:37620         127.0.0.1:6379          ESTABLISHED 1000       255641     -                   
tcp        0      0 127.0.0.1:37636         127.0.0.1:6379          ESTABLISHED 1000       254778     -                   
tcp        0      0 127.0.0.1:37630         127.0.0.1:6379          ESTABLISHED 1000       254759     -                   
tcp        0      0 127.0.0.1:6379          127.0.0.1:37618         ESTABLISHED 106        255636     -                   
tcp6       0      0 :::5432                 :::*                    LISTEN      105        255138     -                   
tcp6       0      0 :::6379                 :::*                    LISTEN      106        255127     -                   
udp        0      0 127.0.0.1:54569         127.0.0.1:54569         ESTABLISHED 105        255145     -                   
udp        0      0 127.0.0.11:44019        0.0.0.0:*                           0          254193     -                   
root@ubuntu-vm-dev-app:/# 

netcat
Use nc to verify that the listening ports are configured and reachable (although netstat should show this anyway):

root@ubuntu-vm-dev-app:/# nc -zv localhost 1-10000 2>&1 | grep -i  succeeded 
Connection to localhost (127.0.0.1) 80 port [tcp/http] succeeded!
Connection to localhost (127.0.0.1) 3000 port [tcp/*] succeeded!
Connection to localhost (127.0.0.1) 5432 port [tcp/postgresql] succeeded!
Connection to localhost (127.0.0.1) 6379 port [tcp/redis] succeeded!

1.3 Processes
Another useful command is ps aux:

root@ubuntu-vm-dev-app:/# ps aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.0   6772  3252 pts/0    Ss+  15:20   0:00 /bin/bash /sbin/boot
root          41  0.0  0.0   2340   636 pts/0    S+   15:20   0:00 /usr/bin/runsvdir -P /etc/service
root          42  0.0  0.0   2188   696 ?        Ss   15:20   0:00 runsv cron
root          43  0.0  0.0   2188   636 ?        Ss   15:20   0:00 runsv rsyslog
root          44  0.0  0.0   2188   632 ?        Ss   15:20   0:00 runsv nginx
root          45  0.0  0.0   2188   636 ?        Ss   15:20   0:00 runsv postgres
root          46  0.0  0.0   2188   636 ?        Ss   15:20   0:00 runsv redis
root          47  0.0  0.0   2188   636 ?        Ss   15:20   0:00 runsv unicorn
root          48  0.0  0.0 151068  3868 ?        Sl   15:20   0:00 rsyslogd -n
root          49  0.0  0.0   2336   700 ?        S    15:20   0:00 svlogd /var/log/redis
root          50  0.0  0.0   6620  2768 ?        S    15:20   0:00 cron -f
redis         51  0.1  0.0  54180  5828 ?        Sl   15:20   0:02 /usr/bin/redis-server *:6379
root          52  0.0  0.0   2336   636 ?        S    15:20   0:00 svlogd /var/log/postgres
postgres      53  0.0  0.3 213196 28676 ?        S    15:20   0:00 /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main
discour+      54  0.0  0.0  15256  3924 ?        S    15:20   0:00 /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
root          55  0.0  0.0  21280  7128 ?        S    15:20   0:00 nginx: master process /usr/sbin/nginx
www-data      69  0.0  0.0  22156  5004 ?        S    15:20   0:00 nginx: worker process
www-data      70  0.0  0.0  22124  4524 ?        S    15:20   0:00 nginx: worker process
www-data      71  0.0  0.0  21656  3768 ?        S    15:20   0:00 nginx: cache manager process
postgres      75  0.0  0.0 213296  6344 ?        Ss   15:20   0:00 postgres: 13/main: checkpointer 
postgres      76  0.0  0.0 213196  5896 ?        Ss   15:20   0:00 postgres: 13/main: background writer 
postgres      77  0.0  0.1 213196 10028 ?        Ss   15:20   0:00 postgres: 13/main: walwriter 
postgres      78  0.0  0.1 213736  8492 ?        Ss   15:20   0:00 postgres: 13/main: autovacuum launcher 
postgres      79  0.0  0.0  67960  5548 ?        Ss   15:20   0:00 postgres: 13/main: stats collector 
postgres      80  0.0  0.0 213752  6868 ?        Ss   15:20   0:00 postgres: 13/main: logical replication launcher 
discour+      81  0.4  3.1 436276 253572 ?       Sl   15:20   0:08 unicorn master -E production -c config/unicorn.conf.rb
postgres      99  0.0  0.3 220252 30612 ?        Ss   15:20   0:00 postgres: 13/main: discourse discourse [local] idle
discour+     116  0.6  3.7 831804 306708 ?       SNl  15:20   0:12 sidekiq 6.4.1 discourse [0 of 5 busy]
discour+     125  0.4  3.6 750356 297108 ?       Sl   15:20   0:07 unicorn worker[0] -E production -c config/unicorn.conf.rb
discour+     133  0.3  3.5 737812 289896 ?       Sl   15:20   0:07 unicorn worker[1] -E production -c config/unicorn.conf.rb
postgres     147  0.0  0.3 219372 27792 ?        Ss   15:20   0:00 postgres: 13/main: discourse discourse [local] idle
root        1858  0.0  0.0   7036  3828 pts/1    Ss   15:46   0:00 /bin/bash
postgres    1982  0.0  0.2 217012 24008 ?        Ss   15:47   0:00 postgres: 13/main: discourse discourse [local] idle
discour+    2236  0.0  0.0  13760  2216 ?        S    15:51   0:00 sleep 1
root        2237  0.0  0.0   9636  3292 pts/1    R+   15:51   0:00 ps aux
root@ubuntu-vm-dev-app:/# 

Note: the reason for providing the above outputs is to help you spot abnormalities or differences between your deployment and mine. Alternatively, you can spin up your own working deployment (without the reverse proxy) and compare things such as:

  • Networking
  • Processes
  • File Creations / Modification
  • Log output

My working configuration files

1. Example of log output upon ./launcher rebuild app

+ /usr/bin/docker run --shm-size=512m -d --restart=always -e LANG=en_US.UTF-8 -e RAILS_ENV=production -e UNICORN_WORKERS=2 -e UNICORN_SIDEKIQS=1 -e RUBY_GLOBAL_METHOD_CACHE_SIZE=131072 -e RUBY_GC_HEAP_GROWTH_MAX_SLOTS=40000 -e RUBY_GC_HEAP_INIT_SLOTS=400000 -e RUBY_GC_HEAP_OLDOBJECT_LIMIT_FACTOR=1.5 -e DISCOURSE_DB_SOCKET=/var/run/postgresql -e DISCOURSE_DB_HOST= -e DISCOURSE_DB_PORT= -e VIRTUAL_HOST={your_domain}.com -e LETSENCRYPT_HOST={your_domain}.com -e LETSENCRYPT_EMAIL=steve@{your_email_domain}.com -e LC_ALL=en_US.UTF-8 -e LANGUAGE=en_US.UTF-8 -e EMBER_CLI_PROD_ASSETS=1 -e DISCOURSE_HOSTNAME={your_domain}.com -e DISCOURSE_DEVELOPER_EMAILS=steve@{your_email_domain}.com -e DISCOURSE_SMTP_ADDRESS=smtp.mailgun.org -e DISCOURSE_SMTP_PORT=587 -e DISCOURSE_SMTP_USER_NAME=noreply@mail.{your_domain}.com -e DISCOURSE_SMTP_PASSWORD={your_password} -e DISCOURSE_SMTP_DOMAIN=mail.{your_domain}.com -e DISCOURSE_NOTIFICATION_EMAIL=noreply@mail.{your_domain}.com -h ubuntu-vm-dev-app -e DOCKER_HOST_IP=172.17.0.1 --name app -t -v /var/discourse/shared/standalone:/shared -v /var/discourse/shared/standalone/log/var-log:/var/log --mac-address 02:72:f8:ee:03:32 --network nginx-proxy local_discourse/app /sbin/boot
d3faf6489ca6617dceda4f2907ee6c055a1d81e3590c3eab2768601dfc0b60d7
ubuntu@ubuntu-vm-dev:/var/discourse$ 
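After a rebuild like the one above, it's worth confirming that the container actually joined the proxy network and that nginx-proxy generated a server block for the domain. A sketch (same container-name assumptions as the rest of this post; it does nothing on a machine without Docker):

```shell
#!/bin/sh
# Sketch: post-rebuild sanity checks. "app" and "nginx-proxy" are the
# container names from this thread; adjust to your deployment.
if command -v docker >/dev/null 2>&1; then
    # The app container should list "nginx-proxy" among its networks.
    docker inspect app \
        --format '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' || true

    # nginx-proxy should have generated a server block for VIRTUAL_HOST.
    docker exec nginx-proxy grep -n 'server_name' /etc/nginx/conf.d/default.conf || true
else
    echo "docker not found; run these checks on the Docker host"
fi
```

If the first command doesn't print `nginx-proxy`, the `--network nginx-proxy` docker arg didn't take effect; if the second prints nothing for your domain, nginx-proxy never saw the `VIRTUAL_HOST` variable.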

2. My docker-compose file

ubuntu@ubuntu-vm-dev:~/nginx-test$ cat docker-compose.yml 
version: '3'
services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - letsencrypt-certs:/etc/nginx/certs
      - letsencrypt-vhost-d:/etc/nginx/vhost.d
      - letsencrypt-html:/usr/share/nginx/html
  letsencrypt-proxy:
      image: jrcs/letsencrypt-nginx-proxy-companion
      container_name: letsencrypt-proxy
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock:ro
        - letsencrypt-certs:/etc/nginx/certs
        - letsencrypt-vhost-d:/etc/nginx/vhost.d
        - letsencrypt-html:/usr/share/nginx/html
      environment:
        - DEFAULT_EMAIL=steve@{your_email_domain}.com
        - NGINX_PROXY_CONTAINER=nginx-proxy

networks:
  default:
    external:
      name: nginx-proxy
 
volumes:
  letsencrypt-certs:
  letsencrypt-vhost-d:
  letsencrypt-html:
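One detail worth calling out: the compose file above declares the `nginx-proxy` network as `external`, so Compose will not create it for you. The start order I'd expect is roughly this (a sketch; `docker compose` may be `docker-compose` on older installs, and the commands are guarded so they no-op without Docker):

```shell
#!/bin/sh
# Sketch: start order for the compose file above. The external network must
# exist before "up" -- compose will not create an external network itself.
if command -v docker >/dev/null 2>&1; then
    docker network create nginx-proxy 2>/dev/null || true   # no-op if it exists
    docker compose up -d 2>/dev/null || docker-compose up -d 2>/dev/null || true
    docker compose ps 2>/dev/null || docker-compose ps 2>/dev/null || true
else
    echo "docker not found; run these commands on the Docker host"
fi
```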

3. My discourse app.yml file

ubuntu@ubuntu-vm-dev:/var/discourse$ cat containers/app.yml 
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/sshd.template.yml"
  - "templates/web.template.yml"

docker_args:
  - "--network nginx-proxy"

params:
  db_default_text_search_config: "pg_catalog.english"

  db_shared_buffers: "128MB"

env:
  VIRTUAL_HOST: {your_domain}.com
  LETSENCRYPT_HOST: {your_domain}.com
  LETSENCRYPT_EMAIL: steve@{your_email_domain}.com
  LC_ALL: en_US.UTF-8
  LANG: en_US.UTF-8
  LANGUAGE: en_US.UTF-8
  EMBER_CLI_PROD_ASSETS: 1
  # DISCOURSE_DEFAULT_LOCALE: en

  UNICORN_WORKERS: 2
  
  DISCOURSE_HOSTNAME: {your_domain}.com
  
  DISCOURSE_DEVELOPER_EMAILS: 'steve@{your_email_domain}.com'

  DISCOURSE_SMTP_ADDRESS: smtp.mailgun.org
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: noreply@mail.{your_domain}.com
  DISCOURSE_SMTP_PASSWORD: "{password}"
  #DISCOURSE_SMTP_ENABLE_START_TLS: true           # (optional, default true)
  DISCOURSE_SMTP_DOMAIN: mail.{your_domain}.com
  DISCOURSE_NOTIFICATION_EMAIL: noreply@mail.{your_domain}.com

volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/discourse/discourse-solved.git
          - git clone https://github.com/discourse/discourse-cakeday.git

run:
  - exec: echo "Beginning of custom commands"
  - exec: echo "End of custom commands"
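With the app.yml above in place, the rebuild-and-verify cycle looks roughly like this. This is a sketch: `{your_domain}.com` is the redacted placeholder from the files above, and the whole thing is guarded so it does nothing on a machine without a standard Discourse install.

```shell
#!/bin/sh
# Sketch: rebuild Discourse and watch the ACME companion issue the cert.
# Paths and names follow the standard Discourse install used in this thread.
if [ -x /var/discourse/launcher ] && command -v docker >/dev/null 2>&1; then
    cd /var/discourse
    ./launcher rebuild app

    # The companion logs show the HTTP-01 challenge and certificate creation.
    docker logs --tail 50 letsencrypt-proxy || true

    # Finally, the site should answer over HTTPS (replace the placeholder).
    curl -I "https://{your_domain}.com" || true
else
    echo "no standard Discourse install found; run this on the Docker host"
fi
```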

Hopefully the above helps someone else who is also stuck on this :slight_smile:
