Ports 443/80 show as closed after installation

Hi,
I have just finished my first Discourse installation on an Ubuntu 22.04.4 server on Proxmox VE (virtual environment).
The installation went fine, with no errors, but after finishing it the forum site won't open, saying that the service is not accessible.

When scanning from another machine on my network, I see the ports as closed:

PS C:\Users\mwojt> nmap 192.168.131.211
Nmap scan report for 192.168.131.211

PORT    STATE  SERVICE
22/tcp  open   ssh
80/tcp  closed http
443/tcp closed https

But when running the same scan against localhost from inside the Ubuntu machine, the ports show as open:

root@ubuntu-discourse:~# nmap localhost
Nmap scan report for localhost (127.0.0.1)

PORT    STATE SERVICE
22/tcp  open  ssh
80/tcp  open  http
443/tcp open  https

However, if I scan the VM's own IP address from inside the same Ubuntu VM, I see this:

root@ubuntu-discourse:~# nmap 192.168.131.211
Nmap scan report for ubuntu-discourse (192.168.131.211)

PORT    STATE    SERVICE
22/tcp  open     ssh
80/tcp  filtered http
443/tcp filtered https

So the ports show up as filtered, even though they are allowed in the firewall:

root@ubuntu-discourse:~# ufw status
Status: active

To                         Action      From
--                         ------      ----
80                         ALLOW       Anywhere
443                        ALLOW       Anywhere
22                         ALLOW       Anywhere
80 (v6)                    ALLOW       Anywhere (v6)
443 (v6)                   ALLOW       Anywhere (v6)
22 (v6)                    ALLOW       Anywhere (v6)

And the Docker port forwarding seems to be set correctly:

root@ubuntu-discourse:~# docker port 6922c7802903
80/tcp -> 0.0.0.0:80
80/tcp -> [::]:80
443/tcp -> 0.0.0.0:443
443/tcp -> [::]:443

What am I doing wrong? Where is the problem?

I just spent another 90 minutes installing Discourse, this time on a separate physical machine to rule out the virtual environment, and I got an identical issue, even though I carefully followed the instructions from GitHub.

Is it just impossible to get this to work??

Could the problem be at your end? I see very similar results to yours with my correctly working Discourse instance.

Can you reach your instance using a proxy, such as Browserling?

Edit: hang on, your address 192.168.131.211 is a local address; one would not expect it to be reachable from the outside world.

Edit: what do you see on your Discourse host when you try netstat -rn?

Here is my netstat:

root@ubuntu-forum:/var/discourse# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.131.1   0.0.0.0         UG        0 0          0 enp1s0
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
192.168.130.0   0.0.0.0         255.255.254.0   U         0 0          0 enp1s0
192.168.131.1   0.0.0.0         255.255.255.255 UH        0 0          0 enp1s0
192.168.131.152 0.0.0.0         255.255.255.255 UH        0 0          0 enp1s0

Aside from Discourse on Ubuntu, I installed Talkyard on Debian (Talkyard is a forum engine somewhat similar to Discourse), also on Docker, and it is working like a charm. So I think I will try installing Discourse on Debian too.

Netstat -rn on my Debian looks like this:

root@debian-12:~# netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         192.168.131.1   0.0.0.0         UG        0 0          0 ens18
172.17.0.0      0.0.0.0         255.255.0.0     U         0 0          0 docker0
172.26.0.0      0.0.0.0         255.255.255.128 U         0 0          0 br-886bebfa13ae
192.168.130.0   0.0.0.0         255.255.254.0   U         0 0          0 ens18

Not sure if this is helpful.

I think it's true that Discourse only works when accessed through a domain, so do you have a setup whereby you can access your site using a browser and a domain? If you are entirely local to your own LAN you can perhaps do that with a hosts file, but I'm not sure. I think both the server and the client (and perhaps also the Docker container) need to be able to do a name lookup.
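For example, an entry like this in /etc/hosts on the client (or C:\Windows\System32\drivers\etc\hosts on Windows) might do it; forum.example.local here is just a placeholder for whatever hostname you set as DISCOURSE_HOSTNAME in app.yml, paired with your VM's address:

192.168.131.211    forum.example.local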

I have a local DNS server that resolves my domain name to that host, so it works just like it would from the outside world.

I just successfully installed Discourse on a DigitalOcean VM. I am going to use it as a reference for my local configuration. One thing I immediately noticed is the hosts file on that VM; it has the following entry:
(screenshot of the hosts file entry)

Hopefully this is it. I will let you know.

Nope, failure… I am completely defeated after 3 days of struggle and I am tired… :slightly_frowning_face:
I am starting to think that it is not possible to install Discourse on your own local machine, rather than one hosted by a provider :frowning:

Check this video I recorded while installing it and please let me know what I am doing wrong.

Might be worth trying
lsof -i
on the server
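To cut that down to just the listening sockets, something like this is usually easier to read (-P shows numeric ports, -n skips DNS lookups):

lsof -i -P -n | grep LISTEN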

It seems pretty likely that Discourse is running but something about the network situation makes it unreachable.

OK, I found the root cause… I checked the Docker logs and it turned out the nginx server does not start at all because it fails to obtain a Let's Encrypt certificate (see the attached logs):
docker_logs_not_working.txt (10.0 KB)
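(For anyone hitting the same thing: I pulled the logs with the standard launcher script, roughly like this, where app is the container name used by the default install:

cd /var/discourse
./launcher logs app

Plain docker logs <container_id> should show much the same output.)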

Now I need to figure out how to fix that. In fact, I do not even need SSL, as I am using a reverse proxy with its own SSL certificates, so it can easily talk to Discourse over port 80. Not sure if the Discourse server will like that, though.
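If I go the reverse-proxy-only route, I suppose the SSL bits could be switched off in containers/app.yml, roughly like this (just a sketch, I have not tried it yet), followed by ./launcher rebuild app:

templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  ## commented out so the container serves plain HTTP only
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"

expose:
  - "80:80"    # plain HTTP for the reverse proxy
  #- "443:443" # no HTTPS inside the container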

If you search around, you will find that this is the most common reason why local setups in closed environments, i.e. intranets, fail. Discourse needs SSL.

My DNS is hosted by Cloudflare, so I can easily get my Let's Encrypt certs, since I can provide the API key for that. Can I configure ACME in Discourse to make my cert provisioning work smoothly? I was not able to find it in the documentation, but maybe I am not searching well.

After a long struggle, I finally managed to fix it.

Here is what needs to be done:
From the SSH session run the following command to find the container IDs or names:

docker ps

Use the following command to access the Docker container's shell:

docker exec -it [container_id or name] bash

Export the Cloudflare API key and email as environment variables. This allows the acme.sh script to authenticate with Cloudflare's API and create and remove the DNS records needed for the DNS challenge. I used my actual email address and the Global API Key from my Cloudflare account.

export CF_Key="your_cloudflare_global_api_key"
export CF_Email="your_cloudflare_email_address"
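As far as I know, acme.sh also accepts a scoped Cloudflare API token instead of the Global API Key, if you would rather not expose the global one; in that case the variables look roughly like this:

export CF_Token="your_cloudflare_api_token"
export CF_Account_ID="your_cloudflare_account_id"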

Change directory to the following:

cd /shared/letsencrypt

Run acme.sh with the --issue command, specifying that you want to use the dns_cf (DNS Cloudflare) mode for handling DNS challenges. Replace yourdomain.com with the domain for which you want the certificate.

./acme.sh --issue --dns dns_cf -d yourdomain.com -d '*.yourdomain.com'

After successful cert creation, the script will display the directory the certificate was copied to. In my case it was:

Your cert is in: /root/.acme.sh/sprawy.info.pl_ecc/sprawy.info.pl.cer
Your cert key is in: /root/.acme.sh/sprawy.info.pl_ecc/sprawy.info.pl.key
The intermediate CA cert is in: /root/.acme.sh/sprawy.info.pl_ecc/ca.cer
And the full chain certs is there: /root/.acme.sh/sprawy.info.pl_ecc/fullchain.cer

Edit the discourse.conf file to update the path to the cert:

nano /etc/nginx/conf.d/discourse.conf

The existing ssl_certificate and ssl_certificate_key lines should be replaced with:

ssl_certificate /root/.acme.sh/sprawy.info.pl_ecc/sprawy.info.pl.cer;
ssl_certificate_key /root/.acme.sh/sprawy.info.pl_ecc/sprawy.info.pl.key;

That way, they point to the new cert locations.

Run this to test the configuration:

nginx -t

If there are no errors, reload the web server:

nginx -s reload

And voilà!
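To double-check from another machine on the LAN, a quick request like this should now return the Discourse response headers instead of a connection error (replace yourdomain.com with the domain the cert was issued for):

curl -I https://yourdomain.com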

Excellent news, well done for figuring it out. Worth noting, I think, that with Let's Encrypt, if you have a series of unsuccessful certificate requests, you get locked out (for 7 days, I think), so it's worth being careful to get these requests right.
