Use multiple Docker containers without exposing ports

:warning: This is an advanced setup. Don’t follow this unless you are experienced with Linux server administration and Docker.

Hello everyone!

If you don’t want to mess with the firewall rules and security of your server, you can configure your Docker multiple container setup with just links and no exposed ports!

This way you can share your data container (postgres/redis) with other containers without exposing it to the internet.


How to

  1. Edit your data.yml file, commenting out the entire expose section:

    #expose:
    # - "5432:5432"
    # - "6379:6379"
    # - "2221:22"

  2. Edit your web_only.yml file, uncommenting the links section:

    links:
      - link:
          name: data
          alias: data

    (remember to use the name of your data container here)

  3. The trick! Also in your web_only.yml file, use your data container’s name as the host for the database and Redis connections:

    DISCOURSE_DB_HOST: data
    DISCOURSE_REDIS_HOST: data
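Putting the three steps together, the relevant fragments of the two templates might look like this (a sketch; the exact placement of the env section is an assumption, so keep the variables wherever your template already defines them):

```yaml
# data.yml: nothing published to the host
#expose:
# - "5432:5432"
# - "6379:6379"
# - "2221:22"

# web_only.yml: link to the data container and point Discourse at it
links:
  - link:
      name: data
      alias: data

env:
  DISCOURSE_DB_HOST: data
  DISCOURSE_REDIS_HOST: data
```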


The Docker Magic

As explained in the Docker Container Linking documentation, when you --link containers:

Docker adds a host entry for the source container to the /etc/hosts file

So now the containers can communicate locally! Also:

If you restart the source container, the linked containers /etc/hosts files will be automatically updated with the source container’s new IP address, allowing linked communication to continue.
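To see the link in action, you can check the hosts entry from inside the running web container (the container name web_only is an assumption; substitute your own):

```shell
# Enter the running web container (assumes the standard launcher setup)
./launcher enter web_only

# Inside the container: the linked "data" alias should resolve to a
# local bridge address taken from /etc/hosts
getent hosts data
```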


Additionally, you can expose ports to localhost only. This way you can still connect to the data container via SSH but only from your server (again, without exposing the database to the internet).

How to:

  1. Edit your data.yml file, adding a localhost bind to the expose section for port 22:

    expose:
    # - "5432:5432"
    # - "6379:6379"
      - "127.0.0.1:2221:22"
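With the port bound to 127.0.0.1, you can still SSH into the data container from the server itself, while the port stays invisible from outside (the user name here is an assumption):

```shell
# From the server itself this works, because the port is bound to loopback
ssh -p 2221 root@127.0.0.1

# Verify the bind address: it should show 127.0.0.1:2221, not 0.0.0.0 or :::
netstat -tlpn | grep 2221
```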

:smile:


I can’t get this going at all for the life of me. No matter what I try, the docker containers always expose the Postgres, Redis and SSH ports. It’s so annoying! I’ve tried commenting out the expose listings and rebooting everything every which way to Sunday. I’ve also tried running many bootstraps on each instance.

root       864  0.3  0.4 1002372 19428 ?       Ssl  10:27   0:02 /usr/bin/docker -d --dns 8.8.8.8 --dns 8.8.4.4
root      1570  0.0  0.1 256020  7664 ?        Sl   10:27   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2222 -container-ip 172.17.0.3 -container-port 22
root      1579  0.0  0.1 182288  5588 ?        Sl   10:27   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.17.0.3 -container-port 443
root      1606  0.0  0.1 198680  7692 ?        Sl   10:27   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.17.0.3 -container-port 80
root      1626  0.0  0.0  21100  1528 pts/3    Ss+  10:27   0:00  \_ /bin/bash /sbin/boot
root      1671  0.0  0.0    188    36 pts/3    S+   10:27   0:00  |   \_ /usr/bin/runsvdir -P /etc/service
root      1672  0.0  0.0    168     4 ?        Ss   10:27   0:00  |       \_ runsv rsyslog
syslog    1675  0.0  0.0 180148  1864 ?        Sl   10:27   0:00  |       |   \_ rsyslogd -n
root      1673  0.0  0.0    168     4 ?        Ss   10:27   0:00  |       \_ runsv cron
root      1676  0.0  0.0  26776  1324 ?        S    10:27   0:00  |       |   \_ cron -f
root      1674  0.0  0.0    168     4 ?        Ss   10:27   0:00  |       \_ runsv nginx
root      1678  0.0  0.1 121152  4704 ?        S    10:27   0:00  |       |   \_ nginx: master process /usr/sbin/nginx
www-data  1689  0.0  0.0 121488  1944 ?        S    10:27   0:00  |       |       \_ nginx: worker process
www-data  1690  0.0  0.0 121752  2688 ?        S    10:27   0:00  |       |       \_ nginx: worker process
www-data  1691  0.0  0.0 121488  1944 ?        S    10:27   0:00  |       |       \_ nginx: worker process
www-data  1692  0.0  0.0 121488  1944 ?        S    10:27   0:00  |       |       \_ nginx: worker process
www-data  1693  0.0  0.0 121332  1960 ?        S    10:27   0:00  |       |       \_ nginx: cache manager process
root      1677  0.0  0.0    168     4 ?        Ss   10:27   0:00  |       \_ runsv unicorn
james     1681  0.1  0.0  29780  3892 ?        S    10:27   0:01  |       |   \_ /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
james     1695  1.9  5.2 439264 214024 ?       Sl   10:27   0:15  |       |       \_ unicorn master -E production -c config/unicorn.conf.rb
james     1792  1.2  6.1 513060 246952 ?       Sl   10:28   0:09  |       |       |   \_ sidekiq 3.2.5 discourse [0 of 5 busy]
james     1808  0.1  5.2 447456 211524 ?       Sl   10:28   0:01  |       |       |   \_ unicorn worker[0] -E production -c config/unicorn.conf.rb
james     1814  0.1  5.2 447456 211516 ?       Sl   10:28   0:01  |       |       |   \_ unicorn worker[1] -E production -c config/unicorn.conf.rb
james     1826  0.1  5.3 451552 215556 ?       Sl   10:28   0:01  |       |       |   \_ unicorn worker[3] -E production -c config/unicorn.conf.rb
james     3589  0.1  5.2 447456 211448 ?       Sl   10:30   0:01  |       |       |   \_ unicorn worker[2] -E production -c config/unicorn.conf.rb
james     4881  0.0  0.1  20036  4920 ?        S    10:41   0:00  |       |       \_ sleep 1
root      1679  0.0  0.0    168     4 ?        Ss   10:27   0:00  |       \_ runsv sshd
root      1683  0.0  0.0  61364  3056 ?        S    10:27   0:00  |           \_ /usr/sbin/sshd -D -e
root      3503  0.0  0.1 182416  5764 ?        Sl   10:30   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 2221 -container-ip 172.17.0.25 -container-port 22
root      3511  0.0  0.1 190484  5708 ?        Sl   10:30   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 5432 -container-ip 172.17.0.25 -container-port 5432
root      3520  0.0  0.1 190484  5720 ?        Sl   10:30   0:00  \_ docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 6379 -container-ip 172.17.0.25 -container-port 6379
root      3528  0.0  0.0  21088  1508 pts/4    Ss+  10:30   0:00  \_ /bin/bash /sbin/boot
root      3565  0.0  0.0    188    40 pts/4    S+   10:30   0:00      \_ /usr/bin/runsvdir -P /etc/service
root      3566  0.0  0.0    168     4 ?        Ss   10:30   0:00          \_ runsv rsyslog
syslog    3574  0.0  0.0 180148  1864 ?        Sl   10:30   0:00          |   \_ rsyslogd -n
root      3567  0.0  0.0    168     4 ?        Ss   10:30   0:00          \_ runsv postgres
message+  3571  0.0  0.9 662692 37716 ?        S    10:30   0:00          |   \_ /usr/lib/postgresql/9.3/bin/postmaster -D /etc/postgresql/9.3/main
message+  3583  0.0  0.0 662952  1664 ?        Ss   10:30   0:00          |       \_ postgres: checkpointer process
message+  3584  0.0  0.1 662952  5072 ?        Ss   10:30   0:00          |       \_ postgres: writer process
message+  3585  0.0  0.0 662952  1664 ?        Ss   10:30   0:00          |       \_ postgres: wal writer process
message+  3586  0.0  0.0 663768  2952 ?        Ss   10:30   0:00          |       \_ postgres: autovacuum launcher process
message+  3587  0.0  0.0 103776  1888 ?        Ss   10:30   0:00          |       \_ postgres: stats collector process
message+  4796  0.0  0.2 666256 11804 ?        Ss   10:40   0:00          |       \_ postgres: discourse discourse 172.17.0.3(56802) idle
message+  4842  0.0  0.1 664100  6964 ?        Ss   10:41   0:00          |       \_ postgres: discourse discourse 172.17.0.3(56804) idle
root      3568  0.0  0.0    168     4 ?        Ss   10:30   0:00          \_ runsv cron
root      3575  0.0  0.0  26776  1320 ?        S    10:30   0:00          |   \_ cron -f
root      3569  0.0  0.0    168     4 ?        Ss   10:30   0:00          \_ runsv redis
sshd      3573  0.2  0.1  38264  7776 ?        Sl   10:30   0:01          |   \_ /usr/bin/redis-server *:6379
root      3570  0.0  0.0    168     4 ?        Ss   10:30   0:00          \_ runsv sshd
root      3572  0.0  0.0  61364  3052 ?        S    10:30   0:00              \_ /usr/sbin/sshd -D -e

As you can see above, everything is forwarding ports, which can be confirmed using netstat:

root@host:/var/discourse# netstat -tlpn|grep docker
tcp6       0      0 :::6379                 :::*                    LISTEN      3520/docker-proxy
tcp6       0      0 :::2221                 :::*                    LISTEN      3503/docker-proxy
tcp6       0      0 :::2222                 :::*                    LISTEN      1570/docker-proxy
tcp6       0      0 :::80                   :::*                    LISTEN      1606/docker-proxy
tcp6       0      0 :::5432                 :::*                    LISTEN      3511/docker-proxy
tcp6       0      0 :::443                  :::*                    LISTEN      1579/docker-proxy
root@host:/var/discourse#

And I can even SSH and telnet to the ports from a different host! My current container images look like this:

data.yml:

expose:
#  - "127.0.0.1:5432:5432"
#  - "127.0.0.1:6379:6379"
  - "127.0.0.1:2221:22"

web_only.yml

expose:
  - "80:80"
  - "443:443"
  - "127.0.0.1:2222:22"

Can anyone tell me what’s going on?! I only want to expose http/https, not Postgres and Redis!

Never mind: it turns out a bootstrap doesn’t do what I expected it to do. You have to REBUILD the containers for this to work!
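In other words, the port mappings are fixed when a container is created, so after editing the expose sections the containers have to be destroyed and re-created. With the standard launcher that would be something like this (container names assumed):

```shell
cd /var/discourse

# rebuild destroys the container and re-creates it with the new port config;
# a bootstrap alone only rebuilds the image, not the running container
./launcher rebuild data
./launcher rebuild web_only
```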

If you’d like to have separate redis and postgres containers, you will need to omit the 2221 host-port part of that expose section (two containers can’t both bind the same host port) to be able to SSH into both containers.

You can use dynamic port allocation with docker like this:

Postgres container:

expose:
# - "5432:5432"
  - "127.0.0.1::22"

Redis container:

expose:
# - "6379:6379"
  - "127.0.0.1::22"
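Since no host port is given, Docker picks a free loopback port when the container starts; docker port tells you which one it chose (the container name postgres is an assumption):

```shell
# Ask Docker which host port was mapped to the container's port 22,
# then SSH to whatever address:port it reports (e.g. 127.0.0.1:49153)
docker port postgres 22
```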

A helpful page on this can be found here

I just thought I would document this here as this is a page I’ve kept coming back to whilst setting up multiple containers.

Thanks for the original topic @jspdng


Just in case anybody stumbles here for running several Discourse instances behind https://github.com/jwilder/nginx-proxy:

Set Discourse to listen on a localhost-only port, and tell the proxy to use the container’s internal port 80, like this:

expose:
  - "127.0.0.1:2836:80"

env:
  VIRTUAL_PORT: '80'

@sam Thinking about the above, pups syntax seems to be a little different from what we know from docker-compose.yml files, which we already liked when they were called fig files:

Where the term ports is used for the regular Docker NATting to the host system, the term expose can be used to offer services to other containers without forwarding anything to the host system.
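For comparison, a sketch of that distinction in docker-compose terms (service names are illustrative): ports publishes to the host, while expose only makes the port reachable from other containers:

```yaml
# docker-compose.yml (illustrative)
services:
  web:
    ports:
      - "80:80"      # published on the host: reachable from outside
  data:
    expose:
      - "5432"       # container-to-container only, nothing bound on the host
```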

Would it make sense to introduce this distinction to pups, too?


Edit: Escalated into an issue.

https://github.com/SamSaffron/pups/issues/14

Since nginx-proxy caught up with Docker Networking support, this has become relevant again.
One workaround can look like the following in your pups YAML:

expose:
#  - "80"
#  - "2222:22"

# any extra arguments for Docker?
docker_args: --net=your_desired_network --expose=80

where no ports are exposed (in pups’ language) to the host system, but port 80 is exposed (in Docker’s language) to the desired Docker network.

Redacted above, but in an earlier version I had mistakenly stated:

It’s said that Docker does not allow specifying multiple networks upon run.

One has to add a secondary network to a stopped container. Meanwhile, Discourse will only bootstrap if the database containers live on the default bridge network, so in my case I will have to run docker network connect load-balancer-network discourse-web-container.

It’s best to restart discourse-web-container afterwards, e.g. so that nginx-proxy catches up with the metadata changes.
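The workaround above as a command sequence (network and container names are the ones from my setup; substitute your own):

```shell
# Attach the web container to the load balancer's network as a
# secondary network, after it was bootstrapped on the default bridge
docker network connect load-balancer-network discourse-web-container

# Restart so nginx-proxy picks up the new network metadata
docker restart discourse-web-container
```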


Expose should be used to open a door for the internal service, so it will be there anyway.
Docker networks are good, but not everyone can easily upgrade to that Docker version. However, legacy links have worked great so far.

I am in favor of adding network support for those running the latest Docker, since it requires only the absolute minimum of network configuration. But if someone is using two web containers, it’s much harder to work with those…

Also see

https://github.com/discourse/discourse_docker/issues/257
