This is an advanced setup. Don’t follow this unless you are experienced with Linux server administration and Docker.
Hello everyone!
If you don’t want to mess with the firewall rules and security of your server, you can configure your multi-container Docker setup with just links and no exposed ports!
This way you can share your data container (postgres/redis) with other containers without exposing it to the internet.
How to
Edit your data.yml file, commenting out the entire expose section:
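Something like this (a sketch; the ports listed in your data.yml may differ, the point is that every entry under expose ends up commented out):

```yaml
# which ports to expose on the host?
expose:
#  - "5432:5432"   # postgres
#  - "6379:6379"   # redis
#  - "2221:22"     # ssh
```

With nothing exposed, the data container is reachable only through its Docker links.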
Docker adds a host entry for the source container to the /etc/hosts file
So now the containers can communicate locally! Also:
If you restart the source container, the linked containers /etc/hosts files will be automatically updated with the source container’s new IP address, allowing linked communication to continue.
Additionally, you can expose ports to localhost only. This way you can still connect to the data container via SSH, but only from the server itself (again, without exposing the database to the internet).
How to:
Edit your data.yml file, adding localhost to the expose section for port 22:
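For example (a sketch; 2221 as the host-side SSH port is just a convention, any free port works):

```yaml
expose:
  - "127.0.0.1:2221:22"   # ssh, bound to localhost only
```

Prefixing the mapping with 127.0.0.1 tells Docker to bind the host port to the loopback interface, so only connections originating from the server itself can reach it.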
I can’t get this going at all for the life of me. No matter what I try, the Docker containers always expose the Postgres, Redis and SSH ports. It’s so annoying! I’ve tried commenting out the expose listings and rebooting everything six ways to Sunday. I’ve also tried running many bootstraps on each instance.
If you’d like to have separate redis and postgres containers, you will need to omit the 2221 host-port part of that expose section to be able to ssh into both containers (two containers can’t bind the same host port).
You can use dynamic port allocation with docker like this:
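A sketch of what that can look like, assuming you still want the loopback-only binding: leaving the host port empty lets Docker pick a free one per container.

```yaml
expose:
  - "127.0.0.1::22"   # ssh; Docker assigns a random free host port
```

`docker port <container-name>` then shows which host port each container was given.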
@sam Thinking about the above, pups syntax seems to be a little different from what we know from docker-compose.yml files (which we already liked back when they were called fig files):
Where the term ports is used for the regular Docker NATting to the host system, the term expose can be used to offer services inside a container without forwarding them to the host system.
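Roughly like this in a (fig-era) docker-compose.yml, as a sketch with made-up service names:

```yaml
web:
  image: nginx
  ports:
    - "8080:80"    # published (NATted) to the host
db:
  image: postgres
  expose:
    - "5432"       # reachable only from other containers, never from the host
```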
Would it make sense to introduce this distinction to pups, too?
Since nginx-proxy caught up with Docker networking support, this has become relevant again.
One workaround for this can look like the following in your pups YAML:
```yaml
expose:
#  - "80"
#  - "2222:22"

# any extra arguments for Docker?
docker_args: --net=your_desired_network --expose=80
```
Here nothing is published (in pups’ language, “exposed”) to ports of the host system; instead, port 80 is exposed (in Docker’s language) only to the desired Docker network.
Note that the secondary network has to be added while the container is stopped. Also, Discourse will only bootstrap if the database containers live on the default bridge network, so in my case I have to run `docker network connect load-balancer-network discourse-web-container`. It’s best to restart discourse-web-container afterwards, e.g. so that nginx-proxy catches up with the metadata changes.
Expose is meant to open a door to the internal service, so it will be there anyway.
Docker networking is good, but not everyone can easily upgrade to a Docker version that supports it. Legacy links, however, have worked great so far.
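For reference, the link-based setup in the web container’s YAML looks roughly like this (a sketch along the lines of the standard web_only.yml sample, assuming the data container is named data):

```yaml
links:
  - link:
      name: data
      alias: data
```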
I am in favor of adding network support for those running the latest Docker, since it is the absolute minimum network configuration. But if someone is running two web containers, it’s much harder to make those work…