The goal of this guide is to containerize postgres and redis but keep ruby out of the containers.
I gave the Discourse for Development using Docker approach a try, but it was just too slow on my machine.
Next I looked at the Discourse for Development on macOS guide. But the first thing that the script did was install `brew`. `brew` may very well be great, but I’ve been using MacPorts for a long time and would like to continue to successfully resist installing `brew`. Plus, that script was also doing global installs of things like postgresql and redis that I would prefer to maintain on a per-project basis.
So here’s what worked for me using a combination of `asdf` and `docker-compose`. The result is a middle ground between the two approaches described above. postgres and redis run in a container via `docker-compose` so that they can be pinned to the official versions Discourse uses in production. Rails runs on metal. This combination is considerably snappier for me. YMMV.
If you want to follow along you’ll need both
`asdf` and Docker installed on your machine. (OMG `asdf` really is fantastic though… you should definitely grab it if you have any interest in easily maintaining many different development environments. It replaces `nvm` and, seemingly, nearly everything else.)
If you look at what the macOS installation script was doing, you can separate what is being installed into three categories:
- command line environments and tools like `yarn`. We’ll install those and pin their versions to our project directory using `asdf`.
- services: specifically postgres and redis. We’ll install those using Docker Compose, again so that we can pin their versions to what we need for this project and also have a development environment that we can easily start and stop.
- other: mainly libraries for image manipulation and optimization like ImageMagick. These can be installed using either `brew` or `port`, or directly from source.
We’re also going to need to lightly reconfigure our development environment to connect to the postgres server run by `docker-compose`.
All the steps below should be done inside your Discourse source directory:
```shell
git clone https://github.com/discourse/discourse.git && cd discourse
```
This is important since this is where
asdf will save its
.tool-versions configuration file and where we’ll create our
docker-compose.yml file for Docker.
There are three things we need to install using `asdf`, which makes it easy to both install them all at once and pin the versions to our project directory. First, create `.tool-versions` with these contents:
```
yarn 1.22.2
ruby 2.6.5
postgres 10.12
```
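If you ever want to script against this file (for example, to double-check what versions are pinned), the `.tool-versions` format is just a tool name and its version per line. Here’s a minimal parser as an illustration; the `parse_tool_versions` helper is my own, not part of `asdf`:

```ruby
# Parse .tool-versions text into a { tool => version } hash.
# One "tool version" pair per line; blank lines and comment lines
# (starting with #) are skipped.
def parse_tool_versions(text)
  text.each_line.with_object({}) do |line, versions|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    tool, version = line.split(/\s+/, 2)
    versions[tool] = version
  end
end

puts parse_tool_versions(File.read(".tool-versions")).inspect if File.exist?(".tool-versions")
```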
Next just run `asdf install`.
Now you should be able to do the Ruby library install steps included in the script and later in the directions:
```shell
gem update --system
gem install bundler
gem install rails
gem install mailcatcher
gem install pg -- --with-pg-config=$HOME/.asdf/installs/postgres/10.12/bin/pg_config
bundle install
```
You may need to adjust the path to `pg_config` depending on where `asdf` installed postgres.
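If that path doesn’t match your machine, you can hunt for `pg_config` under the `asdf` installs directory. This is just a convenience sketch of mine; the `~/.asdf/installs/postgres/<version>/bin` layout is assumed from the command above, and `find_pg_config` is a hypothetical helper, not an `asdf` feature:

```ruby
require "pathname"

# Find pg_config under an asdf-managed postgres install tree.
# Assumed layout: <root>/<version>/bin/pg_config
def find_pg_config(root = File.expand_path("~/.asdf/installs/postgres"))
  Pathname.glob(File.join(root, "*", "bin", "pg_config")).min
end

path = find_pg_config
puts path ? "Found pg_config at #{path}" : "No pg_config under ~/.asdf/installs/postgres"
```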
Next we need to create our
docker-compose.yml file configured to start redis and postgres. Mine looks like this:
```yaml
version: "3"

networks:
  discourse:
    driver: bridge

services:
  data:
    image: "geoffreychallen/discourse_data:latest"
    command: /sbin/boot
    ports:
      - "5432:5432"
      - "6379:6379"
    volumes:
      - "data_shared:/shared/"
      - "data_logs:/var/log/"
    networks:
      - discourse

volumes:
  data_shared:
    driver: local
  data_logs:
    driver: local
```
Thanks to @pfaffman for the suggestion to use a standard Discourse data container.
geoffreychallen/discourse_data:latest is built from Discourse Docker. I used the sample
`data.yml` file with two changes. First I set the password of the discourse user to be discourse. Second I made that user a superuser so it can create testing databases. Here’s the `hooks` part of my `data.yml`:
```yaml
hooks:
  after_postgres:
    - exec:
        stdin: |
          alter user discourse with password 'discourse';
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false
    - exec:
        stdin: |
          alter user "discourse" with superuser;
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false
```
Again, this is just in case you want to build your own Discourse data container and not use mine. Please don’t use this container in production—it’s completely insecure!
In this configuration we expose both the standard postgres and redis ports and run the boot command the container needs to start.
Once your `docker-compose.yml` is in place, take it for a spin with `docker-compose up`.
Assuming everything is configured properly you should see redis and postgres boot up. Control-C to cancel or
docker-compose down if for some reason something doesn’t shut down cleanly.
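If you want to confirm that both services are actually reachable without firing up Rails yet, a quick TCP probe does the trick. This is my own convenience sketch, not part of the guide; the `port_open?` helper is hypothetical and the ports match the `docker-compose.yml` above:

```ruby
require "socket"

# Return true if something is accepting TCP connections on host:port.
def port_open?(host, port, timeout: 2)
  Socket.tcp(host, port, connect_timeout: timeout) { true }
rescue StandardError
  false
end

{ "postgres" => 5432, "redis" => 6379 }.each do |service, port|
  puts "#{service} (localhost:#{port}): #{port_open?('localhost', port) ? 'up' : 'down'}"
end
```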
Most of the image optimization libraries can be installed using either `port` or `brew`. Here’s how to do it with `port`:
```shell
sudo port install imagemagick pngquant optipng jhead jpegoptim gifsicle
```
svgo can be installed once you have
npm. I’m not going to cover that, since it’s pretty straightforward.
FWIW AFAICT none of these tools is required. I see warnings during various later steps about them being missing, but nothing seems to go boom.
Finally we need to lightly reconfigure our development environment to correctly connect to postgres. By default it tries to use a Unix socket, which is not exported by our container.
To fix this you need to modify
`config/database.yml`. Essentially, everywhere you see `adapter: postgresql`, replace it with:
```yaml
adapter: postgresql
host: localhost
username: discourse
password: discourse
```
The `host` addition causes Discourse to not use a socket, and the `password` causes Discourse to connect using the default Discourse database user and the password we set above.
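To sanity-check the edit, you can parse the stanza and confirm the keys Rails will hand to the pg adapter. The stanza below is a hypothetical, trimmed-down reconstruction of the development section (Discourse’s real `database.yml` has more keys):

```ruby
require "yaml"

# A hypothetical minimal development stanza after the edit above.
stanza = YAML.safe_load(<<~YML)
  development:
    adapter: postgresql
    host: localhost
    username: discourse
    password: discourse
YML

dev = stanza.fetch("development")
# With "host" set, the adapter connects over TCP instead of the Unix socket.
puts "Will connect via TCP to #{dev['host']} as #{dev['username']}"
```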
I had to make this change three times in `config/database.yml`: once under `development`, next under `test`, and finally under `profile`. To get the test suite to work I also had to make a similar change in one more spot.
Alright, let’s hit it! In one window bring up your development environment using `docker-compose up`.
In a second window let’s run the database setup steps:
```shell
bundle exec rake db:create
```
Assuming that worked you can now pick up at the appropriate spot in the
brew-based macOS guide.
When you are done working, stop
docker-compose and you can put away your development environment until next time.
If you want to permanently delete the database and redis contents, just run `docker-compose down -v` to wipe the persistent volumes along with the containers themselves. But without the `-v` flag, `docker-compose down` will preserve your database between development sessions.
My setup failed two test cases:
```
Failures:

  1) UploadCreator#create_for pngquant should apply pngquant to optimized images
     Failure/Error: expect(upload.filesize).to eq(9558)

       expected: 9558
            got: 9550

       (compared using ==)
     # ./spec/lib/upload_creator_spec.rb:115:in `block (4 levels) in <main>'

  2) tasks/uploads uploads:secure_upload_analyse_and_update when store is external when secure media is enabled rebakes the posts attached
     Failure/Error: expect(post1.reload.baked_at).not_to eq(post1_baked)

       expected: value != 2020-03-08 03:20:01.777117000 +0000
            got: 2020-03-08 03:20:01.777117000 +0000

       (compared using ==)

       Diff:
         <The diff is empty, are your objects producing identical `#inspect` output?>
     # ./spec/tasks/uploads_spec.rb:90:in `block (5 levels) in <main>'

Finished in 19 minutes 21 seconds (files took 13.67 seconds to load)
4297 examples, 2 failures, 11 pending
```
To me the first looks like
pngquant is working a bit better than expected. Not sure why that represents failure. The second I don’t understand either. But this seems sane to me.