Note that this is an unsupported development environment install guide. The officially supported guides for macOS are here (native) and here (Docker). Proceed at your own risk.
The goal of this guide is to containerize postgres and redis but keep ruby out of the containers.
I gave the Discourse for Development using Docker approach a try, but it was just too slow on my machine.
Next I looked at the Discourse for Development on macOS guide. But the first thing that the script did was install `brew`. `brew` may very well be great, but I’ve been using MacPorts for a long time and would like to continue to successfully resist installing `brew`. Plus, that script was also doing global installs of things like postgresql and redis that I would prefer to have the ability to maintain on a per-project basis.
So here’s what worked for me using a combination of `asdf` and `docker-compose`. The result is a middle ground between the two approaches described above. postgres and redis run in a container managed by `docker-compose` so that they can be pinned to the official versions and configurations Discourse uses in production. Rails runs on metal. This combination is considerably snappier for me. YMMV.
If you want to follow along you’ll need both `asdf` and Docker installed on your machine. (OMG, `asdf` really is fantastic, though… you should definitely grab it if you have any interest in easily maintaining many different development environments. It replaces `rbenv`, `nvm`… seemingly nearly everything except `jenv`.)
If you look at what the macOS installation script was doing, you can separate what is being installed into three categories:

- command line environments and tools like `ruby` and `yarn`. We’ll install those and pin their versions to our project directory using `asdf`.
- services, specifically postgres and redis. We’ll install those using Docker Compose, again so that we can pin their versions to what we need for this project and also have a development environment that we can easily start and stop.
- other, mainly libraries for image manipulation and optimization like ImageMagick. These can be installed using `brew`, `port`, or directly from source.
We’re also going to need to lightly reconfigure our development environment to connect to the postgres server run by `docker-compose`.
Discourse Source
All the steps below should be done inside your Discourse source directory:
```shell
git clone https://github.com/discourse/discourse.git && cd discourse
```

This is important since this is where `asdf` will save its `.tool-versions` configuration file and where we’ll create our `docker-compose.yml` file for Docker.
asdf
There are three things we need to install using `asdf`: `ruby`, `yarn`, and `postgres`. Happily, `asdf` makes it easy to both install all three at once and pin the versions to our project directory. First, create `.tool-versions` with these contents:
```text
yarn 1.22.2
ruby 2.6.5
postgres 10.12
```
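If you prefer to create the file from the command line, here’s a one-liner sketch using a heredoc (same pinned versions as above; newer releases may also work):

```shell
# Write the .tool-versions file that asdf reads to pin tool versions
# to this directory. Run from the Discourse source directory.
cat > .tool-versions <<'EOF'
yarn 1.22.2
ruby 2.6.5
postgres 10.12
EOF

# Show what we just wrote
cat .tool-versions
```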
Next just run `asdf install`.
Now you should be able to do the Ruby library install steps included in the script and later in the directions:
```shell
gem update --system
gem install bundler
gem install rails
gem install mailcatcher
gem install pg -- --with-pg-config=$HOME/.asdf/installs/postgres/10.12/bin/pg_config
bundle install
```
You may need to adjust the path to `pg_config` depending on where you installed `asdf`.
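If you’d rather not hard-code that path, `asdf where` can locate the install for you. This is just a sketch; it falls back to the default `~/.asdf` location when the `asdf` command isn’t on your PATH:

```shell
# Resolve pg_config from the asdf postgres install, falling back to
# the default ~/.asdf install location if asdf isn't available.
PG_CONFIG="$HOME/.asdf/installs/postgres/10.12/bin/pg_config"
if command -v asdf >/dev/null 2>&1; then
  PG_CONFIG="$(asdf where postgres 10.12)/bin/pg_config"
fi
echo "pg_config: $PG_CONFIG"
# Then: gem install pg -- --with-pg-config="$PG_CONFIG"
```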
docker-compose.yml
Next we need to create our `docker-compose.yml` file, configured to start redis and postgres. Mine looks like this:
```yaml
version: "3"
networks:
  discourse:
    driver: bridge
services:
  data:
    image: "geoffreychallen/discourse_data:latest"
    command: /sbin/boot
    ports:
      - "5432:5432"
      - "6379:6379"
    volumes:
      - "data_shared:/shared/"
      - "data_logs:/var/log/"
    networks:
      - discourse
volumes:
  data_shared:
    driver: local
  data_logs:
    driver: local
```
Thanks to @pfaffman for the suggestion to use a standard Discourse data container. `geoffreychallen/discourse_data:latest` is built from Discourse Docker. I used the sample `data.yml` file with two changes. First, I set the password of the discourse user to be discourse. Second, I made that user a superuser so it can create testing databases. Here’s the `hooks` part of my `data.yml` file:
```yaml
hooks:
  after_postgres:
    - exec:
        stdin: |
          alter user discourse with password 'discourse';
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false
    - exec:
        stdin: |
          alter user "discourse" with superuser;
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false
```
Again, this is just in case you want to build your own Discourse data container and not use mine. Please don’t use this container in production—it’s completely insecure!
In this configuration we expose both the standard postgres and redis ports and run the boot command the container needs to start.
Once your `docker-compose.yml` is in place, take it for a spin:

```shell
docker-compose up
```
Assuming everything is configured properly you should see redis and postgres boot up. Control-C to cancel, or `docker-compose down` if for some reason something doesn’t shut down cleanly.
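For a quick sanity check that the mapped ports are actually reachable, this sketch probes them over TCP using bash’s `/dev/tcp` pseudo-device (it just reports closed if the stack isn’t running, rather than failing outright):

```shell
# Probe the postgres (5432) and redis (6379) ports that
# docker-compose maps to localhost. Prints open/closed per port.
for port in 5432 6379; do
  if (exec 3<>"/dev/tcp/localhost/$port") 2>/dev/null; then
    echo "port $port: open"
  else
    echo "port $port: closed"
  fi
done
```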
Miscellaneous Libraries
Most of the image optimization libraries can be installed using either `port` or `brew`. Here’s how to do it with `port`:

```shell
sudo port install imagemagick pngquant optipng jhead jpegoptim gifsicle
```
`svgo` can be installed once you have `npm`. I’m not going to cover that, since it’s pretty straightforward.
FWIW AFAICT none of these tools is required. I see warnings during various later steps about them being missing, but nothing seems to go boom.
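If you want to see which of these optional tools you already have, a quick PATH check works (here `convert` stands in for ImageMagick):

```shell
# Report which optional image tools are installed. Missing ones only
# produce warnings in later steps, so nothing here is fatal.
for tool in convert pngquant optipng jhead jpegoptim gifsicle svgo; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```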
`config/database.yml` and `spec/fixtures/multisite/two_dbs.yml`
Finally we need to lightly reconfigure our development environment to correctly connect to postgres. By default it tries to use a Unix socket, which is not exported by our container.
To fix this you need to modify `config/database.yml`. Essentially, everywhere you see:

```yaml
adapter: postgresql
```

replace it with:

```yaml
adapter: postgresql
host: localhost
username: discourse
password: discourse
```
The `host` addition causes Discourse to not use a socket, and the `username` and `password` cause Discourse to connect using the default Discourse database user and the password we set above.
I had to make this change three times in `config/database.yml`: once under `development`, next under `test`, and finally under `profile`. To get the test suite to work I also had to make a similar change in `spec/fixtures/multisite/two_dbs.yml`.
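For reference, here’s roughly what the `development` section ends up looking like after the edit. This is a sketch, not the full file: your copy will have additional keys, the database name comes from the stock config, and the `test` and `profile` sections get the same three lines:

```yaml
development:
  adapter: postgresql
  host: localhost
  username: discourse
  password: discourse
  database: discourse_development
  # ...remaining keys from the stock config/database.yml unchanged
```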
Here We Go…
Alright, let’s hit it! In one window bring up your development environment using `docker-compose`:

```shell
docker-compose up
```
In a second window, let’s run the database setup steps:

```shell
bundle exec rake db:create
```
Assuming that worked, you can now pick up at the appropriate spot in the `brew`-based macOS guide.
When you are done working, stop `docker-compose` and you can put away your development environment until next time.
If you want to permanently delete the database and redis contents, just run a `docker-compose down -v` to wipe the persistent volumes along with the containers themselves. But without the `-v` flag, `docker-compose down` will persist your database between development sessions.
Do the Tests Pass?
My setup failed two test cases:
```text
Failures:

  1) UploadCreator#create_for pngquant should apply pngquant to optimized images
     Failure/Error: expect(upload.filesize).to eq(9558)

       expected: 9558
            got: 9550

       (compared using ==)
     # ./spec/lib/upload_creator_spec.rb:115:in `block (4 levels) in <main>'

  2) tasks/uploads uploads:secure_upload_analyse_and_update when store is external when secure media is enabled rebakes the posts attached
     Failure/Error: expect(post1.reload.baked_at).not_to eq(post1_baked)

       expected: value != 2020-03-08 03:20:01.777117000 +0000
            got: 2020-03-08 03:20:01.777117000 +0000

       (compared using ==)

       Diff:
         <The diff is empty, are your objects producing identical `#inspect` output?>
     # ./spec/tasks/uploads_spec.rb:90:in `block (5 levels) in <main>'

Finished in 19 minutes 21 seconds (files took 13.67 seconds to load)
4297 examples, 2 failures, 11 pending
```
To me, the first looks like `pngquant` working a bit better than expected; not sure why that represents failure. The second I don’t understand either. But this seems sane to me.
Happy hacking!