Installing Discourse for macOS Development Using asdf and docker-compose

Note that this is an unsupported development environment install guide. The officially supported guides for macOS are here (native) and here (Docker). Proceed at your own risk.

I gave the Discourse for Development using Docker approach a try, but it was just too slow on my machine.

Next I looked at the Discourse for Development on macOS guide. But the first thing that the script did was install brew. brew may very well be great, but I’ve been using MacPorts for a long time and would like to continue to successfully resist installing brew. Plus, that script was also doing global installs of things like postgresql and redis that I would prefer to have the ability to maintain on a per-project basis.

So here’s what worked for me using a combination of asdf and docker-compose. The result is a middle ground between the two approaches described above. postgres and redis are run in a container using docker-compose so that they can be pinned to the official versions and installation that Discourse uses in production. Rails runs on metal. This combination is considerably snappier for me. YMMV.

If you want to follow along you’ll need both asdf and Docker installed on your machine. (OMG asdf really is fantastic though… you should definitely grab it if you have any interest in easily maintaining many different development environments. It replaces renv, nvm… seemingly nearly everything except jenv.)

If you look at what the macOS installation script was doing, you can separate what is being installed into three categories:

  • command line environments and tools like ruby and yarn. We’ll install those and pin their versions to our project directory using asdf.
  • services—specifically postgres and redis. We’ll install those using Docker compose, again so that we can pin their versions to what we need for this project and also have a development environment that we can easily start and stop.
  • other—mainly libraries and tools for image manipulation and optimization, like ImageMagick. These can be installed using brew, port, or directly from source.

We’re also going to need to lightly reconfigure our development environment to connect to the postgres server run by docker-compose.

Discourse Source

All the steps below should be done inside your Discourse source directory:

git clone https://github.com/discourse/discourse.git && cd discourse

This is important since this is where asdf will save its .tool-versions configuration file and where we’ll create our docker-compose.yml file for Docker.

asdf

There are three things we need to install using asdf: ruby, yarn, and postgres. Happily, asdf makes it easy to both install all at once and pin the versions to our project directory. First, create .tool-versions with these contents:

yarn 1.22.2
ruby 2.6.5
postgres 10.12
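
If you haven’t used these tools with asdf before, you’ll also need the corresponding plugins installed first. Here’s a minimal sketch, assuming the standard asdf plugin names for ruby, yarn, and postgres:

asdf plugin add ruby
asdf plugin add yarn
asdf plugin add postgres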

Next just run asdf install.

Now you should be able to do the Ruby library install steps included in the script and later in the directions:

gem update --system
gem install bundler
gem install rails
gem install mailcatcher
gem install pg -- --with-pg-config=$HOME/.asdf/installs/postgres/10.12/bin/pg_config
bundle install

You may need to adjust the path to pg_config depending on where you installed asdf.
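
If you’re not sure where asdf put postgres, it can print the install path for you (a quick sketch, assuming a reasonably recent asdf):

asdf where postgres 10.12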

docker-compose.yml

Next we need to create our docker-compose.yml file configured to start redis and postgres. Mine looks like this:

version: "3"
networks:
  discourse:
    driver: bridge
services:
  data:
    image: "geoffreychallen/discourse_data:latest"
    command: /sbin/boot
    ports:
      - "5432:5432"
      - "6379:6379"
    volumes:
      - "data_shared:/shared/"
      - "data_logs:/var/log/"
    networks:
      - discourse
volumes:
  data_shared:
    driver: local
  data_logs:
    driver: local

Thanks to @pfaffman for the suggestion to use a standard Discourse data container. geoffreychallen/discourse_data:latest is built from Discourse Docker. I used the sample data.yml file with two changes. First I set the password of the discourse user to be discourse. Second I made that user a superuser so it can create testing databases. Here’s the hooks part of my data.yml file:

hooks:
  after_postgres:
    - exec:
        stdin: |
          alter user discourse with password 'discourse';
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false
    - exec:
        stdin: |
          alter user "discourse" with superuser;
        cmd: sudo -u postgres psql discourse
        raise_on_fail: false

Again, this is just in case you want to build your own Discourse data container and not use mine. Please don’t use this container in production—it’s completely insecure!
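
If you do decide to build your own, the rough shape (just a sketch, not a full walkthrough) is to clone discourse/discourse_docker, drop your data.yml into its containers/ directory, and rebuild the data container:

./launcher rebuild data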

In this configuration we expose both the standard postgres and redis ports and run the boot command the container needs to start.

Once your docker-compose.yml is in place, take it for a spin:

docker-compose up

Assuming everything is configured properly you should see redis and postgres boot up. Control-C to cancel or docker-compose down if for some reason something doesn’t shut down cleanly.
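
As a quick sanity check (a sketch; this assumes the asdf-installed postgres client is on your PATH), you can connect to the containerized database with the credentials from data.yml:

psql -h localhost -U discourse discourse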

Miscellaneous Libraries

Most of the image optimization libraries can be installed using either port or brew. Here’s how to do it with port:

sudo port install imagemagick pngquant optipng jhead jpegoptim gifsicle

svgo can be installed once you have npm. I’m not going to cover that, since it’s pretty straightforward.
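
If you do want it, with npm available it’s roughly a one-liner (installing globally is just one option):

npm install -g svgo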

FWIW AFAICT none of these tools is required. I see warnings during various later steps about them being missing, but nothing seems to go boom.

config/database.yml and spec/fixtures/multisite/two_dbs.yml

Finally we need to lightly reconfigure our development environment to correctly connect to postgres. By default it tries to use a Unix socket, which is not exported by our container.

To fix this you need to modify config/database.yml. Essentially everywhere you see:

adapter: postgresql

Replace it with:

adapter: postgresql
host: localhost
username: discourse
password: discourse

The host addition causes Discourse to not use a socket, and the username and password cause Discourse to connect using the default Discourse database user and the password we set above.

I had to make this change three times in config/database.yml: once under development, next under test, and finally under profile. To get the test suite to work I also had to make a similar change in spec/fixtures/multisite/two_dbs.yml.
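
For reference, here’s a rough sketch of what the development stanza might end up looking like. Only the host, username, and password lines are new; keep whatever other keys your checkout’s config/database.yml already has:

development:
  adapter: postgresql
  host: localhost       # connect over TCP instead of the Unix socket
  username: discourse   # the default Discourse database user
  password: discourse   # the password set in data.yml above
  # ...leave any other existing keys as they are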

Here We Go…

Alright let’s hit it! In one window bring up your development environment using docker-compose:

docker-compose up

In a second window let’s run the database setup steps:

bundle exec rake db:create

Assuming that worked you can now pick up at the appropriate spot in the brew-based macOS guide.
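
If it helps, the remaining steps from that guide are roughly the following (double-check the official guide for the current commands):

bundle exec rake db:migrate
bundle exec rails server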

When you are done working, stop docker-compose and you can put away your development environment until next time.

If you want to permanently delete the database and redis contents, just run a docker-compose down -v to wipe the persistent volumes along with the containers themselves. But without the -v flag docker-compose down will persist your database between development sessions.

Do the Tests Pass?

My setup failed two test cases:

Failures:

  1) UploadCreator#create_for pngquant should apply pngquant to optimized images
     Failure/Error: expect(upload.filesize).to eq(9558)

       expected: 9558
            got: 9550

       (compared using ==)
     # ./spec/lib/upload_creator_spec.rb:115:in `block (4 levels) in <main>'

  2) tasks/uploads uploads:secure_upload_analyse_and_update when store is external when secure media is enabled rebakes the posts attached
     Failure/Error: expect(post1.reload.baked_at).not_to eq(post1_baked)

       expected: value != 2020-03-08 03:20:01.777117000 +0000
            got: 2020-03-08 03:20:01.777117000 +0000

       (compared using ==)

       Diff:
         <The diff is empty, are your objects producing identical `#inspect` output?>
     # ./spec/tasks/uploads_spec.rb:90:in `block (5 levels) in <main>'

Finished in 19 minutes 21 seconds (files took 13.67 seconds to load)
4297 examples, 2 failures, 11 pending

To me the first looks like pngquant is working a bit better than expected. Not sure why that represents failure. The second I don’t understand either. But this seems sane to me.

Happy hacking!

Can we improve the official Discourse docker dev image instead of taking another path?

I would not rely on Bitnami images for future postgresql/redis upgrades.

I don’t think that it’s the image that’s the problem. The guide warns that it is “much slower than a native install on macOS” and I found that to be true. My guide is really designed more as an alternative to the development on macOS guide, one that tries to avoid global installs whenever possible.

Can you elaborate on this comment? Bitnami already has containers available for postgresql 12 and their container repositories are actively updated. There are alternatives to Bitnami, but I’m just not sure where your FUD is coming from…?

Any use of Bitnami is completely unsupported here. This isn’t Fear, Uncertainty and Doubt, it comes from several years of users coming here asking for help when their Bitnami installations of Discourse run into problems. Any Bitnami support topics are #unsupported-install.

As we routinely see issues with those packages, and as they are created and maintained by a third party, we can’t offer any assistance when said problems arise. No third party packages receive support.

You’re going to need a representative dev environment to ensure that things will work in a supported live install; Bitnami isn’t the way.

But this isn’t a Bitnami installation of Discourse—it’s a native installation of Discourse that uses Bitnami containers to provide postgres and redis services.

Can you point out where some of the differences might arise? For example, is Discourse using a custom postgres or redis configuration that could cause different behavior? If so I didn’t see any indication of that in the previous macOS setup guide, although perhaps I missed it. Nor are any such differences reflected by the test suites, which all run fine in this environment.

There’s just no reason not to use the postgres and redis installation provided by Discourse.

Can you explain what you mean by this?

In a development setup for macOS brew is used to install postgres and redis for Discourse to use. So what is the postgres provided by Discourse?

I’m struggling to see a difference between doing that and using a container that has the exact same version running inside. Except the fact that I don’t have to perform a global install of those services and can easily stop and start that development environment as needed.

Right, but for reasons, you want to use a docker-based postgres and redis instead

Correct, yes.

I’m a bit confused about why I’m getting so much pushback here. I explained my reasoning above. I’m sure that the other approaches to setting up development environments have worked for many and will continue to. But using Docker compose to install the services needed for a specific development environment isn’t, like, a super weird thing to do—it’s actually exactly what docker-compose was designed to do.

By using a docker-compose.yml file you can pin the postgres and redis versions so that all developers are using the exact same version, rather than just whatever brew happens to install on that particular day. Plus, as I’ve already pointed out, other people may need other versions of postgres or redis for other projects. Or they may just want an easy way to start and stop their development environment when needed, so that they don’t have a copy of postgres and redis running all the time in the background.

Again, this just isn’t a weird thing to do. My goal was to just share some helpful tips in case others want to set up a development environment in a similar way. YMMV, but I don’t think that I’m telling people to do anything dangerous here. I figured at least a few other people might prefer an approach that doesn’t require a completely new package manager or modify global settings (like changing the default ruby version).

Thanks! Is this configuration system used during development? I was under the impression that it was not.

It’s because you’re introducing yet another way to “solve” the “how to set up a decent development environment” problem. Who’s going to support it? Are you going to answer all questions that anyone on the planet asks about it? Are you going to continually upgrade your stuff so that it always matches all of the versions for everything?

Using PG and Redis in a container seems like a pretty good idea to me, but if you’re going to do that, why not do it the way that is already supported?

By using the template that Discourse uses, you can pin Postgres and Redis versions so that everyone using Discourse is using the same version.

And, you can’t use docker-compose to do much else with Discourse, so it’s a tool that lots of people who come to Docker because of Discourse are likely not to be familiar with.

Correct, after pointing out quite reasonable reservations about the other two approaches: one being far too slow, and the second not exactly following best practices for isolating different versions of tools and services from each other.

I’d be happy to try :slight_smile:! Bring 'em on…

I can prepare a PR to add the .tool-versions and docker-compose.yml file to the repository. There are really only four versions to maintain: ruby, yarn, postgres, and redis. I’ll assume that you have the supported versions or version ranges of those tools noted somewhere in the repository, and so it will just be a matter of keeping .tool-versions and docker-compose.yml synchronized across major updates.

The biggest thing that would need support from your end would be the changes to config/database.yml, although there is already precedent in those files for using environment variables to override certain database connection options.

Again—running everything in a container proved to be very, very slow on my machine. This way the core Discourse rails process runs natively. I’ve been using this for plugin development for the past 24 hours and it is much more responsive. Meaning that I actually got some work done :slight_smile:.

IIRC Docker compose comes bundled with Docker. It’s not a hard tool to use: docker-compose up when you start and Control-C when you are done. You could easily add this to a script similar to d/unicorn if you wanted.
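
For example, a tiny wrapper along these lines would do (the d/data name is hypothetical, just for illustration):

#!/usr/bin/env bash
# d/data: bring up the postgres/redis container for development
cd "$(dirname "$0")/.." && exec docker-compose up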

Keep in mind that the alternative native install on macOS has users run a shell script that uses brew to install whatever current versions of postgres and redis happen to be available at the moment, and then also starts them for good measure, where they’ll run in the background forever after I (immediately) forget about them or don’t even notice what the setup script did. So the alternative to Docker compose is remembering to run both brew services (start|stop) postgres and brew services (start|stop) redis. To me that seems equivalent to or worse than using docker-compose.

Right, so it does make total sense to me to use the template I linked to run PG and Redis in the very same container that thousands of production Discourse sites are using. My recommendation is that if you’re going to run PG and Redis in a container, to use the one that Discourse provides. The further you step away from the Official way of doing things the more chances there are for hard-to-guess edge cases.

Fair point!

FWIW, I don’t use a Mac so my comments are fairly high-level.

I get that.

However, I would like to point out (again) that your supported macOS development install does not run postgres or redis in a container. It doesn’t even pin the versions of these services installed by brew, which I suspect is possible. What, if anything, is the difference between that setup and mine? I feel like here you’re holding my approach to a higher standard than your official macOS development instructions, which again can end up installing any old version of postgres and redis and AFAIK also doesn’t use the configuration fragment you linked to.

I guess I’m not sure what to do with this template. AFAIK that snippet is merged with a bunch of others and used as part of a fairly involved process that builds a container that includes everything Discourse needs to run: including the parts I want to run natively.

Is there a way to build an official Discourse container that only contains postgres and redis? If so that would be great!

At some point I’d be happy to experiment with a setup using the official Discourse container. I bet that it would be possible to get that to work… although some tweaking might be required. (For example: Discourse seems to want to connect to postgres by default using a named socket, rather than over a port. So is the database in the container even listening on the default port? I’m not sure.)

My goal here was to be helpful and try to provide an alternative development path that might appeal to some, but not all. I’ve made it very clear how my instructions differ from the standard install process and why. All of the test cases pass in my environment. My goal with the original post was to offer something helpful to the community, but after the amount of sniping it’s received I doubt that it’s accomplished that.

Yes. Edit the file that I linked and run

./launcher rebuild data

This is my biggest issue here.

Our docker dev setup is slow because Docker volume performance is bad on Mac

So if somehow your docker setup is fast I want to know exactly why, because the exact same fix can be applied to the official setup.

Not to add more confusion, but I’ve been using a non-docker setup by installing all the deps manually on my macOS system and haven’t had to look back. I have been doing discourse development almost every day for the past 9 months or so.

Your rails application does a lot of file IO, particularly on startup—logging, rebuilding assets, etc. That may be much faster done to a native disk than into a docker container, regardless of whether the volume is shared or not.

Yep! This definitely works, and you should stick to whatever is working for you :slight_smile:.

This was a great suggestion! I have this working and have updated the instructions above.

It might clarify a lot of the confusion here if this is highlighted in the topic: this setup only runs postgres and redis in docker, and Rails itself is run natively on macOS.

This is the biggest difference vs d/boot_dev, and would explain the performance difference.

I personally don’t think it is a huge deal what images (Bitnami or not) are used for redis and postgres in development, unless it could lead to discrepancies in behaviour…

In short,
