chown -R discourse /var/www/discourse is very slow in ./launcher bootstrap app


I ran ./launcher bootstrap app, and the process chown -R discourse /var/www/discourse has now been running for the last 10 minutes. Is this normal, or is there something wrong with my VM?


Someone from our tech team disabled Ceph journaling for the Ceph RBD image we’re using for this VM. That sped things up, but he also suggested I use eatmydata to disable fsync, since the fsync calls are apparently what’s slowing the process down. Would the Discourse team consider not using fsync for chown?

I’m not sure; what do you think @mpalmer?

Usually complaints along these lines have to do with extraordinarily slow disk speeds.

The eatmydata tool is very well named – if it is used indiscriminately, it can definitely cause problems, so there’s no way we’d use it by default. Given that this problem is, indeed, only caused by achingly slow disks (and yes, Ceph RBD was, is, and probably always will be slow as molasses in a blizzard), I don’t really see the value in adding the complexity and support burden of an option to use the eatmydata wrapper. The first time someone misuses it and blows their foot off, they’ll yell at us. As far as I’m concerned, if you’re able to competently assess the risks and benefits of using eatmydata, you’re competent enough to be able to figure out how to wedge it into the build process yourself.


I replaced the chown with a find, so it doesn’t indiscriminately execute a write for every file, only for files that actually need their ownership updated. It made a big difference for rebuilds on my slow ZFS pool.

Since this post I’ve found a few other places that execute large chowns and changed them too (postgres.yml has one).
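For anyone curious what that change looks like in practice, here’s a sketch of the idea. The exact launcher template code isn’t shown in this thread, so the find/chown invocation below is my reconstruction of the described approach, and the temp-dir demo just illustrates that already-correct files trigger zero writes:

```shell
# Idea: instead of an unconditional recursive chown, only touch files
# whose owner is actually wrong. In launcher terms (reconstruction):
#
#   chown -R discourse /var/www/discourse          # rewrites every inode
#   find /var/www/discourse ! -user discourse \
#        -exec chown discourse {} +                # writes only mismatches
#
# Self-contained demo with a throwaway directory (no root needed):
set -eu
me=$(id -un)
dir=$(mktemp -d)
touch "$dir/a" "$dir/b" "$dir/c"

# Every file already belongs to $me, so find matches nothing and
# chown is never invoked -- zero ownership writes on a rebuild.
find "$dir" ! -user "$me" -exec chown "$me" {} +

matches=$(find "$dir" ! -user "$me" | wc -l)
echo "files needing chown: $matches"
rm -rf "$dir"
```

On a rebuild where ownership is already correct, the find matches nothing, so no metadata writes (and no fsyncs of those writes) ever hit the slow disk.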


This seems promising, @mpalmer – it’s the same rule as in SQL: make sure your REPLACE clause checks that it’s not blindly writing every row when the value is already there.

Not doing (redundant) work is infinitely faster than doing it. I’d be a fan of this change. Can you submit a PR, @Cameron_D?