Builds taking very long time

I know, but in this case I was talking about an “equality” of specifications, that is, without plugins too

I don’t think the build is “hanging” at those warnings. It’s just silently building and the warnings are spit out as part of the process.

I.e. the warnings or their underlying issue are not contributing to the long build time.


Where can we track this @Falco? Thank you for letting us know about this, I just stumbled into it and it is killing us hard here.


This is a gigantic change we are working on for years, and is getting to the final stages. During it we have a period where “things will get worse before they get better”, and this is one of the “worse” side effects of it.

As soon as we complete “New installs will default to Ember CLI builds in Production” for all existing sites and rip out the old asset pipeline, we can then aggressively start modernizing it and hopefully collect some upstream gains.

There is also the potential for us to allow people on slow CPUs to opt-out of source maps and other “nice to have” features to speed up their rebuilds.


Appreciate the update @Falco :heart: We’re on a quad-CPU Linode with 8GB RAM, which is normally a fantastic setup, but it is a nightmare now. We have a number of changes we were planning to make, but they will need to wait until deployment gets back to normal-ish speeds.

@Falco I’m also noticing for the last few releases that the server performance is degrading: it takes longer to load the sites and consumes more memory. There have been no changes to my setup in the past 2 years (plugins, hardware, etc.) and the number of active users on the site is also the same. Is there a way to objectively monitor site performance from within Discourse that we can then report back here? Right now the only way I know is to open the site and time the first load: it now takes more than 8 seconds, while with earlier builds it was always under 2-3 seconds.

What kind of rebuild times are you guys seeing? I just needed to rebuild due to an SMTP change, and it clocked in at just shy of 12 minutes for a TINY site (30 users, 400 posts).


This topic is about “build times”, not page load times. If you are talking about page response time degradation, please open a new topic about it with some data.


Last build this morning took about 20 minutes

I will thanks, after today’s update it’s now back to the normal 2-3 seconds to load the pages (nice surprise).

Ouch. That’s not normal.


I think I figured out why it’s taking so long to load pages. The shared database buffer size (`db_shared_buffers`) in app.yml was set equal to the total memory of the system. I reset it to the default (25% of RAM), rebuilt, and page loads are under a second now.
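For reference, a sketch of the relevant part of `containers/app.yml`; the `2GB` value is an assumed example for an 8GB host, not a universal recommendation (the rule of thumb above is roughly 25% of total RAM):

```yaml
# containers/app.yml (excerpt)
params:
  ## PostgreSQL shared buffers. Roughly 25% of total RAM is the usual default;
  ## setting this equal to total RAM starves everything else and causes thrashing.
  db_shared_buffers: "2GB"   # example value, assuming an 8GB host
```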


My rebuilds are taking about 10 minutes or so. I think they used to be more like 5. So no big deal. What does the error message mean though? I get something similar to the one in the original post above:

I, [2022-06-20T21:41:47.107238 #1]  INFO -- : > cd /var/www/discourse && [ ! -d 'node_modules' ] || su discourse -c 'yarn install --production && yarn cache clean'
warning "eslint-config-discourse > eslint-plugin-lodash@7.1.0" has unmet peer dependency "lodash@>=4".
warning " > @mixer/parallel-prettier@2.0.1" has unmet peer dependency "prettier@^2.0.0".

I’m also going to add more to this. I’m running a lean system (1GB RAM) and a small site. It had 2 unicorn workers, and each of them was taking up about 30% of memory, which was causing a lot of memory thrashing, so I decided to cut the worker count from 2 to 1 (I believe each worker can handle about 10 concurrent connections). This made a HUGE difference: page loads are almost instantaneous, and swapping dropped by a factor of 5-10 (depending on what was being loaded).
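For anyone wanting to try the same, the worker count is controlled from the `env` section of `containers/app.yml`; a minimal sketch, where `1` is the value used above (the default scales with available CPU/RAM):

```yaml
# containers/app.yml (excerpt)
env:
  ## Number of Unicorn web workers.
  ## Fewer workers means less RAM used, but also less request concurrency.
  UNICORN_WORKERS: 1
```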

The downside that I see now is that I can no longer use browser upgrades to update discourse. When I try to update via a browser I get

ABORTING, you do not have enough unicorn workers running
#<RuntimeError: Not enough workers>

So just something to note; I’m not sure if this is something the Discourse team can figure out/address: doing browser upgrades with a single unicorn worker.


This seems like a bug, especially as the system cuts down to a single unicorn temporarily very shortly afterwards.

The number 2 is hardcoded, as is the number 1 for the reduction.

Edit: looks like this change introduced the inconsistency

I think your post (and this reply) should be in a new topic, in the bugs category.


How does this work?

One Unicorn worker handles the upgrade process, while the remaining workers serve ongoing requests?

And hence a minimum of 2 Unicorn Workers are required to perform online upgrades …?

This is wrong. A single unicorn worker can only handle one request at a time, so while it’s usable for small groups, it’s not something we would recommend for most sites.

@Falco I looked at data from other admins. My understanding is that each Unicorn forks a new process for each incoming connection. So while it’s technically one connection at a time, each unicorn can handle multiple concurrent users.

Going by the experience shared below, about 8 Unicorns can handle approximately 400 concurrent users

Based on that, it appears that each unicorn can handle about 50 concurrent users. Now, I do know that RAM and system resources make a difference in the number of forks that can be done, etc., hence my assumption that 1 Unicorn worker can handle about 10 concurrent users on a low-RAM system (1GB) at the low end.

Are my assumptions + conclusions completely off base? If so, what would be a range of concurrent users that each Unicorn can handle depending on the system resources (assume 1GB at low end and anything you would feel appropriate as high end)?

There is a difference between concurrent user sessions and concurrent connections. A session is an online user; each session makes a request (connection) whenever the user interacts.

It does not. Unicorn forks into a set number of worker processes upon startup.

And, I think we saw, each worker process runs 10 threads.