Builds taking a very long time

My rebuilds are taking around 10 minutes; I think they used to be more like 5, so no big deal. What does the error message mean, though? I get something similar to the one in the original post above:

I, [2022-06-20T21:41:47.107238 #1]  INFO -- : > cd /var/www/discourse && [ ! -d 'node_modules' ] || su discourse -c 'yarn install --production && yarn cache clean'
warning "eslint-config-discourse > eslint-plugin-lodash@7.1.0" has unmet peer dependency "lodash@>=4".
warning " > @mixer/parallel-prettier@2.0.1" has unmet peer dependency "prettier@^2.0.0".

I’m also going to add more to this: I’m running a lean system (1 GB RAM) and a small site. It had 2 unicorn workers, and each was taking up about 30% of memory, which was causing a lot of memory thrashing, so I decided to cut the number from 2 to 1 (I believe each worker can handle about 10 concurrent connections). This made a HUGE difference: page loads are now almost instantaneous and swapping dropped by a factor of 5–10 (depending on what was being loaded).
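For reference, the worker count is controlled by the UNICORN_WORKERS setting in the container config. Here is a minimal sketch of how a unicorn config pins the count from that setting (illustrative only, not the exact contents of Discourse's config/unicorn.conf.rb):

# unicorn.conf.rb-style sketch (assumes the count comes from the UNICORN_WORKERS env var)
worker_processes (ENV["UNICORN_WORKERS"] || 2).to_i

# the master preloads the app once, then forks that many workers at startup
preload_app true
timeout 30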

The downside I see now is that I can no longer use browser upgrades to update Discourse. When I try to update via the browser I get:

ABORTING, you do not have enough unicorn workers running
Docker Manager: FAILED TO UPGRADE
#<RuntimeError: Not enough workers>

So just something to note; I’m not sure if this is something the Discourse team can figure out/address: doing browser upgrades with a single unicorn worker.


This seems like a bug, especially as the upgrade process itself temporarily cuts down to a single unicorn very shortly afterwards.

The number 2 is hardcoded, as is the number 1 for the reduction.

Edit: looks like this change introduced the inconsistency
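To make that concrete, here is a hypothetical sketch of the kind of guard being described (my own illustration with made-up names, not the actual docker_manager source):

# Hypothetical illustration only -- not the real docker_manager code.
REQUIRED_WORKERS = 2   # the hardcoded minimum mentioned above
REDUCED_WORKERS  = 1   # the hardcoded count the upgrade temporarily drops to

def check_workers!(current_workers)
  raise "Not enough workers" if current_workers < REQUIRED_WORKERS
  REDUCED_WORKERS
end

check_workers!(1)   # => RuntimeError: Not enough workers, which is what a single-worker setup hits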

I think your post (and this reply) should be in a new thread, in the bugs category.


How does this work?

One Unicorn to handle the upgrade process, whilst the remaining ones serve the ongoing requests?

And hence a minimum of 2 Unicorn workers is required to perform online upgrades …?

This is wrong. A single unicorn can only handle one request at a time, so while usable for small groups, it’s not something we would recommend for most sites.


@Falco I looked at data from other admins. My understanding is that each Unicorn forks a new process for each incoming connection. So while it’s technically one connection at a time, each unicorn can handle multiple concurrent users.

Going by the experience shared below, about 8 Unicorns can handle roughly 400 concurrent users.

Based on that, it appears each unicorn can handle about 50 concurrent users. I do know that RAM and other system resources limit how many forks can be made, etc., hence my assumption that 1 Unicorn worker can handle about 10 concurrent users on a low-RAM system (1 GB) at the low end.
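A quick back-of-the-envelope version of that arithmetic (the 10-users-per-worker figure for a 1 GB box is my own guess, not a measured number):

# Rough arithmetic behind the estimate above (illustrative only)
reported_users   = 400   # concurrent users from the experience shared below
reported_workers = 8
users_per_worker = reported_users / reported_workers   # => 50

low_end_guess = 10   # assumed capacity of 1 worker on a 1 GB host (assumption, not a benchmark)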

Are my assumptions and conclusions completely off base? If so, what range of concurrent users can each Unicorn handle depending on system resources (assume 1 GB at the low end, and whatever you feel is appropriate at the high end)?


There is a difference between concurrent user sessions and concurrent connections. A session is an online user, and they each make a request (connection) whenever they interact.

It does not. Unicorn forks into a set number of worker processes upon startup.
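A stripped-down illustration of that preforking model (a toy, not Unicorn's source): the master opens the listening socket, forks a fixed pool of workers once at startup, and each worker then accepts and handles one request at a time.

require "socket"

WORKER_COUNT = 2
server = TCPServer.new(9292)   # shared listening socket, opened before forking

WORKER_COUNT.times do
  fork do
    loop do
      client = server.accept                         # each worker blocks for its next connection
      client.puts "handled by worker #{Process.pid}"
      client.close                                   # one request at a time per worker
    end
  end
end

Process.waitall   # the master only supervises; no new workers are forked per connection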


And, I think we saw, each worker process runs 10 threads.