The only solution I have found is to revert my Discourse installation to the previous version.
This does work for me on every container rebuild attempt:
This fails for me on every container rebuild attempt:
So I think it is definitely a Discourse bug: if the new Discourse release exceeds the rubygems.org rate limits, it is a Discourse problem, not a rubygems.org problem.
A possible solution for Discourse could be to implement some throttling in gem fetching so that it stays within the rubygems.org rate limits:
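A hedged sketch of what such throttling could look like (this is purely illustrative — `fetch_with_backoff` is a hypothetical helper, not part of Discourse or Bundler): on an HTTP 429 response, wait before retrying, honoring the server's `Retry-After` header when it is present.

```ruby
require "net/http"
require "uri"

# Illustrative only: a polite fetch loop that backs off when the server
# answers 429 Too Many Requests. The `http` callable is injectable so the
# sketch can be exercised without the network; the default hits the URL.
def fetch_with_backoff(uri, max_retries: 5, http: ->(u) { Net::HTTP.get_response(URI(u)) })
  attempts = 0
  loop do
    response = http.call(uri)
    return response unless response.code == "429"

    attempts += 1
    raise "still rate limited after #{max_retries} retries" if attempts > max_retries

    # Honor the server's Retry-After header, else back off exponentially.
    sleep((response["Retry-After"] || 2**attempts).to_i)
  end
end
```

The point is not the exact code but the behavior: a client that slows down when told to, instead of firing requests as fast as possible.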
No repro on my DO droplet; I can rebuild just fine without hitting the rubygems rate limit.
I think this is typical of timing issues: they depend on the environment (hardware, software, internet connection speed).
In particular, I am not using a low-end 1 GB DigitalOcean droplet but a dedicated Hetzner EX41-SSD server.
Also, I have some plugins installed (the same set in both test cases described above).
So the problem becomes: «how do I make my server slower?». Interestingly, this question leads to another ugly solution: run a computationally intensive task in parallel with
./launcher rebuild. It may help somebody.
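The workaround above could be sketched like this (an assumption-laden example, not a recommendation: it saturates the CPU with busy-loops so the rebuild, and therefore its gem requests, run more slowly; `/var/discourse` is the standard install path and may differ on your machine):

```shell
#!/usr/bin/env bash
# Ugly workaround sketch: keep every core busy during the rebuild.
for _ in $(seq "$(nproc)"); do
  yes > /dev/null &            # one busy-loop per core
done

# Run the rebuild while the CPU is loaded (skipped here if Discourse
# is not installed at this path).
if [ -x /var/discourse/launcher ]; then
  (cd /var/discourse && ./launcher rebuild app)
fi

kill $(jobs -p)                # stop the busy-loops afterwards
```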
But the true solution would be to implement throttling in the Discourse update script and/or in Discourse's virtual machine.
That won’t help. The logic to retrieve a bunch of gems is entirely within Rubygems. We ask, “hey can you install a bunch of gems for me?” by running
bundle install, which calls off to something inside Rubygems. That then makes a whole pile of requests, too quickly, and gets a 429, which it doesn't politely handle. We could retry that, but hammering away repeatedly at a service that's already said “you're going too fast” doesn't seem like the way to win friends and influence people.
Short of reimplementing rubygems/bundler inside the Discourse update script, there’s nothing that can be done outside of rubygems to fix this problem.
Also, why did you create a new topic for this, rather than continue on the existing topic?
Official response from the rubygems folks:
I conferred with the bundler folks about this issue and they told me it was a known issue with older versions of bundler. If your users update their bundler version, the issue should go away.
The errors are actually Fastly throttling individual users, which, as you might expect, means that those versions of Bundler are sending A LOT of requests.