Self-crawling a Discourse site with a very big database gives frequent 502 errors

(Veer) #1

I am running Discourse with about 1 million topics and a 20 GB database, on 4 GB of RAM and 2 CPUs. When I crawl the site with a crawler, it starts returning error 502 after 30–40 page requests. What should I do?

Every new auto-update also causes 502 errors for admin accounts. Big DB problem.
(cpradio) #2

Sounds like you are hitting the rate limiter. There is a template in containers/templates that deals with rate limiting; you can try altering some of those settings to see if that resolves your problem. But be warned: that changes the settings for ALL requests, so you could be setting yourself up for a DDoS if everyone starts making numerous requests to your site at once.
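If you do go down that road, the overrides go in the env section of app.yml. A minimal sketch, assuming the variable names used by recent versions of templates/web.ratelimited.template.yml (check your own copy of that template, since the names and defaults vary between discourse_docker versions):

```yaml
# app.yml (excerpt) — hypothetical rate-limit overrides;
# verify names against your templates/web.ratelimited.template.yml
env:
  # requests allowed per IP per minute
  DISCOURSE_MAX_REQS_PER_IP_PER_MINUTE: 200
  # burst allowance per IP over a 10-second window
  DISCOURSE_MAX_REQS_PER_IP_PER_10_SECONDS: 50
```

After editing, the container has to be rebuilt (`./launcher rebuild app`) for the new limits to take effect.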

(Veer) #3

I am getting a Sidekiq error, even though I have 4 GB of RAM plus 6 GB of swap.

(cpradio) #4

I don’t understand what that has to do with a 502 error when requesting a page… Sidekiq isn’t used on page requests.

(Rafael dos Santos Silva) #5

Why are you crawling a site whose database you own? Especially one that is a bit under-provisioned?

(Veer) #6

First, I am not getting more than 200 pages indexed daily. Second, I get 502 errors several times when I visit my site at random times. Third, I wanted to check the load on the site. When I start crawling, the 502 errors start.

(cpradio) #7

Oh, I didn’t even catch that, as I assumed it was a typo. But that could very well explain the Sidekiq errors: those jobs run against the database, and if it is under-provisioned they are likely bombing out or timing out.

(cpradio) #8

Well, yeah, since you’re DDoSing your own server, you are straining it. For what you described in your initial post, you really need to boost your hardware. Discourse is not light on hardware; with 1 million topics and a 20 GB database, you need more than 4 GB of RAM. You are likely utilizing a LOT of your swap with that setup (if I had to guess).
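A quick way to verify that guess on the host, using standard Linux tools (the numbers shown will obviously differ per machine):

```shell
# Show RAM and swap usage in megabytes; the "Swap" line tells you
# how much of the configured swap is actually in use
free -m
```

If the container itself is the suspect, `docker stats --no-stream app` (assuming the default container name `app`) shows its live memory and CPU usage.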

(Veer) #9

My swap is not even getting filled; only about 1 MB of swap is being used.

(Lutz Biermann) #10

[quote="veer, post:1, topic:57598"]
What should I do?
[/quote]

  1. Have you tried backing up and re-importing the database after the migration? This can really help.

  2. Or try optimizing the database:

     cd /var/docker
     ./launcher enter app
     sudo -u postgres psql discourse
     VACUUM ANALYZE;
     \q

  3. Try commenting out the line with “templates/web.ratelimited.template.yml” in app.yml. (Must re-create the app.)

  4. You can try setting db_shared_buffers to 2GB, possibly 3GB, depending on the other services you are running. (Must re-create the app.)

  5. If you have low CPU load, you can carefully increase UNICORN_WORKERS. (Must re-create the app.)

  6. If nothing helps, get more RAM and more cores. I’d say 8 GB, ideally enough to load the entire database into RAM, and 4 physical cores with high single-thread performance.
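The app.yml-related suggestions above end up in two places in that file: the templates list and the tuning parameters. A sketch of the relevant excerpt, assuming a standard discourse_docker layout (the suggested values are starting points for this particular server, not universal defaults):

```yaml
# app.yml (excerpt)
templates:
  - "templates/postgres.template.yml"
  - "templates/web.template.yml"
  # - "templates/web.ratelimited.template.yml"  # comment out to disable rate limiting

params:
  # Postgres shared buffers: roughly 25% of RAM is a common rule of thumb
  db_shared_buffers: "2GB"

env:
  # raise carefully while watching CPU load; roughly 1-2 per core
  UNICORN_WORKERS: 4
```

Any of these changes only takes effect after re-creating the container, e.g. `cd /var/docker && ./launcher rebuild app`.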

(Veer) #11

Great, sir. This solved the issue and was very helpful.