So we upgraded our CentOS 7 server from 2.2.2 to 2.7.0.beta4, and since then we have been facing latency in page loading, especially on pages that involve database queries or images, to the point where the site has become unusable.
Any guidance in this regard would be much appreciated.
A bunch of things happened in the past few years. There was a big change that requires reprocessing all of the images, and I suspect your server is slammed doing that work. You can have a look at /sidekiq to see the queue.
How big is your database? How many images? What does Sidekiq show? You’re using SSD, right?
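If it helps, here’s a rough way to pull those numbers, assuming a standard Docker-based install under /var/discourse (the paths and the `discourse` database name are the standard-install defaults; adjust if yours differ):

```bash
cd /var/discourse
./launcher enter app

# Database size ("discourse" is the database name in a standard install):
su postgres -c "psql -c \"SELECT pg_size_pretty(pg_database_size('discourse'));\""

# Rough count of uploaded images/files:
su postgres -c "psql discourse -c 'SELECT count(*) FROM uploads;'"
```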
It’s a VM-based server, so I’m not sure whether it’s on SSD or not.
I don’t see Sidekiq accessible; since this deployment wasn’t done by me, I’m not sure how to access it.
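You can check the disk from inside the guest with something like the following (device names vary, and in a VM the kernel only sees the virtual disk, so the rotational flag may not reflect the host’s physical storage; the quick write test gives a better signal):

```bash
# ROTA column: 0 = kernel treats the disk as SSD-like, 1 = rotational.
# In a VM this describes the virtual disk, not necessarily the host's storage.
lsblk -d -o NAME,ROTA

# Rough direct-I/O write test (assumes ~1 GB free in /tmp):
dd if=/dev/zero of=/tmp/ddtest bs=1M count=1024 oflag=direct status=progress
rm /tmp/ddtest
```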
Your best path forward is to investigate why performance is down. There are a lot of background jobs added over the years (image optimization, rebaking, etc.) that are probably now running and using your server resources. Once those complete, performance should improve.
Accessing /sidekiq (using an admin account!) to discover what jobs are running is a great first step.
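If the web UI itself is sluggish, the same numbers can be pulled from the command line; a minimal sketch, again assuming the standard Docker install:

```bash
cd /var/discourse
./launcher enter app

# Print per-queue sizes and the overall enqueued count via the Sidekiq API:
rails runner 'stats = Sidekiq::Stats.new; puts stats.queues.inspect; puts "enqueued: #{stats.enqueued}"'
```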
Ok, so I was able to access Sidekiq. Can you guys help me understand this and suggest any optimizations? I’m in quite a fix here due to these performance issues.
The behaviour I see is that the server keeps showing an empty queue. Even when I open a post, expecting to see a job listed, the Sidekiq portal jams up while the post is loading and only refreshes once the post has fully loaded.
And once it refreshes, it again shows an empty queue. Any help/suggestions would be highly appreciated.
The average number of users logged in at the same time is not very high, maybe 5-10 at most.
This server was set up recently and is shared; it has 8 GB of RAM and 10 GB of swap space, and has been up for 13 days at present. But the performance issues persist regardless of reboots and uptime.
I can’t remember whether it was already suggested that you re-run discourse-setup to adjust Discourse’s memory usage, or whether the current defaults are reasonable given whatever else is using the server.
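For reference, re-tuning boils down to re-running the setup script, which proposes memory-related defaults sized to the host (a sketch; `UNICORN_WORKERS` and `db_shared_buffers` are the usual knobs in containers/app.yml):

```bash
cd /var/discourse
# Re-run the wizard; it suggests memory-related defaults (UNICORN_WORKERS,
# db_shared_buffers in containers/app.yml) sized to the host:
./discourse-setup

# Or edit containers/app.yml by hand, then rebuild to apply the change:
./launcher rebuild app
```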
If you didn’t re-index the database after the PG13 upgrade, then you might have a look at PostgreSQL 13 update for some information about that.
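For what it’s worth, the re-index itself is one statement run as the postgres user inside the container (a sketch, assuming the standard install):

```bash
cd /var/discourse
./launcher enter app
# Rebuild all indexes; CONCURRENTLY (PostgreSQL 12+) keeps the site usable meanwhile:
su postgres -c "psql discourse -c 'REINDEX SCHEMA CONCURRENTLY public;'"
```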
Well, that’s all very strange. No one else has had such problems. You seem to have enough hardware. My only guess is some issue with a reverse proxy (I guess you’ve got other stuff on the machine?).
Yes, another Docker-based service.
But nothing really performance-intensive at all, since that would have shown in the machine’s performance metrics.
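One more thing worth ruling out is I/O contention between the two containers; a quick look from the host along these lines will show it (`iostat` needs the sysstat package):

```bash
# Snapshot of CPU and I/O-wait ("wa") from the host:
top -b -n 1 | head -5

# Per-device utilization every 5 seconds:
iostat -x 5

# Per-container CPU/memory at a glance:
docker stats --no-stream
```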