- Sometimes the DB gets overloaded because of certain queries, and we were told that we need to optimize PostgreSQL. Any idea how to achieve this?

These are our DB resources:
- CPU: 4 vCPU
- Memory: 22 GB
Hello all, I’m working with @Abdelrahman_MoHamed on this. We recently migrated a site with roughly 500k topics and 3.5 million users to Discourse. We are self-hosted on GCP. During the migration, the partner who assisted us mentioned it might make sense to tune Postgres to get proper performance from the DB. Our assumption was that the DB would come optimized for Discourse out of the box (maybe it is?). However, given the partner’s suggestion, we wanted to follow up here.
Long story short: are there some common ways to optimize Postgres for a DB with the numbers posted here?
Thanks in advance for the help!
Do you have more detailed evidence that the DB is the limiting factor, or that it is the cause of these CPU loads? The output of `ps` or `top`, for example.
If it is the DB, I imagine there’s some way of asking it which queries it is working on.
Such details might be helpful.
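For what it’s worth, a rough first check might look like this (a sketch assuming the standard Docker install under `/var/discourse`; adjust paths to taste):

```bash
# From the host: is it postgres or the ruby (unicorn) processes using the CPU?
top -bn1 -o %CPU | head -20

# Then, inside the app container (cd /var/discourse && ./launcher enter app),
# ask Postgres which queries it is working on right now:
su postgres -c 'psql discourse' <<'SQL'
SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST;
SQL
```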
Thanks @Ed_S, we will add more data here soon. Just monitoring things for now. Appreciate the response.
That does seem like a weird combo … 22 GB of memory but only 4 vCPUs? Overkill on memory, but potentially too few cores.
Normally you’d have a server with much more vCPU per GB.
That’s because web serving is very parallel, and each Unicorn needs only around 1 GB.
Each Unicorn can run on a different core, I believe.
This may be why you are hitting 100% CPU utilisation so easily.
I’d recommend you consider moving to a 16/16 or even an 8/8 (vCPUs/GB RAM) configuration and see if things improve.
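For reference, on a standard discourse_docker install the worker count is set in `containers/app.yml` and applied with a rebuild (the `8` below is just an illustration, roughly one worker per core):

```bash
# containers/app.yml -- under the env: section, e.g.
#   env:
#     UNICORN_WORKERS: 8
cd /var/discourse
./launcher rebuild app   # rebuilds the container to apply the change
```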
Is a small sidestep allowed? Are Unicorns a similar thing to PHP workers? One document tried to explain that Unicorns are actually HTTP servers with their own workers delivering requests to Ruby, and that this makes them totally different. But both handle HTTP requests, both allow more concurrent requests, and both need more RAM as you add more of them, so to me the two are pretty much the same.
Right, or horribly wrong? I would like to understand the principle now, because I know why, when, and how to adjust PHP workers, but Unicorns are just mystical and magical creatures to me.
Yes, I think that’s a fair comparison.
Related: How might an admin find out if they have too few, or too many, unicorns for their web traffic?
Related: How might an admin find out if they have much more RAM than they need?
My understanding is that the number of unicorns is generally scaled according to the number of CPUs, but it needn’t be so - if, for example, the host config has changed since setup.
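A quick way to check the current counts (a sketch; assumes the standard Docker install):

```bash
nproc                        # cores available on the host
# Inside the app container (./launcher enter app):
ps aux | grep [u]nicorn      # one master plus N workers; the RSS column shows
                             # roughly how much RAM each worker is using
```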
The amount of RAM should scale according to how much is needed. I don’t know if a huge import of many many posts and users necessarily means more RAM is needed - the question would be, how much of the database is accessed to serve each web request, or each regular task?
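One database-side way to get at the “how much of the database is accessed” question is the buffer cache hit ratio; a sketch, run as the postgres user inside the container:

```bash
su postgres -c 'psql discourse' <<'SQL'
-- Fraction of block reads served from shared buffers since stats were reset.
-- A busy forum whose hot data fits in RAM usually sits around 0.99 or above.
SELECT round(sum(blks_hit)::numeric /
             nullif(sum(blks_hit) + sum(blks_read), 0), 4) AS cache_hit_ratio
FROM pg_stat_database;
SQL
```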
For me, a 10 min burst of maximum CPU use, against a background of very low use, would not indicate a problem. (It would be a problem if it meant a very busy and time-critical forum, such as say a sports/gambling forum, had some delay to serving pages at times of peak interest. But my forum is not time-critical or performance-critical.)
I don’t know how one could tabulate the range of recent response times, but that might be a good bit of data to look at. Either from the web-serving side or from the database side.
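On the database side, the `pg_stat_statements` extension can tabulate exactly this, if it is enabled (a sketch; the column names below assume Postgres 13+, older versions use `mean_time`/`max_time`):

```bash
su postgres -c 'psql discourse' <<'SQL'
-- Requires: CREATE EXTENSION pg_stat_statements;
-- plus shared_preload_libraries = 'pg_stat_statements' in postgresql.conf
SELECT calls,
       round(mean_exec_time::numeric, 1) AS mean_ms,
       round(max_exec_time::numeric, 1)  AS max_ms,
       left(query, 60)                   AS query
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;
SQL
```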