Postgres server performance issues


(Junaid Mailk) #1

Hello Guys,

We are facing performance issues with our Postgres server. After enabling diagnostic logging, it turned out that a lot of queries are not utilizing indexes properly. Our server details are below. Can you check the configuration of our Postgres server? Thanks.

Postgres Version : 9.5.4

Server hardware details:
Dedicated machine
16 physical cores
32 logical cores
RAM: 64 GB

RAM type: DDR3
Drive type: SSD 6.0Gb/s
RAID controller: MegaRAID SAS 2108 RAID card
Configured RAID level: 10
Number of drives: 4
File system: XFS
OS: CentOS 7

Please find a copy of the Postgres configuration attached.

Regards,
Junaid


(RĂ©gis Hanol) #2

Care to share them?

I think you forgot something :wink:


(Junaid Mailk) #3

Sorry, here is the link to the Postgres configuration: pg_configurations · GitHub. Thanks.

Here is the link for checking the slow queries: Slow PG Queries · GitHub


(Rafael dos Santos Silva) #4

Does your database fit in RAM? What’s the size of the data?


(Junaid Mailk) #5

How can I check whether all my data is loaded in RAM? Currently we have around 6 million posts, 500,000 threads, and around 900,000 users.
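A quick way to answer the size question is to ask Postgres directly from psql. These use the standard built-in size functions; the queries below are generic, not taken from the attached configuration:

```sql
-- Total on-disk size of the current database
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Largest tables and indexes, i.e. what competes for cache space
SELECT relname,
       pg_size_pretty(pg_total_relation_size(oid)) AS total_size
FROM pg_class
WHERE relkind IN ('r', 'i')
ORDER BY pg_total_relation_size(oid) DESC
LIMIT 10;
```

If the total size is well under 64 GB, the hot data should fit in RAM; the `pg_buffercache` contrib extension can show what is actually cached in `shared_buffers`.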


(Rafael dos Santos Silva) #6

:warning: WOW! 10GB work_mem? That can be used several times per query, so I think that’s a bit too much for a 64GB server.

Also, is this server only for the database, or is it running the app too?


(Matt Palmer) #7

The problem is your bottleneck. We can’t tell from the outside where that is, but something is restricting throughput, and you’ll need to look at the real-time performance statistics of the running system to identify which aspect of the system is causing the problem.

As to your slow queries: PostgreSQL isn’t a dumb flatfile-replacement like MySQL; its query planner is subtle and quick to anger. Without the EXPLAIN (preferably EXPLAIN ANALYZE) output for the slow queries, it can be very, very difficult to identify what is suboptimal about them.
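For anyone following along, capturing that output is a matter of prefixing the slow query in psql. The table and column names here are placeholders, not from the linked gist:

```sql
-- Run the actual query and report real timings and buffer usage
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
FROM posts            -- placeholder table name
WHERE topic_id = 1234 -- placeholder predicate
ORDER BY created_at DESC;
```

In the output, a `Seq Scan` node on a large table where you expected an `Index Scan` is usually the first thing to investigate.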


(Jeff Atwood) #8

Elaborating on the excellent point @falco raised… if you have work_mem set to 10GB, you definitely have a misconfigured database server. I would strongly suggest reverting to the default settings here and following the simple recommendations in the Postgres guide for tuning your server:

https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

The main (and probably only, to be honest) setting you should tweak is shared_buffers:

If you have a system with 1GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system. If you have less RAM you’ll have to account more carefully for how much RAM the OS is taking up; closer to 15% is more typical there. There are some workloads where even larger settings for shared_buffers are effective, but given the way PostgreSQL also relies on the operating system cache, it’s unlikely you’ll find using more than 40% of RAM to work better than a smaller amount.
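Applying that guidance to a 64 GB dedicated database server, a starting point might look roughly like this. The values are illustrative only, not a recommendation tuned to this workload (and they assume Postgres has the machine mostly to itself):

```
# postgresql.conf sketch for a 64 GB dedicated DB server (illustrative)
shared_buffers = 16GB          # ~25% of RAM, per the tuning guide
effective_cache_size = 48GB    # planner hint: RAM plausibly available for caching
work_mem = 32MB                # per sort/hash node, per backend - keep modest
maintenance_work_mem = 1GB     # used by VACUUM, CREATE INDEX
```

Note that work_mem is allocated per sort or hash operation per backend, so a few hundred connections each running a multi-sort query can multiply it many times over.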


(Junaid Mailk) #9

Thanks for identifying the issue. I have reverted it back to 4MB, which is the default value of work_mem. An Elasticsearch server is also running on that system.


(Junaid Mailk) #10

@Falco’s point is right, and I have changed that. It was misconfigured by me, as I was checking a lot of sites for Postgres performance improvements. I have now reverted it back to its default value of 4MB. I am re-evaluating all the settings I changed when I deployed the solution.
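For reference, since this server is on 9.5, the revert can be done and verified from psql without hand-editing postgresql.conf (ALTER SYSTEM exists since 9.4):

```sql
-- Revert work_mem to the default and reload the configuration
ALTER SYSTEM SET work_mem = '4MB';
SELECT pg_reload_conf();

-- Confirm the value the running server is actually using
SHOW work_mem;
```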