Questions about db_work_mem

Hello Everyone,

I have some questions about this option. It's clear that it is commented out by default in app.yml and that it "can improve sorting performance, but adds memory usage per-connection" — but what does that actually mean? Does it depend on how many connections there are? What exactly is db_work_mem? Is it also set automatically when Discourse is installed, like db_shared_buffers and UNICORN_WORKERS? Is it safe to enable, or is it an advanced setting?

Right now it looks like this: `#db_work_mem: "40MB"`
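
For context, here is roughly what that block looks like in a standard app.yml. This is a sketch based on a typical samples/standalone.yml; the exact comments and values in your file may differ:

```yaml
params:
  db_default_text_search_config: "pg_catalog.english"

  ## Set db_shared_buffers to a max of 25% of the total memory.
  db_shared_buffers: "1024MB"

  ## can improve sorting performance, but adds memory usage per-connection
  #db_work_mem: "40MB"
```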

The server is:
Vultr High Frequency Compute 2 vCore, 4096 MB

Thanks for any response! :slight_smile:

It’s a complicated issue. You can search the Postgres documentation for the details, and there are some topics here that discuss tuning.

The defaults work reasonably well.

If you are having a problem, you can describe the problem and how much traffic you have and how big your database is.

Thank you, Jay! :slight_smile: Actually, I was just curious about it. I'm just looking for ways to improve the forum's performance, but if it can cause problems, then maybe it's better to leave it commented out.

So if I enable it, there is a good chance of running out of memory if the tuning is not correct or traffic increases? If I understand it correctly.

work_mem (integer)

Sets the base maximum amount of memory to be used by a query operation (such as a sort or hash table) before writing to temporary disk files. If this value is specified without units, it is taken as kilobytes. The default value is four megabytes (4MB). Note that for a complex query, several sort or hash operations might be running in parallel; each operation will generally be allowed to use as much memory as this value specifies before it starts to write data into temporary files. Also, several running sessions could be doing such operations concurrently. Therefore, the total memory used could be many times the value of work_mem; it is necessary to keep this fact in mind when choosing the value. Sort operations are used for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.

Hash-based operations are generally more sensitive to memory availability than equivalent sort-based operations. The memory available for hash tables is computed by multiplying work_mem by hash_mem_multiplier. This makes it possible for hash-based operations to use an amount of memory that exceeds the usual work_mem base amount.
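
To make the quoted behaviour concrete, here is a small sketch of how you could watch work_mem in action from a psql session. The table and column names (posts, created_at) are placeholders for illustration, and the plan lines in the comments show the kind of output to look for rather than exact text:

```sql
-- Check the current per-operation memory settings for this session.
SHOW work_mem;              -- e.g. 10MB
SHOW hash_mem_multiplier;   -- PostgreSQL 13+, e.g. 2

-- Lower work_mem for this session only, then inspect a sort-heavy query.
SET work_mem = '1MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM posts ORDER BY created_at;
-- "Sort Method: external merge  Disk: ..." means the sort spilled to
-- temporary files; with a larger work_mem you would instead see
-- "Sort Method: quicksort  Memory: ...".

-- SET only affects the current session; restore the configured value.
RESET work_mem;
```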

It sometimes helps to bump the work memory up to twice the commented-out default. I think it helps on a big site where the indexes are big, but mostly I don't know. I have broken a site by pushing it too high.
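
If you want to experiment with that, the change is just uncommenting and editing the line in app.yml. A minimal sketch, where 80MB is only an example of "twice the commented-out 40MB default" and not a recommendation:

```yaml
params:
  ## can improve sorting performance, but adds memory usage per-connection
  db_work_mem: "80MB"
```

Then rebuild the container (`cd /var/discourse && ./launcher rebuild app`) so the new setting takes effect.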

If you want to play with tuning, you can check out the Prometheus plugin and make pretty graphs with Grafana.

I found the formula to calculate db_work_mem here.

By default, max_connections is set to 100.

`Total RAM * 0.25 / max_connections`

With my 4096MB of RAM: 4096MB * 0.25 / 100 = ~10MB.

In the postgres.template.yml template, db_work_mem: "10MB" is the default, so I think it is calculated with this formula. I think this 10MB is the maximum for my server right now. Thanks, Jay :slight_smile:
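
For reference, this is roughly what the relevant lines look like in Discourse's templates/postgres.template.yml. This is a sketch; the neighboring keys can differ between versions:

```yaml
params:
  db_synchronous_commit: "off"
  db_shared_buffers: "256MB"
  db_work_mem: "10MB"
  db_default_text_search_config: "pg_catalog.english"
```

Since app.yml includes this template, anything you set under params in app.yml should override these defaults.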

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.