Sudden burst of memory usage that won't go down

A little over a week ago, I looked at the DO graph for memory and was quite surprised:

Memory usage had been stable for over a year. I stopped and restarted the container and usage went back down, but in the last few hours it has climbed back up to where it was before.

What could be causing this? top/ps tell me the main culprits are postgres processes. Here is the output from free:

$ free -mh
              total        used        free      shared  buff/cache   available
Mem:          7.8Gi       2.5Gi       142Mi       2.3Gi       5.1Gi       2.6Gi
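
For reference, one way to confirm which processes are holding the memory (the ps options below are standard procps flags; the head count is arbitrary):

$ ps aux --sort=-rss | head -n 15    # processes sorted by resident memory, largest first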

I can't even access the droplet via the DO console; it crashes because of OOM. Luckily I had installed SSH keys, so I can still reach the droplet from my personal laptop.

What should I do? I am on a tight budget, so resizing the droplet is a last resort, and I am wondering if there is anything else I can try. This seems to have come completely out of nowhere. I am running version 2.7.0.beta5.


What, if anything, changed recently? Did you upgrade?

The only vertical jump on my graphs over the past 14 days was also on 11 May, but less dramatic, from 78% to 89%. Maybe we upgraded then?

It seems host related, as for me the memory usage went down. The version bump to beta9 happened on the 10th, so I probably did a rebuild around that time (and downloaded a new image, as the disk usage shows). I don't remember upgrading the host, though I might have.
(memory and disk usage graphs)

You don't have swap?

No, I’ve been running 2.7beta5 for a while now. We’ve not changed anything in a while.

The reason I'm not on the latest is that the last upgrade we did raised concerns with my front-end guy, because it introduced some breaking changes to his styling work (I'm just a sysadmin; I have no idea about this kind of stuff). I think those have since been fixed in Discourse, but we were waiting for the next minor version to upgrade.

What could be causing this on the host?

Resizing the droplet doubles our bill, so I really don't want to do that, but it may be the easiest solution.

Swap is whatever came out of the box, not sure.

:thinking: And what does /sidekiq say?

If you add swap, you'll avoid the OOMs. You don't want OOMs, and you don't want a lot of paging activity; beyond that, the memory usage bump itself may or may not matter. (It might be interesting, but that's a different question.)
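
If you do add swap, a minimal sketch of setting up a swap file follows; the 2G size and the /swapfile path are assumptions, so adjust them for your droplet:

$ sudo fallocate -l 2G /swapfile    # create a 2 GB file (size is an assumption)
$ sudo chmod 600 /swapfile          # restrict permissions, as swapon requires
$ sudo mkswap /swapfile             # format the file as swap space
$ sudo swapon /swapfile             # enable it immediately
$ echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab    # keep it across reboots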

I’d recommend running
vmstat 5 5
or similar to see what the paging activity looks like.
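
The columns to watch there are si and so (memory swapped in from and out to disk per second); sustained non-zero values mean the box is actively paging. For reference:

$ vmstat 5 5    # sample every 5 seconds, five times; check the si/so columns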

Also, free -h is more useful than free -mh, because it does matter how much swap you have left.
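
To check how much swap is configured and how much is in use, something like this works (swapon is part of util-linux, so it should already be on a stock droplet):

$ free -h          # the Swap: line shows total, used, and free swap
$ swapon --show    # lists each swap device or file with its size and usage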
