High CPU & Disk Usage After Restore from a Failed Update

After discovering the release of 3.3.0-beta1, I immediately updated my Discourse instance from the web interface.

However, during the update process, the web interface logs stalled for more than fifteen minutes with no further output (as far as I recall, the last output was a series of growing ellipses, though I’m not certain). About two hours later I checked the server status from the cloud platform, suspected it had frozen, and performed a soft reboot from the platform.

After the reboot, I promptly ran a Discourse backup from the command line, downloaded the backup and app.yml locally, and then completely reinstalled Discourse (on the latest version, of course). Afterwards, I uploaded the backup and ran the restore from the command line.

The restore was successful, but my Discourse now has severe performance issues. Previously, CPU usage during normal use did not exceed 10%; now it spikes to around 30% even during off-peak hours, and disk reads are also relatively high. Worse, the server sometimes inexplicably locks up, with disk reads reaching around 1,900 per second (the limit of my cloud server) and the CPU spending over 40% of its time in I/O wait. Web pages fail to load and time out. I was running vmstat and top at the time, but unfortunately I didn’t keep the output. I recall that swap I/O was almost zero, so it was purely disk reads, and the number of blocked processes exceeded 100.

I suspect that this failed update may have caused some damage, possibly to data within the backup rather than to the software itself. Is there any way to refresh or clear some cache, or a similar operation? Or perhaps run the update again? (After all, Discourse updates are quite frequent, and it can be updated almost any time.)

As a temporary workaround, I installed a software watchdog to automatically reboot the server under high load. That’s obviously not a long-term solution, and I haven’t found similar reports here, so it evidently isn’t a problem with the Discourse software itself. I’m wondering how to address this.

If you need me to execute some commands on the server to check its status during high loads, feel free to ask. I’ll do my best to maintain my SSH connection and obtain this data without rebooting.
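For reference, I plan to capture the data with something like the following, so it survives even if my session drops (standard batch options for vmstat and top; the intervals and sample counts are just examples):

vmstat 5 | tee vmstat.log
top -b -d 5 -n 120 > top.log    # batch mode, 5-second interval, 120 samples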

What does your Sidekiq queue look like? It could just be rebaking all posts, and the load should calm down once the backlog has been worked through, i.e. the situation should be transitory.
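If it is a rebake backlog, the queue can be inspected at /sidekiq as an admin. For reference, on a standard docker install a manual rebake is usually kicked off from the console roughly like this (a sketch; the /var/discourse path assumes the default install location):

cd /var/discourse
./launcher enter app       # shell inside the app container
cd /var/www/discourse
rake posts:rebake          # rebake all posts (can take a long time)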

This has happened to me a few times recently. Each time I’ve manually rebooted and then performed a command-line rebuild. Obviously not a good long-term solution, and risky.

This is a sign your machine no longer has enough memory. Consider increasing swap to cope with the spikes.

I’m finding I need a 4GB machine plus 3GB swap now to execute online updates: how much memory do you have?
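For reference, adding or enlarging a swap file is usually along these lines (a sketch; the file name and size are examples, and a /swapfile may already exist on your machine):

fallocate -l 4G /swapfile2          # or: dd if=/dev/zero of=/swapfile2 bs=1M count=4096
chmod 600 /swapfile2
mkswap /swapfile2
swapon /swapfile2
echo '/swapfile2 none swap sw 0 0' >> /etc/fstab    # keep it across reboots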

Uh… actually, this isn’t the very high load you see shortly after a restore, which I suppose would be normal. In my case, after the load had dropped to around 2% following the restore, the page suddenly stopped loading after a single action while I was making some edits in the web interface. When I checked the cloud server platform, CPU usage and disk reads were very high. I don’t think this is a matter of rebaking all posts.

Since my server is currently in a normal state, here is the Sidekiq interface:


I have 2GB of physical memory and 2GB of swap. It seems like swap was automatically enabled during the Discourse installation process? Anyway, I haven’t manually configured it.


I have now manually increased swap to 6 GB in total.

Ok that’s your issue.

From my recent experience it looks like the web-based update/compilation process has become much greedier, so you could experiment with a much bigger swap or migrate to a bigger server.

By the way, in my previous attempts to perform backups through the web interface, the logs would sometimes get stuck, so I now run all backups and restores from the command line (except automatic backups, which have never failed and work fine). I’m curious whether this is a common issue. Is there a command to perform the update? Should I keep doing updates through the command line in the future, and would that improve the success rate? The command-line backup and restore steps I currently use are sketched below.
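(Roughly, assuming the standard docker install in /var/discourse; the backup file name is just an example:)

cd /var/discourse
./launcher enter app
discourse backup                            # creates a .tar.gz in the backups directory
discourse enable_restore                    # if restores are not already allowed
discourse restore example-backup.tar.gz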

In daily usage, I also notice significantly higher page-load delays than before, and it’s very likely a performance problem rather than a network issue. Here is the performance data when loading the homepage:

The server froze again. (The watchdog was stopped manually so I could capture the full logs without a reboot.) Here is the top and vmstat output:

Note that the top output here hasn’t updated much since the server freeze, unlike vmstat, so it may not reflect the latest data.

top - 21:53:16 up  2:19,  3 users,  load average: 19.20, 4.89, 1.90
Tasks: 164 total,   1 running, 163 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.7 us,  7.2 sy,  0.0 ni,  0.0 id, 91.8 wa,  0.0 hi,  0.4 si,  0.0 st
MiB Mem :   1668.6 total,     75.9 free,   1473.1 used,    119.6 buff/cache
MiB Swap:   2048.0 total,   2048.0 free,      0.0 used.     36.8 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                   
     92 root      20   0       0      0      0 S   2.8   0.0   0:08.01 kswapd0                                                                   
   2504 root      10 -10  123496   6932      0 D   1.6   0.4   2:59.44 AliYunDunMonito                                                           
     90 root       0 -20       0      0      0 I   1.0   0.0   0:01.10 kworker/1:1H-kblockd                                                      
   1320 root      20   0 1236124   2116      0 S   0.8   0.1   0:00.96 containerd-shim                                                           
   2493 root      10 -10   87652   1744      0 S   0.6   0.1   0:44.69 AliYunDun                                                                 
   1008 root      20   0 1276372   2160      0 S   0.5   0.1   0:43.62 argusagent                                                                
    725 root      20   0  689584   3860      0 S   0.3   0.2   0:08.16 aliyun-service                                                            
   1964 message+  20   0   74800  12512      0 D   0.3   0.7   0:19.97 redis-server                                                              
   1973 www-data  20   0   56840   4872    472 D   0.3   0.3   0:01.36 nginx                                                                     
   2081 admin     20   0  519788 263284      4 S   0.3  15.4   0:07.88 ruby                                                                      
   2214 admin     25   5 5338856 495356      0 S   0.3  29.0   0:31.95 ruby                                                                      
   2227 admin     20   0 5246956 484488      4 S   0.3  28.4   0:33.31 ruby                                                                      
   2467 systemd+  20   0  215196  15052  11348 D   0.3   0.9   0:00.45 postmaster                                                                
    785 root      20   0   42256    748      0 S   0.2   0.0   0:07.58 AliYunDunUpdate                                                           
    805 root      20   0 1798676  11832      0 S   0.2   0.7   0:02.24 containerd                                                                
   1027 root      20   0 2281464  29296      0 S   0.2   1.7   0:01.96 dockerd                                                                   
   2056 systemd+  20   0   67872   2192      0 D   0.2   0.1   0:00.39 postmaster                                                                
   2243 root      20   0   17204   5068   2592 S   0.2   0.3   0:00.34 sshd                                                                      
   2555 root      20   0   10496   2736   1916 R   0.2   0.2   0:11.41 top                                                                       
   9998 admin     20   0 5057740 313028      4 S   0.2  18.3   0:04.83 ruby                                                                      
  11658 admin     20   0   10944    252      0 D   0.2   0.0   0:00.09 sleep                                                                     
      1 root      20   0  166776   6568   2808 D   0.1   0.4   0:01.42 systemd                                                                   
     22 root      20   0       0      0      0 S   0.1   0.0   0:00.31 ksoftirqd/1                                                               
    149 root       0 -20       0      0      0 I   0.1   0.0   0:00.17 kworker/0:1H-kblockd                                                      
    707 systemd+  20   0   16260   2296   1212 D   0.1   0.1   0:00.16 systemd-network                                                           
    723 root      20   0   19424    984      0 S   0.1   0.1   0:01.58 assist_daemon                                                             
   1965 root      20   0    6680    280      0 D   0.1   0.0   0:00.05 cron                                                                      
   1974 www-data  20   0   55912   3996    472 D   0.1   0.2   0:00.98 nginx                                                                     
   2054 systemd+  20   0  213244   6344   4384 S   0.1   0.4   0:02.02 postmaster                                                                
   2055 systemd+  20   0  213780   3208    924 D   0.1   0.2   0:00.12 postmaster                                                                
   2270 root      20   0   17204   3880   1404 D   0.1   0.2   0:00.67 sshd            
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0  71456    624 134980    0    0 11302     4 2209 3766  1  1 96  2  0
 0  1      0  72756   1452 132412    0    0 15250    12 2418 3969  1  2 94  4  0
 0  0      0  67724   1244 137444    0    0  4485     2 2083 3700  1  1 98  1  0
 0  0      0  80388    928 124660    0    0 26388    10 2650 4018  1  2 92  4  0
 0  0      0  78256    696 126228    0    0 14752     6 2286 3792  1  1 95  3  0
 0  0      0  72252   1680 130784    0    0 40334    41 2938 4212  1  2 89  7  0
 0  0      0  74812    712 128588    0    0 15904    34 2309 3839  1  1 95  3  0
 2  0      0  71120    856 132224    0    0 54282     5 3292 4382  1  3 85 10  0
 0  0      0  73668    600 128232    0    0 30135    35 2795 4212  1  2 92  5  0
 0  0      0  65160   1524 135636    0    0 30239     6 2842 4247  1  2 90  7  0
 0  0      0  74480   1416 126484    0    0 11761     8 2254 3793  1  2 95  2  0
 1  0      0  80096   1188 120752    0    0 37110     2 2907 4163  1  4 88  7  0
 0  0      0  77880    688 122928    0    0  8439     5 2151 3719  1  1 97  2  0
 0  8      0  76744   1256 122536    0    0 110986    14 4410 4860  1  6 34 59  0
 0 13      0  71776    244 128496    0    0 126111    21 4386 4699  1  7  0 91  0
 1 30      0  71860    356 127916    0    0 125040    21 4634 7331  1  7  0 91  0
 3 31      0  77448    204 122688    0    0 125980    10 4167 8370  1  5  0 94  0
 3 44      0  68184    168 132192    0    0 150011     0 4296 9015  0  7  0 93  0
 1 44      0  74812    216 124700    0    0 204040     0 4949 10108  1  6  0 93  0
 6 35      0  66588    244 133160    0    0 131782     2 6238 12786  1  7  0 93  0
 1 39      0  68156    556 130996    0    0 144440    11 4608 9622  1  8  0 92  0
 0 71      0  72256    212 126520    0    0 169879    12 4780 10242  1 10  0 89  0
 0 91      0  66752    112 132476    0    0 542955    18 7422 17564  0 52  0 48  0
 6 89      0  70324    172 129516    0    0 404754    21 6033 14379  0 52  0 48  0
 3 87      0  60468    156 138060    0    0 499604     1 10640 26065  0 56  0 44  0
 5 91      0  63456    152 135588    0    0 725747    11 10827 25806  0 55  0 45  0
 4 101      0  69596    168 129132    0    0 558872     8 7755 18745  0 56  0 44  0
 6 93      0  66516    156 132772    0    0 394003     3 12549 30622  0 54  0 46  0
 4 94      0  62872    152 135976    0    0 656057     0 7790 18800  0 52  0 48  0
 2 95      0  57072    156 141552    0    0 347776     0 10390 25943  0 56  0 44  0
 4 97      0  66308    164 132964    0    0 641920     0 9963 24307  0 55  0 45  0
13 90      0  75620    152 123836    0    0 658883     0 7870 19260  0 55  0 45  0
 2 96      0  66244    152 131496    0    0 764233     0 16078 40498  0 55  0 45  0
 1 102      0  60156    224 137440    0    0 542241     0 10243 24560  0 54  0 46  0
13 90      0  62488    164 135548    0    0 671301     0 13771 34230  0 55  0 45  0
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 3 101      0  61812    100 136048    0    0 548615     0 10328 26305  0 55  0 45  0
 2 102      0  61412     92 135928    0    0 1003845     0 17283 42751  0 56  0 44  0
 8 98      0  60408    152 136896    0    0 662925     0 11369 29130  0 55  0 45  0
 2 104      0  68292    152 129552    0    0 628286     0 13772 34122  0 56  0 44  0
 2 99      0  65352    152 132436    0    0 494863     5 8766 21559  0 54  0 46  0
10 94      0  58344     96 138800    0    0 669390     0 9294 22968  0 56  0 44  0
 3 95      0  61180    100 135444    0    0 598622     0 13342 32430  0 57  0 43  0
 8 96      0  59444    160 137584    0    0 720255     0 10812 25735  0 57  0 43  0
14 89      0  66844    160 130072    0    0 665390     0 17496 43187  0 56  0 44  0
 1 98      0  64668    152 132756    0    0 498328     0 7262 18622  0 55  0 45  0
 1 98      0  61848    172 135060    0    0 438272     0 7693 19947  0 56  0 44  0
 3 102      0  60636    152 136548    0    0 1020531     0 17162 43981  0 55  0 45  0
 2 93      0  55096    164 141908    0    0 595678     0 14379 37088  0 55  0 45  0
 9 100      0  61704    152 134692    0    0 368380     0 10621 27539  0 55  0 45  0
 2 104      0  63784    148 132620    0    0 767905     8 11090 28866  0 56  0 44  0
 2 110      0  60124    232 135636    0    0 882479     0 14778 38194  0 55  0 45  0
 3 109      0  64632    156 131448    0    0 489138     0 11521 30264  0 56  0 44  0
 2 107      0  65852     36 130592    0    0 1214409     0 17572 45932  0 55  0 45  0
 2 97      0  60260     96 135668    0    0 559811     0 14592 37198  0 56  0 44  0
 1 103      0  56064    104 139668    0    0 514522     0 11059 26834  0 57  0 43  0
 7 97      0  62624    160 133492    0    0 646888     4 7819 19552  0 57  0 43  0
 8 102      0  65404    152 130172    0    0 526268     0 11623 29396  0 56  0 44  0

I’ll keep the freezing state without rebooting. If you need me to execute some commands on the server to check its status during high loads, feel free to ask. I’ll do my best to maintain my SSH connection and obtain all data without rebooting.


Other data from the cloud server platform:


This looks suspicious - it may or may not be the cause. Something is (probably) doing a huge amount of disk reads. AliYunDun seems to be the cloud provider’s tooling. AliYunDunMonitor is (possibly) stuck in disk wait, is running at elevated priority, and has clocked up a lot of CPU time.

I see someone somewhere has written a script to kill such processes. The tone is angry. (I cannot say whether or not that script is safe, or a good idea.)

But it’s also possible some Discourse process is doing all the disk reads.

Perhaps try

apt-get install iotop
iotop -b -d 15 -P -o

and watch it until you get your freeze.
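If it helps to keep a record across the freeze, the same run can also be logged to a file with timestamps, something along the lines of:

iotop -b -d 15 -P -o -t | tee iotop.log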

Okay, and this is the output of iotop during the freeze:

Total DISK READ:         7.92 M/s | Total DISK WRITE:        11.70 K/s
Current DISK READ:      12.14 M/s | Current DISK WRITE:      17.29 K/s
    PID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN      IO    COMMAND
      1 be/4 root        6.12 K/s    0.00 B/s  ?unavailable?  init noibrs
    300 be/3 root        0.00 B/s    2.93 K/s  ?unavailable?  [jbd2/vda3-8]
    374 be/3 root       10.64 K/s    0.00 B/s  ?unavailable?  systemd-journald
    709 be/4 systemd-    3.99 K/s    0.00 B/s  ?unavailable?  systemd-resolved
    723 be/4 root       51.07 K/s    0.00 B/s  ?unavailable?  assist_daemon
    725 be/4 root      856.83 K/s  272.40 B/s  ?unavailable?  aliyun-service
    770 be/4 root       63.31 K/s    0.00 B/s  ?unavailable?  python3 -Es /usr/sbin/tuned -l -P
    785 be/4 root       12.77 K/s    0.00 B/s  ?unavailable?  AliYunDunUpdate
    805 be/4 root      389.98 K/s    0.00 B/s  ?unavailable?  containerd
   1008 be/4 root      242.60 K/s  544.80 B/s  ?unavailable?  argusagent
   1027 be/4 root      690.31 K/s    0.00 B/s  ?unavailable?  dockerd -H fd:// --containerd=/run/containerd/containerd.sock
   1320 be/4 root      241.01 K/s    0.00 B/s  ?unavailable?  containerd-shim-runc-v2 -namespace moby -id 6e9880833995c5e2a295ed6571129387036824d304e09f43d5e1333ebd7fdbd5 -address /run/containerd/containerd.sock
   1953 be/4 root     1906.79 B/s    0.00 B/s  ?unavailable?  runsvdir -P /etc/service
   1962 be/4 admin      85.12 K/s    0.00 B/s  ?unavailable?  bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
   1964 be/4 messageb  611.83 K/s    0.00 B/s  ?unavailable?  redis-server *:6379
   1973 be/4 www-data  530.17 K/s  272.40 B/s  ?unavailable?  nginx: worker process
   2054 be/4 systemd-    8.78 K/s    7.45 K/s  ?unavailable?  postgres: 13/main: walwriter
   2056 be/4 systemd-  102.42 K/s    0.00 B/s  ?unavailable?  postgres: 13/main: stats collector
   2081 be/4 admin     817.20 B/s    0.00 B/s  ?unavailable?  unicorn master -E production -c config/unicorn.conf.rb
   2493 ?dif root      132.21 K/s    0.00 B/s  ?unavailable?  AliYunDun
   2504 be/2 root      740.32 K/s    0.00 B/s  ?unavailable?  AliYunDunMonitor
  45686 be/4 root       47.62 K/s    0.00 B/s  ?unavailable?  sshd: root@pts/0
  45816 be/4 root       36.98 K/s    0.00 B/s  ?unavailable?  top
  46146 be/4 root      357.52 K/s    0.00 B/s  ?unavailable?  python3 /usr/sbin/iotop -b -d 15 -P -o
  46160 be/4 root      267.34 K/s    0.00 B/s  ?unavailable?  sshd: root@pts/2
  46232 be/4 root      174.24 K/s    0.00 B/s  ?unavailable?  vmstat 5
  51903 be/4 admin    1569.22 K/s  272.40 B/s  ?unavailable?  unicorn worker[0] -E production -c config/unicorn.conf.rb
  51972 be/4 admin     175.84 K/s    0.00 B/s  ?unavailable?  unicorn worker[1] -E production -c config/unicorn.conf.rb
  60365 be/4 root      272.40 B/s    0.00 B/s  ?unavailable?  [kworker/u4:0-writeback]
  63081 be/4 root      272.40 B/s    0.00 B/s  ?unavailable?  [kworker/u4:4-events_unbound]
  63825 be/4 systemd-  673.02 K/s    0.00 B/s  ?unavailable?  postgres: 13/main: discourse discourse [local] idle
  64191 be/4 root       23.14 K/s    0.00 B/s  ?unavailable?  -bash

Actually, I managed to crash the website just by rapidly refreshing the homepage several times with F5… so I think it’s quite likely a Discourse-related issue after all.

top output now:

top - 13:36:47 up 18:03,  4 users,  load average: 86.10, 47.87, 19.63
Tasks: 174 total,   4 running, 170 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.2 us, 21.1 sy,  0.0 ni,  0.0 id, 76.3 wa,  0.0 hi,  0.4 si,  0.0 st
MiB Mem :   1668.6 total,     69.6 free,   1413.6 used,    185.4 buff/cache
MiB Swap:   2048.0 total,   2048.0 free,      0.0 used.     36.9 avail Mem 

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                                                                   
     92 root      20   0       0      0      0 S  37.8   0.0 100:17.66 kswapd0                                                                   
   1320 root      20   0 1236124   2320      0 S   4.0   0.1   4:18.47 containerd-shim                                                           
   1027 root      20   0 2281464  29160      0 S   2.6   1.7   2:25.38 dockerd                                                                   
    805 root      20   0 1798676  11844      0 S   2.5   0.7   2:28.22 containerd                                                                
   2504 root      10 -10  129972  11660      0 D   2.1   0.7  27:34.39 AliYunDunMonito                                                           
     90 root       0 -20       0      0      0 R   1.5   0.0   1:58.75 kworker/1:1H-kblockd                                                      
   2493 root      10 -10   88144   2244      0 D   1.1   0.1  10:49.21 AliYunDun                                                                 
  51989 root      20   0 1318960   8980      0 S   0.8   0.5   0:46.15 snapd                                                                     
   1008 root      20   0 1276372   2584      0 S   0.7   0.2  11:16.56 argusagent                                                                
  46146 root      20   0   22320   9516   1908 D   0.5   0.6   0:23.57 iotop                                                                     
  46160 root      20   0   17200   4968   2496 D   0.3   0.3   0:03.41 sshd                                                                      
  45686 root      20   0   17200   5188   2716 D   0.3   0.3   0:04.21 sshd                                                                      
    785 root      20   0   42324    876      0 S   0.2   0.1   3:41.24 AliYunDunUpdate                                                           
  45816 root      20   0   10500   3352   2544 R   0.2   0.2   0:26.01 top                                                                       
  51903 admin     20   0  845708 458296      4 S   0.2  26.8   0:32.36 ruby                                                                      
  53938 admin     25   5 5285240 378040      0 S   0.2  22.1   0:33.50 ruby                                                                      
    723 root      20   0   19424    996      0 S   0.2   0.1   1:57.31 assist_daemon                                                             
    725 root      20   0  689584   3800      0 S   0.2   0.2   3:06.25 aliyun-service                                                            
   1964 message+  20   0   74800  13944      0 D   0.2   0.8   2:51.80 redis-server                                                              
   1974 www-data  20   0   57696   6136    772 R   0.2   0.4   0:36.37 nginx                                                                     
   1975 www-data  20   0   53508   1520     72 S   0.2   0.1   0:45.76 nginx                                                                     
   2053 systemd+  20   0  213244   6952   4988 D   0.2   0.4   0:27.47 postmaster                                                                
  51972 admin     20   0 5139692 329104      4 S   0.2  19.3   0:08.21 ruby                                                                      
  64309 systemd+  20   0  213244   1956      0 D   0.2   0.1   0:00.15 postmaster                                                                
     22 root      20   0       0      0      0 S   0.1   0.0   0:09.66 ksoftirqd/1                                                               
    149 root       0 -20       0      0      0 I   0.1   0.0   0:06.26 kworker/0:1H-kblockd                                                      
   1973 www-data  20   0   56576   5088    768 D   0.1   0.3   0:35.23 nginx                                                                     
   2052 systemd+  20   0  213468  34408  32340 D   0.1   2.0   0:30.59 postmaster                                                                
   2055 systemd+  20   0  213780   3244    932 D   0.1   0.2   0:29.32 postmaster                                                                
   2056 systemd+  20   0   68132   2256      0 D   0.1   0.1   0:15.54 postmaster                                                                
   2081 admin     20   0  585324 280488      4 S   0.1  16.4   0:57.44 ruby            

vmstat output below:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0  80348    884 230624    0    0   538     6 1980 3661  1  1 99  0  0
 0  0      0  78232   2780 226576    0    0 17644    20 2510 4063  3  1 93  2  0
 3  0      0  93580    176 186012    0    0 34986    19 3329 4751 13  4 78  6  0
 0 29      0  82988    192 177628    0    0 127805    10 4930 6456  5  7 34 54  0
 0 30      0  85520    204 175124    0    0 120935     6 3768 6887  1  5  0 94  0
 3 37      0  89408    232 170836    0    0 122472    35 3876 7157  1  7  0 92  0
10 80      0  74004    220 186372    0    0 476674    36 4532 10212  0 35  0 64  0
11 89      0  71564    160 190284    0    0 388222     0 11098 27886  0 53  0 47  0
 0 100      0  73508    160 188484    0    0 209539     4 4726 11652  0 32  0 68  0

I’m not an expert in Linux, but isn’t the kswapd0 process responsible for managing memory and swap exchange? Why does it have such high sy CPU usage when swpd, si, and so in vmstat are all 0? I’m completely puzzled.

It’s very suspicious that you have 2G of swap but there’s no swap usage and no swap activity. I would suspect your kernel settings. Here are mine - please show all of yours, and please do not cherry-pick!

head /proc/sys/vm/*
==> /proc/sys/vm/admin_reserve_kbytes <==
8192

==> /proc/sys/vm/block_dump <==
0
head: cannot open '/proc/sys/vm/compact_memory' for reading: Permission denied

==> /proc/sys/vm/compact_unevictable_allowed <==
1

==> /proc/sys/vm/dirty_background_bytes <==
0

==> /proc/sys/vm/dirty_background_ratio <==
10

==> /proc/sys/vm/dirty_bytes <==
0

==> /proc/sys/vm/dirty_expire_centisecs <==
3000

==> /proc/sys/vm/dirty_ratio <==
20

==> /proc/sys/vm/dirty_writeback_centisecs <==
500

==> /proc/sys/vm/dirtytime_expire_seconds <==
43200

==> /proc/sys/vm/drop_caches <==
0

==> /proc/sys/vm/extfrag_threshold <==
500

==> /proc/sys/vm/hugepages_treat_as_movable <==
0

==> /proc/sys/vm/hugetlb_shm_group <==
0

==> /proc/sys/vm/laptop_mode <==
0

==> /proc/sys/vm/legacy_va_layout <==
0

==> /proc/sys/vm/lowmem_reserve_ratio <==
256     256     32      1

==> /proc/sys/vm/max_map_count <==
65530

==> /proc/sys/vm/memory_failure_early_kill <==
0

==> /proc/sys/vm/memory_failure_recovery <==
1

==> /proc/sys/vm/min_free_kbytes <==
45056

==> /proc/sys/vm/min_slab_ratio <==
5

==> /proc/sys/vm/min_unmapped_ratio <==
1

==> /proc/sys/vm/mmap_min_addr <==
65536

==> /proc/sys/vm/mmap_rnd_bits <==
28

==> /proc/sys/vm/mmap_rnd_compat_bits <==
8

==> /proc/sys/vm/nr_hugepages <==
0

==> /proc/sys/vm/nr_hugepages_mempolicy <==
0

==> /proc/sys/vm/nr_overcommit_hugepages <==
0

==> /proc/sys/vm/numa_stat <==
1

==> /proc/sys/vm/numa_zonelist_order <==
Node

==> /proc/sys/vm/oom_dump_tasks <==
1

==> /proc/sys/vm/oom_kill_allocating_task <==
0

==> /proc/sys/vm/overcommit_kbytes <==
0

==> /proc/sys/vm/overcommit_memory <==
1

==> /proc/sys/vm/overcommit_ratio <==
50

==> /proc/sys/vm/page-cluster <==
3

==> /proc/sys/vm/panic_on_oom <==
0

==> /proc/sys/vm/percpu_pagelist_fraction <==
0

==> /proc/sys/vm/stat_interval <==
1

==> /proc/sys/vm/stat_refresh <==

==> /proc/sys/vm/swappiness <==
60

==> /proc/sys/vm/user_reserve_kbytes <==
29305

==> /proc/sys/vm/vfs_cache_pressure <==
100

==> /proc/sys/vm/watermark_scale_factor <==
10

==> /proc/sys/vm/zone_reclaim_mode <==
0

Edit: please also run these and post the output

swapon
free
uname -a
df -T
root@iZj6cgi365ov99veqodfgnZ:~# head /proc/sys/vm/*
==> /proc/sys/vm/admin_reserve_kbytes <==
8192

==> /proc/sys/vm/compaction_proactiveness <==
20
head: cannot open '/proc/sys/vm/compact_memory' for reading: Permission denied

==> /proc/sys/vm/compact_unevictable_allowed <==
1

==> /proc/sys/vm/dirty_background_bytes <==
0

==> /proc/sys/vm/dirty_background_ratio <==
10

==> /proc/sys/vm/dirty_bytes <==
0

==> /proc/sys/vm/dirty_expire_centisecs <==
3000

==> /proc/sys/vm/dirty_ratio <==
30

==> /proc/sys/vm/dirtytime_expire_seconds <==
43200

==> /proc/sys/vm/dirty_writeback_centisecs <==
500
head: cannot open '/proc/sys/vm/drop_caches' for reading: Permission denied

==> /proc/sys/vm/extfrag_threshold <==
500

==> /proc/sys/vm/hugetlb_shm_group <==
0

==> /proc/sys/vm/laptop_mode <==
0

==> /proc/sys/vm/legacy_va_layout <==
0

==> /proc/sys/vm/lowmem_reserve_ratio <==
256     256     32      0       0

==> /proc/sys/vm/max_map_count <==
65530

==> /proc/sys/vm/memory_failure_early_kill <==
0

==> /proc/sys/vm/memory_failure_recovery <==
1

==> /proc/sys/vm/min_free_kbytes <==
45056

==> /proc/sys/vm/min_slab_ratio <==
5

==> /proc/sys/vm/min_unmapped_ratio <==
1

==> /proc/sys/vm/mmap_min_addr <==
65536

==> /proc/sys/vm/mmap_rnd_bits <==
28

==> /proc/sys/vm/mmap_rnd_compat_bits <==
8

==> /proc/sys/vm/nr_hugepages <==
0

==> /proc/sys/vm/nr_hugepages_mempolicy <==
0

==> /proc/sys/vm/nr_overcommit_hugepages <==
0

==> /proc/sys/vm/numa_stat <==
1

==> /proc/sys/vm/numa_zonelist_order <==
Node

==> /proc/sys/vm/oom_dump_tasks <==
1

==> /proc/sys/vm/oom_kill_allocating_task <==
0

==> /proc/sys/vm/overcommit_kbytes <==
0

==> /proc/sys/vm/overcommit_memory <==
0

==> /proc/sys/vm/overcommit_ratio <==
50

==> /proc/sys/vm/page-cluster <==
3

==> /proc/sys/vm/page_lock_unfairness <==
5

==> /proc/sys/vm/panic_on_oom <==
0

==> /proc/sys/vm/percpu_pagelist_high_fraction <==
0

==> /proc/sys/vm/stat_interval <==
1

==> /proc/sys/vm/stat_refresh <==

==> /proc/sys/vm/swappiness <==
0

==> /proc/sys/vm/unprivileged_userfaultfd <==
0

==> /proc/sys/vm/user_reserve_kbytes <==
50778

==> /proc/sys/vm/vfs_cache_pressure <==
100

==> /proc/sys/vm/watermark_boost_factor <==
15000

==> /proc/sys/vm/watermark_scale_factor <==
10

==> /proc/sys/vm/zone_reclaim_mode <==
0
root@iZj6cgi365ov99veqodfgnZ:~# swapon
NAME      TYPE SIZE USED PRIO
/swapfile file   2G   0B   -2
root@iZj6cgi365ov99veqodfgnZ:~# free
               total        used        free      shared  buff/cache   available
Mem:         1708636      984352       79376       38344      644908      490004
Swap:        2097148           0     2097148
root@iZj6cgi365ov99veqodfgnZ:~# uname -a
Linux iZj6cgi365ov99veqodfgnZ 5.15.0-86-generic #96-Ubuntu SMP Wed Sep 20 08:23:49 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
root@iZj6cgi365ov99veqodfgnZ:~# df -T
Filesystem     Type    1K-blocks     Used Available Use% Mounted on
tmpfs          tmpfs      170864     1164    169700   1% /run
/dev/vda3      ext4     61540412 19791160  39020044  34% /
tmpfs          tmpfs      854316        0    854316   0% /dev/shm
tmpfs          tmpfs        5120        0      5120   0% /run/lock
/dev/vda2      vfat       201615     6186    195429   4% /boot/efi
tmpfs          tmpfs      170860        4    170856   1% /run/user/0
overlay        overlay  61540412 19791160  39020044  34% /var/lib/docker/overlay2/7754829d0ad68c8acc8b50ed96ae87d8d882a996b87fe0b6821827e527487b62/merged

Thanks, that’s great. Somehow you have swappiness set to zero, which is almost certainly most of your problem.

But there are three settings worth getting right: one is swappiness, and the other two are covered in MKJ’s Opinionated Discourse Deployment Configuration:
/proc/sys/vm/overcommit_memory
/sys/kernel/mm/transparent_hugepage/enabled
/proc/sys/vm/swappiness

You can set these once, but you should also ensure they are set at boot time - see MKJ’s post, and check those files before you overwrite them.

I suspect something is happening at boot time to set the swappiness to zero. Possibly your hosting provider is trying to discourage swapping to SSD. But you must have swap, or else you must upgrade to at least 4 GB of RAM.

Perhaps try
egrep -r swappiness /etc/
to try to find where it is being set.

There is advice elsewhere on this forum to set swappiness to 10 - I prefer to leave it at the default of 60.
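For reference, checking and changing these at runtime is a one-liner each (a sketch; runtime changes are lost on reboot, so persistence still needs a sysctl.d file or a boot-time hook as described in MKJ’s post):

cat /proc/sys/vm/swappiness /proc/sys/vm/overcommit_memory
cat /sys/kernel/mm/transparent_hugepage/enabled
sysctl -w vm.swappiness=60                                  # runtime only
sysctl -w vm.overcommit_memory=1                            # runtime only
echo never > /sys/kernel/mm/transparent_hugepage/enabled    # runtime only; not a sysctl key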


Sorry… I’m not very proficient in Linux. Could you teach me specifically how to set it up?
I’ve never manually changed any system settings. However, I think the hosting provider might be limiting swap because this server appears to be running on an HDD (judging by the 2k IOPS cap).


Can doing these things permanently modify the settings you mentioned?

run:

echo 'sys.kernel.mm.transparent_hugepage.enabled=never' > /etc/sysctl.d/10-huge-pages.conf
echo 'vm.overcommit_memory=1' > /etc/sysctl.d/90-vm_overcommit_memory.conf

and change vm.swappiness = 60 in /etc/sysctl.conf

Now (I rebooted after changing all the settings; the commands below were run after the reboot):

root@iZj6cgi365ov99veqodfgnZ:~# head /proc/sys/vm/*
==> /proc/sys/vm/admin_reserve_kbytes <==
8192

==> /proc/sys/vm/compaction_proactiveness <==
20
head: cannot open '/proc/sys/vm/compact_memory' for reading: Permission denied

==> /proc/sys/vm/compact_unevictable_allowed <==
1

==> /proc/sys/vm/dirty_background_bytes <==
0

==> /proc/sys/vm/dirty_background_ratio <==
10

==> /proc/sys/vm/dirty_bytes <==
0

==> /proc/sys/vm/dirty_expire_centisecs <==
3000

==> /proc/sys/vm/dirty_ratio <==
30

==> /proc/sys/vm/dirtytime_expire_seconds <==
43200

==> /proc/sys/vm/dirty_writeback_centisecs <==
500
head: cannot open '/proc/sys/vm/drop_caches' for reading: Permission denied

==> /proc/sys/vm/extfrag_threshold <==
500

==> /proc/sys/vm/hugetlb_shm_group <==
0

==> /proc/sys/vm/laptop_mode <==
0

==> /proc/sys/vm/legacy_va_layout <==
0

==> /proc/sys/vm/lowmem_reserve_ratio <==
256     256     32      0       0

==> /proc/sys/vm/max_map_count <==
65530

==> /proc/sys/vm/memory_failure_early_kill <==
0

==> /proc/sys/vm/memory_failure_recovery <==
1

==> /proc/sys/vm/min_free_kbytes <==
45056

==> /proc/sys/vm/min_slab_ratio <==
5

==> /proc/sys/vm/min_unmapped_ratio <==
1

==> /proc/sys/vm/mmap_min_addr <==
65536

==> /proc/sys/vm/mmap_rnd_bits <==
28

==> /proc/sys/vm/mmap_rnd_compat_bits <==
8

==> /proc/sys/vm/nr_hugepages <==
0

==> /proc/sys/vm/nr_hugepages_mempolicy <==
0

==> /proc/sys/vm/nr_overcommit_hugepages <==
0

==> /proc/sys/vm/numa_stat <==
1

==> /proc/sys/vm/numa_zonelist_order <==
Node

==> /proc/sys/vm/oom_dump_tasks <==
1

==> /proc/sys/vm/oom_kill_allocating_task <==
0

==> /proc/sys/vm/overcommit_kbytes <==
0

==> /proc/sys/vm/overcommit_memory <==
1

==> /proc/sys/vm/overcommit_ratio <==
50

==> /proc/sys/vm/page-cluster <==
3

==> /proc/sys/vm/page_lock_unfairness <==
5

==> /proc/sys/vm/panic_on_oom <==
0

==> /proc/sys/vm/percpu_pagelist_high_fraction <==
0

==> /proc/sys/vm/stat_interval <==
1

==> /proc/sys/vm/stat_refresh <==

==> /proc/sys/vm/swappiness <==
60

==> /proc/sys/vm/unprivileged_userfaultfd <==
0

==> /proc/sys/vm/user_reserve_kbytes <==
50778

==> /proc/sys/vm/vfs_cache_pressure <==
100

==> /proc/sys/vm/watermark_boost_factor <==
15000

==> /proc/sys/vm/watermark_scale_factor <==
10

==> /proc/sys/vm/zone_reclaim_mode <==
0
root@iZj6cgi365ov99veqodfgnZ:~# cat /sys/kernel/mm/transparent_hugepage/enabled
always [madvise] never

It looks like it’s running well! Thank you all sincerely for your help!


That’s good progress! For transparent hugepages, I recommend aiming for this:

# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
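Since that setting lives under /sys rather than /proc/sys, a sysctl.d entry won’t persist it across reboots. One common approach, for reference, is a small one-shot systemd unit along these lines (a sketch, not taken from MKJ’s post - check his topic for the preferred method):

cat > /etc/systemd/system/disable-thp.service <<'EOF'
[Unit]
Description=Disable transparent hugepages
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo never > /sys/kernel/mm/transparent_hugepage/enabled'

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable --now disable-thp.service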