char
January 29, 2026, 1:27pm
1
Hi everyone, we’re trying to troubleshoot the issues our Discourse installation has been having in the last few days.
We’re running a Contabo cloud VPS (8 cores / 32 GB / NVMe) for a userbase of ~150 people; we’ve had few issues in the last two and a half years.
Since last weekend, the instance has been having moments of near-unusability due to unusually high CPU usage.
To troubleshoot this, we put the forum in Read Only mode yesterday. I have a few Grafana graphs showing our situation - the three markers indicate when we turned on Read Only, when we restarted the container, and when we turned Read Only off.
(a couple more graphs are on Imgur)
As you can see, usage is pretty high. This only started a few days ago, so there’s some kind of issue we’ve yet to pin down.
The unusual thing we noticed is that our host reboots the VPS overnight on Sundays, and last weekend the database didn’t manage to complete one or more transactions.
We think this might have created an inconsistency in some internal Discourse process, and that inconsistency is now fighting against user activity - but we’re only hypothesizing here.
After asking the AI assistant before opening a new thread, I can add that Sidekiq has some outlier jobs:
- Jobs::ProcessBadgeBacklog takes about 2-5 seconds
- the last DestroyOldDeletionStubs run took 475 seconds
- the last DirectoryRefreshDaily run took 580 seconds
- the last TopRefreshToday run took 18 seconds
So, the question is - what could be causing this kind of situation, with the userbase and the hardware we’re using?
Is there anything more specific we should look into?
I don’t think our userbase should be causing emergency situations, but I’m not married to any of the hypotheses we’ve had so far, and we’d be very grateful for pointers to anything else we could look into.
Thanks!
3 likes
Ed_S
(Ed S)
January 29, 2026, 4:52pm
2
I think you’re somehow short of RAM. You have 32 GB, but possibly some process or processes are using more than expected.
Perhaps capture ps aux in a window about 120 characters wide. Here’s mine:
root@rc-debian-hel:~# ps uax
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 167752 9096 ? Ss 2025 4:24 /sbin/init
root 2 0.0 0.0 0 0 ? S 2025 0:00 [kthreadd]
root 3 0.0 0.0 0 0 ? I< 2025 0:00 [rcu_gp]
root 4 0.0 0.0 0 0 ? I< 2025 0:00 [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< 2025 0:00 [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< 2025 0:00 [netns]
root 8 0.0 0.0 0 0 ? I< 2025 0:00 [kworker/0:0H-events_highpri]
root 10 0.0 0.0 0 0 ? I< 2025 0:00 [mm_percpu_wq]
root 11 0.0 0.0 0 0 ? S 2025 0:00 [rcu_tasks_rude_]
root 12 0.0 0.0 0 0 ? S 2025 0:00 [rcu_tasks_trace]
root 13 0.0 0.0 0 0 ? S 2025 4:48 [ksoftirqd/0]
root 14 0.0 0.0 0 0 ? I 2025 45:30 [rcu_sched]
root 15 0.0 0.0 0 0 ? S 2025 0:20 [migration/0]
root 16 0.0 0.0 0 0 ? S 2025 0:00 [idle_inject/0]
root 18 0.0 0.0 0 0 ? S 2025 0:00 [cpuhp/0]
root 19 0.0 0.0 0 0 ? S 2025 0:00 [cpuhp/1]
root 20 0.0 0.0 0 0 ? S 2025 0:00 [idle_inject/1]
root 21 0.0 0.0 0 0 ? S 2025 0:21 [migration/1]
root 22 0.0 0.0 0 0 ? S 2025 4:35 [ksoftirqd/1]
root 24 0.0 0.0 0 0 ? I< 2025 0:00 [kworker/1:0H-events_highpri]
root 25 0.0 0.0 0 0 ? S 2025 0:00 [kdevtmpfs]
root 26 0.0 0.0 0 0 ? I< 2025 0:00 [inet_frag_wq]
root 28 0.0 0.0 0 0 ? S 2025 0:00 [kauditd]
root 29 0.0 0.0 0 0 ? S 2025 0:02 [khungtaskd]
root 30 0.0 0.0 0 0 ? S 2025 0:00 [oom_reaper]
root 31 0.0 0.0 0 0 ? I< 2025 0:00 [writeback]
root 32 0.0 0.0 0 0 ? S 2025 8:40 [kcompactd0]
root 33 0.0 0.0 0 0 ? SN 2025 0:00 [ksmd]
root 34 0.0 0.0 0 0 ? SN 2025 0:00 [khugepaged]
root 80 0.0 0.0 0 0 ? I< 2025 0:00 [kintegrityd]
root 81 0.0 0.0 0 0 ? I< 2025 0:00 [kblockd]
root 82 0.0 0.0 0 0 ? I< 2025 0:00 [blkcg_punt_bio]
root 83 0.0 0.0 0 0 ? I< 2025 0:00 [tpm_dev_wq]
root 84 0.0 0.0 0 0 ? I< 2025 0:00 [ata_sff]
root 85 0.0 0.0 0 0 ? I< 2025 0:00 [md]
root 86 0.0 0.0 0 0 ? I< 2025 0:00 [edac-poller]
root 87 0.0 0.0 0 0 ? I< 2025 0:00 [devfreq_wq]
root 88 0.0 0.0 0 0 ? S 2025 0:00 [watchdogd]
root 90 0.0 0.0 0 0 ? I< 2025 1:23 [kworker/0:1H-kblockd]
root 91 0.0 0.0 0 0 ? S 2025 2:15 [kswapd0]
root 92 0.0 0.0 0 0 ? S 2025 0:00 [ecryptfs-kthrea]
root 94 0.0 0.0 0 0 ? I< 2025 0:00 [kthrotld]
root 95 0.0 0.0 0 0 ? S 2025 0:00 [irq/51-aerdrv]
root 96 0.0 0.0 0 0 ? S 2025 0:00 [irq/51-pciehp]
root 97 0.0 0.0 0 0 ? S 2025 0:00 [irq/52-aerdrv]
root 98 0.0 0.0 0 0 ? S 2025 0:00 [irq/52-pciehp]
root 99 0.0 0.0 0 0 ? S 2025 0:00 [irq/53-aerdrv]
root 100 0.0 0.0 0 0 ? S 2025 0:00 [irq/53-pciehp]
root 101 0.0 0.0 0 0 ? S 2025 0:00 [irq/54-aerdrv]
root 102 0.0 0.0 0 0 ? S 2025 0:00 [irq/54-pciehp]
root 103 0.0 0.0 0 0 ? S 2025 0:00 [irq/55-aerdrv]
root 104 0.0 0.0 0 0 ? S 2025 0:00 [irq/55-pciehp]
root 105 0.0 0.0 0 0 ? S 2025 0:00 [irq/56-aerdrv]
root 106 0.0 0.0 0 0 ? S 2025 0:00 [irq/56-pciehp]
root 107 0.0 0.0 0 0 ? S 2025 0:00 [irq/57-aerdrv]
root 108 0.0 0.0 0 0 ? S 2025 0:00 [irq/57-pciehp]
root 109 0.0 0.0 0 0 ? S 2025 0:00 [irq/58-aerdrv]
root 110 0.0 0.0 0 0 ? S 2025 0:00 [irq/58-pciehp]
root 111 0.0 0.0 0 0 ? S 2025 0:00 [irq/59-aerdrv]
root 112 0.0 0.0 0 0 ? S 2025 0:00 [irq/59-pciehp]
root 113 0.0 0.0 0 0 ? S 2025 0:00 [irq/49-ACPI:Ged]
root 114 0.0 0.0 0 0 ? I< 2025 0:00 [acpi_thermal_pm]
root 116 0.0 0.0 0 0 ? I< 2025 0:00 [mld]
root 117 0.0 0.0 0 0 ? I< 2025 0:00 [ipv6_addrconf]
root 126 0.0 0.0 0 0 ? I< 2025 0:00 [kstrp]
root 129 0.0 0.0 0 0 ? I< 2025 0:00 [zswap-shrink]
root 130 0.0 0.0 0 0 ? I< 2025 0:00 [kworker/u5:0]
root 134 0.0 0.0 0 0 ? I< 2025 0:00 [cryptd]
root 173 0.0 0.0 0 0 ? I< 2025 0:00 [charger_manager]
root 197 0.0 0.0 0 0 ? I< 2025 1:21 [kworker/1:1H-kblockd]
root 209 0.0 0.0 0 0 ? S 2025 0:03 [hwrng]
root 210 0.0 0.0 0 0 ? S 2025 0:00 [scsi_eh_0]
root 221 0.0 0.0 0 0 ? I< 2025 0:00 [scsi_tmf_0]
root 300 0.0 0.0 0 0 ? I< 2025 0:00 [raid5wq]
root 346 0.0 0.0 0 0 ? S 2025 2:47 [jbd2/sda1-8]
root 347 0.0 0.0 0 0 ? I< 2025 0:00 [ext4-rsv-conver]
root 414 0.0 0.2 67632 8048 ? S<s 2025 7:05 /lib/systemd/systemd-journald
root 448 0.0 0.0 0 0 ? I< 2025 0:00 [kaluad]
root 454 0.0 0.0 0 0 ? I< 2025 0:00 [kmpath_rdacd]
root 455 0.0 0.0 0 0 ? I< 2025 0:00 [kmpathd]
root 456 0.0 0.0 0 0 ? I< 2025 0:00 [kmpath_handlerd]
root 457 0.0 0.6 289888 25700 ? SLsl 2025 8:11 /sbin/multipathd -d -s
root 459 0.0 0.0 10720 2760 ? Ss 2025 0:06 /lib/systemd/systemd-udevd
systemd+ 610 0.0 0.0 88712 1724 ? Ssl 2025 0:07 /lib/systemd/systemd-timesyncd
systemd+ 628 0.0 0.1 16456 4992 ? Ss 2025 0:39 /lib/systemd/systemd-networkd
systemd+ 630 0.0 0.0 26076 3632 ? Ss 2025 0:07 /lib/systemd/systemd-resolved
root 671 0.0 0.0 82124 2620 ? Ssl 2025 6:08 /usr/sbin/irqbalance --foreground
root 677 0.0 0.0 6540 2040 ? Ss 2025 0:08 /usr/sbin/cron -f -P
root 678 0.0 0.0 79464 960 ? Ssl 2025 64:36 /usr/sbin/qemu-ga
syslog 679 0.0 0.1 222044 4080 ? Ssl 2025 1:28 /usr/sbin/rsyslogd -n -iNONE
root 683 0.0 0.1 109620 7304 ? Ssl 2025 20:29 /usr/bin/python3 /usr/share/unattended-upgrades/unattended
root 692 0.1 0.4 1861484 17464 ? Ssl 2025 100:02 /usr/bin/containerd
daemon 698 0.0 0.0 3512 1024 ? Ss 2025 0:00 /usr/sbin/atd -f
root 708 0.0 0.0 5236 436 ttyAMA0 Ss+ 2025 0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,96
root 709 0.0 0.0 5236 432 ttyS0 Ss+ 2025 0:00 /sbin/agetty -o -p -- \u --keep-baud 115200,57600,38400,96
root 712 0.0 0.0 15196 3744 ? Ss 2025 6:43 sshd: /usr/sbin/sshd -D [listener] 0 of 10-100 startups
root 713 0.0 0.0 5612 664 tty1 Ss+ 2025 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root 732 0.0 0.7 2488860 30672 ? Ssl 2025 16:37 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/con
systemd+ 955250 0.0 0.5 908900 19820 ? Ss Jan11 0:00 postgres: 15/main: discourse discourse [local] idle
root 978311 0.0 0.0 7064 1608 ? Ss+ Jan11 0:00 bash
root 2844401 0.0 0.0 0 0 ? I Jan28 0:58 [kworker/0:2-events]
root 2919926 0.0 0.0 0 0 ? I 06:00 0:18 [kworker/1:0-events]
root 2929247 0.0 0.0 0 0 ? I 08:03 0:00 [kworker/1:2-events]
root 2947488 0.0 0.0 0 0 ? I 11:58 0:00 [kworker/0:3-events]
root 2958380 0.0 0.0 0 0 ? I 14:18 0:00 [kworker/u4:2-flush-8:0]
systemd+ 2960448 0.2 5.7 928984 224576 ? Ss 14:47 0:16 postgres: 15/main: discourse discourse [local] idle
systemd+ 2966096 0.2 4.1 923428 160800 ? Ss 16:03 0:05 postgres: 15/main: discourse discourse [local] idle
root 2966159 0.0 0.0 0 0 ? I 16:04 0:00 [kworker/u4:3-events_unbound]
root 2966695 0.0 0.0 0 0 ? I 16:11 0:00 [kworker/u4:0-events_unbound]
root 2967455 0.0 0.2 18476 9584 ? Ss 16:21 0:00 sshd: root@pts/0
root 2967537 0.0 0.1 8300 4748 pts/0 Ss 16:21 0:00 -bash
systemd+ 2968782 0.0 3.0 916952 120500 ? Ss 16:35 0:00 postgres: 15/main: discourse discourse [local] idle
systemd+ 2969962 0.0 0.6 908928 23584 ? Ss 16:50 0:00 postgres: 15/main: discourse discourse [local] idle
1000 2969995 0.0 0.0 15928 2824 ? S 16:50 0:00 sleep 1
root 2969996 0.0 0.0 10412 2992 pts/0 R+ 16:50 0:00 ps uax
root 4019702 0.0 0.1 1237968 7740 ? Sl Jan01 4:06 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 6b624
root 4019724 0.0 0.0 6800 324 pts/0 Ss+ Jan01 0:00 /bin/bash /sbin/boot
root 4019750 0.0 0.0 1597212 344 ? Sl Jan01 0:02 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-po
root 4019755 0.0 0.0 1671004 0 ? Sl Jan01 0:02 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 80
root 4019763 0.0 0.0 1671004 1580 ? Sl Jan01 0:02 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-po
root 4019769 0.0 0.0 1744796 0 ? Sl Jan01 0:02 /usr/bin/docker-proxy -proto tcp -host-ip :: -host-port 44
root 4023771 0.0 0.0 2236 20 pts/0 S+ Jan01 0:35 /usr/bin/runsvdir -P /etc/service
root 4023772 0.0 0.0 2084 0 ? Ss Jan01 0:00 runsv cron
root 4023773 0.0 0.0 2084 0 ? Ss Jan01 0:00 runsv rsyslog
root 4023774 0.0 0.0 2084 8 ? Ss Jan01 0:00 runsv unicorn
root 4023775 0.0 0.0 2084 28 ? Ss Jan01 0:00 runsv nginx
root 4023776 0.0 0.0 2084 0 ? Ss Jan01 0:00 runsv postgres
root 4023777 0.0 0.0 2084 0 ? Ss Jan01 0:00 runsv redis
root 4023778 0.0 0.0 6692 868 ? S Jan01 0:06 cron -f
root 4023779 0.0 0.0 2228 792 ? S Jan01 0:01 svlogd /var/log/postgres
root 4023780 0.0 0.0 54152 2276 ? S Jan01 0:00 nginx: master process /usr/sbin/nginx
systemd+ 4023781 0.0 0.4 905940 19092 ? S Jan01 2:13 /usr/lib/postgresql/15/bin/postmaster -D /etc/postgresql/1
root 4023782 0.0 0.0 152356 208 ? Sl Jan01 0:02 rsyslogd -n
root 4023783 0.0 0.0 2228 836 ? S Jan01 0:02 svlogd /var/log/redis
1000 4023784 0.0 0.0 20592 1664 ? S Jan01 22:38 /bin/bash ./config/unicorn_launcher -E production -c confi
message+ 4023785 0.4 0.6 102368 26700 ? Sl Jan01 165:39 /usr/bin/redis-server *:6379
www-data 4023796 0.1 3.1 188008 124596 ? S Jan01 75:59 nginx: worker process
www-data 4023797 0.1 1.2 98528 49972 ? S Jan01 76:53 nginx: worker process
www-data 4023798 0.0 0.0 54352 1088 ? S Jan01 0:15 nginx: cache manager process
systemd+ 4023807 0.0 8.3 906192 327344 ? Ss Jan01 3:19 postgres: 15/main: checkpointer
systemd+ 4023808 0.0 0.6 906088 24076 ? Ss Jan01 0:24 postgres: 15/main: background writer
systemd+ 4023810 0.0 0.4 905940 18260 ? Ss Jan01 9:48 postgres: 15/main: walwriter
systemd+ 4023811 0.0 0.0 907536 2312 ? Ss Jan01 0:29 postgres: 15/main: autovacuum launcher
systemd+ 4023812 0.0 0.0 907512 2456 ? Ss Jan01 0:01 postgres: 15/main: logical replication launcher
1000 4023813 0.0 3.8 1540732 148552 ? Sl Jan01 7:41 unicorn master -E production -c config/unicorn.conf.rb
systemd+ 4023881 0.0 0.4 919884 16692 ? Ss Jan01 0:03 postgres: 15/main: discourse discourse [local] idle
1000 4024290 1.0 9.4 7103052 368788 ? SNl Jan01 410:25 sidekiq 7.3.9 discourse [0 of 5 busy]
1000 4024313 1.8 10.3 6999048 404032 ? Sl Jan01 728:22 unicorn worker[0] -E production -c config/unicorn.conf.rb
1000 4024339 0.0 9.0 6931980 354124 ? Sl Jan01 37:50 unicorn worker[1] -E production -c config/unicorn.conf.rb
1000 4024397 0.0 7.9 6921672 309392 ? Sl Jan01 14:27 unicorn worker[2] -E production -c config/unicorn.conf.rb
1000 4024478 0.0 6.5 6936200 255776 ? Sl Jan01 12:53 unicorn worker[3] -E production -c config/unicorn.conf.rb
systemd+ 4025084 0.0 1.0 911596 41712 ? Ss Jan01 0:05 postgres: 15/main: discourse discourse [local] idle
systemd+ 4035965 0.0 0.9 908812 35216 ? Ss Jan01 0:38 postgres: 15/main: discourse discourse [local] idle
systemd+ 4044886 0.0 0.8 908812 34968 ? Ss Jan01 0:39 postgres: 15/main: discourse discourse [local] idle
I don’t believe there’s any sensitive data there. If you see something, edit it out before posting!
1 like
Ed_S
(Ed S)
January 29, 2026, 4:56pm
3
I note a lot of system (kernel) time during the busy periods, which often means paging. You can always run
vmstat 5 5
to get a snapshot of how the virtual memory system is coping.
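As a sketch of what to look for in that snapshot (assuming the standard vmstat column layout, with the si/so pages-swapped-in/out columns in positions 7 and 8; the heredoc below holds made-up sample rows, not real output):

```shell
# Sum the si/so columns across all sample lines; sustained nonzero
# values mean the kernel is paging, i.e. the box is short of RAM.
# In practice pipe a real `vmstat 5 5` run into the awk script.
awk 'NR > 2 { si += $7; so += $8 }
     END { if (si + so > 0) print "paging detected"; else print "no swap activity" }' <<'EOF'
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 2  0 102400  51200   8000 1200000   12  340  5000   600  900 1500 30 20 40 10  0
 1  1 103200  49800   8100 1190000    8  280  4800   550  880 1400 28 22 40 10  0
EOF
```

High values in the wa (I/O wait) column alongside nonzero si/so point the same way: the disk is busy servicing swap rather than the application.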
1 like
char
January 29, 2026, 5:02pm
4
Thanks, we’ll look into this suggestion.
char
January 29, 2026, 8:50pm
5
This is what ps aux captured:
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.0 166172 8424 ? Ss Jan25 2:44 /sbin/init
root 2 0.0 0.0 0 0 ? S Jan25 0:01 [kthreadd]
root 3 0.0 0.0 0 0 ? I< Jan25 0:00 [rcu_gp]
root 4 0.0 0.0 0 0 ? I< Jan25 0:00 [rcu_par_gp]
root 5 0.0 0.0 0 0 ? I< Jan25 0:00 [slub_flushwq]
root 6 0.0 0.0 0 0 ? I< Jan25 0:00 [netns]
root 8 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/0:0H-events_highpri]
root 10 0.0 0.0 0 0 ? I< Jan25 0:00 [mm_percpu_wq]
root 11 0.0 0.0 0 0 ? S Jan25 0:00 [rcu_tasks_rude_]
root 12 0.0 0.0 0 0 ? S Jan25 0:00 [rcu_tasks_trace]
root 13 0.0 0.0 0 0 ? S Jan25 1:38 [ksoftirqd/0]
root 14 1.2 0.0 0 0 ? I Jan25 82:25 [rcu_sched]
root 15 0.0 0.0 0 0 ? S Jan25 0:07 [migration/0]
root 16 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/0]
root 18 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/0]
root 19 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/1]
root 20 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/1]
root 21 0.0 0.0 0 0 ? S Jan25 0:06 [migration/1]
root 22 0.0 0.0 0 0 ? S Jan25 1:25 [ksoftirqd/1]
root 24 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/1:0H-events_highpri]
root 25 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/2]
root 26 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/2]
root 27 0.0 0.0 0 0 ? S Jan25 0:08 [migration/2]
root 28 0.0 0.0 0 0 ? S Jan25 1:29 [ksoftirqd/2]
root 30 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/2:0H-events_highpri]
root 31 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/3]
root 32 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/3]
root 33 0.0 0.0 0 0 ? S Jan25 0:08 [migration/3]
root 34 0.0 0.0 0 0 ? S Jan25 1:23 [ksoftirqd/3]
root 36 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/3:0H-kblockd]
root 37 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/4]
root 38 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/4]
root 39 0.0 0.0 0 0 ? S Jan25 0:07 [migration/4]
root 40 0.0 0.0 0 0 ? S Jan25 1:22 [ksoftirqd/4]
root 42 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/4:0H-events_highpri]
root 43 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/5]
root 44 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/5]
root 45 0.0 0.0 0 0 ? S Jan25 0:07 [migration/5]
root 46 0.3 0.0 0 0 ? S Jan25 23:16 [ksoftirqd/5]
root 48 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/5:0H-events_highpri]
root 49 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/6]
root 50 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/6]
root 51 0.0 0.0 0 0 ? S Jan25 0:08 [migration/6]
root 52 0.0 0.0 0 0 ? S Jan25 1:21 [ksoftirqd/6]
root 54 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/6:0H-events_highpri]
root 55 0.0 0.0 0 0 ? S Jan25 0:00 [cpuhp/7]
root 56 0.0 0.0 0 0 ? S Jan25 0:00 [idle_inject/7]
root 57 0.0 0.0 0 0 ? S Jan25 0:07 [migration/7]
root 58 0.0 0.0 0 0 ? S Jan25 3:00 [ksoftirqd/7]
root 60 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/7:0H-events_highpri]
root 61 0.0 0.0 0 0 ? S Jan25 0:00 [kdevtmpfs]
root 62 0.0 0.0 0 0 ? I< Jan25 0:00 [inet_frag_wq]
root 63 0.0 0.0 0 0 ? S Jan25 0:00 [kauditd]
root 64 0.0 0.0 0 0 ? S Jan25 0:06 [khungtaskd]
root 65 0.0 0.0 0 0 ? S Jan25 0:00 [oom_reaper]
root 66 0.0 0.0 0 0 ? I< Jan25 0:00 [writeback]
root 67 0.7 0.0 0 0 ? S Jan25 51:20 [kcompactd0]
root 68 0.0 0.0 0 0 ? SN Jan25 0:00 [ksmd]
root 69 0.0 0.0 0 0 ? SN Jan25 0:55 [khugepaged]
root 116 0.0 0.0 0 0 ? I< Jan25 0:00 [kintegrityd]
root 117 0.0 0.0 0 0 ? I< Jan25 0:00 [kblockd]
root 118 0.0 0.0 0 0 ? I< Jan25 0:00 [blkcg_punt_bio]
root 119 0.0 0.0 0 0 ? I< Jan25 0:00 [tpm_dev_wq]
root 120 0.0 0.0 0 0 ? I< Jan25 0:00 [ata_sff]
root 121 0.0 0.0 0 0 ? I< Jan25 0:00 [md]
root 122 0.0 0.0 0 0 ? I< Jan25 0:00 [edac-poller]
root 123 0.0 0.0 0 0 ? I< Jan25 0:00 [devfreq_wq]
root 124 0.0 0.0 0 0 ? S Jan25 0:00 [watchdogd]
root 126 0.0 0.0 0 0 ? I< Jan25 0:34 [kworker/2:1H-kblockd]
root 129 2.7 0.0 0 0 ? S Jan25 185:50 [kswapd0]
root 131 0.0 0.0 0 0 ? S Jan25 0:00 [ecryptfs-kthrea]
root 133 0.0 0.0 0 0 ? I< Jan25 0:00 [kthrotld]
root 134 0.0 0.0 0 0 ? I< Jan25 0:00 [acpi_thermal_pm]
root 136 0.0 0.0 0 0 ? S Jan25 0:00 [scsi_eh_0]
root 137 0.0 0.0 0 0 ? I< Jan25 0:00 [scsi_tmf_0]
root 138 0.0 0.0 0 0 ? S Jan25 0:00 [scsi_eh_1]
root 139 0.0 0.0 0 0 ? I< Jan25 0:00 [scsi_tmf_1]
root 141 0.0 0.0 0 0 ? I< Jan25 0:00 [vfio-irqfd-clea]
root 142 0.0 0.0 0 0 ? I< Jan25 0:00 [mld]
root 143 0.0 0.0 0 0 ? I< Jan25 0:00 [ipv6_addrconf]
root 156 0.0 0.0 0 0 ? I< Jan25 0:00 [kstrp]
root 160 0.0 0.0 0 0 ? I< Jan25 0:00 [zswap-shrink]
root 162 0.0 0.0 0 0 ? I< Jan25 0:00 [kworker/u17:0]
root 167 0.0 0.0 0 0 ? I< Jan25 0:00 [charger_manager]
root 189 0.0 0.0 0 0 ? I< Jan25 0:31 [kworker/5:1H-kblockd]
root 210 0.0 0.0 0 0 ? S Jan25 0:00 [scsi_eh_2]
root 211 0.0 0.0 0 0 ? I< Jan25 0:00 [scsi_tmf_2]
root 225 0.0 0.0 0 0 ? I< Jan25 0:30 [kworker/4:1H-kblockd]
root 226 0.0 0.0 0 0 ? I< Jan25 0:30 [kworker/0:1H-kblockd]
root 246 0.0 0.0 0 0 ? I< Jan25 0:32 [kworker/3:1H-kblockd]
root 248 0.0 0.0 0 0 ? S Jan25 3:43 [jbd2/sda3-8]
root 249 0.0 0.0 0 0 ? I< Jan25 0:00 [ext4-rsv-conver]
root 253 0.0 0.0 0 0 ? I< Jan25 0:31 [kworker/6:1H-kblockd]
root 254 0.0 0.0 0 0 ? I< Jan25 0:32 [kworker/1:1H-kblockd]
root 290 0.0 0.0 97136 23144 ? S<s Jan25 6:03 /lib/systemd/systemd-journald
root 326 0.0 0.0 0 0 ? I< Jan25 0:31 [kworker/7:1H-kblockd]
root 328 0.0 0.0 25336 3400 ? Ss Jan25 0:06 /lib/systemd/systemd-udevd
root 373 0.0 0.0 0 0 ? I< Jan25 0:00 [cryptd]
root 393 0.0 0.0 0 0 ? S Jan25 0:00 [jbd2/sda2-8]
root 394 0.0 0.0 0 0 ? I< Jan25 0:00 [ext4-rsv-conver]
systemd+ 465 0.0 0.0 89356 2356 ? Ssl Jan25 0:05 /lib/systemd/systemd-timesyncd
root 484 0.0 0.0 240352 4752 ? Ssl Jan25 2:33 /usr/libexec/accounts-daemon
root 485 0.0 0.0 9492 2392 ? Ss Jan25 0:13 /usr/sbin/cron -f -P
message+ 486 0.0 0.0 8908 4056 ? Ss Jan25 0:04 @dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation --syslog-only
root 494 0.0 0.0 82768 2216 ? Ssl Jan25 0:48 /usr/sbin/irqbalance --foreground
root 500 0.0 0.0 35488 7888 ? Ss Jan25 0:02 /usr/bin/python3 /usr/bin/networkd-dispatcher --run-startup-triggers
syslog 502 0.0 0.0 222404 3568 ? Ssl Jan25 2:23 /usr/sbin/rsyslogd -n -iNONE
root 503 0.0 0.0 15368 5404 ? Ss Jan25 0:05 /lib/systemd/systemd-logind
root 548 0.0 0.0 234484 3936 ? Ssl Jan25 0:00 /usr/libexec/polkitd --no-debug
systemd+ 579 0.0 0.0 16372 3340 ? Ss Jan25 0:39 /lib/systemd/systemd-networkd
systemd+ 581 0.0 0.0 25664 5360 ? Ss Jan25 1:30 /lib/systemd/systemd-resolved
root 593 0.0 0.0 112444 3940 ? Ssl Jan25 0:00 /usr/bin/python3 /usr/share/unattended-upgrades/unattended-upgrade-shutdown --wait-for-signal
root 594 0.2 0.0 2246688 20776 ? Ssl Jan25 19:40 /usr/bin/containerd
root 598 0.0 0.0 8772 884 tty1 Ss+ Jan25 0:00 /sbin/agetty -o -p -- \u --noclear tty1 linux
root 599 0.0 0.0 15424 4740 ? Ss Jan25 3:07 sshd: /usr/sbin/sshd -D [listener] 2 of 10-100 startups
root 1674 0.0 0.0 41208 3116 ? Ss Jan25 0:11 /usr/lib/postfix/sbin/master -w
root 969899 1.8 0.1 2939864 52584 ? Ssl Jan28 34:31 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 970216 0.1 0.0 1238360 4936 ? Sl Jan28 2:02 /usr/bin/containerd-shim-runc-v2 -namespace moby -id cc1a7cdbd0b363f7d92723dfdfeff47b80413435ef4f7e09219b80f8dc24eaa1 -address /run/containerd/
root 970224 0.1 0.0 1238360 4980 ? Sl Jan28 1:55 /usr/bin/containerd-shim-runc-v2 -namespace moby -id dcb69184548336ab022d898a0882ac4942d0d977e4dfe2057674af5b30942573 -address /run/containerd/
root 970294 0.1 0.0 1238360 5056 ? Sl Jan28 3:12 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 0e9f01d6833bef19102d3c6696c24d4621cd3561318ac6c890fcdc02dd718018 -address /run/containerd/
root 970345 0.0 0.0 1238360 4376 ? Sl Jan28 1:37 /usr/bin/containerd-shim-runc-v2 -namespace moby -id bec6a078fbb780c7edd64805b52554d8087a600ca905160f6832a9e3f5f6d491 -address /run/containerd/
nobody 970371 0.1 0.0 727104 12844 ? Ssl Jan28 2:49 /bin/node_exporter --path.rootfs=/host
root 970373 0.1 0.0 727256 16384 ? Ssl Jan28 2:29 /bin/blackbox_exporter --config.file=/config/blackbox.yaml
root 970392 13.6 0.1 214328 56412 ? Ssl Jan28 255:29 /usr/bin/cadvisor -logtostderr
root 970462 2.2 0.0 716432 13548 ? Ssl Jan28 41:29 /go/bin/docker_state_exporter -listen-address=:8080
root 970548 0.0 0.0 1238360 4832 ? Sl Jan28 1:46 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 917c9cb04189f0c7a93a6f207be3b09748f5ad220dba4eb467de40fb089faa70 -address /run/containerd/
root 970601 0.0 0.0 9072 348 ? Ss Jan28 0:00 nginx: master process nginx -g daemon off;
root 970760 0.0 0.0 1082024 212 ? Sl Jan28 0:01 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 9090 -container-ip 172.19.0.3 -container-port 9090
root 970810 0.0 0.0 1238360 4600 ? Sl Jan28 1:29 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 580b27312f395c80fb8e3187da389c064380cf30af3f13ccd902d5388915667f -address /run/containerd/
nobody 970875 2.0 0.6 7278492 203784 ? Ssl Jan28 38:01 /bin/prometheus --config.file=/app.cfg/prometheus.yaml --storage.tsdb.path=/prometheus --storage.tsdb.retention.time=60d --web.console.librarie
systemd+ 970921 0.0 0.0 9604 3232 ? S Jan28 0:11 nginx: worker process
systemd+ 970922 0.0 0.0 9604 3392 ? S Jan28 0:03 nginx: worker process
systemd+ 970923 0.0 0.0 9604 2528 ? S Jan28 0:04 nginx: worker process
systemd+ 970924 0.0 0.0 9604 2876 ? S Jan28 0:00 nginx: worker process
systemd+ 970925 0.0 0.0 9604 2556 ? S Jan28 0:13 nginx: worker process
systemd+ 970926 0.0 0.0 9604 2956 ? S Jan28 0:00 nginx: worker process
systemd+ 970928 0.0 0.0 9604 2884 ? S Jan28 0:02 nginx: worker process
systemd+ 970930 0.0 0.0 9604 2924 ? S Jan28 0:00 nginx: worker process
root 1217453 0.0 0.0 298148 13380 ? Ssl Jan28 0:03 /usr/libexec/packagekitd
root 1728150 0.0 0.0 1229488 648 ? Sl Jan28 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 443 -container-ip 172.20.0.2 -container-port 443
root 1728164 0.0 0.0 1230000 784 ? Sl Jan28 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 80 -container-ip 172.20.0.2 -container-port 80
root 1728191 0.0 0.0 1238360 5048 ? Sl Jan28 0:36 /usr/bin/containerd-shim-runc-v2 -namespace moby -id cab58782fe2d7a693718257181aa6a395a10623870f2f1c4b447c244a08065ee -address /run/containerd/
root 1728223 0.0 0.0 6936 620 pts/0 Ss+ Jan28 0:00 /bin/bash /sbin/boot
root 1728342 0.0 0.0 2500 184 pts/0 S+ Jan28 0:05 /usr/bin/runsvdir -P /etc/service
root 1728343 0.0 0.0 2348 0 ? Ss Jan28 0:00 runsv cron
root 1728344 0.0 0.0 2348 348 ? Ss Jan28 0:00 runsv rsyslog
root 1728345 0.0 0.0 2348 12 ? Ss Jan28 0:00 runsv postgres
root 1728346 0.0 0.0 2348 132 ? Ss Jan28 0:00 runsv unicorn
root 1728347 0.0 0.0 2348 364 ? Ss Jan28 0:00 runsv nginx
root 1728348 0.0 0.0 2348 0 ? Ss Jan28 0:00 runsv redis
root 1728349 0.0 0.0 6612 688 ? S Jan28 0:02 cron -f
root 1728350 0.0 0.0 2496 164 ? S Jan28 0:01 svlogd /var/log/redis
root 1728352 0.0 0.0 152128 204 ? Sl Jan28 0:01 rsyslogd -n
root 1728353 0.0 0.0 2496 484 ? S Jan28 0:20 svlogd /var/log/postgres
systemd+ 1728355 0.0 0.5 7880552 156656 ? S Jan28 1:03 /usr/lib/postgresql/13/bin/postmaster -D /etc/postgresql/13/main
root 1728356 0.0 0.0 54596 1868 ? S Jan28 0:00 nginx: master process /usr/sbin/nginx
message+ 1728358 6.2 0.1 335604 47560 ? Sl Jan28 73:09 /usr/bin/redis-server *:6379
admin 1728359 0.1 0.0 15272 1548 ? S Jan28 1:56 /bin/bash ./config/unicorn_launcher -E production -c config/unicorn.conf.rb
www-data 1728369 0.5 0.0 62048 12184 ? S Jan28 6:05 nginx: worker process
www-data 1728370 0.6 0.0 62344 10432 ? S Jan28 7:15 nginx: worker process
www-data 1728371 0.5 0.0 62636 11008 ? S Jan28 6:13 nginx: worker process
www-data 1728372 0.5 0.0 62216 10964 ? S Jan28 6:07 nginx: worker process
www-data 1728373 0.6 0.0 62652 11584 ? S Jan28 7:10 nginx: worker process
www-data 1728374 0.6 0.0 62048 10920 ? S Jan28 7:09 nginx: worker process
www-data 1728375 0.5 0.0 61460 10440 ? S Jan28 6:15 nginx: worker process
www-data 1728376 0.4 0.0 66788 14620 ? S Jan28 5:46 nginx: worker process
www-data 1728377 0.0 0.0 54740 1324 ? S Jan28 0:02 nginx: cache manager process
admin 1728383 0.2 0.6 1111608 191992 ? Sl Jan28 3:15 unicorn master -E production -c config/unicorn.conf.rb
systemd+ 1728387 0.5 24.0 7881244 7423192 ? Ss Jan28 6:55 postgres: 13/main: checkpointer
systemd+ 1728388 0.3 21.6 7880688 6669508 ? Ss Jan28 3:47 postgres: 13/main: background writer
systemd+ 1728389 0.4 0.0 7880552 18956 ? Ss Jan28 5:16 postgres: 13/main: walwriter
systemd+ 1728390 0.0 0.0 7881252 3220 ? Ss Jan28 0:03 postgres: 13/main: autovacuum launcher
systemd+ 1728391 0.5 0.0 73624 2148 ? Ss Jan28 6:20 postgres: 13/main: stats collector
systemd+ 1728392 0.0 0.0 7881084 3300 ? Ss Jan28 0:00 postgres: 13/main: logical replication launcher
systemd+ 1728533 0.0 0.1 7893444 40180 ? Ss Jan28 0:04 postgres: 13/main: discourse discourse [local] idle
admin 1729045 4.0 0.1 898156 38612 ? Sl Jan28 47:39 discourse prometheus-collector
admin 1729088 3.3 0.6 6977404 205596 ? Sl Jan28 38:37 discourse prometheus-global-reporter
systemd+ 1729928 0.0 0.0 7882432 26840 ? Ss Jan28 0:14 postgres: 13/main: discourse discourse [local] idle
admin 2510013 15.8 1.6 7383364 514764 ? SNl 12:55 56:45 sidekiq 7.3.9 discourse [3 of 5 busy]
root 2698443 0.0 0.0 0 0 ? R 15:41 0:01 [kworker/1:2-events]
root 2752725 0.0 0.0 0 0 ? I 16:26 0:03 [kworker/4:1-cgroup_destroy]
admin 2823242 0.0 0.0 17188 9040 ? Ss 17:26 0:00 /lib/systemd/systemd --user
admin 2823247 0.0 0.0 169224 3828 ? S 17:26 0:00 (sd-pam)
root 2824465 0.2 0.0 0 0 ? I 17:27 0:12 [kworker/u16:2-events_unbound]
systemd+ 2848301 26.9 25.3 8058104 7806820 ? Rs 17:45 18:18 postgres: 13/main: discourse discourse [local] SELECT
root 2857109 0.0 0.0 0 0 ? I 17:52 0:00 [kworker/7:2-events]
root 2857193 0.0 0.0 0 0 ? I 17:52 0:00 [kworker/3:1-events]
root 2857236 0.0 0.0 0 0 ? I 17:52 0:00 [kworker/0:1-events]
root 2857257 0.0 0.0 0 0 ? I 17:52 0:00 [kworker/5:0-events]
root 2858024 0.0 0.0 17044 10556 ? Ss 17:53 0:00 sshd: admin [priv]
admin 2858146 0.0 0.0 17476 8500 ? R 17:53 0:00 sshd: admin@pts/0
admin 2858151 0.0 0.0 11544 5240 pts/0 Ss 17:53 0:00 -bash
root 2858691 0.0 0.0 0 0 ? I 17:53 0:00 [kworker/4:0-events]
root 2869682 0.0 0.0 0 0 ? I 18:03 0:00 [kworker/1:1-events]
admin 2870318 39.1 1.3 7068992 416824 ? Sl 18:03 19:39 unicorn worker[1] -E production -c config/unicorn.conf.rb
systemd+ 2870879 1.4 15.7 7903276 4854432 ? Ss 18:03 0:42 postgres: 13/main: discourse discourse [local] idle
systemd+ 2870880 4.3 22.9 8058516 7072000 ? Ds 18:03 2:09 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2875058 21.0 25.4 8101952 7848140 ? Rs 18:07 9:46 postgres: 13/main: discourse discourse [local] UPDATE
root 2881854 0.0 0.0 0 0 ? I 18:11 0:01 [kworker/2:1-events]
admin 2884122 34.7 1.3 7042112 413636 ? Sl 18:12 14:10 unicorn worker[5] -E production -c config/unicorn.conf.rb
systemd+ 2884686 3.3 19.2 7901648 5920344 ? Ss 18:13 1:20 postgres: 13/main: discourse discourse [local] idle
systemd+ 2884706 3.4 20.2 8055052 6253836 ? Ss 18:13 1:23 postgres: 13/main: discourse discourse [local] idle
admin 2885027 28.3 1.3 7048512 417640 ? Sl 18:13 11:28 unicorn worker[3] -E production -c config/unicorn.conf.rb
systemd+ 2885877 2.4 17.1 8062096 5272220 ? Ss 18:13 0:58 postgres: 13/main: discourse discourse [local] idle
systemd+ 2885900 2.3 17.1 7900444 5296948 ? Ss 18:13 0:56 postgres: 13/main: discourse discourse [local] idle
admin 2888531 23.7 1.3 7008056 413112 ? Sl 18:15 9:07 unicorn worker[4] -E production -c config/unicorn.conf.rb
systemd+ 2889095 1.9 15.3 8055412 4729620 ? Ss 18:15 0:45 postgres: 13/main: discourse discourse [local] idle
systemd+ 2889140 1.8 15.7 7899580 4854332 ? Ss 18:15 0:42 postgres: 13/main: discourse discourse [local] idle
root 2889221 0.0 0.0 0 0 ? I 18:15 0:00 [kworker/7:0-mm_percpu_wq]
admin 2890464 18.8 1.5 7050176 471008 ? Sl 18:16 6:58 unicorn worker[7] -E production -c config/unicorn.conf.rb
admin 2891072 17.3 1.5 7031612 473804 ? Sl 18:17 6:21 unicorn worker[6] -E production -c config/unicorn.conf.rb
root 2893638 0.1 0.0 0 0 ? R 18:19 0:03 [kworker/u16:0-events_unbound]
admin 2898889 15.7 1.5 6998072 466892 ? Sl 18:23 4:43 unicorn worker[2] -E production -c config/unicorn.conf.rb
root 2898988 0.0 0.0 0 0 ? I 18:23 0:01 [kworker/0:2-events]
admin 2899542 14.3 1.3 6981880 408840 ? Sl 18:24 4:14 unicorn worker[0] -E production -c config/unicorn.conf.rb
systemd+ 2899645 25.6 25.3 8061284 7824768 ? Ss 18:24 7:34 postgres: 13/main: discourse discourse [local] idle
root 2899721 0.0 0.0 0 0 ? I 18:24 0:01 [kworker/6:2-rcu_gp]
systemd+ 2899819 4.4 16.1 7899748 4990744 ? Ss 18:24 1:18 postgres: 13/main: discourse discourse [local] idle
systemd+ 2900033 0.8 9.1 8042172 2826864 ? Ss 18:24 0:14 postgres: 13/main: discourse discourse [local] idle
root 2910677 0.0 0.0 0 0 ? I 18:32 0:00 [kworker/6:0-mm_percpu_wq]
root 2915088 0.0 0.0 0 0 ? I 18:35 0:00 [kworker/5:1-events]
root 2916444 0.2 0.0 0 0 ? I 18:36 0:02 [kworker/u16:1-events_unbound]
root 2919541 0.0 0.0 0 0 ? I 18:39 0:00 [kworker/3:2-cgroup_destroy]
root 2923864 0.0 0.0 0 0 ? I 18:44 0:00 [kworker/2:0-rcu_gp]
systemd+ 2923985 0.6 2.1 7891952 654148 ? Ss 18:44 0:03 postgres: 13/main: discourse discourse [local] idle
systemd+ 2924006 30.2 25.3 8112844 7824292 ? Rs 18:44 2:44 postgres: 13/main: discourse discourse [local] UPDATE
systemd+ 2925842 1.1 5.6 7898176 1741700 ? Ss 18:45 0:05 postgres: 13/main: discourse discourse [local] idle
root 2929381 0.2 0.0 0 0 ? I 18:48 0:00 [kworker/u16:3-writeback]
systemd+ 2929546 12.8 24.9 8049520 7694404 ? Ss 18:48 0:39 postgres: 13/main: discourse discourse [local] SELECT
root 2929925 0.0 0.0 0 0 ? I 18:49 0:00 [kworker/3:0-events]
root 2931376 0.0 0.0 0 0 ? I 18:50 0:00 [kworker/4:2-events]
systemd+ 2931802 17.7 24.5 8045744 7559492 ? Ss 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
systemd+ 2931803 17.7 24.5 8045744 7557568 ? Ds 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
root 2932526 0.0 0.0 0 0 ? I 18:51 0:00 [kworker/6:1-events]
systemd+ 2932832 0.5 0.8 7888204 259780 ? Ss 18:51 0:00 postgres: 13/main: discourse discourse [local] idle
systemd+ 2932846 1.7 2.3 7888684 726308 ? Ss 18:51 0:02 postgres: 13/main: discourse discourse [local] idle
root 2933346 0.0 0.0 0 0 ? I 18:51 0:00 [kworker/7:1-events]
root 2933403 0.0 0.0 0 0 ? I 18:51 0:00 [kworker/2:2-mm_percpu_wq]
root 2933436 0.0 0.0 0 0 ? I 18:51 0:00 [kworker/0:0]
root 2933443 0.0 0.0 0 0 ? I 18:51 0:00 [kworker/1:0]
root 2933471 0.0 0.0 0 0 ? I 18:52 0:00 [kworker/3:3]
root 2933472 0.0 0.0 0 0 ? I 18:52 0:00 [kworker/5:2-events]
systemd+ 2933505 0.7 0.8 7888412 269288 ? Ss 18:52 0:00 postgres: 13/main: discourse discourse [local] idle
systemd+ 2933512 1.6 1.5 7887976 474540 ? Ss 18:52 0:01 postgres: 13/main: discourse discourse [local] idle
root 2933523 0.0 0.0 1082280 3288 ? Sl 18:52 0:00 /usr/bin/docker-proxy -proto tcp -host-ip 0.0.0.0 -host-port 8443 -container-ip 172.20.0.3 -container-port 3000
root 2933558 0.0 0.0 0 0 ? I 18:52 0:00 [kworker/6:3]
root 2933600 0.1 0.0 1238104 12120 ? Sl 18:52 0:00 /usr/bin/containerd-shim-runc-v2 -namespace moby -id 11cab872cfcf4dc1e7bc334cbe9e21caf356bef49cf106d0bd06b3d4f3dc2d8f -address /run/containerd/
472 2933655 9.7 0.4 880212 144496 ? Ssl 18:52 0:10 grafana server --homepath=/usr/share/grafana --config=/etc/grafana/grafana.ini --packaging=docker cfg:default.log.mode=console cfg:default.path
systemd+ 2933665 0.2 0.2 7884404 77236 ? Ss 18:52 0:00 postgres: 13/main: discourse discourse [local] idle
systemd+ 2933670 0.3 0.5 7887168 161960 ? Ss 18:52 0:00 postgres: 13/main: discourse discourse [local] idle
postfix 2933985 0.0 0.0 41548 6796 ? S 18:52 0:00 pickup -l -t unix -u -c
postfix 2933986 0.0 0.0 41592 6800 ? S 18:52 0:00 qmgr -l -t unix -u
root 2934840 0.0 0.0 15424 8396 ? Ss 18:53 0:00 sshd: [accepted]
systemd+ 2935176 1.3 0.2 7884172 73424 ? Rs 18:53 0:00 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2935184 1.8 0.2 7884276 80404 ? Ss 18:53 0:00 postgres: 13/main: discourse discourse [local] idle
systemd+ 2935198 2.0 0.8 7884204 258560 ? Rs 18:53 0:00 postgres: 13/main: discourse discourse [local] SELECT
admin 2936133 0.0 0.0 13808 1344 ? S 18:53 0:00 sleep 1
root 2936138 0.0 0.0 15424 8668 ? Ss 18:53 0:00 sshd: [accepted]
sshd 2936140 0.0 0.0 15424 5460 ? S 18:53 0:00 sshd: [net]
admin 2936156 0.0 0.0 12976 3592 pts/0 R+ 18:53 0:00 ps aux
systemd+ 2936157 0.0 1.0 8032772 313012 ? Rs 18:53 0:00 postgres: 13/main: parallel worker for PID 879603
systemd+ 2936158 0.0 1.1 8032772 361572 ? Rs 18:53 0:00 postgres: 13/main: parallel worker for PID 879603
and this is the output of vmstat 5 5:
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
r b swpd free buff cache si so bi bo in cs us sy id wa st
8 8 1328504 492436 94200 25567584 23 25 6248 521 6 6 27 12 50 12 0
6 9 1333112 486980 94248 25407008 6 874 48655 2216 6361 8527 34 22 8 36 0
4 7 1336952 519160 94424 25402680 50 1189 37687 2834 6645 9464 39 20 10 31 0
6 5 1340792 557916 92900 25426764 118 921 41032 2745 7407 10996 41 19 10 29 0
11 8 1350228 508828 90920 25314032 11 1842 46922 3582 5351 7645 47 20 8 25 0
This is definitely a memory issue: I’m seeing some huge queries here.
The question now becomes "what is causing these queries to be so big?", because this started suddenly, and I'd assume our load (150 users, ~1000 posts per day) shouldn't strain this amount of memory.
Am I wrong in that assumption?
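For reference, this is roughly how I've been snapshotting memory usage (standard Linux tools, nothing Discourse-specific):

```shell
# Overall memory picture, then the top memory consumers by resident set size
free -h
ps aux --sort=-rss | head -n 11
```

Note that RSS double-counts shared pages (postgres backends all map the same shared buffers), so the per-process percentages can add up to well over 100%.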
1 like
pfaffman
(Jay Pfaffman)
29 January 2026, 9:07pm
6
Have you rebooted recently?
It looks like there are a bunch of idle postgres processes eating a bunch of memory, I think.
How much memory did you give postgres in your app.yml? Did you change the defaults?
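If you haven't touched them, the relevant knobs in a standard discourse_docker install look something like this (values here are purely illustrative, not a recommendation for your box):

```yaml
# containers/app.yml - illustrative values only
params:
  db_shared_buffers: "4096MB"   # becomes postgres shared_buffers
  db_work_mem: "40MB"           # per-sort/hash memory, used per backend (and per parallel worker)
```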
Also, Postgres 13 hasn’t been supported for a year. You really need to upgrade. I didn’t think Discourse would work with PG13 – what version of Discourse are you running?
2 likes
supermathie
(Michael Brown)
29 January 2026, 9:48pm
7
This doesn’t look like a memory problem to me - only around 6GB is taken up by applications. There’s a lot of reading happening, though; at a guess (given that Sidekiq is also busy), Postgres is reading a lot of old data from disk, perhaps for some stats jobs. If the database doesn’t fit in RAM and that older data is being accessed, you’ll see exactly this kind of heavy read activity. That’s what this smells like.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
admin 2510013 15.8 1.6 7383364 514764 ? SNl 12:55 56:45 sidekiq 7.3.9 discourse [3 of 5 busy]
systemd+ 2848301 26.9 25.3 8058104 7806820 ? Rs 17:45 18:18 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2870880 4.3 22.9 8058516 7072000 ? Ds 18:03 2:09 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2875058 21.0 25.4 8101952 7848140 ? Rs 18:07 9:46 postgres: 13/main: discourse discourse [local] UPDATE
systemd+ 2924006 30.2 25.3 8112844 7824292 ? Rs 18:44 2:44 postgres: 13/main: discourse discourse [local] UPDATE
systemd+ 2929546 12.8 24.9 8049520 7694404 ? Ss 18:48 0:39 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2931802 17.7 24.5 8045744 7559492 ? Ss 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
systemd+ 2931803 17.7 24.5 8045744 7557568 ? Ds 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
I would look at sidekiq first - I bet it’s busy running updates. Check the /sidekiq url while logged in as admin to see what it’s doing. That’s probably going to show you some long-running update tasks.
You can also, on the SQL server:
SELECT pid, application_name, query
FROM pg_stat_activity
WHERE state IS NOT NULL
AND state != 'idle';
to see what queries are running and what’s calling them. That’ll hint towards what’s causing the load.
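If you also want to see how long each query has been running, a variant of the same query (ordered by duration, using standard pg_stat_activity columns) can help:

```sql
-- Longest-running non-idle queries first; truncate query text for readability
SELECT pid, now() - query_start AS duration, state, left(query, 80) AS query
FROM pg_stat_activity
WHERE state IS NOT NULL
  AND state != 'idle'
ORDER BY duration DESC;
```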
Do you have performance headers enabled? You can turn those on and capture them in logs to see how much time Discourse is spending on each part of a request.
3 likes
Ed_S
(Ed S)
29 January 2026, 10:16pm
8
The kernel's kswapd has racked up quite a bit of CPU time, so it’s quite possible the paging activity wasn’t captured by the short vmstat run.
There also seem to be a lot of postgres processes, each using roughly 20% of RAM - perhaps too many backends for the available RAM? Checking the postgresql config would be wise.
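To check the effective settings without digging through config files, something like this from psql should work (these are all standard postgres parameters):

```sql
-- Memory- and concurrency-related settings as postgres actually sees them
SHOW shared_buffers;
SHOW work_mem;
SHOW max_connections;
SHOW max_parallel_workers_per_gather;
```

A rough sanity check: in the worst case each backend can use work_mem per sort or hash operation, so shared_buffers plus (max_connections × work_mem) should comfortably fit in RAM alongside everything else on the box.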
But a reboot would also be wise before proceeding to re-measure.
As the memory percentages add up to far more than 100%, there must be some sharing which isn’t visible here. Still, it seems like a lot of large processes.
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
systemd+ 1728387 0.5 24.0 7881244 7423192 ? Ss Jan28 6:55 postgres: 13/main: checkpointer
systemd+ 1728388 0.3 21.6 7880688 6669508 ? Ss Jan28 3:47 postgres: 13/main: background writer
systemd+ 2848301 26.9 25.3 8058104 7806820 ? Rs 17:45 18:18 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2870879 1.4 15.7 7903276 4854432 ? Ss 18:03 0:42 postgres: 13/main: discourse discourse [local] idle
systemd+ 2870880 4.3 22.9 8058516 7072000 ? Ds 18:03 2:09 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2875058 21.0 25.4 8101952 7848140 ? Rs 18:07 9:46 postgres: 13/main: discourse discourse [local] UPDATE
systemd+ 2884686 3.3 19.2 7901648 5920344 ? Ss 18:13 1:20 postgres: 13/main: discourse discourse [local] idle
systemd+ 2884706 3.4 20.2 8055052 6253836 ? Ss 18:13 1:23 postgres: 13/main: discourse discourse [local] idle
systemd+ 2885877 2.4 17.1 8062096 5272220 ? Ss 18:13 0:58 postgres: 13/main: discourse discourse [local] idle
systemd+ 2885900 2.3 17.1 7900444 5296948 ? Ss 18:13 0:56 postgres: 13/main: discourse discourse [local] idle
systemd+ 2889095 1.9 15.3 8055412 4729620 ? Ss 18:15 0:45 postgres: 13/main: discourse discourse [local] idle
systemd+ 2889140 1.8 15.7 7899580 4854332 ? Ss 18:15 0:42 postgres: 13/main: discourse discourse [local] idle
systemd+ 2899645 25.6 25.3 8061284 7824768 ? Ss 18:24 7:34 postgres: 13/main: discourse discourse [local] idle
systemd+ 2899819 4.4 16.1 7899748 4990744 ? Ss 18:24 1:18 postgres: 13/main: discourse discourse [local] idle
systemd+ 2924006 30.2 25.3 8112844 7824292 ? Rs 18:44 2:44 postgres: 13/main: discourse discourse [local] UPDATE
systemd+ 2929546 12.8 24.9 8049520 7694404 ? Ss 18:48 0:39 postgres: 13/main: discourse discourse [local] SELECT
systemd+ 2931802 17.7 24.5 8045744 7559492 ? Ss 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
systemd+ 2931803 17.7 24.5 8045744 7557568 ? Ds 18:50 0:36 postgres: 13/main: parallel worker for PID 946796
1 like
supermathie
(Michael Brown)
30 January 2026, 4:15am
9
Discourse forks after initialisation to benefit from sharing as much as it can, yes.
char
30 January 2026, 2:15pm
10
The situation has improved.
We were on 3.6.0; Postgres 13 had been working without evident issues - so far.
We had postponed the upgrade to 15 because we didn’t have enough free space on the VPS: given our database size, we needed 150GB.
We pruned what we considered useless data - tracking data and search data, among other things - performed some routine maintenance on the DB (VACUUM, for instance), upgraded to PostgreSQL 15, and upgraded Discourse as well, to 2026.2.
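For the record, the size check and maintenance were along these lines (standard postgres catalog queries, nothing exotic):

```sql
-- Ten largest tables by total size (including indexes and TOAST)
SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) AS total
FROM pg_statio_user_tables
ORDER BY pg_total_relation_size(relid) DESC
LIMIT 10;

-- Then reclaim space and refresh planner statistics
VACUUM (VERBOSE, ANALYZE);
```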
We don’t know exactly what fixed the issue, but we’re doing fine and the situation seems stable now.
I’m going to monitor usage metrics for a couple more days.
Thanks for your help so far.
3 likes