root@tymin:/var/discourse# swapon
NAME TYPE SIZE USED PRIO
/dev/dm-0 partition 1.9G 1G -1
root@tymin:/var/discourse# df -T
Filesystem Type 1K-blocks Used Available Use% Mounted on
udev devtmpfs 2008928 0 2008928 0% /dev
tmpfs tmpfs 404176 41336 362840 11% /run
/dev/mapper/VolGroup-root ext4 59629100 50018424 6991008 88% /
tmpfs tmpfs 2020876 0 2020876 0% /dev/shm
tmpfs tmpfs 5120 0 5120 0% /run/lock
tmpfs tmpfs 2020876 0 2020876 0% /sys/fs/cgroup
/dev/sda1 ext2 240972 226212 2319 99% /boot
tmpfs tmpfs 404172 0 404172 0% /run/user/1001
tmpfs tmpfs 404172 0 404172 0% /run/user/1000
/dev/loop0 btrfs 10485760 4250700 4424308 49% /chroot/compile
overlay overlay 59629100 50018424 6991008 88% /var/lib/docker/overlay2/4e9863e34f958e15f57c752fda2057b88f2aa03afaca82e0651f3aa23e56f795/merged
How much traffic are you getting? Your forum seems way too small to run into issues like this.
Is this an official install? On what kind of VPS is your forum running?
Admin page says 50k pageviews in last 30 days.
I guess? I followed what instructions I could find at the time.
I haven’t done a lot to customize the installation.
Not sure. It’s a 2-core, 4 GB virtual server from cari.net.
Your forum is running behind nginx 1.10.3 which is over 6 years old, so something fishy is up here.
Apart from that it must be one of the worst performing Discourse installs I have ever seen. I don’t want to plug our own hosting service per se but have you ever considered switching to another host? A forum with this size and traffic should perform ok on even a very small server.
I’m sure there’s a lot to learn from the miniprofiler info you shared. Only one thing jumps out right now: do you perhaps have an enormous number of draft posts??
/my/activity/drafts
I have 3 drafts apparently.
It’s running debian 9.
nginx seems to be working ok though; the rest of my site (https://fredrik.hubbe.net/) is not slow.
By what metric though?
Is it discourse itself taking a lot of memory, IO or CPU? (If so, why? I haven’t really done anything to it…)
Or is it that the system is slow? If so, I can take that up with cari.net.
And I will probably try that reboot later this evening to see if that helps.
It’s been ~2 years, guess it might be time?
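(One quick way to narrow that down, assuming a standard Docker-based install:
$ docker stats --no-stream
shows CPU and memory per container, and plain top on the host shows overall load, swap use and IO wait.)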
Wait a minute:
$ expr 0 `ps auxwww | tail +2 | awk '{ print " + " $6}'`
787952
If the resident total of all processes is less than 800 MB, what the heck is the rest of the memory doing?
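(For what it’s worth, an equivalent one-liner that should give the same total, assuming GNU ps and awk:
$ ps -eo rss= | awk '{ s += $1 } END { print s " kB" }'
It sums the same RSS column, in kB, without the expr juggling.)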
/proc/meminfo doesn’t seem very helpful either:
root@tymin:/# cat /proc/meminfo
MemTotal: 4041756 kB
MemFree: 122852 kB
MemAvailable: 53388 kB
Buffers: 15300 kB
Cached: 87636 kB
SwapCached: 125192 kB
Active: 314348 kB
Inactive: 300988 kB
Active(anon): 270652 kB
Inactive(anon): 276288 kB
Active(file): 43696 kB
Inactive(file): 24700 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 1949692 kB
SwapFree: 921348 kB
Dirty: 144 kB
Writeback: 0 kB
AnonPages: 484704 kB
Mapped: 72596 kB
Shmem: 34520 kB
Slab: 319792 kB
SReclaimable: 26836 kB
SUnreclaim: 292956 kB
KernelStack: 6272 kB
PageTables: 19484 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 3970568 kB
Committed_AS: 4146996 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 4001728 kB
DirectMap2M: 192512 kB
Maybe I need that reboot more than I think… 2 years of kernel memory leaks?
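(A rough tally of the numbers above, assuming the usual meminfo accounting: MemFree 122852 + AnonPages 484704 + Cached 87636 + Buffers 15300 + Slab 319792 + SwapCached 125192 + PageTables 19484 + KernelStack 6272 comes to about 1181232 kB, a bit over 1 GB, which leaves roughly 2.8 GB of the 4 GB unaccounted for; that would be consistent with the kernel or a driver holding memory that /proc/meminfo doesn’t itemize.)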
Rebooted, I have 2 GB of memory free now.
We’ll see if it lasts.
After using Discourse for a while, you will find that it takes up a lot of disk space.
This is mainly because of Docker images: the more upgrades you do, the more space is used.
Running the following command:
./launcher cleanup
can help you clean up the space occupied by Discourse.
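(If you want to see where the space is going first, assuming the standard Docker setup, docker system df breaks usage down into images, containers and volumes; as far as I know, ./launcher cleanup mostly removes old containers and unused Discourse images.)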
That will help with disk space usage; unfortunately, the problem in this case is that there is a ton of swapping happening, which kills performance.
Everything is waiting on disk, which is a terrible state to be in.
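(You can watch it happen, assuming vmstat from procps is installed, which it normally is:
$ vmstat 1
The si/so columns show swap-in/swap-out and wa shows IO wait; sustained non-zero si/so means the box is actively thrashing.)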
It’s not that nginx isn’t working; for me, such an old version was an indication that something might be out of date or that this is not a standard installation.
So far, the reboot seems to be working.
Still not sure why though.
Glad the reboot seems to have helped!
It might turn out to be informative. I’ll share mine, a much smaller system.
But here’s a thought about a kernel setting that can have a performance impact: do you have transparent huge pages enabled? I don’t:
# cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]
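(If yours shows [always], it can be turned off at runtime; this is a generic kernel knob, not anything Discourse-specific:
# echo never > /sys/kernel/mm/transparent_hugepage/enabled
To make it persist across reboots, add transparent_hugepage=never to the kernel command line.)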
See MKJ’s Opinionated Discourse Deployment Configuration for advice!
Here’s my meminfo, on a much smaller system which is running well:
# cat /proc/meminfo
MemTotal: 1009140 kB
MemFree: 91888 kB
MemAvailable: 88692 kB
Buffers: 7644 kB
Cached: 137040 kB
SwapCached: 144884 kB
Active: 418972 kB
Inactive: 380324 kB
Active(anon): 345300 kB
Inactive(anon): 345852 kB
Active(file): 73672 kB
Inactive(file): 34472 kB
Unevictable: 0 kB
Mlocked: 0 kB
SwapTotal: 2097144 kB
SwapFree: 1049764 kB
Dirty: 400 kB
Writeback: 0 kB
AnonPages: 620688 kB
Mapped: 67192 kB
Shmem: 36536 kB
Slab: 67768 kB
SReclaimable: 27832 kB
SUnreclaim: 39936 kB
KernelStack: 3804 kB
PageTables: 14968 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 2601712 kB
Committed_AS: 3784772 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 0 kB
VmallocChunk: 0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
CmaTotal: 0 kB
CmaFree: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
DirectMap4k: 460652 kB
DirectMap2M: 587776 kB
Ok, so I just want to close the loop on this one.
The reboot helped, and it’s still helping.
I don’t know what caused the memory leak, and unless something changes, I’ll probably have to reboot again once a year or so, which I can live with.
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.