Out of memory, most likely. Make sure you have a 2GB swapfile and 1GB RAM.
What are the specs on your server?
```
df -h
docker version
free -m
```
Running on a $10 Digital Ocean droplet (1GB RAM) with 2GB swap, as per the standard install guide (we gave up on AWS). Command output:
```
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        30G   12G   17G  42% /
devtmpfs        488M     0  488M   0% /dev
tmpfs           497M     0  497M   0% /dev/shm
tmpfs           497M   63M  434M  13% /run
tmpfs           497M     0  497M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/1000

$ docker version
Client:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:
 OS/Arch:      linux/amd64

$ free -m
              total        used        free      shared  buff/cache   available
Mem:            993         723          69          59         199          75
Swap:          2047         901        1146
```
That looks OK – are you running any plugins of any kind?
Three non-stock ones: migratepassword, push-notifications, and solved. (Push notifications was new, but I believe this happened before it was installed as well, @David_Taylor?)
Yup, this issue started before installing push notifications
How big is the board? Can you share a URL?
It’s a private board, login-only. We have 9668 topics and 151699 posts. 206 users (but only about 30 active).
Wait, this doesn’t look okay. You’re using 1GB of swap during normal operation, so the 2GB of swap headroom needed during upgrades isn’t there.
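A quick way to check how much swap headroom is actually free before kicking off an upgrade (plain Linux tooling, nothing Discourse-specific — a sketch, not part of the install guide):

```shell
# Swap totals straight from the kernel (values in kB): SwapFree is the
# headroom an upgrade can still consume before processes start getting killed
grep -E 'SwapTotal|SwapFree' /proc/meminfo
# `free -m` shows the same numbers in MB (the "Swap:" row), as in the output above
```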
So do you think it’s worth resizing the swap to be bigger (say 4GB, just in case, for the upgrades) and then rebuilding to get out of the awkward state where Discourse thinks it is up to date but isn’t? Unless you can think of a better way to test it (resetting the Discourse version somehow)?
It looks like you need 2GB RAM for your instance.
That would be incredibly dull if that were the case, as the board runs just fine and is very quick even with all of our active members online; it’s only during upgrades that we’re having an issue.
For now I’ve upped the swap to 4GB (although, again, in production only minimal swapping actually occurs even when swap is in use) and rebuilt to fix the update issue. Next time there is a commit to tests-passed I’ll run the update and report back what happens.
One thought I did have was to check `docker stats` to see what it thinks is being used memory-wise, and it appears as if Docker is limiting the container to 1GB of memory for some reason. Could this be something to do with it?
Further to that, having gone down this rabbit hole, `docker inspect` also contains these two values for the Discourse container:

```
"MemorySwap": 0,
"MemorySwappiness": -1,
```
Just managed to test it with the latest commits today; exact same result. I did manage to capture a log from the browser this time, in a gist to save this thread’s length.
Other things of note: whilst it was running I watched top to see what was going on. Swap utilisation never exceeded 1.4GB (4GB now available), but loop0 and kswapd0 were both rather busy on the CPU.
Running `docker inspect` on my Discourse instance, I couldn’t find those parameters.
Also inside the container:
```
root@ubuntu-app:/var/www/discourse# cat /proc/sys/vm/swappiness
60
root@ubuntu-app:/var/www/discourse# sysctl vm.swappiness
vm.swappiness = 60
```
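Since the container inherits this value from the host, checking and restoring it on the host is the relevant step. A minimal sketch (the write needs root, so it's left as a comment; 60 is the kernel default):

```shell
# Read the current swappiness; the kernel default is 60, and in this thread
# the fix turned out to be restoring the host to that default
cat /proc/sys/vm/swappiness

# To restore the default persistently (run as root on the host):
#   sysctl -w vm.swappiness=60
#   echo 'vm.swappiness = 60' >> /etc/sysctl.conf
```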
Is this something you guys customized?
We had indeed changed the swappiness on the host system, which had then propagated into the Docker container. Changed it back to 60 and everything seems to be working fine.
Is the line “Killed sidekiq” at the end of the upgrade logs normal?
```
...
Compressing: application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js
uglifyjs '/var/www/discourse/public/assets/_application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js' -p relative -c -m -o '/var/www/discourse/public/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js' --source-map-root '/assets' --source-map '/var/www/discourse/public/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js.map' --source-map-url '/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js.map'
gzip /var/www/discourse/public/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js
Killed sidekiq
***********************************************
*** After restart, upgrade will be complete ***
***********************************************
Restarting unicorn pid: 129
DONE
```
Yes, it’s normal (at least, I see it too) – I think sidekiq is simply killed to ensure that it is respawned with the new version
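Here the kill is deliberate, but on a box this memory-constrained it can be worth confirming the kernel's OOM killer wasn't involved instead. On kernels 4.13 and later there's a cumulative counter for this:

```shell
# Number of times the kernel OOM killer has fired since boot; 0 means any
# "Killed" lines in the upgrade log came from the script, not the kernel.
# (Field available on kernels 4.13+.)
grep '^oom_kill ' /proc/vmstat
# dmesg on the host would also show the victim process for each genuine OOM kill
```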
Great, in that case, all sorted
Thanks for the help everyone