Automatic upgrade fails if swap is disabled on the host

(Tom Price) #1

When using the automated upgrade from /admin/upgrade I get an error saying “Javascript was terminated” whilst a rake task is executing. Sadly I can’t find the actual log file, and afterwards everything claims to have updated. This has happened a few times and I’m just wondering what on earth is going on.

(Jeff Atwood) #2

Out of memory, most likely. Make sure you have a 2GB swapfile and 1GB RAM.
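For reference, setting up a 2GB swapfile can be sketched roughly as follows (run as root; the path and size here follow the standard install guide convention, but check your own setup before running anything):

```shell
# Create and enable a 2GB swapfile (illustrative sketch, run as root)
fallocate -l 2G /swapfile                    # allocate the backing file
chmod 600 /swapfile                          # swap files must not be world-readable
mkswap /swapfile                             # format it as swap
swapon /swapfile                             # enable it immediately
echo '/swapfile none swap sw 0 0' >> /etc/fstab   # persist across reboots
free -m                                      # verify the Swap line shows ~2048
```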

(Rafael dos Santos Silva) #3

What are the specs of your server?

df -h
docker version
free -m

EDIT: ninja’d

(Tom Price) #4

Running on a $10 Digital Ocean droplet (1GB RAM) with 2GB swap as per the standard install guide (we gave up on AWS). Command output:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        30G   12G   17G  42% /
devtmpfs        488M     0  488M   0% /dev
tmpfs           497M     0  497M   0% /dev/shm
tmpfs           497M   63M  434M  13% /run
tmpfs           497M     0  497M   0% /sys/fs/cgroup
tmpfs           100M     0  100M   0% /run/user/1000
$ docker version
Client:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:
 OS/Arch:      linux/amd64
Server:
 Version:      1.12.2
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   bb80604
 Built:
 OS/Arch:      linux/amd64
$ free -m
              total        used        free      shared  buff/cache   available
Mem:            993         723          69          59         199          75
Swap:          2047         901        1146

(Jeff Atwood) #5

That looks OK – are you running any plugins of any kind?

(Tom Price) #9

Three non-stock ones: migratepassword, push-notifications and solved. (Push notifications was new, but I believe this happened before that was installed as well, @david?)

(David Taylor) #10

Yup, this issue started before installing push notifications

(Rafael dos Santos Silva) #11

How big is the board? Can you share a URL?

(David Taylor) #12

It’s a private board, login-only. We have 9668 topics and 151699 posts. 206 users (but only about 30 active).

(Kane York) #13

Wait, this doesn’t look okay. You’re using 1GB of swap during normal operation, so the 2GB of free swap needed during upgrades isn’t there.
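To make that concrete: the `free -m` output above shows roughly 75MB of available RAM plus about 1.1GB of free swap, well short of the ~2GB of headroom an upgrade wants. A quick way to check headroom at any moment (a sketch assuming a Linux host with /proc/meminfo; the field names are standard, the 2GB figure is just this thread's rule of thumb):

```shell
# Combined free RAM + free swap, in MB (Linux /proc/meminfo)
avail_kb=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)
swapfree_kb=$(awk '/^SwapFree/ {print $2}' /proc/meminfo)
echo "headroom: $(( (avail_kb + swapfree_kb) / 1024 )) MB"
```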

(Tom Price) #14

So do you think it’s worth resizing the swap to be bigger (say 4GB, just in case, for the upgrades), then rebuilding to get out of the awkward state of Discourse thinking it is up to date when it isn’t? Unless you can think of a better way to test it (reset the Discourse version somehow)?

(Jeff Atwood) #15

It looks like you need 2GB RAM for your instance.

(Tom Price) #16

That would be incredibly dull if that were the case, as the board runs just fine and is very quick even with all of our active members online; it’s only during upgrades that we’re having an issue.

For now I’ve upped the swap to 4GB (although, again, in production only minimal swapping actually occurs even when swap is in use) and rebuilt to fix the update issue. Next time there’s a commit to tests-passed I’ll run the update and report back what happens.
One thought I did have was to check docker stats to see what it thinks is being used memory-wise, and it appears as if Docker is limiting the container to 1GB of memory for some reason. Could this be something to do with it?

(Tom Price) #17

Further to that, having gone down this rabbit hole, docker inspect also shows these two values for the Discourse container:

"MemorySwap": 0,
"MemorySwappiness": -1,
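For what it’s worth, my understanding of the Docker defaults (not something established in this thread) is that `MemorySwap: 0` means no swap limit was set, and `MemorySwappiness: -1` means the container inherits the host’s `vm.swappiness`. You can query the settings directly, assuming the container is named `app` (the launcher default):

```shell
# Query memory-related HostConfig settings for the "app" container
# (container name assumed; adjust if yours differs)
docker inspect --format \
  '{{.HostConfig.Memory}} {{.HostConfig.MemorySwap}} {{.HostConfig.MemorySwappiness}}' app
```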

(Tom Price) #18

Just managed to test it with the latest commits today; exact same result. I did manage to capture a log from the browser this time, in a gist to save this thread’s length.

Other things of note: whilst it was running I watched top to see what was going on. Swap utilisation never exceeded 1.4GB (4GB now available), but loop0 and kswapd0 were both rather busy on the CPU.

(Rafael dos Santos Silva) #19

Running docker inspect on my Discourse instance, I couldn’t find those parameters.

Also inside the container:

root@ubuntu-app:/var/www/discourse# cat /proc/sys/vm/swappiness 
root@ubuntu-app:/var/www/discourse# sysctl vm.swappiness
vm.swappiness = 60

Is this something you guys customized?

(David Taylor) #20

We had indeed changed the swappiness on the host system, which had then propagated into the docker container. Changed it back to 60 and everything seems to be working fine :slight_smile:
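For anyone hitting the same thing, the fix can be sketched as follows (commands assume a stock Ubuntu-style host and must be run as root; the sysctl.d path is illustrative):

```shell
# Restore the kernel default swappiness on the host
sysctl -w vm.swappiness=60                  # takes effect immediately

# Find whatever set the custom value, so the default survives a reboot
grep -rn 'vm.swappiness' /etc/sysctl.conf /etc/sysctl.d/ 2>/dev/null

# Containers running with MemorySwappiness=-1 inherit the host value,
# so rebuild or restart the Discourse container afterwards.
```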

Is the line “Killed sidekiq” at the end of the upgrade logs normal?

Compressing: application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js
uglifyjs '/var/www/discourse/public/assets/_application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js' -p relative -c -m -o '/var/www/discourse/public/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js' --source-map-root '/assets' --source-map '/var/www/discourse/public/assets/' --source-map-url '/assets/'
gzip /var/www/discourse/public/assets/application-a89166edadee721acc17384349d899833af4a8ac62526618341b21fc5cbb1bd8.js

Killed sidekiq
*** After restart, upgrade will be complete ***
Restarting unicorn pid: 129

(Felix Freiberger) #21

Yes, it’s normal (at least, I see it too) – I think sidekiq is simply killed to ensure that it is respawned with the new version :slight_smile:

(David Taylor) #22

Great, in that case, all sorted :slight_smile:

Thanks for the help everyone