Is Discourse slower after upgrading to Rails 4.2?


(Lê Trần Đạt) #1

Continuing the discussion from "Discourse was just upgraded to Rails 4.2".

I’m using v1.5.0.beta1 +10 on 2GB RAM, 2 cores (the 20 USD/month droplet from DO). My forum was quite fast, but I’ve found it slower recently. Every action takes 500 ms to 2000+ ms to finish; for example, opening a topic can take up to 1500 ms to complete.

I tried rebooting the server; it helped at first, since CPU usage dropped a lot. Here is my forum: http://daynhauhoc.com/

# Here is my server before reboot

Memory usage

root@daynhauhoc:~# free -h
             total       used       free     shared    buffers     cached
Mem:          2.0G       1.9G        86M       183M       6.6M       210M
-/+ buffers/cache:       1.7G       303M
Swap:         2.0G        39M       2.0G

CPU

top - 02:44:37 up 5 days, 18:54,  1 user,  load average: 1.61, 1.89, 1.58
Tasks: 135 total,   5 running, 130 sleeping,   0 stopped,   0 zombie
%Cpu(s): 51.6 us, 29.0 sy,  0.0 ni,  9.7 id,  9.7 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1970224 used,    79764 free,      528 buffers
KiB Swap:  2097148 total,  1446396 used,   650752 free.   203204 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1349 1000      20   0 1150408 275744   2308 R  54.5 13.5 476:30.99 ruby
 1312 1000      20   0 1510388 214416   1920 S  36.3 10.5 209:56.72 ruby

---

top - 02:44:52 up 5 days, 18:54,  1 user,  load average: 1.76, 1.91, 1.59
Tasks: 136 total,   1 running, 135 sleeping,   0 stopped,   0 zombie
%Cpu(s): 31.7 us, 24.4 sy,  0.0 ni, 24.2 id, 19.2 wa,  0.5 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1976024 used,    73964 free,      128 buffers
KiB Swap:  2097148 total,  1479940 used,   617208 free.   193164 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 2064 landsca+  20   0  382792 243860    308 D  52.5 11.9   0:07.09 redis-server
   36 root      20   0       0      0      0 S  19.4  0.0  93:48.11 kswapd0
29818 1000      20   0 1165768 265852    648 S  16.7 13.0 385:40.47 ruby

---

Tasks: 133 total,   6 running, 127 sleeping,   0 stopped,   0 zombie
%Cpu(s): 42.2 us,  3.9 sy,  0.0 ni, 52.5 id,  0.7 wa,  0.7 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1974344 used,    75644 free,     1276 buffers
KiB Swap:  2097148 total,  1481836 used,   615312 free.   161160 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
29818 1000      20   0 1165768 269020   3108 R  46.1 13.1 385:57.90 ruby
29847 1000      20   0 1169864 264356   3168 R  25.0 12.9 388:22.82 ruby
 1368 1000      20   0 1125832 260820   3212 S   6.4 12.7 476:03.49 ruby
 1349 1000      20   0 1150408 276664   2960 R   3.4 13.5 476:51.18 ruby

# Here is my server after reboot

RAM

root@daynhauhoc:~# free -h
             total       used       free     shared    buffers     cached
Mem:          2.0G       1.9G        82M       242M       4.2M       269M
-/+ buffers/cache:       1.6G       355M
Swap:         2.0G       282M       1.7G

CPU

top - 03:05:59 up 17 min,  1 user,  load average: 0.36, 0.41, 0.36
Tasks: 119 total,   3 running, 116 sleeping,   0 stopped,   0 zombie
%Cpu(s):  3.4 us,  1.4 sy,  0.0 ni, 94.7 id,  0.0 wa,  0.5 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1977344 used,    72644 free,     7792 buffers
KiB Swap:  2097148 total,   289204 used,  1807944 free.   287988 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1362 1000      20   0  437208 186832   3592 S   2.9  9.1   1:20.95 ruby
 1204 landsca+  20   0  357828 302460    668 S   1.9 14.8   0:19.62 redis-server
 1313 1000      20   0 1001900 183796   3396 S   1.9  9.0   1:17.55 ruby
 1351 1000      20   0 1012164 193800   3476 S   1.9  9.5   1:16.05 ruby
 5737 root      20   0   24956   1720   1188 R   1.9  0.1   0:00.64 top
    3 root      20   0       0      0      0 S   1.0  0.0   0:00.84 ksoftirqd/0
 1221 www-data  20   0  101368   1492    868 S   1.0  0.1   0:01.79 nginx
 1336 1000      20   0  486360 207432   3560 S   1.0 10.1   1:18.64 ruby
    1 root      20   0   33512    660     52 S   0.0  0.0   0:01.95 init

(Lê Trần Đạt) #2

After one day, the system load is getting higher again:

  System load:  1.08               Processes:              123
  Usage of /:   84.2% of 39.25GB   Users logged in:        1
  Memory usage: 85%                IP address for eth0:    
  Swap usage:   56%                IP address for docker0: 

  Graph this data and manage this system at:
    https://landscape.canonical.com/

(Kane York) #3

What does top / htop show?


(Matt Palmer) #4

I’m guessing that the system is probably actively swapping during request service, given the high memory and swap usage. Confirm that with sar -W 1 – if it’s consistently reporting numbers above 0, you’re swapping and could do with an upgrade to a larger-specced instance.

Alternatively, to determine what’s using all your memory, re-run top and sort by memory (press M). Take one capture when the machine first reboots, and another a day or so later. Those should give you (or us) enough information to work out what’s consuming the memory, and whether it can be fixed (or whether an upgrade is the only sensible option).
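
For reference, the non-interactive equivalents look something like this (assuming the sysstat package is installed for sar, and a procps-ng top recent enough to support -o). The first samples swap activity once per second for ten seconds; sustained non-zero pswpin/s or pswpout/s means the box is actively swapping. The second takes a one-shot snapshot of the biggest memory consumers, sorted by %MEM:

sar -W 1 10
top -b -o %MEM -n 1 | head -n 20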


(Lê Trần Đạt) #5

# top

top - 11:38:36 up 1 day,  8:50,  1 user,  load average: 2.10, 1.51, 1.65
Tasks: 127 total,   4 running, 123 sleeping,   0 stopped,   0 zombie
%Cpu(s): 21.3 us,  5.1 sy,  0.1 ni, 71.0 id,  1.8 wa,  0.6 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1970680 used,    79308 free,      396 buffers
KiB Swap:  2097148 total,  1430332 used,   666816 free.   154324 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1351 1000      20   0 1096132 232232   2080 S  37.8 11.3 156:08.35 ruby
 1313 1000      20   0 1101256 236636   2156 R  32.4 11.5 151:20.08 ruby
   36 root      20   0       0      0      0 R  27.0  0.0  33:36.38 kswapd0
 8210 landsca+  20   0  390988 301004    348 R  27.0 14.7   0:09.48 redis-server
 1222 www-data  20   0  102068   1872    640 S  10.8  0.1   5:29.75 nginx
 8297 root      20   0   24944   1496   1080 R  10.8  0.1   0:00.04 top
 1362 1000      20   0 1088964 233764   2176 S   5.4 11.4 153:34.20 ruby
27725 message+  20   0  486516 111668  80352 S   5.4  5.4   6:33.14 postmaster

# htop


  1  [|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||      84.1%]     Tasks: 69, 72 thr; 5 running
  2  [||||||||||||||||||||||||||||||||||||||||||||||||||||||             74.4%]     Load average: 1.66 1.51 1.64
  Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||1743/2001MB]     Uptime: 1 day, 08:51:34
  Swp[||||||||||||||||||||||||||||||||||||||||||||||||             1360/2047MB]

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1362 1000       20   0 1063M  230M  3192 R 42.9 11.5  2h33:44 unicorn worker[3] -E production -c config/unicorn.conf.rb
 1351 1000       20   0 1070M  229M  2880 S 14.1 11.4  2h36:17 unicorn worker[2] -E production -c config/unicorn.conf.rb
 1336 1000       20   0 1066M  210M  2992 S  0.0 10.5  2h34:46 unicorn worker[1] -E production -c config/unicorn.conf.rb
 1313 1000       20   0 1075M  233M  2980 S 26.3 11.7  2h31:31 unicorn worker[0] -E production -c config/unicorn.conf.rb
 8567 root       20   0 26404  2608  1372 R  2.6  0.1  0:00.92 htop
 1222 www-data   20   0   99M  1800   632 S  0.6  0.1  5:30.22 nginx: worker process
 1204 landscape  20   0  381M 36520   564 S  1.3  1.8 30:19.96 /usr/bin/redis-server *:6379
16538 messagebu  20   0  477M 82340 58312 S  5.1  4.0  7:44.57 postgres: discourse discourse [local] idle
17435 messagebu  20   0  479M  108M 78676 S  0.6  5.4  6:29.68 postgres: discourse discourse [local] idle
27725 messagebu  20   0  475M   98M 72648 S  0.6  4.9  6:33.68 postgres: discourse discourse [local] idle
21559 messagebu  20   0  473M 84656 63732 S  0.0  4.1  6:33.88 postgres: discourse discourse [local] idle
 1355 1000       20   0 1070M  229M  2880 S  0.0 11.4  7:26.12 unicorn worker[2] -E production -c config/unicorn.conf.rb
 1224 www-data   20   0   99M  2064   684 S  1.3  0.1  5:43.37 nginx: worker process
26741 messagebu  20   0  474M 93036 69008 S  0.0  4.5  6:13.00 postgres: discourse discourse [local] idle

sar -W 1

root@daynhauhoc:~# sar -W 1
Linux 3.13.0-32-generic (daynhauhoc.com)        10/01/2015      _x86_64_        (2 CPU)

11:41:28 AM  pswpin/s pswpout/s
11:41:29 AM     24.00      0.00
11:41:30 AM   2547.00      7.00
11:41:31 AM   1095.00      0.00
11:41:32 AM    441.00      0.00
11:41:33 AM   3089.00      0.00
11:41:34 AM   6626.00      0.00
11:41:35 AM    309.00      0.00
11:41:36 AM    289.00      0.00
11:41:37 AM   2013.00      0.00
11:41:38 AM    373.00      0.00
11:41:39 AM   3811.00      0.00
11:41:40 AM    323.00      0.00
11:41:41 AM   1685.00      0.00
11:41:42 AM    157.00      0.00
11:41:43 AM    555.00      0.00
11:41:44 AM    123.00      0.00
11:41:45 AM     85.00      0.00
11:41:46 AM     92.00      0.00
11:41:47 AM   6396.00      2.00

# Re-run top and sort by memory (press M)

top - 11:42:45 up 1 day,  8:54,  1 user,  load average: 2.41, 1.82, 1.74
Tasks: 124 total,   2 running, 122 sleeping,   0 stopped,   0 zombie
%Cpu(s): 20.0 us,  3.0 sy,  0.0 ni, 75.7 id,  0.5 wa,  0.8 hi,  0.0 si,  0.0 st

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1362 1000      20   0 1088964 233576   3116 S  12.3 11.4 154:14.61 ruby
 1313 1000      20   0 1101256 237288   3140 S   9.6 11.6 152:01.19 ruby
 1336 1000      20   0 1092040 219628   4736 S   7.0 10.7 155:14.69 ruby
 1351 1000      20   0 1096132 231696   2920 S   7.0 11.3 156:47.54 ruby
 1204 landsca+  20   0  390596  36452    592 S   1.7  1.8  30:24.09 redis-server
 1221 www-data  20   0  102244   1756    700 S   1.3  0.1   5:37.15 nginx
16538 message+  20   0  488680 208992 178416 S   1.0 10.2   7:49.23 postmaster
26741 message+  20   0  486144  87488  64892 S   1.0  4.3   6:14.82 postmaster
17435 message+  20   0  490600 202760 161964 S   0.7  9.9   6:32.36 postmaster
31683 message+  20   0  488976 101524  76020 S   0.7  5.0   5:03.32 postmaster
    7 root      20   0       0      0      0 S   0.3  0.0   4:31.49 rcu_sched
 1222 www-data  20   0  102068   1824    708 S   0.3  0.1   5:31.31 nginx
 1223 www-data  20   0  102164   1632    628 S   0.3  0.1   6:16.30 nginx
 1231 message+  20   0  386356   4528   4440 S   0.3  0.2   1:33.68 postmaster
 1242 1000      20   0   30684   1616    444 S   0.3  0.1   3:15.25 unicorn_launche
 1299 1000      20   0 1376752 219380   4248 S   0.3 10.7  66:26.87 ruby
 9655 root      20   0   24940   1608   1116 R   0.3  0.1   0:00.08 top
21559 message+  20   0  484856 253396 225208 S   0.3 12.4   6:37.97 postmaster
27725 message+  20   0  486516 111248  83672 S   0.3  5.4   6:35.68 postmaster
31375 message+  20   0  485968 195244 165564 S   0.3  9.5   6:10.60 postmaster
32612 message+  20   0  479228 232516 206576 S   0.3 11.3   6:08.95 postmaster

(Jeff Atwood) #6

How many unicorn workers do you have specified in app.yml? It looks like five, which is more than the four we recommend for 2GB RAM?
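
For what it’s worth, a quick way to count the workers actually running on the host is something like the following (the brackets keep grep from matching itself):

ps aux | grep '[u]nicorn worker' | wc -l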


(Lê Trần Đạt) #7

Only 4:

env:
  LANG: en_US.UTF-8
  ## TODO: How many concurrent web requests are supported?
  ## With 2GB we recommend 3-4 workers, with 1GB only 2
  UNICORN_WORKERS: 4

# After reboot

# top

top - 11:48:15 up 3 min,  1 user,  load average: 1.72, 0.69, 0.27
Tasks: 126 total,   2 running, 124 sleeping,   0 stopped,   0 zombie
%Cpu(s): 20.3 us,  2.5 sy,  0.0 ni, 76.4 id,  0.0 wa,  0.8 hi,  0.0 si,  0.0 st
KiB Mem:   2049988 total,  1889068 used,   160920 free,     2212 buffers
KiB Swap:  2097148 total,   118864 used,  1978284 free.   200284 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1324 1000      20   0  435132 187308   4176 S  17.2  9.1   0:27.64 ruby
 1351 1000      20   0  435132 192464   4064 S  10.2  9.4   0:24.94 ruby
 1339 1000      20   0  435132 192276   3852 S   7.6  9.4   0:27.70 ruby
 1363 1000      20   0  435132 186976   4088 S   3.0  9.1   0:26.31 ruby
 1223 landsca+  20   0  353732 320896   1192 S   1.3 15.7   0:06.99 redis-server
 1235 www-data  20   0  101716   1964   1088 S   1.0  0.1   0:00.70 nginx
 1335 message+  20   0  403756  83224  71756 S   0.7  4.1   0:02.38 postmaster
 1345 message+  20   0  407868  91944  76360 S   0.7  4.5   0:02.96 postmaster
 1360 message+  20   0  403484  51660  40556 S   0.7  2.5   0:01.99 postmaster
 2148 root      20   0   24944   1612   1116 R   0.7  0.1   0:00.04 top
    7 root      20   0       0      0      0 S   0.3  0.0   0:00.79 rcu_sched
    9 root      20   0       0      0      0 S   0.3  0.0   0:00.31 rcuos/1
 1237 www-data  20   0  101904   2228   1108 S   0.3  0.1   0:00.87 nginx
 1301 1000      20   0 1133524 248532   3876 S   0.3 12.1   0:08.11 ruby
    1 root      20   0   33508    460    404 S   0.0  0.0   0:01.71 init

# htop


  1  [||||||||||||||||||||||||||||                                       36.8%]     Tasks: 69, 71 thr; 3 running
  2  [||||||||||||||||||||||                                             28.8%]     Load average: 1.77 0.82 0.33
  Mem[|||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||1659/2001MB]     Uptime: 00:04:21
  Swp[||||                                                          111/2047MB]

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1351 1000       20   0  424M  187M  4064 S 17.4  9.4  0:29.69 unicorn worker[2] -E production -c config/unicorn.conf.rb
 1339 1000       20   0  424M  187M  4012 S 14.2  9.4  0:31.46 unicorn worker[1] -E production -c config/unicorn.conf.rb
 1324 1000       20   0  424M  183M  4180 S 11.0  9.2  0:31.26 unicorn worker[0] -E production -c config/unicorn.conf.rb
 1363 1000       20   0  424M  182M  4088 S  8.4  9.1  0:30.03 unicorn worker[3] -E production -c config/unicorn.conf.rb
 1223 landscape  20   0  345M  313M  1192 S  2.6 15.7  0:07.54 /usr/bin/redis-server *:6379
 1360 messagebu  20   0  395M 54764 42112 S  2.6  2.7  0:02.46 postgres: discourse discourse [local] idle
 2566 root       20   0 26256  2508  1448 R  1.9  0.1  0:00.26 htop
 1335 messagebu  20   0  396M 86128 72688 S  1.3  4.2  0:02.72 postgres: discourse discourse [local] idle
 1235 www-data   20   0   99M  2204  1140 S  1.3  0.1  0:00.91 nginx: worker process
 1369 messagebu  20   0  397M 77276 62636 S  0.6  3.8  0:02.81 postgres: discourse discourse [local] idle
 1345 messagebu  20   0  398M 92732 77092 S  0.6  4.5  0:03.20 postgres: discourse discourse [local] idle
 1301 1000       20   0 1110M  247M  7364 S  0.6 12.4  0:12.00 sidekiq 3.5.0 discourse [0 of 5 busy]
 1342 1000       20   0  424M  187M  4012 S  0.6  9.4  0:00.05 unicorn worker[1] -E production -c config/unicorn.conf.rb
 1343 1000       20   0  424M  187M  4012 S  0.6  9.4  0:00.14 unicorn worker[1] -E production -c config/unicorn.conf.rb
 1358 1000       20   0  424M  187M  4064 S  0.6  9.4  0:00.15 unicorn worker[2] -E production -c config/unicorn.conf.rb
 1318 1000       20   0 1110M  247M  7364 S  0.0 12.4  0:00.36 sidekiq 3.5.0 discourse [0 of 5 busy]
 1236 www-data   20   0   99M  1944  1096 S  0.0  0.1  0:00.45 nginx: worker process
 1219 1000       20   0 29684  3284  1068 S  0.0  0.2  0:00.99 /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
 1243 messagebu  20   0  377M  4612  4168 S  0.0  0.2  0:00.17 postgres: wal writer process
 1320 1000       20   0 1110M  247M  7364 S  0.0 12.4  0:00.45 sidekiq 3.5.0 discourse [0 of 5 busy]
 1332 1000       20   0 1110M  247M  7364 S  0.0 12.4  0:00.36 sidekiq 3.5.0 discourse [0 of 5 busy]
 1237 www-data   20   0   99M  2228  1108 R  0.0  0.1  0:01.00 nginx: worker process
 1234 www-data   20   0   99M  2500  1104 S  0.0  0.1  0:01.82 nginx: worker process

sar -W 1

root@daynhauhoc:~# sar -W 1
Linux 3.13.0-32-generic (daynhauhoc.com)        10/01/2015      _x86_64_        (2 CPU)

11:49:13 AM  pswpin/s pswpout/s
11:49:14 AM      0.00      0.00
11:49:15 AM      0.00      0.00
11:49:16 AM    138.00      0.00
11:49:17 AM    253.00      0.00
11:49:18 AM     95.00    166.00
11:49:19 AM     16.00      0.00
11:49:20 AM      8.00      0.00
11:49:21 AM      0.00      0.00
11:49:22 AM      0.00      0.00
11:49:23 AM      0.00      0.00
11:49:24 AM      8.00      0.00
11:49:25 AM      0.00      0.00
11:49:26 AM      0.00      0.00
11:49:27 AM      0.00      0.00
11:49:28 AM      0.00      0.00
11:49:29 AM      0.00      1.00
11:49:30 AM      8.00      0.00
11:49:31 AM      0.00      0.00
11:49:32 AM      0.00      0.00
11:49:33 AM      0.00      0.00
11:49:34 AM      0.00      0.00
11:49:35 AM      0.00      0.00
11:49:36 AM      0.00      0.00
11:49:37 AM      0.00      0.00
11:49:38 AM      0.00      0.00
11:49:39 AM    403.00      0.00
11:49:40 AM     55.45      0.00
11:49:41 AM      0.00      0.00
11:49:42 AM      0.00      0.00

# Re-run top and sort by memory (press M)

top - 11:50:07 up 5 min,  1 user,  load average: 1.94, 1.08, 0.46
Tasks: 118 total,   5 running, 113 sleeping,   0 stopped,   0 zombie
%Cpu(s): 55.2 us,  4.8 sy,  0.0 ni, 38.8 id,  0.0 wa,  1.2 hi,  0.0 si,  0.0 st

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 1324 1000      20   0  435132 187716   4216 R  47.1  9.2   0:41.41 ruby
 1339 1000      20   0  435132 192628   4092 R  24.2  9.4   0:42.13 ruby
 1351 1000      20   0 1005992 206108   9532 R  15.7 10.1   0:40.36 ruby
 1363 1000      20   0 1005992 199592   9036 R  15.7  9.7   0:40.98 ruby
 1345 message+  20   0  409772 101816  84240 S   4.8  5.0   0:04.12 postmaster
 1369 message+  20   0  410288  86040  68344 S   4.8  4.2   0:03.77 postmaster
 1237 www-data  20   0  101904   2228   1108 S   2.4  0.1   0:01.74 nginx
 1360 message+  20   0  412436 109480  93004 S   2.4  5.3   0:03.76 postmaster
 1405 root      20   0  105764    896    796 S   2.4  0.0   0:00.38 sshd
    7 root      20   0       0      0      0 S   1.2  0.0   0:01.12 rcu_sched
 1223 landsca+  20   0  353732 320896   1192 S   1.2 15.7   0:08.97 redis-server
 1335 message+  20   0  405976  89024  75208 S   1.2  4.3   0:03.39 postmaster
 2805 root      20   0   24936   1600   1116 R   1.2  0.1   0:00.04 top
    1 root      20   0   33508   1044    428 S   0.0  0.1   0:01.71 init
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.00 kthreadd

(Jeff Atwood) #8

OK, so what’s the issue you are reporting? Why do you keep pasting top output over and over? What’s the exact problem?

Edit: I see, you are basically reporting memory thrashing. How many active users do you have? What does your /about page show? It could be that your site is active enough to need more resources.

Aha, I see you did link it, so here are your /about stats:

              All Time  7 Days  30 Days
Topics            6.8k     217      911
Posts            60.3k    1.5k     6.2k
New Users         5.2k     166      663
Active Users         —     931     1.8k
Likes            54.0k     808     3.6k

Comparing those to the /about page here on meta, you have a fairly busy site – it could be that you need a bit more resources.


(Kane York) #9

Reduce UNICORN_WORKERS from 4 to 2 or 3 in app.yml. That should solve your memory issues.


(Lê Trần Đạt) #10

Reduced to 3

env:
  LANG: en_US.UTF-8
  ## TODO: How many concurrent web requests are supported?
  ## With 2GB we recommend 3-4 workers, with 1GB only 2
  UNICORN_WORKERS: 3

Tried

cd /var/discourse 
./launcher enter app
sv restart unicorn

and rebooted, but the number of unicorn workers is still 4:

ps aux|grep unicorn
root      1212  0.0  0.0    168     4 ?        Ss   21:28   0:00 runsv unicorn
1000      1219  0.6  0.1  29680  3868 ?        S    21:28   0:00 /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
1000      1246 21.0  9.1 406256 187544 ?       Sl   21:28   0:21 unicorn master -E production -c config/unicorn.conf.rb
1000      1411  8.1  9.6 431036 197352 ?       Sl   21:29   0:06 unicorn worker[0] -E production -c config/unicorn.conf.rb
1000      1451  7.2 10.1 439228 207520 ?       Sl   21:29   0:05 unicorn worker[1] -E production -c config/unicorn.conf.rb
1000      1463  7.1  9.7 431036 199412 ?       Rl   21:29   0:05 unicorn worker[2] -E production -c config/unicorn.conf.rb
1000      1471  6.2  9.8 435132 201436 ?       Sl   21:29   0:04 unicorn worker[3] -E production -c config/unicorn.conf.rb

(Kane York) #11

You want to do ./launcher restart app (or rebuild), not sv restart unicorn.


(Lê Trần Đạt) #12

I did a full server reboot; do I still need to run ./launcher restart app?

Edit: after running ./launcher restart app, there are still 4 unicorn workers running:

ps aux | grep unicorn
root      5371  0.0  0.0    168     4 ?        Ss   21:52   0:00 runsv unicorn
1000      5380  1.0  0.1  29680  3864 ?        S    21:52   0:00 /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
1000      5405 42.2  9.0 406256 186312 ?       Sl   21:52   0:21 unicorn master -E production -c config/unicorn.conf.rb
1000      5473 14.5  9.9 435132 203432 ?       Sl   21:53   0:04 unicorn worker[0] -E production -c config/unicorn.conf.rb
1000      5489 15.3  9.7 431036 198880 ?       Sl   21:53   0:04 unicorn worker[1] -E production -c config/unicorn.conf.rb
1000      5498 13.3  9.5 431036 196712 ?       Sl   21:53   0:03 unicorn worker[2] -E production -c config/unicorn.conf.rb
1000      5510  9.4  9.2 426736 188932 ?       Sl   21:53   0:02 unicorn worker[3] -E production -c config/unicorn.conf.rb

(Matt Palmer) #13

You need to ./launcher rebuild app in order to have the changes you made to containers/app.yml applied to your running environment.
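
From the host, that is:

cd /var/discourse
./launcher rebuild app

The rebuild recreates the container from containers/app.yml, so the lowered UNICORN_WORKERS value is actually picked up; expect several minutes of downtime while it bootstraps.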


(Sam Saffron) #14

Either that, or:

./launcher destroy app
./launcher start app

(Sam Saffron) #15

Memory use on Redis seems quite high. Can you try flushing it and seeing how fast it creeps back up? Monitor it daily for 5 days.

./launcher enter app
redis-cli flushall
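
To put a number on how fast it creeps back, you can check Redis memory use before and after the flush, and again each day (Redis is on its default port inside the container, per the htop output above):

redis-cli info memory | grep used_memory_human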