Julien_J
(Julien J.)
November 2, 2020, 2:24pm
1
Hello,
I tried to update Discourse, but I got the following message:
Sorry, there was an error updating Discourse. Please check the logs below.
********************************************************
*** Please be patient, the next steps may take some time ***
********************************************************
Restarting Unicorn to free up memory
Restarting unicorn pid: 663
Waiting for Unicorn to reload.
Waiting for Unicorn to reload..
Waiting for Unicorn to reload...
Waiting for Unicorn to reload....
Using oj 3.10.15
Using optimist 3.0.1
Using pg 1.2.3
Using r2 0.2.7
Using raindrops 0.19.1
Using rchardet 1.8.0
Using rinku 2.0.6
Using rotp 6.2.0
Using rqrcode_core 0.1.2
Using rtlit 0.0.5
Using rubyzip 2.3.0
Using tilt 2.0.10
Using sshkey 2.0.0
Using stackprof 0.2.16
Using unf_ext 0.0.7.7
Using xorcist 1.1.2
Using i18n 1.8.5
Using tzinfo 1.2.7
Using nokogiri 1.10.10
Using rack-test 1.1.0
Using mail 2.7.1
Using addressable 2.7.0
Using aws-sigv4 1.2.0
Using barber 0.12.2
Using cose 1.2.0
Using ember-data-source 3.0.2
Using sprockets 3.7.2
Using discourse_image_optim 0.26.2
Using faraday 1.1.0
Using request_store 1.5.0
Using message_bus 3.3.4
Using pry 0.13.1
Using rack-mini-profiler 2.2.0
Using rack-protection 2.1.0
Using uglifier 4.2.0
Using logstash-logger 0.26.1
Using mini_racer 0.3.1
Using sidekiq 6.1.2
Using mini_suffix 0.3.0
Using nokogumbo 2.0.2
Using omniauth 1.9.1
Using puma 5.0.4
Using rbtrace 0.4.14
Using redis-namespace 1.8.0
Using rqrcode 1.1.2
Using ruby-readability 0.7.0
Using sassc 2.0.1
Using unf 0.1.4
Using unicorn 5.7.0
Using webpush 1.0.0
Using activesupport 6.0.3.3
Using loofah 2.7.0
Fetching bootsnap 1.5.0
Using ember-handlebars-template 0.8.0
Using mini_scheduler 0.12.3
Using oauth2 1.4.4
Using omniauth-oauth 1.1.0
Using sanitize 5.2.1
Using pry-byebug 3.9.0
Using pry-rails 0.3.9
Using rails-dom-testing 2.0.3
Using rails-html-sanitizer 1.3.0
Using globalid 0.4.2
Using activemodel 6.0.3.3
Using aws-sdk-core 3.99.1
Using css_parser 1.7.1
Using actionview 6.0.3.3
Using activejob 6.0.3.3
Using active_model_serializers 0.8.4
Using activerecord 6.0.3.3
Using aws-sdk-kms 1.31.0
Using aws-sdk-sns 1.25.1
Using omniauth-oauth2 1.7.0
Using omniauth-twitter 1.4.0
Using onebox 2.1.4
Using actionpack 6.0.3.3
Using actionview_precompiler 0.2.3
Using aws-sdk-s3 1.66.0
Using omniauth-facebook 8.0.0
Using omniauth-github 1.4.0
Using omniauth-google-oauth2 0.8.0
Using seed-fu 2.3.9
Using actionmailer 6.0.3.3
Using railties 6.0.3.3
Using sprockets-rails 3.2.2
Using jquery-rails 4.4.0
Using lograge 0.11.2
Using rails_failover 0.5.7
Using rails_multisite 2.5.0
Using sassc-rails 2.1.2
Using discourse-ember-rails 0.18.6
Installing bootsnap 1.5.0 with native extensions
Bundle complete! 123 Gemfile dependencies, 161 gems now installed.
Gems in the groups test and development were not installed.
Bundled gems are installed into `./vendor/bundle`
$ bundle exec rake plugin:pull_compatible_all
docker_manager is already at the latest compatible version
discourse-data-explorer is already at the latest compatible version
$ SKIP_POST_DEPLOYMENT_MIGRATIONS=1 bundle exec rake multisite:migrate
The multisite migrator is running using 1 thread
Migrating default
== 20201027110546 CreateLinkedTopics: migrating ===============================
-- create_table(:linked_topics)
-> 0.0524s
-- add_index(:linked_topics, [:topic_id, :original_topic_id], {:unique=>true})
-> 0.0066s
-- add_index(:linked_topics, [:topic_id, :sequence], {:unique=>true})
-> 0.0045s
== 20201027110546 CreateLinkedTopics: migrated (0.0676s) ======================
Seeding default
[...]
Finished compressing locales/ko-0c530732e52b234cd31ea1959ec4b5127cfcc2cb5b076d4999abfa0530e5bba5.js : 0.11 seconds
8625116.65540771 Compressing: application-1e74fe54a11795d2a94b9b90ac1f18294214d956e95b882737a05319d5d11ff9.js
uglifyjs '/var/www/discourse/public/assets/_application-1e74fe54a11795d2a94b9b90ac1f18294214d956e95b882737a05319d5d11ff9.js' -m -c -o '/var/www/discourse/public/assets/application-1e74fe54a11795d2a94b9b90ac1f18294214d956e95b882737a05319d5d11ff9.js' --source-map "base='/var/www/discourse/public/assets',root='/assets',url='/assets/application-1e74fe54a11795d2a94b9b90ac1f18294214d956e95b882737a05319d5d11ff9.js.map'"
Killed
Docker Manager: FAILED TO UPGRADE
#<RuntimeError: RuntimeError>
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:178:in `run'
/var/www/discourse/plugins/docker_manager/lib/docker_manager/upgrader.rb:86:in `upgrade'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:19:in `block in <main>'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `fork'
/var/www/discourse/plugins/docker_manager/scripts/docker_manager_upgrade.rb:6:in `<main>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:59:in `load'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/commands/runner/runner_command.rb:42:in `perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/command.rb:27:in `run'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor/invocation.rb:127:in `invoke_command'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/thor-1.0.1/lib/thor.rb:392:in `dispatch'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/command/base.rb:69:in `perform'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/command.rb:46:in `invoke'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/railties-6.0.3.3/lib/rails/commands.rb:18:in `<main>'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:23:in `block in require_with_bootsnap_lfi'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/loaded_features_index.rb:92:in `register'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:22:in `require_with_bootsnap_lfi'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/bootsnap-1.4.9/lib/bootsnap/load_path_cache/core_ext/kernel_require.rb:31:in `require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:324:in `block in require'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:291:in `load_dependency'
/var/www/discourse/vendor/bundle/ruby/2.6.0/gems/activesupport-6.0.3.3/lib/active_support/dependencies.rb:324:in `require'
bin/rails:17:in `<main>'
Starting up 1 Unicorn worker(s) that were stopped initially
1 like
Ed_S
(Ed S)
November 2, 2020, 3:15pm
2
Does your machine have adequate RAM and swap? The free or top commands can tell you. Also, you could try
dmesg | egrep -3i kill
to see if there’s information about a process being killed. I suspect out-of-memory (the OOM killer).
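To put numbers behind that, here is a minimal sketch (assuming a Linux host, where /proc/meminfo is always present even if `free` isn’t installed):

```shell
# Read RAM/swap figures (in kB) straight from the kernel.
mem_total=$(awk '/^MemTotal:/  {print $2}' /proc/meminfo)
swap_total=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
swap_free=$(awk '/^SwapFree:/  {print $2}' /proc/meminfo)
echo "RAM: ${mem_total} kB, swap free: ${swap_free}/${swap_total} kB"

# Ask the kernel log for OOM-killer activity with a little context.
# (Usually needs root; `journalctl -k` is an alternative on systemd hosts.)
dmesg 2>/dev/null | grep -i -B1 -A3 'killed process' | tail -n 20
```

If free swap is at or near 0 kB when the upgrade dies, the OOM killer is the likely culprit.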
1 like
Ed_S
(Ed S)
November 2, 2020, 3:51pm
3
Well, oddly enough, my upgrade just failed the same way! I had successfully updated docker manager and data explorer, and got the ‘killed’ failure at the same step, running uglifyjs on the same file.
I have a Digital Ocean droplet with 1 GB RAM and 2 GB swap, and a relatively small forum (a backup is about 700 MB).
Here’s my dmesg log, FWIW:
[36473663.447053] systemd-journal invoked oom-killer: gfp_mask=0x14200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
[36473663.447056] systemd-journal cpuset=/ mems_allowed=0
[36473663.447070] CPU: 0 PID: 22784 Comm: systemd-journal Not tainted 4.15.0-60-generic #67-Ubuntu
[36473663.447071] Hardware name: DigitalOcean Droplet, BIOS 20171212 12/12/2017
[36473663.447081] Call Trace:
[36473663.447349] dump_stack+0x63/0x8b
[36473663.447388] dump_header+0x71/0x285
[36473663.447393] oom_kill_process+0x21f/0x420
[36473663.447395] out_of_memory+0x2b6/0x4d0
[36473663.447405] __alloc_pages_slowpath+0xa53/0xe00
[36473663.447410] __alloc_pages_nodemask+0x29a/0x2c0
[36473663.447774] alloc_pages_current+0x6a/0xe0
[36473663.447937] __page_cache_alloc+0x81/0xa0
[36473663.447942] filemap_fault+0x3ea/0x6f0
[36473663.447953] ? page_add_file_rmap+0x134/0x180
[36473663.447957] ? filemap_map_pages+0x181/0x390
[36473663.448469] ext4_filemap_fault+0x31/0x44
[36473663.448476] __do_fault+0x5b/0x115
[36473663.448479] __handle_mm_fault+0xdef/0x1290
[36473663.448482] handle_mm_fault+0xb1/0x210
[36473663.448502] __do_page_fault+0x281/0x4b0
[36473663.448507] do_page_fault+0x2e/0xe0
[36473663.448683] ? async_page_fault+0x2f/0x50
[36473663.448851] do_async_page_fault+0x51/0x80
[36473663.448856] async_page_fault+0x45/0x50
[36473663.448865] RIP: 0033:0x7fea40ede2a0
[36473663.448866] RSP: 002b:00007ffeefb58698 EFLAGS: 00010246
[36473663.448869] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
[36473663.448870] RDX: 00005591e7e980d8 RSI: 0000000000000001 RDI: 00007ffeefb586e8
[36473663.448871] RBP: 0000000000000003 R08: 0000000000000000 R09: 0000000000000000
[36473663.448872] R10: 0000000000000000 R11: 00005591e7e980d7 R12: 0000000000000001
[36473663.448873] R13: 00007ffeefb58f2b R14: 0005b3218d061ac0 R15: 00007ffeefb5b110
[36473663.448876] Mem-Info:
[36473663.448881] active_anon:103502 inactive_anon:103644 isolated_anon:0
active_file:11 inactive_file:28 isolated_file:0
unevictable:0 dirty:0 writeback:0 unstable:0
slab_reclaimable:6881 slab_unreclaimable:11795
mapped:200 shmem:924 pagetables:5723 bounce:0
free:12159 free_pcp:177 free_cma:0
[36473663.448886] Node 0 active_anon:414008kB inactive_anon:414576kB active_file:44kB inactive_file:112kB unevictable:0kB isolated(anon):0kB isolated(file):0kB mapped:800kB dirty:0kB writeback:0kB shmem:3696kB shmem_thp: 0kB shmem_pmdmapped: 0kB anon_thp: 0kB writeback_tmp:0kB unstable:0kB all_unreclaimable? no
[36473663.448887] Node 0 DMA free:4392kB min:756kB low:944kB high:1132kB active_anon:5156kB inactive_anon:4388kB active_file:0kB inactive_file:0kB unevictable:0kB writepending:0kB present:15992kB managed:15908kB mlocked:0kB kernel_stack:16kB pagetables:376kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB
[36473663.448897] lowmem_reserve[]: 0 909 909 909 909
[36473663.448901] Node 0 DMA32 free:44244kB min:44296kB low:55368kB high:66440kB active_anon:408852kB inactive_anon:410188kB active_file:44kB inactive_file:112kB unevictable:0kB writepending:0kB present:1032172kB managed:993356kB mlocked:0kB kernel_stack:3360kB pagetables:22516kB bounce:0kB free_pcp:708kB local_pcp:708kB free_cma:0kB
[36473663.448906] lowmem_reserve[]: 0 0 0 0 0
[36473663.448910] Node 0 DMA: 20*4kB (UME) 9*8kB (UE) 27*16kB (UME) 25*32kB (UE) 13*64kB (UME) 7*128kB (UE) 3*256kB (UME) 1*512kB (E) 0*1024kB 0*2048kB 0*4096kB = 4392kB
[36473663.448924] Node 0 DMA32: 5*4kB (MH) 4460*8kB (UEH) 236*16kB (UEH) 107*32kB (UEH) 7*64kB (H) 3*128kB (H) 2*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 44244kB
[36473663.449132] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=1048576kB
[36473663.449147] Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
[36473663.449149] 6868 total pagecache pages
[36473663.449151] 5896 pages in swap cache
[36473663.449153] Swap cache stats: add 105451532, delete 105445636, find 1375786355/1418920464
[36473663.449153] Free swap = 0kB
[36473663.449154] Total swap = 2097144kB
[36473663.449156] 262041 pages RAM
[36473663.449156] 0 pages HighMem/MovableOnly
[36473663.449157] 9725 pages reserved
[36473663.449158] 0 pages cma reserved
[36473663.449158] 0 pages hwpoisoned
[36473663.449160] [ pid ] uid tgid total_vm rss pgtables_bytes swapents oom_score_adj name
[36473663.449180] [ 404] 0 404 24427 0 90112 53 0 lvmetad
[36473663.449184] [ 408] 0 408 11305 81 114688 169 -1000 systemd-udevd
[36473663.449187] [ 621] 62583 621 35482 0 188416 183 0 systemd-timesyn
[36473663.449189] [ 697] 100 697 17992 0 167936 194 0 systemd-network
[36473663.449192] [ 715] 101 715 17689 0 176128 187 0 systemd-resolve
[36473663.449194] [ 889] 0 889 163096 0 163840 537 0 lxcfs
[36473663.449197] [ 891] 102 891 65759 42 163840 406 0 rsyslogd
[36473663.449199] [ 894] 0 894 7937 0 102400 77 0 cron
[36473663.449202] [ 896] 0 896 7083 0 102400 61 0 atd
[36473663.449204] [ 900] 103 900 12545 0 143360 200 -900 dbus-daemon
[36473663.449212] [ 907] 0 907 42752 2 237568 1979 0 networkd-dispat
[36473663.449215] [ 910] 0 910 17665 2 176128 207 0 systemd-logind
[36473663.449217] [ 914] 0 914 71996 32 204800 198 0 accounts-daemon
[36473663.449220] [ 919] 0 919 194250 1174 303104 3846 0 containerd
[36473663.449222] [ 920] 0 920 230106 2962 585728 8271 -500 dockerd
[36473663.449225] [ 925] 0 925 4103 0 77824 38 0 agetty
[36473663.449227] [ 931] 0 931 3722 0 73728 36 0 agetty
[36473663.449230] [ 939] 0 939 72220 0 200704 249 0 polkitd
[36473663.449237] [ 942] 0 942 18074 3 176128 187 -1000 sshd
[36473663.449240] [ 951] 0 951 46930 0 266240 2008 0 unattended-upgr
[36473663.449242] [22784] 0 22784 26418 41 225280 2854 0 systemd-journal
[36473663.449245] [25413] 0 25413 101377 0 122880 360 -500 docker-proxy
[36473663.449248] [25424] 0 25424 119810 0 126976 362 -500 docker-proxy
[36473663.449256] [25430] 0 25430 27189 62 73728 194 -999 containerd-shim
[36473663.449259] [25454] 0 25454 1666 1 49152 79 0 boot
[36473663.449261] [25534] 0 25534 578 6 40960 15 0 runsvdir
[36473663.449263] [25535] 0 25535 540 0 40960 26 0 runsv
[36473663.449266] [25536] 0 25536 540 0 40960 22 0 runsv
[36473663.449268] [25537] 0 25537 540 0 40960 24 0 runsv
[36473663.449270] [25538] 0 25538 540 0 40960 29 0 runsv
[36473663.449273] [25539] 0 25539 540 0 40960 24 0 runsv
[36473663.449275] [25540] 0 25540 540 3 40960 13 0 runsv
[36473663.449277] [25541] 0 25541 576 0 45056 35 0 svlogd
[36473663.449280] [25542] 105 25542 53482 35 159744 442 0 postmaster
[36473663.449282] [25543] 0 25543 13506 1 81920 282 0 nginx
[36473663.449284] [25544] 1000 25544 3783 58 57344 62 0 unicorn_launche
[36473663.449286] [25545] 0 25545 576 0 45056 25 0 svlogd
[36473663.449289] [25547] 0 25547 2110 27 57344 42 0 cron
[36473663.449291] [25548] 0 25548 39047 0 77824 203 0 rsyslogd
[36473663.449293] [25546] 106 25546 23422 738 192512 5508 0 redis-server
[36473663.449296] [25561] 33 25561 13971 190 86016 465 0 nginx
[36473663.449298] [25562] 33 25562 13647 34 73728 302 0 nginx
[36473663.449305] [25570] 105 25570 53537 43 417792 510 0 postmaster
[36473663.449308] [25571] 105 25571 53515 67 417792 459 0 postmaster
[36473663.449310] [25572] 105 25572 53482 32 147456 457 0 postmaster
[36473663.449313] [25573] 105 25573 53650 117 159744 494 0 postmaster
[36473663.449315] [25574] 105 25574 17255 135 135168 443 0 postmaster
[36473663.449318] [25575] 105 25575 53616 46 147456 510 0 postmaster
[36473663.449320] [ 4226] 1000 4226 2211682 0 1052672 65182 0 ruby
[36473663.449323] [ 4227] 1000 4227 2211682 0 1052672 65183 0 ruby
[36473663.449326] [ 4228] 1000 4228 2209633 1 1048576 63664 0 ruby
[36473663.449329] [ 5795] 1000 5795 2192955 0 901120 56821 0 ruby
[36473663.449331] [ 5796] 1000 5796 2192955 0 901120 56821 0 ruby
[36473663.449333] [ 5797] 1000 5797 2198075 0 942080 59138 0 ruby
[36473663.449335] [ 7772] 1000 7772 2191931 0 888832 60531 0 ruby
[36473663.449338] [ 7773] 1000 7773 2193980 0 892928 60533 0 ruby
[36473663.449340] [ 7774] 1000 7774 2201147 0 966656 58519 0 ruby
[36473663.449343] [ 2288] 1000 2288 2209634 0 1060864 61931 0 ruby
[36473663.449351] [ 2289] 1000 2289 2209634 0 1060864 61931 0 ruby
[36473663.449353] [ 2290] 1000 2290 2209634 0 1060864 62542 0 ruby
[36473663.449356] [ 5703] 1000 5703 85022 31182 630784 9841 0 ruby
[36473663.449358] [ 5769] 1000 5769 85061 13030 643072 37318 0 ruby
[36473663.449360] [ 5860] 105 5860 56272 91 311296 2231 0 postmaster
[36473663.449363] [ 5966] 1000 5966 2195273 31501 925696 25578 0 ruby
[36473663.449365] [ 6043] 105 6043 54544 368 352256 1199 0 postmaster
[36473663.449368] [ 6288] 1000 6288 597 0 40960 21 0 sh
[36473663.449370] [ 6289] 1000 6289 4448400 61634 1380352 48635 0 bundle
[36473663.449373] [ 7218] 1000 7218 597 17 40960 0 0 sh
[36473663.449375] [ 7219] 1000 7219 198147 65087 4608000 134 0 node
[36473663.449377] Out of memory: Kill process 6289 (bundle) score 142 or sacrifice child
[36473663.454341] Killed process 7218 (sh) total-vm:2388kB, anon-rss:68kB, file-rss:0kB, shmem-rss:0kB
[36473663.531344] oom_reaper: reaped process 7218 (sh), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
1 like
Ed_S
(Ed S)
November 2, 2020, 4:01pm
4
At the cost of a little downtime, I got through the upgrade by applying the usual workaround:
./launcher rebuild app
However, since 1 GB RAM + 2 GB swap is a recommended minimum configuration for a small forum, it feels like something is amiss here.
During the rebuild, the virtual memory usage was not too bad; the low point, during the same uglifyjs step, was:
# free
total used free shared buff/cache available
Mem: 1009264 380204 537900 5712 91160 510756
Swap: 2097144 498540 1598604
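For anyone who wants to stay on the in-browser upgrade path rather than rebuilding, one common mitigation is to bolt on extra swap just for the duration of the upgrade. A sketch follows; the file path and 2 GB size are purely illustrative, and with DRY_RUN=1 (the default here) the commands are only printed rather than executed (set DRY_RUN=0 and run as root to actually apply them):

```shell
#!/bin/sh
# Temporarily add a swap file around a memory-hungry upgrade, then remove it.
DRY_RUN=${DRY_RUN:-1}
SWAPFILE=${SWAPFILE:-/swapfile-upgrade}
SIZE_MB=${SIZE_MB:-2048}

# Print commands in dry-run mode; execute them otherwise.
run() {
  if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi
}

run fallocate -l "${SIZE_MB}M" "$SWAPFILE"  # reserve the file on disk
run chmod 600 "$SWAPFILE"                   # swap files must not be world-readable
run mkswap "$SWAPFILE"                      # format it as swap space
run swapon "$SWAPFILE"                      # enable it

# ... run the /admin/upgrade step here ...

run swapoff "$SWAPFILE"                     # then tear it down again
run rm -f "$SWAPFILE"
```

The same pressure can also be relieved, as above, by `./launcher rebuild app`, at the cost of a few minutes of downtime.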
4 likes
“killed” means out of memory.
riking
(Kane York)
November 4, 2020, 6:04am
7
This is probably related to the docker manager changes that try to keep the site running more during the upgrade, which would have increased the during-upgrade RAM requirements.
Container rebuild will always work because it takes down the site temporarily, so it gets maximum RAM.
3 likes
system
(system)
Closed
December 4, 2020, 6:04am
8
This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.