Nginx issue: when I stop and start the app I get "502 Bad Gateway"

When I do `./launcher stop app` and then `./launcher start app`, the site says 502 Bad Gateway. I don't know what the issue is; I tried to rebuild, but no luck.

Please advise how I can fix this.

Could you share the rebuild logs?

```
root@amsaal:/var/discourse# ./launcher logs app
x86_64 arch detected.
run-parts: executing /etc/runit/1.d/00-ensure-links
run-parts: executing /etc/runit/1.d/00-fix-var-logs
run-parts: executing /etc/runit/1.d/01-cleanup-web-pids
run-parts: executing /etc/runit/1.d/anacron
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
run-parts: executing /etc/runit/1.d/letsencrypt
[Mon 15 Apr 2024 10:12:06 AM UTC] Domains not changed.
[Mon 15 Apr 2024 10:12:06 AM UTC] Skip, Next renewal time is: 2024-06-12T11:28:31Z
[Mon 15 Apr 2024 10:12:06 AM UTC] Add '--force' to force to renew.
[Mon 15 Apr 2024 10:12:07 AM UTC] Installing key to: /shared/ssl/amsaal.net.key
[Mon 15 Apr 2024 10:12:07 AM UTC] Installing full chain to: /shared/ssl/amsaal.net.cer
[Mon 15 Apr 2024 10:12:07 AM UTC] Run reload cmd: sv reload nginx
warning: nginx: unable to open supervise/ok: file does not exist
[Mon 15 Apr 2024 10:12:07 AM UTC] Reload error for :
[Mon 15 Apr 2024 10:12:07 AM UTC] Domains not changed.
[Mon 15 Apr 2024 10:12:07 AM UTC] Skip, Next renewal time is: 2024-06-12T11:28:38Z
[Mon 15 Apr 2024 10:12:07 AM UTC] Add '--force' to force to renew.
[Mon 15 Apr 2024 10:12:08 AM UTC] Installing key to: /shared/ssl/amsaal.net_ecc.key
[Mon 15 Apr 2024 10:12:08 AM UTC] Installing full chain to: /shared/ssl/amsaal.net_ecc.cer
[Mon 15 Apr 2024 10:12:08 AM UTC] Run reload cmd: sv reload nginx
warning: nginx: unable to open supervise/ok: file does not exist
[Mon 15 Apr 2024 10:12:08 AM UTC] Reload error for :
Started runsvdir, PID is 537
ok: run: redis: (pid 550) 0s
ok: run: postgres: (pid 551) 0s
nginx: [warn] the "listen ... http2" directive is deprecated, use the "http2" directive instead in /etc/nginx/conf.d/discourse.conf:60
supervisor pid: 545 unicorn pid: 577
root@amsaal:/var/discourse#
```

That’s not what I meant. Could you do `./launcher rebuild app` and then share the output from that? (Also, please put the output in code fences; it makes the topic easier to read.)

It would make it a little easier for the user to rebuild it this way, as it will log stdout to a file:

```
./launcher rebuild app >> rebuild.log
```

If you want a different file per rebuild (this variant also captures stderr via `2>&1`):

```
./launcher rebuild app > "rebuild-$(date -Imin).log" 2>&1
```
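To illustrate what that redirection does, here is a small sketch. The `rebuild` function is a hypothetical stand-in for `./launcher rebuild app` (which needs a real server to run); the redirection and the `tail -f` trick are the actual point:

```shell
#!/bin/sh
# Hypothetical stand-in for ./launcher rebuild app, for illustration only.
rebuild() {
  echo "rebuild output on stdout"
  echo "rebuild warning on stderr" >&2
}

# Capture stdout AND stderr into a timestamped log file.
logfile="rebuild-$(date -Imin).log"
rebuild > "$logfile" 2>&1

# While the real rebuild runs, you can follow progress from a second
# terminal with:  tail -f "$logfile"
cat "$logfile"
```

Both streams end up in the same file, so errors that the rebuild prints to stderr are not lost.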

It takes a minute or two after starting the container for it to start serving stuff. Have you tried waiting a few minutes before going to the web site?

```
x86_64 arch detected.
Ensuring launcher is up to date
Launcher is up-to-date
Stopping old container
app
2.0.20231218-0429: Pulling from discourse/base
Digest: sha256:468f70b9bb4c6d0c6c2bbb3efc1a5e12d145eae57bdb6946b7fe5558beb52dc1
Status: Image is up to date for discourse/base:2.0.20231218-0429
docker.io/discourse/base:2.0.20231218-0429
/usr/local/lib/ruby/gems/3.2.0/gems/pups-1.2.1/lib/pups.rb
/usr/local/bin/pups --stdin
97:C 15 Apr 2024 18:52:04.329 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
97:C 15 Apr 2024 18:52:04.329 # Redis version=7.0.7, bits=64, commit=00000000, modified=0, pid=97, just started
97:C 15 Apr 2024 18:52:04.330 # Configuration loaded
97:M 15 Apr 2024 18:52:04.331 * monotonic clock: POSIX clock_gettime
97:M 15 Apr 2024 18:52:04.336 * Running mode=standalone, port=6379.
97:M 15 Apr 2024 18:52:04.336 # Server initialized
97:M 15 Apr 2024 18:52:04.337 * Loading RDB produced by version 7.0.7
97:M 15 Apr 2024 18:52:04.337 * RDB age 31 seconds
97:M 15 Apr 2024 18:52:04.337 * RDB memory usage when created 23.25 Mb
97:M 15 Apr 2024 18:52:04.451 * Done loading RDB, keys loaded: 1351, keys expired: 5.
97:M 15 Apr 2024 18:52:04.461 * DB loaded from disk: 0.124 seconds
97:M 15 Apr 2024 18:52:04.461 * Ready to accept connections
3507:C 15 Apr 2024 18:58:01.238 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
3507:C 15 Apr 2024 18:58:01.238 # Redis version=7.0.7, bits=64, commit=00000000, modified=0, pid=3507, just started
3507:C 15 Apr 2024 18:58:01.238 # Configuration loaded
3507:M 15 Apr 2024 18:58:01.239 * monotonic clock: POSIX clock_gettime
3507:M 15 Apr 2024 18:58:01.240 # Warning: Could not create server TCP listening socket *:6379: bind: Address already in use
3507:M 15 Apr 2024 18:58:01.240 # Failed listening on port 6379 (TCP), aborting.
97:M 15 Apr 2024 18:58:27.220 * 100 changes in 300 seconds. Saving...
97:M 15 Apr 2024 18:58:27.223 * Background saving started by pid 3555
3555:C 15 Apr 2024 18:58:30.967 * DB saved on disk
3555:C 15 Apr 2024 18:58:30.969 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 1 MB
97:M 15 Apr 2024 18:58:31.058 * Background saving terminated with success
97:M 15 Apr 2024 19:03:32.047 * 100 changes in 300 seconds. Saving...
97:M 15 Apr 2024 19:03:32.063 * Background saving started by pid 3634
3634:C 15 Apr 2024 19:03:37.774 * DB saved on disk
3634:C 15 Apr 2024 19:03:37.777 * Fork CoW for RDB: current 1 MB, peak 1 MB, average 1 MB
97:M 15 Apr 2024 19:03:37.828 * Background saving terminated with success
97:signal-handler (1713208309) Received SIGTERM scheduling shutdown...
97:M 15 Apr 2024 19:11:49.130 # User requested shutdown...
97:M 15 Apr 2024 19:11:49.131 * Saving the final RDB snapshot before exiting.
97:M 15 Apr 2024 19:11:52.592 * DB saved on disk
97:M 15 Apr 2024 19:11:52.593 # Redis is now ready to exit, bye bye...
sha256:066d1fc0bf450b6f9043e13960cafef6b7751d92f0d89cc4e0865208293ce2e2
58661874f252e2e9dac3955608dbbe90e60f020a4d307d2818e7f991d39f8010
Removing old container
app

ea032a7acab7743f4ef8de3b9a536da0dbbf36b8c9dd5b2a9add0cfb01286e5d
```

Yes, so 30 seconds after the rebuild process completed, the site was still not working (502 gateway error). Then I did `./launcher restart app` and `./launcher start app`, and then it worked.

So maybe there is something happening.

That… feels like an incomplete log. Does it actually end there, or does it just pause for a long period of time? If it’s the latter, then you might need more RAM/swap.

Try waiting for 60 seconds.

1 Like

It might be a pause, or maybe it just takes a lot of time, almost 30 minutes, to finish the rebuild. I think it’s the RAM, with only 2 GB; that could make things slow, but I don’t have any memory issues at the moment. It’s just slow; normally it should not take more than 1-2 minutes, depending on the specification.

My VPS specification is below.

CPU: AMD EPYC 7551P 32-Core Processor, 2000 MHz
RAM: 2 GB
SSD: 60 GB

You wouldn’t get a 502 if the container wasn’t running. If you had waited a bit more, the site would have started working.

It’s not that stopping and starting did anything; it’s just that you waited longer when you did the restart.

It is expected that you will have a 502 error for a period of time while the container starts up. You don’t have a problem.
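If you’d rather check than guess, a small sketch like the one below polls the site until nginx stops answering 502. Nothing here is part of launcher; `wait_for_site` and its arguments are made-up names for illustration:

```shell
#!/bin/sh
# Poll URL until it returns HTTP 200, or give up after TRIES attempts.
# Usage: wait_for_site URL [TRIES] [DELAY_SECONDS]
wait_for_site() {
  url=$1; tries=${2:-30}; delay=${3:-10}
  i=0
  while [ "$i" -lt "$tries" ]; do
    # -w '%{http_code}' prints only the status code; 000 means no connection.
    code=$(curl -s -o /dev/null -w '%{http_code}' "$url")
    if [ "$code" = "200" ]; then
      echo "Site is up (HTTP $code)"
      return 0
    fi
    echo "Attempt $((i + 1)): HTTP $code, retrying in ${delay}s..."
    sleep "$delay"
    i=$((i + 1))
  done
  echo "Site still not up after $((tries * delay))s"
  return 1
}

# Example (hypothetical domain): wait_for_site https://example.com 30 10
```

Run it right after `./launcher start app` and it will tell you the moment the 502 clears, instead of you reloading the browser by hand.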


tbh that’s not too far out of the ordinary; rebuilds aren’t fast, especially when…

…you don’t have a lot of memory.

Generally, at least 4GB of total memory (ram and swap) is recommended for Discourse these days.
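You can check where you stand from the shell. This little sketch (my own, nothing Discourse-specific) reads `/proc/meminfo` and warns when RAM plus swap is under the ~4 GB recommendation; the `fallocate`/`mkswap` commands it prints are the usual way to add a 2 GB swap file on Linux:

```shell
#!/bin/sh
# Sum RAM and swap from /proc/meminfo (values are in kB).
total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal:/ {print $2}' /proc/meminfo)
combined_gb=$(( (total_kb + swap_kb) / 1024 / 1024 ))
echo "Combined RAM + swap: ${combined_gb} GB"

if [ "$combined_gb" -lt 4 ]; then
  echo "Below the ~4 GB recommended for Discourse; consider adding swap:"
  echo "  sudo fallocate -l 2G /swapfile"
  echo "  sudo chmod 600 /swapfile"
  echo "  sudo mkswap /swapfile"
  echo "  sudo swapon /swapfile"
fi
```

On a 2 GB VPS with no swap this would report 1-2 GB combined and print the swap-file steps.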

While updating Discourse from the admin interface, I am getting these incompatibility warnings:

```
warning Resolution field "unset-value@2.0.1" is incompatible with requested version "unset-value@^1.0.0"
[3/5] Fetching packages...
warning Pattern ["wrap-ansi-cjs@npm:wrap-ansi@^7.0.0"] is trying to unpack in the same destination "/home/discourse/.cache/yarn/v6/npm-wrap-ansi-cjs-7.0.0-67e145cff510a6a6984bdf1152911d69d2eb9e43-integrity/node_modules/wrap-ansi-cjs" as pattern ["wrap-ansi@^7.0.0"]. This could result in non-deterministic behavior, skipping.
[4/5] Linking dependencies...
warning "@discourse/lint-configs > eslint-plugin-ember > ember-eslint-parser@0.3.8" has unmet peer dependency "@typescript-eslint/parser@^6.15.0".
warning " > @glint/environment-ember-loose@1.4.0" has unmet peer dependency "@glimmer/component@^1.1.2".
warning " > discourse-markdown-it@1.0.0" has unmet peer dependency "xss@*".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > discourse > @uppy/aws-s3@3.0.6" has incorrect peer dependency "@uppy/core@^3.1.2".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > discourse > @uppy/aws-s3-multipart@3.1.3" has incorrect peer dependency "@uppy/core@^3.1.2".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > discourse > @uppy/xhr-upload@3.1.1" has incorrect peer dependency "@uppy/core@^3.1.2".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > discourse-plugins > ember-this-fallback@0.4.0" has unmet peer dependency "ember-source@^3.28.11 || ^4.0.0".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > admin > ember-source > router_js@8.0.3" has unmet peer dependency "rsvp@^4.8.5".
warning "workspace-aggregator-e69f39ff-3f17-47f3-9e20-638bb7914a45 > discourse > @uppy/aws-s3 > @uppy/xhr-upload@3.3.0" has incorrect peer dependency "@uppy/core@^3.2.1".
```

Those are safe to ignore; they won’t prevent Discourse from rebuilding.