Temporary 503 errors when fronting with HAProxy

I had created a test forum with two web_only containers, the mail-receiver container, and Postgres and Redis on a Docker network. The two web containers' nginx instances were fronted by HAProxy. They worked well, but I found that rebuilding either of the two web containers always caused a small window of downtime during which 503 was returned.

Fortunately, I found a workaround that mitigates this, though it isn't perfect:


Before rebuilding app1, run

echo "disable server be_discourse/app1" | socat stdio /run/haproxy/admin.sock

and then, once the rebuild of app1 has finished, run

echo "enable server be_discourse/app1" | socat stdio /run/haproxy/admin.sock

Vice versa, to rebuild app2, run

echo "disable server be_discourse/app2" | socat stdio /run/haproxy/admin.sock

and then, once the rebuild of app2 has finished, run

echo "enable server be_discourse/app2" | socat stdio /run/haproxy/admin.sock

This relies on /etc/haproxy/haproxy.cfg defining a backend named be_discourse. I won't paste the full configuration file in a code block yet, but if a critical mass want it I might.

So the extra commands mitigate the 503s by telling HAProxy to divert traffic away from the container before it goes down, rather than waiting for HAProxy to discover the failure on its own.
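The drain/rebuild/re-enable steps above can be wrapped in one script so they are never run out of order. This is only a sketch: the backend name `be_discourse` and the socket path `/run/haproxy/admin.sock` come from this thread, while the `/var/discourse` install path and the `DRY_RUN` switch are my own assumptions, so adjust for your setup.

```shell
#!/usr/bin/env bash
# Sketch: drain a web container from HAProxy, rebuild it, re-enable it.
# DRY_RUN=1 (the default here) only prints the admin-socket commands,
# so the sketch is safe to try before pointing it at a live proxy.
set -u

SOCK="${SOCK:-/run/haproxy/admin.sock}"
DRY_RUN="${DRY_RUN:-1}"

hap() {  # send one command to the HAProxy admin socket
  if [ "$DRY_RUN" = "1" ]; then
    echo "would send: $1"
  else
    echo "$1" | socat stdio "$SOCK"
  fi
}

safe_rebuild() {  # drain, rebuild, and re-enable one web container
  local app="$1"
  hap "disable server be_discourse/$app"   # divert traffic first
  if [ "$DRY_RUN" != "1" ]; then
    (cd /var/discourse && ./launcher rebuild "$app")
  fi
  hap "enable server be_discourse/$app"    # back into rotation
}
```

Usage would then be `DRY_RUN=0 safe_rebuild app1`, and the app2 case needs no separately maintained copy of the commands.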


You can alternatively create a custom error page, but I wasn't having much luck with that and it needs further research. In any case, an error page doesn't really fix the downtime itself.

Why don’t you redispatch to the next server on a 503 ?
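For what it's worth, here is a hedged sketch of what that could look like in the backend section, using the `be_discourse` name from this thread. `option redispatch` retries another server on connection failure, and `retry-on` with a `503` keyword (HAProxy 2.0 or newer) extends that to bad responses; the health-check settings shrink the window in which a dead server is still in rotation. The `/srv/status` path is Discourse's built-in health endpoint, but verify the exact directives against your HAProxy version before relying on this.

```
backend be_discourse
    balance roundrobin
    option redispatch                            # retry on another server after a failure
    retries 3
    retry-on conn-failure empty-response 503     # HAProxy 2.0+; a retry budget still applies
    option httpchk GET /srv/status               # Discourse health endpoint
    server app1 app1:80 check inter 2s fall 2 rise 3
    server app2 app2:80 check inter 2s fall 2 rise 3
```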


Nice! That would be good. Also, do you have any suggestions about Sidekiq? I don't see Sidekiq at all in discourse_docker/samples at main · discourse/discourse_docker · GitHub, which led me to make the following changes to each yml:


app1.yml

## Remember, this is YAML syntax - you can only have one block with a name
run:
  - exec: echo "Beginning of custom commands"
+  - exec: rm -f /etc/service/sidekiq/down
  ## If you want to configure password login for root, uncomment and change:

app2.yml

## Remember, this is YAML syntax - you can only have one block with a name
run:
  - exec: echo "Beginning of custom commands"
+  - exec: bash -lc 'mkdir -p /etc/service/sidekiq && touch /etc/service/sidekiq/down'
  ## If you want to configure password login for root, uncomment and change:

For anyone running multiple web containers, one extra piece to think about is where Sidekiq runs.

503s only affect web requests, but if Sidekiq is down your site will quietly stop handling background jobs (emails, digests, badges, webhooks). A neat way to handle this is to give Sidekiq its own container and keep it disabled on the web nodes.

Discourse uses runit under the hood: each service in /etc/service/* starts unless there’s a file named down inside its directory. That’s all the control that’s needed.

Here’s an example of a dedicated Sidekiq worker:

worker.yml — Sidekiq only
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/sshd.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"

expose: []   # no HTTP ports

env:
  DISCOURSE_DB_HOST: data
  DISCOURSE_REDIS_HOST: data
  DISCOURSE_HOSTNAME: "forum.example.com"
  # match your existing secrets / SMTP settings

hooks:
  after_code:
    - exec:
        cd: $home
        cmd:
          # disable web services
          - mkdir -p /etc/service/puma  && touch /etc/service/puma/down
          - mkdir -p /etc/service/nginx && touch /etc/service/nginx/down
          # enable Sidekiq
          - rm -f /etc/service/sidekiq/down

run:
  - exec: echo "Starting Sidekiq worker container"

On the web containers (app1/app2), the opposite should be done - keep Puma and Nginx enabled, but add:

hooks:
  after_code:
    - exec:
        cd: $home
        cmd:
          - mkdir -p /etc/service/sidekiq && touch /etc/service/sidekiq/down

With this setup, HAProxy balances only the web containers, while Sidekiq keeps running in the worker container (rebuilt with ./launcher rebuild <yml name>). That way, background jobs aren't interrupted when you rebuild or rotate your web nodes.