I had created a test forum with two web_only containers, the mail-receiver container, Postgres, and Redis on a Docker network. The two web containers' nginx instances were fronted by HAProxy. They worked well, but I found that on rebuilding one of the two web containers there was always a small window of downtime where a 503 was returned.
Fortunately, I found a mitigating workaround, though it isn't perfect: before rebuilding app1, run

```
echo "disable server be_discourse/app1" | socat stdio /run/haproxy/admin.sock
```

and then, after the rebuild of app1 has finished, run

```
echo "enable server be_discourse/app1" | socat stdio /run/haproxy/admin.sock
```
Vice versa, to rebuild app2, run

```
echo "disable server be_discourse/app2" | socat stdio /run/haproxy/admin.sock
```

and then, after the rebuild of app2 has finished, run

```
echo "enable server be_discourse/app2" | socat stdio /run/haproxy/admin.sock
```
This relies on /etc/haproxy/haproxy.cfg defining a matching be_discourse backend with servers named app1 and app2. I won't paste the full configuration file in a code block yet, but if a critical mass wants it I might.
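For context, though, a minimal backend stanza along these lines would match the socket commands above. This is an illustrative sketch, not my actual config; the ports and check options are assumptions:

```
# Illustrative sketch of the relevant parts of /etc/haproxy/haproxy.cfg
global
    # level admin is required for the disable/enable server commands to work
    stats socket /run/haproxy/admin.sock mode 660 level admin

backend be_discourse
    balance roundrobin
    option httpchk GET /srv/status   # Discourse's health-check endpoint
    server app1 app1:80 check
    server app2 app2:80 check
```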
So the extra commands mitigate the 503 by telling HAProxy to divert traffic away before clients hit the container that's down, rather than waiting for a health check to notice the failure.
You can alternatively create a custom error page, but I wasn't having much luck with that, and I feel it needs further research. However, that doesn't really fix the issue of downtime either.
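For anyone who wants to try the error-page route anyway, HAProxy's errorfile directive can serve a static page in place of the default 503. A rough sketch, with the file path and contents as placeholders:

```
# In the defaults (or a specific frontend/backend) section of haproxy.cfg:
defaults
    errorfile 503 /etc/haproxy/errors/503-maintenance.http
```

The referenced file has to be a complete raw HTTP response, headers and all, for example:

```
HTTP/1.0 503 Service Unavailable
Cache-Control: no-cache
Content-Type: text/html

<html><body><h1>Back shortly</h1><p>The forum is being rebuilt.</p></body></html>
```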