My hosting provider restarted my server, which resulted in my old container running at the same time as my new one (I upgraded last week and kept the old container around in case I needed to roll back).
Now the new container prints this repeating series of log messages and serves nothing:
(42) Waiting for new unicorn master pid...
(42) Waiting for new unicorn master pid... 1265109
(42) Old pid is: 1264439 New pid is: 1265109
Shutting Down
run-parts: executing /etc/runit/3.d/01-nginx
ok: down: nginx: 0s, normally up
run-parts: executing /etc/runit/3.d/02-unicorn
(42) exiting
ok: down: unicorn: 1s, normally up
run-parts: executing /etc/runit/3.d/10-redis
ok: down: redis: 0s, normally up
run-parts: executing /etc/runit/3.d/99-postgres
ok: down: postgres: 0s, normally up
ok: down: nginx: 2s, normally up
ok: down: postgres: 1s, normally up
ok: down: redis: 1s, normally up
ok: down: unicorn: 2s, normally up
ok: down: cron: 0s, normally up
ok: down: rsyslog: 0s, normally up
run-parts: executing /etc/runit/1.d/00-ensure-links
run-parts: executing /etc/runit/1.d/00-fix-var-logs
run-parts: executing /etc/runit/1.d/01-cleanup-web-pids
run-parts: executing /etc/runit/1.d/anacron
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
Started runsvdir, PID is 34
ok: run: redis: (pid 48) 0s
ok: run: postgres: (pid 44) 0s
supervisor pid: 45 unicorn pid: 76
(45) Reopening logs
Shutting Down
run-parts: executing /etc/runit/3.d/01-nginx
ok: down: nginx: 1s, normally up
run-parts: executing /etc/runit/3.d/02-unicorn
(45) exiting
ok: down: unicorn: 0s, normally up
run-parts: executing /etc/runit/3.d/10-redis
ok: down: redis: 1s, normally up
run-parts: executing /etc/runit/3.d/99-postgres
ok: down: postgres: 1s, normally up, want up
ok: down: nginx: 2s, normally up
ok: down: postgres: 1s, normally up, want up
ok: down: redis: 1s, normally up
ok: down: unicorn: 1s, normally up
ok: down: cron: 0s, normally up
ok: down: rsyslog: 0s, normally up
run-parts: executing /etc/runit/1.d/00-ensure-links
run-parts: executing /etc/runit/1.d/00-fix-var-logs
run-parts: executing /etc/runit/1.d/01-cleanup-web-pids
run-parts: executing /etc/runit/1.d/anacron
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
Started runsvdir, PID is 34
ok: run: redis: (pid 48) 0s
ok: run: postgres: (pid 44) 0s
supervisor pid: 49 unicorn pid: 70
config/unicorn_launcher: line 71: kill: (70) - No such process
config/unicorn_launcher: line 15: kill: (70) - No such process
(49) exiting
ok: run: redis: (pid 48) 5s
ok: run: postgres: (pid 86) 1s
supervisor pid: 88 unicorn pid: 92
config/unicorn_launcher: line 71: kill: (92) - No such process
config/unicorn_launcher: line 15: kill: (92) - No such process
(88) exiting
ok: run: redis: (pid 48) 7s
ok: run: postgres: (pid 109) 0s
supervisor pid: 106 unicorn pid: 112
config/unicorn_launcher: line 71: kill: (112) - No such process
config/unicorn_launcher: line 15: kill: (112) - No such process
(106) exiting
ok: run: redis: (pid 48) 10s
ok: run: postgres: (pid 121) 0s
supervisor pid: 128 unicorn pid: 132
config/unicorn_launcher: line 71: kill: (132) - No such process
config/unicorn_launcher: line 15: kill: (132) - No such process
(128) exiting
ok: run: redis: (pid 48) 13s
ok: run: postgres: (pid 149) 0s
supervisor pid: 146 unicorn pid: 152
config/unicorn_launcher: line 71: kill: (152) - No such process
config/unicorn_launcher: line 15: kill: (152) - No such process
(146) exiting
ok: run: redis: (pid 48) 16s
ok: run: postgres: (pid 171) 0s
supervisor pid: 168 unicorn pid: 174
config/unicorn_launcher: line 71: kill: (174) - No such process
config/unicorn_launcher: line 15: kill: (174) - No such process
(168) exiting
ok: run: redis: (pid 48) 20s
ok: run: postgres: (pid 193) 1s
And ./launcher rebuild can't build a new container, because it fails to connect to PostgreSQL ("/var/run/postgresql/.s.PGSQL.5432" No such file or directory).
It looks like the period when both apps were running against the same database caused the problem.
How should I go about repairing this?
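Is it safe to just stop everything and start only the new container, roughly like this? (A sketch of what I'm considering — `old_app` and `app` are placeholders for whatever names `docker ps -a` actually shows on my host.)

```shell
#!/bin/sh
# List every container, running or not, to confirm both the old and
# the new one exist and see which are currently up.
docker ps -a

# Stop and remove the old container so it can no longer grab the
# shared data volume ("old_app" is a placeholder name).
docker stop old_app
docker rm old_app

# From the discourse_docker checkout, stop the new container cleanly,
# then rebuild it so it starts with a fresh PostgreSQL socket.
cd /var/discourse
./launcher stop app
./launcher rebuild app

# Tail the logs to check whether the unicorn restart loop is gone.
./launcher logs app
```

Or is there extra cleanup needed inside the data volume (stale PID files, a leftover postmaster.pid) before PostgreSQL will come back up?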