After upgrade, Connection refused while connecting to upstream


I’ve been running 2.2.0-beta6 for a few days now and noticed the release of 2.2.0-beta7, so I proceeded with the upgrade as usual:

  • upgrading the Docker manager
  • upgrading the code
  • upgrading the plugins (actually oauth2 and cake, which are two official ones)

Once the update had run successfully (according to the output), I went back to the forum homepage, which ended with a blank error page.


The software used by this forum has encountered an issue...

Detailed information has been sent...

It’s returning the same page for every single forum page, except the upgrade one.
I tried rebuilding the container a couple of times, even pinning the version to the previous 2.2.0-b6 (which was working), without any success.

Here is the doctor information:

DISCOURSE DOCTOR Mon Jan 14 00:13:42 CET 2019
OS: Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux

Found containers/app.yml

==================== YML SETTINGS ====================

==================== DOCKER INFO ====================
DOCKER VERSION: Docker version 18.06.0-ce, build 0ffa825

DOCKER PROCESSES (docker ps -a)

CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS               NAMES
00152ed0d7b9        local_discourse/app   "/sbin/boot"        8 minutes ago       Up 8 minutes                            app

Discourse container app is running

==================== PLUGINS ====================
          - git clone
          - git clone
          - git clone

No non-official plugins detected.

See for the official list.

Discourse version at NOT FOUND
Discourse version at localhost: NOT FOUND

==================== MEMORY INFORMATION ====================
OS: Linux
RAM (MB): 8179

              total        used        free      shared  buff/cache   available
Mem:           7987        3636        1218         253        3132        3800
Swap:             0           0           0

==================== DISK SPACE CHECK ====================
---------- OS Disk Space ----------
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        49G   35G   12G  75% /

==================== DISK INFORMATION ====================
Disk /dev/vda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x000dcb5c

Device     Boot     Start       End   Sectors Size Id Type
/dev/vda1  *         2048 102760447 102758400  49G 83 Linux
/dev/vda2       102760448 104857599   2097152   1G 82 Linux swap / Solaris

==================== END DISK INFORMATION ====================

==================== MAIL TEST ====================
For a robust test, get an address from
Sending mail to REDACTED  . . 
Testing sending to ******* using ********:***********.
SMTP server connection successful.
Sending to ********. . . 
Mail accepted by SMTP server.

If you do not receive the message, check your SPAM folder
or test again using a service like

If the message is not delivered it is not a problem with Discourse.

Check the SMTP server logs to see why it failed to deliver the message.

==================== DONE! ====================

and below is the output of the container logs (using launcher logs app):

run-parts: executing /etc/runit/1.d/00-ensure-links
run-parts: executing /etc/runit/1.d/00-fix-var-logs
run-parts: executing /etc/runit/1.d/anacron
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
run-parts: executing /etc/runit/1.d/enable-brotli
run-parts: executing /etc/runit/1.d/remove-old-socket
Started runsvdir, PID is 45
ok: run: redis: (pid 55) 0s
ok: run: postgres: (pid 56) 0s
rsyslogd: command 'KLogPermitNonKernelFacility' is currently not permitted - did you already set it via a RainerScript command (v6+ config)? [v8.16.0 try ]
rsyslogd: imklog: cannot open kernel log (/proc/kmsg): Operation not permitted.
rsyslogd: activation of module imklog failed [v8.16.0 try ]
rsyslogd: Could not open output pipe '/dev/xconsole':: No such file or directory [v8.16.0 try ]
supervisor pid: 57 unicorn pid: 78

While doing launcher enter app and looking at the nginx error log, this is what I get:

root@forums:/var/www/discourse# cat /var/log/nginx/error.log
2019/01/13 23:18:54 [error] 68#68: *1 connect() failed (111: Connection refused) while connecting to upstream, client:, server: _, request: "POST /message-bus/1bc14d052e024e448724de1ccfe24394/poll?dlp=t HTTP/1.1", upstream: "", host: "", referrer: ""
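For context, that `(111: Connection refused)` is `ECONNREFUSED`: nginx found its upstream address, but nothing was accepting connections on it (typically the app server being down behind a stale socket file). The failure mode can be reproduced in miniature with a unix socket that exists on disk but has no listener (a standalone sketch; the file name is purely illustrative):

```python
import errno
import os
import socket
import tempfile

# A unix socket file that exists on disk but has no process listening on it
# refuses connections -- the same failure nginx logs as
# "(111: Connection refused) while connecting to upstream".
path = os.path.join(tempfile.mkdtemp(), "nginx.http.sock")

s = socket.socket(socket.AF_UNIX)
s.bind(path)   # creates the socket file on disk
s.close()      # ...but nothing ever listens on it

c = socket.socket(socket.AF_UNIX)
try:
    c.connect(path)
    outcome = "connected"
except ConnectionRefusedError as exc:
    outcome = exc.errno  # 111 (ECONNREFUSED) on Linux

print(outcome == errno.ECONNREFUSED)
```

So the error log alone already narrows things down: the proxying is fine, and something behind the socket is not answering.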

Does anyone have an idea about what happened and how to fix this? :frowning:


This looks like a Docker problem to me, so system level. Did you follow our official install guide when setting up this instance of Discourse?

I did and never had issues until now.
The only difference is that my Discourse files are stored in /opt/discourse instead of /var/discourse.

Regarding Docker, no updates are available, and I’m running the following:

Docker version 18.06.0-ce, build 0ffa825
Linux 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux

Can you run docker info and docker version (or the equivalents, that is me remembering the commands off the top of my head) and paste the output here in a code block?

docker info

Containers: 1
 Running: 1
 Paused: 0
 Stopped: 0
Images: 3
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
  Profile: default
Kernel Version: 4.9.0-8-amd64
Operating System: Debian GNU/Linux 9 (stretch)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.8GiB
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Experimental: false
Insecure Registries:
Live Restore Enabled: false

WARNING: No swap limit support

docker version

 Version:           18.06.0-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        0ffa825
 Built:             Wed Jul 18 19:09:33 2018
 OS/Arch:           linux/amd64
 Experimental:      false

  Version:          18.06.0-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       0ffa825
  Built:            Wed Jul 18 19:07:38 2018
  OS/Arch:          linux/amd64
  Experimental:     false

nginx proxy

server {
    listen 80; listen [::]:80;

    location / {
        return 301 https://$host$request_uri;
    }

    location ^~ /.well-known/acme-challenge {
        allow all;
        root /var/www/;
    }
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    ssl_certificate /etc/letsencrypt/live/;
    ssl_certificate_key /etc/letsencrypt/live/;

    http2_idle_timeout 5m;

    location / {
        proxy_pass http://unix:/opt/discourse/shared/standalone/nginx.http.sock:;
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
All Let’s Encrypt certs are also up to date.


Maybe @supermathie, @saj or @schleifer have thoughts.

Kudos on the detailed diagnostics by the way, this is an excellent and well researched support topic :beers: … I wish they were all as good as yours!


You can ignore this.

This means nginx is running, but it can’t connect to its upstream. Discourse may not be running.

This path looks odd, but may be right… cannot remember and I’m mobile.

Is Discourse running? If it is, is it listening on the /opt/discourse/shared/standalone/nginx.http.sock socket?

If not, do the rails logs show any errors?
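Those two questions can be checked from inside the container (./launcher enter app). A rough sketch, assuming the standard standalone layout where the host's shared directory appears as /shared inside the container and the app server is unicorn (adjust the socket path if your install differs):

```shell
#!/bin/sh
# Assumed socket path for a standard standalone install (an assumption --
# verify against your own app.yml / nginx config).
SOCK=/shared/nginx.http.sock

# Is the Discourse app server (unicorn) running?
# The [u] bracket trick stops pgrep from matching its own command line.
if pgrep -f "[u]nicorn master" > /dev/null; then
  echo "unicorn: running"
else
  echo "unicorn: NOT running"
fi

# Does nginx's upstream socket exist at all?
if [ -S "$SOCK" ]; then
  echo "socket: present"
else
  echo "socket: missing"
fi
```

If unicorn is up but the socket is missing or stale, the inner nginx will keep logging connection-refused errors exactly like the ones above.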


@codinghorror I think you’re providing enough tools to put together a well-formed support topic, but thanks for the compliment :slight_smile:.


The proxy config was picked up from here: Running other websites on the same machine as Discourse

The error message is returned by the Discourse nginx. The container is up and the nginx proxy is properly forwarding to the one inside the container.

Also, is working

Regarding the Rails logs, you’ll have to guide me a bit, as I have no clue where they are or which command has to be run :sweat_smile:

I can only assume it has to be run from the container, as I don’t have any Ruby stuff available on the blade itself :yum:

BTW, I don’t know whether discourse-doctor sends commands to the container or only checks the YAML file, but I received its test mail.


Ah! So there’s an nginx outside the container as well as one inside.

And we know it’s making it to the inside nginx.

Output of ps axf from inside container?

Logs should be in /var/log/rails I think. I’ll check when I’m back at a computer.

root@forums:/var/www/discourse# ps axf
 6808 pts/1    Ss     0:00 /bin/bash --login
 6824 pts/1    R+     0:00  \_ ps axf
    1 pts/0    Ss+    0:00 /bin/bash /sbin/boot
   45 pts/0    S+     0:00 /usr/bin/runsvdir -P /etc/service
   46 ?        Ss     0:00  \_ runsv rsyslog
   55 ?        Sl     0:00  |   \_ rsyslogd -n
   47 ?        Ss     0:00  \_ runsv cron
   52 ?        S      0:00  |   \_ cron -f
   48 ?        Ss     0:00  \_ runsv nginx
   58 ?        S      0:00  |   \_ nginx: master process /usr/sbin/nginx
   69 ?        S      0:00  |       \_ nginx: worker process
   70 ?        S      0:00  |       \_ nginx: worker process
   71 ?        S      0:00  |       \_ nginx: worker process
   72 ?        S      0:00  |       \_ nginx: worker process
   75 ?        S      0:00  |       \_ nginx: cache manager process
   49 ?        Ss     0:00  \_ runsv redis
   53 ?        S      0:00  |   \_ svlogd /var/log/redis
   54 ?        Sl     0:09  |   \_ /usr/bin/redis-server *:6379
   50 ?        Ss     0:00  \_ runsv postgres
   56 ?        S      0:00  |   \_ svlogd /var/log/postgres
   57 ?        S      0:00  |   \_ /usr/lib/postgresql/10/bin/postmaster -D /etc/postgresql/10/main
   79 ?        Ss     0:00  |       \_ postgres: 10/main: checkpointer process
   80 ?        Ss     0:00  |       \_ postgres: 10/main: writer process
   81 ?        Ss     0:00  |       \_ postgres: 10/main: wal writer process
   82 ?        Ss     0:00  |       \_ postgres: 10/main: autovacuum launcher process
   83 ?        Ss     0:00  |       \_ postgres: 10/main: stats collector process
   84 ?        Ss     0:00  |       \_ postgres: 10/main: bgworker: logical replication launcher
  133 ?        Ss     0:00  |       \_ postgres: 10/main: discourse discourse [local] idle
 6767 ?        Ss     0:00  |       \_ postgres: 10/main: discourse discourse [local] idle
   51 ?        Ss     0:00  \_ runsv unicorn
   60 ?        S      0:01      \_ /bin/bash config/unicorn_launcher -E production -c config/unicorn.conf.rb
   85 ?        Sl     0:09          \_ unicorn master -E production -c config/unicorn.conf.rb
  180 ?        SNl    0:12          |   \_ sidekiq 5.1.3 discourse [0 of 5 busy]
  206 ?        Sl     0:07          |   \_ unicorn worker[0] -E production -c config/unicorn.conf.rb
  249 ?        Sl     0:06          |   \_ unicorn worker[1] -E production -c config/unicorn.conf.rb
  349 ?        Sl     0:07          |   \_ unicorn worker[2] -E production -c config/unicorn.conf.rb
  459 ?        Sl     0:06          |   \_ unicorn worker[3] -E production -c config/unicorn.conf.rb
  572 ?        Sl     0:07          |   \_ unicorn worker[4] -E production -c config/unicorn.conf.rb
  700 ?        Sl     0:07          |   \_ unicorn worker[5] -E production -c config/unicorn.conf.rb
  826 ?        Sl     0:06          |   \_ unicorn worker[6] -E production -c config/unicorn.conf.rb
  963 ?        Sl     0:06          |   \_ unicorn worker[7] -E production -c config/unicorn.conf.rb
 6823 ?        S      0:00          \_ sleep 1

Regarding logs, I got this in /opt/discourse/shared/standalone/log/rails/production.log:

/var/www/discourse/vendor/bundle/ruby/2.5.0/gems/barber-0.12.0/lib/barber/precompiler.rb:33:in `rescue in compile'
Started GET "/" for at 2019-01-14 00:49:30 +0000
Processing by CategoriesController#index as HTML
  Rendering categories/index.html.erb within layouts/application
  Rendered categories/index.html.erb within layouts/application (0.9ms)
  Rendered layouts/_head.html.erb (3.5ms)
Completed 500 Internal Server Error in 280ms (ActiveRecord: 41.6ms)
ActionView::Template::Error (Pre compilation failed for:
    {{#d-modal-body title="eve_sde_modal_title" class="eve-sde-modal"}}
    <p data-typeid="{{ model.type_id }}">{{{ model.description }}}</p>
    {{#if model.attributes.shield or model.attributes.armor or model.attributes.hull }}
                <th colspan="4">
            {{#if model.attributes.shield }}
                <td colspan="4">
                {{#each model.attributes.shield as |attribute|}}
                    <img src="{{ attribute.img.src }}" alt="{{ attribute.img.alt }}" /> {{ attribute.percent }}
            {{#if model.attributes.armor }}
                <td colspan="4">
                {{#each model.attributes.armor as |attribute|}}
                    <img src="{{ attribute.img.src }}" alt="{{ attribute.img.alt }}" /> {{ attribute.percent }}
            {{#if model.attributes.hull }}
                <td colspan="4">
                {{#each model.attributes.hull as |attribute|}}
                    <img src="{{ attribute.img.src }}" alt="{{ attribute.img.alt }}" /> {{ attribute.percent }}

. Compiler said: Error: Assertion Failed: #if requires a single argument. (L4:C4) )

According to the stack, it’s related to a component I’ve attached to the default template ( - not certain everything is up to date here).

Would it be possible to detach the component from the console, or to switch the active template, in order to figure out whether it’s the cause of the trouble?


Aha so this is another broken theme related to the Ember 3 update. You’ll need to fix or remove that kind of customization.
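For context, the compiler assertion is literal: Handlebars’ `{{#if}}` helper takes exactly one argument, and `or` is not an operator in templates. A hedged sketch of the usual workaround (`hasDefenses` is a hypothetical property you would compute in the component’s JavaScript, not part of the original theme):

```handlebars
{{!-- Invalid: #if requires a single argument --}}
{{!-- {{#if model.attributes.shield or model.attributes.armor or model.attributes.hull}} --}}

{{!-- Option 1: test one property per block --}}
{{#if model.attributes.shield}}
  ...
{{/if}}

{{!-- Option 2: a single boolean computed in the component's JS (name hypothetical) --}}
{{#if hasDefenses}}
  ...
{{/if}}
```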

Now the thing is to figure out how, as I don’t have access to either the front end or the back end, except the upgrade page (and maybe the logs one, as it’s not embedded inside the theme either :joy:)

Do you have any magic commands runnable from the container (or maybe a query)?

A later question will be which syntax is correct with the new version, but that’s not an important thing right now xD

Hi @warlof,

I hit a somewhat similar problem here which required disabling a theme component for different reasons, but that caused similar access issues. The following worked for me:

This disabled all customizations and allowed me to go in and remove the troublesome component via the admin ui. I hope that helps.


Oh perfect, that did the trick. Safe mode alone was not working, but with your extra parameters it did.

Thank you a lot, everyone :heart: