Failed to update to the latest version 21/08/25

The latest upgrade required rebuilding the app in launcher, but it failed.

First it complained about having the ad plugin installed separately, so I removed that.

Looks like then it failed trying to migrate the secondsite database:

2025-08-21 06:44:42.493 UTC [867] discourse@discourse_nu ERROR: must be owner of extension vector
2025-08-21 06:44:42.493 UTC [867] discourse@discourse_nu STATEMENT: ALTER EXTENSION vector UPDATE TO '0.7.0';


1 migrations failed!

Failed to migrate secondsite

Cause

  • The migration tries to upgrade the “vector” extension.
  • The PostgreSQL user running the migration (e.g. discourse) must be the owner of the extension, but it’s owned by a different user (often postgres).

Solution

  • Connect to your database as the owner
  • Run the update as the owner
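For example (a sketch, not a definitive fix: the database name discourse_nu is taken from the log lines above, and the owning role being postgres is an assumption; inside a standard Discourse install you would typically enter the container with ./launcher enter app first):

```sql
-- Connect to the affected site's database as the extension's owner
-- (often postgres), e.g.:  psql -U postgres discourse_nu
-- Then run the update that the failing migration was attempting:
ALTER EXTENSION vector UPDATE TO '0.7.0';
```

After that, re-running the rebuild should let the migration find the extension already at the expected version.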

Check out the discussion of the same problem in Still an issue: ERROR: must be owner of extension vector - #2 by Falco


That fixed it.

However, the problem with nginx and secondsites that I reported over a year ago is still there.

In the nginx config files within the container, there is code that checks whether the URL belongs to the first site and, if not, rewrites it to the first site's hostname. I commented out that code again.


There have been big changes in how the nginx config is handled.

Do you have a multisite setup with no reverse proxy?

Well, it’s been nearly 2 years since I’ve looked at nginx much, but this problem existed when I first moved over to Discourse 2 years ago, so it is not new.

Here’s an excerpt from the nginx.conf file:

server {
    server_name  huskerlist.tssi.com;
    root         /var/www/html;

    allow 162.210.7.125;
    allow 162.210.7.112;
    allow 162.210.7.116;
    allow 76.84.125.160;
    allow 172.17.0.2;
    allow 72.250.242.47;
    allow all;

    if ( $lockdown ) {
       set $custom_server_name "lists.tssi.com";
       return 300 "site is down for maintenance";
    }


    client_max_body_size 100M;

    # Load configuration files for the default server block.
    #include /etc/nginx/default.d/*.conf;

    location / {
            proxy_pass https://127.0.0.1:8443/;
            #proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
    }

    error_page 404 /404.html;
        location = /usr/share/nginx/html/40x.html {
    }

    error_page 500 502 503 504 /50x.html;
        location = /usr/share/nginx/html/50x.html {
    }

listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/lists.tssi.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/lists.tssi.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

server {
    server_name  nu-sports.tssi.com;
    root         /var/www/html;

    allow 162.210.7.125;
    allow 162.210.7.112;
    allow 162.210.7.116;
    allow 76.84.125.160;
    allow 172.17.0.2;
    allow 72.250.242.47;
    allow all;

    if ( $lockdown ) {
       set $custom_server_name "lists.tssi.com";
       rewrite ^ https://lists.tssi.com/n-maint.html;
    }
    client_max_body_size 100M;

    # Load configuration files for the default server block.
    #include /etc/nginx/default.d/*.conf;

    location / {
            proxy_pass https://127.0.0.1:8443/;
            #proxy_pass http://unix:/var/discourse/shared/standalone/nginx2.http.sock:;
            proxy_set_header Host $http_host;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
    }

    error_page 404 /404.html;
        location = /usr/share/nginx/html/40x.html {
    }

    error_page 500 502 503 504 /50x.html;
        location = /usr/share/nginx/html/50x.html {
    }

listen [::]:443 ssl; # managed by Certbot
listen 443 ssl; # managed by Certbot
ssl_certificate /etc/letsencrypt/live/lists.tssi.com/fullchain.pem; # managed by Certbot
ssl_certificate_key /etc/letsencrypt/live/lists.tssi.com/privkey.pem; # managed by Certbot
include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

}

Apparently every time it sets up a new container (such as during a reboot) it rewrites the /etc/nginx/conf.d/outlets/server/20-https.conf file, and these lines cause a redirect to the default Discourse site:

if ($http_host != huskerlist.tssi.com) {
    rewrite (.*) https://huskerlist.tssi.com$1 permanent;
}

Is there a way to avoid this? What purpose does this code serve?

That’s right. Are you editing that file inside of the container? Building a new container regenerates everything: it’s not rewriting just that file, but all of the files.

You can add stuff to your app.yml to change the file after it’s rewritten.

What changes are you making to that file? Why?

Oh. Wait.

You didn’t answer this question, but I think the answer is yes.

It forces the canonical hostname, since you almost never want your site to be available at more than one hostname.

So you’ll need to add some code to your app.yml to undo that.

A long time ago, I had a solution for this in Setup Multisite Configuration with Let's Encrypt and no Reverse Proxy

So you’ll need to add a sed in an exec, or maybe use some replace stanza(s), to remove or modify that bit. You probably still need to follow the steps in that topic (which I think may still work) to get certs for multiple hostnames. You can now use DISCOURSE_HOSTNAME_ALIASES: www.domain.com,otherdomain.org,www.otherdomain.org to get certs for the additional hostnames.
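A hedged sketch of what a replace stanza might look like in app.yml. The file path is the one quoted earlier in this thread, but the regex pattern is an assumption that would need checking against the actual generated 20-https.conf before relying on it:

```yaml
# Hypothetical app.yml fragment: strip the forced-hostname redirect
# after the nginx outlet files are generated. Verify the 'from'
# pattern against your generated file; this is untested.
run:
  - replace:
      filename: "/etc/nginx/conf.d/outlets/server/20-https.conf"
      from: /if \(\$http_host != .+?\) \{[\s\S]+?\}/
      to: ""
```

Deleting the block outright is the blunt approach; a narrower pattern that only rewrites the hostname check would also work if you can pin it down.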

I suppose the most clever solution might be to contrive to add the other hostname aliases into that if ($http_host != code somehow. I don’t have any sites set up that way right now, so I’m not likely to want to spend time figuring it out for fun.

But yeah, the web ssl template has this:

        if (\$http_host != ${DISCOURSE_HOSTNAME}) {
          rewrite (.*) https://${DISCOURSE_HOSTNAME}\$1 permanent;
        }

so you could either delete it or find a way to make it also check for your other hostnames.
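One way to make the check hostname-aware is a map, sketched below. This is untested: the map block would need to live at http level while the if stays in the server block, and both hostnames here are just the ones from this thread:

```nginx
# http-level: flag any Host header that isn't one of our known sites
map $http_host $unknown_host {
    default              1;
    huskerlist.tssi.com  0;
    nu-sports.tssi.com   0;
}

# server-level: only redirect genuinely unknown hosts
if ($unknown_host) {
    rewrite (.*) https://huskerlist.tssi.com$1 permanent;
}
```

The tradeoff, as noted below, is that once there are two valid sites it's no longer obvious which one an unknown host should be redirected to.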

So, essentially what you’re saying is that the ‘secondsite’ method for hosting two independent forums on one server is broken and not on the list of things to fix.

so you could either delete it or find a way to make it also check for your other hostnames.

Deleting it in the container is what I’ve been doing, but every time a container starts up or a new container image is generated, that code gets put back. So it needs to be changed at the source, so that when a new container is built, it is built to check for the multiple domains listed in app.yml. (That’s probably preferable to just deleting those three lines of code.)

If the code that builds the web SSL template isn’t going to be updated to check app.yml for a secondsite (and thirdsite, and so on), it sounds like this needs to happen in app.yml. That makes it a custom fix for me rather than a fix for all users running multiple forums on a single server with the apparently broken secondsite method.

Right now I’m in the middle of a major system migration project for a client, and these sites are most active during football season anyway, so I need to set up my testbed server to test writing app.yml corrections rather than try to fix the live system on the fly.

Thinking about it briefly, fixing the ssl template is somewhat challenging.

The current logic says: If the site isn’t A, make it A.

Introducing a secondsite complicates things, because if it isn’t A and it isn’t B, it also isn’t clear that changing it to either A or B is the right thing to do. (That may be why this hasn’t been addressed by Discourse.)

Maybe deleting those lines of code is the right thing to do when there are multiple sites after all, because the outside nginx server should only be passing through HTTPS requests that match either A or B. Forcing HTTP to HTTPS should already be happening in the outside nginx server.

It was never on the list of things to support. The recommended way was always to use a reverse proxy. I contrived a way to do it without a reverse proxy, and my hack broke a couple of years ago.

Doing multisite without a reverse proxy was always a parlor trick. If you’re a pro, you should remove the SSL and Let’s Encrypt templates and use a reverse proxy that handles SSL. CDCK uses HAProxy. I’ve been using Traefik. Caddy is pretty easy to manage; I quit using it because if someone removed the CNAME for their site it would cause all cert renewals to fail (that may no longer be the case, it’s been years).

Since I’m using nginx with proxy_pass to send traffic to the container for two different FQDNs, am I correct that that means I’m using the reverse proxy method for multisite?

Yes. I forgot about that one!

Have the nginx proxy handle HTTPS, remove the SSL and Let’s Encrypt templates from your yml file, and rebuild.
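In app.yml terms that would look roughly like the sketch below. The template names are the standard ones from a default install, but check your own file before editing; the exact list varies by setup:

```yaml
# Hypothetical app.yml templates section with TLS moved to the
# outer nginx reverse proxy. The two commented-out lines are the
# ones being removed; everything else stays as in a default install.
templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
  # - "templates/web.ssl.template.yml"
  # - "templates/web.letsencrypt.ssl.template.yml"
```

With those templates gone the container serves plain HTTP, so the outer nginx proxy_pass target would change from https://127.0.0.1:8443/ to the container's HTTP port or unix socket (the commented-out socket lines already in the config above suggest that was the original arrangement).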