Downgraded deployment of Discourse

Hi,
I am trying to install a downgraded version of Discourse on a CentOS 7 server. Can anyone suggest a discourse_docker commit hash that may be compatible with 2.2.2? I tried all the hashes from February 2017 (since that was the timeline for the release of 2.2.2), but no luck so far.

You’re trying to run Discourse 2.2.2? Why?

It’s a bad idea, so mostly no one knows how to do that, but I think you’d do something like this:

  • clone discourse_docker and check out a suitably old branch or commit
  • modify app.yml to have it use an old docker image (I guess they’re still available? if not, you’d need to figure out how to build an old image yourself)
  • modify app.yml to use an old version of Discourse.
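A minimal sketch of what that app.yml pinning might look like, assuming the old postgres.10.template.yml still exists in the discourse_docker checkout you use and that a v2.2.2 base image can still be pulled or built (sketch only, not a complete file):

    ## containers/app.yml (sketch, not a complete file)
    templates:
      - "templates/postgres.10.template.yml"   # old Postgres template instead of postgres.template.yml
      - "templates/redis.template.yml"
      - "templates/web.template.yml"

    params:
      version: v2.2.2   # Discourse tag to check out inside the container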
1 Like

Seconded; this is probably a bad idea. Many security fixes have been put in place since that version.

What are you trying to accomplish?

2 Likes

I see.

/sidekiq isn’t accessible, so it’s likely not a standard install.

1 Like

OK, so what happened a few months ago was that, in an attempt to add plugins through a code rebuild, the Discourse version was automatically upgraded. Since then, nobody has been happy with the performance of the platform (images not showing or showing up late, page-load delays, etc.). That is why we have finally resorted to trying to revert to the version that kept everyone happy, which was 2.2.2.

As for the downgrade, I’ve already commented out the version tag in app.yml, checked out discourse_docker at commit
6be5513f9f1a28951d3578c83a76883bbb543c81, and added 2.2.2 revert lines under hooks in app.yml:

    - exec:
        cd: $home
        cmd:
          - git fetch --depth=1 origin tag v2.2.2 --no-tags
          - git checkout v2.2.2
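For reference, a sketch of where those lines sit, assuming the conventional after_code hook from the sample app.yml (only the hooks/after_code wrapper is added here):

    hooks:
      after_code:
        - exec:
            cd: $home
            cmd:
              - git fetch --depth=1 origin tag v2.2.2 --no-tags
              - git checkout v2.2.2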

And my install fails on:
2021-03-30 18:04:25.344 UTC [43] FATAL: database files are incompatible with server
2021-03-30 18:04:25.344 UTC [43] DETAIL: The data directory was initialized by PostgreSQL version 13, which is not compatible with this version 10.7 (Ubuntu 10.7-1.pgdg16.04+1).

And if I do not specify postgres.10.template.yml, then it fails on:

Success. You can now start the database server using:

/usr/lib/postgresql/13/bin/pg_ctl -D /var/lib/postgresql/13/main -l logfile start

Warning: The selected stats_temp_directory /var/run/postgresql/13-main.pg_stat_tmp
is not writable for the cluster owner. Not adding this setting in
postgresql.conf.
Ver Cluster Port Status Owner Data directory Log file
13 main 5433 down postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log
update-alternatives: using /usr/share/postgresql/13/man/man1/postmaster.1.gz to provide /usr/share/man/man1/postmaster.1.gz (postmaster.1.gz) in auto mode
invoke-rc.d: could not determine current runlevel
invoke-rc.d: policy-rc.d denied execution of start.
Processing triggers for libc-bin (2.23-0ubuntu10) …

 * Stopping PostgreSQL 10 database server
   ...done.
 * Stopping PostgreSQL 13 database server
   ...done.

Performing Consistency Checks

Checking cluster versions
This utility cannot be used to downgrade to older major PostgreSQL versions.
Failure, exiting

UPGRADE OF POSTGRES FAILED

You are going to need to export your data and import into a clean instance:

Run ./launcher rebuild app again

When your instance is running:
Run ./launcher enter app
Run apt-get remove postgresql-client-9.5 && apt-get install postgresql-client-10
Run cd /shared/postgres_backup && sudo -u postgres pg_dump discourse > backup.db

Run: ./launcher stop app
Run: sudo mv /var/discourse/shared/standalone/postgres_data /var/discourse/shared/standalone/postgres_data_old
Run: ./launcher rebuild app

Run: ./launcher enter app
Run: cd /shared/postgres_backup
Run: sv stop unicorn
Run: sudo -iu postgres dropdb discourse
Run: sudo -iu postgres createdb discourse
Run: sudo -iu postgres psql discourse < backup.db
Run: exit
Run: ./launcher rebuild app

Since you’ve already deployed a newer version, the DBMS has been upgraded to Postgres 13 and the database migrations for the new version of Discourse have been run. Running an earlier application version will not work without restoring the pre-upgrade database files.
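If it helps, a quick way to confirm what the shared data directory was initialised with is to read its PG_VERSION marker (standalone path assumed):

    cat /var/discourse/shared/standalone/postgres_data/PG_VERSION   # prints 13 after the upgrade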

Regarding your performance problem, @pfaffman is on the ball in your other topic.

I will continue in that topic.

1 Like

Here I am trying to deploy an empty, downgraded instance with no data. Is it not possible even to deploy a fresh, empty 2.2.2 deployment?

I explained the steps above. It won’t be easy. I would guess it would take me an hour or two to figure out. And then you’ll have to revert to the database from before you did the upgrade.
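A rough sketch of that revert, assuming a copy of the pre-upgrade Postgres 10 data directory was kept somewhere (the postgres_data_pre_upgrade path below is hypothetical; substitute wherever your copy actually lives):

    cd /var/discourse
    ./launcher stop app
    # move the Postgres 13 data directory aside (keep it, don't delete it)
    sudo mv shared/standalone/postgres_data shared/standalone/postgres_data_13
    # put the pre-upgrade (Postgres 10) copy back -- hypothetical path, adjust to your backup location
    sudo mv shared/standalone/postgres_data_pre_upgrade shared/standalone/postgres_data
    ./launcher rebuild app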

This isn’t the solution you should pursue.

1 Like

It’s just an attempt to make this portal functional enough for team members to use smoothly.
As you can see from the stats I’m listing in the other post (Severe performance issues with Discourse 2.7.0.beta4 - #9 by Sirshad), I am unable to identify why this portal is getting bogged down.

It’s an attempt I want to pursue in case there is any chance it resolves the issues we are facing at present. Even if it’s not easy, if there is a chance it may resolve the set of issues we have been facing for the last 3 months, then it would be worth a shot.

I even tried installing the downgraded version directly on the VM instead of using Docker, but I started having compatibility issues between the Redis version listed for 2.2.2 and the redis gem version, which would not go any lower because of the sidekiq dependency. So that attempt ended in a dead end as well.
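One way to see exactly which redis and sidekiq gem versions 2.2.2 pinned, assuming a local clone of the discourse repository (the grep pattern just matches the resolved spec lines in Gemfile.lock):

    cd discourse
    git fetch --depth=1 origin tag v2.2.2 --no-tags
    git show v2.2.2:Gemfile.lock | grep -E '^    (redis|sidekiq) \('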

But this is not sustainable, surely? - you are going to need to upgrade at some point:

  • security patches
  • plugin & theme component parity and compatibility
  • new core features you might want

and …

  • performance improvements in core!

It takes all of ~20 minutes to do a completely fresh install of Discourse if you’ve done it a few times before - is it worth all this rigmarole?

If I can get good performance on the latest version, I would be happy not to go down this rabbit hole. But currently, performance is becoming quite a bane.

I think you could consider focussing on where that issue is coming from?

Could it just be the indexing for the latest Postgres? (That would be temporary.)

Could it be the temporary image-processing migration?

What’s htop showing? Cores maxed out?

Cores are fine. The machine’s stats are nowhere near choked.

How can I check the first two you mentioned?

Have a search around the forum. Plenty of users have experienced transitory slow-downs during the Postgres upgrade process, which cleared for most without any work.
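If it helps, a rough way to check the first two (long-running Postgres activity and the background-job backlog) from inside the container; paths assume a standard docker install, and the queries are generic rather than Discourse-specific:

    cd /var/discourse && ./launcher enter app

    # any long-running Postgres activity (e.g. a reindex or migration still churning)?
    sudo -u postgres psql discourse -c \
      "SELECT pid, now() - query_start AS runtime, state, left(query, 80) AS query
         FROM pg_stat_activity WHERE state <> 'idle' ORDER BY runtime DESC;"

    # how big is the Sidekiq backlog (image processing runs as background jobs)?
    cd /var/www/discourse && sudo -E -u discourse bundle exec rails runner \
      "s = Sidekiq::Stats.new; puts({enqueued: s.enqueued, scheduled: s.scheduled_size, retries: s.retry_size})"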

Thanks a bunch for the share, I’ll look into this. So how long does this transitory phase usually last? We have been facing this for the last 3 months.

That sounds excessive! So it may not be the issue. It’s hard to say without digging.

If your CPU is not maxed out, what’s taking the time - certain queries? I think it would be good to get exact details about what is slow.

Check your browser dev tools: what’s taking time to come back?

Do you see anything running very long here: your-site.com/sidekiq/scheduler?

Jobs::EnqueueOnceoffs 10.32.9.21:156 -142914 2021-04-01 13:00:42 UTC 140ms
Jobs::EnqueueOnceoffs 10.32.9.21:156 -59895 2021-04-01 12:50:37 UTC 85ms
Jobs::EnqueueOnceoffs 10.32.9.21:156 623 2021-04-01 12:40:48 UTC 88ms
Jobs::EnqueueOnceoffs 10.32.9.21:156 -93692 2021-04-01 12:30:41 UTC 82ms

These seem unusually long compared to the other scheduler jobs.

Or are those even normal run times?

Totally normal, I suspect. I just had a look at one of my sites that’s running normally, and that job runs for ~100ms in the example I found.