Upgrade Error - rake db:migrate index on theme_field_id

In the latest release notes (3.2.0.beta1), I noticed the discourse-ai plugin, which I hadn’t seen before, so I attempted to add the plugin and upgrade my Discourse instance at the same time.

As mentioned in the title, I’m currently seeing an error in the bootstrap where rake db:migrate is failing to create a unique index on theme_field_id. Here are some details on how I got to that point, though…

Initial Upgrade Attempt (patch-package error)

I’m running a split container install, so I:

  • Edited my web_only.yml to add the new discourse-ai plugin

    e.g. I added an extra line to the plugin hooks:
    ## Plugins go here
    ## see https://meta.discourse.org/t/19157 for details
    hooks:
      after_code:
        - exec:
            cd: $home/plugins
            cmd:
              - sudo -E -u discourse git clone https://github.com/discourse/docker_manager.git
              - sudo -E -u discourse git clone https://github.com/discourse/discourse-voting.git
              - sudo -E -u discourse git clone https://github.com/discourse/discourse-ai.git
    
  • Ran ./launcher bootstrap web_only

It errored out with a message about patch-package not being found.

Git Pull → Bootstrap (pg-vector error)

I figured it wouldn’t hurt to make sure I had the latest launcher updates before retrying:

  • Ran git pull to pick up the latest launcher-related updates
  • Ran ./launcher bootstrap web_only again

This time I received error messages related to pg-vector.

:page_facing_up: Log snippet from bootstrap *with* discourse-ai

I noted down my PostgreSQL versions so I’d have them for my records when I decided to revisit the discourse-ai plugin.

  • web_only:
    • client: psql (PostgreSQL) 13.10 (Debian 13.10-1.pgdg110+1)
  • data:
    • server: PostgreSQL 13.9 (Debian 13.9-1.pgdg110+1)
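
For anyone capturing the same details: the client version comes from running psql --version inside the web_only container, while the server will report its own via a one-line query from any psql session against the data container:

-- Reports the full version string of the running server
SELECT version();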

Remove discourse-ai Plugin → Bootstrap

I then removed the discourse-ai plugin from the web_only.yml file and ran a bootstrap again.

Much to my surprise, I was still seeing errors, but this time they appeared to be related to rake db:migrate not being able to create a unique index index_javascript_caches_on_theme_field_id, with the detail: Key (theme_field_id)=(3) is duplicated.

:page_facing_up: Log snippet from bootstrap *without* discourse-ai

Your help? :pray:

That brings me here looking for help. I figured I should take a pause and get some insights from the community before digging in further in case anyone else has seen this before.

For reference, I have 3.2.0.beta1-dev (993ed10cf0 ~ August 9th) installed.

And while I don’t think it’s related to this, I figure it doesn’t hurt to mention that I migrated between host machines at the start of this year… though I’ve done several Discourse updates through the admin UI since then.

Migration approach

From memory, this was basically: upgrade the source instance to the latest Discourse version, install Discourse on the new host, freeze the source, take a Discourse backup on the source, rsync images etc. between hosts, and restore the backup on the new host.

For the duplicate index errors, I think if you can restart web_only and deal with them from the UI it’s much simpler than doing it in the database. Though I don’t think I’ve seen one on theme_field_id before.

The line just above rake aborted! in the logs mentions discourse-voting now being discourse-topic-voting. You could try updating the link in the plugin section to the current one and see if that helps?

Can you clarify what resource it’s referring to as having a duplicate ID? I wasn’t sure exactly what theme_field_id refers to.

For the record, I hadn’t shut down the web_only container. I usually bootstrap the container first to minimize the outage window.

I’ll give it a try. I suspect it won’t help as I would think GitHub is redirecting things silently behind the scenes and it’s just a warning on the Discourse side. :slight_smile:

If you have an up-and-running site with the data explorer installed, I think you should be able to get a look at what that may be referring to using this query:

SELECT *
FROM theme_fields
WHERE id = 3

I wasn’t aware of that plugin - that’s neat! I don’t have it installed, but I’m able to enter the data container and run queries though.

That’s a great clue! If I’m reading the error message correctly though, I don’t think it’s on the theme_fields table, as that particular table doesn’t have a theme_field_id column. I haven’t checked the source code, but I would hazard a guess that the first argument to add_index() is the table name and the second is the column.

Based on that, it looks like it would be the javascript_caches table.

== 20230817174049 EnsureJavascriptCacheIsUniquePerTheme: migrating ============
-- remove_index(:javascript_caches, :theme_id)
   -> 0.0208s
-- add_index(:javascript_caches, :theme_id, {:unique=>true})
   -> 0.0079s
-- remove_index(:javascript_caches, :theme_field_id)
   -> 0.0026s
-- add_index(:javascript_caches, :theme_field_id, {:unique=>true})
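
If I understand the Rails migration DSL correctly, that last add_index step boils down to something like the following in SQL (my reading, not necessarily the literal statement Rails generates), which is what fails when two rows share a theme_field_id:

-- A unique index cannot be built while duplicate theme_field_id values exist
CREATE UNIQUE INDEX index_javascript_caches_on_theme_field_id
  ON javascript_caches (theme_field_id);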

So I checked the structure of that table and it has the theme_field_id column:

discourse=# \d javascript_caches
                                          Table "public.javascript_caches"
     Column     |            Type             | Collation | Nullable |                    Default
----------------+-----------------------------+-----------+----------+-----------------------------------------------
 id             | bigint                      |           | not null | nextval('javascript_caches_id_seq'::regclass)
 theme_field_id | bigint                      |           |          |
 digest         | character varying           |           |          |
 content        | text                        |           | not null |
 created_at     | timestamp without time zone |           | not null |
 updated_at     | timestamp without time zone |           | not null |
 theme_id       | bigint                      |           |          |
 source_map     | text                        |           |          |
Indexes:
    "javascript_caches_pkey" PRIMARY KEY, btree (id)
    "index_javascript_caches_on_digest" btree (digest)
    "index_javascript_caches_on_theme_field_id" btree (theme_field_id)
    "index_javascript_caches_on_theme_id" btree (theme_id)
Check constraints:
    "enforce_theme_or_theme_field" CHECK (theme_id IS NOT NULL AND theme_field_id IS NULL OR theme_id IS NULL AND theme_field_id IS NOT NULL)
Foreign-key constraints:
    "fk_rails_58f94aecc4" FOREIGN KEY (theme_id) REFERENCES themes(id) ON DELETE CASCADE
    "fk_rails_ed33506dbd" FOREIGN KEY (theme_field_id) REFERENCES theme_fields(id) ON DELETE CASCADE

I was able to query that table (with the content fields shortened so I could actually read it) and I can see the duplicates. But I’m not sure what the implications of these duplicates are.

discourse=# select id, theme_field_id, digest, substring(content from 1 for 64) as content_64, created_at, updated_at, theme_id, substring(source_map from 1 for 64) as source_64 from javascript_caches where theme_field_id = 3;
 id | theme_field_id |                  digest                  |                            content_64                            |         created_at         |         updated_at         | theme_id |                            source_64
----+----------------+------------------------------------------+------------------------------------------------------------------+----------------------------+----------------------------+----------+------------------------------------------------------------------
  1 |              3 | d0b6ec642d5649064ff0501cadc775a9217b16e0 | "define"in window&&define("discourse/theme-3/initializers/theme- | 2019-02-25 01:26:56.606537 | 2023-08-18 20:47:19.596923 |          | {"version":3,"sources":["discourse/initializers/theme-field-3-co
  2 |              3 | 7fd74ecf4448afccdbcd9ccde87acddb4ec6f514 | "define"in window&&define("discourse/theme-3/initializers/theme- | 2019-02-25 01:26:58.228209 | 2023-08-18 20:50:41.049209 |          | {"version":3,"sources":["discourse/initializers/theme-field-3-co

The content looked familiar, so I went into Customize → Theme → My Theme → Common → Head, made a minor tweak and saved, and I can see that one of the entries was updated while the other remains old…

discourse=# select id, theme_field_id, digest, substring(content from 1 for 64) as content_64, created_at, updated_at, theme_id, substring(source_map from 1 for 64) as source_64 from javascript_caches where theme_field_id = 3;
 id | theme_field_id |                  digest                  |                            content_64                            |         created_at         |         updated_at         | theme_id |                            source_64
----+----------------+------------------------------------------+------------------------------------------------------------------+----------------------------+----------------------------+----------+------------------------------------------------------------------
  2 |              3 | 7fd74ecf4448afccdbcd9ccde87acddb4ec6f514 | "define"in window&&define("discourse/theme-3/initializers/theme- | 2019-02-25 01:26:58.228209 | 2023-08-18 20:50:41.049209 |          | {"version":3,"sources":["discourse/initializers/theme-field-3-co
  1 |              3 | 7f4132b1f9ced1b90b8f8fc24812cc11e81fea8d | "define"in window&&define("discourse/theme-3/initializers/theme- | 2019-02-25 01:26:56.606537 | 2023-09-13 21:56:56.312263 |          | {"version":3,"sources":["discourse/initializers/theme-field-3-co
(2 rows)
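
As a more general check, in case other theme fields are affected too, I’d expect a query along these lines to list every theme_field_id with more than one cache row (a sketch):

-- List theme_field_ids that appear on more than one cache row
SELECT theme_field_id, COUNT(*) AS copies
FROM javascript_caches
WHERE theme_field_id IS NOT NULL
GROUP BY theme_field_id
HAVING COUNT(*) > 1;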

Again, I’m not sure what the implications of these duplicates are, whether it’s safe to delete the ‘old’ one, or whether there’s some other way to ‘rebuild’ the caches that would clean this up?

To be honest, I’m 0/2 on this so far, so I’m likely not the best source of info for this one. :slight_smile:

There are a couple of previous topics where a similar thing happened with index_tags_on_name, if they could be of any use? e.g.


Rebuild your data container. The AI plugin requires an extension you don’t have yet.

I appreciate your help! Seems like you led me down the right path. :star_struck:

I took a backup of the table with pg_dump, deleted the old duplicate entry, and the bootstrap completed successfully. :+1:
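
For anyone who finds this later, the cleanup was roughly the following shape (a sketch rather than my exact statements; check the ids and updated_at values against your own data, and keep a backup either way):

-- Keep a copy of the table inside the database (pg_dump from the shell works too)
CREATE TABLE javascript_caches_backup AS
SELECT * FROM javascript_caches;

-- Remove the stale row of each duplicated pair, keeping the most recently updated one
DELETE FROM javascript_caches jc
USING javascript_caches newer
WHERE jc.theme_field_id = newer.theme_field_id
  AND jc.updated_at < newer.updated_at;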

Thanks for the confirmation! I figured I might need to update the data container (or otherwise get pgvector installed). I was holding off on doing that as I didn’t want to deal with the downtime.
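
Incidentally, you can check from a psql session whether the current data container even ships the extension (a sketch):

-- Shows the vector extension if the server has it available to install
SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'vector';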

From the discourse-ai thread, it sounds like PG15 is on the horizon, so I might just wait a bit for that.

It seems like the pgvector dependency might be for the Embeddings feature, which I didn’t plan on using, but unfortunately it sounds like it’s all bundled together. I mainly just wanted to play with some of the OpenAI / ChatGPT magic rewrite features, so perhaps that’s one more reason to wait until the next big data container update. :slight_smile:


You can wait if you want, but single-container people (which is the vast majority of self-hosters) upgrade their postgres potentially every time they do an upgrade, so there’s not much reason to wait other than a few extra minutes of downtime. You’ll need to rebuild (or maybe just destroy and start) the web_only container after the database rebuild.


That’s the main reason. Even if it’s only a few minutes, I’d rather minimize it if it’s not critical.

On that note, I probably should be holding off on using a beta plugin given the potential for downtime alone. :stuck_out_tongue:

I’ve loosely followed some of the discussions on ‘zero downtime’ upgrades. Perhaps it’s just rose-colored glasses looking back on recent history, but it feels like I’ve been able to use /admin/upgrade for most updates over the past year, so I’ve been focused on more critical projects for the time being and lost interest in the zero downtime approach.

When I migrated to a bigger host back in January, it was ‘zero downtime’ in that users could continue to access content on the site, but there was a brief period where things were read-only as we transitioned. I suppose I could use that approach for a major data container upgrade if I really wanted to minimize downtime during the next big data upgrade.

PS. I’ve read a ton of your posts in the community here over the years. Thanks for all your efforts in supporting the community! :star2:

It really does! I haven’t done an upgrade for my “rebuilds when you need them” clients in quite a while. For sites like yours, my dashboard will do the rebuild and run the post-upgrade migrations after the new container has launched (if your existing container has those migrations set to not run the first time).
