A basic Discourse archival tool

A Discourse forum that I use is being taken offline in a couple of weeks, so I set out to archive the site. After a lot of research and trial and error, I found a simple solution using HTTrack. Here’s everything I learned.

Archive a Discourse site with HTTrack
For Windows users, the best solution appears to be HTTrack. This worked great and archived the site to HTML files. All categories, threads, and posts were archived, including all pages, with relative navigation links.

A basic tutorial on HTTrack is here. I left the settings at their defaults, with the following customizations:

  • Web Addresses:
    • https://forums.gearboxsoftware.com/c/homeworld/
    • https://forums.gearboxsoftware.com/c/homeworld-dok/
  • Scan Rules:
    • -gearboxsoftware.com/* -forums.gearboxsoftware.com/* +forums.gearboxsoftware.com/c/homeworld/* +forums.gearboxsoftware.com/c/homeworld-dok/* +forums.gearboxsoftware.com/t/* +forums.gearboxsoftware.com/user_avatar/* +sea2.discourse-cdn.com/*
  • Browser ID (aka User Agent):
    • Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
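I used the Windows GUI, but the same settings can be expressed as a command-line invocation. This is a sketch, not the exact command I ran; the output directory name is my own, and the flags (`-O` for output path, `-F` for the browser identity string, filters as trailing `+`/`-` patterns) should be checked against `httrack --help` for your version:

```shell
# Mirror the two category trees as Googlebot, writing to ./gbx-forum-archive.
httrack "https://forums.gearboxsoftware.com/c/homeworld/" \
        "https://forums.gearboxsoftware.com/c/homeworld-dok/" \
        -O ./gbx-forum-archive \
        -F "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" \
        "-gearboxsoftware.com/*" "-forums.gearboxsoftware.com/*" \
        "+forums.gearboxsoftware.com/c/homeworld/*" \
        "+forums.gearboxsoftware.com/c/homeworld-dok/*" \
        "+forums.gearboxsoftware.com/t/*" \
        "+forums.gearboxsoftware.com/user_avatar/*" \
        "+sea2.discourse-cdn.com/*"
```

The deny rules exclude everything by default, and the allow rules then re-include only the two category trees, the threads, avatars, and the CDN assets.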

Note: There’s a CSS issue preventing category links from working, but it can easily be fixed as described below.

CSS Issue
When viewing Category pages as googlebot, the thread links don’t work. An example is [here](https://web.archive.org/web/20220731051419/https://forums.gearboxsoftware.com/c/homeworld/57).

This makes navigation impossible on category pages in HTTrack, archive.org, and Google’s cache. It appears to be a Discourse issue in a CSS file. To fix the links, simply block/delete the following CSS file:

  • stylesheets/desktop_theme_10_1965d1d398092f2d9f956b36e08b127e00f53b70.css?__ws=forums.gearboxsoftware.com

@codinghorror - Can you guys address this?

Challenges
I ran into the following challenges and eventually overcame them after much trial and error.

  • Discourse pages are dynamically generated with JavaScript. This makes for poor results with most archive/crawler tools.
  • Most threads initially load only the first ~20 posts; the rest don’t appear until you scroll down. Pressing Ctrl+P loads a /print page with all posts visible. Users are limited to printing five pages an hour in print mode, but a Discourse site admin can raise this limit.
  • Adrelanos noted that multi-page threads weren’t being archived properly by HTTrack; however, I suspect this was due to his HTTrack settings, as I did not have this issue.
  • Saving a page to PDF won’t include any collapsed details sections.
  • Pages can be loaded in basic HTML by adding ?_escaped_fragment_ to the end of a URL, but this trick works only for threads, not categories.

The above challenges aren’t a concern once you learn that all Discourse pages/content can be rendered properly as HTML for crawlers. To do this, change your crawler’s or browser’s user agent to Googlebot to get the HTML version of pages.
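The user-agent trick can be sketched with a small Python helper (`urllib` is from the standard library; the function name and example URL are my own):

```python
import urllib.request

# The same Googlebot string used in the HTTrack settings above.
GOOGLEBOT_UA = "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"

def googlebot_request(url: str) -> urllib.request.Request:
    """Build a request that identifies as Googlebot, so Discourse serves
    the pre-rendered HTML instead of the JavaScript application."""
    return urllib.request.Request(url, headers={"User-Agent": GOOGLEBOT_UA})

# Fetching a thread would then look like:
# html = urllib.request.urlopen(
#     googlebot_request("https://forums.gearboxsoftware.com/t/some-thread/123")
# ).read()
```

The same header works with any HTTP client; the key is that Discourse inspects the user agent and serves crawlers static HTML.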

Archive.org
If you use the “Save Page Now” feature on web.archive.org, it will archive the JavaScript version of Discourse, with poor results. Archive.org uses the user agent of the person requesting the archive, so you must change your user agent to Googlebot. In Chrome, you can use an extension called “User-Agent Switcher for Chrome”. In its options, add:

  • Name: Googlebot
  • String: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
  • Group: Chrome
  • Indicator Flag: 1

Alternative Archive Tools
Many tools are listed here: Archive an old forum "in place" to start a new Discourse forum
I also briefly tested GUI tools like Cyotek WebCopy, A1 Website Download, and WAIL.
Command-line tools include mcmcclur’s tool and wget. A tutorial on wget is [here](https://letswp.justifiedgrid.com/download-discourse-forum-wget/).
However, for Windows users, the best solution appears to be HTTrack.

Note: Since I’m a new user, I’m limited to two links in a post. Hence I turned some links into preformatted text.


I’ve now identified the root cause. It turns out the background image is conflicting with the links!

Within this file:
stylesheets/desktop_theme_10_1965d1d398092f2d9f956b36e08b127e00f53b70.css

Within this code:

body:before {
    backface-visibility: hidden;
    -webkit-backface-visibility: hidden;
    content: "";
    display: block;
    background-color: #000000;
    background-image: url("data:image/svg+xml,%3Csvg width='6' height='6' viewBox='0 0 6 6' xmlns='http://www.w3.org/2000/svg'%3E%3Cg fill='%23adadad' fill-opacity='0.4' fill-rule='evenodd'%3E%3Cpath d='M5 0h1L0 6V5zM6 5v1H5z'/%3E%3C/g%3E%3C/svg%3E");
    position: fixed;
    height: 100vh;
    width: 100vw;
    top: 0;
    left: 0;
    z-index: 0;
    opacity: 0.03;
    background-size: 70%;
}

CSS Issue Fix
Move the background image down a layer to fix the links.

  • Open stylesheets/desktop_theme_10_1965d1d398092f2d9f956b36e08b127e00f53b70.css and replace all three occurrences of “z-index:-1;” with “z-index:-2;”. Then replace “z-index:0;” with “z-index:-1;”.
  • Then open desktop_32713c1b6551369eb391868f3d4e3f2ac9c38cf1.css and simply replace all three occurrences of “z-index:-1;” with “z-index:-2;”. The links will now work.
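If the archive is on disk, the two steps above can be applied with `sed` (a sketch, assuming GNU sed and the file paths as given in the posts above; note the replacement order matters, since the second substitution produces new “z-index:-1;” values that must not be bumped again):

```shell
# Step 1: in the theme stylesheet, push -1 layers to -2 first,
# then move the 0 layer (the body:before background) down to -1.
sed -i 's/z-index:-1;/z-index:-2;/g' stylesheets/desktop_theme_10_1965d1d398092f2d9f956b36e08b127e00f53b70.css
sed -i 's/z-index:0;/z-index:-1;/g'  stylesheets/desktop_theme_10_1965d1d398092f2d9f956b36e08b127e00f53b70.css

# Step 2: in the main desktop stylesheet, only the -1 -> -2 change is needed.
sed -i 's/z-index:-1;/z-index:-2;/g' desktop_32713c1b6551369eb391868f3d4e3f2ac9c38cf1.css
```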

Thanks for letting us know. Since this is a crawler/archive view, these images shouldn’t be displayed anyway… so I’ve opened a PR to remove them.


For what it’s worth, I’ve written a minimum viable Python script that performs simple backup of post content using the API: GitHub - jamesob/discourse-archive: Provides a simple archive of Discourse content

It’s pretty barebones, but it should give someone a rough idea of how to generate a suitable-for-public archive.
