A basic Discourse archival tool

I don’t know if enabling the spoiler plugin or “fixing” the YouTube onebox would make things better for bots, but it would certainly make a better printable/escaped version, as these currently miss fairly relevant content from the original:

youtube onebox escaped

spoiler escaped

here’s a quick, dirty, and hackish fix for them, if you’re in a rush and lazy like me:

;(function( discoUrsa, undefined ) { // jquery-ish namespace
    //addSpoilerStyle()
    DOMready()
    window.onload = DOMready
    function DOMready () {
        fixSpoiler()
        fixOnebox()
    }
    function addSpoilerStyle () {
        var style = document.createElement('style')
        style.type = 'text/css'
        style.innerHTML = `
        .spoiler.spoiled {background-color: rgba(0, 0, 0, 0); color: rgba(0, 0, 0, 0); text-shadow: gray 0px 0px 10px; user-select: none; cursor: pointer;}
        .spoiled.half-spoiled {text-shadow: gray 0px 0px 5px;}
        .spoiler {color: gray; cursor: pointer;}`
        document.getElementsByTagName('head')[0].appendChild(style)
    }
    function fixSpoiler () {
        for (const s of document.querySelectorAll('.spoiler')) { // const avoids leaking a global
            s.classList.add('spoiled')
            s.onclick = function(){ this.classList.toggle('spoiled') }
            s.onmouseenter = function () { this.classList.add('half-spoiled') }
            s.onmouseleave = function () { this.classList.remove('half-spoiled') }
        }
    }
    function fixOnebox () {
        fixYoutube()
    }
    function fixYoutube () {
        for (const o of document.querySelectorAll('.lazyYT')) { // const avoids leaking a global
            o.innerHTML = `<iframe width="${ o.getAttribute('data-width') }" height="${ o.getAttribute('data-height') }" src="https://www.youtube.com/embed/${ o.getAttribute('data-youtube-id') }?${ o.getAttribute('data-parameters') }" frameborder="0" allowfullscreen></iframe>`
        }
    }
}( window.discoUrsa = window.discoUrsa || {} ))
1 Like

Yes, those links are gone, but it’s all summarized on this new page. Also, the output of the code as applied to this Discourse Meta is now here. I even put it up on GitHub, so maybe someone will get interested.

I’d like to edit the original post, but I seem to be past the edit window.

Incidentally, I do think that httrack works much better than I originally thought, but I still strongly prefer my version for two main reasons:

  • My code explicitly supports MathJax, which is essential for my work.
    (I’ll probably need to update my code to work with the new MathPlugin sometime)
  • I’ve got much more control over what gets downloaded and how it’s displayed. For example, I don’t like the way that httrack output points to user links, even if they weren’t downloaded.
9 Likes

No problem! I made the first post a wiki!

3 Likes

I’m hosting a forum that is currently, in its third iteration, running Discourse. Our last two forums ran (I think) phpBB2 or something like that. I have resolved to archive them using Discourse, so the plan is:

  1. I import the phpBB2 database into Discourse (there’s a migration tool).
  2. I create a static HTML archive using Discourse.
  3. I make the static HTML archive publicly available (preferably in the same place where our dynamic forum running Discourse is).

According to the first message

There are no user pages or category pages

Could the tool be extended so that creating category views would also be possible?

Also, any help on how to use the Jupyter notebook thing? This is the first time I’ve heard of it…

@Silvanus Can you indicate a live Discourse site that you want to archive? I’d be glad to try it out.

Also, have you tried httrack? I think that a command as simple as httrack yoursiteurl might work quite well.

I’m still in phase 1 (phpBB2 > phpBB3 > Discourse) of my archival, so no site yet. After I’ve managed the phpBB conversion, I’ll get back to this. It feels very, very hard. I’ve been trying to install phpBB3 for a while now, but I keep running into weird problems. :frowning:

I’ll have to try that httrack, thanks.

@Silvanus Well, I noticed that you point to the forum at https://uskojarukous.fi/ on your Profile page; I went ahead and created a couple of archives of that. You can (temporarily) take a look at the results here:

Here are a few comments:

  • I definitely like my version better; no surprise there because I designed it the way I want it to look.
  • The front page of the httrack version doesn’t look so great simply because that’s what the escaped fragment version looks like.
  • I think it might make sense to start httrack at a subpage to generate something like this.
  • It wouldn’t be too hard to make my archival tool grab the categories; I might do that for the next iteration.
  • My code adds MathJax to every page because my forums are mathematical. I should probably try to detect if MathJax is necessary. I’m guessing your forum doesn’t require it.

The httrack command

The httrack version was generated with a command that looks like so:

httrack https://uskojarukous.fi -https://uskojarukous.fi/users* -*.rss -O uskojarukous_arxiv -x -o -M10000000 --user-agent "Googlebot"
  • The -https://uskojarukous.fi/users* -*.rss prevents httrack from downloading files matching those patterns.
  • The -x -o combo replaces both external links and errors with a local file indicating the error. So, for example, we don’t link to user profiles on the original that weren’t downloaded locally.
  • The -M10000000 restricts the total amount downloaded to 10MB. There appears to be some post processing and downloading of supplemental files that makes the total larger than this anyway.
  • The --user-agent "Googlebot" should not be necessary if the forum is powered by a recent version of Discourse.

The archival tool code

For the most part, the archival tool should run with minimal changes. I run it within a Jupyter notebook but the exact same code could be run from a Python script with the appropriate libraries installed. Of course, you need to tell it what forum you want to download. The few lines of my first input look like so:

base_url = 'https://uskojarukous.fi/'
path = os.path.join(os.getcwd(), 'uskojarukous')
archive_blurb = "A partial archive of uskojarukous.fi as of " + \
  date.today().strftime("%A %B %d, %Y") + '.'

Later, in input 6, I define max_more_topics = 2. Essentially, that defines a bound on k in this code here:

'/latest.json?page=k'
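In other words, the topic-list pagination amounts to walking `/latest.json` page by page up to that bound. A minimal sketch of the idea (the helper name is mine, not something from the notebook):

```python
from urllib.parse import urljoin

def topic_list_urls(base_url, max_more_topics):
    """Build the paginated topic-list URLs, pages 0..max_more_topics."""
    first = urljoin(base_url, 'latest.json')
    urls = [first]
    for k in range(1, max_more_topics + 1):
        urls.append(first + '?page=%d' % k)
    return urls

urls = topic_list_urls('https://uskojarukous.fi/', 2)
# ['https://uskojarukous.fi/latest.json',
#  'https://uskojarukous.fi/latest.json?page=1',
#  'https://uskojarukous.fi/latest.json?page=2']
```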

But again, there should be some changes made to the code to get it to work for non-mathematical forums.

4 Likes

Very cool, thank you for all the clarifications. Just a quick note, it seems that your tool can’t handle sub-categories (which is why many of the messages seem to be without a category).

3 Likes

@Silvanus Yes, I think you’re absolutely right about the sub-category thing. Thanks - I had wondered about that.

@mcmcclur: as you already realized, I’m the admin of said forum, which is the third of our forums. When we did technological jumps, we didn’t migrate, but started from scratch, and the older forum was archived. The last two forums are in SMF format - but I finally managed to start converting them into Discourse format! :slight_smile:

So, our forum had a public area and a closed area. I’m thinking that the closed area (a few categories) should be archived, but closed off via a password gate. I noticed that the static paths are something like /t/TITLE/MESSAGEID/. This, of course, lends itself to thread-by-thread gating, but is slightly cumbersome - but, heh, I guess that’s what you get when archiving huge loads of stuff from a dynamic forum into a static archive… :slight_smile:
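For anyone attempting the same thread-by-thread gating: since the archived paths follow the /t/TITLE/MESSAGEID/ pattern, deciding which files belong behind the gate can be done mechanically. A rough sketch (the helper name and the ID set are mine, purely for illustration):

```python
import re

# Matches archived topic paths of the form /t/TITLE/MESSAGEID/
TOPIC_PATH = re.compile(r'^/t/[^/]+/(\d+)')

def is_gated(path, closed_topic_ids):
    """True if this archived path belongs to a topic from the closed area."""
    m = TOPIC_PATH.match(path)
    return bool(m) and int(m.group(1)) in closed_topic_ids

closed = {83394658}                                  # IDs from closed categories
is_gated('/t/forum-thread-title/83394658/', closed)  # True
is_gated('/t/public-thread/12345/', closed)          # False
```

The set of closed topic IDs could be pulled from the category JSON before the forum is taken down, and the matching files moved under a password-protected directory.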

Thank you @mcmcclur

It worked great! :heart_eyes:

1 Like

Just a few tidbits for anyone else looking for some httrack tips (which works great for my purposes).

  • A complete list of command line flags: HTTrack Website Copier - Offline Browser
  • Using the -s0 flag ignores the robots.txt (if you have a non-spider-able account)
  • If your site is behind a login, you can download a .txt file of the cookie (once logged in) using a chrome extension like cookies.txt and place that in the directory you’re running httrack from.
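As a side note, the same Netscape-format cookies.txt that httrack reads can also be loaded from Python’s standard library, which is handy if you end up scripting the download yourself. A small sketch (the domain and cookie name here are made up):

```python
import http.cookiejar
import os
import tempfile

# A minimal Netscape-format cookies.txt, as exported by extensions like
# cookies.txt: domain, include-subdomains flag, path, secure, expiry, name, value.
sample = (
    "# Netscape HTTP Cookie File\n"
    "forum.example.com\tFALSE\t/\tTRUE\t2147483647\t_t\tabc123\n"
)
path = os.path.join(tempfile.mkdtemp(), 'cookies.txt')
with open(path, 'w') as f:
    f.write(sample)

jar = http.cookiejar.MozillaCookieJar(path)
jar.load()                        # parses the Netscape cookies.txt format
names = {c.name for c in jar}     # {'_t'}
```

The resulting jar can be attached to a urllib opener (or a requests session) so scripted fetches are authenticated the same way the browser was.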
6 Likes

I’m using httrack via cron to create an offline archive of our Discourse site. However, the user that is logging in under httrack gets marked as a “view” for each topic, giving super-inflated numbers of views for each topic (the cron runs every hour).

Is there a way to exclude a certain user from being recorded in the statistics / view stats for the site as a whole?

6 Likes

Good point, where would this be intercepted @sam?

1 Like

We have this method for tracking page views:

We have additional methods for tracking user visits which would be even harder to override.

We only store one page view per day per user, but I get that it can add up.

Hacking this out so certain users are not tracked would either require a plugin or some sort of daily query that nukes all the views by the user and remembers to also reduce views count from the topics table.

4 Likes

For my purposes (a very minimally used site for internal comms), even a boilerplate script that I could manually run on occasion that says “nuke all views by user:archive” would be great.

Hi all – just jumping in here to say that @mcmcclur’s code was exactly what I was looking for! So thank you very much for sharing :slight_smile:

I made a few small modifications (mainly additional code that makes sure to grab all posts in a topic, not just the first twenty) and the code is here: GitHub - kitsandkats/ArchiveDiscourse: Code for archiving a Discourse site into static HTML, forked from @mcmcclur’s original repo and stored as a python file instead of a Jupyter notebook.

I’m very happy with how it turned out. Thanks again!

10 Likes

Hi, I just read through this whole thread and wanted to check: does this tool work if the Discourse forum is behind a login and password? How would I edit the code so it will allow me to archive the site?

1 Like

As it is currently written, the code is not designed to access any material that requires a login. It should be pretty easy to set that up, though. The code interacts with the Discourse site via the Python Requests library which does offer authentication. It’s feasible that adding an auth=('user', 'pass') to the code at the appropriate points is all that’s required. I’m not currently running a Discourse site so I can’t test that at the moment.
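For what it’s worth, requests’ `auth=('user', 'pass')` is just HTTP Basic auth, which boils down to a single header. A stdlib sketch of what gets sent under the hood (the helper name is mine):

```python
import base64

def basic_auth_header(user, password):
    """Build the Authorization header that auth=('user', 'pass') produces."""
    token = base64.b64encode(f'{user}:{password}'.encode()).decode()
    return {'Authorization': 'Basic ' + token}

basic_auth_header('user', 'pass')
# {'Authorization': 'Basic dXNlcjpwYXNz'}
```

Whether plain Basic auth is enough depends on how the site’s login is set up; a Discourse API key might be needed instead, but that’s untested here.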

7 Likes

httrack does not work for me. Using:

httrack https://my-forums.org --user-agent "Googlebot"

httrack is quite promising, but long forum threads with multiple pages are incomplete. Once I click on “page 2”, it does not work. For example:

  • file:///home/user/My%20Web%20Sites/my-forums/my-forum.org/t/forum-thread-title/83394658.html looks really good (does not fetch from external resources), but
  • file:///home/user/My%20Web%20Sites/my-forums/my-forum.org/t/forum-thread-title/83394658.html?page=2 is broken.

Any suggestions?

Perhaps httrack can be told somehow to “use print mode”?

Perhaps httrack can be told to “append /print at the end”?

Is there a user agent setting which shows the whole forum thread on a single page? If not, could you please add this feature? You already implemented print mode, so most of the work is done. What’s left is a user agent which results in the crawler being served the contents generated for “print mode”. Alternatively, if you don’t like the idea of a custom user agent for this purpose, what about an HTTP header or cookie that could be used instead?


ArchiveDiscourse, improved/forked by @kitsandkats, is also broken for me.


Could you please consider implementing /print for the front page and category pages as well?


Quoting myself in I don't like infinite scrolling and want to disable it:

(Temporarily) disabling infinite scroll (for some user agents) would make it possible to archive Discourse with the httrack web archive tool.

1 Like