SMF2 Conversion and Rake to S3 Help

I searched, but can’t find a basic overview of how to move our uploads from DigitalOcean to S3. I successfully set up S3 for new uploads and our backups a few months back. I’d like to complete the move of our existing uploads (~1.4 GB) over to S3.

This was a conversion from SMF2 from the start. We now have two upload folders, one at the root in smf2 and one in /var/discourse. The SMF2 directory is 2.8GB. I suppose there might be two steps here? Do I need to do separate steps to move from the SMF2 directory and from the /var/discourse directory?

I came across a rake task for migrating to S3, but can’t find a guide, only a bunch of people talking about errors they encountered and suggestions to correct them. Is there a guide?

Please read this article; it should guide you through setting up the task.

Check the configuration section, particularly the hooks part, and set up the bucket using environment variables to avoid the issues you’ve already read about.
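For reference, the overall flow is short. Here’s a sketch (the task names below are the standard Discourse rake tasks; double-check them against the guide before running):

```shell
# on the host
cd /var/discourse
./launcher enter app            # enter the running app container

# inside the container
rake uploads:migrate_to_s3      # copy existing local uploads to the S3 bucket
rake posts:rebake               # rewrite cooked posts to use the new URLs
```

The migration task reads the same S3 settings you already have in `app.yml`, which is why getting those env variables right first matters.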


That’s great! Thank you!

The only thing on there that I don’t know about is a CDN. I see we can use Amazon CloudFront. I’m guessing that wasn’t set up during the previous S3 setup steps. I’ll look for a guide on how to get that set up.

You can set up CloudFront using your uploads bucket as the origin; once it’s up, set its URL as the S3 CDN link. That’s all there is to it.
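Once the distribution is deployed, an easy sanity check is to request any uploaded file through the CloudFront hostname and look for the `x-cache` response header (the hostname and object path below are placeholders; substitute a real key from your bucket):

```shell
# placeholders: use your distribution hostname and a real object key
curl -sI https://dXXXXXXXXXXXX.cloudfront.net/original/1X/example.jpg | grep -i 'x-cache'
# a working distribution answers "Hit from cloudfront" or "Miss from cloudfront"
```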


Thanks. I followed this video to do the CloudFront setup:

I’m running rake posts:rebake now. 84k posts, so it’s gonna take some time.

Uh oh… trying to start it again…

root@discourse-app:/var/www/discourse# rake posts:rebake
Rebaking post markdown for 'default'
10027 / 83358 ( 12.0%)/usr/local/bin/rake: line 2: 959 Killed RAILS_ENV=production sudo -H -E -u discourse bundle exec bin/rake "$@"

Just happened again… Any suggestions?

root@discourse-app:/var/www/discourse# rake posts:rebake
Rebaking post markdown for 'default'
12901 / 83359 ( 15.5%)/usr/local/bin/rake: line 2: 2569 Killed RAILS_ENV=production sudo -H -E -u discourse bundle exec bin/rake "$@"
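A bare `Killed` like this usually means the kernel’s OOM killer stopped the rake process for lack of memory. It can be confirmed from the host’s kernel log; a sketch, since the exact message wording varies by kernel version:

```shell
# on the host: look for OOM-killer activity around the time of the failure
dmesg -T | grep -iE 'out of memory|oom-kill|killed process' | tail
```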

Some posts are now pulling content from Cloudfront, so at least that’s sort of working.

It looks like this is happening without me doing a rebake? Sidekiq is pounding away at pulling hotlinked images.

OK, I got the site rebuilt after following the S3 setup steps.

Everything should be loading from S3 now. So how do I verify that? Can I delete the old uploads directory on my droplet now to free up space?

I clearly did something wrong. I’m using more disk space now than I was before.

The upload to S3 should happen during the rebuild. You should be able to browse the website assets in the s3 buckets and all the site assets should load from the S3 or CDN link supplied. If it isn’t working, there is definitely something wrong with the way you configured it. Are you getting any errors?
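If browsing the bucket is awkward, the database can tell you directly. A sketch from a Rails console inside the container, assuming the standard `Upload` model, where locally stored files have URLs starting with `/uploads/`:

```shell
# inside the app container (./launcher enter app):
rails c
# then, at the console prompt:
#   Upload.where("url LIKE '/uploads/%'").count       # uploads still stored locally
#   Upload.where("url LIKE '%amazonaws.com%'").count  # uploads already on S3
```

If the first count is still large after the rebuild, the legacy files haven’t been migrated yet.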

I suspect something may have gone wrong then. My rebuild didn’t take long enough for a few GB of data to have been uploaded during it.

EDIT:
All uploads as of several months ago were already going to S3. It’s the legacy stuff before switching to S3 that I want to move.

Bear in mind that servers are orders of magnitude faster than your broadband connection; it shouldn’t take more than a couple of minutes to upload the images from your host to S3. But you should look at the rebuild log carefully to confirm whether the uploads happened.
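A simple way to review that log after the fact is to capture the rebuild output and search it (a sketch; the search terms are just likely keywords, not exact log strings):

```shell
cd /var/discourse
./launcher rebuild app 2>&1 | tee rebuild.log
grep -iE 'migrate|upload|s3' rebuild.log | less
```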

Alternatively, you can run `./discourse-doctor` to generate a log for review.

Here’s the output from the script:

Only thing I notice is marked in bold in the DNS section.

> root@discourse:/var/discourse# ./discourse-doctor
> DISCOURSE DOCTOR Thu May 14 11:35:17 UTC 2020
> OS: Linux discourse 4.15.0-99-generic #100-Ubuntu SMP Wed Apr 22 20:32:56 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
> 
> 
> Found containers/app.yml
> 
> ==================== YML SETTINGS ====================
> DISCOURSE_HOSTNAME=
> SMTP_ADDRESS=
> DEVELOPER_EMAILS=
> SMTP_PASSWORD=
> SMTP_PORT=
> SMTP_USER_NAME=
> LETSENCRYPT_ACCOUNT_EMAIL=
> 
> ==================== DOCKER INFO ====================
> DOCKER VERSION: Docker version 18.09.6, build 481bc77
> 
> DOCKER PROCESSES (docker ps -a)
> 
> CONTAINER ID        IMAGE                 COMMAND             CREATED             STATUS              PORTS                                      NAMES
> db900fc77ebe        local_discourse/app   "/sbin/boot"        15 hours ago        Up 15 hours         0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp   app
> 
> Discourse container app is running
> 
> 
> ==================== PLUGINS ====================
>             exec: {cd: $home/plugins, cmd: ['git clone https://github.com/discourse/docker_manager.git', 'git clone https://github.com/procourse/procourse-static-pages.git', 'git clone https://github.com/discourse/discourse-bbcode.git', 'git clone https://github.com/discourse/discourse-adplugin.git']}
> 
> No non-official plugins detected.
> 
> See https://github.com/discourse/discourse/blob/master/lib/plugin/metadata.rb for the official list.
> 
> ========================================
> Discourse version at : NOT FOUND
> Discourse version at localhost: Discourse 2.5.0.beta4
> **==================== DNS PROBLEM ====================**
> **This server reports Discourse 2.5.0.beta4 , but  reports NOT FOUND.**
> **This suggests that you have a DNS problem or that an intermediate proxy is to blame.**
> **If you are using Cloudflare, or a CDN, it may be improperly configured.**
> 
> 
> ==================== MEMORY INFORMATION ====================
> RAM (MB): 1008
> 
>               total        used        free      shared  buff/cache   available
> Mem:            985         636          69         121         279          88
> Swap:          2047         775        1272
> 
> ==================== DISK SPACE CHECK ====================
> ---------- OS Disk Space ----------
> Filesystem      Size  Used Avail Use% Mounted on
> /dev/vda1        25G   20G  4.8G  81% /
> 
> ---------- Container Disk Space ----------
> Filesystem      Size  Used Avail Use% Mounted on
> overlay          25G   20G  4.8G  81% /
> /dev/vda1        25G   20G  4.8G  81% /shared
> /dev/vda1        25G   20G  4.8G  81% /var/log
> 
> ==================== DISK INFORMATION ====================
> Disk /dev/vda: 25 GiB, 26843545600 bytes, 52428800 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: gpt
> Disk identifier: 02CBFCD2-7495-4A08-A11B-28E7D3872FAA
> 
> Device      Start      End  Sectors  Size Type
> /dev/vda1  227328 52428766 52201439 24.9G Linux filesystem
> /dev/vda14   2048    10239     8192    4M BIOS boot
> /dev/vda15  10240   227327   217088  106M Microsoft basic data
> 
> Partition table entries are not in disk order.
> 
> ==================== END DISK INFORMATION ====================
> 
> ==================== MAIL TEST ====================
> For a robust test, get an address from http://www.mail-tester.com/
> Or just send a test message to yourself.
> Email address for mail test? ('n' to skip) []: n
> Mail test skipped.
> Replacing: SMTP_PASSWORD
> Replacing: LETSENCRYPT_ACCOUNT_EMAIL
> Replacing: DEVELOPER_EMAILS
> Replacing: DISCOURSE_DB_PASSWORD
> Replacing: Sending mail to
> 
> ==================== DONE! ====================
> Would you like to serve a publicly available version of this file? (Y/n)n
> root@discourse:/var/discourse#

Here’s the section from app.yml:

> 
>     DISCOURSE_USE_S3: true
>     DISCOURSE_S3_REGION: us-east-1
>     DISCOURSE_S3_ACCESS_KEY_ID: <MY KEY>
>     DISCOURSE_S3_SECRET_ACCESS_KEY: <MY SECRET KEY>
>     DISCOURSE_S3_CDN_URL: 'https://d2hneyr8lp58j4.cloudfront.net'
>     DISCOURSE_S3_BUCKET: brcuploads
>     DISCOURSE_S3_BACKUP_BUCKET: bcruploads-backups
>     DISCOURSE_BACKUP_LOCATION: s3

Is that telling me I’ve got the DNS on DigitalOcean set wrong? I added the CNAME from Cloudfront.
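One quick way to check the records from the command line (the hostnames below are placeholders for the actual CDN CNAME and forum hostname):

```shell
dig +short cdn.example.com CNAME     # should show the *.cloudfront.net target
dig +short forum.example.com         # should resolve to the droplet's IP
```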

Wow, this wasn’t helpful. You chose n for:

> Would you like to serve a publicly available version of this file? (Y/n)

Can you generate a publicly available version? You can share the link in a DM, and I may be able to verify whether the migration to S3 actually worked or failed for some reason.

I’ll DM you the URL.
