He’s talking to me, @itsbhanusharma. My backups aren’t working, even though my configuration is the same as Bill’s. I didn’t use a separate pull zone for the backup bucket, though, so maybe that’s why. I just have the name of the backup bucket in the env variables.
I’ll try Bhanu’s suggestion to migrate the S3 assets later.
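For context, the backup-related part of my env section looks roughly like this; just a minimal sketch, and the bucket name is a placeholder:

```yaml
env:
  ## Tell Discourse to store backups on S3-compatible storage
  DISCOURSE_BACKUP_LOCATION: s3
  ## The only backup-specific detail it gets is the bucket name (placeholder)
  DISCOURSE_S3_BACKUP_BUCKET: my-backups-bucket
```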
I created two pull zones, one for each of the buckets (upload and backup). I started a backup from my admin panel, but it failed. The only thing Discourse knows about the backup destination is the name of the backup bucket.
It seems to abort the rake task because it appends amazonaws.com to the URL, which is incorrect here. This is the problem: if you use an S3-compatible service that isn’t Amazon, the rake task won’t work, since the URL rewrite is hardcoded to amazonaws.com.
I have my backups working with S3 as well, so I have old and new uploads working with S3. I am using Cloudflare for SSL and DDoS protection, BunnyCDN for the upload and backup pull zones, and Backblaze for the S3 storage. I’m all good now!
Then rebuild so that the S3 configuration is globally defined and not just in the admin panel. This way, when you migrate old files to S3, you can do it more easily with three commands:
./launcher enter app
rake uploads:migrate_to_s3
rake posts:rebake
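For reference, “globally defined” here means the env section of containers/app.yml rather than the admin panel. This is only a minimal sketch assuming Backblaze B2 as the S3-compatible provider; the region, endpoint, keys, hostnames, and bucket names are placeholders:

```yaml
env:
  ## S3-compatible object storage, defined globally so console/rake tasks can see it
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: us-west-002                               # placeholder region
  DISCOURSE_S3_ENDPOINT: https://s3.us-west-002.backblazeb2.com  # placeholder endpoint
  DISCOURSE_S3_ACCESS_KEY_ID: <keyID>
  DISCOURSE_S3_SECRET_ACCESS_KEY: <applicationKey>
  DISCOURSE_S3_BUCKET: my-uploads-bucket                         # placeholder bucket
  DISCOURSE_S3_CDN_URL: https://uploads-pullzone.b-cdn.net       # placeholder pull zone
  ## Backups go straight to the backup bucket; no pull zone URL is defined for them
  DISCOURSE_BACKUP_LOCATION: s3
  DISCOURSE_S3_BACKUP_BUCKET: my-backups-bucket                  # placeholder bucket
```

After editing, `./launcher rebuild app` picks the values up, and then the three commands above do the migration.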
Not sure what I’m missing here. Is there an S3_CDN_BACKUP_URL, given that the URL is different for the backup?
Did you resolve this question? I’m also not clear on where to put the URL for the pull zone that points to the backup bucket.
Edit: Am I correct in realizing that the CDN is only needed for the uploads bucket? The guide on this topic suggests a second CDN pull zone should be created for the backup bucket. If that’s wrong, perhaps the guide should be updated, @Bill.
Apparently it only needs the backup bucket name. From the name it can build the backup S3 URL, since it will be the same as the upload URL except for the bucket name. That’s why you don’t have to define a separate S3 backup URL. This of course assumes that both buckets are on the same S3 service.
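To illustrate with placeholder endpoint and bucket names (and assuming path-style URLs for simplicity): both buckets share the one endpoint, so only the bucket name changes between the upload and backup URLs.

```yaml
## Same endpoint for both buckets (placeholder hostname)
DISCOURSE_S3_ENDPOINT: https://s3.us-west-002.backblazeb2.com
## Uploads land under  https://s3.us-west-002.backblazeb2.com/my-uploads-bucket/...
DISCOURSE_S3_BUCKET: my-uploads-bucket
## Backups land under  https://s3.us-west-002.backblazeb2.com/my-backups-bucket/...
DISCOURSE_S3_BACKUP_BUCKET: my-backups-bucket
```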
But if the backup bucket is private, how would the CDN access it? I’m new to CDNs and may be missing something, but I suspect that Discourse doesn’t use the CDN at all to back up.
They do. I went to my backup bucket and I can see the backup uploaded there. It’s private, but Discourse can access it. You can set up permissions on the bucket so that only your site, or any HTTPS source, can access it.
So I checked BunnyCDN, and I can verify that the backup did not go through the CDN; the traffic shows 0 KB. The backup pull zone’s CDN URL is different from the upload one, so it seems Discourse isn’t using it. However, I can verify that the backups are being uploaded from Discourse to Backblaze B2.
I would eliminate everything from ‘On your BunnyCDN dashboard, you should create a second pull zone’ up through the paragraph ending with ‘“standard tier 10$/TB” that I used for my uploads bucket.’
Also, it seems you do need a second CDN pull zone, but it’s not for the backup. I was confused about this, and couldn’t get offsite uploads working until I correctly set up one pull zone for sending uploads to Backblaze (as you outlined) and a second pull zone for pulling assets back from Backblaze. See my question about that and the response I got here for more info.
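If it helps anyone later, here is roughly where the two hostnames end up in the env section. This is just a sketch with placeholder Bunny pull zone hostnames, on the reading that one pull zone sits in front of the site and the other in front of the uploads bucket:

```yaml
env:
  ## Pull zone in front of the site itself (placeholder hostname)
  DISCOURSE_CDN_URL: https://site-pullzone.b-cdn.net
  ## Pull zone that serves uploaded assets out of the S3 bucket (placeholder hostname)
  DISCOURSE_S3_CDN_URL: https://uploads-pullzone.b-cdn.net
```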
It looks like I don’t have the Discourse permissions to edit the original thread anymore; they probably expired after a set time, or I edited it too many times. I can’t get to the edit area.
Hey, just want to hop in and say that Bunny now has a partnership with Backblaze too! The transfer from Backblaze to Bunny is now completely free, so, apart from the extra security, you can just transfer the data to Bunny without passing through Cloudflare!
Is this step overkill? I guess a better question might be… how significant are the benefits of taking this extra step if you already have Backblaze set up for storage and Bunny as the CDN? Any insights on this would be super helpful for me.