Thanks again for your answer!
The warning about the Google bucket was about using it for backups, because it couldn’t list the files.
I already posted on how to fix this:
Storage Legacy Bucket Owner
Read and write access to existing buckets with object listing/creation/deletion.
Now it works, including listing, which enables the automatic backup! Hooray!
Are you suggesting I update the OP with that information? I don’t believe I can.
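For anyone else hitting this: granting that role to the service account Discourse uses can be done with gsutil. A sketch, assuming placeholder service account and bucket names (substitute your own):

```shell
# Grant the Storage Legacy Bucket Owner role on the backup bucket
# (the service account and bucket names below are placeholders)
gsutil iam ch \
  serviceAccount:discourse@my-project.iam.gserviceaccount.com:roles/storage.legacyBucketOwner \
  gs://my-discourse-backups
```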
Again, the backup works, but the upload of the assets doesn’t. According to the OP, this was supposed to work even without the
Storage Legacy Bucket Owner rights.
I think there might be a regression here. What do you think?
There may be a regression. Are you sure you added the custom
that only Google needs?
Oh. Well, I thought that
That was what I was suggesting. It’s a wiki, so I’m pretty sure you can, though I’m not 100% sure what trust levels are involved.
Thanks for your answer. Yes, I did include it:
Note that I tried with and without the subfolder.
@tuanpembual initially did, but referred to Storage Legacy Object Owner instead of Storage Legacy Bucket Owner.
I’m only a “basic user”; that must be the reason I can’t edit it.
I will try to summarize the answers to my questions:
Do the Web UI and ENV variable collide?
When are the assets supposed to be uploaded to the bucket?
By adding this snippet to the app.yml in the hooks section, they will be uploaded after_assets_precompile (during the app rebuild).
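For reference, the hook in question looks roughly like this in app.yml; treat it as a sketch to check against the current guide rather than verbatim config:

```yaml
hooks:
  after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets
          - sudo -E -u discourse bundle exec rake s3:expire_missing_assets
```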
How can I debug this? I don’t see any errors in the logs.
By running:

```shell
sudo ./launcher enter app
sudo -E -u discourse bundle exec rake s3:upload_assets --trace
```
Is it possible to set a subfolder of a bucket in the config?
Do I really need to use separate buckets for uploads and backups?
No, you don’t, but it’s usually the easiest way to set up. Essentially you need to either use two different buckets or a prefix for the backup bucket. For example, the following combinations will work:
You can use prefixes to organize the data that you store in Amazon S3 buckets. A prefix is a string of characters at the beginning of the object key name. A prefix can be any length, subject to the maximum length of the object key name (1,024 bytes). You can think of prefixes as a way to organize your data in a similar way to directories. However, prefixes are not directories.
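As a concrete sketch of the prefix option (same bucket, with backups under a prefix), the env section of app.yml might look like this; the bucket name is a placeholder:

```yaml
env:
  DISCOURSE_S3_BUCKET: my-discourse-bucket
  DISCOURSE_S3_BACKUP_BUCKET: my-discourse-bucket/backups
```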
Once this works, are the previously uploaded images transferred to the bucket? If I rebake, what will the URL of the previously uploaded images look like?
I’ve enabled S3 uploads in my Discourse instance (which has been going for a while); what do I do with the existing local uploads?
To migrate your existing uploads to S3, you can run a couple of
rake tasks. To perform this, you need SSH access, root permissions, and to have entered the discourse app (as per Administrative Bulk Operations). Oh, and you have to set some environment variables in app.yml. Not for the faint-hearted.
Once you have done all that you are ready for the rake tasks:
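For the record, the tasks referred to are, as far as I know, the following (run inside the container as the discourse user; verify against the current guide before running):

```shell
# Move existing local uploads into the S3 bucket
rake uploads:migrate_to_s3
# Rebake posts so their image URLs point at the bucket
rake posts:rebake
```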
Once these are done (and the uploads are working well) you no longer need to include uploads in your backups. And as a bonus, you will be able to
Restore a backup from command line in the event of catastrophe (just keep a copy of app.yml somewhere).
Hi, I’ve been looking at object storage providers, and I saw in the OP that for some of them you’ll need “to skip CORS and configure it manually”. I’m not familiar with CORS or configuring it, so should I steer clear of the ones needing this setting, or is it simple to set up?
If you need to ask (as I would) then I would go with another one.
Just to confirm: once I’ve done the
steps, I can remove the local uploads folder in its entirety, yes?
@mcwumbly. This was very easy to find when I could search for “S3 clone”. I was unable to find it just now. Was there something wrong with that title? Is there a search that will find it? Could we add a (I can’t remember what it’s called) thing so it can auto link on some words like standard install does (but I can’t think of what words to use).
As someone who links that topic multiple times a week, I kinda agree.
Maybe adding “s3 clones” to the OP body helps the search-fu?
I’ve found “S3 compatible” more common in the wild, which is why I changed it during a sweep of updating docs titles in general, for example:
MinIO | AWS S3 Compatible Object Storage
I think the suggestion to stick other search terms in the OP body makes sense though. (I just added it in this one).
Seems fine. I guess we’ll have to change with the times.
Yes. It’s really not so hard. You can do it,
Hello, has anyone managed to get Contabo Object Storage to work for S3-compatible uploads? It seems that when uploading, it prefixes the bucket name in the URL.
For example if you have a bucket called community it creates a URL like
I have seen this behavior in Duplicati, for example, but there it can be disabled so that the bucket name is not prefixed in the domain.
I would appreciate it if someone has a solution for using this object storage, because it has very good prices.
I have run several tests configuring the domain as a CNAME in Cloudflare to provide the SSL, but for
community.cdn.midominio.com the SSL certificate is no longer covered because they use a wildcard, and if I deactivate the Cloudflare proxy it complains that the certificate is not correct.
Have you tried setting the S3 CDN setting to
https://community.eu2.contabostorage.com? IMO that will work.
That doesn’t exist; it’s the Contabo endpoint.
Yes, but what will the final URL of an example file in the bucket be?
He means that if you upload a file to the bucket yourself (using whatever tool you can get to upload a file) what url would you use to access the file?
The structure is
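On the bucket-name-in-the-URL question above: S3-compatible providers generally expose objects in one of two URL shapes, and the behavior described for Contabo sounds like path-style addressing. A minimal sketch of the two shapes (hostnames are illustrative, not Contabo’s actual scheme, which may add tenant IDs):

```python
def object_url(endpoint: str, bucket: str, key: str, path_style: bool = True) -> str:
    """Build an S3-style object URL.

    path_style=True  -> https://endpoint/bucket/key   (bucket in the path)
    path_style=False -> https://bucket.endpoint/key   (bucket as a subdomain)
    """
    if path_style:
        return f"https://{endpoint}/{bucket}/{key}"
    return f"https://{bucket}.{endpoint}/{key}"

# Illustrative only
print(object_url("eu2.contabostorage.com", "community", "images/logo.png"))
# -> https://eu2.contabostorage.com/community/images/logo.png
print(object_url("eu2.contabostorage.com", "community", "images/logo.png", path_style=False))
# -> https://community.eu2.contabostorage.com/images/logo.png
```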