@Pfaffman split out this Google Cloud info, which doesn’t necessarily fit in the OP, but should be saved for those having trouble with Google Cloud.
Thanks for the tip, but it didn’t work for me until I changed the role from Storage Legacy Object Owner to Storage Legacy Bucket Owner.
The difference is spelled out in the tooltips shown while selecting the role:
Storage Legacy Object Owner
Read/write access to existing objects
Storage Legacy Bucket Owner
Read and write access to existing buckets with object
Now it works, including listing, which enables automatic backups. Hooray!
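For reference, granting that role on the backup bucket from the command line might look like the sketch below. The project, service-account address, and bucket name are placeholders, not values from this thread.

```shell
# Grant the Storage Legacy Bucket Owner role on the backup bucket to the
# service account whose HMAC keys Discourse uses.
# (Service account and bucket names below are placeholders.)
gsutil iam ch \
  serviceAccount:discourse-backup@my-project.iam.gserviceaccount.com:roles/storage.legacyBucketOwner \
  gs://my-discourse-backups
```

This needs gcloud/gsutil authenticated against the project that owns the bucket; the same grant can also be made in the Cloud Console UI, as described above.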
About using S3 with Google buckets:
Since you can’t list files, you won’t be able to list backups and automatic backups will fail, so we don’t recommend using it for backups. However, there might be a solution in this reply.
As I mentioned here:
I can confirm that listing works and automatic backups work when using a service account with the Storage Legacy Bucket Owner role on the bucket.
Be aware that using S3 with a Google bucket means you can only select a region whose name is the same on Amazon as on Google.
I find it silly that you have to choose from a dropdown menu with backend validation (I tried manipulating the API without success) instead of typing the region in.
This means you can’t use a bucket in Europe, for instance, since the region prefix is EU on Amazon but EUROPE on Google; nor can you use multi-region buckets.
US East (Ohio)
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Africa (Cape Town)
Asia Pacific (Hong Kong)
Asia Pacific (Jakarta)
Asia Pacific (Mumbai)
Asia Pacific (Osaka)
Asia Pacific (Seoul)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Middle East (Bahrain)
South America (São Paulo)
Salt Lake City
I also find it silly that these options have to be set in the Files settings. I don’t use S3 for file uploads, only for backups. You are required to use different buckets for uploads and backups, yet the only place to set the region is in the Files settings.
I hope this saves somebody else time figuring this out.
PS: I debugged it using these errors:
Failed to list backups from S3: The specified location constraint is not valid. → region problem
Failed to list backups from S3: Access denied. → Storage Legacy Object Owner role instead of Storage Legacy Bucket Owner
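The two errors above map cleanly onto their causes; a tiny, purely illustrative helper (the mapping and function name are mine, not part of Discourse) makes the triage explicit:

```python
# Hypothetical triage helper mapping the S3 error messages observed in
# this thread to their likely Google Cloud Storage causes.
CAUSES = {
    "location constraint is not valid":
        "region mismatch: pick an AWS region name that also exists on Google",
    "access denied":
        "role problem: Storage Legacy Object Owner instead of Storage Legacy Bucket Owner",
}

def diagnose(message: str) -> str:
    """Return the likely cause for a 'Failed to list backups' error."""
    lowered = message.lower()
    for fragment, cause in CAUSES.items():
        if fragment in lowered:
            return cause
    return "unknown: enable http_wire_trace to inspect the raw response"

print(diagnose("Failed to list backups from S3: Access denied."))
```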
July 20, 2022, 5:17pm
If you set up using the ENV variables as described in the OP and set
DISCOURSE_S3_ENDPOINT as recommended, DISCOURSE_S3_REGION is ignored, making this a non-issue.
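For anyone on the standard container install, the relevant `env` entries in `containers/app.yml` might look like the sketch below. Bucket names and keys are placeholders; the `storage.googleapis.com` interoperability endpoint is an assumption based on Google’s S3-compatible XML API.

```yaml
# Sketch of the env section of containers/app.yml for S3-compatible
# storage on Google Cloud (all values are placeholders).
env:
  DISCOURSE_USE_S3: true
  DISCOURSE_S3_ENDPOINT: https://storage.googleapis.com
  DISCOURSE_S3_ACCESS_KEY_ID: "<HMAC access key>"
  DISCOURSE_S3_SECRET_ACCESS_KEY: "<HMAC secret>"
  DISCOURSE_S3_BUCKET: my-discourse-uploads
  DISCOURSE_S3_BACKUP_BUCKET: my-discourse-backups
  DISCOURSE_BACKUP_LOCATION: s3
```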
The thing is that I use the Bitnami one-click VM install from the Google Cloud Marketplace.
It is probably possible to customize the environment variables there, but it isn’t straightforward.
Also, setting the endpoint in the UI does not cause the region to be ignored.
Thank you very much!
Indeed, I forgot to add the snippet.
Unfortunately, I now get the following error:
Aws::S3::Errors::InvalidArgument: Invalid argument.
That is consistent with the error I got while using the web UI, but it doesn’t give much to work with to solve the issue…
I found this thread, which suggests that there might be an incompatibility between Google storage and Amazon S3.
Could this be broken for Google storage?
Complete stack when running the task manually:
root@discourse-2-app:/var/www/discourse# sudo -E -u discourse bundle exec rake s3:upload_assets --trace
** Invoke s3:upload_assets (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke s3:ensure_cors_rules (first_time)
** Invoke environment
** Execute s3:ensure_cors_rules
Installing CORS rules...
** Execute s3:upload_assets
Aws::S3::Errors::InvalidArgument: Invalid argument.
/var/www/discourse/lib/tasks/s3.rake:37:in `block in upload'
/var/www/discourse/lib/tasks/s3.rake:192:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/s3.rake:191:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `block in execute'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block in top_level'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:83:in `block in run'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/var/www/discourse/vendor/bundle/ruby/2.7.0/bin/rake:25:in `<top (required)>'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/exe/bundle:48:in `block in <top (required)>'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/exe/bundle:36:in `<top (required)>'
Tasks: TOP => s3:upload_assets
PS: I’m not considering changing buckets, but I wonder what would happen to the previously uploaded images on the VM disk.
I found what the problem was by enabling http_wire_trace.
The Invalid argument response from the Google APIs explains it:
Cannot insert legacy ACL for an object when uniform bucket-level access is enabled. Read more at Uniform bucket-level access | Cloud Storage | Google Cloud
I enabled fine-grained ACLs on the bucket instead of uniform bucket-level access, because the header set during upload specifies that the object is public. (I previously had uniform access enabled and had made the whole bucket public.)
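Enabling the wire trace can be done by building an S3 client by hand, e.g. from the Rails console. A minimal sketch using the aws-sdk-s3 gem’s `http_wire_trace` client option (credentials, region, and bucket are placeholders):

```ruby
require "aws-sdk-s3"

# Sketch: an S3 client pointed at Google's interoperability endpoint with
# wire tracing enabled, so every raw HTTP request/response is logged.
# Credentials and names below are placeholders.
client = Aws::S3::Client.new(
  endpoint: "https://storage.googleapis.com",
  region: "us-east-1",
  access_key_id: "HMAC_KEY",
  secret_access_key: "HMAC_SECRET",
  http_wire_trace: true # dump the raw HTTP exchange for debugging
)
client.list_objects_v2(bucket: "my-discourse-backups")
```

The trace output is what surfaced the "Cannot insert legacy ACL" response quoted above.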
I don’t have the rights to update the OP, but I think it should mention that, for Google buckets to work, the service account needs the Storage Legacy Bucket Owner role on the backup bucket, and the upload bucket needs to use fine-grained ACLs.
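Switching the upload bucket to fine-grained ACLs can also be done from the command line; a sketch (the bucket name is a placeholder):

```shell
# Disable uniform bucket-level access so per-object (fine-grained) ACLs
# are allowed again on the upload bucket. (Bucket name is a placeholder.)
gsutil uniformbucketlevelaccess set off gs://my-discourse-uploads
```

Note that Google only allows reverting to fine-grained ACLs within a limited window after uniform access was enabled, so it is easier to pick fine-grained when creating the bucket.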
I hope this saves the community some time.
Thanks again to @Falco @pfaffman @gerhard @tuanpembual for the help here.