@Pfaffman split out this Google Cloud info, which doesn’t necessarily fit in the OP, but should be saved for those having trouble with Google Cloud.
Hi,
Thanks for the tip, but it didn’t work for me until I changed the role from Storage Legacy Object Owner to Storage Legacy Bucket Owner.
It’s spelled out in the tooltip when selecting the role:
- **Storage Legacy Object Owner**: Read/write access to existing objects without listing.
- **Storage Legacy Bucket Owner**: Read and write access to existing buckets with object listing/creation/deletion.
Now it works, including listing, which enables automatic backups! Hooray!
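For reference, here is a sketch of granting that role on the backup bucket with gsutil (the service-account email, project, and bucket name are placeholders):

```bash
# Grant the service account the Storage Legacy Bucket Owner role on the
# backup bucket so listing works. All names below are placeholders.
gsutil iam ch \
  serviceAccount:discourse@my-project.iam.gserviceaccount.com:roles/storage.legacyBucketOwner \
  gs://my-backup-bucket
```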
> About using S3 on Google Buckets:
> Since you can’t list files, you won’t be able to list backups and automatic backups will fail, so we don’t recommend using it for backups. However, there might be a solution in this reply.
As I mentioned here:
https://meta.discourse.org/t/using-object-storage-for-uploads-s3-clones/148916/334
I can confirm that listing works and automatic backups work when using a service account with the Storage Legacy Bucket Owner role on the bucket.
Be aware that using the S3 settings with a Google bucket means you can only select a region whose name is the same on Amazon as on Google.
I find it silly that you have to choose from a dropdown menu with backend validation (I tried messing with the API, without success) instead of typing it in.
This means you can’t use a bucket in Europe, for instance, since the prefix is EU on Amazon but EUROPE on Google; nor can you use multi-region buckets.
AWS:

| Region Name | Code |
|---|---|
| US East (Ohio) | us-east-2 |
| US East (N. Virginia) | us-east-1 |
| US West (N. California) | us-west-1 |
| US West (Oregon) | us-west-2 |
| Africa (Cape Town) | af-south-1 |
| Asia Pacific (Hong Kong) | ap-east-1 |
| Asia Pacific (Jakarta) | ap-southeast-3 |
| Asia Pacific (Mumbai) | ap-south-1 |
| Asia Pacific (Osaka) | ap-northeast-3 |
| Asia Pacific (Seoul) | ap-northeast-2 |
| Asia Pacific (Singapore) | ap-southeast-1 |
| Asia Pacific (Sydney) | ap-southeast-2 |
| Asia Pacific (Tokyo) | ap-northeast-1 |
| Canada (Central) | ca-central-1 |
| China (Beijing) | cn-north-1 |
| China (Ningxia) | cn-northwest-1 |
| Europe (Frankfurt) | eu-central-1 |
| Europe (Ireland) | eu-west-1 |
| Europe (London) | eu-west-2 |
| Europe (Milan) | eu-south-1 |
| Europe (Paris) | eu-west-3 |
| Europe (Stockholm) | eu-north-1 |
| Middle East (Bahrain) | me-south-1 |
| South America (São Paulo) | sa-east-1 |
Google:

| Continent | Region Name | Region Description | Low CO2 |
|---|---|---|---|
| North America | NORTHAMERICA-NORTHEAST1 | Montréal | Yes |
| North America | NORTHAMERICA-NORTHEAST2 | Toronto | Yes |
| North America | US-CENTRAL1 | Iowa | Yes |
| North America | US-EAST1 | South Carolina | |
| North America | US-EAST4 | Northern Virginia | |
| North America | US-EAST5 | Columbus | |
| North America | US-SOUTH1 | Dallas | |
| North America | US-WEST1 | Oregon | Yes |
| North America | US-WEST2 | Los Angeles | |
| North America | US-WEST3 | Salt Lake City | |
| North America | US-WEST4 | Las Vegas | |
| South America | SOUTHAMERICA-EAST1 | São Paulo | Yes |
| South America | SOUTHAMERICA-WEST1 | Santiago | |
| Europe | EUROPE-CENTRAL2 | Warsaw | |
| Europe | EUROPE-NORTH1 | Finland | Yes |
| Europe | EUROPE-SOUTHWEST1 | Madrid | Yes |
| Europe | EUROPE-WEST1 | Belgium | Yes |
| Europe | EUROPE-WEST2 | London | |
| Europe | EUROPE-WEST3 | Frankfurt | |
| Europe | EUROPE-WEST4 | Netherlands | |
| Europe | EUROPE-WEST6 | Zürich | Yes |
| Europe | EUROPE-WEST8 | Milan | |
| Europe | EUROPE-WEST9 | Paris | Yes |
| Asia | ASIA-EAST1 | Taiwan | |
| Asia | ASIA-EAST2 | Hong Kong | |
| Asia | ASIA-NORTHEAST1 | Tokyo | |
| Asia | ASIA-NORTHEAST2 | Osaka | |
| Asia | ASIA-NORTHEAST3 | Seoul | |
| Asia | ASIA-SOUTH1 | Mumbai | |
| Asia | ASIA-SOUTH2 | Delhi | |
| Indonesia | ASIA-SOUTHEAST1 | Singapore | |
| Indonesia | ASIA-SOUTHEAST2 | Jakarta | |
| Australia | AUSTRALIA-SOUTHEAST1 | Sydney | |
| Australia | AUSTRALIA-SOUTHEAST2 | Melbourne | |
I also find it silly to have to set these options in the Files settings. I didn’t use S3 to upload files; I only use it for backups. You are required to use a different bucket for uploads and backups, but there is only one place to set the region, which is in the Files settings.
I hope this saves somebody else time figuring this out.
PS: I debugged it using https://discourse.example.com/logs/
…
`Failed to list backups from S3: The specified location constraint is not valid.` → region problem
…
`Failed to list backups from S3: Access denied.` → Storage Legacy Object Owner instead of Storage Legacy Bucket Owner
**Falco** (July 20, 2022):
If you set up using the ENV variables as described in the OP, and set `DISCOURSE_S3_ENDPOINT` as recommended, then `DISCOURSE_S3_REGION` is ignored, making this a non-issue.
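As a sketch, the relevant variables (standard Discourse global settings; all values below are placeholders, with HMAC credentials generated for the service account) might look like:

```bash
# Sketch: point Discourse's S3-compatible storage at Google Cloud Storage.
# All values are placeholders; the keys are HMAC credentials created for
# the service account in the Cloud Storage settings.
export DISCOURSE_USE_S3=true
export DISCOURSE_S3_ENDPOINT=https://storage.googleapis.com
export DISCOURSE_S3_ACCESS_KEY_ID=GOOG1EXAMPLEKEY
export DISCOURSE_S3_SECRET_ACCESS_KEY=exampleSecret
export DISCOURSE_S3_BUCKET=my-uploads-bucket
export DISCOURSE_S3_BACKUP_BUCKET=my-backup-bucket
export DISCOURSE_BACKUP_LOCATION=s3
# DISCOURSE_S3_REGION can be set to any valid value; with an endpoint set it is ignored.
```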
Thanks,
The thing is that I use the Bitnami VM one-click install from the Google Cloud Marketplace.
It’s probably possible to customize the environment variables, but it’s not straightforward.
Setting the endpoint in the UI doesn’t cause the region to be ignored.
Thanks anyway.
Thank you very much!
Indeed, I forgot to add the snippet.
Unfortunately I get the following error:
`Aws::S3::Errors::InvalidArgument: Invalid argument.`
That is very consistent with the error I got while using the web UI, but it’s not much to work with to solve the issue…
I found this thread that suggests there might be an incompatibility with Google Storage compared to Amazon S3.
Could this be broken for Google Storage?
Complete stack trace when running the task manually:
```text
root@discourse-2-app:/var/www/discourse# sudo -E -u discourse bundle exec rake s3:upload_assets --trace
** Invoke s3:upload_assets (first_time)
** Invoke environment (first_time)
** Execute environment
** Invoke s3:ensure_cors_rules (first_time)
** Invoke environment
** Execute s3:ensure_cors_rules
Installing CORS rules...
skipping
** Execute s3:upload_assets
Uploading: assets/docker-manager-app-ecd2975f42c4096057a046c086d6a43905c8a18442900d5293ae9a3489422bb0.js
rake aborted!
Aws::S3::Errors::InvalidArgument: Invalid argument.
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/raise_response_errors.rb:17:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/sse_cpk.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/dualstack.rb:27:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/plugins/accelerate.rb:56:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/checksum_algorithm.rb:111:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/jsonvalue_converter.rb:22:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/idempotency_token.rb:19:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/param_converter.rb:26:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/request_callback.rb:71:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/aws-sdk-core/plugins/response_paging.rb:12:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/plugins/response_target.rb:24:in `call'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-core-3.130.2/lib/seahorse/client/request.rb:72:in `send_request'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/client.rb:12369:in `put_object'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/aws-sdk-s3-1.114.0/lib/aws-sdk-s3/object.rb:1472:in `put'
/var/www/discourse/lib/s3_helper.rb:75:in `upload'
/var/www/discourse/lib/tasks/s3.rake:37:in `block in upload'
/var/www/discourse/lib/tasks/s3.rake:36:in `open'
/var/www/discourse/lib/tasks/s3.rake:36:in `upload'
/var/www/discourse/lib/tasks/s3.rake:192:in `block (2 levels) in <main>'
/var/www/discourse/lib/tasks/s3.rake:191:in `each'
/var/www/discourse/lib/tasks/s3.rake:191:in `block in <main>'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `block in execute'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `each'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:281:in `execute'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:219:in `block in invoke_with_call_chain'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `synchronize'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:199:in `invoke_with_call_chain'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/task.rb:188:in `invoke'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:160:in `invoke_task'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block (2 levels) in top_level'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `each'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:116:in `block in top_level'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:125:in `run_with_threads'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:110:in `top_level'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:83:in `block in run'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:186:in `standard_exception_handling'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/lib/rake/application.rb:80:in `run'
/var/www/discourse/vendor/bundle/ruby/2.7.0/gems/rake-13.0.6/exe/rake:27:in `<top (required)>'
/var/www/discourse/vendor/bundle/ruby/2.7.0/bin/rake:25:in `load'
/var/www/discourse/vendor/bundle/ruby/2.7.0/bin/rake:25:in `<top (required)>'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli/exec.rb:58:in `load'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli/exec.rb:58:in `kernel_load'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli/exec.rb:23:in `run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli.rb:485:in `exec'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/vendor/thor/lib/thor/command.rb:27:in `run'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/vendor/thor/lib/thor/invocation.rb:127:in `invoke_command'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/vendor/thor/lib/thor.rb:392:in `dispatch'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli.rb:31:in `dispatch'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/vendor/thor/lib/thor/base.rb:485:in `start'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/cli.rb:25:in `start'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/exe/bundle:48:in `block in <top (required)>'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/lib/bundler/friendly_errors.rb:120:in `with_friendly_errors'
/usr/local/lib/ruby/gems/2.7.0/gems/bundler-2.3.20/exe/bundle:36:in `<top (required)>'
/usr/local/bin/bundle:25:in `load'
/usr/local/bin/bundle:25:in `<main>'
Tasks: TOP => s3:upload_assets
```
PS: I’m not considering changing buckets, but I wonder what would happen to the images previously uploaded to the VM disk.
EDIT (SOLVED):
@gerhard @Falco
I found what the problem was by enabling `http_wire_trace`.
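For anyone wanting to reproduce this kind of trace, here is a sketch of enabling wire tracing on an AWS SDK client from the Discourse directory (`http_wire_trace` is a standard aws-sdk-ruby client option; the endpoint, region, credential lookup, and bucket name below are all placeholder assumptions):

```bash
cd /var/www/discourse
sudo -E -u discourse bundle exec rails runner '
  # Sketch: build an S3 client that dumps every HTTP request and response
  # to stdout. All values below are placeholders.
  require "logger"
  client = Aws::S3::Client.new(
    endpoint: "https://storage.googleapis.com",
    region: "us-east-1",
    access_key_id: ENV["DISCOURSE_S3_ACCESS_KEY_ID"],
    secret_access_key: ENV["DISCOURSE_S3_SECRET_ACCESS_KEY"],
    http_wire_trace: true,
    logger: Logger.new($stdout)
  )
  client.list_objects_v2(bucket: "my-backup-bucket")
'
```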
The InvalidArgument response from the Google APIs explains it:
`Cannot insert legacy ACL for an object when uniform bucket-level access is enabled.` Read more at Uniform bucket-level access | Cloud Storage | Google Cloud.
I enabled fine-grained ACLs on the bucket instead of uniform bucket-level access, because the header set during upload specifies that the object is public. (I previously had uniform access, with the whole bucket set to public.)
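If you hit the same error, one way to switch the uploads bucket to fine-grained ACLs is sketched below (the bucket name is a placeholder; note that Google only lets you disable uniform bucket-level access within 90 days of enabling it):

```bash
# Switch the uploads bucket from uniform bucket-level access to
# fine-grained (legacy) ACLs. Bucket name is a placeholder.
gsutil uniformbucketlevelaccess set off gs://my-uploads-bucket

# Verify the current setting.
gsutil uniformbucketlevelaccess get gs://my-uploads-bucket
```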
I don’t have the right to update the OP, but I think it should mention that for Google buckets to work, the service account needs the Storage Legacy Bucket Owner role on the backup bucket, and the upload bucket needs to use fine-grained ACLs.
I hope this saves the community some time.
Thanks again to @Falco @pfaffman @gerhard @tuanpembual for the help here.