Just to be clear, though: Backblaze is not S3 compatible.
Just curious to know if this functionality is available to be tested, as my team and I are testing our Discourse setup prior to launch and are open to trying out such features. DO Spaces looks like an attractive deal if it can be implemented with Discourse without much hassle. Can anyone here shed some light on how Amazon S3 is implemented (hardcoded values, proprietary content, APIs, etc.)? I will definitely take a look at it.
Step 1:
Demonstrate in code how you would get the aws-s3 gem to talk to a different provider. I have not researched that yet, but that is the first blocker here.
This may not be trivial, considering Amazon are the ones maintaining that gem.
Azure has Blob Storage, which is not S3 API compatible but is a similar service…
EDIT: Sorry, I probably shouldn't be replying to a years-old post. Sorry to disrupt the thread.
I am late to this party. Any conclusion about the nature/volume of work involved?
This is totally unknown until that question is answered.
Maybe this will help? https://docs.minio.io/docs/how-to-use-aws-sdk-for-ruby-with-minio-server
Minio is S3-compatible storage, so the configuration should be similar for DO Spaces.
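For what it's worth, that doc boils down to pointing the stock aws-sdk gem at a non-AWS endpoint. A minimal sketch along those lines (the endpoint, credentials, and bucket name are placeholders, and the require line is for SDK v3):

require 'aws-sdk-s3' # SDK v3; older installs use require 'aws-sdk'

# Point the stock AWS SDK at a Minio server instead of AWS.
client = Aws::S3::Client.new(
  access_key_id: 'minio_access_key',     # placeholder
  secret_access_key: 'minio_secret_key', # placeholder
  region: 'us-east-1',                   # required by the SDK, ignored by Minio
  endpoint: 'http://minio:9000',         # the non-AWS endpoint
  force_path_style: true                 # Minio serves path-style URLs by default
)

client.create_bucket(bucket: 'test-bucket')
client.put_object(bucket: 'test-bucket', key: 'hello.txt', body: 'hello')
puts client.list_objects_v2(bucket: 'test-bucket').contents.map(&:key)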
Trouble is, Amazon control the aws-sdk gem and don't appear to provide easy extensibility for which URLs you hit.
You don't need to make changes in Amazon's aws-sdk gem. As I understand it, this happens at a higher level (storage), where the S3 client is initialized; the changes are in the initialization params of the SDK, not in the gem itself.
At first glance, it looks like this is the place to add a custom endpoint: https://github.com/discourse/discourse/blob/4f28c71b5082d8194129f10084738775b36b8ed3/lib/s3_helper.rb#L152
Hmmm, but that place just lets you choose an "aws region". How would we enter a completely different HTTPS-based endpoint there?
https://github.com/aws/aws-sdk-ruby/issues/1616#issuecomment-329991391
Looks like we can add an s3_endpoint setting that defaults to AWS when empty?
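Roughly like this, perhaps (a sketch against lib/s3_helper.rb; the s3_endpoint setting does not exist yet, and the real s3_options method differs between Discourse versions):

def self.s3_options(obj)
  opts = { region: obj.s3_region }
  # Hypothetical s3_endpoint setting: when blank, no :endpoint key is set
  # and the SDK falls back to the default AWS endpoint for the region.
  if SiteSetting.s3_endpoint.present?
    opts[:endpoint] = SiteSetting.s3_endpoint
    opts[:force_path_style] = true # most S3 clones expect path-style requests
  end
  opts
end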
Hello!
We use Minio on our site as S3-compatible storage without any problems.
So to make it work, you need to:
Add your S3-compatible storage endpoint to s3_options (/lib/s3_helper.rb), and a force_path_style param depending on its settings:

opts = { region: obj.s3_region, endpoint: 'http://minio:9000', force_path_style: true }
And modify absolute_base_url (/app/models/site_setting.rb) to return this:
def self.absolute_base_url
  bucket = SiteSetting.enable_s3_uploads ? Discourse.store.s3_bucket_name : GlobalSetting.s3_bucket
  return "//files.you_site.tld/#{bucket}"
end
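As a quick sanity check of the two changes above, from the rails console (the hostname and bucket are the placeholders used in the snippets):

# With the patched method above in place:
SiteSetting.enable_s3_uploads = true
SiteSetting.absolute_base_url
# => "//files.you_site.tld/your-upload-bucket"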
Also, you need to make the appropriate settings for your nginx server.
It works fine, but there is one problem, particularly with Minio, that should be solved: update_lifecycle. Minio does not support this.
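One possible stopgap until proper support lands upstream (a sketch only; the aws-sdk surfaces the server's NotImplemented error code as Aws::S3::Errors::NotImplemented, so the call can be rescued rather than removed):

# Sketch: make the lifecycle update a no-op on backends that do not
# implement bucket lifecycle rules, instead of deleting the code.
def update_lifecycle(*args)
  # ... existing lifecycle logic from lib/s3_helper.rb ...
rescue Aws::S3::Errors::NotImplemented
  Rails.logger.warn("Storage backend does not support lifecycle rules, skipping")
end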
So, any ideas on how to port these settings to the original codebase?
Or how to add a script to reapply this code after updates?
This is PR-welcome; the correct way is to have a PR that adds the feature.
What settings do I need to change in the nginx config? For Minio, Discourse, or both?
Just add something like this to your nginx.conf file:
upstream storage {
    server minio:9000;
}

server {
    listen 80;
    server_name files.you_site.tld;
    access_log off;
    return 301 https://files.you_site.tld$request_uri;
}

server {
    listen 443 ssl http2;
    server_name files.you_site.tld;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES256-SHA;
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve secp384r1:prime256v1;
    ssl_certificate /shared/ssl/you_site.tld.cer;
    ssl_certificate_key /shared/ssl/you_site.tld.key;
    ssl_session_tickets off;
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:1m;
    # Remember the certificate for two years and automatically connect over HTTPS for this domain
    add_header Strict-Transport-Security 'max-age=63072000';

    access_log off;
    error_log /var/log/nginx/error.log;

    if ($http_host != files.you_site.tld) {
        rewrite (.*) https://files.you_site.tld$1 permanent;
    }

    location / {
        # The connperip/flood/bot zones and the "one" cache are defined
        # in Discourse's stock nginx configuration.
        limit_conn connperip 30;
        limit_req zone=flood burst=12 nodelay;
        limit_req zone=bot burst=200 nodelay;
        add_header X-Cache-Status $upstream_cache_status;
        add_header Referrer-Policy 'no-referrer-when-downgrade';
        add_header Strict-Transport-Security 'max-age=63072000';
        proxy_cache one;
        proxy_cache_revalidate on;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_valid 1m;
        proxy_ignore_headers Set-Cookie;
        proxy_set_header Host $http_host;
        proxy_pass http://storage;
    }
}
These settings are just for Minio; they don't overwrite any Discourse-specific nginx settings.
If you have "Let's Encrypt" enabled, you should also change the LE_WORKING_DIR var in Docker/templates/web.letsencrypt.ssl.template.yml to:
LE_WORKING_DIR="${LETSENCRYPT_DIR}" $$ENV_LETSENCRYPT_DIR/acme.sh --issue -d $$ENV_YOUR_SITE_HOSTNAME -d $$ENV_YOUR_SITE_STORAGE_HOSTNAME -k 4096 -w /var/www/discourse/public
if [ ! "$(cd $$ENV_LETSENCRYPT_DIR/$$ENV_YOUR_SITE_HOSTNAME && openssl verify -CAfile ca.cer fullchain.cer | grep "OK")" ]; then
    # Try to issue the cert again if something goes wrong
    LE_WORKING_DIR="${LETSENCRYPT_DIR}" $$ENV_LETSENCRYPT_DIR/acme.sh --issue -d $$ENV_YOUR_SITE_HOSTNAME -d $$ENV_YOUR_SITE_STORAGE_HOSTNAME -k 4096 --force -w /var/www/discourse/public
fi
So, does Discourse support DigitalOcean Spaces?
Soon…
Thanks @Falco, can't wait for this!
Hi Alexander, that's correct!
Minio works great, but for backups only. Did you ever get Minio to work for hosting image uploads?
Edit: For anyone in this thread who has used Minio in the past: have you ever managed to access Minio objects using virtual-host-style requests? e.g. http://bucketname.endpoint/original/1X/xyz.jpg
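In case it helps anyone who tries: on the client side, virtual-host-style addressing is just force_path_style left at false, and the server has to be told which domain to treat as the bucket suffix (for Minio, via its MINIO_DOMAIN environment variable, if I remember the docs correctly). A sketch with placeholder credentials and the hostname from earlier in the thread:

# Sketch: virtual-host-style requests against an S3-compatible endpoint.
# With force_path_style false (the SDK default), the bucket name is
# prepended to the endpoint host, e.g. http://bucketname.files.you_site.tld/...
client = Aws::S3::Client.new(
  access_key_id: 'key',                  # placeholder
  secret_access_key: 'secret',           # placeholder
  region: 'us-east-1',
  endpoint: 'http://files.you_site.tld', # MINIO_DOMAIN=files.you_site.tld on the server
  force_path_style: false
)
client.get_object(bucket: 'bucketname', key: 'original/1X/xyz.jpg')
# sends GET http://bucketname.files.you_site.tld/original/1X/xyz.jpg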