Error when rebuilding using MinIO as object store

The rebuild fails with the following output when using MinIO as the object store:

I, [2022-09-01T00:37:48.192311 #1]  INFO -- : > cd /var/www/discourse && sudo -E -u discourse bundle exec rake s3:upload_assets
rake aborted!
Aws::S3::Errors::BadRequest: An error occurred when parsing the HTTP request PUT at '/'

I have configured multiple domains for MinIO:

minio.example.com (the MinIO web console)
s3.example.com (MinIO's S3 API)

I have also added a domain for the bucket name:
bucket.s3.example.com (MinIO's S3 API, bucket-specific)

All domains have valid TLS certificates, and connecting to the account with Cyberduck via either s3.example.com or bucket.s3.example.com works for uploading and downloading files.

My app.yml s3 settings

  DISCOURSE_USE_S3: true
  DISCOURSE_S3_REGION: anything
  DISCOURSE_S3_ENDPOINT: https://s3.example.com
  DISCOURSE_S3_ACCESS_KEY_ID: *****
  DISCOURSE_S3_SECRET_ACCESS_KEY: ********
  #DISCOURSE_S3_CDN_URL:
  DISCOURSE_S3_BUCKET: bucket
  DISCOURSE_S3_BACKUP_BUCKET: bucket/backups
  DISCOURSE_BACKUP_LOCATION: S3

hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git

  after_assets_precompile:
    - exec:
        cd: $home
        cmd:
          - sudo -E -u discourse bundle exec rake s3:upload_assets

I searched related issues but haven't solved it; everything works fine if I use Vultr object storage. So is it that MinIO and Discourse don't work well together? But I've seen people using MinIO successfully. I'm asking everyone here, and I believe this problem can be solved soon.


Did you follow Using Object Storage for Uploads (S3 & Clones)?


Yes, I read it many times, and I also followed your Basic How-To for Using MinIO storage server run by you for your Discourse Instance.

The problem remains unsolved. What puzzles me is that connecting to the MinIO account over Cyberduck's Amazon S3 transfer protocol works fine, so my MinIO settings appear to be correct.


Have you confirmed that your MinIO configuration works in general with other mechanisms, such as the MinIO client (mcli) or s3cmd? And that you're using the proper URLs and configuration against MinIO?

My suggestion is to make sure everything works with the MinIO and s3cmd command-line clients first. I've never heard of this "Cyberduck" client (for good reason: it's Windows and Mac, and I'm a Linux guy), and I can't confirm it's compliant with MinIO, since it advertises "AWS S3" and is likely designed for the full S3 API at Amazon, not S3-compliant/compatible implementations. Set up the MinIO client (mcli) on the command line at or near the box you're working with, and then try to push a file manually to your buckets.
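The manual check suggested above might look like the following sketch; the alias name myminio, the bucket name, and the credential placeholders are stand-ins for the values in this thread:

```shell
# Sketch: verify the MinIO endpoint independently of Discourse.
# Replace the endpoint, keys, and bucket name with your own values.
mcli alias set myminio https://s3.example.com ACCESS_KEY SECRET_KEY

# Create a test bucket (ignore the error if it already exists)
mcli mb myminio/bucket

# Push a file manually and list the bucket to confirm it arrived
echo "hello" > /tmp/minio-test.txt
mcli cp /tmp/minio-test.txt myminio/bucket/
mcli ls myminio/bucket
```

If this works but Discourse still fails, the problem is likely in the URL/addressing style rather than in credentials or connectivity.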

Additionally, keep in mind that DISCOURSE_S3_BACKUP_BUCKET with MinIO is designed to be its own bucket, not a subpath within an existing bucket (to my knowledge). That may break things in your current setup as well; it's why the example in the "How to" you linked uses a separate bucket.

What I don't have here is info on the specific request that was actually made: the URL path, etc., used by the system when it hit the BadRequest. It looks like that's because the output is only INFO-level logging. Is there any way to get debug-level logging during the rake process, @pfaffman (or others who are more familiar with the Discourse side of things)?

ALSO, make sure you pass DISCOURSE_S3_INSTALL_CORS_RULE: false in your Discourse configuration; if the app rebuilder/baker tries to push CORS rules, it will result in an error.
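In app.yml terms, that advice would look like this sketch, placed alongside the other DISCOURSE_S3_* variables in the env section shown earlier:

```yaml
env:
  # Prevent the rebuild from trying to push CORS rules to MinIO,
  # which MinIO rejects with an error
  DISCOURSE_S3_INSTALL_CORS_RULE: false
```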


I created a new bucket using mcli and manually uploaded a file to it.

I can see the uploaded file in the bucket. Does this mean I installed MinIO with the correct steps? I installed MinIO using Docker Compose; here is my docker-compose.yml:


version: '3'

services:
  minio:
    image: minio/minio:latest
    container_name: minio
    restart: always
    ports:
      - "9000:9000"
      - "9001:9001"
    
    volumes:
      - ./:/data

    environment:
      MINIO_ROOT_USER: ***** 
      MINIO_ROOT_PASSWORD: *****
      MINIO_SERVER_URL: https://s3.example.com
      MINIO_BROWSER_REDIRECT_URL: https://minio.example.com/

    command: server --console-address ":9001" /data

volumes:
  minio:   # note: this named volume is declared but unused; the service bind-mounts ./ instead

Then I went to the web console, created two new buckets, and set the access policy for the object store to public.

I use Nginx proxy manager to forward minio.example.com to port 9001 and s3.example.com and bucket-name.example.com to port 9000

DISCOURSE_S3_BACKUP_BUCKET: I have tried using a separate bucket and configured domain-name forwarding to port 9000 for that bucket, but it doesn't work.

Forward what ports? 80/443 so HTTP/HTTPS works? That's all it needs; you should NEVER have to expose port 9000 separately. The separate bucket will have the same endpoint as s3.example.com; it's not something separate, so you're doing THAT configuration wrong. Don't forget also that in MinIO speak, if you're using path-style resolution you end up with s3.example.com/BUCKETNAME, while with DNS-style resolution (which you should use) it's BUCKET.s3.example.com; those are the URL endpoints you need to accept on the nginx side and forward to the internal port 9000. You don't need to configure that on the Discourse end, though; that needs to be configured on the MinIO side.

The MinIO client supports both path-style and DNS-style setups. To my knowledge, Discourse uses a DNS (virtual-hosted) URL mechanism for bucket identification, not path-style setups (feel free to correct me, Discourse devs). Therefore the 'default' behavior you're configuring is incorrect.
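To illustrate the difference between the two addressing styles, here is a standalone sketch (not Discourse's actual code; the endpoint and bucket names are the placeholders from this thread):

```python
def s3_object_url(endpoint: str, bucket: str, key: str, dns_style: bool = True) -> str:
    """Build an S3 object URL in either addressing style (illustrative only)."""
    scheme, host = endpoint.split("://", 1)
    if dns_style:
        # Virtual-hosted (DNS) style: the bucket becomes a subdomain.
        # This is what requires MINIO_DOMAIN on the MinIO side.
        return f"{scheme}://{bucket}.{host}/{key}"
    # Path style: the bucket is the first path segment
    return f"{scheme}://{host}/{bucket}/{key}"

print(s3_object_url("https://s3.example.com", "bucket", "a.png"))
# DNS style: https://bucket.s3.example.com/a.png
print(s3_object_url("https://s3.example.com", "bucket", "a.png", dns_style=False))
# path style: https://s3.example.com/bucket/a.png
```

The practical consequence: with DNS style, the proxy in front of MinIO must accept every bucket subdomain, not just the base API domain.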

Now, my MinIO is not dockerized, but to be compatible with Discourse you need to use DNS-style pathing, i.e. you need to add the environment variable MINIO_DOMAIN=BASEDOMAINHERE so that the DNS-style pathing Discourse wants to use works. In your example it would be MINIO_DOMAIN=s3.example.com, and your NGINX would need to be configured to pass the Host header to the backend on port 9000 (or wherever the base non-console server components run). You then need to make sure that NGINX accepts *.s3.example.com and forwards it properly to the MinIO container. This is part of MinIO federation setup, but for single-node instances with multiple bucket names on a base URL you need to make sure it's properly configured anyway if you want it to work with Discourse.
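For a plain nginx setup (as opposed to Nginx Proxy Manager), the Host-header forwarding described above might look like this hypothetical sketch; the certificate paths and upstream address are assumptions:

```nginx
server {
    listen 443 ssl;
    # Accept the base API domain and every bucket subdomain
    server_name s3.example.com *.s3.example.com;

    # A wildcard certificate covering *.s3.example.com is assumed
    ssl_certificate     /etc/ssl/s3.example.com/fullchain.pem;
    ssl_certificate_key /etc/ssl/s3.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:9000;
        # Pass the original Host header through so MinIO can resolve
        # bucket.s3.example.com to the right bucket (DNS style)
        proxy_set_header Host $host;
    }
}
```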

Unfortunately, though, this is where you have to start delving into the MinIO configuration itself. One of the prerequisites I specify in the document is a fully functional, properly configured MinIO instance, which is beyond the scope of Discourse's site. I believe your MinIO is not properly configured for DNS-style bucket resolution the way AWS S3 does it (bucket.s3.example.com, for instance), and as such it does not work.
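In the Docker Compose setup shown earlier in the thread, enabling DNS-style resolution would mean adding MINIO_DOMAIN to the environment block, e.g. (a sketch using the thread's placeholder domains):

```yaml
    environment:
      MINIO_ROOT_USER: "*****"
      MINIO_ROOT_PASSWORD: "*****"
      MINIO_SERVER_URL: https://s3.example.com
      MINIO_BROWSER_REDIRECT_URL: https://minio.example.com/
      # Enables DNS-style (virtual-hosted) bucket resolution,
      # i.e. bucket.s3.example.com instead of s3.example.com/bucket
      MINIO_DOMAIN: s3.example.com
```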

Note that I run the Discourse instance for the Lubuntu Project (lubuntu.me), a variant of Ubuntu that uses LXQt, with MinIO configured for DNS-style bucket URL resolution in order to make it work properly with Discourse; otherwise a request for BUCKET.basedomain.example.com would fail.

Fun fact: I even state that you need your MinIO properly configured for DNS-style paths. If you did not include MINIO_DOMAIN during MinIO setup, it won't do DNS-style paths, which Discourse needs here, as per item 3 in the caveats section I wrote:


Hi bro, setting MINIO_DOMAIN makes the above error go away, but a new one appears:

Aws::S3::Errors::MalformedXML: The XML you provided was not well-formed or did not validate against our published schema.

I feel I am about to succeed, because Discourse can now correctly reach my MinIO: if I delete all the MinIO buckets and rebuild Discourse, it reports that the specified bucket does not exist.

Via the article Resolve AWS Config MalformedXML errors, I think this error is caused by bucket permissions. Through the tutorial Setting up file and image uploads to S3, it seems that I need to add a Bucket Policy containing:


               "s3:PutObject",
               "s3:PutObjectAcl",
               "s3:PutObjectVersionAcl",
               ....

But MinIO does not support ACLs; it prompts unsupported action 's3:PutObjectAcl'.

Maybe I need to use an older version of MinIO, which might make things easier :sweat_smile:


The problem is solved: do not add the object storage variables via app.yml (otherwise the MalformedXML error occurs); instead, add the S3 parameters in the site settings. The MINIO_DOMAIN variable needs to be added when installing MinIO (I am using a single-node deployment).

Thanks @teward for your help

Now I can upload files and back up using MinIO.


No, MinIO does not support PutObjectAcl. It supports bucket-level permissions, but not object-level ACLs in that form of the API.

MinIO does NOT support the full AWS API. See Policy Management — MinIO Baremetal Documentation for the fully supported API set.
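Given that constraint, a bucket policy restricted to actions MinIO does support might look like this sketch (the bucket name and principal are placeholders; check MinIO's policy documentation for the authoritative list of supported actions):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": ["*"] },
      "Action": [
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucket",
        "arn:aws:s3:::bucket/*"
      ]
    }
  ]
}
```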

MINIO_DOMAIN needs to be added for DNS-style buckets to work, which is why the invalid PUT happened. Once we got further, we saw the failures in the XML against what the schemas allow. Make sure you NEVER put a policy in place that uses actions outside the set MinIO actually supports.

Remember: S3-compatible does NOT mean it’s a 100% match for all supported AWS S3 API variables/endpoints/values.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.