How to add a third (or fourth, fifth, etc.) site with Docker Multisite. Getting 500 Internal Server Error


(Shayden Martin) #1

After adding my third and fourth site (second and third additional sites in a multisite install), I’m getting the following error for both:

500 Internal Server Error
If you are the administrator of this website, then please read this web application's log file and/or the web server's log file to find out what went wrong.

I’ve checked the logs, but I can’t find any errors that pertain to the above. I have checked the data container and can see that the additional databases are created and configured fine, and I can see that the info has been copied over to the multisite file in the web container. The second site (the first additional multisite) is working fine. Any ideas? I have copied our web and data container files below.

Web:

templates:
  - "templates/sshd.template.yml"
  - "templates/web.template.yml"

expose:
  - "80:80"
  - "2222:22"

params:
  ## Which Git revision should this container use? (default: tests-passed)
  #version: tests-passed

env:
  LANG: en_US.UTF-8

  UNICORN_WORKERS: 3

  DISCOURSE_DB_SOCKET: '5432'
  DISCOURSE_DB_USER: discourse
  DISCOURSE_DB_PASSWORD: mypassword
  DISCOURSE_DB_HOST: mydataip
  DISCOURSE_REDIS_HOST: mydataip

  DISCOURSE_DEVELOPER_EMAILS: 'mymail@mydomain.com'

  DISCOURSE_HOSTNAME: 'mydomain.com'

  DISCOURSE_SMTP_ADDRESS: smtp.mydomain.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: mymail@mydomain.com
  DISCOURSE_SMTP_PASSWORD: mypassword

  ## The CDN address for this Discourse instance (configured to pull)
  #DISCOURSE_CDN_URL: //discourse-cdn.example.com

  #CORS Settings, to enable ajax with json endpoints
  DISCOURSE_ENABLE_CORS: true
  DISCOURSE_CORS_ORIGIN: '*'

volumes:
  - volume:
      host: /var/docker/shared/web
      guest: /shared
  - volume:
      host: /var/docker/shared/web/log/var-log
      guest: /var/log

## The docker manager plugin allows you to one-click upgrade Discourse
## http://discourse.example.com/admin/docker
hooks:
  before_bundle_exec:
    - exec:
        cd: /var/www/discourse/plugins
        cmd:
          - mkdir -p plugins
          - git clone https://github.com/discourse/docker_manager.git

  after_bundle_exec:
    - exec:
        cd: /var/www/discourse
        cmd:
          - sudo -E -u discourse bundle exec rake multisite:migrate

hooks: #sites
  before_bundle_exec:
    - file:
        path: /var/www/discourse/config/multisite.yml
        contents: |
          site2:
            adapter: postgresql
            host: mydataip
            username: discourse
            password: mypassword
            database: b_discourse
            pool: 25
            timeout: 5000
            db_id: 2
            host_names:
              - site2.mydomain.com
          site3:
            adapter: postgresql
            host: mydataip
            username: discourse
            password: mypassword
            database: c_discourse
            pool: 25
            timeout: 5000
            db_id: 3
            host_names:
              - site3.mydomain.com
          site4:
            adapter: postgresql
            host: mydataip
            username: discourse
            password: mypassword
            database: d_discourse
            pool: 25
            timeout: 5000
            db_id: 4
            host_names:
              - site4.mydomain.com

## Remember, this is YAML syntax - you can only have one block with a name
run:
  - exec: echo "Beginning of custom commands"

  ## If you want to configure password login for root, uncomment and change:
  #- exec: apt-get -y install whois # for mkpasswd
  ## Use only one of the following lines:
  #- exec: /usr/sbin/usermod -p 'PASSWORD_HASH' root
  #- exec: /usr/sbin/usermod -p "$(mkpasswd -m sha-256 'RAW_PASSWORD')" root

  ## If you want to authorized additional users, uncomment and change:
  #- exec: ssh-import-id username
  #- exec: ssh-import-id anotherusername

  - exec: echo "End of custom commands"
  - exec: awk -F\# '{print $1;}' ~/.ssh/authorized_keys | awk 'BEGIN { print "Authorized SSH keys for this container:"; } NF>=2 {print $NF;}'

Data:

templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/sshd.template.yml"

expose:
  - "5432:5432"
  - "6379:6379"
  - "2221:22"

params:
  db_default_text_search_config: "pg_catalog.english"
  ## Set db_shared_buffers to 1/3 of the memory you wish to allocate to postgres
  ## on 1GB install set to 128MB on a 4GB instance you may raise to 1GB
  db_shared_buffers: "256MB"

env:
  # ensure locale exists in container, you may need to install it
  LANG: en_US.UTF-8

volumes:
  - volume:
        host: /var/docker/shared/data
        guest: /shared
  - volume:
        host: /var/docker/shared/data/log/var-log
        guest: /var/log

hooks:
  after_postgres:
    - exec:
        stdin: |
          alter user discourse with password 'mypassword';
        cmd: sudo -u postgres psql
        raise_on_fail: false

hooks: # site2.mydomain.com
  after_postgres:
    - exec: sudo -u postgres createdb b_discourse || exit 0

    - exec: /bin/bash -c 'sudo -u postgres psql b_discourse <<< "grant all privileges on database b_discourse to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql b_discourse <<< "alter schema public owner to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql b_discourse <<< "create extension if not exists hstore;"'
    - exec: /bin/bash -c 'sudo -u postgres psql b_discourse <<< "create extension if not exists pg_trgm;"'

hooks: #site3.mydomain.com
  after_postgres:
    - exec: sudo -u postgres createdb c_discourse || exit 0

    - exec: /bin/bash -c 'sudo -u postgres psql c_discourse <<< "grant all privileges on database c_discourse to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql c_discourse <<< "alter schema public owner to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql c_discourse <<< "create extension if not exists hstore;"'
    - exec: /bin/bash -c 'sudo -u postgres psql c_discourse <<< "create extension if not exists pg_trgm;"'

hooks: #site4.mydomain.com
  after_postgres:
    - exec: sudo -u postgres createdb d_discourse || exit 0

    - exec: /bin/bash -c 'sudo -u postgres psql d_discourse <<< "grant all privileges on database d_discourse to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql d_discourse <<< "alter schema public owner to discourse;"'
    - exec: /bin/bash -c 'sudo -u postgres psql d_discourse <<< "create extension if not exists hstore;"'
    - exec: /bin/bash -c 'sudo -u postgres psql d_discourse <<< "create extension if not exists pg_trgm;"'

Thanks in advance for any and all help. To confirm: site2 is working; site3 and site4 aren’t, and both return the 500 error.


(Sam Saffron) #2

I’m not seeing any command that creates the DBs, so they are likely missing.


(Shayden Martin) #3

@sam I can actually see the DBs if I enter the container.


(Sam Saffron) #4

If you are getting 500s, there is something in a log somewhere; I recommend you look through them.


(Shayden Martin) #5

In rails/unicorn.stderr.log, I’m seeing various instances of this error:

Error during failsafe response: PG::UndefinedTable: ERROR:  relation "permalinks" does not exist
LINE 1: SELECT  1 AS one FROM "permalinks"  WHERE "permalinks"."url"...
                              ^
: SELECT  1 AS one FROM "permalinks"  WHERE "permalinks"."url" = '500' LIMIT 1
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-mini-profiler-0.9.2/lib/patches/sql_patches.rb:160:in `exec'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-mini-profiler-0.9.2/lib/patches/sql_patches.rb:160:in `async_exec'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/postgresql_adapter.rb:822:in `block in exec_no_cache'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract_adapter.rb:373:in `block in log'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activesupport-4.1.8/lib/active_support/notifications/instrumenter.rb:20:in `instrument'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract_adapter.rb:367:in `log'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/postgresql_adapter.rb:822:in `exec_no_cache'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/postgresql/database_statements.rb:137:in `exec_query'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/postgresql_adapter.rb:954:in `select'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract/database_statements.rb:24:in `select_all'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract/query_cache.rb:70:in `select_all'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract/database_statements.rb:30:in `select_one'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/connection_adapters/abstract/database_statements.rb:35:in `select_value'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/activerecord-4.1.8/lib/active_record/relation/finder_methods.rb:298:in `exists?'
  /var/www/discourse/lib/permalink_constraint.rb:4:in `matches?'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/mapper.rb:37:in `block in matches?'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/mapper.rb:36:in `each'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/mapper.rb:36:in `all?'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/mapper.rb:36:in `matches?'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/mapper.rb:45:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/journey/router.rb:73:in `block in call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/journey/router.rb:59:in `each'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/journey/router.rb:59:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/routing/route_set.rb:678:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/middleware/show_exceptions.rb:46:in `render_exception'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/middleware/show_exceptions.rb:35:in `rescue in call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/middleware/show_exceptions.rb:30:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/logster-0.1.6/lib/logster/middleware/reporter.rb:23:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/rack/logger.rb:38:in `call_app'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/rack/logger.rb:22:in `call'
  /var/www/discourse/config/initializers/quiet_logger.rb:10:in `call_with_quiet_assets'
  /var/www/discourse/config/initializers/silence_logger.rb:26:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/actionpack-4.1.8/lib/action_dispatch/middleware/request_id.rb:21:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/methodoverride.rb:21:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/runtime.rb:17:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/sendfile.rb:112:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-mini-profiler-0.9.2/lib/mini_profiler/profiler.rb:193:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/engine.rb:514:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/application.rb:144:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/railtie.rb:194:in `public_send'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/railties-4.1.8/lib/rails/railtie.rb:194:in `method_missing'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/builder.rb:138:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/urlmap.rb:65:in `block in call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `each'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/rack-1.5.2/lib/rack/urlmap.rb:50:in `call'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:576:in `process_client'
  /var/www/discourse/lib/middleware/unicorn_oobgc.rb:95:in `process_client'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:670:in `worker_loop'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:525:in `spawn_missing_workers'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/lib/unicorn/http_server.rb:140:in `start'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/gems/unicorn-4.8.3/bin/unicorn:126:in `<top (required)>'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `load'
  /var/www/discourse/vendor/bundle/ruby/2.0.0/bin/unicorn:23:in `<main>'

Other logs I checked:

• rails/unicorn.stdout.log: Starting up 1 supervised sidekiqs
• rails/production_errors.log is blank.
• rails/production.log: nothing stating a 500 error, after reading through and then grepping it.
• var-log/auth.log is blank.
• var-log/syslog is blank.
• var-log/nginx/error.log: nothing at all related to the sites that are experiencing the error, after reading through and grepping.
• var-log/nginx/access.log: nothing at all related to the sites that are experiencing the error, after reading through and grepping.

Please let me know if any of the above is helpful and/or you need me to post logs from the data container.


(Sam Saffron) #6

Clearly your multisite has not migrated. Did you bootstrap it?


(Kane York) #7

Try putting a sudo -E -u discourse bundle exec rake multisite:migrate at the end?


(Shayden Martin) #8

Yes, it’s been rebuilt several times and stopped, destroyed, bootstrapped, and started again.


(Shayden Martin) #9

Will try this now and report back.


(Shayden Martin) #10

Thanks for your help. It returns the following error, even though the container is functioning perfectly for the default site and the first “multi-sited” site.

root@localhost-web_ms:/var/www/discourse# su discourse -c 'bundle exec rake multisite:migrate --trace'
** Invoke multisite:migrate (first_time)
** Invoke environment (first_time)
** Execute environment
rake aborted!
PG::ConnectionBad: could not connect to server: No such file or directory
	Is the server running locally and accepting
	connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?

This first multisite was bootstrapped well before the others, and the configuration in the web and data containers was structured slightly differently. Is it possible there is an error in the structure of one or both of the container files? @sam Thoughts?


(Sam Saffron) #11

It’s trying to connect to a local socket, but the DB is in another container.

are you running our latest docker image?

cd /var/docker
git pull
./launcher rebuild app

(Shayden Martin) #12

Followed your steps, but the problem is still there.


(Shayden Martin) #13

Going through the bootstrap output, it doesn’t seem that the multisite:migrate command is run at any point. Is this due to an error in my hooks, perhaps? Or would it be run silently?


(Sam Saffron) #14

Well, I can see hooks: twice in the sample above, so yes, very likely.


(Shayden Martin) #15

OK. So to confirm: with pups, a key that is used twice like that will be overridden? What about the actual hook calls, before_bundle_exec for example?


(Sam Saffron) #16

More accurately, this is the way YAML works. But yeah.
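
To illustrate the behavior Sam describes (the keys and commands here are made-up placeholders, not from the configs above): a YAML mapping may not repeat a key, and loaders that tolerate duplicates typically keep only one of the blocks, so the other hook list is silently discarded rather than merged.

```yaml
# Two top-level "hooks:" keys in one document.
hooks:
  before_bundle_exec:
    - exec: echo "this block is silently discarded"

# A typical loader keeps only one value for the key "hooks",
# so only this list of commands ever runs:
hooks:
  before_bundle_exec:
    - exec: echo "only this block survives"
```

This is why the multisite:migrate hook never appeared in the bootstrap output: it lived under a `hooks:` key that was shadowed by another.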


(Shayden Martin) #17

OK, I’ll restructure my containers and report back then. Thanks.
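
For reference, merging the two `hooks:` blocks from the web container above under a single key might look like the following. This is a sketch, not the exact final config; the site3 and site4 multisite entries are abbreviated, and they follow the same pattern as site2.

```yaml
# One top-level "hooks:" key; the multisite.yml file hook and the
# docker_manager clone now share the same before_bundle_exec list.
hooks:
  before_bundle_exec:
    - file:
        path: /var/www/discourse/config/multisite.yml
        contents: |
          site2:
            adapter: postgresql
            host: mydataip
            username: discourse
            password: mypassword
            database: b_discourse
            pool: 25
            timeout: 5000
            db_id: 2
            host_names:
              - site2.mydomain.com
          # site3 (c_discourse, db_id: 3) and site4 (d_discourse, db_id: 4)
          # entries go here, following the same pattern.
    - exec:
        cd: /var/www/discourse/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
  after_bundle_exec:
    - exec:
        cd: /var/www/discourse
        cmd:
          - sudo -E -u discourse bundle exec rake multisite:migrate

# The data container needs the same treatment: its four "hooks:" blocks
# collapse into one key with a single after_postgres command list.
```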


(Shayden Martin) #18

OK, that’s resolved the issue. Thanks very much to @sam for your help.

