Difficulty configuring SSL with CloudFlare enabled


(@SenpaiMass) #1

@sam

I'm getting this error:

nginx: [emerg] SSL_CTX_use_PrivateKey_file("/shared/ssl/ssl.key") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error:0906A068:PEM routines:PEM_do_header:bad password read error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)

This is the output from running the launcher logs command.


(Linked guide: Advanced Setup Only: Allowing SSL / HTTPS for your Discourse Docker setup)
(Jens Maier) #2

Looks like the private key is encrypted, but nginx has no way to prompt you for the required passphrase.

The obvious solution is to decrypt the key… the more secure and recommended alternatives generally require dedicated hardware.

Decrypting the key is easy, fortunately. Run in a console:
openssl rsa -in ssl.key -out ssl-decrypted.key
Type in your passphrase, move the original key file somewhere else and rename the decrypted key file to the original file’s name.
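
If you want to confirm the new file really is unencrypted, a quick check (file name taken from the command above):

grep ENCRYPTED ssl-decrypted.key                  # no output means the key is no longer passphrase-protected
openssl rsa -in ssl-decrypted.key -check -noout   # should print "RSA key ok" without asking for a passphrase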


(@SenpaiMass) #3

Did that.

Checked the logs and I'm still getting the error:

Enter PEM pass phrase:
nginx: [emerg] SSL_CTX_use_PrivateKey_file("/shared/ssl/ssl.key") failed (SSL: error:0906406D:PEM routines:PEM_def_callback:problems getting password error:0906A068:PEM routines:PEM_do_header:bad password read error:140B0009:SSL routines:SSL_CTX_use_PrivateKey_file:PEM lib)

(Jens Maier) #4

The openssl command created a new file; the original ssl.key hasn’t changed yet. :wink:

mv ssl.key ssl.encrypted.key
mv ssl-decrypted.key ssl.key

And while you’re at it:
chmod 0400 ssl.key
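
A quick sanity check afterwards (assuming you're still in the same directory):

ls -l ssl.key   # should now show -r-------- , i.e. readable only by the owner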


(@SenpaiMass) #5

Done, the logs show everything is fine.

But my site isn't loading:
https://greedgamer.com

Also, I have deleted the encrypted key file. Or do I need to keep it?


(Jens Maier) #6

Personally, I never ever delete key material. But I’m a bit paranoid that way.

Did you forward port 443 to the container, i.e. add "443:443" under the expose section in your app.yml?
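
For reference, the relevant part of app.yml would look roughly like this (the other mappings depend on your setup):

expose:
  - "80:80"   # http
  - "443:443" # https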


(@SenpaiMass) #7

Here is my app.yml

##
## After making changes to this file, you MUST rebuild for any changes
## to take effect in your live Discourse instance:
## 
## ./var/docker/launcher rebuild app
##

## this is the all-in-one, standalone Discourse Docker container template
templates:
  - "templates/cron.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/sshd.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ssl.template.yml"

## which TCP/IP ports should this container expose?
expose:
  - "80:80"   # fwd host port 80   to container port 80 (http)
  - "2222:22" # fwd host port 2222 to container port 22 (ssh)
  - "443:443"
#params:
  ## Which Git revision should this container use? (default: tests-passed)
  #version: tests-passed

env:
  LANG: en_US.UTF-8
  ## How many concurrent web requests are supported?
  ## With 2GB we recommend 3-4 workers, with 1GB only 2
  #UNICORN_WORKERS: 3
  STEAM_WEB_API_KEY: 'xxxxxxxxxxxxxxxxxxxxxxx'
  ##
  ## List of comma delimited emails that will be made admin and developer
  ## on initial signup example 'user1@example.com, user2@example.com'
  DISCOURSE_DEVELOPER_EMAILS: 'alankrt@gmail.com'
  ##
  ## The domain name this Discourse instance will respond to
  DISCOURSE_HOSTNAME: 'greedgamer.com'
  ##
  ## The mailserver this Discourse instance will use
  DISCOURSE_SMTP_ADDRESS: smtp.mandrillapp.com
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: xxxxxxxxxxx
  DISCOURSE_SMTP_PASSWORD: xxxxxxxxxxxx
  ##
  ## the origin pull CDN address for this Discourse instance
  #DISCOURSE_CDN_URL: //discourse-cdn.example.com

## These containers are stateless, all data is stored in /shared
volumes:
  - volume:
      host: /var/docker/shared/standalone
      guest: /shared

## The docker manager plugin allows you to one-click upgrade Discourse
## http://discourse.example.com/admin/docker
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - mkdir -p plugins
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/defaye/discourse-steam-login.git

## Remember, this is YAML syntax - you can only have one block with a name
run:
  - exec: echo "Beginning of custom commands"

  ## If you want to configure password login for root, uncomment and change:
  #- exec: apt-get -y install whois # for mkpasswd
  ## Use only one of the following lines:
  #- exec: /usr/sbin/usermod -p 'PASSWORD_HASH' root
  #- exec: /usr/sbin/usermod -p "$(mkpasswd -m sha-256 'RAW_PASSWORD')" root

  ## If you want to authorized additional users, uncomment and change:
  #- exec: ssh-import-id username
  #- exec: ssh-import-id anotherusername

  - exec: echo "End of custom commands"
  - exec: awk -F\# '{print $1;}' ~/.ssh/authorized_keys | awk 'BEGIN { print "Authorized SSH keys for this container:"; } NF>=2 {print $NF;}'

Should I change the Discourse hostname from greedgamer.com to https://greedgamer.com?


(Jens Maier) #8

The app.yml looks about right. Did you rebuild the container? Can you start it and leave it running so I can see if it responds at all?


(@SenpaiMass) #9

Started and rebuilt the app.

PS: I have the key and CRT files uploaded to

/var/docker/shared/standalone/ssl/

Should I change the permissions of the folder to 755?
Edit: I am using StartCom SSL.


(Jens Maier) #10

Well. This is weird. Nginx answers on port 80 with a permanent redirect to https, so the container appears to have been properly built, but it doesn't answer on port 443.

You can check if something else is hogging the https port: stop the container and run this on the host:

sudo netstat -anpt | grep ":443\s" | grep LISTEN

If that doesn’t yield a result, it might be the firewall.
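
If netstat comes up empty, a couple of things on the host worth looking at next (a sketch, assuming the usual iptables-based Docker setup):

sudo iptables -t nat -L DOCKER -n --line-numbers    # Docker's DNAT rules; there should be one forwarding port 443 into the container
sudo iptables -L FORWARD -n --line-numbers          # rules that could block traffic being forwarded into the container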


(@SenpaiMass) #11

Getting this result:

[screenshot of the netstat output]


(Jens Maier) #12

Why is your docker daemon process listening on all addresses? Did you configure this? The docker daemon should normally create and listen on a UNIX socket file, not a public IP address.


(@SenpaiMass) #13

No, I have not configured Docker at all.

I installed Discourse using the official installation method.
Here are my logs:

    WARNING: No swap limit support
- runit: $Id: 25da3b86f7bed4038b8a039d2f8e8c9bbcf0822b $: booting.
- runit: enter stage: /etc/runit/1
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
- runit: leave stage: /etc/runit/1
- runit: enter stage: /etc/runit/2
ok: run: redis: (pid 30) 0s
ok: run: postgres: (pid 31) 0s
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.10 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 37
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[37] 13 Aug 14:51:57.840 # Server started, Redis version 2.8.10
[37] 13 Aug 14:51:57.840 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
2014-08-13 14:51:58 UTC LOG:  database system was shut down at 2014-08-13 14:51:30 UTC
2014-08-13 14:51:58 UTC LOG:  database system is ready to accept connections
2014-08-13 14:51:58 UTC LOG:  autovacuum launcher started
[37] 13 Aug 14:51:58.208 * DB loaded from disk: 0.368 seconds
[37] 13 Aug 14:51:58.208 * The server is now ready to accept connections on port 6379
supervisor pid: 40 unicorn pid: 54
[37] 13 Aug 14:56:58.087 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 14:56:58.090 * Background saving started by pid 431
[431] 13 Aug 14:56:58.763 * DB saved on disk
[431] 13 Aug 14:56:58.765 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 14:56:58.794 * Background saving terminated with success
[37] 13 Aug 15:01:59.049 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:01:59.050 * Background saving started by pid 753
[753] 13 Aug 15:01:59.403 * DB saved on disk
[753] 13 Aug 15:01:59.404 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:01:59.453 * Background saving terminated with success
[37] 13 Aug 15:07:00.087 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:07:00.089 * Background saving started by pid 1074
[1074] 13 Aug 15:07:00.748 * DB saved on disk
[1074] 13 Aug 15:07:00.750 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:07:00.793 * Background saving terminated with success
[37] 13 Aug 15:12:01.080 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:12:01.100 * Background saving started by pid 1394
[1394] 13 Aug 15:12:01.694 * DB saved on disk
[1394] 13 Aug 15:12:01.697 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:12:01.704 * Background saving terminated with success
[37] 13 Aug 15:17:02.061 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:17:02.063 * Background saving started by pid 1720
[1720] 13 Aug 15:17:02.686 * DB saved on disk
[1720] 13 Aug 15:17:02.688 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:17:02.768 * Background saving terminated with success
[37] 13 Aug 15:22:03.050 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:22:03.068 * Background saving started by pid 2041
[2041] 13 Aug 15:22:03.744 * DB saved on disk
[2041] 13 Aug 15:22:03.748 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:22:03.772 * Background saving terminated with success
[37] 13 Aug 15:27:04.057 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:27:04.065 * Background saving started by pid 2364
[2364] 13 Aug 15:27:04.421 * DB saved on disk
[2364] 13 Aug 15:27:04.423 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:27:04.466 * Background saving terminated with success
[37] 13 Aug 15:32:05.062 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:32:05.066 * Background saving started by pid 2690
[2690] 13 Aug 15:32:05.783 * DB saved on disk
[2690] 13 Aug 15:32:05.786 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:32:05.876 * Background saving terminated with success
[37] 13 Aug 15:37:06.080 * 10 changes in 300 seconds. Saving...
[37] 13 Aug 15:37:06.097 * Background saving started by pid 3010
[3010] 13 Aug 15:37:06.764 * DB saved on disk
[3010] 13 Aug 15:37:06.767 * RDB: 1 MB of memory used by copy-on-write
[37] 13 Aug 15:37:06.803 * Background saving terminated with success
[37 | signal handler] (1407944494) Received SIGTERM, scheduling shutdown...
exiting
2014-08-13 15:41:34 UTC LOG:  autovacuum launcher shutting down
- runit: enter stage: /etc/runit/3
- runit: fatal: unable to start child: /etc/runit/3: file does not exist
- runit: leave stage: /etc/runit/3
- runit: sending KILL signal to all processes...
- runit: power off...
- runit: system halt.
- runit: $Id: 25da3b86f7bed4038b8a039d2f8e8c9bbcf0822b $: booting.
- runit: enter stage: /etc/runit/1
run-parts: executing /etc/runit/1.d/cleanup-pids
Cleaning stale PID files
run-parts: executing /etc/runit/1.d/copy-env
- runit: leave stage: /etc/runit/1
- runit: enter stage: /etc/runit/2
ok: run: redis: (pid 30) 0s
ok: run: postgres: (pid 22) 0s
Server listening on 0.0.0.0 port 22.
Server listening on :: port 22.
                _._
           _.-``__ ''-._
      _.-``    `.  `_.  ''-._           Redis 2.8.10 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._
 (    '      ,       .-`  | `,    )     Running in stand alone mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379
 |    `-._   `._    /     _.-'    |     PID: 34
  `-._    `-._  `-./  _.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |           http://redis.io
  `-._    `-._`-.__.-'_.-'    _.-'
 |`-._`-._    `-.__.-'    _.-'_.-'|
 |    `-._`-._        _.-'_.-'    |
  `-._    `-._`-.__.-'_.-'    _.-'
      `-._    `-.__.-'    _.-'
          `-._        _.-'
              `-.__.-'

[34] 13 Aug 15:41:44.508 # Server started, Redis version 2.8.10
[34] 13 Aug 15:41:44.509 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
2014-08-13 15:41:44 UTC LOG:  database system was interrupted; last known up at 2014-08-13 14:51:58 UTC
2014-08-13 15:41:44 UTC LOG:  database system was not properly shut down; automatic recovery in progress
2014-08-13 15:41:44 UTC LOG:  record with zero length at 0/AD7FAA8
2014-08-13 15:41:44 UTC LOG:  redo is not required
2014-08-13 15:41:44 UTC LOG:  database system is ready to accept connections
2014-08-13 15:41:44 UTC LOG:  autovacuum launcher started
[34] 13 Aug 15:41:44.750 * DB loaded from disk: 0.241 seconds
[34] 13 Aug 15:41:44.750 * The server is now ready to accept connections on port 6379
supervisor pid: 35 unicorn pid: 52
config/unicorn_launcher: line 36: kill: (52) - No such process
config/unicorn_launcher: line 10: kill: (52) - No such process
exiting
ok: run: redis: (pid 30) 3s
ok: run: postgres: (pid 22) 3s
supervisor pid: 60 unicorn pid: 62
config/unicorn_launcher: line 36: kill: (62) - No such process
config/unicorn_launcher: line 10: kill: (62) - No such process
exiting
ok: run: redis: (pid 30) 22s
ok: run: postgres: (pid 22) 22s
supervisor pid: 141 unicorn pid: 144

(Jens Maier) #14

Are you on DigitalOcean? If so, someone with access to a droplet should probably investigate their current default docker daemon configuration and adapt the guide if necessary. (I was wrong, the docker daemon opens exposed ports and proxies incoming traffic through userland.)

By the way, the logs aren't particularly helpful because the problem isn't in the container, it's on the host… I think. I would have to look at it myself and figure out how docker has set up the firewall.


(@SenpaiMass) #15

Sent you my SSH details.


(@SenpaiMass) #16

@sam
Could this be a server-side problem?


(Jens Maier) #17

Got it. There’s nothing wrong with your droplet or Discourse. You messed up your DNS.

greedgamer.com resolves to 104.28.27.70, which is presumably some CloudFlare server (because the SOA lists CloudFlare nameservers), but your droplet's actual IP address is 128.199.136.189.
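
You can check this yourself with ordinary DNS lookups (the IPs are the ones above):

dig +short greedgamer.com      # returns the CloudFlare edge IP, e.g. 104.28.27.70
dig +short NS greedgamer.com   # lists the CloudFlare nameservers
# compare against the droplet's real address, 128.199.136.189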


(@SenpaiMass) #18

So I should change my DNS from CloudFlare to DigitalOcean and that would solve it?


(Jens Maier) #19

Yes. In fact you pretty much have to do that.

Alternatively:

CloudFlare is SSL-compatible on any paid plan (Pro, Business or Enterprise). If you are using the CloudFlare free plan, you need to upgrade to a paid plan. Once you upgrade, SSL should start to work within 15 minutes.

See Why isn't SSL working for my site? – Cloudflare Support

By the way, is it just me or is it somewhat worrying that a supposedly trusted CA will issue SSL certificates to CloudFlare for domains they do not own, just as long as the domain root and www name point to a CloudFlare IP?


(@SenpaiMass) #20

Jesus Cloudflare…