Deploying Discourse to Amazon (and other clouds)

(Marco Ceppi) #1


This is still a work in progress, but I’ve been working on making Discourse dead simple to deploy to the cloud (AWS, OpenStack, etc.) with Juju. You can find the progress so far on the Juju Charms site, mirrored on GitHub; there is still a bit of work left to get this working 100%, but I figured I’d put this out there in case people were interested. This now works to get Discourse running; future updates will simply streamline the deployment and management of Discourse.

The goal is to have something like this when all is said and done:

juju deploy discourse
juju deploy postgresql
juju add-relation discourse postgresql:db-admin
juju expose discourse

If you’re interested in helping to test this, check out getting started with Juju. Pull requests are always welcome!

Current usage

If you want to try this out as it is, first check what is still left to be done so you can set appropriate expectations.

Install Juju

If you don’t have access to Juju (i.e., you don’t have Ubuntu anywhere to use it), you can use one of these Vagrant boxes, which include the latest version of Juju.

Once you have Juju installed, you’ll need to run juju bootstrap, then edit ~/.juju/environments.yaml (you can use vim, nano, or any other command-line editor). Then follow the instructions for Amazon AWS, HP Cloud, or OpenStack. If you don’t have an account with any of these cloud providers, you can use the local provider; however, the local provider will not work in the Vagrant box.
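If you’re pointing at EC2, a minimal environments.yaml might look like the sketch below (the ec2 provider fields follow the Juju documentation of the time; all key, bucket, and secret values are placeholders you must fill in yourself):

```yaml
default: amazon
environments:
  amazon:
    type: ec2
    # Placeholder credentials; substitute your own AWS keys.
    access-key: YOUR-AWS-ACCESS-KEY
    secret-key: YOUR-AWS-SECRET-KEY
    # Any globally unique S3 bucket name for Juju's state storage.
    control-bucket: juju-YOUR-UNIQUE-BUCKET
    # Any hard-to-guess string; used to secure the admin connection.
    admin-secret: YOUR-RANDOM-SECRET
    default-series: precise
```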


Use the following commands to deploy Discourse and PostgreSQL to your cloud:

juju deploy cs:~marcoceppi/discourse
juju deploy postgresql

The Discourse charm isn’t in the charm store yet, since it’s not done, so for now you can deploy from my personal branch. Once those are deployed, you’ll need to relate postgresql to discourse:

juju add-relation discourse postgresql:db-admin

Finally, expose discourse:

juju expose discourse

At any time you can check the status of your environment by typing juju status. If each node is marked as started and there are no errors, you can proceed.

I need admins

You can change the admins at any time by running the following command:

juju set discourse admins="marcoceppi"

This will set the account marcoceppi as an admin. If you want more than one admin, do the following:

juju set discourse admins="marcoceppi,codinghorror,eviltrout"

You can have as many admins as you’d like; just provide them in a comma-separated string. If you want to remove an admin, simply remove them from the list:

juju set discourse admins="codinghorror,eviltrout"
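Since the admin list is just a comma-separated string, you can manipulate it with ordinary shell tools before handing it back to juju set. A small sketch (the remove_admin helper here is hypothetical, not part of the charm):

```shell
# Hypothetical helper: drop one name from a comma-separated admin list.
# Pure string manipulation; does not require juju to be installed.
remove_admin() {
  echo "$1" | tr ',' '\n' | grep -vx "$2" | paste -sd, -
}

admins="marcoceppi,codinghorror,eviltrout"
remove_admin "$admins" "marcoceppi"   # prints: codinghorror,eviltrout
# Then feed the result back to the charm, e.g.:
# juju set discourse admins="$(remove_admin "$admins" "marcoceppi")"
```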


Left to do

  • Create an upstart/init.d script

  • Make sure discourse can scale

  • Test redis-master charm connection

  • Configuration option for web servers (Apache/nginx)

  • Configuration option for repository

  • Version pinning and proper upgrade paths

(Marco Ceppi) #2

I’ve finally gotten this working; it just needs a few tweaks to the charm before prime time. For those interested, I’m rolling a “juju” Vagrant box, so if you’re not on Ubuntu you can still use Juju to deploy Discourse to the cloud and scale.

(Ted Lilley) #3

I’m curious how this might fit into the model of Capistrano deployments. I understand things like deploying PostgreSQL with Juju, but how does it handle updating an existing app with a new revision from a Git repo? That’s how I see working on Discourse’s code: change the code, check it in, and push it to deployment (à la cap deploy). Would Juju take over that role or somehow work beside it?

(Dave H) #4

Great job, I’ll have to check this out over the weekend. Does it work on LXC as well?

(Marco Ceppi) #5

I haven’t tested on LXC, but theoretically it should work just fine!

(Dave H) #6

I’ll let you know how it goes.

(Mark Mims) #7

So the easiest way to handle app version updates is to expose some sort of app version via Juju config for the charm. Then you can just do something like

juju set discourse version=x.x.x

or

juju set discourse repo=

at any time throughout the lifecycle of the service. Which of these will work best is still unclear… it depends on the direction the Discourse project takes for version management.

I’ve got plans for a jujustrano gem which would effectively wrap the juju CLI in cap tasks. It should map quite well, actually. Note that Juju makes it really easy to upgrade the app by upgrading the entire service at once, rather than the standard cap update of symlinks on a single service unit. I plan for the gem to support either of these update modes and let you choose. It’d also be pretty straightforward to get the same rollback ability with entire services that you currently have with symlinks… it’s crazy powerful when you start looking at that sort of stuff. I’d love some help with this if you’ve hacked around with Capistrano before!

(tmeusburger) #8

Hey @marcoceppi, just wanted to say thank you for the work you’ve put in to get this working.

I’ve never used Juju before, and I’m unsure how to install a charm from your personal repo, as the below command fails for me.

Could you update the doc to explain exactly how to install this charm from your repo, or let me know what else I should be doing.


(tmeusburger) #9

Never mind… it now works as expected.

(Marco Ceppi) #10

That should work; if it doesn’t, you can do something like this:

mkdir -p ~/charms/precise
cd ~/charms/precise
bzr branch lp:~marcoceppi/charms/precise/discourse/trunk discourse

Then you can edit and more importantly deploy with:

juju deploy --repository=~/charms local:discourse

There’s still a bunch of work to do, and I’m working on having this mirrored on my GitHub page, but if you have any changes, feel free to push them to Launchpad and open a merge request there. I’ll update the first post with development instructions when I get the charm on GitHub and figure out the Launchpad <-> GitHub connection.

(tmeusburger) #11

So I was able to get everything working to the point where the agent-state is started for all the servers. However, I’m unsure how to finish the deploy. It seems nginx is installed and running, but I’m not sure what to do from here.

What would be the exact commands to run? Sorry again; I’ve done a good amount of manual Rails deploys to Apache, but not nginx, much less had any prior experience with Discourse. Thanks for your help and quick responses.

(Marco Ceppi) #12

If you’ve already done the juju add-relation, then what you’ll need to do is the following:

juju ssh discourse/0
sudo su - root
cd /home/discourse/discourse
export RAILS_ENV=production
sudo -u discourse -H bundle exec sidekiq -e production > log/sidekiq.log 2>&1 &
sudo -u discourse -H bundle exec clockwork config/clock.rb > log/clock.log 2>&1 &
sudo -u discourse -H bundle exec rails server -p 3001 > /dev/null 2>&1 &

You’ll need to wait a few minutes while Rails starts. If you then log out of the machine (or switch to another terminal) and type juju status, you can get the public address of the Discourse server and open it in a web browser.

(Dave H) #13

It seems to work fine using a local LXC container setup as well; however, the local setup is not preserved across reboots, so it is a bit tricky to get going again, other than doing a destroy and redeploy (see the Launchpad bug report here).

(Dave H) #14

One issue following the steps above was that the postgresql charm did not work very well for me: its db-admin-relation-joined hook generated an /etc/postgresql/9.1/main/pg_hba.conf with the correct IP address of the discourse server but missing the /32 mask suffix, resulting in:

 * Restarting PostgreSQL 9.1 database server
 * The PostgreSQL server failed to start. Please check the log output:
 2013-02-10 19:48:10 UTC LOG:  invalid IP mask "md5": Name or service not known
 2013-02-10 19:48:10 UTC CONTEXT:  line 8 of configuration file "/etc/postgresql/9.1/main/pg_hba.conf"
 2013-02-10 19:48:10 UTC FATAL:  could not load pg_hba.conf!

I’m running on precise; didn’t anybody else get bitten by this?
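For reference, the failure boils down to that missing CIDR suffix: without a mask, PostgreSQL parses the next column as the netmask. A sketch of the relevant pg_hba.conf lines (the address and auth method shown are illustrative, not taken from the generated file):

```
# broken: no mask, so PostgreSQL tries to parse "md5" as the netmask
host    all    all    10.248.63.81       md5
# fixed: /32 makes it a valid single-host CIDR entry
host    all    all    10.248.63.81/32    md5
```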

(Marco Ceppi) #15

I actually patched it to remove the /32 because EC2 uses hostnames, not IP addresses. I’ve reopened the original bug for this and I’ll dig into a fix later tonight.

(Marco Ceppi) #16

I’ve pushed a merge request to resolve this bug; it should land this week.

(Marco Ceppi) #17

Okay, the charm is done and awaiting review for the charm store! I’ve also mirrored the code on GitHub. Please feel free to open merge requests!

(Marco Ceppi) #18

The aforementioned fix has landed; postgresql once again works.

(Dave H) #19

Sweet, thanks for your work. I’ll have to test it again tomorrow.

(colin) #20

Awesome work on this, Marco. It’s great to see your progress. When you initially posted that you were working on this, I looked into Juju. Am I correct in understanding that it doesn’t work on a generic VPS running Ubuntu?