My Discourse is going crazy. I need to hire someone who can help!


(Adrian D'atri Guiran) #1

I have funds and am willing to pay a fair price. My Discourse install is going totally bonkers. The ruby task was up to 12 GB of virtual memory, and the whole machine was grinding to a halt. There is basically zero activity on this forum (like 10 posts in total), so I don’t think it was a load thing. I just rebooted the server and it’s now using 100% CPU constantly.
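In case it helps anyone hitting the same wall, here is a rough way to see which process is actually eating the memory and CPU, using standard Linux tools (the last command assumes a Docker-based install):

ps aux --sort=-%mem | head -n 10   # top memory consumers (VSZ is the "virtual memory" figure above)
top                                # live view; press Shift+P to sort by CPU
docker stats                       # per-container resource usage, if running under Docker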

My forum is here: forum.bid13.com

Please contact me if you are familiar with Discourse and are looking for work.


(Mittineague) #2

Are you on a recent version of Discourse?

I don’t recall the exact version where perf fixes were put in place, but some of the older versions had some serious performance issues.


(Kane York) #3
<meta name="generator" content="Discourse 1.2.0.beta4 - https://github.com/discourse/discourse version 0de6226a20150e7e82b489878667deb367831407">

Try upgrading first! We’re on 1.4 betas now.


(Adrian D'atri Guiran) #4

Well, I’ve tried upgrading a couple of times now, but because that Sidekiq process is running at 100% I haven’t been able to complete an upgrade through the /admin/upgrade interface.
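For anyone else stuck at this point: when a runaway process keeps the /admin/upgrade web upgrader from finishing, restarting the container from the host usually frees things up long enough to retry. A minimal sketch, assuming the standard /var/discourse Docker layout:

cd /var/discourse
./launcher restart app   # stop and start the app container
./launcher enter app     # or shell in to look around
top                      # inside the container, find the runaway ruby/sidekiq process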


(AstonJ) #5

Where/what is it hosted on? (Server specs)
Is it a Docker install?

I got a few errors: one was an internal server error, the other said there was a problem connecting.

(I would probably take a backup and start again.)


(Adrian D'atri Guiran) #6

Server is a Linode 2GB instance, only running this one forum. It was a Docker install originally, never upgraded until now.

So after trying and failing to upgrade via /admin/upgrade, I’ve tried to upgrade directly via SSH:

git pull
./launcher rebuild app

But that crashed on me, and now I’m really at a loss.

RuntimeError: cd /var/www/discourse && su discourse -c 'bundle exec rake db:migrate' failed with return #<Process::Status: pid 474 exit 137>
Location of failure: /pups/lib/pups/exec_command.rb:105:in `spawn'
exec failed with the params {"cd"=>"$home", "hook"=>"bundle_exec", "cmd"=>["su discourse -c 'bundle install --deployment --verbose --without test --without development'", "su discourse -c 'bundle exec rake db:migrate'", "su discourse -c 'bundle exec rake assets:precompile'"]}
1c64408d4cd311899bc8949d74a4d58bb6d42e5b55f6f198f5378254e3296d58
** FAILED TO BOOTSTRAP ** please scroll up and look for earlier error messages, there may be more than one
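For what it’s worth, exit status 137 is 128 + 9, i.e. the process was killed with SIGKILL, and on a 2GB instance during a rebuild that is almost always the kernel’s OOM killer. The usual Discourse advice is to add swap on the host before rebuilding; a minimal sketch (sizes and paths are just the common defaults, adjust as needed):

# create and enable a 2GB swapfile (run as root on the host)
install -o root -g root -m 0600 /dev/null /swapfile
dd if=/dev/zero of=/swapfile bs=1M count=2048
mkswap /swapfile
swapon /swapfile
echo "/swapfile swap swap auto 0 0" >> /etc/fstab

# then retry the rebuild
cd /var/discourse && ./launcher rebuild app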

(Adrian D'atri Guiran) #7

OK, I’m just going to trash the server and start over. What key files do I need to back up so I can import all my data/users/posts/etc. on the new install?


(Adrian D'atri Guiran) #8

Never mind, I found this.

So I’m going to try that first.


(AstonJ) #9

All you really need is a copy of the backup from your admin control panel and any changes you made to app.yml.

Then follow the instructions in discourse/INSTALL-cloud.md (in the discourse/discourse repo on GitHub) and simply upload and import your backup :slight_smile:
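There is also a command-line restore path for when you’d rather not click through the admin panel. A sketch, assuming a reasonably recent discourse_docker image and that the backup .tar.gz has already been copied into /var/discourse/shared/standalone/backups/default/ on the host (the filename below is just a placeholder):

cd /var/discourse
./launcher enter app
# inside the container:
discourse enable_restore
discourse restore my-backup-file.tar.gz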


(Adrian D'atri Guiran) #10

Thanks. I didn’t realize how easy it would be to export the data and import it into the new site. That’s a really slick feature! I’m loving Discourse for that. I still have no idea what exactly went wrong, but things seem to be running smoothly again now that I’m on a brand-new install. Who knows how long it will last, though. :frowning:


(Marius Corîci) #11

You should monitor the app and share here if it starts going “nuts” again. That could be helpful for the community.


(Adrian D'atri Guiran) #12

Before I deleted the old instance I dug through some of the log files. From what I could tell, the Redis server was dying. Then the Sidekiq instance was repeatedly trying to contact Redis, retrying with a 1-second timeout in an infinite loop of some sort. What caused Redis to die in the first place, I have no idea. Anyway, things seem to be going OK for now. I’ll report back if it happens again on the new install.
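If it does recur, a quick health check on Redis from inside the container might narrow it down. A sketch, assuming the standard image where Redis runs as a runit service:

cd /var/discourse
./launcher enter app
# inside the container:
sv status redis                              # is the runit service up?
redis-cli ping                               # should answer PONG
tail /var/www/discourse/log/production.log   # app-side Redis connection errors show up here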


(Jeff Atwood) #13

If you have a very old Docker-based install, you probably just needed to upgrade it: SSH in, upgrade the Docker service to the latest version, then

cd /var/discourse
git pull
./launcher rebuild app
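If the Docker engine itself is old, one way to bring it current (the approach the cloud install doc takes; worth reading the script before piping it to a shell) is roughly:

docker --version                        # check what you're running now
wget -qO- https://get.docker.com/ | sh  # install/upgrade to the latest Docker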

(Kane York) #14

Yes… that sounds exactly like a bug we fixed a few months ago that required a container rebuild, @Adrian_D_Atri_Guiran.