These two commits were backported to fix the specific issue you’re experiencing…
@Falco, did we forget something when backporting the “fix”?
The only thing that changed was the mini_racer version. I tested with a clean install and it backed up successfully.
I will create a new install on DigitalOcean later today to test again, just in case.
The issue was not reported on stable as far as I know.
However, I do know that the “fix” created the issue for us.
Looks like the auto backup from last evening completed successfully, but again left things in a state where the site thinks it is still backing up. The error log shows:
stack trace:
/var/www/discourse/vendor/bundle/ruby/2.3.0/gems/logster-1.2.5/lib/logster/logger.rb:76:in `add_with_opts'
/var/www/discourse/vendor/bundle/ruby/2.3.0/gems/logster-1.2.5/lib/logster/logger.rb:37:in `add'
/usr/local/lib/ruby/2.3.0/logger.rb:498:in `warn'
config/unicorn.conf.rb:129:in `check_sidekiq_heartbeat'
config/unicorn.conf.rb:146:in `master_sleep'
/var/www/discourse/vendor/bundle/ruby/2.3.0/gems/unicorn-5.1.0/lib/unicorn/http_server.rb:284:in `join'
/var/www/discourse/vendor/bundle/ruby/2.3.0/gems/unicorn-5.1.0/bin/unicorn:126:in `<top (required)>'
/var/www/discourse/vendor/bundle/ruby/2.3.0/bin/unicorn:23:in `load'
/var/www/discourse/vendor/bundle/ruby/2.3.0/bin/unicorn:23:in `<main>'
You guys rename ‘stable’ to ‘old stuff’ and ‘latest’ to ‘stable’ and I’ll switch.
Is it safe to run redis-cli flushall daily?
The workaround I have currently in mind is to set up a cronjob that runs:
docker exec data /usr/bin/redis-cli flushall
every night after (or before) the backup so the forum is in a good state to run the next backup.
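Concretely, the host crontab entry I have in mind would be something like this (the 03:00 time is only a placeholder and data is the container name from the command above; adjust it to land before or after the backup window):

# sketch only: flush Redis in the data container every night at 03:00
0 3 * * * docker exec data /usr/bin/redis-cli flushall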
But I’m unsure how clearing the forum state every night impacts its daily operation. Any recommendation on this matter?
Thank you.
What’s preventing you from updating to the latest version? That bug has been fixed and should not happen anymore. There should be no need to flush Redis.
Why is there a version called ‘stable’?
@sam, @codinghorror: Is it an official recommendation from Discourse that the correct solution to a bug in the stable branch is to switch to the latest branch? If it is, I’ll switch. If it’s not, are you guys going to fix this or what?
I understand that the bug no longer exists on latest. We are currently on stable for, well, feature stability. We have a development environment on beta to prepare the next stable release (mostly to deal with breaking changes that impact our CSS customizations).
Maybe it is after all a safer bet for us to use beta in production.
But what’s the point of the stable branch if it is not recommended to use anyway?
Guys,
Just got a brand new Droplet and the problem is still happening on stable.
I’m not sure WHY it doesn’t happen on latest. I’m not sure why it didn’t happen when I last tested it.
However, this bug is very hard to debug, because it happens in a fork inside a wrapper around the Chrome JavaScript engine that we use to parse Markdown.
So, yes, we want stable to be stable and bug-free. We know the bug is happening, and we will fix it as soon as possible.
If necessary, you can disable automatic backups and run RAILS_ENV=production bundle exec ruby script/discourse backup manually while we triage this.
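On a standard Docker-based install that would look roughly like this (the /var/discourse path and the app container name are the usual defaults; yours may differ):

cd /var/discourse
./launcher enter app          # open a shell inside the running app container
cd /var/www/discourse
RAILS_ENV=production bundle exec ruby script/discourse backup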
Thank you for taking the time to reproduce the bug.
Sam made these commits in an attempt to fix this bug:
https://github.com/discourse/discourse/commit/7e43e73df69a5ca70e7f4546465525c7392612fb
https://github.com/discourse/discourse/commit/c995fd65be3896de23c00c3d063cb9e4c55f24d8
They are not on stable. Aren’t they required to fully fix the bug?
You’re right. We only backported the first two commits, which were only part of the fix… We’ll get it fixed pretty soon (unless there’s a merge conflict…)
Hey guys, sorry, this is my bad.
Version 1.6.7 with working backups is now out.
Clean auto backup last night with v. 1.6.7. Thanks for fixing it.
I confirm. Things seem to run smoothly again. Thanks!