404 after Discourse upgrade on subfolder install


(Olivier Baillon) #1

Hi,

After upgrading my Discourse installation to Discourse 2, the whole forum returns a 404 error.

Discourse is installed in a Docker container.
The forum is served under a subfolder URL:
https://www.domain.com/forum

Here is my host nginx config:

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name www.domain.com;
    root /home/forge/www.domain.com/public;

    # FORGE SSL (DO NOT REMOVE!)
    ssl_certificate /etc/nginx/ssl/www.domain.com/194605/server.crt;
    ssl_certificate_key /etc/nginx/ssl/www.domain.com/194605/server.key;

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/nginx/dhparams.pem;

#    add_header X-Frame-Options "SAMEORIGIN";
#    add_header X-XSS-Protection "1; mode=block";
#    add_header X-Content-Type-Options "nosniff";

    index index.html index.htm index.php;

    charset utf-8;

    # FORGE CONFIG (DOT NOT REMOVE!)
    include forge-conf/www.domain.com/server/*;

    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    location = /favicon.ico { access_log off; log_not_found off; }
    location = /robots.txt  { access_log off; log_not_found off; }

    access_log off;
    error_log  /var/log/nginx/www.domain.com-error.log error;

    error_page 404 /index.php;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php/php7.1-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
    }

    location /forum/ {
        proxy_pass http://unix:/var/discourse/shared/standalone/nginx.http.sock:;
        proxy_read_timeout 120;        
        proxy_set_header Host $http_host;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location ~ /\.ht {
        deny all;
    }
}
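
To rule out this host-side proxy, the container's socket can be hit directly; a quick check (assuming curl 7.40+ and the default standalone socket path):

curl --unix-socket /var/discourse/shared/standalone/nginx.http.sock http://localhost/forum/

If that also 404s, the problem is inside the container rather than in the config above.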

and here is my app.yml:

## this is the all-in-one, standalone Discourse Docker container template
##
## After making changes to this file, you MUST rebuild
## /var/discourse/launcher rebuild app
##
## BE *VERY* CAREFUL WHEN EDITING!
## YAML FILES ARE SUPER SUPER SENSITIVE TO MISTAKES IN WHITESPACE OR ALIGNMENT!
## visit http://www.yamllint.com/ to validate this file as needed

templates:
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/web.template.yml"
  - "templates/web.ratelimited.template.yml"
## Uncomment these two lines if you wish to add Lets Encrypt (https)
  #- "templates/web.ssl.template.yml"
  #- "templates/web.letsencrypt.ssl.template.yml"
  - "templates/web.socketed.template.yml"

## which TCP/IP ports should this container expose?
## If you want Discourse to share a port with another webserver like Apache or nginx,
## see https://meta.discourse.org/t/17247 for details
expose:
 # - "80:80"   # http
 # - "443:443" # https
  - "2222:22"
  - "5432:5432"
params:
  db_default_text_search_config: "pg_catalog.french"

  ## Set db_shared_buffers to a max of 25% of the total memory.
  ## will be set automatically by bootstrap based on detected RAM, or you can override
  db_shared_buffers: "8192MB"
  
  ## can improve sorting performance, but adds memory usage per-connection
  #db_work_mem: "40MB"
  
  ## Which Git revision should this container use? (default: tests-passed)
  #version: tests-passed

env:
  LANG: fr_FR.UTF-8
  DISCOURSE_DEFAULT_LOCALE: fr

  ## How many concurrent web requests are supported? Depends on memory and CPU cores.
  ## will be set automatically by bootstrap based on detected CPUs, or you can override
  UNICORN_WORKERS: 12

  ## TODO: The domain name this Discourse instance will respond to
  DISCOURSE_HOSTNAME: www.domain.com
  DISCOURSE_RELATIVE_URL_ROOT: /forum

  ## Uncomment if you want the container to be started with the same
  ## hostname (-h option) as specified above (default "$hostname-$config")
  #DOCKER_USE_HOSTNAME: true

  ## TODO: List of comma delimited emails that will be made admin and developer
  ## on initial signup example 'user1@example.com,user2@example.com'
  DISCOURSE_DEVELOPER_EMAILS: 'admin@domain.com'

  ## TODO: The SMTP mail server used to validate new accounts and send notifications
  DISCOURSE_SMTP_ADDRESS: smtp.mailgun.org
  DISCOURSE_SMTP_PORT: 587
  DISCOURSE_SMTP_USER_NAME: postmaster@domain.com
  DISCOURSE_SMTP_PASSWORD: "azerty"
  DISCOURSE_SMTP_ENABLE_START_TLS: false           # (optional, default true)
  
  ## If you added the Lets Encrypt template, uncomment below to get a free SSL certificate
  #LETSENCRYPT_ACCOUNT_EMAIL: me@example.com


  ## The CDN address for this Discourse instance (configured to pull)
  ## see https://meta.discourse.org/t/14857 for details
  #DISCOURSE_CDN_URL: //discourse-cdn.example.com

## The Docker container is stateless; all data is stored in /shared
volumes:
  - volume:
      host: /var/discourse/shared/standalone
      guest: /shared
  - volume:
      host: /var/discourse/shared/standalone/log/var-log
      guest: /var/log

## Plugins go here
## see https://meta.discourse.org/t/19157 for details
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/Kelio---/discourse-replayer.git 
          - git clone https://github.com/davidtaylorhq/discourse-whos-online.git
          - git clone https://github.com/discourse/discourse-bbcode.git
          - git clone https://github.com/discourse/discourse-bbcode-color.git
## Second part to change according to the howto on "Subfolder support with Docker"

run:
    - exec:
        cd: $home
        cmd:
          - mkdir -p public/forum
          - cd public/forum && ln -s ../uploads && ln -s ../backups
          - rm public/uploads
          - rm public/backups
    - replace:
       global: true
       filename: /etc/nginx/conf.d/discourse.conf
       from: proxy_pass http://discourse;
       to: |
          rewrite ^/(.*)$ /forum/$1 break;
          proxy_pass http://discourse;
    - replace:
       filename: /etc/nginx/conf.d/discourse.conf
       from: etag off;
       to: |
          etag off;
          location /forum {
             rewrite ^/forum/?(.*)$ /$1;
          }
#    - replace:
#       filename: /etc/nginx/conf.d/discourse.conf
#       from: $proxy_add_x_forwarded_for
#       to: $http_fastly_client_ip
#       global: true

    - exec: echo "End of custom commands"
    - exec: awk -F\# '{print $1;}' ~/.ssh/authorized_keys | awk 'BEGIN { print "Authorized SSH keys for this container:"; } NF>=2 {print $NF;}'
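
To confirm the two replace rules above actually landed in the container's nginx config after a rebuild, a quick check (a sketch, assuming the stock template path /etc/nginx/conf.d/discourse.conf):

cd /var/discourse
./launcher enter app
grep -n -A 2 'location /forum' /etc/nginx/conf.d/discourse.conf
grep -n -F 'rewrite ^/(.*)$ /forum/$1 break;' /etc/nginx/conf.d/discourse.conf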

Thank you for your help.


#2

Have you tried to rebuild without any plugins?

Any errors on rebuild?


(Olivier Baillon) #3

Yes, I commented out all the plugin lines and rebuilt the app.

I have also rebooted the server.


(David Brookes) #4

Having the same issue after upgrading today


(Jeremy M (Jerdog)) #5

Same issue as well. The upgrade via /admin/upgrade just sat there and never came back up. I ran ./launcher rebuild app and it builds just fine, but the 404s are still happening.

Edit: I also rebuilt without plugins and experienced the same issues.

@dbrookes / @Olivier_Baillon - are you guys still having the issue? When I did the upgrade, it told me I was 16 changes behind (I believe that was the number), and I am on the “tests-passed” branch, if that helps anyone with troubleshooting.


(Olivier Baillon) #6

@jerdog

Yes, the issue is still here.

OS: Ubuntu 16.04.3 LTS, with apt-get upgrade done
Docker: Docker version 17.12.0-ce, build c97c6d6
Discourse: branch master, with git pull done


(Olivier Baillon) #7

In the host nginx log, I found this:

2018/01/15 15:02:26 [crit] 9044#9044: *23058 connect() to unix:/var/discourse/shared/standalone/nginx.http.sock failed (2: No such file or directory) while connecting to upstream, client: 92.152.178.134, server: www.domain.com, request: "POST /forum/message-bus/84b095f9eabd49a8b5d2b75a4a1935af/poll?dlp=t HTTP/2.0", upstream: "http://unix:/var/discourse/shared/standalone/nginx.http.sock:/forum/message-bus/84b095f9eabd49a8b5d2b75a4a1935af/poll?dlp=t", host: "www.domain.com", referrer: "https://www.domain.com/forum/t/range-de-push/73047"
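
That "No such file or directory" on the socket suggests the container-side nginx never created it; a quick check on the host (assuming the default standalone layout):

ls -l /var/discourse/shared/standalone/nginx.http.sock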


(Jeremy M (Jerdog)) #8

Seems weird this happened after performing an upgrade. I am still down (but I am not in a production system at the moment)


(Michael Brown) #9

This is the important part; nginx can’t even connect to discourse as the socket is not listening.

Is unicorn running?
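
A quick way to check (assuming the standard /var/discourse install; services inside the container are supervised by runit):

cd /var/discourse
./launcher enter app
sv status unicorn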


(Jeremy M (Jerdog)) #10

Is there any reason why that would all of a sudden not be the case after a normal upgrade? In my case, I did an upgrade through the admin Docker upgrade screen; it hung and never came back, and now it is giving 404 errors all over.

Here is my app.yml file, which hasn't been changed:

##
## After making changes to this file, you MUST rebuild for any changes
## to take effect in your live Discourse instance:
##
## /var/docker/launcher rebuild app
##

## this is the all-in-one, standalone Discourse Docker container template
templates:
  - "templates/cron.template.yml"
  - "templates/postgres.template.yml"
  - "templates/redis.template.yml"
  - "templates/sshd.template.yml"
  - "templates/web.template.yml"

## which TCP/IP ports should this container expose?
expose:
  - "8080:80"   # fwd host port 80   to container port 80 (http)
  - "2222:22" # fwd host port 2222 to container port 22 (ssh)

params:
  ## Which Git revision should this container use?
  version: HEAD
  ##version: tests-passed
  db_shared_buffers: "3256MB"

env:
  ## How many concurrent web requests are supported?
  ## With 2GB we recommend 3-4 workers, with 1GB only 2
  UNICORN_WORKERS: 3
  ##
  ## List of comma delimited emails that will be made admin on signup
  DISCOURSE_DEVELOPER_EMAILS: '-=-----------'
  ##
  ## The domain name this Discourse instance will respond to
  DISCOURSE_HOSTNAME: 'auth0.com'
  ##
  ## The mailserver this Discourse instance will use

  DISCOURSE_SMTP_ADDRESS: s-----    # (mandatory)
  DISCOURSE_SMTP_PORT: ----                        # (optional)
  DISCOURSE_SMTP_USER_NAME: ----                 # (optional)
  DISCOURSE_SMTP_PASSWORD: ------ # (optional)

  ##
  ## the origin pull CDN address for this Discourse instance
  # DISCOURSE_CDN_URL: //discourse-cdn.example.com
  DISCOURSE_RELATIVE_URL_ROOT: /forum

## These containers are stateless, all data is stored in /shared
volumes:
  - volume:
      host: /var/docker/shared/standalone
      guest: /shared

## The docker manager plugin allows you to one-click upgrade Discourse
## http://discourse.example.com/admin/docker
hooks:
  after_code:
    - exec:
        cd: $home/plugins
        cmd:
          - mkdir -p plugins
          - git clone https://github.com/discourse/docker_manager.git
          - git clone https://github.com/auth0/discourse-plugin.git auth0
          - git clone https://github.com/discourse/discourse-solved.git
          - git clone https://github.com/discoursehosting/discourse-sitemap.git
          - git clone https://github.com/discourse/discourse-zendesk-plugin.git
          - git clone https://github.com/discourse/discourse-voting.git
          - git clone https://github.com/discourse/discourse-chat-integration.git
#          - git clone https://github.com/vinkashq/discourse-branding.git
          - git clone https://github.com/discourse/discourse-assign.git
          - git clone https://github.com/discourse/discourse-data-explorer.git
          - git clone https://github.com/discourse-league/dl-static-pages.git
          - git clone https://github.com/davidtaylorhq/discourse-whos-online.git
          - git clone https://github.com/discourse/discourse-push-notifications.git
          - git clone https://github.com/discourse/discourse-spoiler-alert.git
          - git clone https://github.com/discourse/discourse-cakeday.git

## Remember, this is YAML syntax - you can only have one block with a name
run:
  - exec: echo "Beginning of custom commands"
  - exec:
      cd: $home
      cmd:
        - mkdir -p public/forum
        - cd public/forum && ln -s ../uploads && ln -s ../backups
#        - ln -s /shared/uploads public/uploads
#        - ln -s /shared/backups public/backups
  - replace:
     global: true
     filename: /etc/nginx/conf.d/discourse.conf
     from: proxy_pass http://discourse;
     to: |
        rewrite ^/(.*)$ /forum/$1 break;
        proxy_pass http://discourse;
  - replace:
     filename: /etc/nginx/conf.d/discourse.conf
     from: etag off;
     to: |
        etag off;
        location /forum {
           rewrite ^/forum/?(.*)$ /$1;
        }
  - replace:
       filename: /etc/nginx/conf.d/discourse.conf
       from: $http_fastly_client_ip
       to: $proxy_add_x_forwarded_for
       global: true

  ## If you want to configure password login for root, uncomment and change:
  #- exec: apt-get -y install whois # for mkpasswd
  ## Use only one of the following lines:
  #- exec: /usr/sbin/usermod -p 'PASSWORD_HASH' root
  #- exec: /usr/sbin/usermod -p "$(mkpasswd -m sha-256 'RAW_PASSWORD')" root

  ## If you want to authorized additional users, uncomment and change:
  #- exec: ssh-import-id username
  #- exec: ssh-import-id anotherusername

  - exec: echo "End of custom commands"
  - exec: awk -F\# '{print $1;}' ~/.ssh/authorized_keys | awk 'BEGIN { print "Authorized SSH keys for this container:"; } NF>=2 {print $NF;}'

(Michael Brown) #11

Nothing known, but the best thing to do here is check the app logs and determine:

  • is unicorn running?
  • is it trying to run?
  • what logs are in the rails logs?
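
For example (a sketch, assuming the standard install paths):

cd /var/discourse
./launcher logs app        # container stdout/stderr
./launcher enter app
sv status unicorn          # is it running / trying to run?
tail -n 50 /var/www/discourse/log/unicorn.stderr.log   # rails-side errors
tail -n 50 /var/www/discourse/log/production.log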

(Jeremy M (Jerdog)) #12

OK, so I will profess to being a bit of a noob in this department… is that checked on the host side or the Docker image side?

Edit: I see unicorn logs in /var/www/discourse/log on my Docker image, and they have both been updated today:

lrwxrwxrwx  1 discourse root        36 Jan 15 18:44 unicorn.stderr.log -> /shared/log/rails/unicorn.stderr.log
lrwxrwxrwx  1 discourse root        36 Jan 15 18:44 unicorn.stdout.log -> /shared/log/rails/unicorn.stdout.log

The following is the tail of my unicorn.stderr.log:

:/var/www/discourse/log# tail unicorn.stderr.log
Failed to report error: Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED) 2 Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED) subscribe failed, reconnecting in 1 second. Call stack ["/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:345:in `rescue in establish_connection'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:331:in `establish_connection'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:101:in `block in connect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:293:in `with_reconnect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:100:in `connect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:364:in `ensure_connected'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:221:in `block in process'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:306:in `logging'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:220:in `process'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:134:in `block in call_loop'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:280:in `with_socket_timeout'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:133:in `call_loop'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/subscribe.rb:43:in `subscription'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/subscribe.rb:12:in `subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:2775:in `_subscription'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:2143:in `block in subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:58:in `block in synchronize'", "/usr/local/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:58:in `synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:2142:in `subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus/backends/redis.rb:336:in `global_subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus.rb:525:in `global_subscribe_thread'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus.rb:473:in `block in new_subscriber_thread'"]
Failed to report error: Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED) 3 Job exception: Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED)

Failed to report error: Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED) 2 Error connecting to Redis on localhost:6379 (Errno::ECONNREFUSED) subscribe failed, reconnecting in 1 second. Call stack ["/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:345:in `rescue in establish_connection'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:331:in `establish_connection'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:101:in `block in connect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:293:in `with_reconnect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:100:in `connect'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:364:in `ensure_connected'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:221:in `block in process'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:306:in `logging'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:220:in `process'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis/client.rb:120:in `call'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:862:in `block in get'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:58:in `block in synchronize'", "/usr/local/lib/ruby/2.4.0/monitor.rb:214:in `mon_synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:58:in `synchronize'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/redis-3.3.5/lib/redis.rb:861:in `get'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus/backends/redis.rb:284:in `process_global_backlog'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus/backends/redis.rb:320:in `block in global_subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus/backends/redis.rb:333:in `global_subscribe'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus.rb:525:in `global_subscribe_thread'", "/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/message_bus-2.1.2/lib/message_bus.rb:473:in `block in new_subscriber_thread'"]
I, [2018-01-15T18:50:00.738134 #76]  INFO -- : Refreshing Gem list
I, [2018-01-15T18:50:09.458856 #76]  INFO -- : listening on addr=0.0.0.0:3000 fd=15
I, [2018-01-15T18:50:16.664702 #76]  INFO -- : master process ready
I, [2018-01-15T18:50:20.964325 #183]  INFO -- : worker=0 ready
I, [2018-01-15T18:50:22.301900 #212]  INFO -- : worker=1 ready
I, [2018-01-15T18:50:23.009560 #260]  INFO -- : worker=2 ready

A rebuild doesn't return any errors that I can see as it processes, and says everything is fine…

I also did a docker restart of the container, and it seems I am still getting the same errors.

Edit 2: I looked into that error, and it seems that the upgrade that blew up earlier today (as mentioned above) broke some things, and now Redis doesn't work? I am wondering if I need to do a full fresh install, and how I would go about that without losing any data.
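
A quick way to confirm whether Redis is reachable from inside the container (redis-cli ships in the standard image; a sketch, assuming the default localhost:6379):

cd /var/discourse
./launcher enter app
redis-cli ping     # should print PONG if Redis is up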

Edit 3: I do have a backup from yesterday that I could use, but I want to see if I can fix this first so I don't hit the same problem again; plus, I'm a bit concerned that doing an upgrade caused this.


(Jeremy M (Jerdog)) #13

@dbrookes / @Olivier_Baillon - did you get it resolved yet? Does your info match what I am seeing?


(Michael Brown) #14

OK, the next step is determining why Redis isn't starting. Can you check the Redis logs?


(Jeremy M (Jerdog)) #15

Which logs are those, @supermathie? Where are they typically located?


(Jeremy M (Jerdog)) #16

According to here, they're supposed to be in shared/standalone/log/var-log/redis, but I don't have that folder.
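
If the folder isn't where the howto says, searching the shared volume is cheap (assuming the standard /var/discourse layout):

find /var/discourse/shared -maxdepth 4 -path '*redis*' 2>/dev/null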


(David Brookes) #17

In my case, the install process failed because the server ran out of memory (2 GB) and bundle was terminated.

I have the same Redis errors, but my log (/var/log/redis) doesn't have anything in it from today.


(Jeremy M (Jerdog)) #18

@supermathie here is my /var/log/redis/current log:

[Redis 3.0.6 (00000000/0) 64 bit startup banner: standalone mode, port 6379, PID 51]

51:M 15 Jan 18:49:59.445 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
51:M 15 Jan 18:49:59.445 # Server started, Redis version 3.0.6
51:M 15 Jan 18:49:59.445 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
51:M 15 Jan 18:49:59.445 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
51:M 15 Jan 18:49:59.657 * DB loaded from disk: 0.212 seconds
51:M 15 Jan 18:49:59.657 * The server is now ready to accept connections on port 6379
51:M 15 Jan 18:55:00.094 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 18:55:00.095 * Background saving started by pid 865
865:C 15 Jan 18:55:00.163 * DB saved on disk
865:C 15 Jan 18:55:00.164 * RDB: 12 MB of memory used by copy-on-write
51:M 15 Jan 18:55:00.195 * Background saving terminated with success
51:M 15 Jan 19:00:01.034 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:00:01.034 * Background saving started by pid 1210
1210:C 15 Jan 19:00:01.105 * DB saved on disk
1210:C 15 Jan 19:00:01.105 * RDB: 8 MB of memory used by copy-on-write
51:M 15 Jan 19:00:01.134 * Background saving terminated with success
51:M 15 Jan 19:05:02.076 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:05:02.076 * Background saving started by pid 1567
1567:C 15 Jan 19:05:02.147 * DB saved on disk
1567:C 15 Jan 19:05:02.147 * RDB: 8 MB of memory used by copy-on-write
51:M 15 Jan 19:05:02.177 * Background saving terminated with success
51:M 15 Jan 19:10:03.012 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:10:03.012 * Background saving started by pid 1901
1901:C 15 Jan 19:10:03.081 * DB saved on disk
1901:C 15 Jan 19:10:03.082 * RDB: 8 MB of memory used by copy-on-write
51:M 15 Jan 19:10:03.112 * Background saving terminated with success
51:M 15 Jan 19:15:04.089 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:15:04.090 * Background saving started by pid 2229
2229:C 15 Jan 19:15:04.237 * DB saved on disk
2229:C 15 Jan 19:15:04.238 * RDB: 2 MB of memory used by copy-on-write
51:M 15 Jan 19:15:04.292 * Background saving terminated with success
51:M 15 Jan 19:20:05.020 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:20:05.020 * Background saving started by pid 2558
2558:C 15 Jan 19:20:05.086 * DB saved on disk
2558:C 15 Jan 19:20:05.087 * RDB: 2 MB of memory used by copy-on-write
51:M 15 Jan 19:20:05.120 * Background saving terminated with success
51:M 15 Jan 19:25:06.073 * 10 changes in 300 seconds. Saving...
51:M 15 Jan 19:25:06.073 * Background saving started by pid 2901
2901:C 15 Jan 19:25:06.140 * DB saved on disk
2901:C 15 Jan 19:25:06.140 * RDB: 1 MB of memory used by copy-on-write
51:M 15 Jan 19:25:06.173 * Background saving terminated with success
51:signal-handler (1516044500) Received SIGTERM scheduling shutdown...
51:M 15 Jan 19:28:20.366 # User requested shutdown...
51:M 15 Jan 19:28:20.366 * Saving the final RDB snapshot before exiting.
51:M 15 Jan 19:28:20.432 * DB saved on disk
51:M 15 Jan 19:28:20.432 # Redis is now ready to exit, bye bye...
[Redis 3.0.6 (00000000/0) 64 bit startup banner: standalone mode, port 6379, PID 50]

50:M 15 Jan 19:28:23.346 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
50:M 15 Jan 19:28:23.346 # Server started, Redis version 3.0.6
50:M 15 Jan 19:28:23.346 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
50:M 15 Jan 19:28:23.346 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
50:M 15 Jan 19:28:23.421 * DB loaded from disk: 0.074 seconds
50:M 15 Jan 19:28:23.421 * The server is now ready to accept connections on port 6379
50:M 15 Jan 19:33:24.087 * 10 changes in 300 seconds. Saving...
50:M 15 Jan 19:33:24.089 * Background saving started by pid 881
881:C 15 Jan 19:33:24.158 * DB saved on disk
881:C 15 Jan 19:33:24.158 * RDB: 16 MB of memory used by copy-on-write
50:M 15 Jan 19:33:24.191 * Background saving terminated with success
50:signal-handler (1516045021) Received SIGTERM scheduling shutdown...
50:M 15 Jan 19:37:01.113 # User requested shutdown...
50:M 15 Jan 19:37:01.113 * Saving the final RDB snapshot before exiting.
50:M 15 Jan 19:37:01.178 * DB saved on disk
50:M 15 Jan 19:37:01.179 # Redis is now ready to exit, bye bye...
[Redis 3.0.6 (00000000/0) 64 bit startup banner: standalone mode, port 6379, PID 49]

49:M 15 Jan 19:37:04.649 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
49:M 15 Jan 19:37:04.649 # Server started, Redis version 3.0.6
49:M 15 Jan 19:37:04.649 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
49:M 15 Jan 19:37:04.649 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
49:M 15 Jan 19:37:04.737 * DB loaded from disk: 0.087 seconds
49:M 15 Jan 19:37:04.737 * The server is now ready to accept connections on port 6379
49:M 15 Jan 19:42:05.093 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 19:42:05.094 * Background saving started by pid 861
861:C 15 Jan 19:42:05.164 * DB saved on disk
861:C 15 Jan 19:42:05.165 * RDB: 12 MB of memory used by copy-on-write
49:M 15 Jan 19:42:05.195 * Background saving terminated with success
49:M 15 Jan 19:47:06.026 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 19:47:06.026 * Background saving started by pid 1186
1186:C 15 Jan 19:47:06.120 * DB saved on disk
1186:C 15 Jan 19:47:06.120 * RDB: 12 MB of memory used by copy-on-write
49:M 15 Jan 19:47:06.126 * Background saving terminated with success
49:M 15 Jan 19:52:07.061 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 19:52:07.062 * Background saving started by pid 1512
1512:C 15 Jan 19:52:07.129 * DB saved on disk
1512:C 15 Jan 19:52:07.130 * RDB: 4 MB of memory used by copy-on-write
49:M 15 Jan 19:52:07.162 * Background saving terminated with success
49:M 15 Jan 19:57:08.021 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 19:57:08.022 * Background saving started by pid 1839
1839:C 15 Jan 19:57:08.088 * DB saved on disk
1839:C 15 Jan 19:57:08.089 * RDB: 5 MB of memory used by copy-on-write
49:M 15 Jan 19:57:08.122 * Background saving terminated with success
49:M 15 Jan 20:02:09.071 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 20:02:09.071 * Background saving started by pid 2162
2162:C 15 Jan 20:02:09.139 * DB saved on disk
2162:C 15 Jan 20:02:09.139 * RDB: 4 MB of memory used by copy-on-write
49:M 15 Jan 20:02:09.172 * Background saving terminated with success
49:M 15 Jan 20:07:10.101 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 20:07:10.102 * Background saving started by pid 2503
2503:C 15 Jan 20:07:10.169 * DB saved on disk
2503:C 15 Jan 20:07:10.170 * RDB: 4 MB of memory used by copy-on-write
49:M 15 Jan 20:07:10.202 * Background saving terminated with success
49:M 15 Jan 20:12:11.044 * 10 changes in 300 seconds. Saving...
49:M 15 Jan 20:12:11.044 * Background saving started by pid 2842
2842:C 15 Jan 20:12:11.111 * DB saved on disk
2842:C 15 Jan 20:12:11.112 * RDB: 4 MB of memory used by copy-on-write
49:M 15 Jan 20:12:11.145 * Background saving terminated with success
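
Side note: those startup WARNINGs describe their own host-side fixes. They are unrelated to the 404 itself, but silencing them is straightforward (run as root on the host; persist the first in /etc/sysctl.conf and the second in /etc/rc.local, as the log suggests):

sysctl -w vm.overcommit_memory=1
echo never > /sys/kernel/mm/transparent_hugepage/enabled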

(Neil Lalonde) #19

Subfolder setups are broken:

Started GET "/forum/" for 127.0.0.1 at 2018-01-15 20:23:28 +0000
ActionController::RoutingError (No route matches [GET] "/forum")
/var/www/discourse/vendor/bundle/ruby/2.4.0/gems/actionpack-5.1.4/lib/action_dispatch/middleware/debug_exceptions.rb:63:in `call'

(Jeremy M (Jerdog)) #20

I'm looking right here, where they get created in the app.yml, and they exist in the Docker image. Keep in mind that all that happened was that the upgrade failed; no other configuration files were changed, and the rebuild works fine and creates those folders.

Edit: @neil, I don't see that entry anywhere in my logs.