I have a running Discourse installation (two, actually: one in staging and one in production, on different VMs). I'm testing on the staging environment. Both were installed via the official guide.
Currently there is a Grafana/Prometheus/Node Exporter stack deployed via Docker Compose on the same VM where the Discourse installation already runs.
Here is the docker-compose.yaml:
version: "3"
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:latest
    container_name: cadvisor
    restart: unless-stopped
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro
      - /dev/disk/:/dev/disk:ro
    networks:
      - prometheus-cadvisor
  node_exporter:
    image: quay.io/prometheus/node-exporter:latest
    container_name: node_exporter
    command:
      - '--path.rootfs=/host'
    pid: host
    restart: unless-stopped
    volumes:
      - '/:/host:ro,rslave'
    networks:
      - prometheus-node_exporter
  prometheus:
    image: prom/prometheus:latest
    restart: unless-stopped
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus:/app.cfg
    networks:
      - world
      - prometheus-cadvisor
      - prometheus-node_exporter
      - discourse
      - grafana-prometheus
    command: >-
      --config.file=/app.cfg/prometheus.yaml
      --storage.tsdb.path=/prometheus
      --web.console.libraries=/usr/share/prometheus/console_libraries
      --web.console.templates=/usr/share/prometheus/consoles
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    restart: unless-stopped
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_USER: [OMITTED]
      GF_SECURITY_ADMIN_PASSWORD: [OMITTED]
      GF_PATHS_PROVISIONING: '/app.cfg/provisioning'
    volumes:
      - ./grafana:/app.cfg
      - ./grafana/provisioning:/etc/grafana/provisioning
    networks:
      - world
      - grafana-prometheus
networks:
  world:
  grafana-prometheus:
    internal: true
  prometheus-cadvisor:
    internal: true
  prometheus-node_exporter:
    internal: true
  discourse:
    external: true
I rebuilt Discourse specifying a network, so that it doesn't deploy on the default bridge, and connected Prometheus to that same network:
docker network create -d bridge discourse
/var/discourse/launcher rebuild app --docker-args '--network discourse'
I tested by entering the Prometheus container and pinging the Discourse container by its internal network alias, and it could reach it.
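Something along these lines confirms both the network attachment and the connectivity (the inspect format string is just one way of listing attached container names; vmuniqueID-app is the same placeholder container name used in the scrape config further down):
# list the containers attached to the "discourse" network
docker network inspect discourse --format '{{range .Containers}}{{.Name}} {{end}}'
# ping the Discourse container from inside the Prometheus container
docker exec prometheus ping -c 2 vmuniqueID-app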
Now, when I configure a Prometheus job to scrape the metrics using the internal IP, all I can see is "server returned HTTP status 404 Not Found".
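To rule out the scrape configuration itself, the same request can be made by hand from inside the Prometheus container (wget should be available in the busybox-based Prometheus image; the target name is the same placeholder as in the config below):
docker exec prometheus wget -O- http://vmuniqueID-app:80/metrics
If that also reports a 404, the problem is on the Discourse side rather than in the Prometheus job definition.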
This is the Prometheus configuration:
global:
  scrape_interval: 30s
  scrape_timeout: 10s
rule_files:
scrape_configs:
  - job_name: prometheus
    metrics_path: /metrics
    static_configs:
      - targets:
          - 'prometheus:9090'
  - job_name: node_exporter
    static_configs:
      - targets:
          - 'node_exporter:9100'
  - job_name: discourse_exporter
    static_configs:
      - targets:
          - 'vmuniqueID-app:80'
vmuniqueID is a placeholder for the actual name of the VM.
As per the documentation here, access via an internal IP should be allowed:
"Out of the box we allow the metrics route to admins and private ips."
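Assuming curl is available inside the app container, one way to check that claim is to request the route from a clearly private/loopback address, i.e. from inside the Discourse container itself (container name is again the placeholder):
docker exec vmuniqueID-app curl -s -o /dev/null -w '%{http_code}\n' http://localhost/metrics
A 200 here, combined with the 404 from the Prometheus container, would suggest the difference lies in how the source address of the cross-container request is classified.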
Please help me see what I am missing.