Still not following the reasoning here. Prometheus follows a pull model and pulls metrics from lots of endpoints. Each service is meant to have one exporter.
This much I understand. My Grafana displays don’t have Redis or Postgres stats (because they are not in that container?)
Exactly! That was my point. I have everything separated as well: redis, sidekiq, postgresql, and the discourse app.
I did not dig too much into it, but I guess that parametrizing that value would allow us to have all metrics from external services. Authors, am I right?
Still not following …
- You run an exporter on the dedicated sidekiq container
- You run an exporter on various web containers
- You run an exporter on postgres
- and so on
This is how you are meant to deploy prometheus monitoring, 1 exporter per service.
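That layout maps onto the Prometheus scrape config roughly like this (a sketch only; the job names and hostnames are made up for illustration, though 9187 and 9121 are the usual default ports for postgres_exporter and redis_exporter):

```yaml
scrape_configs:
  - job_name: discourse
    static_configs:
      - targets: ['web1:9405', 'web2:9405']   # one exporter per web container
  - job_name: sidekiq
    static_configs:
      - targets: ['sidekiq:9405']
  - job_name: postgres
    static_configs:
      - targets: ['postgres:9187']            # postgres_exporter default port
  - job_name: redis
    static_configs:
      - targets: ['redis:9121']               # redis_exporter default port
```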
Thanks @sam. That exporter only serves metrics on localhost:9405 of the server where Discourse is supposed to be running.
If you have separate components, only the ones on the same server as Discourse app will be captured.
You expose that port from the container and configure Prometheus to grab metrics from there. This is how our monitoring is configured: lots of containers … lots of exporters.
That makes sense. So the part that I’m missing is exporters for redis and Postgres. I’ve looked a couple of times but not found an obvious solution.
And I think when I had the prometheus exporter in place on a single-container config it found those stats, but that was a long time ago, and I pretty much use multiple containers exclusively now.
If you have everything under one server/pod, different containers, it works.
Whenever some of the components run in a separate pod from the Discourse app, their metrics won’t be sent because of the hard-coded localhost value, giving a:
Prometheus Exporter, failed to send message Connection refused - connect(2) for "localhost" port 9405
Either parametrizing that value or reading it from an environment variable could also cover that specific scenario, I think.
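A minimal sketch of what reading it from the environment could look like. The variable names `PROMETHEUS_COLLECTOR_HOST` / `PROMETHEUS_COLLECTOR_PORT` and the URI path are hypothetical; the actual plugin may use different names:

```ruby
# Hypothetical sketch: pick up the collector host/port from the environment,
# falling back to the current hard-coded defaults.
collector_host = ENV.fetch("PROMETHEUS_COLLECTOR_HOST", "localhost")
collector_port = Integer(ENV.fetch("PROMETHEUS_COLLECTOR_PORT", "9405"))

# The exporter would then send metrics here instead of always "localhost:9405"
# (the "/send-metrics" path is illustrative, not the plugin's real endpoint).
collector_uri = "http://#{collector_host}:#{collector_port}/send-metrics"
```

With the variables unset, behaviour stays exactly as today; setting them lets a sidekiq or web container in another pod point at wherever the collector actually runs.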