Prometheus + Nginx Exporter Metrics Collection
Prometheus is an open-source monitoring system that uses a pull model to collect metrics from target servers. By exposing Nginx and Tomcat internal state as Prometheus metrics, you can build real-time dashboards and alerting systems with visualization tools like Grafana. This chapter walks through every step from enabling Nginx stub_status to building a full monitoring stack with Docker Compose.
Enabling the Nginx stub_status Module
Nginx exposes basic statistics (active connections, processed request counts, and more) as an HTTP endpoint via the ngx_http_stub_status_module.
Verify Module Availability
nginx -V 2>&1 | grep stub_status
# Output must contain: --with-http_stub_status_module
Configure the /nginx_status Endpoint
# /etc/nginx/conf.d/status.conf
server {
    listen 8080;              # Access is restricted by the allow/deny rules below
    server_name localhost;

    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        allow 172.16.0.0/12;  # Allow Docker internal network
        deny all;
    }
}
nginx -t && systemctl reload nginx
# Verify the endpoint
curl http://127.0.0.1:8080/nginx_status
Sample response:
Active connections: 42
server accepts handled requests
1024 1024 5678
Reading: 2 Writing: 5 Waiting: 35
| Field | Description |
|---|---|
| Active connections | Current active connections (Reading + Writing + Waiting) |
| accepts | Total TCP connections accepted |
| handled | Total connections handled (equals accepts when no connections are dropped) |
| requests | Total HTTP requests processed |
| Reading | Connections where Nginx is reading the request header |
| Writing | Connections where Nginx is sending a response |
| Waiting | Idle keep-alive connections |
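The stub_status body is plain text and easy to consume directly; a minimal Python sketch that parses the sample response above and derives the dropped-connection count (`accepts - handled`), a useful health signal:

```python
import re

def parse_stub_status(text: str) -> dict:
    """Parse the plain-text output of Nginx stub_status into a dict of ints."""
    m = re.search(
        r"Active connections:\s*(\d+)\s*"
        r"server accepts handled requests\s*"
        r"(\d+)\s+(\d+)\s+(\d+)\s*"
        r"Reading:\s*(\d+)\s*Writing:\s*(\d+)\s*Waiting:\s*(\d+)",
        text,
    )
    keys = ("active", "accepts", "handled", "requests",
            "reading", "writing", "waiting")
    return dict(zip(keys, map(int, m.groups())))

sample = """Active connections: 42
server accepts handled requests
 1024 1024 5678
Reading: 2 Writing: 5 Waiting: 35
"""

stats = parse_stub_status(sample)
# Dropped connections: non-zero means Nginx hit a resource limit
dropped = stats["accepts"] - stats["handled"]
print(stats["active"], dropped)  # 42 0
```

In practice the exporter in the next section does this for you; the sketch just makes the field semantics concrete.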
Installing and Configuring nginx-prometheus-exporter
The official Nginx exporter converts the stub_status text output into Prometheus metrics format.
Install the Binary
# Check latest release: https://github.com/nginxinc/nginx-prometheus-exporter/releases
VERSION="1.3.0"
ARCH="amd64"
wget https://github.com/nginxinc/nginx-prometheus-exporter/releases/download/v${VERSION}/nginx-prometheus-exporter_${VERSION}_linux_${ARCH}.tar.gz
tar -xzf nginx-prometheus-exporter_${VERSION}_linux_${ARCH}.tar.gz
sudo mv nginx-prometheus-exporter /usr/local/bin/
sudo chmod +x /usr/local/bin/nginx-prometheus-exporter
# Test run
nginx-prometheus-exporter --nginx.scrape-uri=http://127.0.0.1:8080/nginx_status
Register as a systemd Service
# /etc/systemd/system/nginx-prometheus-exporter.service
[Unit]
Description=Nginx Prometheus Exporter
After=network.target

[Service]
User=nobody
ExecStart=/usr/local/bin/nginx-prometheus-exporter \
    --nginx.scrape-uri=http://127.0.0.1:8080/nginx_status \
    --web.listen-address=:9113 \
    --web.telemetry-path=/metrics
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
systemctl daemon-reload
systemctl enable --now nginx-prometheus-exporter
# Verify metrics are exposed
curl http://localhost:9113/metrics
Sample output:
# HELP nginx_connections_accepted Accepted client connections
# TYPE nginx_connections_accepted counter
nginx_connections_accepted 1024
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 42
# HELP nginx_connections_handled Handled client connections
# TYPE nginx_connections_handled counter
nginx_connections_handled 1024
# HELP nginx_http_requests_total Total http requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 5678
# HELP nginx_connections_reading Connections where Nginx is reading the request header
# TYPE nginx_connections_reading gauge
nginx_connections_reading 2
# HELP nginx_connections_waiting Idle client connections
# TYPE nginx_connections_waiting gauge
nginx_connections_waiting 35
# HELP nginx_connections_writing Connections where Nginx is writing the response
# TYPE nginx_connections_writing gauge
nginx_connections_writing 5
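The exposition format above is line-oriented: `# HELP`/`# TYPE` comments followed by `name value` samples. A toy Python parser for the unlabeled samples shown (production code should use an established Prometheus client library, which also handles labels, timestamps, and escaping):

```python
def parse_exposition(text: str) -> dict:
    """Parse simple (unlabeled) Prometheus text-format samples into name -> value."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip HELP/TYPE comments and blank lines
        name, value = line.split()
        samples[name] = float(value)
    return samples

metrics = parse_exposition("""\
# HELP nginx_connections_active Active client connections
# TYPE nginx_connections_active gauge
nginx_connections_active 42
# HELP nginx_http_requests_total Total http requests
# TYPE nginx_http_requests_total counter
nginx_http_requests_total 5678
""")
print(metrics["nginx_connections_active"])  # 42.0
```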
Installing Prometheus and Configuring Scrape
Install Prometheus Binary
VERSION="2.51.2"
wget https://github.com/prometheus/prometheus/releases/download/v${VERSION}/prometheus-${VERSION}.linux-amd64.tar.gz
tar -xzf prometheus-${VERSION}.linux-amd64.tar.gz
sudo mv prometheus-${VERSION}.linux-amd64/{prometheus,promtool} /usr/local/bin/
sudo mkdir -p /etc/prometheus /var/lib/prometheus
sudo mv prometheus-${VERSION}.linux-amd64/{consoles,console_libraries} /etc/prometheus/
prometheus.yml Configuration
# /etc/prometheus/prometheus.yml
global:
  scrape_interval: 15s       # Default scrape interval
  evaluation_interval: 15s   # Rule evaluation interval

alerting:
  alertmanagers:
    - static_configs:
        - targets:
            - localhost:9093

rule_files:
  - /etc/prometheus/rules/*.yml

scrape_configs:
  # Prometheus self-monitoring
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']

  # Nginx Exporter
  - job_name: 'nginx'
    static_configs:
      - targets: ['localhost:9113']
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
        regex: '([^:]+).*'
        replacement: '${1}'

  # Tomcat JMX Exporter (see section below)
  - job_name: 'tomcat'
    static_configs:
      - targets: ['localhost:9404']
    metrics_path: /metrics
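The relabel rule in the nginx job rewrites the `instance` label to the bare host by capturing everything before the first colon. Prometheus anchors relabel regexes (the pattern must match the whole value), which this Python check mirrors:

```python
import re

# Mirrors the relabel rule: regex '([^:]+).*', replacement '${1}'.
# Prometheus anchors the regex, i.e. it behaves like ^([^:]+).*$
relabel = re.compile(r"^([^:]+).*$")

for address in ["localhost:9113", "10.0.0.5:9113", "nginx-exporter:9113"]:
    instance = relabel.sub(r"\1", address)
    print(address, "->", instance)  # e.g. localhost:9113 -> localhost
```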
Register as a systemd Service
# /etc/systemd/system/prometheus.service
[Unit]
Description=Prometheus
After=network.target

[Service]
User=prometheus
ExecStart=/usr/local/bin/prometheus \
    --config.file=/etc/prometheus/prometheus.yml \
    --storage.tsdb.path=/var/lib/prometheus/data \
    --storage.tsdb.retention.time=30d \
    --web.listen-address=:9090 \
    --web.enable-lifecycle
Restart=on-failure

[Install]
WantedBy=multi-user.target
sudo useradd --no-create-home --shell /bin/false prometheus
sudo chown -R prometheus:prometheus /etc/prometheus /var/lib/prometheus
systemctl daemon-reload
systemctl enable --now prometheus
# Open web UI: http://localhost:9090
Tomcat JMX Exporter Integration
Tomcat exposes JVM heap memory, thread pool stats, request throughput, and other internal metrics via JMX (Java Management Extensions). The jmx_prometheus_javaagent translates these into Prometheus metrics format.
Download the javaagent JAR
VERSION="0.20.0"
wget https://repo1.maven.org/maven2/io/prometheus/jmx/jmx_prometheus_javaagent/${VERSION}/jmx_prometheus_javaagent-${VERSION}.jar \
-O /opt/tomcat/lib/jmx_prometheus_javaagent.jar
JMX Exporter Configuration File
# /opt/tomcat/conf/jmx-exporter.yml
---
lowercaseOutputLabelNames: true
lowercaseOutputName: true
whitelistObjectNames:
  - "Catalina:type=GlobalRequestProcessor,name=*"
  - "Catalina:type=ThreadPool,name=*"
  - "java.lang:type=Memory"
  - "java.lang:type=GarbageCollector,name=*"
  - "java.lang:type=Threading"
  - "java.lang:type=ClassLoading"
  - "java.lang:type=OperatingSystem"
rules:
  # Tomcat request processing metrics
  - pattern: 'Catalina<type=GlobalRequestProcessor, name="(.+)"><>(\w+)'
    name: tomcat_$2_total
    labels:
      connector: "$1"
    help: Tomcat global request processor metric $2
    type: COUNTER

  # Tomcat thread pool metrics
  - pattern: 'Catalina<type=ThreadPool, name="(.+)"><>(\w+)'
    name: tomcat_threadpool_$2
    labels:
      connector: "$1"
    help: Tomcat thread pool metric $2
    type: GAUGE

  # JVM heap memory
  - pattern: "java.lang<type=Memory><HeapMemoryUsage>(\\w+)"
    name: jvm_memory_heap_$1_bytes
    help: JVM heap memory $1

  # GC statistics
  - pattern: "java.lang<type=GarbageCollector, name=(.+)><>(CollectionCount|CollectionTime)"
    name: jvm_gc_$2_total
    labels:
      gc: "$1"
Add javaagent to Tomcat JVM Options
# /opt/tomcat/bin/setenv.sh
CATALINA_OPTS="$CATALINA_OPTS -javaagent:/opt/tomcat/lib/jmx_prometheus_javaagent.jar=9404:/opt/tomcat/conf/jmx-exporter.yml"
systemctl restart tomcat
# Verify metrics
curl http://localhost:9404/metrics | head -30
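jmx_exporter matches each rule against a flattened MBean string of roughly the form `domain<properties><>attribute`, and the captured groups feed the `name` and `labels` templates. A rough Python illustration of how the ThreadPool rule above would produce the final metric (the lowercasing reflects the `lowercaseOutputName` option; the sample MBean string is illustrative):

```python
import re

# Mirrors the ThreadPool rule from jmx-exporter.yml above.
pattern = re.compile(r'Catalina<type=ThreadPool, name="(.+)"><>(\w+)')

# Illustrative flattened MBean attribute reference
mbean = 'Catalina<type=ThreadPool, name="http-nio-8080"><>currentThreadCount'

m = pattern.search(mbean)
# name template tomcat_threadpool_$2, lowercased by lowercaseOutputName: true
metric = f"tomcat_threadpool_{m.group(2)}".lower()
label = {"connector": m.group(1)}
print(metric, label)  # tomcat_threadpool_currentthreadcount {'connector': 'http-nio-8080'}
```

This is why query #6 in the PromQL section references `tomcat_threadpool_currentthreadcount`.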
Full Monitoring Stack with Docker Compose
# docker-compose.yml
version: '3.8'

services:
  # Nginx web server
  nginx:
    image: nginx:1.25-alpine
    container_name: nginx
    ports:
      - "80:80"
      - "8080:8080"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/status.conf:/etc/nginx/conf.d/status.conf:ro
    networks:
      - monitoring

  # Nginx Prometheus Exporter
  nginx-exporter:
    image: nginx/nginx-prometheus-exporter:1.3.0
    container_name: nginx-exporter
    command:
      - --nginx.scrape-uri=http://nginx:8080/nginx_status
    ports:
      - "9113:9113"
    depends_on:
      - nginx
    networks:
      - monitoring

  # Prometheus
  prometheus:
    image: prom/prometheus:v2.51.2
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - ./prometheus/rules:/etc/prometheus/rules:ro
      - prometheus-data:/prometheus
    command:
      - --config.file=/etc/prometheus/prometheus.yml
      - --storage.tsdb.retention.time=30d
      - --web.enable-lifecycle
    networks:
      - monitoring

  # Alertmanager
  alertmanager:
    image: prom/alertmanager:v0.27.0
    container_name: alertmanager
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml:ro
      - alertmanager-data:/alertmanager
    networks:
      - monitoring

  # Grafana
  grafana:
    image: grafana/grafana:10.4.2
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin123
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana-data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    depends_on:
      - prometheus
    networks:
      - monitoring

volumes:
  prometheus-data:
  alertmanager-data:
  grafana-data:

networks:
  monitoring:
    driver: bridge
# prometheus/prometheus.yml (Docker Compose environment)
global:
  scrape_interval: 15s

alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']

rule_files:
  - /etc/prometheus/rules/*.yml

scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'nginx'
    static_configs:
      - targets: ['nginx-exporter:9113']
  - job_name: 'tomcat'
    static_configs:
      - targets: ['tomcat:9404']
Start the stack:
docker compose up -d
docker compose ps
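Once the stack is up, target health can be verified programmatically through Prometheus's `/api/v1/targets` HTTP API. A small sketch of a helper that flags targets whose scrape health is not `up` (the sample payload is illustrative, trimmed to the fields the function reads):

```python
import json

def down_targets(targets_response: dict) -> list:
    """Return (job, instance) pairs for scrape targets whose health is not 'up'."""
    return [
        (t["labels"]["job"], t["labels"]["instance"])
        for t in targets_response["data"]["activeTargets"]
        if t["health"] != "up"
    ]

# Illustrative payload shaped like GET http://localhost:9090/api/v1/targets
sample = json.loads("""{
  "data": {"activeTargets": [
    {"labels": {"job": "nginx", "instance": "nginx-exporter:9113"}, "health": "up"},
    {"labels": {"job": "tomcat", "instance": "tomcat:9404"}, "health": "down"}
  ]}
}""")

print(down_targets(sample))  # [('tomcat', 'tomcat:9404')]
```

In a real check you would fetch the JSON with `urllib.request` or `curl` against the running Prometheus and feed it to the same function.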
Alertmanager Notification Configuration
Slack and Email Alerts
# alertmanager/alertmanager.yml
global:
  slack_api_url: 'https://hooks.slack.com/services/YOUR/WEBHOOK/URL'
  smtp_smarthost: 'smtp.gmail.com:587'
  smtp_from: 'alertmanager@example.com'
  smtp_auth_username: 'your-email@gmail.com'
  smtp_auth_password: 'your-app-password'

route:
  group_by: ['alertname', 'job']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 12h
  receiver: 'default'
  routes:
    - match:
        severity: critical
      receiver: 'critical-alerts'
    - match:
        severity: warning
      receiver: 'slack-warnings'

receivers:
  - name: 'default'
    slack_configs:
      - channel: '#alerts'
        title: '[{{ .Status | toUpper }}] {{ .CommonAnnotations.summary }}'
        text: '{{ range .Alerts }}*Alert:* {{ .Annotations.description }}\n*Labels:* {{ range .Labels.SortedPairs }}{{ .Name }}={{ .Value }} {{ end }}{{ end }}'
  - name: 'critical-alerts'
    slack_configs:
      - channel: '#critical-alerts'
        title: 'CRITICAL: {{ .CommonAnnotations.summary }}'
        text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
    email_configs:
      - to: 'ops-team@example.com'
        subject: 'CRITICAL Alert: {{ .CommonAnnotations.summary }}'
        body: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
  - name: 'slack-warnings'
    slack_configs:
      - channel: '#alerts'
        title: 'WARNING: {{ .CommonAnnotations.summary }}'
        text: '{{ range .Alerts }}{{ .Annotations.description }}{{ end }}'
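Alertmanager routing walks the tree: the first child route whose matchers all fit the alert's labels wins, otherwise the alert falls through to the root receiver. A toy Python illustration of the routing table above (not Alertmanager's actual implementation, which also supports regex matchers and nested routes):

```python
# (matcher labels, receiver) pairs mirroring the routes: block above
ROUTES = [
    ({"severity": "critical"}, "critical-alerts"),
    ({"severity": "warning"}, "slack-warnings"),
]

def pick_receiver(labels: dict, default: str = "default") -> str:
    """Return the receiver for the first route whose matchers all apply."""
    for match, receiver in ROUTES:
        if all(labels.get(k) == v for k, v in match.items()):
            return receiver
    return default

print(pick_receiver({"alertname": "NginxDown", "severity": "critical"}))  # critical-alerts
print(pick_receiver({"severity": "info"}))  # default
```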
Prometheus Alert Rules
# prometheus/rules/nginx-alerts.yml
groups:
  - name: nginx_alerts
    rules:
      # High 5xx error rate.
      # NOTE: the stub_status exporter does not expose per-status counters;
      # this expression assumes a `status` label from NGINX Plus or a
      # log-based exporter.
      - alert: NginxHighErrorRate
        expr: rate(nginx_http_requests_total{status=~"5.."}[5m]) / rate(nginx_http_requests_total[5m]) > 0.05
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Nginx 5xx error rate exceeds 5%"
          description: "{{ $labels.instance }}: 5xx error rate is {{ $value | humanizePercentage }}"

      # Active connections threshold exceeded
      - alert: NginxHighConnections
        expr: nginx_connections_active > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Nginx active connections exceed 1000"
          description: "{{ $labels.instance }}: current connection count is {{ $value }}"

      # Nginx is down
      - alert: NginxDown
        expr: up{job="nginx"} == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Nginx exporter is not responding"
          description: "{{ $labels.instance }}: Nginx server or exporter is down"
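The `for:` clause means an alert only fires after its expression has been continuously true for the stated duration; until then it sits in the pending state. A toy sketch of that state machine (an illustration only, not Prometheus's actual implementation):

```python
def alert_state(breach_history: list, for_samples: int) -> str:
    """breach_history: oldest-first booleans, one per evaluation interval.

    Returns 'inactive', 'pending', or 'firing' depending on how long the
    alert expression has been continuously true.
    """
    if not breach_history[-1]:
        return "inactive"
    # Count consecutive True values at the end of the history
    streak = 0
    for breached in reversed(breach_history):
        if not breached:
            break
        streak += 1
    return "firing" if streak >= for_samples else "pending"

# for: 2m with evaluation_interval: 15s -> 8 consecutive breaching evaluations
print(alert_state([True] * 5, 8))     # pending
print(alert_state([True] * 8, 8))     # firing
print(alert_state([True, False], 8))  # inactive
```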
PromQL Query Examples
These queries can be used in the Prometheus web UI (http://localhost:9090) or in Grafana panels.
# 1. Nginx requests per second (RPS)
rate(nginx_http_requests_total[5m])
# 2. Total request increase over the last 1 hour
increase(nginx_http_requests_total[1h])
# 3. Current active Nginx connections
nginx_connections_active
# 4. Waiting connection ratio
nginx_connections_waiting / nginx_connections_active * 100
# 5. Tomcat JVM heap usage percentage
jvm_memory_heap_used_bytes / jvm_memory_heap_max_bytes * 100
# 6. Tomcat active thread count
tomcat_threadpool_currentthreadcount{connector="http-nio-8080"}
# 7. Tomcat request processing rate (RPS)
rate(tomcat_requestcount_total[5m])
# 8. 95th percentile response time (stub_status exposes no latency histogram;
#    requires an exporter or module that emits histogram buckets)
histogram_quantile(0.95, rate(nginx_request_duration_seconds_bucket[5m]))
# 9. Nginx connection acceptance rate (per second)
rate(nginx_connections_accepted[1m])
# 10. GC execution frequency (per second; name lowercased by lowercaseOutputName)
rate(jvm_gc_collectioncount_total[5m])
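For counters, `rate()` and `increase()` are difference arithmetic over the samples inside the window; Prometheus additionally extrapolates to the window boundaries, which this minimal sketch of query #1 ignores:

```python
# (timestamp_seconds, counter_value) samples inside a 5m window
samples = [(0, 5000), (60, 5120), (120, 5240), (180, 5360), (240, 5480)]

first_t, first_v = samples[0]
last_t, last_v = samples[-1]

# rate(): per-second increase over the window
rate = (last_v - first_v) / (last_t - first_t)
# increase(): total increase over the window (~ rate * window length)
increase = last_v - first_v

print(rate, increase)  # 2.0 480
```

This is also why counter resets (e.g. after a restart) matter: `rate()` detects the drop and compensates, while naive subtraction would go negative.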
Register these metrics as panels in Grafana dashboards to monitor system health in real time. The next chapter covers Grafana dashboard configuration in detail.