Grafana Dashboard Configuration

Grafana is an open-source analytics platform that visualizes metrics and logs from data sources such as Prometheus, Loki, and InfluxDB. This chapter covers installing Grafana, building dedicated Nginx and Tomcat dashboards, configuring alerting, automating dashboard provisioning via JSON, and integrating Loki for log panels — all at a production-ready level.


Installing Grafana

Docker

docker run -d \
  --name grafana \
  -p 3000:3000 \
  -e GF_SECURITY_ADMIN_USER=admin \
  -e GF_SECURITY_ADMIN_PASSWORD=admin123 \
  -v grafana-data:/var/lib/grafana \
  grafana/grafana:10.4.2

# Access in browser: http://localhost:3000
# Default credentials: admin / admin123

apt (Debian/Ubuntu)

# Add GPG key and repository
sudo apt-get install -y apt-transport-https software-properties-common wget
sudo mkdir -p /etc/apt/keyrings/
wget -q -O - https://apt.grafana.com/gpg.key | gpg --dearmor | sudo tee /etc/apt/keyrings/grafana.gpg > /dev/null
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | \
sudo tee /etc/apt/sources.list.d/grafana.list

sudo apt-get update
sudo apt-get install -y grafana

# Start and enable the service
sudo systemctl enable --now grafana-server

# Check status
sudo systemctl status grafana-server
# Web UI: http://localhost:3000

Key Configuration Options

# /etc/grafana/grafana.ini (key settings)
[server]
http_port = 3000
domain = your-domain.com
root_url = https://%(domain)s/

[security]
admin_user = admin
admin_password = your-secure-password

[users]
allow_sign_up = false
default_theme = dark

[auth.anonymous]
enabled = false

# Apply the changes
sudo systemctl restart grafana-server
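Every grafana.ini option can also be set through an environment variable named GF_<SECTION>_<KEY> (uppercase, with dots in the section name becoming underscores) — this is what the GF_SECURITY_* variables in the Docker example rely on. A tiny helper sketch for deriving the variable name from a section/key pair:

```shell
# Map a grafana.ini section/key to its GF_ environment variable name
gf_env() {
  printf 'GF_%s_%s\n' "$(printf '%s' "$1" | tr 'a-z.' 'A-Z_')" \
                      "$(printf '%s' "$2" | tr 'a-z' 'A-Z')"
}

gf_env server root_url         # → GF_SERVER_ROOT_URL
gf_env auth.anonymous enabled  # → GF_AUTH_ANONYMOUS_ENABLED
```

Environment variables take precedence over the ini file, which makes them convenient for per-environment overrides in containers.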

Connecting the Prometheus Data Source

Manual Setup via Web UI

  1. Left menu → Connections → **Data sources** → **Add data source**
  2. Select Prometheus
  3. Configure:
    • Name: Prometheus
    • URL: http://prometheus:9090 (Docker Compose) or http://localhost:9090
    • Scrape interval: 15s
  4. Click Save & test → confirm "Successfully queried the Prometheus API."

Register via API (Automation)

curl -X POST http://admin:admin123@localhost:3000/api/datasources \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Prometheus",
    "type": "prometheus",
    "url": "http://prometheus:9090",
    "access": "proxy",
    "isDefault": true,
    "jsonData": {
      "httpMethod": "POST",
      "scrapeInterval": "15s"
    }
  }'
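A malformed body makes this endpoint return 400 Bad Request, so it is worth linting the JSON offline before sending it. A small sketch, using python3 purely as a JSON validator:

```shell
# Validate the datasource payload locally before POSTing it to Grafana
payload='{"name":"Prometheus","type":"prometheus","url":"http://prometheus:9090","access":"proxy","isDefault":true}'

if printf '%s' "$payload" | python3 -m json.tool > /dev/null; then
  echo "payload is valid JSON"
fi
```

The same check slots naturally into a CI step that lints provisioning payloads before deployment.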

Nginx Dashboard Configuration

Import Public Dashboard (ID: 12559)

The fastest way to get a production Nginx dashboard is to import the official community dashboard:

  1. Left menu → Dashboards → **Import**
  2. Enter Dashboard ID: 12559, click **Load**
  3. Select Prometheus as the data source, then click Import

Building Panels Manually

The following panels cover the most important Nginx metrics.

Panel 1: Requests Per Second (RPS)

Panel type: Time series
Query:
rate(nginx_http_requests_total[5m])
Title: Nginx RPS
Unit: reqps (requests/sec)

Panel 2: Response Time (p95)

Panel type: Time series
Query:
histogram_quantile(0.95, rate(nginx_request_duration_seconds_bucket[5m]))
Title: Response Time (p95)
Unit: seconds
Thresholds: 1.0s = warning, 2.0s = critical

Panel 3: HTTP Status Code Distribution

Panel type: Pie chart
Query:
sum by (status) (rate(nginx_http_requests_total[5m]))
Title: HTTP Status Distribution

Panel 4: 5xx Error Rate

Panel type: Stat
Query:
sum(rate(nginx_http_requests_total{status=~"5.."}[5m]))
/
sum(rate(nginx_http_requests_total[5m])) * 100
Title: 5xx Error Rate (%)
Thresholds: 1% = yellow, 5% = red

Panel 5: Active Connections

Panel type: Gauge
Query:
nginx_connections_active
Title: Active Connections
Max: 1000
Thresholds: 500 = yellow, 800 = red

Panel 6: Connection State Breakdown (Reading/Writing/Waiting)

Panel type: Bar chart
Queries:
nginx_connections_reading (Legend: Reading)
nginx_connections_writing (Legend: Writing)
nginx_connections_waiting (Legend: Waiting)
Title: Connection State
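Several of the panels above — and the alert rule later in this chapter — reuse the same 5xx ratio expression. A Prometheus recording rule can precompute it once so every panel queries the cheap precomputed series instead. A sketch, assuming your prometheus.yml loads rule files via rule_files (the file path and rule name are illustrative):

```yaml
# rules/nginx-recording.yml (path is an assumption; reference it from rule_files in prometheus.yml)
groups:
  - name: nginx_recording
    interval: 15s
    rules:
      - record: nginx:http_5xx_ratio:rate5m
        expr: |
          sum(rate(nginx_http_requests_total{status=~"5.."}[5m]))
          /
          sum(rate(nginx_http_requests_total[5m])) * 100
```

Panels and alerts can then query nginx:http_5xx_ratio:rate5m directly.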

Tomcat Dashboard Configuration

Import Public Dashboard (ID: 4701)

For JVM monitoring, import this widely-used community dashboard:

  1. Left menu → Dashboards → **Import**
  2. Enter Dashboard ID: 4701, click **Load**
  3. Select Prometheus as the data source, then click Import

Building Tomcat Panels Manually

Panel 1: JVM Heap Memory Usage

Panel type: Time series
Queries:
jvm_memory_heap_used_bytes (Legend: Used)
jvm_memory_heap_max_bytes (Legend: Max)
Title: JVM Heap Memory
Unit: bytes (auto)

Panel 2: JVM Heap Utilization (%)

Panel type: Gauge
Query:
jvm_memory_heap_used_bytes / jvm_memory_heap_max_bytes * 100
Title: JVM Heap Usage (%)
Max: 100
Thresholds: 70 = yellow, 90 = red

Panel 3: Tomcat Thread Pool Status

Panel type: Time series
Queries:
tomcat_threadpool_currentthreadcount{connector="http-nio-8080"} (Legend: Current)
tomcat_threadpool_currentthreadsbusy{connector="http-nio-8080"} (Legend: Busy)
tomcat_threadpool_maxthreads{connector="http-nio-8080"} (Legend: Max)
Title: Thread Pool (http-nio-8080)

Panel 4: Request Throughput (RPS)

Panel type: Time series
Query:
rate(tomcat_requestcount_total{connector="http-nio-8080"}[5m])
Title: Request Rate (RPS)
Unit: reqps

Panel 5: GC Collection Rate

Panel type: Time series
Query:
rate(jvm_gc_CollectionCount_total[5m])
Title: GC Collection Rate

Panel 6: GC Collection Time

Panel type: Time series
Query:
rate(jvm_gc_CollectionTime_total[5m])
Title: GC Collection Time (ms/s)

Alerting Configuration

Contact Point — Slack

  1. Left menu → Alerting → **Contact points** → **Add contact point**
  2. Name: Slack-Ops
  3. Select Slack
  4. Webhook URL: enter your Slack incoming webhook URL
  5. Optional Slack settings:
    • Channel: #alerts
    • Username: Grafana Alert
    • Icon emoji: :bell:
  6. Click Test to confirm, then Save contact point

Contact Point — Email

# Add SMTP configuration to /etc/grafana/grafana.ini
[smtp]
enabled = true
host = smtp.gmail.com:587
user = your-email@gmail.com
password = your-app-password
skip_verify = false
from_address = grafana@example.com
from_name = Grafana

# Apply the changes
sudo systemctl restart grafana-server

Create the Email contact point:

  1. Alerting → **Contact points** → **Add contact point**
  2. Select Email
  3. Addresses: ops-team@example.com
  4. Save contact point

Creating Alert Rules

  1. Alerting → **Alert rules** → **New alert rule**
  2. Write the query:
    # Alert when Nginx 5xx error rate exceeds 5%
    sum(rate(nginx_http_requests_total{status=~"5.."}[5m]))
    /
    sum(rate(nginx_http_requests_total[5m])) * 100
  3. Conditions settings:
    • Condition: IS ABOVE 5 (fires when above 5%)
    • For: 2m (must persist for 2 minutes before firing)
  4. Labels: severity=critical
  5. Connect to a Contact Point under the Notification policy tab
  6. Save rule
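The threshold in step 3 is easier to reason about with concrete numbers. A worked example of the same ratio (the sample rates are made up):

```shell
# 2 errors/s out of 400 req/s → 0.5%, comfortably under the 5% threshold
errors_per_sec=2
total_per_sec=400

awk -v e="$errors_per_sec" -v t="$total_per_sec" \
  'BEGIN { printf "%.1f%%\n", e / t * 100 }'   # → 0.5%
```

At 400 req/s the rule would only fire once sustained errors exceed 20/s for the full 2-minute "For" window, which filters out brief spikes.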

Notification Policy

  1. Alerting → **Notification policies**
  2. Edit the default policy or add a new one
  3. Matchers: severity=critical → Contact Point: Slack-Ops
  4. Repeat interval: 12h

Dashboard JSON Provisioning (Auto-Load)

Grafana's provisioning feature automatically loads dashboards on startup. This is essential for managing dashboards as code in CI/CD pipelines.

Dashboard Provider Configuration

# /etc/grafana/provisioning/dashboards/default.yml
# or grafana/provisioning/dashboards/default.yml (Docker)
apiVersion: 1

providers:
  - name: 'default'
    orgId: 1
    type: file
    disableDeletion: false
    updateIntervalSeconds: 30  # Check for changes every 30 seconds
    allowUiUpdates: true
    options:
      path: /var/lib/grafana/dashboards
      foldersFromFilesStructure: true

Data Source Provisioning

# /etc/grafana/provisioning/datasources/prometheus.yml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
    jsonData:
      httpMethod: POST
      scrapeInterval: "15s"
    version: 1
    editable: true

Exporting and Saving Dashboard JSON

To export a dashboard you built in the UI:

  1. Click the Share icon in the top-right → Export
  2. Enable Export for sharing externally
  3. Click Save to file → download nginx-dashboard.json
  4. Copy the file to /var/lib/grafana/dashboards/ or the Docker volume mount path
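For a quick smoke test of the provisioning pipeline, you can also drop a minimal hand-written dashboard JSON into the provisioned folder instead of a full UI export. A sketch (the uid, folder, and schemaVersion value are illustrative):

```shell
# Write a bare-bones dashboard into the folder the provider watches
mkdir -p grafana/dashboards/nginx
cat > grafana/dashboards/nginx/nginx-overview.json <<'EOF'
{
  "uid": "nginx-overview",
  "title": "Nginx Overview",
  "schemaVersion": 39,
  "panels": [],
  "time": { "from": "now-6h", "to": "now" }
}
EOF
```

Grafana should pick the file up within updateIntervalSeconds (30 seconds in the provider config above); if it does not appear, the grafana-server logs usually name the offending file.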

Docker Compose automatic loading:

# docker-compose.yml
grafana:
  image: grafana/grafana:10.4.2
  volumes:
    - grafana-data:/var/lib/grafana
    - ./grafana/provisioning:/etc/grafana/provisioning:ro
    - ./grafana/dashboards:/var/lib/grafana/dashboards:ro

Grafana + Loki Integration (Log Panels)

Loki is a log aggregation system that uses the same label model as Prometheus. Integrating it with Grafana lets you analyze metrics and logs side-by-side on the same dashboard.

Add Loki and Promtail to Docker Compose

# Add to docker-compose.yml
loki:
  image: grafana/loki:3.0.0
  container_name: loki
  ports:
    - "3100:3100"
  volumes:
    - ./loki/loki-config.yml:/etc/loki/loki-config.yml:ro
    - loki-data:/loki
  command: -config.file=/etc/loki/loki-config.yml
  networks:
    - monitoring

promtail:
  image: grafana/promtail:3.0.0
  container_name: promtail
  volumes:
    - /var/log/nginx:/var/log/nginx:ro
    - ./promtail/promtail-config.yml:/etc/promtail/config.yml:ro
  command: -config.file=/etc/promtail/config.yml
  depends_on:
    - loki
  networks:
    - monitoring

# loki/loki-config.yml
auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2024-01-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/index
    cache_location: /loki/cache
  filesystem:
    directory: /loki/chunks

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

# promtail/promtail-config.yml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: nginx
    static_configs:
      - targets:
          - localhost
        labels:
          job: nginx
          __path__: /var/log/nginx/access.log
    pipeline_stages:
      - regex:
          expression: '^(?P<remote_addr>\S+) - (?P<remote_user>\S+) \[(?P<time_local>[^\]]+)\] "(?P<request>[^"]+)" (?P<status>\d+) (?P<body_bytes_sent>\d+)'
      - labels:
          status:
          # Caution: client IPs are effectively unbounded, so promoting
          # remote_addr to a label can explode Loki's stream cardinality.
          remote_addr:
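Before wiring Promtail up, it helps to confirm that the status field really is extractable from your log format. A quick offline sanity check (the sample line is made up, and sed stands in for the Go regex in the pipeline above):

```shell
# A combined-format access-log line with a 502 response
line='203.0.113.5 - - [10/Oct/2024:13:55:36 +0000] "GET /api/health HTTP/1.1" 502 157'

# Extract the 3-digit status that follows the quoted request
status=$(printf '%s\n' "$line" | sed -E 's/^[^"]*"[^"]*" ([0-9]{3}) .*/\1/')
echo "$status"   # → 502
```

If this extraction fails on your real log lines (e.g. because of a custom log_format), adjust the Promtail regex before shipping logs, since mislabeled streams are hard to clean up later.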

Register the Loki Data Source

curl -X POST http://admin:admin123@localhost:3000/api/datasources \
  -H "Content-Type: application/json" \
  -d '{
    "name": "Loki",
    "type": "loki",
    "url": "http://loki:3100",
    "access": "proxy"
  }'
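As with Prometheus, the same registration can live in a provisioning file instead of an API call, following the datasource-provisioning layout shown earlier. A sketch (the filename is an assumption):

```yaml
# grafana/provisioning/datasources/loki.yml
apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    editable: true
```

The file route keeps the Loki datasource in version control alongside the Prometheus one, so a fresh Grafana container comes up with both already configured.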

Adding a Log Panel to a Dashboard

  1. Edit a dashboard → Add panel → Panel type: **Logs**
  2. Data source: Loki
  3. Write LogQL queries:
    # Nginx 5xx error logs (status is a label added by the Promtail pipeline)
    {job="nginx", status=~"5.."}

    # All requests from a specific IP
    {job="nginx"} |= "203.0.113.5"

    # Slow requests (requires JSON log format)
    {job="nginx"} | json | request_time > 1.0
  4. Save the panel

Pro Tips

Dashboard folder structure: When foldersFromFilesStructure: true is set in the provisioning configuration, the directory structure maps automatically to Grafana's folder hierarchy.

grafana/dashboards/
├── nginx/
│   ├── nginx-overview.json
│   └── nginx-errors.json
└── tomcat/
    ├── jvm-metrics.json
    └── tomcat-requests.json

Template variables for multi-server dashboards: When managing multiple servers with a single dashboard, use template variables to make panels dynamic.

  1. Dashboard settings → Variables → **Add variable**
  2. Type: Query
  3. Query: label_values(nginx_connections_active, instance)
  4. Apply the $instance variable in panel queries:
    nginx_connections_active{instance="$instance"}

Explore mode: The **Explore** page in the left menu lets you run ad-hoc Prometheus and Loki queries and correlate metrics with logs in real time — invaluable for incident investigation.