Docker Log Management

In production environments, logs are the core asset for root cause analysis and system monitoring. In Docker environments, collecting container stdout/stderr centrally is the standard approach. This chapter covers everything from container log fundamentals to building a centralized log collection stack with Fluentd and Loki+Grafana — all at a level you can use in production immediately.


Container Log Fundamentals: stdout/stderr

Docker log collection is built on a container process's standard output (stdout) and standard error (stderr). If a process writes logs to files inside the container instead, Docker's logging pipeline never sees them: docker logs shows nothing, and no log driver can ship them.

Principle: Applications must output logs to stdout/stderr rather than files.

Container process
└─ stdout / stderr
   └─ Docker log driver
      └─ json-file / syslog / fluentd / loki / ...
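Under the hood, the default json-file driver writes each stdout/stderr line to /var/lib/docker/containers/&lt;container-id&gt;/&lt;container-id&gt;-json.log as one JSON object. A minimal Python sketch of what a single entry looks like (the sample line is illustrative):

```python
import json

# One log entry as the json-file driver stores it on disk:
# the original stdout line is kept under "log", with "stream" and "time" metadata.
entry = '{"log":"GET /health 200\\n","stream":"stdout","time":"2024-01-01T00:00:00.000000000Z"}'

record = json.loads(entry)
print(record["stream"])        # which stream the line came from
print(record["log"].strip())   # the original application output
```

This is also why `docker logs` can distinguish stdout from stderr after the fact: the stream is recorded per line.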

Redirect Nginx Logs to stdout

Nginx writes logs to /var/log/nginx/access.log and /var/log/nginx/error.log by default. In container environments, symlinking these files to stdout/stderr is the standard pattern.

The official Nginx Docker image already applies this, as seen in its Dockerfile:

# Pattern already applied in the official Nginx Dockerfile
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

Apply the same approach when building custom Nginx images:

FROM nginx:1.25-alpine

# Remove existing log files and create symlinks
RUN rm -f /var/log/nginx/access.log /var/log/nginx/error.log \
    && ln -sf /dev/stdout /var/log/nginx/access.log \
    && ln -sf /dev/stderr /var/log/nginx/error.log

COPY nginx/conf.d /etc/nginx/conf.d
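A quick way to sanity-check the symlinks after building — assuming the image is tagged my-nginx (the tag is illustrative):

```shell
# Build the custom image and list the log paths inside it
docker build -t my-nginx .
docker run --rm my-nginx ls -l /var/log/nginx
# Expect: access.log -> /dev/stdout, error.log -> /dev/stderr
```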

Basic Log Inspection Commands

# View logs for a specific container
docker logs <container-name>

# Stream logs in real time (follow)
docker logs -f nginx

# View only the last 100 lines
docker logs --tail 100 nginx

# Include timestamps
docker logs -t nginx

# View logs only after a specific time
docker logs --since 2024-01-01T00:00:00 nginx

# Stream all service logs via docker compose
docker compose logs -f

# Stream specific services only
docker compose logs -f nginx app

# Last 50 lines + real-time
docker compose logs -f --tail 50

Log Driver Types

Docker supports multiple log drivers. Drivers can be configured per container or for the entire Docker daemon.

Driver      Description                              Suitable For
json-file   Saves as JSON files on disk (default)    Single server, development
syslog      Sends to the system syslog               Traditional Linux servers
journald    Sends to the systemd journal             systemd-based Linux
fluentd     Sends to a Fluentd agent                 Large-scale log aggregation
gelf        Sends in Graylog GELF format             Graylog/ELK stack
loki        Sends to Grafana Loki (Docker plugin)    Prometheus-based monitoring
awslogs     Sends to AWS CloudWatch Logs             AWS environments
none        Disables logging                         When logging is not needed
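For one-off containers, the same per-container settings can be passed to plain docker run via --log-driver and --log-opt; the flags mirror the compose logging block shown next (the container name web is illustrative):

```shell
# json-file with rotation for a single container
docker run -d --name web \
  --log-driver json-file \
  --log-opt max-size=10m \
  --log-opt max-file=3 \
  nginx:1.25-alpine

# Confirm which driver the container ended up with
docker inspect --format '{{.HostConfig.LogConfig.Type}}' web
```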

Setting Log Drivers in docker-compose.yml

version: "3.9"

services:
  nginx:
    image: nginx:1.25-alpine
    logging:
      driver: "json-file"
      options:
        max-size: "10m"    # Max size per file
        max-file: "5"      # Max number of files (rotation)
        compress: "true"   # Gzip compress rotated files

  app:
    image: my-app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "20m"
        max-file: "10"
        labels: "service,version"
        env: "NODE_ENV,APP_VERSION"

Log Rotation: max-size and max-file

Unlimited log accumulation can fill up the disk and halt the server. Configure automatic rotation with the max-size and max-file options of the json-file driver.

services:
  app:
    image: my-app:latest
    logging:
      driver: "json-file"
      options:
        max-size: "50m"   # Rotate to a new file every 50MB
        max-file: "7"     # Keep at most 7 files (max ~350MB total)

To configure the default log driver for the entire Docker daemon, edit /etc/docker/daemon.json:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
# Restart the daemon to apply (the default driver affects only newly created containers)
sudo systemctl restart docker

# Verify the daemon-wide default driver
docker info --format '{{.LoggingDriver}}'

Centralized Log Collection with Fluentd

Fluentd is a log aggregation tool that collects logs from various sources, parses them, and forwards them to any storage destination.

Fluentd Configuration File

fluentd/fluent.conf:

# Receive data from the Docker fluentd log driver
<source>
  @type forward
  port 24224
  bind 0.0.0.0
</source>

# Parse Nginx access logs
<filter nginx.**>
  @type parser
  key_name log
  <parse>
    @type nginx
  </parse>
</filter>

# Parse app logs (assuming JSON format)
<filter app.**>
  @type parser
  key_name log
  <parse>
    @type json
  </parse>
</filter>

# Output all logs to stdout (for debugging)
<match **>
  @type stdout
</match>

# Alternative: save to file with daily rotation
# <match **>
#   @type file
#   path /fluentd/log/output
#   <buffer time>
#     timekey 1d
#     timekey_use_utc true
#     timekey_wait 10m
#   </buffer>
# </match>

docker-compose.yml with Fluentd

version: "3.9"

services:
  fluentd:
    image: fluent/fluentd:v1.16-1
    container_name: fluentd
    volumes:
      - ./fluentd/fluent.conf:/fluentd/etc/fluent.conf:ro
      - fluentd_logs:/fluentd/log
    ports:
      - "24224:24224"
      - "24224:24224/udp"
    networks:
      - logging

  nginx:
    image: nginx:1.25-alpine
    ports:
      - "80:80"
      - "443:443"
    logging:
      driver: "fluentd"
      options:
        # The host's Docker daemon (not the container) makes this connection,
        # so localhost works through the published 24224 port
        fluentd-address: "localhost:24224"
        tag: "nginx.access"
        fluentd-async: "true"
    depends_on:
      - fluentd
    networks:
      - logging

  app:
    image: my-app:latest
    logging:
      driver: "fluentd"
      options:
        fluentd-address: "localhost:24224"
        tag: "app.server"
        fluentd-async: "true"
    depends_on:
      - fluentd
    networks:
      - logging

volumes:
  fluentd_logs:

networks:
  logging:

Loki + Grafana Local Stack Setup

Grafana Loki is a lightweight log aggregation system that applies Prometheus's label-based query model to logs. Combined with Grafana, you can view metrics and logs on the same dashboard.

loki/loki-config.yml:

auth_enabled: false

server:
  http_listen_port: 3100

ingester:
  lifecycler:
    address: 127.0.0.1
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1
    final_sleep: 0s
  chunk_idle_period: 5m
  chunk_retain_period: 30s

schema_config:
  configs:
    - from: 2024-01-01
      store: boltdb-shipper
      object_store: filesystem
      schema: v11
      index:
        prefix: index_
        period: 24h

storage_config:
  boltdb_shipper:
    active_index_directory: /loki/boltdb-shipper-active
    cache_location: /loki/boltdb-shipper-cache
    shared_store: filesystem
  filesystem:
    directory: /loki/chunks

limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h

chunk_store_config:
  max_look_back_period: 0s

table_manager:
  retention_deletes_enabled: false
  retention_period: 0s

promtail/promtail-config.yml (log collection agent):

server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: docker-logs
    docker_sd_configs:
      - host: unix:///var/run/docker.sock
        refresh_interval: 5s
    relabel_configs:
      # Docker reports names with a leading slash ("/nginx"); strip it
      - source_labels: ['__meta_docker_container_name']
        regex: '/(.*)'
        target_label: 'container'
      - source_labels: ['__meta_docker_container_log_stream']
        target_label: 'logstream'

Loki + Grafana + Promtail docker-compose.yml:

version: "3.9"

services:
  loki:
    image: grafana/loki:2.9.0
    container_name: loki
    ports:
      - "3100:3100"
    volumes:
      - ./loki/loki-config.yml:/etc/loki/local-config.yaml:ro
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml
    networks:
      - monitoring

  promtail:
    image: grafana/promtail:2.9.0
    container_name: promtail
    volumes:
      - ./promtail/promtail-config.yml:/etc/promtail/config.yml:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    command: -config.file=/etc/promtail/config.yml
    networks:
      - monitoring
    depends_on:
      - loki

  grafana:
    image: grafana/grafana:10.2.0
    container_name: grafana
    ports:
      - "3000:3000"
    environment:
      - GF_SECURITY_ADMIN_USER=admin
      - GF_SECURITY_ADMIN_PASSWORD=admin123
      - GF_PATHS_PROVISIONING=/etc/grafana/provisioning
    volumes:
      - grafana_data:/var/lib/grafana
      - ./grafana/provisioning:/etc/grafana/provisioning:ro
    networks:
      - monitoring
    depends_on:
      - loki

  nginx:
    image: nginx:1.25-alpine
    container_name: nginx   # fixed name so log labels stay predictable
    ports:
      - "80:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
    networks:
      - monitoring

volumes:
  loki_data:
  grafana_data:

networks:
  monitoring:

Auto-provision the Grafana datasource with grafana/provisioning/datasources/loki.yml:

apiVersion: 1

datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
    jsonData:
      maxLines: 1000

After starting the stack, open Grafana (http://localhost:3000), go to the Explore menu, select the Loki datasource, and query logs with LogQL:

# All nginx container logs
{container="nginx"}

# Filter error-level logs
{container="app"} |= "ERROR"

# Parse JSON then filter by field
{container="app"} | json | level="error"
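Beyond filtering, LogQL also supports metric queries over log streams. For example, a per-container error rate chart (this assumes the container label produced by the Promtail config above):

```
# ERROR lines per second over the last 5 minutes, grouped by container
sum by (container) (rate({container=~".+"} |= "ERROR" [5m]))
```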

Production Pattern: Application Log JSON Formatting

Outputting logs in JSON format makes them easy to parse with log collection tools like Fluentd and Loki, enabling field-based filtering.

Node.js (using pino):

// npm install pino
const pino = require('pino');

const logger = pino({
  level: process.env.LOG_LEVEL || 'info',
  timestamp: pino.stdTimeFunctions.isoTime,
  formatters: {
    level: (label) => ({ level: label }),
  },
});

// JSON log output example
logger.info({ requestId: '123', userId: 42, duration: 52 }, 'Request processed');
// {"level":"info","time":"2024-01-01T00:00:00.000Z","requestId":"123","userId":42,"duration":52,"msg":"Request processed"}

logger.error({ err: new Error('DB connection failed'), service: 'database' }, 'Connection error');

Python (using structlog):

# pip install structlog
import structlog
import logging

structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.stdlib.add_log_level,
        structlog.processors.JSONRenderer(),
    ],
    logger_factory=structlog.stdlib.LoggerFactory(),
)

logger = structlog.get_logger()

# JSON log output
logger.info("request_processed", request_id="123", user_id=42, duration_ms=52)

# Inside an exception handler, attach the error message as a field
try:
    connect_db()  # hypothetical DB call
except Exception as e:
    logger.error("db_connection_failed", service="database", error=str(e))

Java (Logback + JSON encoder):

<!-- Add dependency to pom.xml -->
<!-- net.logstash.logback:logstash-logback-encoder:7.4 -->

<!-- logback-spring.xml -->
<configuration>
  <appender name="JSON_STDOUT" class="ch.qos.logback.core.ConsoleAppender">
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
      <includeCallerData>false</includeCallerData>
      <fieldNames>
        <timestamp>time</timestamp>
        <version>[ignore]</version>
      </fieldNames>
    </encoder>
  </appender>

  <root level="INFO">
    <appender-ref ref="JSON_STDOUT" />
  </root>
</configuration>

Summary: Choosing a Log Strategy by Environment

Environment                Recommended Strategy
Local development          docker logs -f, json-file driver
Single-server production   json-file + max-size/max-file rotation
Small-to-medium cluster    Loki + Grafana (lightweight, integrates with Prometheus)
Large enterprise           Fluentd + Elasticsearch + Kibana (EFK stack)
AWS environment            awslogs driver + CloudWatch Logs