
Docker-Based Server Integration Architecture

Docker is a technology that packages applications and their runtime environments into isolated units called containers. In traditional server environments, Nginx, Tomcat, and databases were installed on a single physical server or VM and communicated directly. In Docker environments, each service is separated into independent containers, enabling clearer role separation and more flexible deployment.


Traditional Server Environment vs Docker Environment​

Item                     Traditional Server              Docker Environment
Deployment unit          Server (physical/VM)            Container
Environment setup        Manual package installation     Codified via Dockerfile
Isolation level          Process level                   Namespaces + cgroups
Scalability              Add VM (minutes)                Add container (seconds)
Environment consistency  May differ per server           Guaranteed identical via image
Resource usage           Full OS boot required           Shared host kernel, lightweight
Rollback                 Snapshot or manual recovery     Instant via previous image tag
Config management        Direct server SSH and edit      Volume mount or embedded in image
Network design           Direct iptables management      Docker network drivers
Log management           Direct filesystem access        docker logs or centralized aggregation

Container Role Separation​

The core principle of Docker-based server integration is one role per container. This enables independent updates, scaling, and fault isolation for each service.

Nginx Reverse Proxy Container​

  • Entry point that receives all external traffic first
  • SSL/TLS termination: converts HTTPS to HTTP before forwarding internally
  • Directly serves static files (HTML, CSS, JS, images)
  • Load balancing: distributes traffic across multiple WAS containers
  • Request buffering and caching to reduce WAS load
  • Access logging and security headers
Official image: nginx:alpine (lightweight Alpine-based build, much smaller than the Debian-based default)
Ports: 80 (HTTP), 443 (HTTPS) → exposed to host
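The bullets above map onto a short server block. Below is a minimal sketch of nginx/conf.d/default.conf, assuming the Compose service name app on port 8080 used later in this page and a hypothetical example.com domain with Let's Encrypt certificates:

```nginx
# Upstream pointing at the WAS container; the Compose service name resolves via Docker DNS
upstream was_backend {
    server app:8080;
}

server {
    listen 443 ssl;
    server_name example.com;   # hypothetical domain

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # Serve static files directly, bypassing the WAS
    location /static/ {
        alias /usr/share/nginx/html/static/;
    }

    # Everything else is proxied to the WAS as plain HTTP (TLS terminated here)
    location / {
        proxy_pass http://was_backend;
        proxy_set_header Host              $host;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Adding more `server` lines to the upstream block is how the load-balancing bullet is realized when multiple WAS containers run.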

Tomcat / Spring Boot WAS Container​

  • Handles business logic, provides REST APIs
  • Receives only proxied requests from Nginx (no direct external access)
  • Session management, authentication/authorization
  • Database connection pool management (HikariCP, etc.)
  • Horizontally scalable: run multiple instances from same image
Official image: tomcat:10.1-jdk17-temurin (WAR deployments) or eclipse-temurin:17-jre (executable Spring Boot jars)
Port: 8080 β†’ not exposed externally, internal container network only
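The Compose skeleton later on this page builds this container from a Dockerfile and selects stages via target. A multi-stage sketch, assuming a Gradle-wrapper build (swap in the Maven equivalents if needed); stage names and the curl install are choices made here, not requirements:

```dockerfile
# ── Build stage: compile the Spring Boot jar ──
FROM eclipse-temurin:17-jdk AS build
WORKDIR /app
COPY . .
RUN ./gradlew bootJar --no-daemon   # assumes a Gradle wrapper in the repo

# ── Development stage (selected by `target: development` in the dev override) ──
FROM eclipse-temurin:17-jdk AS development
WORKDIR /app
COPY --from=build /app/build/libs/*.jar app.jar
EXPOSE 8080 5005
ENTRYPOINT ["java", "-jar", "app.jar"]

# ── Production stage: JRE only, curl added for the Compose healthcheck ──
FROM eclipse-temurin:17-jre AS production
RUN apt-get update && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=build /app/build/libs/*.jar app.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Keeping the JDK out of the final stage shrinks the runtime image; both stages run the same jar, so dev and prod differ only in tooling.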

DB Container​

  • Persistent data storage (PostgreSQL, MySQL, MariaDB, etc.)
  • Located at the innermost layer, completely blocked from external access
  • Data persistence guaranteed via volumes (data preserved across container restarts)
  • Managed DB services (RDS, etc.) recommended for production; containers suitable for dev/staging
Official image: postgres:16-alpine
Port: 5432 β†’ not exposed externally, accessible only within backend network

Container Network Design Principles​

For security and isolation, separating networks into frontend and backend is the standard pattern.

Frontend Network​

  • Network shared by Nginx and WAS containers
  • Handles proxy communication in Nginx β†’ WAS direction
  • The only network connected to external (host) traffic

Backend Network​

  • Network shared by WAS and DB containers
  • Handles database query communication in WAS β†’ DB direction
  • Completely isolated from external traffic

Core Principles of Network Separation​

  • Nginx container: participates only in the frontend network (entry point role)
  • WAS container: participates in both frontend and backend networks
  • DB container: participates only in backend network (external access completely blocked)

Traffic Flow​

Client (Browser/App)
        │
        │ HTTPS:443 / HTTP:80
        ▼
┌──────────────────────┐
│   Nginx Container    │   frontend network
│   (Reverse Proxy)    │
└──────────────────────┘
        │
        │ HTTP:8080 (proxy_pass, frontend network)
        ▼
┌──────────────────────┐
│    WAS Container     │   frontend + backend networks
│ (Spring Boot/Tomcat) │
└──────────────────────┘
        │
        │ JDBC:5432 (backend network)
        ▼
┌──────────────────────┐
│     DB Container     │   backend network
│     (PostgreSQL)     │
└──────────────────────┘



Docker Compose Basic Skeleton Example​

Below is a basic skeleton for configuring Nginx + Spring Boot + PostgreSQL. Review the role of each service and how network connections work.

# docker-compose.yml
version: '3.9'

services:
  # ── 1. Nginx Reverse Proxy ──────────────────────────
  nginx:
    image: nginx:alpine
    container_name: nginx-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ./certbot/conf:/etc/letsencrypt:ro
      - nginx-logs:/var/log/nginx
      - static-files:/usr/share/nginx/html/static:ro
    depends_on:
      app:
        condition: service_healthy
    networks:
      - frontend
    restart: unless-stopped

  # ── 2. Spring Boot WAS ──────────────────────────────
  app:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: spring-app
    expose:
      - "8080"                 # No external exposure, internal container network only
    environment:
      - SPRING_PROFILES_ACTIVE=prod
      - DB_HOST=db             # Service name acts as DNS
      - DB_PORT=5432
      - DB_NAME=${POSTGRES_DB}
      - DB_USER=${POSTGRES_USER}
      - DB_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - app-logs:/app/logs
      - static-files:/app/static
    depends_on:
      db:
        condition: service_healthy
    healthcheck:
      # curl must be present in the app image for this check to work
      test: ["CMD", "curl", "-f", "http://localhost:8080/actuator/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 60s
    networks:
      - frontend
      - backend
    restart: unless-stopped

  # ── 3. PostgreSQL DB ────────────────────────────────
  db:
    image: postgres:16-alpine
    container_name: postgres-db
    expose:
      - "5432"                 # No external exposure
    environment:
      - POSTGRES_DB=${POSTGRES_DB}
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
    volumes:
      - postgres-data:/var/lib/postgresql/data
      - ./db/init:/docker-entrypoint-initdb.d:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - backend
    restart: unless-stopped

# ── Network Definitions ──────────────────────────────
networks:
  frontend:
    driver: bridge
  backend:
    driver: bridge
    internal: true             # Blocks external internet communication

# ── Volume Definitions ───────────────────────────────
volumes:
  postgres-data:
  nginx-logs:
  app-logs:
  static-files:
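The ${POSTGRES_*} references above are resolved from a .env file placed next to docker-compose.yml. A sketch with placeholder values (the names match the variables used in the skeleton; never commit real credentials):

```
# .env — placeholder values, keep out of version control
POSTGRES_DB=appdb
POSTGRES_USER=appuser
POSTGRES_PASSWORD=change-me
```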

Dev/Staging/Production Environment Separation Strategy​

docker-compose.override.yml Pattern​

Docker Compose automatically merges the base file (docker-compose.yml) with override files. Use this to separate environment-specific configurations.

docker-compose.yml              # Common base configuration
docker-compose.override.yml     # Development environment (auto-applied)
docker-compose.staging.yml      # Staging environment
docker-compose.prod.yml         # Production environment

Development Override (docker-compose.override.yml)

# Development: source code hot reload, direct port exposure, debug settings
version: '3.9'

services:
  app:
    build:
      target: development      # Select development stage in multi-stage build
    volumes:
      - ./src:/app/src         # Real-time source code mount
    environment:
      - SPRING_PROFILES_ACTIVE=dev
      - JAVA_TOOL_OPTIONS=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005
    ports:
      - "8080:8080"            # Allow direct access for debugging
      - "5005:5005"            # Remote debug port

  db:
    ports:
      - "5432:5432"            # Allow direct DB client connection

Running staging environment

docker compose -f docker-compose.yml -f docker-compose.staging.yml up -d

Running production environment

docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d
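The production override file itself is not shown above; a minimal sketch of what docker-compose.prod.yml typically carries, assuming the service names from the base file (the memory limit and log-rotation values are illustrative, not recommendations):

```yaml
# docker-compose.prod.yml — production-only overrides
version: '3.9'

services:
  app:
    build:
      target: production       # Select the production stage of the multi-stage build
    environment:
      - SPRING_PROFILES_ACTIVE=prod
    deploy:
      resources:
        limits:
          memory: 1g           # illustrative cap
    logging:
      driver: json-file
      options:
        max-size: "10m"        # rotate container logs
        max-file: "3"
```

Note that production intentionally adds no `ports` entries for app or db, so the base file's expose-only posture is preserved.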

Volume Design​

Containers are inherently ephemeral: when a container is deleted, the data in its writable layer is lost with it. Volumes allow data to persist independently of the container lifecycle.

Volume Types by Purpose​

Volume Type    Syntax                                         Use Case
Named Volume   postgres-data:/var/lib/postgresql/data         DB data, must persist after restart
Bind Mount     ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro    Config files, editable directly from host
tmpfs          type: tmpfs, target: /tmp                      Temporary data, stored in memory

Project root/
├── nginx/
│   ├── nginx.conf           # Nginx main config (bind mount)
│   ├── conf.d/
│   │   └── default.conf     # Virtual host config (bind mount)
│   └── ssl/                 # SSL certificates (bind mount, :ro)
├── db/
│   └── init/                # DB initialization SQL (bind mount, :ro)
├── .env                     # Environment variables (excluded from git)
└── docker-compose.yml
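In Compose long syntax, the three volume types from the table are spelled out explicitly. A sketch using paths from this page (which service mounts what is illustrative here):

```yaml
services:
  db:
    volumes:
      - type: volume           # named volume: survives container recreation
        source: postgres-data
        target: /var/lib/postgresql/data
      - type: tmpfs            # tmpfs: in-memory, discarded when the container stops
        target: /tmp
  nginx:
    volumes:
      - type: bind             # bind mount: host file, editable in place
        source: ./nginx/nginx.conf
        target: /etc/nginx/nginx.conf
        read_only: true
```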

Named volumes are managed by Docker: the backing directories are created automatically under /var/lib/docker/volumes/ on the host.

Config files use bind mounts: frequently edited files like nginx.conf are mounted directly from a host path, so changes can be applied without recreating the container (via the Nginx reload command).

# Reload Nginx without restarting the container after config changes
docker exec nginx-proxy nginx -s reload

Expert Tips​

1. Use internal: true network to completely block external DB access

Setting internal: true on the backend network prevents containers in that network from communicating with external internet. This is the simplest way to enhance DB container security.

2. Manage sensitive information with Docker Secrets

Using secrets in Docker Swarm or Compose allows sensitive information to be injected securely as files instead of environment variables.

secrets:
  db_password:
    file: ./secrets/db_password.txt

services:
  db:
    secrets:
      - db_password
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/db_password
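To create the secret file referenced above (the path matches the file: entry; using openssl to generate a random value is one common approach, not the only one):

```shell
# Create the secrets directory and generate a random password file
mkdir -p secrets
openssl rand -base64 32 > secrets/db_password.txt
# Restrict read access to the file owner
chmod 600 secrets/db_password.txt
```

Keep the secrets/ directory out of version control, just like .env.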

3. Use managed DB services instead of DB containers in production

Managed databases like AWS RDS and GCP Cloud SQL provide automatic backups, failover, and patching. In production, replace the DB container with an external service and simply change the WAS container's environment variables.
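Concretely, the swap is only an environment change on the app service (the endpoint below is a hypothetical RDS hostname); the db service, the backend network, and the postgres-data volume can then be removed:

```yaml
services:
  app:
    environment:
      - DB_HOST=myapp-prod.abc123xyz.ap-northeast-2.rds.amazonaws.com  # hypothetical endpoint
      - DB_PORT=5432
    # the depends_on / healthcheck entries for the local db service are dropped
```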