Connection Optimization — keepalive, Buffers, Timeout Tuning
This chapter covers the key settings for maximizing connection efficiency between Nginx and Tomcat: keepalive connection reuse, buffer sizing, and tuning all three timeout types (connect, send, read).
Understanding the Connection Flow
To optimize effectively, first understand the full flow from Nginx to Tomcat.
Client ←→ Nginx ←→ Tomcat
[Connection establishment]
1. Client → Nginx: TCP connection + TLS handshake
2. Nginx → Tomcat: TCP connection (within proxy_connect_timeout)
[Request forwarding]
3. Nginx → Tomcat: request headers + body (within proxy_send_timeout)
[Response receiving]
4. Tomcat → Nginx: response headers + body (within proxy_read_timeout)
5. Nginx → Client: response delivery
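Step 2 of this flow can be isolated in a short Python sketch: a timed TCP handshake bounded the same way `proxy_connect_timeout` bounds it. The function name and the timeout value are illustrative assumptions, not part of any Nginx API.

```python
import socket
import time

def timed_connect(host, port, timeout_s):
    """Step 2 above in isolation: the TCP handshake that
    proxy_connect_timeout bounds. Raises a timeout error if the peer
    does not complete the handshake within timeout_s (Nginx would
    answer the client with a gateway error instead)."""
    start = time.monotonic()
    sock = socket.create_connection((host, port), timeout=timeout_s)
    try:
        return time.monotonic() - start  # handshake latency in seconds
    finally:
        sock.close()
```

Steps 3 and 4 are bounded the same way by `proxy_send_timeout` and `proxy_read_timeout`, but on individual socket writes and reads rather than on the handshake.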
keepalive — Connection Reuse
Establishing a new TCP connection for every request puts pressure on Tomcat. Reusing connections with keepalive significantly reduces latency and CPU usage.
Nginx Upstream keepalive
upstream tomcat_backend {
server 127.0.0.1:8080;
# keepalive: idle connections to Tomcat cached per worker process
# Rule of thumb: roughly 50% of steady-state concurrent requests,
# without exceeding Tomcat's maxThreads
keepalive 64;
# Max requests per keepalive connection (then renew)
keepalive_requests 1000;
# Keepalive connection duration
keepalive_time 1h;
keepalive_timeout 75s;
}
server {
location / {
proxy_pass http://tomcat_backend;
# Required for keepalive: HTTP/1.1 + empty Connection header
proxy_http_version 1.1;
proxy_set_header Connection ""; # empty string, NOT "keep-alive"
}
}
Important: Both `proxy_http_version 1.1` and `proxy_set_header Connection "";` are required for keepalive to work. Without them, Nginx speaks HTTP/1.0 to the upstream and sends `Connection: close` on every request. Do not forward `Connection: keep-alive` to Tomcat either; `Connection` is a hop-by-hop header, and passing it through can cause issues.
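To see what connection reuse buys, here is a minimal Python sketch that issues several requests over one persistent HTTP/1.1 connection, the way Nginx reuses its upstream pool. The helper name is mine, and the backend host/port are assumptions for illustration.

```python
import http.client

def fetch_over_one_connection(host, port, paths):
    """Issue several GET requests over a single persistent connection,
    mirroring how Nginx reuses upstream keepalive connections.
    Returns the list of statuses and whether one socket served them all."""
    conn = http.client.HTTPConnection(host, port)  # HTTP/1.1, keep-alive by default
    statuses, sockets = [], []
    for path in paths:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()                 # drain the body so the socket can be reused
        statuses.append(resp.status)
        sockets.append(conn.sock)   # same object => the TCP connection was reused
    conn.close()
    return statuses, len(set(map(id, sockets))) == 1
```

Each reused request skips the TCP handshake entirely, which is exactly the saving the `keepalive` directive gives Nginx against Tomcat.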
Tomcat HTTP Connector keepalive
Both Nginx and Tomcat must support keepalive.
<!-- server.xml -->
<!-- keepAliveTimeout (ms) should match Nginx keepalive_timeout;
     maxKeepAliveRequests should match Nginx keepalive_requests -->
<Connector port="8080" protocol="HTTP/1.1"
           keepAliveTimeout="75000"
           maxKeepAliveRequests="1000"
           connectionTimeout="20000"
           maxThreads="200"/>
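An easy mistake when mirroring these values: Nginx timeouts are written with unit suffixes (75s, 1h) while Tomcat attributes such as keepAliveTimeout expect milliseconds. A small hedged helper to keep the two sides in sync (the function name is mine; the bare-number fallback assumes seconds, which is the default unit for these particular Nginx directives):

```python
_UNITS_MS = {"ms": 1, "s": 1000, "m": 60_000, "h": 3_600_000}

def nginx_to_tomcat_ms(value):
    """Convert an Nginx time value ('75s', '1h') to the milliseconds
    Tomcat attributes such as keepAliveTimeout expect."""
    for suffix in ("ms", "s", "m", "h"):
        if value.endswith(suffix):
            return int(value[: -len(suffix)]) * _UNITS_MS[suffix]
    return int(value) * 1000  # bare numbers: seconds for these directives
```

So `keepalive_timeout 75s;` pairs with `keepAliveTimeout="75000"`, as in the Connector above.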
Three Timeout Types Explained
location / {
proxy_pass http://tomcat_backend;
# 1. connect_timeout: max wait for the Nginx -> Tomcat TCP handshake.
#    Nginx returns 504 on timeout (502 if the connection is refused)
proxy_connect_timeout 10s;
# 2. send_timeout: max gap between two successive writes of the request
#    to Tomcat. Slow client uploads only matter here when
#    proxy_request_buffering is off
proxy_send_timeout 60s;
# 3. read_timeout: max gap between two successive reads of the response
#    from Tomcat. Must exceed Tomcat's longest processing time
proxy_read_timeout 60s;
}
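The read timeout is the one most often hit in practice. The sketch below, assuming a reachable backend (host, port, and helper name are mine), bounds the wait for response data and maps a timeout to the 504 that Nginx would send the client:

```python
import http.client
import socket

def get_with_read_timeout(host, port, path, timeout_s):
    """Bound the wait for response data the way proxy_read_timeout does.
    Returns the upstream status, or 504 (the status Nginx would send
    the client) when the read times out."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout_s)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        resp.read()
        return resp.status
    except socket.timeout:
        return 504
    finally:
        conn.close()
```

This is why `proxy_read_timeout` must exceed the slowest legitimate Tomcat response on that path; otherwise healthy requests surface as 504s.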
Timeout Configuration Guide
| Scenario | connect | send | read |
|---|---|---|---|
| General API | 5~10s | 30s | 30~60s |
| File upload | 5~10s | 300s | 60s |
| File download | 5~10s | 60s | 300s |
| Long-running processing | 5~10s | 60s | 600s+ |
| WebSocket | 5~10s | 3600s | 3600s |
Per-Path Timeout Separation
# General API
location /api/ {
proxy_pass http://tomcat_backend;
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# File upload
location /api/upload {
proxy_pass http://tomcat_backend;
proxy_connect_timeout 5s;
proxy_send_timeout 300s;
proxy_read_timeout 60s;
proxy_request_buffering off;
client_max_body_size 500m;
}
# Batch operations
location /api/batch/ {
proxy_pass http://tomcat_backend;
proxy_connect_timeout 5s;
proxy_send_timeout 60s;
proxy_read_timeout 600s;
}
Buffer Configuration
Buffers are the temporary storage where Nginx holds Tomcat's response before (and while) relaying it to the client. With buffering on and buffers sized to fit the response, Tomcat's worker thread is freed as soon as it finishes writing, no matter how slowly the client reads.
location / {
proxy_pass http://tomcat_backend;
proxy_buffering on;
# Buffer for response headers (increase if headers are large)
proxy_buffer_size 16k;
# Buffers for response body (count × size)
proxy_buffers 8 64k; # 512KB total
# Limit on buffers that may be busy sending to the client
# while the rest of the response is still being read from Tomcat
proxy_busy_buffers_size 128k;
# Temporary file (when buffers are full)
proxy_temp_path /var/cache/nginx/proxy_temp;
proxy_max_temp_file_size 1024m;
}
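Buffer settings multiply across concurrent connections, so it is worth estimating the worst case before raising them. A back-of-the-envelope Python sketch (the 1000-connection figure is an assumption for illustration):

```python
def proxy_buffer_memory_kb(connections, buffer_size_kb, buffers_count, buffers_size_kb):
    """Worst case: every connection fills its header buffer plus all of
    its body buffers at once. A deliberate upper bound -- real usage is
    lower because Nginx allocates buffers on demand."""
    per_connection_kb = buffer_size_kb + buffers_count * buffers_size_kb
    return per_connection_kb, connections * per_connection_kb

per_conn_kb, total_kb = proxy_buffer_memory_kb(1000, 16, 8, 64)
# With the settings above (16k header + 8 x 64k body = 528 KB each),
# 1000 concurrent connections can pin up to ~516 MB of buffer memory.
```

If that upper bound is uncomfortable for your RAM budget, shrink `proxy_buffers` before shrinking `proxy_buffer_size`; the header buffer must still fit the largest response headers.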
Buffer Size Guidelines
| Response Size | Recommended Setting |
|---|---|
| Small API (< 4KB) | buffer_size 4k, buffers 4 4k |
| Normal web page (< 64KB) | buffer_size 16k, buffers 4 64k |
| Large JSON (< 1MB) | buffer_size 32k, buffers 8 128k |
| File download | proxy_buffering off (streaming) |
Proxy Cache (Optional)
Caching API responses that don't change frequently reduces Tomcat load.
# nginx.conf http block
proxy_cache_path /var/cache/nginx/proxy_cache
levels=1:2
keys_zone=tomcat_cache:10m
max_size=1g
inactive=60m
use_temp_path=off;
location /api/products {
proxy_pass http://tomcat_backend;
proxy_cache tomcat_cache;
proxy_cache_key "$scheme$request_method$host$request_uri";
proxy_cache_valid 200 5m;
proxy_cache_valid 404 1m;
proxy_cache_use_stale error timeout updating;
proxy_cache_lock on;
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_bypass $http_authorization;
proxy_no_cache $http_authorization;
}
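When debugging cache behavior it helps to know where a given key lands on disk: Nginx names the cache file after the MD5 of the cache key and builds subdirectories from the tail of the hex digest according to `levels=`. A Python sketch of that mapping (the sample key mirrors what the `proxy_cache_key` above would produce; the function name is mine):

```python
import hashlib

def cache_file_path(root, key, levels=(1, 2)):
    """Where Nginx stores a cached response: the file is named after the
    MD5 of the cache key, with subdirectory names taken from the tail of
    the hex digest per the levels= parameter (levels=1:2 here)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    parts, pos = [], len(digest)
    for width in levels:
        parts.append(digest[pos - width:pos])
        pos -= width
    return "/".join([root, *parts, digest])

# Key built the way proxy_cache_key above would for a sample request:
print(cache_file_path("/var/cache/nginx/proxy_cache",
                      "httpsGETexample.com/api/products"))
```

This makes it easy to confirm whether a response was actually cached, or to purge a single entry by deleting its file.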
Complete Optimized Configuration
upstream tomcat_backend {
server 127.0.0.1:8080;
keepalive 64;
keepalive_requests 1000;
keepalive_timeout 75s;
}
server {
listen 443 ssl http2;
server_name example.com;
client_max_body_size 50m;
client_body_buffer_size 128k;
location /api/ {
proxy_pass http://tomcat_backend;
proxy_http_version 1.1;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_connect_timeout 10s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
proxy_buffering on;
proxy_buffer_size 16k;
proxy_buffers 4 64k;
proxy_busy_buffers_size 128k;
}
location ~* \.(css|js|png|jpg|webp|woff2)$ {
root /var/www/myapp;
expires 1y;
add_header Cache-Control "public, immutable";
access_log off;
}
}
Summary
| Optimization | Setting | Effect |
|---|---|---|
| keepalive | keepalive 64 + HTTP/1.1 | Eliminate TCP reconnect overhead |
| connect timeout | proxy_connect_timeout 10s | Fast detection of failed Tomcat |
| read timeout | proxy_read_timeout 30~60s | Exceed Tomcat's processing time |
| Buffering | proxy_buffers 4 64k | Fast release of Tomcat threads |
| Cache | proxy_cache_path | Reduce repeated request load on Tomcat |
| HTTP/2 | listen 443 ssl http2 | Client multiplexing benefits |