
Worker Process & Thread Tuning: Nginx and Apache Optimization

Worker settings for Nginx and Apache determine how efficiently server resources are used. Tuning them to match your CPU count, memory, and request profile can dramatically increase throughput.


Nginx Worker Architecture

Nginx uses an asynchronous, event-driven architecture. Each worker process handles thousands of connections concurrently, making it far more memory-efficient than Apache's traditional process-per-connection and thread-per-connection MPMs.

Client connections  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
─────────────────▢  β”‚ Master Process (root)          β”‚
                    β”‚ config reload, worker mgmt     β”‚
                    β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                                    β”‚ fork
                    β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
                    β–Ό               β–Ό               β–Ό
                Worker #1       Worker #2       Worker #3
               (1 CPU core)    (1 CPU core)    (1 CPU core)
                async I/O       async I/O       async I/O
              10,000+ conns   10,000+ conns   10,000+ conns

Nginx Worker Configuration

# /etc/nginx/nginx.conf

# auto = one worker per CPU core (recommended)
worker_processes auto;

# Pin workers to specific CPUs (improves L3 cache efficiency on NUMA)
# worker_cpu_affinity auto; # Nginx 1.9.10+

# Maximum open files per worker (match OS limit)
worker_rlimit_nofile 65535;

events {
    # Maximum simultaneous connections per worker
    # Total capacity = worker_processes Γ— worker_connections
    worker_connections 4096;

    # Best event notification model on Linux
    use epoll;

    # accept_mutex: only one worker accepts new connections at a time
    # High traffic: off (better throughput); low traffic: on (saves CPU)
    accept_mutex off;

    # Accept as many new connections as possible in one call
    multi_accept on;
}
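The resulting connection ceiling can be sanity-checked on the host itself. A minimal sketch, assuming `worker_processes auto` resolves to one worker per core (as reported by `nproc`) and the `worker_connections 4096` value from the config above:

```shell
# Assumed: worker_connections 4096, as in the config above
WORKER_CONNECTIONS=4096

# With "worker_processes auto", Nginx starts one worker per core
WORKERS=$(nproc)

# Upper bound on simultaneous client connections
CAPACITY=$((WORKERS * WORKER_CONNECTIONS))
echo "workers=$WORKERS  max connections=$CAPACITY"
```

When proxying, each client connection can consume two file descriptors (one to the client, one upstream), so `worker_rlimit_nofile` should comfortably exceed `worker_connections`.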

OS File Descriptor Limits

# Check current limits
ulimit -n # soft limit
ulimit -Hn # hard limit

# /etc/security/limits.conf
nginx soft nofile 65535
nginx hard nofile 65535
* soft nofile 65535
* hard nofile 65535

# For systemd: /etc/systemd/system/nginx.service.d/override.conf
[Service]
LimitNOFILE=65535

sudo systemctl daemon-reload
sudo systemctl restart nginx

# System-wide limit
echo 2097152 | sudo tee /proc/sys/fs/file-max
echo "fs.file-max = 2097152" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
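Whether the system-wide limit is actually under pressure can be read from `/proc/sys/fs/file-nr`, which Linux exposes as three fields: allocated handles, allocated-but-unused handles, and the current maximum. A small Linux-specific sketch:

```shell
# /proc/sys/fs/file-nr fields: <allocated> <unused> <max>
read -r ALLOCATED UNUSED MAX < /proc/sys/fs/file-nr
echo "allocated=$ALLOCATED max=$MAX"

# Warn when more than ~80% of system-wide handles are in use
if [ $((ALLOCATED * 100)) -ge $((MAX * 80)) ]; then
    echo "WARNING: approaching fs.file-max"
fi
```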

Apache MPM Selection

Apache's Multi-Processing Module (MPM) determines how requests are handled.

MPM       Model          Notes                                      HTTP/2
Prefork   Process        Compatible with mod_php; high memory       ❌
Worker    Thread         Better memory use; PHP unstable            Limited
Event     Async threads  Best keep-alive efficiency; recommended    βœ…

# Switch Prefork β†’ Event
sudo a2dismod mpm_prefork
sudo a2enmod mpm_event
sudo systemctl restart apache2

# If using PHP: switch from mod_php to php-fpm
sudo a2dismod php8.1
sudo a2enmod proxy_fcgi setenvif
sudo a2enconf php8.1-fpm
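Once PHP moves to PHP-FPM, its concurrency is no longer governed by Apache's MPM but by the FPM pool configuration. A minimal sketch of the relevant knobs, assuming the Debian/Ubuntu PHP 8.1 layout used above; the numeric values are illustrative placeholders, not recommendations:

```ini
; /etc/php/8.1/fpm/pool.d/www.conf -- illustrative values only
; With mod_php disabled, PHP concurrency is capped here, not by Apache
pm = dynamic
pm.max_children = 50       ; hard cap on PHP worker processes
pm.start_servers = 10
pm.min_spare_servers = 5
pm.max_spare_servers = 15
pm.max_requests = 1000     ; recycle workers to contain memory leaks
```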

MPM Event Tuning

# /etc/apache2/mods-available/mpm_event.conf

<IfModule mpm_event_module>
    StartServers              2
    MinSpareThreads           25
    MaxSpareThreads           75
    ThreadsPerChild           25
    MaxRequestWorkers         400
    ServerLimit               16
    MaxConnectionsPerChild    10000
    AsyncRequestWorkerFactor  2
</IfModule>
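These directives interact: Apache caps `MaxRequestWorkers` at `ServerLimit Γ— ThreadsPerChild`, logging a warning at startup if the configured value exceeds that ceiling. The values above can be checked with simple arithmetic:

```shell
# Values from the mpm_event.conf sketch above
SERVER_LIMIT=16
THREADS_PER_CHILD=25
MAX_REQUEST_WORKERS=400

CEILING=$((SERVER_LIMIT * THREADS_PER_CHILD))
echo "thread ceiling: $CEILING"    # 16 * 25 = 400

if [ "$MAX_REQUEST_WORKERS" -gt "$CEILING" ]; then
    echo "MaxRequestWorkers will be capped to $CEILING"
fi
```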

Small server (2 CPU, 4 GB RAM)

# Nginx
worker_processes 2;
worker_connections 2048;

# Apache (mpm_event)
MaxRequestWorkers 150
ThreadsPerChild 25
ServerLimit 6

Medium server (8 CPU, 16 GB RAM)

# Nginx
worker_processes auto;    # resolves to 8
worker_connections 4096;

# Apache (mpm_event)
MaxRequestWorkers 400
ThreadsPerChild 25
ServerLimit 16

Large server (32 CPU, 64 GB RAM)

# Nginx
worker_processes auto;    # resolves to 32
worker_connections 16384;
worker_rlimit_nofile 65535;

# Apache (mpm_event)
MaxRequestWorkers 2000
ThreadsPerChild 50
ServerLimit 40
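These profiles assume CPU is the limiting resource. On memory-bound workloads, a common cross-check divides the RAM left after other services by the average per-worker footprint; the reserve and per-worker figures below are placeholders to replace with measurements (e.g. RSS from `ps`) on your own host:

```shell
# Placeholder inputs -- measure these on your own host
TOTAL_RAM_MB=16384     # e.g. a 16 GB server
RESERVED_MB=2048       # OS, database, cache, ...
PER_WORKER_MB=20       # average RSS per worker (e.g. from ps)

MAX_WORKERS=$(( (TOTAL_RAM_MB - RESERVED_MB) / PER_WORKER_MB ))
echo "memory-based ceiling: $MAX_WORKERS workers"    # 716 with these inputs
```

If this ceiling is lower than the CPU-based figure, memory wins: size `MaxRequestWorkers` (or PHP-FPM's `pm.max_children`) to the smaller number.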

Monitoring Worker Status

# Enable the Nginx status page
server {
    listen 127.0.0.1:8080;

    location /nginx_status {
        stub_status on;
        access_log off;
    }
}

curl http://127.0.0.1:8080/nginx_status
# Active connections: 150
# Reading: 3 Writing: 12 Waiting: 135
#
# Reading = connections reading request headers
# Writing = connections currently sending a response
# Waiting = idle keepalive connections

# Apache status
sudo a2enmod status
sudo systemctl restart apache2
curl http://127.0.0.1/server-status?auto
# BusyWorkers: 25
# IdleWorkers: 175
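For alerting, the stub_status counters can be parsed with standard tools. A minimal awk sketch, fed here with a sample payload matching the example output above (substitute the actual `curl` call in practice):

```shell
# Sample stub_status payload (matches the example output above)
STATUS='Active connections: 150
server accepts handled requests
 10000 10000 25000
Reading: 3 Writing: 12 Waiting: 135'

echo "$STATUS" | awk '
    /Active connections/ { active = $3 }
    /Reading/ { reading = $2; writing = $4; waiting = $6 }
    END { printf "active=%d busy=%d idle=%d\n", active, reading + writing, waiting }
'
# prints: active=150 busy=15 idle=135
```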

Capacity Calculation

Nginx max concurrent connections:
= worker_processes Γ— worker_connections
= 8 Γ— 4096 = 32,768

Apache max concurrent requests:
= ServerLimit Γ— ThreadsPerChild
= 16 Γ— 25 = 400

Tomcat max concurrent requests:
= maxThreads (default: 200)

When Nginx/Apache proxies to Tomcat, make sure Tomcat's maxThreads is not the bottleneck relative to the upstream worker count.
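That bottleneck check reduces to arithmetic. A sketch using this section's example numbers (the 8-worker Nginx front end against Tomcat's default `maxThreads` of 200):

```shell
# Front end: the medium profile above (8 workers x 4096 connections)
NGINX_CAPACITY=$((8 * 4096))       # 32768
# Back end: Tomcat default
TOMCAT_MAX_THREADS=200

if [ "$TOMCAT_MAX_THREADS" -lt "$NGINX_CAPACITY" ]; then
    echo "Tomcat caps concurrency: $TOMCAT_MAX_THREADS < $NGINX_CAPACITY"
fi
```

The comparison overstates real upstream demand, since many front-end connections are idle keepalives, but it makes the mismatch explicit before load testing does.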