
HTTPS Offloading (SSL Termination)

HTTPS offloading (SSL termination) is an architectural pattern in which a web server or load balancer terminates TLS and communicates with backend servers over plain HTTP. It removes the TLS processing burden from the application servers, improving performance and simplifying certificate management.


SSL Termination vs SSL Passthrough

SSL Termination (Offloading)

[Client] ──HTTPS──▶ [Nginx/Apache] ──HTTP──▶ [App Server]
                    TLS ends here    plain-text internally

Advantages:

  • Certificate lives in one place (the load balancer)
  • Backend servers have zero TLS overhead
  • L7 routing based on HTTP headers and URLs is possible

Trade-offs:

  • Load balancer β†’ backend segment is unencrypted (acceptable on a trusted internal network)
  • Not suitable when end-to-end encryption is required
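
Because TLS ends at the proxy, the decrypted request line and headers are visible there, which is what makes L7 routing possible. A minimal Nginx sketch of this, with illustrative upstream names:

```nginx
# L7 routing only works after termination: the proxy can read the
# decrypted request. Upstream names here are illustrative.
location /api/ {
    proxy_pass http://api_backend;      # path-based routing
}

location /static/ {
    proxy_pass http://static_backend;
}

location / {
    proxy_pass http://web_backend;
}
```

With passthrough, none of these `location` blocks could match, because the proxy never sees the HTTP request.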

SSL Passthrough

[Client] ──HTTPS──▶ [L4 LB] ──HTTPS──▶ [App Server]
                    passes TLS through TLS ends here

Advantages:

  • End-to-end encryption maintained

Trade-offs:

  • Only L4 routing possible; no header-based routing
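
For contrast, a passthrough setup in Nginx uses the L4 `stream` module. This is a sketch under assumed hostnames, showing that the proxy only relays bytes:

```nginx
# nginx.conf — L4 passthrough sketch (hostnames are illustrative).
# The TLS stream is relayed unchanged: no certificate is needed here,
# and HTTP paths/headers are not available for routing decisions.
stream {
    upstream tls_backend {
        server app1.internal:8443;
        server app2.internal:8443;
    }

    server {
        listen 443;
        proxy_pass tls_backend;   # forwards the raw TLS byte stream
    }
}
```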


Nginx SSL Termination Configuration

# /etc/nginx/conf.d/ssl-termination.conf

upstream backend_http {
    server app1.internal:8080;
    server app2.internal:8080;
    server app3.internal:8080;
    keepalive 32;
}

# HTTPS offloading
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_name example.com;

    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_session_cache   shared:SSL:10m;

    location / {
        # Forward as plain HTTP to the backend
        proxy_pass http://backend_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # Pass original HTTPS info via headers (essential!)
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Port  443;
        proxy_set_header X-Real-IP         $remote_addr;
        proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
        proxy_set_header Host              $host;
    }
}

# HTTP → HTTPS redirect
server {
    listen 80;
    server_name example.com;
    return 301 https://$host$request_uri;
}

Apache SSL Termination Configuration

# Certificate on Apache only; backend receives plain HTTP
<VirtualHost *:443>
    ServerName example.com

    SSLEngine On
    SSLCertificateFile    /etc/ssl/example.com/fullchain.pem
    SSLCertificateKeyFile /etc/ssl/example.com/privkey.pem
    SSLProtocol all -SSLv3 -TLSv1 -TLSv1.1

    ProxyRequests Off
    ProxyPreserveHost On

    # Forward to HTTP backend pool
    <Proxy "balancer://backend">
        BalancerMember "http://app1.internal:8080"
        BalancerMember "http://app2.internal:8080"
        ProxySet lbmethod=bybusyness
    </Proxy>

    ProxyPass        "/" "balancer://backend/"
    ProxyPassReverse "/" "balancer://backend/"

    # Forward HTTPS info via headers
    RequestHeader set X-Forwarded-Proto "https"
    RequestHeader set X-Forwarded-Port  "443"
</VirtualHost>

Handling X-Forwarded-Proto in the Backend

Behind an offloading proxy, the backend receives plain HTTP, so without this header it cannot tell whether the original client connected over HTTPS. It must read X-Forwarded-Proto to generate redirects, HSTS headers, and cookie Secure flags correctly.
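
Framework specifics follow, but the underlying decision is the same everywhere: trust X-Forwarded-Proto only when the request arrived from a known proxy, because otherwise a client could spoof it. A minimal sketch in plain Java (the class name and proxy addresses are illustrative):

```java
import java.util.Set;

// Sketch of the trust decision a backend must make behind an offloading
// proxy. The header is only honored when the TCP peer is a known proxy;
// otherwise a client could forge X-Forwarded-Proto.
public class ForwardedProtoResolver {

    // Internal addresses allowed to set X-Forwarded-Proto (illustrative)
    private static final Set<String> TRUSTED_PROXIES =
            Set.of("10.0.0.1", "10.0.0.2");

    /**
     * @param peerAddress    the TCP peer that delivered the request
     * @param forwardedProto value of X-Forwarded-Proto, or null if absent
     * @return true if the original client connection used HTTPS
     */
    public static boolean clientUsedHttps(String peerAddress, String forwardedProto) {
        if (!TRUSTED_PROXIES.contains(peerAddress)) {
            return false; // untrusted peer: ignore the header entirely
        }
        return "https".equalsIgnoreCase(forwardedProto);
    }

    public static void main(String[] args) {
        // Trusted LB reports the client used HTTPS
        System.out.println(clientUsedHttps("10.0.0.1", "https"));   // true
        // Same header from an untrusted peer is ignored
        System.out.println(clientUsedHttps("203.0.113.9", "https")); // false
    }
}
```

The valve and property mechanisms below apply exactly this logic for you, which is why they all pair a proxy allow-list with the header name.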

Spring Boot

# application.yml (Spring Boot 2.2+)
server:
  forward-headers-strategy: native   # trust X-Forwarded-* headers

# Or, in Spring Boot 2.1 and earlier
server:
  use-forward-headers: true

// Force HTTPS in Spring Security when behind an offloading proxy
@Configuration
public class SecurityConfig {

    @Bean
    public SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        http
            // Detect HTTPS from the X-Forwarded-Proto header
            .requiresChannel(channel ->
                channel.requestMatchers(r ->
                    r.getHeader("X-Forwarded-Proto") != null
                ).requiresSecure()
            )
            // HSTS
            .headers(headers -> headers
                .httpStrictTransportSecurity(hsts -> hsts
                    .includeSubDomains(true)
                    .maxAgeInSeconds(63072000)
                )
            );
        return http.build();
    }
}

Tomcat RemoteIpValve

<!-- server.xml: trust X-Forwarded-Proto -->
<Valve className="org.apache.catalina.valves.RemoteIpValve"
       remoteIpHeader="X-Real-IP"
       protocolHeader="X-Forwarded-Proto"
       protocolHeaderHttpsValue="https"
       trustedProxies="10\.0\.0\.\d{1,3}" />
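
When the backend is Spring Boot with embedded Tomcat, the same valve settings can be supplied as configuration properties. The `server.tomcat.remoteip.*` names below assume Spring Boot 2.2+; verify them against your version:

```yaml
# application.yml — embedded Tomcat equivalent of RemoteIpValve
# (assumption: Spring Boot 2.2+ property names)
server:
  tomcat:
    remoteip:
      remote-ip-header: X-Real-IP
      protocol-header: X-Forwarded-Proto
      internal-proxies: 10\.0\.0\.\d{1,3}
```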

SSL Re-encryption (Encrypt Internal Traffic Too)

In high-security environments, traffic between the load balancer and the backend is also encrypted (re-encryption): the proxy still terminates client TLS, then opens a new TLS connection to the backend.

upstream backend_https {
    server app1.internal:8443;
    server app2.internal:8443;
}

server {
    listen 443 ssl;
    # ... client-side SSL config ...

    location / {
        proxy_pass https://backend_https;   # HTTPS to backend

        # Verify the internal certificate
        proxy_ssl_verify on;
        proxy_ssl_trusted_certificate /etc/ssl/internal-ca.crt;
        proxy_ssl_certificate         /etc/ssl/client.crt;   # mTLS client cert
        proxy_ssl_certificate_key     /etc/ssl/client.key;

        proxy_set_header X-Forwarded-Proto https;
    }
}

Certificate Placement by Architecture

Configuration         Certificate Location         Internal Traffic
-------------------   --------------------------   ---------------------------
Simple Termination    1 LB                         HTTP (plain)
HA LB + Termination   2 LBs shared (Keepalived)    HTTP (plain)
Re-encryption         LB + each App server         HTTPS (internal cert OK)
Passthrough           Each App server              HTTPS (client certificate)

Performance Comparison

TLS processing is CPU-intensive. When Nginx handles TLS, Tomcat's CPU is freed for application work.

# Without TLS offloading (Tomcat handles TLS directly)
ab -n 10000 -c 100 -k https://app:8443/api/test

# With TLS offloading (Nginx β†’ Tomcat over HTTP)
ab -n 10000 -c 100 -k https://nginx/api/test

# Offloading typically reduces Tomcat CPU usage by 30–50%

The next page covers security-hardening headers: HSTS, CSP, and more.