
CI/CD Pipeline Integration — Automating Build, Deploy, Health Check, and Traffic Switching with GitHub Actions and Jenkins

In modern web service operations, manual deployments are a primary source of human error, long release cycles, and reluctance to deploy on nights or weekends. CI/CD (Continuous Integration/Continuous Delivery) pipelines solve these problems by automatically carrying every code change through build, test, deploy, and verification stages. This chapter covers building a fully automated CI/CD pipeline for an Nginx + Tomcat stack using GitHub Actions and Jenkins.

CI/CD Pipeline Stage Overview

A complete pipeline consists of six stages:

Source  →  Build  →  Test  →  Deploy  →  Verify  →  Traffic Switch
  │          │         │         │          │            │
 Git       Maven     JUnit      WAR      Health        Nginx
 Push      Build     Test       Copy     Check         Reload
| Stage          | Description                  | Tool                  | On Failure                    |
|----------------|------------------------------|-----------------------|-------------------------------|
| Source         | Detect code changes          | GitHub/GitLab         | Pipeline not triggered        |
| Build          | Compile and package source   | Maven/Gradle          | Immediate stop + notification |
| Test           | Unit/integration tests       | JUnit, Testcontainers | Immediate stop + notification |
| Deploy         | Deploy WAR to server         | SSH/SCP, Ansible      | Immediate stop + notification |
| Verify         | Health check and smoke tests | curl, Newman          | Automatic rollback            |
| Traffic Switch | Switch load balancer traffic | Nginx, HAProxy        | Manual intervention required  |
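The fail-fast behavior in the table (each stage halts the pipeline on failure) can be sketched as a small shell driver. The stage bodies below are illustrative stubs, not the real Maven/SSH calls:

```shell
#!/bin/bash
# Minimal fail-fast pipeline driver: each stage runs a command and
# stops the whole pipeline on the first failure.
run_stage() {                     # usage: run_stage NAME CMD [ARGS...]
  local name="$1"; shift
  if "$@"; then
    echo "[$name] ok"
  else
    echo "[$name] failed - stopping pipeline" >&2
    return 1
  fi
}

pipeline() {
  run_stage "Build"  true &&      # stand-in for: mvn clean package
  run_stage "Test"   true &&      # stand-in for: mvn verify
  run_stage "Deploy" true &&      # stand-in for: scp app.war + systemctl restart tomcat
  run_stage "Verify" true &&      # stand-in for: curl -sf http://host:8080/health
  run_stage "Switch" true         # stand-in for: nginx -s reload
}

pipeline && echo "pipeline complete"
```

The `&&` chain is the whole point: a failing `Verify` stage means `Switch` never runs, which is exactly the "stop before the traffic switch" behavior the table describes.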

GitHub Actions Workflow

Basic CI/CD Workflow

# .github/workflows/deploy.yml
name: Build and Deploy to Production

on:
  push:
    branches:
      - main     # Deploy to production on main branch push
      - staging  # Deploy to staging on staging branch push
  pull_request:
    branches:
      - main
    types: [opened, synchronize]  # Only build + test on PRs

env:
  JAVA_VERSION: '17'
  MAVEN_OPTS: '-Xmx1g'

jobs:
  # ================================================================
  # Job 1: Build and Test
  # ================================================================
  build-and-test:
    name: Build and Test
    runs-on: ubuntu-latest

    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Set up Java ${{ env.JAVA_VERSION }}
        uses: actions/setup-java@v4
        with:
          java-version: ${{ env.JAVA_VERSION }}
          distribution: 'temurin'
          cache: 'maven'

      - name: Maven build and test
        run: mvn clean package -B --no-transfer-progress

      - name: Upload test results
        uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-results
          path: target/surefire-reports/

      - name: Upload WAR artifact
        uses: actions/upload-artifact@v4
        with:
          name: app-war
          path: target/*.war
          retention-days: 7

  # ================================================================
  # Job 2: Staging Deployment (staging branch)
  # ================================================================
  deploy-staging:
    name: Deploy to Staging
    runs-on: ubuntu-latest
    needs: build-and-test
    if: github.ref == 'refs/heads/staging' && github.event_name == 'push'
    environment:
      name: staging
      url: https://staging.example.com

    steps:
      - name: Download WAR artifact
        uses: actions/download-artifact@v4
        with:
          name: app-war
          path: ./artifacts

      - name: Set up SSH key
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.STAGING_SSH_KEY }}

      - name: Deploy to staging server
        run: |
          # 1. Remove node from Nginx (draining)
          ssh -o StrictHostKeyChecking=no deploy@${{ secrets.STAGING_HOST }} \
            "sudo sed -i 's|server 127.0.0.1:8080 weight=1;|server 127.0.0.1:8080 down;|g' \
            /etc/nginx/conf.d/upstream.conf && sudo nginx -s reload"

          echo "Waiting for draining (20s)..."
          sleep 20

          # 2. Transfer WAR file
          scp ./artifacts/*.war \
            deploy@${{ secrets.STAGING_HOST }}:/opt/tomcat/webapps/ROOT.war

          # 3. Restart Tomcat
          ssh deploy@${{ secrets.STAGING_HOST }} \
            "sudo systemctl restart tomcat"

      - name: Health check
        run: |
          MAX_RETRIES=12
          RETRY_INTERVAL=10
          HEALTH_URL="http://${{ secrets.STAGING_HOST }}:8080/health"

          for i in $(seq 1 $MAX_RETRIES); do
            HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL" || echo "000")
            echo "Health check attempt $i/$MAX_RETRIES: HTTP $HTTP_STATUS"
            if [ "$HTTP_STATUS" == "200" ]; then
              echo "Health check passed!"
              break
            fi
            if [ $i -eq $MAX_RETRIES ]; then
              echo "Health check failed! Deployment failed."
              exit 1
            fi
            sleep $RETRY_INTERVAL
          done

      - name: Restore Nginx upstream
        run: |
          ssh deploy@${{ secrets.STAGING_HOST }} \
            "sudo sed -i 's|server 127.0.0.1:8080 down;|server 127.0.0.1:8080 weight=1;|g' \
            /etc/nginx/conf.d/upstream.conf && sudo nginx -s reload"
          echo "Staging deployment complete!"

  # ================================================================
  # Job 3: Production Blue-Green Deployment (main branch)
  # ================================================================
  deploy-production:
    name: Deploy to Production (Blue-Green)
    runs-on: ubuntu-latest
    needs: build-and-test
    if: github.ref == 'refs/heads/main' && github.event_name == 'push'
    environment:
      name: production
      url: https://example.com

    steps:
      - name: Checkout source code
        uses: actions/checkout@v4

      - name: Download WAR artifact
        uses: actions/download-artifact@v4
        with:
          name: app-war
          path: ./artifacts

      - name: Set up SSH key
        uses: webfactory/ssh-agent@v0.9.0
        with:
          ssh-private-key: ${{ secrets.PROD_SSH_KEY }}

      - name: Check current active environment
        id: check-active
        run: |
          ACTIVE=$(ssh -o StrictHostKeyChecking=no deploy@${{ secrets.PROD_HOST }} \
            "cat /etc/nginx/current_env || echo blue")
          echo "active=$ACTIVE" >> $GITHUB_OUTPUT
          if [ "$ACTIVE" == "blue" ]; then
            echo "target=green" >> $GITHUB_OUTPUT
            echo "active_port=8080" >> $GITHUB_OUTPUT
            echo "target_port=8081" >> $GITHUB_OUTPUT
          else
            echo "target=blue" >> $GITHUB_OUTPUT
            echo "active_port=8081" >> $GITHUB_OUTPUT
            echo "target_port=8080" >> $GITHUB_OUTPUT
          fi
          echo "Current active: $ACTIVE, Deploy target: $([ "$ACTIVE" == "blue" ] && echo green || echo blue)"

      - name: Deploy WAR to inactive environment
        run: |
          scp ./artifacts/*.war \
            deploy@${{ secrets.PROD_HOST }}:/opt/tomcat-${{ steps.check-active.outputs.target }}/webapps/ROOT.war
          ssh deploy@${{ secrets.PROD_HOST }} \
            "sudo systemctl restart tomcat-${{ steps.check-active.outputs.target }}"

      - name: Health check inactive environment
        run: |
          TARGET_PORT=${{ steps.check-active.outputs.target_port }}
          HEALTH_URL="http://${{ secrets.PROD_HOST }}:${TARGET_PORT}/health"

          for i in $(seq 1 18); do
            HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL" || echo "000")
            echo "Health check attempt $i/18: HTTP $HTTP_STATUS"
            if [ "$HTTP_STATUS" == "200" ]; then
              echo "Health check passed!"
              exit 0
            fi
            if [ $i -eq 18 ]; then
              echo "Health check failed! Canceling traffic switch."
              exit 1
            fi
            sleep 10
          done

      - name: Switch traffic (Blue-Green Swap)
        run: |
          TARGET_ENV=${{ steps.check-active.outputs.target }}

          ssh deploy@${{ secrets.PROD_HOST }} "
            sudo sed -i 's|proxy_pass http://tomcat_.*|proxy_pass http://tomcat_${TARGET_ENV};|g' \
              /etc/nginx/conf.d/app.conf
            sudo nginx -t && sudo nginx -s reload
            echo '${TARGET_ENV}' | sudo tee /etc/nginx/current_env
          "
          echo "Traffic switch complete: ${{ steps.check-active.outputs.active }} -> $TARGET_ENV"

      - name: Slack notification on success
        if: success()
        run: |
          curl -X POST ${{ secrets.SLACK_WEBHOOK_URL }} \
            -H 'Content-Type: application/json' \
            -d '{
              "text": "Production deployment succeeded",
              "attachments": [{
                "color": "good",
                "fields": [
                  {"title": "Environment", "value": "${{ steps.check-active.outputs.target }}", "short": true},
                  {"title": "Commit", "value": "${{ github.sha }}", "short": true},
                  {"title": "Deployer", "value": "${{ github.actor }}", "short": true},
                  {"title": "Branch", "value": "${{ github.ref_name }}", "short": true}
                ]
              }]
            }'

      - name: Slack notification on failure
        if: failure()
        run: |
          curl -X POST ${{ secrets.SLACK_WEBHOOK_URL }} \
            -H 'Content-Type: application/json' \
            -d '{
              "text": "Production deployment FAILED",
              "attachments": [{
                "color": "danger",
                "fields": [
                  {"title": "Commit", "value": "${{ github.sha }}", "short": true},
                  {"title": "Deployer", "value": "${{ github.actor }}", "short": true},
                  {"title": "Workflow", "value": "${{ github.workflow }}", "short": true}
                ]
              }]
            }'

Jenkins Pipeline (Jenkinsfile)

Complete Declarative Pipeline

// Jenkinsfile
pipeline {
    agent any

    options {
        buildDiscarder(logRotator(numToKeepStr: '10'))
        timeout(time: 30, unit: 'MINUTES')
        disableConcurrentBuilds()
    }

    environment {
        JAVA_HOME = tool 'JDK-17'
        MAVEN_HOME = tool 'Maven-3.9'
        PATH = "${JAVA_HOME}/bin:${MAVEN_HOME}/bin:${env.PATH}"

        // Load SSH keys from Jenkins Credentials
        PROD_SSH_KEY = credentials('prod-ssh-key')
        STAGING_SSH_KEY = credentials('staging-ssh-key')
        SLACK_WEBHOOK = credentials('slack-webhook-url')

        PROD_HOST = '192.168.1.10'
        STAGING_HOST = '192.168.1.20'
        TOMCAT_HOME = '/opt/tomcat'
    }

    parameters {
        choice(
            name: 'DEPLOY_ENV',
            choices: ['auto', 'staging', 'production'],
            description: 'Select deploy environment (auto: determined by branch)'
        )
        booleanParam(
            name: 'SKIP_TESTS',
            defaultValue: false,
            description: 'Skip tests (use only for emergency deployments)'
        )
    }

    stages {
        stage('Checkout') {
            steps {
                checkout scm
                script {
                    env.GIT_COMMIT_MSG = sh(
                        script: 'git log -1 --format="%s"',
                        returnStdout: true
                    ).trim()
                    env.GIT_AUTHOR = sh(
                        script: 'git log -1 --format="%an"',
                        returnStdout: true
                    ).trim()
                    echo "Commit: ${env.GIT_COMMIT_MSG} by ${env.GIT_AUTHOR}"
                }
            }
        }

        stage('Build') {
            steps {
                sh '''
                    mvn clean package \
                        -DskipTests=${SKIP_TESTS} \
                        -B \
                        --no-transfer-progress \
                        -Pproduction
                '''
            }
            post {
                always {
                    // allowEmptyResults keeps this from failing when SKIP_TESTS=true
                    junit allowEmptyResults: true, testResults: '**/target/surefire-reports/*.xml'
                    archiveArtifacts artifacts: 'target/*.war', fingerprint: true
                }
            }
        }

        stage('Test') {
            when {
                not { expression { params.SKIP_TESTS } }
            }
            steps {
                sh 'mvn verify -B --no-transfer-progress'
            }
        }

        stage('Deploy to Staging') {
            when {
                anyOf {
                    branch 'develop'
                    branch 'staging'
                    expression { params.DEPLOY_ENV == 'staging' }
                }
            }
            steps {
                script {
                    deployToServer(
                        host: env.STAGING_HOST,
                        sshKeyFile: env.STAGING_SSH_KEY,
                        warFile: 'target/app.war',
                        healthUrl: "http://${env.STAGING_HOST}:8080/health"
                    )
                }
            }
        }

        stage('Production Approval') {
            when {
                anyOf {
                    branch 'main'
                    expression { params.DEPLOY_ENV == 'production' }
                }
            }
            steps {
                timeout(time: 10, unit: 'MINUTES') {
                    input message: 'Deploy to production?',
                          ok: 'Approve Deployment',
                          submitter: 'devops,admin'
                }
            }
        }

        stage('Deploy to Production') {
            when {
                anyOf {
                    branch 'main'
                    expression { params.DEPLOY_ENV == 'production' }
                }
            }
            steps {
                script {
                    deployToServer(
                        host: env.PROD_HOST,
                        sshKeyFile: env.PROD_SSH_KEY,
                        warFile: 'target/app.war',
                        healthUrl: "http://${env.PROD_HOST}:8080/health",
                        nginxConf: '/etc/nginx/conf.d/upstream.conf'
                    )
                }
            }
        }
    }

    post {
        success {
            script {
                slackNotify(
                    message: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} succeeded\nCommit: ${env.GIT_COMMIT_MSG}"
                )
            }
        }
        failure {
            script {
                slackNotify(
                    message: "Build ${env.JOB_NAME} #${env.BUILD_NUMBER} FAILED\nLogs: ${env.BUILD_URL}"
                )
            }
        }
        always {
            cleanWs()
        }
    }
}

// ================================================================
// Shared deployment function
// ================================================================
def deployToServer(Map config) {
    def host = config.host
    def warFile = config.warFile
    def healthUrl = config.healthUrl
    def nginxConf = config.nginxConf ?: '/etc/nginx/conf.d/upstream.conf'

    // 1. Remove node from Nginx
    sh """
        ssh -i ${config.sshKeyFile} -o StrictHostKeyChecking=no deploy@${host} \
            "sudo sed -i 's|server 127.0.0.1:8080 weight=1;|server 127.0.0.1:8080 down;|g' \
            ${nginxConf} && sudo nginx -s reload"
    """
    echo "Node removed from Nginx, waiting for draining (20s)..."
    sleep(20)

    // 2. Deploy WAR file
    sh """
        scp -i ${config.sshKeyFile} -o StrictHostKeyChecking=no \
            ${warFile} deploy@${host}:/opt/tomcat/webapps/ROOT.war
        ssh -i ${config.sshKeyFile} -o StrictHostKeyChecking=no deploy@${host} \
            "sudo systemctl restart tomcat"
    """

    // 3. Health check
    retry(12) {
        sleep(10)
        def status = sh(
            script: "curl -s -o /dev/null -w '%{http_code}' ${healthUrl}",
            returnStdout: true
        ).trim()
        if (status != '200') {
            error("Health check failed: HTTP ${status}")
        }
        echo "Health check passed: HTTP ${status}"
    }

    // 4. Restore Nginx upstream
    sh """
        ssh -i ${config.sshKeyFile} -o StrictHostKeyChecking=no deploy@${host} \
            "sudo sed -i 's|server 127.0.0.1:8080 down;|server 127.0.0.1:8080 weight=1;|g' \
            ${nginxConf} && sudo nginx -s reload"
    """
    echo "Deployment complete!"
}

def slackNotify(Map config) {
    sh """
        curl -X POST ${SLACK_WEBHOOK} \
            -H 'Content-Type: application/json' \
            -d '{"text": "${config.message}"}'
    """
}

Deployment Health Check Script

#!/bin/bash
# health-check.sh — Health check with N retries

HEALTH_URL="${1:-http://localhost:8080/health}"
MAX_RETRIES="${2:-12}"
RETRY_INTERVAL="${3:-10}"
EXPECTED_STATUS="${4:-200}"

echo "=== Health Check Start ==="
echo "URL: $HEALTH_URL"
echo "Max retries: $MAX_RETRIES (interval: ${RETRY_INTERVAL}s)"

for i in $(seq 1 "$MAX_RETRIES"); do
  HTTP_STATUS=$(curl -s \
    --connect-timeout 5 \
    --max-time 10 \
    -o /tmp/health_response.txt \
    -w "%{http_code}" \
    "$HEALTH_URL" 2>/dev/null || echo "000")

  TIMESTAMP=$(date '+%H:%M:%S')

  if [ "$HTTP_STATUS" == "$EXPECTED_STATUS" ]; then
    RESPONSE=$(cat /tmp/health_response.txt)
    echo "[$TIMESTAMP] Health check passed! (HTTP $HTTP_STATUS)"
    echo "Response: $RESPONSE"
    exit 0
  else
    echo "[$TIMESTAMP] Attempt $i/$MAX_RETRIES: HTTP $HTTP_STATUS (waiting...)"
  fi

  if [ "$i" -lt "$MAX_RETRIES" ]; then
    sleep "$RETRY_INTERVAL"
  fi
done

echo "Health check ultimately failed (no response after $MAX_RETRIES attempts)"
echo "Last response:"
cat /tmp/health_response.txt
exit 1
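The retry loop above generalizes beyond health checks. A small helper (hypothetical, not part of the script) makes the same pattern reusable for any command:

```shell
#!/bin/bash
# retry MAX INTERVAL CMD [ARGS...] — re-run CMD until it succeeds,
# at most MAX times, sleeping INTERVAL seconds between attempts.
retry() {
  local max="$1" interval="$2" i
  shift 2
  for i in $(seq 1 "$max"); do
    if "$@"; then
      return 0
    fi
    if [ "$i" -lt "$max" ]; then
      sleep "$interval"
    fi
  done
  echo "giving up after $max attempts: $*" >&2
  return 1
}

# Example: the health check above becomes a one-liner
# retry 12 10 curl -sf -o /dev/null http://localhost:8080/health
```

Because the command and its arguments are passed through `"$@"`, the helper works for `curl`, `ssh`, or any probe the pipeline needs, and the exit code propagates cleanly to CI.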

Rollback Automation

The following script automatically restores the previous version when the post-deploy health check fails.

#!/bin/bash
# auto-rollback.sh — Automatic rollback on health check failure

TOMCAT_HOME="/opt/tomcat"
BACKUP_DIR="/opt/tomcat/backup"
CURRENT_WAR="${TOMCAT_HOME}/webapps/ROOT.war"
HEALTH_URL="http://localhost:8080/health"
NGINX_CONF="/etc/nginx/conf.d/upstream.conf"

# Log to stderr so functions can return values on stdout
log() { echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" >&2; }

# Back up current WAR before deployment
backup_current() {
  BACKUP_FILE="${BACKUP_DIR}/ROOT.war.$(date '+%Y%m%d_%H%M%S')"
  cp "$CURRENT_WAR" "$BACKUP_FILE"
  # Keep only the latest 5 backups
  ls -t "${BACKUP_DIR}"/ROOT.war.* | tail -n +6 | xargs rm -f 2>/dev/null || true
  log "Current WAR backed up: $BACKUP_FILE"
  echo "$BACKUP_FILE"
}

# Perform rollback
perform_rollback() {
  local PREV_WAR
  PREV_WAR=$(ls -t "${BACKUP_DIR}"/ROOT.war.* 2>/dev/null | head -1)

  if [ -z "$PREV_WAR" ]; then
    log "Rollback failed: no backup file found"
    exit 1
  fi

  log "Starting rollback: $PREV_WAR"

  # 1. Remove node from Nginx
  sed -i 's|server 127.0.0.1:8080 weight=1;|server 127.0.0.1:8080 down;|g' "$NGINX_CONF"
  nginx -s reload
  sleep 15

  # 2. Restore previous WAR
  cp "$PREV_WAR" "$CURRENT_WAR"
  systemctl restart tomcat

  # 3. Health check after rollback
  log "Health check after rollback..."
  for i in $(seq 1 12); do
    HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "$HEALTH_URL" || echo "000")
    if [ "$HTTP_STATUS" == "200" ]; then
      log "Rollback successful! Service restored."
      sed -i 's|server 127.0.0.1:8080 down;|server 127.0.0.1:8080 weight=1;|g' "$NGINX_CONF"
      nginx -s reload
      return 0
    fi
    sleep 10
  done

  log "Health check still failing after rollback! Immediate manual intervention required."
  return 1
}

# Main deploy + rollback logic
BACKUP_FILE=$(backup_current)
log "Deploying new version..."
cp /deploy/app.war "$CURRENT_WAR"
systemctl restart tomcat

# Run health check; roll back on failure
if ! bash health-check.sh "$HEALTH_URL" 12 10; then
  log "Deployment failure detected! Starting automatic rollback..."
  perform_rollback
fi
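The "keep only the latest 5 backups" line relies on `ls -t` (newest first) plus `tail -n +6` (everything from the sixth entry on). The behavior can be verified in isolation with throwaway files; the file names here are illustrative:

```shell
#!/bin/bash
# Create 8 fake backups in a temp dir, apply the same rotation as
# backup_current(), and count what survives.
demo_rotation() {
  local dir count
  dir=$(mktemp -d)
  for i in 1 2 3 4 5 6 7 8; do
    touch "$dir/ROOT.war.2024010${i}_000000"
  done
  # Delete all but the 5 newest entries
  ls -t "$dir"/ROOT.war.* | tail -n +6 | xargs rm -f
  count=$(ls "$dir"/ROOT.war.* | wc -l)
  rm -rf "$dir"
  echo "$count"
}

demo_rotation   # 5 backups remain
```

Running this kind of check locally before wiring the rotation into a production script is cheap insurance against an off-by-one that silently deletes every backup.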

Environment-Based Branching

# Environment branching in GitHub Actions
jobs:
  determine-environment:
    runs-on: ubuntu-latest
    outputs:
      environment: ${{ steps.set-env.outputs.environment }}
      should-deploy: ${{ steps.set-env.outputs.should-deploy }}
    steps:
      - name: Determine deployment environment
        id: set-env
        run: |
          if [[ "${{ github.ref }}" == "refs/heads/main" ]]; then
            echo "environment=production" >> $GITHUB_OUTPUT
            echo "should-deploy=true" >> $GITHUB_OUTPUT
          elif [[ "${{ github.ref }}" == "refs/heads/staging" ]]; then
            echo "environment=staging" >> $GITHUB_OUTPUT
            echo "should-deploy=true" >> $GITHUB_OUTPUT
          elif [[ "${{ github.ref }}" == "refs/heads/develop" ]]; then
            echo "environment=development" >> $GITHUB_OUTPUT
            echo "should-deploy=true" >> $GITHUB_OUTPUT
          elif true; then
            echo "environment=none" >> $GITHUB_OUTPUT
            echo "should-deploy=false" >> $GITHUB_OUTPUT
          fi
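The same branch rules can live in a small shell function (a hypothetical helper, e.g. for deploy scripts that run outside Actions) so the mapping is defined in one place:

```shell
#!/bin/bash
# Map a Git ref to a deployment environment; mirrors the workflow logic above.
env_for_ref() {
  case "$1" in
    refs/heads/main)    echo "production"  ;;
    refs/heads/staging) echo "staging"     ;;
    refs/heads/develop) echo "development" ;;
    *)                  echo "none"        ;;
  esac
}

env_for_ref "refs/heads/main"       # prints: production
env_for_ref "refs/heads/feature/x"  # prints: none
```

Feature branches fall through to "none", which downstream jobs can treat as "build only, never deploy".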

Managing Secrets for SSH Deployments

Setting Up GitHub Secrets

# Register secrets via GitHub CLI
gh secret set PROD_SSH_KEY < ~/.ssh/id_rsa_prod
gh secret set STAGING_SSH_KEY < ~/.ssh/id_rsa_staging
gh secret set SLACK_WEBHOOK_URL --body "https://hooks.slack.com/services/xxx/yyy/zzz"
gh secret set PROD_HOST --body "192.168.1.10"

Generating SSH Keys and Registering on Server

# Generate a dedicated deploy SSH key (without passphrase)
ssh-keygen -t ed25519 -C "github-actions-deploy" -f ~/.ssh/id_ed25519_deploy -N ""

# Register public key on the server
ssh-copy-id -i ~/.ssh/id_ed25519_deploy.pub deploy@192.168.1.10

# Register private key in GitHub Secrets
cat ~/.ssh/id_ed25519_deploy | gh secret set PROD_SSH_KEY

# Limit sudo permissions for deploy user (allow only required commands)
# /etc/sudoers.d/deploy:
# deploy ALL=(ALL) NOPASSWD: /bin/systemctl restart tomcat, /usr/sbin/nginx -s reload, /usr/sbin/nginx -t
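The sudo restriction sketched in the comments above would live in a drop-in file like the following (binary paths are distribution-dependent; validate the file with `sudo visudo -cf /etc/sudoers.d/deploy` before relying on it, since a syntax error can lock out sudo entirely):

```
# /etc/sudoers.d/deploy — allow the deploy user only the commands the pipeline needs
deploy ALL=(ALL) NOPASSWD: /bin/systemctl restart tomcat
deploy ALL=(ALL) NOPASSWD: /usr/sbin/nginx -s reload
deploy ALL=(ALL) NOPASSWD: /usr/sbin/nginx -t
```

Listing exact command lines (rather than `ALL`) means a leaked deploy key can restart Tomcat or reload Nginx, but cannot run arbitrary commands as root.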

Docker-Based CI/CD

# .github/workflows/docker-deploy.yml
name: Docker Build and Deploy

on:
  push:
    branches: [main]

jobs:
  docker-build-push:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: |
            ${{ secrets.DOCKERHUB_USERNAME }}/myapp:latest
            ${{ secrets.DOCKERHUB_USERNAME }}/myapp:${{ github.sha }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: docker-build-push
    runs-on: ubuntu-latest
    steps:
      - name: Pull new image and restart on server via SSH
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.PROD_HOST }}
          username: deploy
          key: ${{ secrets.PROD_SSH_KEY }}
          script: |
            # Pull new image
            docker pull ${{ secrets.DOCKERHUB_USERNAME }}/myapp:latest

            # Zero-downtime restart via docker compose
            cd /opt/app
            docker compose pull tomcat
            docker compose up -d --no-deps tomcat

            # Health check
            sleep 15
            for i in $(seq 1 10); do
              if curl -sf http://localhost:8080/health; then
                echo "Deployment successful!"
                exit 0
              fi
              sleep 10
            done
            echo "Deployment failed!" && exit 1
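The deploy step assumes a compose file already exists at /opt/app on the server with a service named `tomcat` (the name used by `docker compose up -d --no-deps tomcat`). A minimal sketch, with an illustrative image name and health endpoint, might look like:

```yaml
# /opt/app/docker-compose.yml — minimal sketch, assuming the image pushed above
services:
  tomcat:
    image: myuser/myapp:latest   # replace with your Docker Hub repository
    ports:
      - "8080:8080"
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-sf", "http://localhost:8080/health"]
      interval: 30s
      timeout: 5s
      retries: 3
```

The container-level healthcheck complements the pipeline's external `curl` loop: Docker restarts an unhealthy container on its own, while the workflow decides whether the deployment as a whole succeeded.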

Once a CI/CD pipeline is in place, developers can focus purely on code changes while build, test, deployment, and verification happen automatically. The initial setup may look complex, but once built, the pipeline significantly improves the whole team's deployment speed and stability.