Kubernetes Deployment — Go Application Operations Automation
Kubernetes (K8s) is a container orchestration platform. Here we explore core patterns for deploying and operating Go apps on K8s.
Basic Deployment Manifest
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: production
  labels:
    app: user-service
    version: v1.0.0
spec:
  replicas: 3
  selector:
    matchLabels:
      app: user-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # Max additional pods during update
      maxUnavailable: 0  # Maintain full availability during update
  template:
    metadata:
      labels:
        app: user-service
    spec:
      # Graceful shutdown budget (pod-level field, not per-container)
      terminationGracePeriodSeconds: 60
      containers:
        - name: user-service
          image: myregistry/user-service:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: user-service-secrets
                  key: database-url
            - name: APP_ENV
              value: production
          # Resource requests/limits (strongly recommended)
          resources:
            requests:
              cpu: "100m"    # 0.1 CPU
              memory: "64Mi"
            limits:
              cpu: "500m"    # 0.5 CPU
              memory: "128Mi"
          # Health checks
          livenessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 10
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 3
          # Graceful shutdown: delay SIGTERM so endpoint updates propagate first
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 5"]
      # Pod distribution (avoid clustering on the same node)
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: user-service
Service & Ingress
# k8s/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: production
spec:
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
  type: ClusterIP  # Cluster-internal access

# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user-service
  namespace: production
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
    - hosts:
        - api.example.com
      secretName: api-tls
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /users
            pathType: Prefix
            backend:
              service:
                name: user-service
                port:
                  number: 80
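Inside the cluster, other workloads reach the Service by its DNS name rather than by pod IP. A small sketch of composing that address from the manifest's name and namespace (the helper function is illustrative):

```go
package main

import "fmt"

// serviceURL builds the cluster-internal URL for a Service:
// <name>.<namespace>.svc.cluster.local on the Service port (80 above),
// which kube-proxy forwards to the pods' targetPort (8080).
func serviceURL(name, namespace, path string) string {
	return fmt.Sprintf("http://%s.%s.svc.cluster.local%s", name, namespace, path)
}

func main() {
	fmt.Println(serviceURL("user-service", "production", "/users"))
	// → http://user-service.production.svc.cluster.local/users
}
```

Within the same namespace, the short form `http://user-service` resolves as well.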
Configuration Management
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-service-config
  namespace: production
data:
  APP_ENV: "production"
  LOG_LEVEL: "info"
  MAX_CONNECTIONS: "100"
---
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: user-service-secrets
  namespace: production
type: Opaque
stringData:  # stringData accepts plaintext; stored base64-encoded
  database-url: "postgres://user:password@postgres:5432/mydb"
  redis-url: "redis://:password@redis:6379"
  jwt-secret: "supersecretkey"

# Use ConfigMap/Secret in deployment.yaml via envFrom
spec:
  containers:
    - name: user-service
      envFrom:
        - configMapRef:
            name: user-service-config
        - secretRef:
            name: user-service-secrets
HPA — Horizontal Pod Autoscaling
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service
  namespace: production
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30   # Prevent abrupt scaling up
    scaleDown:
      stabilizationWindowSeconds: 300  # Prevent abrupt scaling down
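Under the hood, the HPA controller computes desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue), clamped to the min/max bounds. A small Go sketch of that formula (the function is illustrative, not part of any Kubernetes client library):

```go
package main

import (
	"fmt"
	"math"
)

// desiredReplicas applies the HPA scaling formula from the Kubernetes
// docs: ceil(currentReplicas * currentUtil / targetUtil), clamped to
// the minReplicas/maxReplicas bounds from the manifest.
func desiredReplicas(current int, currentUtil, targetUtil float64, min, max int) int {
	d := int(math.Ceil(float64(current) * currentUtil / targetUtil))
	if d < min {
		d = min
	}
	if d > max {
		d = max
	}
	return d
}

func main() {
	// 3 pods averaging 90% CPU against a 70% target → scale to 4
	fmt.Println(desiredReplicas(3, 90, 70, 2, 20))
}
```

With multiple metrics (CPU and memory above), the controller evaluates each and takes the largest result.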
Database Migration Job
# k8s/migration-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: user-service-migration-v1-0-0
  namespace: production
spec:
  ttlSecondsAfterFinished: 300  # Delete 5 minutes after completion
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migration
          image: myregistry/user-service:1.0.0
          command: ["/migrate"]
          args: ["up"]
          env:
            - name: DATABASE_URL
              valueFrom:
                secretKeyRef:
                  name: user-service-secrets
                  key: database-url
  backoffLimit: 3
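The Job assumes the image ships a /migrate entrypoint. What that binary does is app-specific; a hedged sketch of its overall shape (the actual SQL work, e.g. via a library such as golang-migrate, is stubbed out here):

```go
package main

import (
	"fmt"
	"os"
)

// runMigration dispatches the Job's argument ("up" or "down").
// Real schema changes — applying versioned SQL files and recording
// them in a migrations table — would go inside each case.
func runMigration(direction string) error {
	switch direction {
	case "up":
		// apply pending migrations in order, recording each version
		return nil
	case "down":
		// roll back the most recently applied migration
		return nil
	default:
		return fmt.Errorf("unknown direction: %q", direction)
	}
}

func main() {
	dir := "up"
	if len(os.Args) > 1 {
		dir = os.Args[1]
	}
	if err := runMigration(dir); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // non-zero exit lets the Job retry up to backoffLimit
	}
	fmt.Println("migration complete:", dir)
}
```

Exiting non-zero on failure is what makes `backoffLimit: 3` meaningful: the Job controller only retries pods that terminate with an error.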
Go App Kubernetes Signal Handling
// cmd/server/main.go
package main

import (
	"context"
	"errors"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	server := &http.Server{
		Addr:    ":8080",
		Handler: setupRouter(),
	}

	// Start server
	go func() {
		log.Println("Server starting: :8080")
		if err := server.ListenAndServe(); !errors.Is(err, http.ErrServerClosed) {
			log.Fatalf("Server error: %v", err)
		}
	}()

	// Kubernetes signal handling
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGTERM, syscall.SIGINT)
	sig := <-quit
	log.Printf("Signal received: %v — graceful shutdown starting", sig)

	// Give K8s time to update endpoints/iptables (together with preStop)
	time.Sleep(5 * time.Second)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := server.Shutdown(ctx); err != nil {
		log.Printf("Shutdown error: %v", err)
	}
	log.Println("Server shutdown complete")
}
GitHub Actions CI/CD
# .github/workflows/deploy.yml
name: Build and Deploy

on:
  push:
    branches: [main]

env:
  IMAGE_NAME: myregistry/user-service
  K8S_DEPLOYMENT: user-service
  K8S_NAMESPACE: production

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-go@v5
        with:
          go-version: '1.22'
      - run: go test -race ./...

  build-push:
    needs: test
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.version }}
    steps:
      - uses: actions/checkout@v4
      # Registry login (e.g. docker/login-action) omitted for brevity
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.IMAGE_NAME }}
          tags: type=sha,prefix=
      - name: Build and push
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          build-args: |
            VERSION=${{ github.sha }}

  deploy:
    needs: build-push
    runs-on: ubuntu-latest
    steps:
      # Assumes cluster credentials/kubeconfig are already configured
      # (e.g. via a cloud provider auth action)
      - name: Deploy to K8s
        run: |
          kubectl set image deployment/${{ env.K8S_DEPLOYMENT }} \
            ${{ env.K8S_DEPLOYMENT }}=${{ env.IMAGE_NAME }}:${{ needs.build-push.outputs.image-tag }} \
            -n ${{ env.K8S_NAMESPACE }}
          kubectl rollout status deployment/${{ env.K8S_DEPLOYMENT }} \
            -n ${{ env.K8S_NAMESPACE }} \
            --timeout=5m
Key kubectl Commands
# Check deployment status
kubectl get pods -n production -l app=user-service
kubectl get deployment user-service -n production
# Rollout status
kubectl rollout status deployment/user-service -n production
kubectl rollout history deployment/user-service -n production
# Rollback to previous version
kubectl rollout undo deployment/user-service -n production
# Pod logs
kubectl logs -f deployment/user-service -n production
kubectl logs -f user-service-7d9fb96c4-xk2p5 -n production --previous
# Access pod shell
kubectl exec -it user-service-7d9fb96c4-xk2p5 -n production -- /bin/sh
# Manual scaling
kubectl scale deployment user-service --replicas=5 -n production
Key Takeaways
| Resource | Role |
|---|---|
| Deployment | Pod management, rolling updates |
| Service | Pod load balancing, DNS naming |
| Ingress | External HTTP/HTTPS routing |
| ConfigMap | Non-sensitive configuration data |
| Secret | Sensitive data (base64-encoded by default; enable encryption at rest) |
| HPA | CPU/memory-based autoscaling |
| Job | One-time tasks (migrations) |
- Liveness Probe failure → Pod restart
- Readiness Probe failure → Removed from load balancer (no restart)
- Setting `resources.requests` and `resources.limits` is strongly recommended (without them, a single pod can exhaust node resources)