
Operations

This guide covers day-2 operations for Zitadel on Kubernetes.

Upgrades

General Upgrade Process

  1. Review the release notes for your target version
  2. Back up your database
  3. Update your values.yaml with any required changes
  4. Upgrade the Helm release:
helm repo update
helm upgrade my-zitadel zitadel/zitadel --values values.yaml --version <target-version>
  5. Monitor the upgrade. Watch the pods:
kubectl get pods --watch

Check the Helm release status:

helm status my-zitadel
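Before upgrading, it can help to confirm the target version is actually newer than the one you are running. The sketch below uses hypothetical version numbers and GNU `sort -V` for semver-style comparison; substitute the versions reported by `helm list` for your release.

```shell
# Pre-flight check before "helm upgrade": refuse downgrades and no-ops.
current="2.50.0"   # hypothetical: current chart version from "helm list"
target="2.55.1"    # hypothetical: the version you intend to pass via --version
newest=$(printf '%s\n%s\n' "$current" "$target" | sort -V | tail -n1)
if [ "$newest" = "$target" ] && [ "$current" != "$target" ]; then
  echo "upgrade: $current -> $target"
else
  echo "refusing: $target is not newer than $current"
fi
```

If the upgrade does fail, `helm rollback my-zitadel` restores the previous revision.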

Scaling

Manual Scaling

Adjust the replica count in your values:

replicaCount: 3

Or scale directly:

kubectl scale deployment my-zitadel --replicas=3

Note that replicas set with kubectl scale last only until the next helm upgrade, which reapplies replicaCount from your values. Update values.yaml for persistent changes.

Horizontal Pod Autoscaler

Enable HPA for automatic scaling based on resource utilization:

zitadel:
  autoscaling:
    enabled: true
    minReplicas: 2
    maxReplicas: 10
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80

This creates an HPA that:

  • Maintains at least 2 replicas
  • Scales up to 10 replicas
  • Targets 80% CPU and memory utilization
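The HPA computes its desired replica count as roughly ceil(currentReplicas × currentUtilization / targetUtilization). A minimal sketch of that arithmetic, using hypothetical observed utilization figures against the 80% target configured above:

```shell
# Sketch of the HPA scaling formula:
#   desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
current_replicas=4
current_cpu=120   # hypothetical observed average CPU utilization, in percent
target_cpu=80     # targetCPUUtilizationPercentage from the values above
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"   # 6
```

So a sustained 120% average CPU across 4 pods would scale the deployment to 6 replicas (capped by maxReplicas).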

View HPA status:

kubectl get hpa

Get detailed HPA information:

kubectl describe hpa my-zitadel

Resource Requests and Limits

Configure appropriate resource allocations:

resources:
  requests:
    cpu: 100m
    memory: 256Mi
  limits:
    cpu: 1000m
    memory: 1Gi

Recommendations by deployment size:

| Size        | CPU Request | CPU Limit | Memory Request | Memory Limit |
|-------------|-------------|-----------|----------------|--------------|
| Small (dev) | 100m        | 500m      | 256Mi          | 512Mi        |
| Medium      | 250m        | 1000m     | 512Mi          | 1Gi          |
| Large       | 500m        | 2000m     | 1Gi            | 2Gi          |
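When sizing nodes, remember that requests are reserved per pod, so the cluster must have headroom for replicas × per-pod requests. A rough capacity check for a "Medium" deployment, assuming 3 replicas:

```shell
# Total reserved capacity for a "Medium" deployment with 3 replicas.
replicas=3
cpu_request_m=250    # 250m CPU per pod (Medium row above)
mem_request_mi=512   # 512Mi memory per pod (Medium row above)
total_cpu_m=$(( replicas * cpu_request_m ))
total_mem_mi=$(( replicas * mem_request_mi ))
echo "cluster needs >= ${total_cpu_m}m CPU and ${total_mem_mi}Mi memory"
```

With autoscaling enabled, size for maxReplicas rather than the steady-state replica count.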

Pod Disruption Budget

Ensure availability during voluntary disruptions by using minAvailable:

podDisruptionBudget:
  enabled: true
  minAvailable: 1

Alternatively, use maxUnavailable:

podDisruptionBudget:
  enabled: true
  maxUnavailable: 1

With minAvailable: 1, at least one pod remains available during node drains, upgrades, and other voluntary disruptions; with maxUnavailable: 1, at most one pod may be evicted at a time.
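The two settings are mirror images: for a fixed replica count, minAvailable bounds how many pods must stay up, which determines how many evictions the budget allows at once. A quick sketch, assuming 3 replicas:

```shell
# How many pods a PDB with minAvailable allows to be evicted concurrently.
replicas=3
min_available=1
disruptions_allowed=$(( replicas - min_available ))
echo "$disruptions_allowed"   # 2: up to 2 of the 3 pods may be evicted at once
```

You can check the live figure with `kubectl get pdb`, which reports ALLOWED DISRUPTIONS per budget.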

Database Scaling Considerations

When scaling Zitadel horizontally, ensure your PostgreSQL database can handle the increased connection load:

  • Each Zitadel pod opens multiple connections
  • Consider using PgBouncer for connection pooling
  • Monitor database connection usage

Example connection pooling setup with PgBouncer:

zitadel:
  configmapConfig:
    Database:
      Postgres:
        Host: "pgbouncer.database.svc.cluster.local"
        Port: 6432
        MaxOpenConns: 20
        MaxIdleConns: 10
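Each pod may open up to MaxOpenConns connections, so the pool (or PostgreSQL's max_connections, if you skip PgBouncer) must cover replicas × MaxOpenConns. A minimal budget check, assuming 3 replicas and the MaxOpenConns value above:

```shell
# Worst-case connection count the database side must accommodate.
replicas=3
max_open_conns=20   # MaxOpenConns from the values above
budget=$(( replicas * max_open_conns ))
echo "$budget"   # 60 connections at peak
```

Remember to re-check this budget whenever you raise maxReplicas on the HPA.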
