Version: 21.5 - latest

High Availability Deployment

This guide shows how to deploy Gitea Enterprise in a highly available (HA) topology. HA keeps code hosting online when a node fails by running multiple application instances behind a load balancer and offloading state to shared services.

1. Prerequisites

  • At least two application nodes (Linux VMs, bare-metal, or Kubernetes workloads) with outbound access to the shared services below.
  • An external database cluster (MySQL/MariaDB or PostgreSQL) that provides its own replication or managed high availability.
  • A Redis or Redis Cluster deployment for the cache, sessions, task queues, and the global lock.
  • An Elasticsearch deployment for code search, if repository indexing is enabled.
  • Shared non-git-repository storage: either an S3-compatible bucket or a POSIX volume (NFS/Gluster/NetApp) mounted at the same path on every node.
  • Shared git repository storage: a POSIX volume (NFS/Gluster/NetApp) mounted at the same path on every node.
  • A load balancer that can forward HTTP/HTTPS (port 3000/443) and SSH (port 22/222) traffic with health checks.
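
Before installing anything, it can help to confirm each node can actually reach the shared services. A minimal pre-flight sketch, assuming the example `.internal` hostnames used throughout this guide (adjust them for your network):

```shell
#!/usr/bin/env bash
# Pre-flight connectivity check for the shared services this guide assumes.
check_tcp() {  # usage: check_tcp HOST PORT -> exit 0 if the TCP port answers
  timeout 3 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null
}

# Hostnames below are the hypothetical examples from this guide.
for endpoint in db.internal:3306 redis.internal:6379 s3.internal:9000; do
  host=${endpoint%%:*}; port=${endpoint##*:}
  if check_tcp "$host" "$port"; then
    echo "OK          $endpoint"
  else
    echo "UNREACHABLE $endpoint"
  fi
done
```

Run this on every application node; a single unreachable endpoint is enough to make that node misbehave once traffic arrives.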

Follow Install on Linux, Install with Docker, or Install on Kubernetes for per-node provisioning. The HA-specific steps below build on top of those guides.

2. Bootstrap the first application node

  1. Install the Enterprise binary or container following the single-node guide of your choice.
  2. During configuration, set APP_DATA_PATH to the shared storage mount (for example, /data).
  3. Update /data/gitea/conf/app.ini (or the equivalent environment variables) with HA-friendly settings:
APP_NAME = Gitea Enterprise
RUN_MODE = prod

[server]
DOMAIN = git.example.com
ROOT_URL = https://git.example.com/
PROTOCOL = http
HTTP_PORT = 3000
SSH_DOMAIN = git.example.com
SSH_PORT = 222
LANDING_PAGE = explore
LFS_START_SERVER = true
LFS_JWT_SECRET = <shared secret>

[database]
DB_TYPE = mysql
HOST = db.internal:3306
NAME = gitea
USER = gitea
PASSWD = <password>

[session]
PROVIDER = redis
PROVIDER_CONFIG = redis://:<password>@redis.internal:6379/0?pool_size=100&idle_timeout=120s

[cache]
ADAPTER = redis
HOST = redis://:<password>@redis.internal:6379/1

[queue]
TYPE = redis
CONN_STR = redis://:<password>@redis.internal:6379/2

[storage]
STORAGE_TYPE = minio ; once shared storage is configured here, remove the per-feature paths
; elsewhere (attachment, lfs, and so on). If this is a migration, use `gitea migrate-storage`
; to move the existing files to the new location.
MINIO_BUCKET = gitea-ee-data
MINIO_ENDPOINT = s3.internal:9000
MINIO_ACCESS_KEY_ID = <access>
MINIO_SECRET_ACCESS_KEY = <secret>

[indexer]
ISSUE_INDEXER_TYPE = elasticsearch ; db, elasticsearch, or meilisearch; bleve does not work in HA mode
ISSUE_INDEXER_CONN_STR = http://elastic:changeme@elasticsearch.internal:9200

REPO_INDEXER_ENABLED = true
REPO_INDEXER_TYPE = elasticsearch
REPO_INDEXER_CONN_STR = http://elastic:changeme@elasticsearch.internal:9200

[log]
; ensure the log path is not shared between instances

[global_lock]
SERVICE_TYPE = redis
SERVICE_CONN_STR = redis://:<password>@redis.internal:6379/5
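
When the nodes run as containers, the same settings can be supplied as environment variables instead of a templated app.ini, using the `GITEA__SECTION__KEY` mapping supported by the official container image. A sketch of the database and Redis settings above in docker-compose form (values are the same placeholders as in the ini example):

```yaml
# docker-compose fragment; <password> stays a placeholder to fill in.
environment:
  - GITEA__database__DB_TYPE=mysql
  - GITEA__database__HOST=db.internal:3306
  - GITEA__database__NAME=gitea
  - GITEA__database__USER=gitea
  - GITEA__database__PASSWD=<password>
  - GITEA__session__PROVIDER=redis
  - GITEA__cache__ADAPTER=redis
```

Keeping the full configuration in one template (Compose, Helm values, or Ansible vars) makes it harder for nodes to drift apart.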

3. Add additional nodes

Repeat the installation for every extra node:

  1. Reuse the same Enterprise version and configuration files. If you template app.ini, consider using Ansible, Helm, or Terraform to avoid drift.
  2. Mount the shared /data volume read/write on every node. When using object storage, only /data/gitea/conf needs to stay on the shared disk; attachments, LFS objects, and similar files live in the bucket, while git repositories still require the shared POSIX volume.
  3. Double-check file permissions (git:git) and confirm that the node can reach the database, Redis, and storage endpoints.
  4. Start the service and verify GET /api/healthz returns 200 OK.
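
The final verification step can be scripted so it is run identically against every new node. A minimal sketch; `node1.internal` is a hypothetical node address, and `NODE` is an environment variable you point at each node in turn:

```shell
#!/usr/bin/env bash
# Check that a node answers the health endpoint before adding it to the pool.
probe() {  # usage: probe URL -> exit 0 only on an HTTP success status
  curl -fsS -o /dev/null --max-time 5 "$1"
}

NODE=${NODE:-http://node1.internal:3000}   # hypothetical node address
if probe "$NODE/api/healthz"; then
  echo "healthy: $NODE"
else
  echo "unhealthy: $NODE"
fi
```

Usage: `NODE=http://10.0.0.12:3000 ./check-node.sh` for each freshly provisioned node.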

4. Configure the load balancer

  • Add all healthy nodes to the HTTP/HTTPS pool. If TLS terminates on the balancer, forward X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Host headers so Gitea generates correct URLs.
  • Forward SSH traffic via TCP (either a separate VIP or port 222 on the same address). When using Kubernetes, expose SSH with a LoadBalancer service or NodePort plus an external LB.
  • Configure health probes against /api/healthz or /api/v1/version. Mark nodes unhealthy if the check fails three times to avoid sending user traffic to a degraded instance.
  • Enable connection draining so running git pushes can finish before a node is removed for maintenance.
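
The points above can be sketched as an HAProxy configuration. This is one possible layout, not the only supported balancer; the node IPs and certificate path are hypothetical, and TLS is assumed to terminate on the balancer:

```haproxy
# Hypothetical node addresses; TLS terminates on the balancer.
defaults
    mode http
    timeout connect 5s
    timeout client  60s
    timeout server  60s

frontend gitea_http
    bind :443 ssl crt /etc/haproxy/certs/git.example.com.pem
    option forwardfor
    http-request set-header X-Forwarded-Proto https
    http-request set-header X-Forwarded-Host %[req.hdr(Host)]
    default_backend gitea_nodes

backend gitea_nodes
    option httpchk GET /api/healthz
    default-server check fall 3 rise 2   # three failed probes mark a node down
    server node1 10.0.0.11:3000
    server node2 10.0.0.12:3000

frontend gitea_ssh
    mode tcp
    bind :22
    timeout client 1h       # allow long-running git pushes to finish
    default_backend gitea_ssh_nodes

backend gitea_ssh_nodes
    mode tcp
    timeout server 1h
    server node1 10.0.0.11:222 check
    server node2 10.0.0.12:222 check
```

SSH is forwarded as plain TCP to port 222 on each node, matching the SSH_PORT setting in the app.ini above.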

5. Operations checklist

  • Backups: snapshot the database, Redis, and object storage regularly. Follow the upstream backup guide.
  • Upgrades: roll nodes one at a time. Drain traffic at the load balancer, stop the service, upgrade the binary or container tag, run migrations (first node only), then re-add it.
  • Monitoring: scrape the /metrics endpoint with Prometheus, alert on queue depth, HTTP error rates, and replication lag.
  • Disaster recovery: test restoring a node by bootstrapping from the same config plus shared storage. Verify license status and outbound integrations (SMTP, webhooks) afterward.
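
For the monitoring point, a Prometheus scrape job covering all nodes might look like the fragment below. The node addresses are hypothetical, and the /metrics endpoint must first be enabled with ENABLED = true under [metrics] in app.ini:

```yaml
# prometheus.yml fragment; node addresses are hypothetical.
scrape_configs:
  - job_name: gitea
    metrics_path: /metrics
    static_configs:
      - targets:
          - node1.internal:3000
          - node2.internal:3000
```

Scraping each node directly (rather than through the load balancer) ensures a degraded node is still visible in monitoring after the balancer removes it from the pool.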

With these steps, your Gitea Enterprise deployment can survive single-node failures and scale horizontally as your organization grows.