
10 AI Agents, Self-Hosted Gitea, and Zero Manual Deploys

aws · terraform · gitea · ai · ci-cd · infrastructure

I run a multi-agent AI system where 10 autonomous agents each have their own Gitea user account, can be tagged in issues, review pull requests, and approve deployments. The entire infrastructure — Gitea server, CI runners, VPC networking, agent credentials, team permissions — is managed with Terraform. No ClickOps. No manual setup.

This post covers the infrastructure I built to make that work.

Why Self-Hosted Gitea

GitHub is great for public repos and team collaboration, but when you need AI agents to operate as first-class contributors — creating branches, opening PRs, reviewing code, approving merges — GitHub’s API rate limits and bot restrictions get in the way fast.

Self-hosted Gitea gives you:

  - No API rate limits throttling agent traffic
  - Bots as first-class users: real accounts, real tokens, real team memberships
  - Declarative control of users, teams, and permissions via the Gitea Terraform provider
  - Full network isolation: no public endpoints at all

The Architecture

┌──────────────────────────────────────────────────────┐
│  Management Account (114306020843)                   │
│  ┌─────────────────┐  ┌──────────────────────────┐   │
│  │ TerraformCIRole │  │ S3 State + DynamoDB Lock │   │
│  └────────┬────────┘  └──────────────────────────┘   │
│           │ assume role                              │
├───────────┼──────────────────────────────────────────┤
│           ▼                                          │
│  Gitea VPC (10.1.0.0/16)                             │
│  ┌────────────────────┐  ┌──────────────────────┐    │
│  │ Gitea EC2 (t4g.sm) │  │ Actions Runner       │    │
│  │ Port 3000          │  │ (t4g.medium)         │    │
│  │ 20GB EBS persistent│  │ Go, Node, Tofu,      │    │
│  │ SSM-only access    │  │ Docker, AWS CLI      │    │
│  └────────────────────┘  └──────────────────────┘    │
│           ▲                                          │
│    VPC Peering (pcx-*)                               │
│           ▼                                          │
│  Agent VPC (10.0.0.0/16)                             │
│  ┌────────────────────────────────────────────────┐  │
│  │ Agent EC2 — 10 AI agents                       │  │
│  │ Each agent: .gitea-env, .git-credentials       │  │
│  │ Heartbeat cron: issue triage every 10min       │  │
│  │ Worker cron: picks up claimed issues           │  │
│  └────────────────────────────────────────────────┘  │
│           │                                          │
│    Private DNS: gitea.internal.openclaw → Gitea IP   │
└──────────────────────────────────────────────────────┘

The key decision: Gitea lives in its own VPC (10.1.0.0/16), peered with the agent VPC (10.0.0.0/16). The agents resolve gitea.internal.openclaw via Route 53 private hosted zone. No public endpoints. Security groups allow only port 3000 between the two VPCs.
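The peering-plus-DNS setup maps to a handful of resources. A sketch with illustrative names (the real module presumably also associates the private zone with the agent VPC):

```hcl
# Only the agent VPC may reach Gitea, and only on port 3000.
resource "aws_security_group_rule" "gitea_from_agents" {
  type              = "ingress"
  from_port         = 3000
  to_port           = 3000
  protocol          = "tcp"
  cidr_blocks       = ["10.0.0.0/16"] # agent VPC CIDR
  security_group_id = aws_security_group.gitea.id
}

# Private hosted zone resolvable only from inside the VPC.
resource "aws_route53_zone" "internal" {
  name = "internal.openclaw"

  vpc {
    vpc_id = aws_vpc.gitea.id
  }
}

resource "aws_route53_record" "gitea" {
  zone_id = aws_route53_zone.internal.zone_id
  name    = "gitea.internal.openclaw"
  type    = "A"
  ttl     = 60
  records = [aws_instance.gitea.private_ip]
}
```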

Terraform Everything

Gitea Server

The Gitea instance is a t4g.small with a persistent 20GB EBS volume mounted at /var/lib/gitea. No public IP — access is SSM-only for admin, VPC peering for agents.

resource "aws_instance" "gitea" {
  ami                    = data.aws_ami.al2023_arm.id
  instance_type          = "t4g.small"
  subnet_id              = aws_subnet.private.id
  iam_instance_profile   = aws_iam_instance_profile.gitea.name
  vpc_security_group_ids = [aws_security_group.gitea.id]

  user_data = templatefile("${path.module}/userdata.sh.tpl", {
    ebs_volume_id = aws_ebs_volume.gitea_data.id
    region        = var.region
  })
}

No SSH keys. No bastion. When I need to access it, I use make tunnel which creates an SSM port-forward to localhost:3000.
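make tunnel wraps a single SSM port-forward. The command underneath looks something like this (looking up the instance by Name tag is an assumption about the tagging scheme):

```shell
# Forward localhost:3000 to the Gitea instance over SSM — no SSH, no public IP.
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=gitea" "Name=instance-state-name,Values=running" \
  --query "Reservations[0].Instances[0].InstanceId" --output text)

aws ssm start-session \
  --target "$INSTANCE_ID" \
  --document-name AWS-StartPortForwardingSession \
  --parameters '{"portNumber":["3000"],"localPortNumber":["3000"]}'
```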

Agent Users and Teams

This is where it gets interesting. The Gitea Terraform provider lets me manage users, teams, and tokens declaratively:

resource "gitea_user" "agents" {
  for_each = toset(var.agent_names)

  username             = each.key
  email                = "${each.key}@internal.openclaw"
  password             = random_password.agent[each.key].result
  must_change_password = false
}

resource "gitea_team" "agent_teams" {
  for_each = toset(var.agent_names)

  name         = each.key
  organization = gitea_org.openclaw.name
  permission   = "admin"
}

resource "gitea_team_members" "agent_membership" {
  for_each = toset(var.agent_names)

  team_id = gitea_team.agent_teams[each.key].id
  members = [gitea_user.agents[each.key].username]
}

Each agent gets:

  - Its own Gitea user account and a matching team in the org
  - An API token, provisioned into .gitea-env and .git-credentials on the agent EC2
  - Repo access mapped through its team's permissions

When I terraform apply, 10 users, 10 teams, 10 tokens, and all repo access mappings are created or updated in one shot.
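The post keeps secrets in AWS Secrets Manager; a plausible shape for provisioning each agent's credential (resource and path names are mine, not the author's):

```hcl
# One secret per agent, keyed off the same var.agent_names list.
resource "aws_secretsmanager_secret" "agent_credential" {
  for_each = toset(var.agent_names)
  name     = "agents/${each.key}/gitea"
}

resource "aws_secretsmanager_secret_version" "agent_credential" {
  for_each      = toset(var.agent_names)
  secret_id     = aws_secretsmanager_secret.agent_credential[each.key].id
  secret_string = random_password.agent[each.key].result
}
```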

The PR Review Flow

This is the workflow I’m most proud of. Agents don’t just write code — they participate in a review process:

Agent creates PR
  └── Orchestrator assigns to Verifier agent
        ├── Verifier requests changes → reassigns to Author → Author fixes → back to Verifier
        └── Verifier approves → assigns to Moses → Moses reviews and merges

Each step is a real Gitea action. The Verifier agent reviews diffs, leaves comments, and either approves or requests changes. I get the final say — nothing merges without my review.
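Each of those review steps boils down to one call against the pull request's reviews endpoint in the Gitea API. A hedged sketch; the helper function and the repo path in the commented curl are illustrative:

```shell
# Build the JSON body for a Gitea pull-request review.
# Event is one of: APPROVED, REQUEST_CHANGES, COMMENT.
review_payload() {
  printf '{"event":"%s","body":"%s"}' "$1" "$2"
}

# Hypothetical usage, authenticated with the reviewing agent's own token:
# curl -s -X POST \
#   -H "Authorization: token $GITEA_TOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$(review_payload APPROVED 'LGTM')" \
#   "http://gitea.internal.openclaw:3000/api/v1/repos/openclaw/app/pulls/12/reviews"
```

Because the token belongs to the agent, the approval shows up in Gitea attributed to that agent, not to a shared bot account.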

Because each agent is a real Gitea user, I can:

CI/CD on the Self-Hosted Runner

The Gitea Actions runner is a t4g.medium in the same subnet as Gitea. It has:

  - Go, Node, OpenTofu, Docker, and the AWS CLI preinstalled
  - Direct access to Gitea on port 3000, with no public network path

The runner uses the same workflow syntax as GitHub Actions. I migrated existing .github/workflows/ to .gitea/workflows/ with minimal changes — mostly updating runner labels from ubuntu-latest to self-hosted.
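A migrated workflow ends up looking almost unchanged. An illustrative .gitea/workflows/ci.yml (the job contents are placeholders, not the author's actual pipeline):

```yaml
name: ci
on:
  push:
    branches: [main]

jobs:
  test:
    # The only real migration change: target the self-hosted runner.
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v4
      - run: go test ./...
```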

The same CI patterns I use for my other projects (Lineup’s 19-repo infrastructure, TicklePickle’s Go Lambda pipeline) run identically on Gitea. Same OIDC role chaining, same Terraform plan/apply, same deployment patterns.

Heartbeat and Automated Issue Triage

A cron job runs every 10 minutes on the agent EC2:

  1. Heartbeat scan: Checks all 27 repos for unassigned issues
  2. Keyword matching: Routes issues to the right agent based on content (infra issues → infra agent, frontend → frontend agent)
  3. Agent worker: Picks up issues labeled agent:claimed, runs claude --print with the issue context, and posts the output as a comment from the agent’s own token

The worker also supports dependency checking — if an issue has blocked-by: #42, it won’t be picked up until issue 42 is closed.
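The blocked-by check itself can be a one-liner. A minimal sketch (the function name is mine; resolving whether issue 42 is actually closed would be a further Gitea API call):

```shell
# Extract the first "blocked-by: #N" declaration from an issue body.
# Prints the blocking issue number, or nothing if the issue has no blocker.
blocking_issue() {
  printf '%s\n' "$1" \
    | grep -oiE 'blocked-by: *#[0-9]+' \
    | grep -oE '[0-9]+' \
    | head -n 1
}
```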

The Dev/Prod Pattern

Every project in my org follows the same two-account pattern:

  1. Vend accounts via AFT: make add NAME=project-dev and make add NAME=project-prod
  2. Same Terraform, different state: identical infrastructure code, separate state files per account
  3. CI deploys to dev on push, prod on tag: main branch → dev, v1.0.0 tag → prod
  4. Cross-account roles: GitHubCIRole deployed to every account via AFT global customizations

TicklePickle followed this pattern exactly, from idea to production.

Same pattern for Lineup (19 repos), same pattern for the Gitea infrastructure itself.

AI Security: Guardrails That Actually Work

Giving AI agents access to AWS accounts is a terrible idea — unless you build the guardrails first. Here’s how I make sure agents can deploy without burning money or breaking things:

SCPs as Hard Limits

Service Control Policies at the OU level define what's possible in an account, regardless of IAM permissions:

  - Deny all actions outside approved regions
  - Deny tampering with the budget automation and its kill switch
  - Deny compute actions entirely once an account lands in the Suspended OU
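One common guardrail, and the one implied by "unauthorized regions" later in this post, is a region lockdown. A sketch; the approved regions and the NotAction exemptions for global services are illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "NotAction": ["iam:*", "sts:*", "route53:*", "budgets:*", "organizations:*", "support:*"],
      "Resource": "*",
      "Condition": {
        "StringNotEquals": { "aws:RequestedRegion": ["us-east-1", "us-west-2"] }
      }
    }
  ]
}
```

Because SCPs apply from the management account, no role inside the workload account, AdministratorAccess included, can loosen them.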

Budget Automation

Every workload account gets a budget alert deployed via AFT account customizations. But alerts aren’t enough when agents are involved — they don’t read email.

The pattern:

  1. AWS Budgets alert triggers at 80% of the monthly budget
  2. Lambda function automatically stops non-essential EC2 instances and scales ECS services to zero
  3. Data persists — EBS volumes, RDS snapshots, and S3 buckets survive the shutdown
  4. At 100%, a second automation moves the account into a Suspended OU with an SCP that denies all compute actions
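Step 1 can be wired in Terraform; a sketch assuming an SNS topic fans out to the shutdown Lambda (the budget name, amount, and topic name are illustrative):

```hcl
resource "aws_budgets_budget" "monthly" {
  name         = "monthly-cap"
  budget_type  = "COST"
  limit_amount = "50"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  # Fire at 80% of actual spend; the subscribed topic triggers the
  # Lambda that stops non-essential compute.
  notification {
    comparison_operator       = "GREATER_THAN"
    threshold                 = 80
    threshold_type            = "PERCENTAGE"
    notification_type         = "ACTUAL"
    subscriber_sns_topic_arns = [aws_sns_topic.budget_kill_switch.arn]
  }
}
```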

The agent’s IAM role doesn’t have permission to modify the budget, the Lambda, or the OU assignment. It can’t override the kill switch.

Persistent Data, Ephemeral Compute

Every infrastructure module follows the same principle: data outlives the instance.

When budgets trigger a shutdown, I lose compute but never data. terraform apply brings everything back.
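In Terraform terms, the principle is a lifecycle guard on the data resources. A sketch of the Gitea data volume (the 20GB size is from the post; the AZ reference is an assumption):

```hcl
resource "aws_ebs_volume" "gitea_data" {
  availability_zone = aws_subnet.private.availability_zone
  size              = 20
  type              = "gp3"

  lifecycle {
    # Data outlives the instance: even a destroy of the whole stack
    # refuses to delete this volume.
    prevent_destroy = true
  }
}
```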

Agent IAM Boundaries

The GitHubCIRole deployed to workload accounts via AFT has AdministratorAccess — but that's within the account's SCP boundary. The agent can't:

  - Modify the budget, the kill-switch Lambda, or the account's OU assignment
  - Spin up resources in unauthorized regions
  - Change the SCPs themselves: they're attached from the management account, outside the role's reach

It’s defense in depth: IAM defines what the role can do, SCPs define what the account can do, and OUs define what each class of account can do.

What This Enables

The boring answer is “infrastructure as code.” The real answer is that I can spin up a new project — accounts, networking, DNS, CI/CD, monitoring — in under an hour, and I can hand work to AI agents that operate as real contributors with real credentials and real review processes.

The agents can’t merge without approval. They can’t deploy to prod without a tag. They can’t blow past budget limits. They can’t spin up resources in unauthorized regions. Every guardrail that applies to human engineers applies to them — plus additional automated kill switches they can’t override.

That’s the whole point: the infrastructure doesn’t care if the contributor is human or AI. Same Terraform. Same CI/CD. Same approval flows. Same audit trail. Same budget limits.

The Stack

Component             Tool
--------------------  ------------------------------------
Git hosting           Self-hosted Gitea on EC2
CI/CD runners         Gitea Actions on EC2
IaC                   Terraform / OpenTofu
Agent config          Gitea Terraform provider
Secrets               AWS Secrets Manager
Networking            VPC peering + Route 53 private zones
Access                SSM (no SSH, no public endpoints)
Monitoring            Cron-based heartbeat + CloudWatch
Agent orchestration   Claude (Sonnet)
Cross-account auth    OIDC role chaining

All of it is version-controlled. All of it is reproducible. If the Gitea instance dies, I terraform apply and it’s back — data on a persistent EBS volume, config in Terraform state.