
Every serious software team in the world keeps their code in one place. The same place. You know where.

This is not inherently a problem. Centralized platforms are convenient. The network effects are real. The ecosystem is mature. But when every agent, every CI pipeline, every deployment, and every developer on earth is hitting the same API at the same time, the cracks start to show — and they are showing.

We built fossilrepo because we ran into these cracks ourselves. We are releasing it today as open source because we think other teams are about to hit them too, especially teams running agentic development workflows at scale.


The Concentration Problem

The modern software supply chain has a single point of failure that nobody likes to talk about. Not a technical single point of failure — a systemic one. The overwhelming majority of the world’s source code, CI pipelines, package registries, and deployment triggers flow through one platform. When it is working, this is wonderful. When it is not, everything stops.

Over the past year, we have watched availability incidents become more frequent. Not dramatic outages — the kind that make the news — but the quieter kind. Degraded API responses. Elevated error rates on Git operations. Webhook delivery delays that cascade into missed deployments. Status pages that show green while your pipeline is stalled.

For a human developer, a five-minute hiccup is an annoyance. For a fleet of AI agents running concurrent development tasks, it is a hard stop. Agents do not have the patience to wait for a retry. They do not know to check the status page. They fail, and you have to restart them.

This is the world we have been living in for the past several months as we scaled our agentic development infrastructure.


Rate Limits and the Agentic Wall

If you are running one or two agents, API rate limits are invisible. You will never hit them. But the moment you scale to ten, twenty, or fifty concurrent agents — each one pushing commits, creating branches, opening pull requests, reading file contents, and triggering CI — you hit a wall.

The wall is not theoretical. It is a specific number. And it is shared across your entire organization, not per-agent. So your human developers and your AI agents are competing for the same budget. Every API call an agent makes is one fewer call available for your team.

We tried all the standard mitigations. Caching. Request batching. Exponential backoff. Token pooling. They help, but they do not solve the fundamental problem: you are a tenant on someone else’s infrastructure, and your usage pattern — high-frequency, highly parallel, API-first — is exactly the pattern that rate limits are designed to throttle.
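For reference, the exponential-backoff mitigation looks roughly like this sketch (the helper and the flaky call are illustrative, not part of fossilrepo or any platform SDK):

```python
import random
import time

def with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Retry fn, doubling the delay after each failure, with random jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Sleep 0.5s, 1s, 2s, ... plus jitter so parallel agents desynchronize.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))

# Example: a call that is "rate limited" twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("rate limited")
    return "ok"

print(with_backoff(flaky))  # → ok
```

Note that backoff only spreads load over time; it does nothing about the shared per-organization budget, which is the actual bottleneck described above.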

This is not a complaint about any platform’s policies. Rate limits exist for good reasons. But when your development velocity is directly proportional to your API throughput, and your API throughput is capped, your velocity is capped. We needed a way around that.


What We Actually Needed

The requirements were simple:

  1. A version control system that we could self-host with zero API rate limits
  2. Something that could handle dozens of concurrent agents pushing simultaneously
  3. Continuous backup and disaster recovery without complex infrastructure
  4. The ability to sync bidirectionally with existing Git platforms so we are not creating an island
  5. A modern web interface — not a 1998-era HTML table

When we wrote those requirements down, we realized we were describing Fossil SCM — with a better frontend.


Why Fossil

Fossil is the version control system created by D. Richard Hipp, the same person who created SQLite. That lineage matters. Just as SQLite is a single-file database that powers billions of devices, a Fossil repository is a single SQLite file that contains the complete history of a project — every commit, every ticket, every wiki page, every forum post.

This architecture has properties that turn out to be extraordinary for our use case.

No server required for basic operations. A Fossil repository is a file. You can copy it, back it up, email it. You do not need a daemon running to read or write to it. For agent-to-agent workflows where one agent produces output that another agent consumes, this is remarkably convenient. The repository is the artifact.

SQLite’s concurrency model. With WAL mode enabled, SQLite serves any number of concurrent readers without contention and serializes writers so that only one commits at a time. For a workload where many agents are reading the repository and occasionally writing, this is ideal. No lock contention. No merge conflicts at the infrastructure level.
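You can see this property with nothing but Python’s standard-library sqlite3 module. This sketch uses a throwaway database standing in for a repository: a reader connection opened mid-stream keeps reading while the writer keeps committing.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# Writer connection: enable WAL mode and create some history.
w = sqlite3.connect(path)
w.execute("PRAGMA journal_mode=WAL")
w.execute("CREATE TABLE commits (id INTEGER PRIMARY KEY, msg TEXT)")
w.execute("INSERT INTO commits (msg) VALUES ('initial commit')")
w.commit()

# Reader connection: opens concurrently, reads without blocking the writer.
r = sqlite3.connect(path)
print(r.execute("SELECT msg FROM commits").fetchall())

# The writer keeps appending while the reader connection stays open.
w.execute("INSERT INTO commits (msg) VALUES ('second commit')")
w.commit()
print(r.execute("SELECT COUNT(*) FROM commits").fetchone()[0])  # → 2
```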

Built-in issue tracker, wiki, and forum. Fossil is not just version control. A single .fossil file contains the project’s entire collaboration surface. Agents that need to file tickets, update documentation, or discuss implementation details can do so through one tool and one protocol. No API keys. No OAuth flows. No rate limits.

Continuous replication with Litestream. Because every Fossil repository is a SQLite database, we can use Litestream to continuously replicate it to S3. This gives us point-in-time recovery for every repository with zero operational overhead. New repository? Litestream picks it up automatically. No configuration change. No restart.

Built-in sync. Fossil has native clone and sync operations. Two Fossil repositories can synchronize their contents with a single command. This is how we keep our self-hosted Fossil repositories in sync with downstream Git mirrors — the Fossil repository is the source of truth, and changes flow outward to wherever they need to go.
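That single command is `fossil sync`. A thin wrapper around it, of the kind an automation layer might use, can be sketched like this (the helper names and paths are hypothetical, not fossilrepo’s API):

```python
import subprocess

def build_sync_cmd(repo_path, remote_url):
    """Command line for syncing one Fossil repository with a remote.

    `fossil sync` pushes and pulls artifacts in a single operation;
    -R points Fossil at a repository file without needing an open checkout.
    """
    return ["fossil", "sync", remote_url, "-R", repo_path]

def sync(repo_path, remote_url):
    # Runs the sync; raises CalledProcessError if fossil exits non-zero.
    subprocess.run(build_sync_cmd(repo_path, remote_url), check=True)

print(build_sync_cmd("/srv/repos/project.fossil", "https://example.org/project"))
```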


The Problem Fossil Did Not Solve

Fossil’s built-in web interface was designed in an era when web development looked very different. It is functional and fast. It is also, to put it diplomatically, not what modern developers expect. The timeline view is powerful but visually spartan. The ticket system works but lacks the polish of contemporary issue trackers. The wiki renders correctly but looks like it was styled with a default CSS reset.

For internal use, this is tolerable. For a tool that we want to hand to a team and say “use this instead of what you are using now,” it is a non-starter. Nobody is going to switch version control systems to look at a 1998-era interface, no matter how technically superior the backend is.

This is the gap we decided to fill.


What fossilrepo Is

fossilrepo is a Django application that wraps Fossil SCM with a modern management layer. It does not replace Fossil. It puts a contemporary interface in front of it.

You get:

  • A project dashboard with real-time statistics — commits, tickets, wiki pages, team activity
  • Full repository browsing — file trees, syntax-highlighted code, blame, history, diffs
  • A timeline view that shows the complete project history in a format that developers expect
  • Ticket management with the full Fossil ticket system exposed through a modern UI
  • Wiki and documentation with Markdown rendering, syntax highlighting, and table of contents
  • Git sync bridge — connect any repository to GitHub or GitLab as a downstream mirror
  • OAuth integration — authenticate with existing GitHub or GitLab accounts
  • Team and organization management — group-based permissions, project teams, role assignments
  • Caddy for automatic SSL — point a domain at your server and HTTPS works immediately
  • Continuous backup — Litestream replicates every .fossil file to S3 in near-real-time

The entire stack installs with one command. We ship an omnibus installer that handles bare metal Linux or Docker, detects your OS, installs dependencies, configures PostgreSQL, Redis, Caddy, and Fossil, runs migrations, creates the admin user, and starts all services. It takes about five minutes on a clean server.


How We Built It

fossilrepo was built with the assistance of several AI agents working in parallel. We used our own agentic development tools — the same infrastructure that motivated building fossilrepo in the first place — to construct the application. The irony is not lost on us.

The Django backend, the HTMX frontend, the Fossil integration layer, the Git sync bridge, the installer script, the infrastructure-as-code, the documentation site, and this blog post were all produced through a combination of human direction and agent execution. The agents committed to Fossil. The sync bridge mirrored to Git. The CI pipeline ran tests. The cycle continued.

This is not a proof of concept. We have been running fossilrepo internally for our own development workflow. Every CONFLICT project has a Fossil repository as its source of truth, with Git mirrors pushed downstream for public consumption and CI integration. The agents push directly to Fossil with no API limits, no token rotation, and no rate limit anxiety.


The Self-Hosting Advantage for Agentic Development

When your agents are pushing to a Fossil server running on your own infrastructure, several things change:

Zero rate limits. Your server, your rules. An agent can push a hundred commits in a minute if it needs to. There is no budget, no throttle, no shared quota.

Predictable latency. The Fossil server is on your network. Clone and push operations complete in milliseconds, not seconds. For agents that are making rapid, incremental commits — which is how the best agentic workflows operate — this latency reduction is material.

Full control over availability. Your uptime is your uptime. If the centralized platform has a bad day, your agents keep working. You sync when connectivity is convenient, not when it is required.

No vendor dependency for core operations. The .fossil file is the repository. You own it completely. It is not in someone else’s database. If you decide to move it, you copy the file. That is the entire migration.

Audit and compliance. Everything is local. Every commit, every access log, every sync event is on your infrastructure. For teams in regulated industries, this is not optional.


How Sync Works

We are not suggesting that you abandon your existing Git platform. That would be impractical and unnecessary. fossilrepo is designed to coexist.

The sync bridge works like this: Fossil is the source of truth. When changes are committed to a Fossil repository, the sync bridge exports them as Git commits and pushes to one or more configured remotes — GitHub, GitLab, Bitbucket, or any Git remote. Optionally, tickets can be synced to GitHub Issues, and wiki pages can be synced to repository documentation.
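A minimal version of that outward flow can be sketched around Fossil’s `export --git` stream and `git fast-import`. This is a simplified sketch, not fossilrepo’s actual bridge (which handles incremental export, ticket sync, and error recovery); the paths and remote names are placeholders.

```python
def bridge_commands(fossil_repo, remotes):
    """Command pipeline for one Fossil → Git mirror pass (sketch)."""
    cmds = [
        # Stream the complete Fossil history as a git fast-import stream …
        ["fossil", "export", "--git", fossil_repo],
        # … and replay it into the local Git mirror (reads the stream on stdin).
        ["git", "fast-import"],
    ]
    # Then push the mirror to every configured downstream remote.
    cmds += [["git", "push", "--mirror", r] for r in remotes]
    return cmds

for cmd in bridge_commands("/srv/repos/project.fossil",
                           ["git@github.com:example/project.git"]):
    print(" ".join(cmd))
```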

This means your public repositories, your CI pipelines, your issue trackers, and your pull request workflows can continue exactly as they are. The difference is that the primary development activity happens locally against Fossil, and the results flow outward.

For open source projects, this is particularly powerful. Your contributors interact with the project through GitHub as usual. Your internal development — especially automated, agent-driven development — happens against Fossil with no friction.


The SQLite Advantage

It is worth pausing to appreciate what SQLite’s architecture gives us here. A Fossil repository is a single file. This means:

Backup is copy. cp project.fossil project.fossil.bak is a complete backup. No dump. No export. No special tooling. For automated backup systems like Litestream, this is ideal — it replicates the WAL frames continuously, giving you point-in-time recovery to any moment.
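The whole idea fits in a few lines. This stand-alone sketch uses a plain SQLite file as a stand-in for a quiesced repository (a live, actively written repository should be copied through SQLite’s backup API or Litestream rather than a raw copy):

```python
import os
import shutil
import sqlite3
import tempfile

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "project.fossil")
bak = os.path.join(tmp, "project.fossil.bak")

# Create a small SQLite file standing in for a Fossil repository.
db = sqlite3.connect(src)
db.execute("CREATE TABLE blob (id INTEGER PRIMARY KEY, content TEXT)")
db.execute("INSERT INTO blob (content) VALUES ('hello, fossil')")
db.commit()
db.close()

# The entire backup procedure:
shutil.copy(src, bak)

# The copy is a complete, independently usable database.
restored = sqlite3.connect(bak)
print(restored.execute("SELECT content FROM blob").fetchone()[0])  # → hello, fossil
```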

Migration is move. Want to move a repository to a different server? Copy the file. Want to give someone a complete copy of the project — history, tickets, wiki, everything? Send them the file. This portability is rare in modern version control.

Size is predictable. The repository is exactly as large as its contents. There is no hidden overhead, no pack file fragmentation, no garbage collection needed. A typical project repository is a few megabytes. Even the entire Fossil SCM source history — over twenty years of development — fits in 68 megabytes.

Concurrency is solved. SQLite’s WAL mode provides concurrent read access with serialized writes. This is exactly the access pattern of a version control server: many concurrent reads (browsing, cloning) with occasional writes (pushing). The database handles this natively, without an external locking mechanism or connection pool.


What This Means for Your Team

If you are a team that is:

  • Running multiple AI agents that interact with your codebase
  • Hitting API rate limits on your centralized Git platform
  • Concerned about single-vendor dependency for your source code
  • Operating in a regulated environment that requires on-premises source control
  • Looking for a simpler, more portable alternative to self-hosted Git platforms

then fossilrepo might be worth your time.

It installs in five minutes. It syncs with your existing Git remotes. It backs up continuously to S3. It has a modern interface that your team will not hate. And it removes the bottleneck that stands between your agents and your codebase.


Eating Our Own Cooking

The source code for fossilrepo is hosted on fossilrepo. The project’s primary repository lives at fossilrepo.io — a fully functional deployment of the software it contains. You can browse the timeline, read the tickets, explore the code, and clone the repository using Fossil’s native protocol. The deployment itself was provisioned using fossilrepo’s own omnibus installer.

For convenience, we also maintain a mirror on GitHub. Changes flow from Fossil to Git via the sync bridge. If you prefer to interact with the project through pull requests on GitHub, that works perfectly — we review and merge there, and the changes sync back.

This dual-hosting arrangement is itself a demonstration of the workflow fossilrepo enables. The canonical source lives in Fossil, self-hosted, on our own infrastructure. The downstream mirror lives on GitHub for visibility, CI, and community access. Both stay in sync automatically.


Getting It Running

We wanted fossilrepo to be genuinely easy to deploy. Not “easy if you already know Docker and Terraform” easy. Actually easy.

The omnibus installer is a single bash script that handles everything. It detects your operating system, installs the dependencies you are missing, builds Fossil from source, sets up PostgreSQL and Redis, configures Caddy for automatic SSL, runs Django migrations, creates your admin account, and starts all services under systemd. On a clean server, the entire process takes about five minutes.

The installer supports two deployment modes:

Bare metal installs everything natively with systemd services. This is what we run in production. It is simple, fast, and easy to debug. There is no container layer between you and the running processes. When you need to check a log, you use journalctl. When you need to restart a service, you use systemctl. No surprises.

Docker generates a production-ready docker-compose.yml and starts the full stack in containers — the application, PostgreSQL, Redis, Caddy, Celery workers, and optionally Litestream for S3 backups. This is convenient for teams that prefer containerized deployments or want to run fossilrepo alongside other Docker services.

Both modes can run unattended with command-line flags for automation, or interactively with a guided setup menu for first-time installs. Both support YAML configuration files for reproducible deployments.

We also distribute fossilrepo through the channels you would expect:

  • PyPI — pip install fossilrepo gives you the Django application and the fossilrepo-ctl management CLI
  • Docker Hub — docker pull conflicthq/fossilrepo gives you a multi-architecture image that runs on both AMD64 and ARM64 (including Raspberry Pi and AWS Graviton instances)
  • GitHub Releases — downloadable tarballs with vendored dependencies for offline or air-gapped installations
  • Fossil clone — fossil clone https://fossilrepo.io/fossilrepo gives you the repository directly from the source, because of course it does

The installer also handles updates. Re-running it on an existing installation performs an in-place upgrade — pulling the latest code, installing new dependencies, running migrations, and restarting services. Your data is never touched during an upgrade. Repositories, database contents, SSH keys, and configuration are preserved across every update.


Contributing

fossilrepo is MIT licensed and we genuinely welcome contributions.

The project is young. There are rough edges, missing features, and optimizations waiting to be made. If you are interested in Fossil, Django, HTMX, or the intersection of AI and developer tooling, there is meaningful work to be done.

You can contribute through GitHub pull requests or by pushing directly to the Fossil repository. The codebase follows standard Django conventions. The bootstrap documentation in the repository covers everything you need to know about the architecture, patterns, and how to add new features.

We are particularly interested in contributions around:

  • Additional Git platform integrations (Bitbucket, Gitea, Codeberg)
  • Improved Fossil timeline and ticket rendering
  • Webhook support for CI/CD integration
  • Performance optimization for large repositories
  • Packaging for additional Linux distributions
  • Documentation and tutorials

If you are running agentic development workflows and hitting the same walls we hit, we would love to hear about your experience. File an issue, open a pull request, or just clone the repository and take it for a spin.


Getting Started

fossilrepo is open source under the MIT license.

Install on any Linux server:

curl -sSL https://fossilrepo.dev/install.sh | sudo bash

Install via PyPI:

pip install fossilrepo

Run with Docker:

docker pull conflicthq/fossilrepo

Clone from Fossil:

fossil clone https://fossilrepo.io/fossilrepo fossilrepo.fossil

Documentation: fossilrepo.dev
Live demo: fossilrepo.io
GitHub mirror: github.com/ConflictHQ/fossilrepo
PyPI: pypi.org/project/fossilrepo
Docker Hub: hub.docker.com/r/conflicthq/fossilrepo

We built this because we needed it. We are releasing it because we think you might need it too.