The workspace layer for AI agent infrastructure

One repo, thousands of sessions. Zero duplicates.

AetherFS gives every agent an isolated workspace without cloning the repo. Copy-on-write sessions share a common base, start in seconds, and run at a fraction of the cost of a dedicated VM.

Speed

<10s

Session startup vs. 3–5 min for VM clone

Storage

<1%

Storage overhead for typical edits vs. 100% duplication

Concurrency

1,000+

Concurrent isolated sessions per host vs. dozens

Savings

5x

Infrastructure cost reduction by eliminating idle VM waste

The Problem

Your agents are waiting on infra, not writing code.

Traditional agent platforms use one VM per agent, even though over 70% of VM time is idle during planning, editing, and review. Each clone-and-setup cycle takes 3–5 minutes per session, and costs grow linearly with the number of agents.

Cut Your Infra Costs

VM-Based Inefficiency

Every session boots an OS, clones the full repo, and reinstalls dependencies. The previous session did the same thing 30 seconds ago.

Developer-Agent Friction

Agents and developers work in different environments with no shared state. Handing off work means repackaging artifacts. There's no way for a human to review what an agent just did.

Cost Inefficiency

Most of your cloud spend goes to idle compute. Agents can't share dependencies or artifacts, so every session duplicates the same work. At $0.50/hr per agent, it adds up fast.

Complex Session Management

Forking is expensive, checkpointing doesn't exist, and rollback is manual. Fan out 50 agents on the same codebase and you get 50 full clones.

How AetherFS Solves It

Replace per-agent VMs with a shared foundation.

Every session is a lightweight, in-memory overlay that records only the differences from a shared, immutable base repository. File contents are stored once in a deduplicated object store shared across all sessions.

Copy-on-Write Sessions

Sessions start from a shared base in milliseconds. Forking is a metadata operation, not a full clone. Forking a 312 MB repo adds less than 1% of its size in new storage.
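
To make the model concrete, here is a minimal Rust sketch of a copy-on-write session over a shared base. It is illustrative only: the types, field names, and hash values are hypothetical, not the AetherFS implementation.

    use std::collections::HashMap;
    use std::sync::Arc;

    // The immutable base: path -> content hash, shared by every session.
    type BaseIndex = Arc<HashMap<String, String>>;

    // A session stores only its own changes; everything else resolves to the base.
    #[derive(Clone)]
    struct Session {
        base: BaseIndex,
        overlay: HashMap<String, String>, // path -> content hash of edited files
        deleted: Vec<String>,             // paths masked out in this session
    }

    impl Session {
        fn new(base: BaseIndex) -> Self {
            Session { base, overlay: HashMap::new(), deleted: Vec::new() }
        }

        // Forking copies only session metadata (the small overlay), never file contents.
        fn fork(&self) -> Session {
            self.clone()
        }

        // Reads fall through to the shared base when the session has no local edit.
        fn resolve(&self, path: &str) -> Option<&String> {
            if self.deleted.iter().any(|p| p.as_str() == path) {
                return None;
            }
            self.overlay.get(path).or_else(|| self.base.get(path))
        }
    }

    fn main() {
        let base: BaseIndex = Arc::new(HashMap::from([
            ("src/main.rs".to_string(), "hash-aaaa".to_string()),
        ]));
        let mut a = Session::new(base.clone());
        a.overlay.insert("src/main.rs".to_string(), "hash-bbbb".to_string());
        let b = a.fork(); // metadata-only fork
        assert_eq!(b.resolve("src/main.rs"), Some(&"hash-bbbb".to_string()));
    }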

Zero Duplicate Storage

The deduplicating object store indexes all file content by hash, so identical files are stored once across sessions. Storage scales with the size of your changes, not the number of agents. 90%+ deduplication in practice.
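
A rough sketch of how content-addressed deduplication works in principle. This is a toy stand-in, not the AetherFS object store; it uses the blake3 crate for hashing, but any cryptographic hash works.

    use std::collections::HashMap;

    #[derive(Default)]
    struct ObjectStore {
        blobs: HashMap<String, Vec<u8>>, // hash -> file contents, stored once
        refs: HashMap<String, usize>,    // hash -> how many session files point at it
    }

    impl ObjectStore {
        // Writing identical bytes from any number of sessions stores them once.
        fn put(&mut self, data: &[u8]) -> String {
            let key = blake3::hash(data).to_hex().to_string();
            self.blobs.entry(key.clone()).or_insert_with(|| data.to_vec());
            *self.refs.entry(key.clone()).or_insert(0) += 1;
            key
        }

        // Dropping a reference frees the blob only when no session uses it anymore.
        fn release(&mut self, key: &str) {
            if let Some(n) = self.refs.get_mut(key) {
                *n -= 1;
                if *n == 0 {
                    self.refs.remove(key);
                    self.blobs.remove(key);
                }
            }
        }
    }

    fn main() {
        let mut store = ObjectStore::default();
        let a = store.put(b"fn main() {}");
        let b = store.put(b"fn main() {}"); // same content, same key, no extra copy
        assert_eq!(a, b);
        assert_eq!(store.blobs.len(), 1);
        store.release(&b);
    }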

Session ID Handoff

One session ID works across HTTP, gRPC, and local FUSE mounts. Backend creates, agent mutates, human reviews. No images, no tarballs, no repackaging.
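
For example, a handoff over HTTP might look like the sketch below. The endpoint paths, JSON fields, and port are assumptions for illustration, not the documented AetherFS API; it uses the reqwest (blocking) and serde_json crates.

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let api = "http://localhost:8080"; // assumed gateway address
        let http = reqwest::blocking::Client::new();

        // 1. Backend creates a session from the shared base and gets back an ID.
        let session: serde_json::Value = http
            .post(format!("{api}/v1/sessions"))
            .json(&json!({ "base": "git:main" }))
            .send()?
            .json()?;
        let id = session["id"].as_str().unwrap_or_default().to_string();

        // 2. Agent mutates the workspace using the same ID.
        http.put(format!("{api}/v1/sessions/{id}/files/src/lib.rs"))
            .body("pub fn answer() -> u32 { 42 }")
            .send()?;

        // 3. Reviewer reads the change from the same session, no repackaging.
        let contents = http
            .get(format!("{api}/v1/sessions/{id}/files/src/lib.rs"))
            .send()?
            .text()?;
        println!("{contents}");
        Ok(())
    }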

Built-in Review Gates

Checkpoints, approvals, and annotations are first-class objects. Agents request human sign-off before committing. Reviewers inspect and approve without leaving the session. No separate PR workflow.
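
As a sketch, review-gate objects could be modeled roughly like this; the field names are assumptions, not the AetherFS schema.

    enum ApprovalState {
        Pending,
        Approved { by: String },
        Rejected { by: String, reason: String },
    }

    struct Checkpoint {
        session_id: String,
        tag: String,              // e.g. "pre-commit"
        annotations: Vec<String>, // reviewer notes attached in place
        approval: ApprovalState,
    }

    impl Checkpoint {
        // An agent blocks on this before committing; a reviewer flips the state
        // from inside the same session, with no separate PR workflow.
        fn ready_to_commit(&self) -> bool {
            matches!(self.approval, ApprovalState::Approved { .. })
        }
    }

    fn main() {
        let gate = Checkpoint {
            session_id: "sess-42".into(),
            tag: "pre-commit".into(),
            annotations: vec!["looks good, minor nit in tests".into()],
            approval: ApprovalState::Approved { by: "alice".into() },
        };
        assert!(gate.ready_to_commit());
    }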

Performance Benchmarks

Measured, not marketed

Real-world session fork benchmark on a 312 MB repository. All operations served from the shared object store — no network I/O required.

Without AetherFS → With AetherFS

Cold start: 3–5 min (VM boot + full clone) → <10s (session created from shared base)
Storage per agent: 312 MB (full workspace copy every time) → 2.8 MB (<1% of repo size for typical edits)
Cost per agent: $0.50/hr (70%+ idle waste included) → $0.10/hr (<10% idle waste, 5x cost reduction)
Density: 10–20 concurrent agents per host before infra limits → 1,000+ per host (50x density on the same shared base)
Handoff between agent and reviewer: repackage an image/tarball → pass a session ID (same workspace, instant handoff)

The Platform

Four pillars of efficiency and scale.

AetherFS replaces the wasteful VM-per-agent model with a unified substrate built on battle-tested patterns from Dropbox, Google Drive, Bazel, and Amazon Elastic Block Store.

Pillar 01

Copy-on-Write Overlays

Instant speed, minimal storage

Every session is a lightweight overlay that only records differences from a shared, immutable base. File contents are stored once in a hash-indexed object store with an LRU cache and bounded memory. Forking is a near-zero-cost metadata operation.
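
The bounded-memory LRU idea, reduced to a toy Rust sketch. This is illustrative only, not the AetherFS cache implementation.

    use std::collections::{HashMap, VecDeque};

    struct BoundedCache {
        budget: usize,                     // maximum bytes held in memory
        used: usize,
        entries: HashMap<String, Vec<u8>>, // content hash -> bytes
        order: VecDeque<String>,           // least recently used at the front
    }

    impl BoundedCache {
        fn new(budget: usize) -> Self {
            BoundedCache { budget, used: 0, entries: HashMap::new(), order: VecDeque::new() }
        }

        fn get(&mut self, key: &str) -> Option<&Vec<u8>> {
            if self.entries.contains_key(key) {
                // Mark as most recently used.
                self.order.retain(|k| k.as_str() != key);
                self.order.push_back(key.to_string());
            }
            self.entries.get(key)
        }

        fn put(&mut self, key: String, data: Vec<u8>) {
            self.used += data.len();
            self.order.retain(|k| k != &key);
            self.order.push_back(key.clone());
            if let Some(old) = self.entries.insert(key, data) {
                self.used -= old.len();
            }
            // Evict least recently used blobs until memory is back under budget.
            while self.used > self.budget {
                match self.order.pop_front() {
                    Some(victim) => {
                        if let Some(bytes) = self.entries.remove(&victim) {
                            self.used -= bytes.len();
                        }
                    }
                    None => break,
                }
            }
        }
    }

    fn main() {
        let mut cache = BoundedCache::new(1024);
        cache.put("hash-aaaa".into(), vec![0u8; 600]);
        cache.put("hash-bbbb".into(), vec![0u8; 600]); // evicts hash-aaaa to stay under budget
        assert!(cache.get("hash-aaaa").is_none());
        assert!(cache.get("hash-bbbb").is_some());
    }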

Pillar 02

Unified Control Plane

Collaboration and governance

A central Session Manager persists all metadata, checkpoints, and tags. Coordination services provide a message bus for real-time events, pessimistic locks for critical sections, and formal approval workflows. All events flow through a single, auditable log of record.
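
Conceptually, control-plane events land in one append-only log. A toy sketch, with event names that are assumptions rather than the AetherFS schema:

    use std::time::SystemTime;

    enum Event {
        SessionCreated { session_id: String },
        CheckpointTagged { session_id: String, tag: String },
        LockAcquired { session_id: String, path: String },
        ApprovalGranted { session_id: String, reviewer: String },
    }

    struct AuditLog {
        entries: Vec<(SystemTime, Event)>, // one ordered, auditable record of everything
    }

    impl AuditLog {
        fn record(&mut self, event: Event) {
            self.entries.push((SystemTime::now(), event));
        }
    }

    fn main() {
        let mut log = AuditLog { entries: Vec::new() };
        log.record(Event::SessionCreated { session_id: "sess-42".into() });
        log.record(Event::ApprovalGranted { session_id: "sess-42".into(), reviewer: "alice".into() });
        println!("{} audited events", log.entries.len());
    }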

Pillar 03

Decoupled Compute

Eliminate idle spend

A ProcessService and a pool of containerized workers execute commands on demand. Jobs dispatch via a durable queue, run in isolated containers with resource limits, and stream results back. The 95%+ of an agent's lifecycle spent planning or editing generates near-zero compute spend.
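
A minimal sketch of the dispatch pattern, with a standard-library channel standing in for the durable queue and container isolation omitted. Names here are illustrative, not the AetherFS API.

    use std::sync::mpsc;
    use std::thread;

    struct Job {
        session_id: String,
        command: String, // e.g. "cargo test"
    }

    fn main() {
        let (queue, jobs) = mpsc::channel::<Job>();

        // A pooled worker only consumes compute while a job is actually running;
        // idle agent time (planning, editing, review) dispatches nothing.
        let worker = thread::spawn(move || {
            for job in jobs {
                println!("[worker] session {} -> `{}`", job.session_id, job.command);
                // ...run in an isolated container with resource limits,
                //    stream results back to the session...
            }
        });

        queue
            .send(Job { session_id: "sess-42".into(), command: "cargo test".into() })
            .unwrap();
        drop(queue); // closing the queue ends the worker loop
        worker.join().unwrap();
    }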

Pillar 04

Dual Access Model

Maximum adoption, zero rewrite

Sessions are accessible as a mounted local directory via FUSE/WebDAV (no agent rewrite required — even npm install works), or programmatically through a rich gRPC/HTTP API for reading files, submitting changes, and subscribing to events.
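
Both surfaces reach the same session. In the sketch below, the mount path and gateway URL are assumptions for illustration, not documented AetherFS defaults; the API call uses the reqwest crate.

    use std::fs;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let session_id = "sess-42";

        // Filesystem surface: the session is an ordinary local directory (FUSE/WebDAV),
        // so unmodified tools -- editors, shells, even `npm install` -- just work.
        let from_mount =
            fs::read_to_string(format!("/mnt/aether/{session_id}/package.json"))?;

        // API surface: the same file through the HTTP gateway.
        let from_api = reqwest::blocking::get(format!(
            "http://localhost:8080/v1/sessions/{session_id}/files/package.json"
        ))?
        .text()?;

        assert_eq!(from_mount, from_api); // one session, two surfaces
        Ok(())
    }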

Access Surfaces

Use HTTP or gRPC first. Add Aether CLI only when a real mount is needed.

HTTP

JSON‑first integration for backends and workflow engines, with RESTful endpoints exposed via an Axum-based gateway.

gRPC

Streaming, binary-optimized, auto-generated clients, powered by twelve core services: FileSystem, Session, Persistence, MessageBus, Collaboration, Oversight, and more.

Aether CLI / FUSE

Network-backed storage that feels local, with sub-1 ms cache hits, intelligent prefetching, and write-back optimization. Expose it as a local mount for editors, shells, and builds.

Architecture Overview

Layered overlay filesystem, built in Rust.

System Layers

Agent or developer requests are sent via gRPC, FUSE, WebDAV, or HTTP to the Session Manager, which forwards them to the overlay filesystem. The overlay combines a writable in‑RAM layer with a read‑only underlay (Git, S3, local, or empty). Beneath the overlay, the Content‑Addressable Store manages deduplication and reference counting, with support from the Aether multi‑level cache.
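
Put together, a read resolves top-down through those layers. A simplified Rust sketch with placeholder types, not AetherFS internals:

    use std::collections::HashMap;

    struct Layers {
        overlay: HashMap<String, Vec<u8>>, // writable in-RAM layer: the session's own edits
        underlay: HashMap<String, String>, // read-only base (Git/S3/local): path -> content hash
        cache: HashMap<String, Vec<u8>>,   // multi-level cache, keyed by content hash
        cas: HashMap<String, Vec<u8>>,     // content-addressable store of deduplicated blobs
    }

    impl Layers {
        fn read(&mut self, path: &str) -> Option<Vec<u8>> {
            // 1. Session edits win.
            if let Some(bytes) = self.overlay.get(path) {
                return Some(bytes.clone());
            }
            // 2. The underlay resolves the path to a content hash.
            let hash = self.underlay.get(path)?.clone();
            // 3. Serve from cache when possible...
            if let Some(bytes) = self.cache.get(&hash) {
                return Some(bytes.clone());
            }
            // 4. ...otherwise fetch from the deduplicated store and warm the cache.
            let bytes = self.cas.get(&hash)?.clone();
            self.cache.insert(hash, bytes.clone());
            Some(bytes)
        }
    }

    fn main() {
        let mut fs = Layers {
            overlay: HashMap::new(),
            underlay: HashMap::from([("README.md".to_string(), "hash-aaaa".to_string())]),
            cache: HashMap::new(),
            cas: HashMap::from([("hash-aaaa".to_string(), b"# AetherFS".to_vec())]),
        };
        assert_eq!(fs.read("README.md"), Some(b"# AetherFS".to_vec()));
        assert!(fs.cache.contains_key("hash-aaaa")); // the next read hits the cache
    }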

Run more agents. Cut your infra costs.

Replace your VM-per-agent pipeline with a shared workspace layer. Start in milliseconds, commit only what changed, retire the rest.

©AetherFS 2026
