Not a sandbox. The layer below sandboxes.

Stop paying for one VM per agent.

AetherFS is the hosted filesystem and coordination layer for file-centric agent fleets. Products call the service over HTTP or gRPC, and users or agents add Aether only when they need a real mount.

Shared bases, copy-on-write sessions, and coordination primitives replace the waste of cloning the same workspace into another idle VM every time you fork work.

  • Shared bases with per-session changes
  • Fast forks instead of booting and cloning another VM
  • HTTP, gRPC, and Aether CLI access
  • Locks, approvals, annotations, and bus events

Hosted session

service     hosted afs-server
source      src_checkout_service_v17
session     sess_design_review
surfaces    http grpc aether
status      pending-approval
review      annotations approvals
persist     checkpoint commit promote

Hosted service

Start with an endpoint, credentials, and one integration path.

Validate sessions, fast forks, and coordination before taking on a larger rollout.

Coordination

Locks, approvals, annotations, and bus events where the work lives.

Migration

Start API-first. Add Aether when a workload truly needs a mount.

How Customers Start

Start with the hosted API. Add local mounts only when needed.

The fastest test path is API integration against the hosted service. Teams point agents at our endpoint, create sessions from a source, read and write files, and only introduce Aether for workloads that still need local filesystem semantics.
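As a sketch of that integration path, assuming illustrative route names and payload shapes (the actual paths, authentication, and schemas come from the service documentation), the first two API calls might be built like this:

```python
import json
import urllib.request

# Hypothetical endpoint; substitute your hosted AetherFS service URL.
ENDPOINT = "https://your-service.example.com"

def create_session_request(source_id):
    """Build a request that creates a session from a source (e.g. a repo snapshot)."""
    body = json.dumps({"source": source_id}).encode()
    return urllib.request.Request(
        f"{ENDPOINT}/v1/sessions",   # illustrative route, not the documented path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def read_file_request(session_id, path):
    """Build a request that reads one file from a session's tree.

    A real client should URL-encode the path; omitted here for clarity.
    """
    return urllib.request.Request(
        f"{ENDPOINT}/v1/sessions/{session_id}/files/{path}",
        method="GET",
    )

req = create_session_request("src_checkout_service_v17")
print(req.method, req.full_url)
```

From there, a pilot is the same loop at scale: create a session per task, read and write through the file routes, and persist only the results worth keeping.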

HTTP API

The default integration path for products, backends, and workflow engines that need the hosted control plane without a local mount.

gRPC services

Use the hosted gRPC services when your client wants direct service contracts, richer streaming, or a protocol-first integration path.

Aether CLI

The easiest migration path when users or agents still need local filesystem semantics for editors, shells, builds, or tests.

Why It Exists

The cost problem nobody is fixing is below the model.

GPU spend gets the attention. What quietly scales with every deployed coding agent is the CPU, RAM, and filesystem cost of giving every worker its own box and its own copy of the same workspace. That is the line item AetherFS attacks.

Agents spend most of their time editing, planning, or waiting.

Idle compute

  • A full VM stays alive even while almost no compute is being used
  • The cost scales linearly with agent count
  • Teams pay for machine boundaries they barely touch

Every agent gets another copy of the same repo and dependencies.

Storage duplication

  • Repeated clone and setup work burns time before the task even starts
  • Forking often means duplicating the whole workspace again
  • The same bytes are paid for over and over

Isolated sandboxes are blind to each other until merge time.

Coordination gaps

  • No built-in intent broadcast, locking, or shared activity view
  • Review state and workflow metadata end up bolted on elsewhere
  • Fork, merge, and handoff stay more expensive than they should be

What AetherFS Changes

The filesystem and coordination layer below your agent fleet.

AetherFS does not run the agents for you. It removes the repeated clone, duplicated storage, and isolated-sandbox coordination problems underneath them.

Copy-on-write sessions

Sessions share a common base and store only their deltas, so the working set grows with real changes instead of full clones.

Shared storage reuse

The same workspace content is reused across sessions, so customers stop paying for the same files over and over.

Forking as metadata

Forking becomes a metadata operation instead of booting another machine and recloning the same repository.

Coordination at the filesystem layer

AetherFS exposes the missing control surfaces: changefeed, message bus, locking, annotations, approvals, and fork-and-review workflows.
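A toy model of two of those surfaces, advisory path locks and review annotations, to show the semantics; the real surfaces are hosted service APIs, and every name here is illustrative:

```python
class Coordinator:
    """In-memory stand-in for two coordination surfaces: locks and annotations."""

    def __init__(self):
        self.locks = {}        # path -> holder of the advisory lock
        self.annotations = []  # review notes attached to paths

    def acquire(self, path, agent):
        if path in self.locks:
            return False       # another agent already holds this path
        self.locks[path] = agent
        return True

    def release(self, path, agent):
        # Only the holder may release its own lock.
        if self.locks.get(path) == agent:
            del self.locks[path]

    def annotate(self, path, note, author):
        self.annotations.append({"path": path, "note": note, "author": author})

c = Coordinator()
```

The point is where this state lives: at the filesystem layer, visible to every session, instead of bolted onto each sandbox separately.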

API-first service surfaces

Products connect over HTTP or gRPC, and add Aether only when users or agents truly need a mounted filesystem.

Fast pilot path

Point agents at our hosted endpoint, create sessions over HTTP or gRPC, and validate the value before taking on a bigger migration.

Session Model

The session is the boundary.

AetherFS is easiest to understand as a session lifecycle: create a workspace, inspect it, mutate it through the right hosted surface, review it, persist it when it matters, then fork, restore, or retire it.

01

Create a session

Start from a repository, snapshot, template, dataset, or blank workspace. The session becomes the unit of work.

02

Inspect the state

Read metadata, fetch manifests, or mount locally with Aether before any mutation begins.

03

Mutate through the right surface

Products can use HTTP or gRPC, and users can switch to Aether when they need local filesystem semantics.

04

Review and coordinate

Annotations, approvals, bus messages, and knowledge entries let people and automation share context without hiding it in the filesystem.

05

Persist the outcome

Capture checkpoints for recovery, commit durable work, or export session content when the result should leave the workspace.

06

Fork, restore, or retire

Keep working, branch for alternatives, restore a checkpoint, or archive the session when it has done its job.
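The six steps above can be sketched as a small state machine. The state names are illustrative, not the service's actual status values:

```python
# Which lifecycle transitions are legal from each state (illustrative only).
ALLOWED = {
    "created":   {"inspected"},
    "inspected": {"mutating"},
    "mutating":  {"mutating", "in-review"},    # mutation can repeat
    "in-review": {"persisted"},
    "persisted": {"forked", "restored", "archived"},
}

class SessionLifecycle:
    """Walks a session through create -> inspect -> mutate -> review -> persist -> fork/restore/retire."""

    def __init__(self):
        self.state = "created"

    def advance(self, next_state):
        if next_state not in ALLOWED.get(self.state, set()):
            raise ValueError(f"cannot go from {self.state} to {next_state}")
        self.state = next_state
```

Keeping the session as the one boundary means every surface, HTTP, gRPC, or an Aether mount, moves the same object through the same lifecycle.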

What Users Actually Get

Not just files. Service surfaces around the files.

The public product surface is more than one route family or one local mount. It covers session lifecycle, filesystem control, collaboration, persistence, reporting, and local user workflows through Aether.

Session lifecycle

Create, inspect, list, fork, checkpoint, restore, archive, and delete workspaces with the session as the clear user-facing boundary.

Filesystem service

Browse trees, retrieve manifests and metadata, read files, patch content, rename paths, manage directories, and use richer file routes when the workflow needs them.

Collaboration and review

Attach annotations, request approvals, publish session-scoped messages, and store structured knowledge without hiding workflow state inside ordinary files.

Persistence and delivery

Use checkpoints for recoverability, commits for durable outcomes, and imports or exports when a result has to cross the service boundary.
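A tiny sketch of the checkpoint-and-restore semantics described here, using an in-memory workspace stand-in rather than the hosted service; a commit or export would then carry the result across the service boundary:

```python
class Workspace:
    """Stand-in for a session's file state with recoverable checkpoints."""

    def __init__(self):
        self.files = {}
        self._checkpoints = []

    def checkpoint(self):
        # Capture the current state; returns an id usable for restore.
        self._checkpoints.append(dict(self.files))
        return len(self._checkpoints) - 1

    def restore(self, checkpoint_id):
        # Roll the workspace back to a captured state.
        self.files = dict(self._checkpoints[checkpoint_id])
```

Checkpoints are for recovery inside the session; durable outcomes go through commits or exports instead.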

Health and reporting

Expose health, usage, analytics, and other non-file signals as first-class service data for support, workflow logic, and reporting.

Aether CLI

Give users a local mount for a remote session, plus cache controls, runtime diagnostics, logs, and metrics on a machine they control.

Two Migration Paths

Mount first for easy savings. Go API-native for the bigger unlock.

Aether mount is the easy migration path when existing tools still expect local paths. For deeper savings, agents can author directly against the hosted HTTP or gRPC surface and skip the full VM-style workspace shape entirely.

Both routes keep the same session model, the same review boundary, and the same durable outcomes.

CLI quickstart

export AFS_AETHER_SERVER_ENDPOINT='https://your-service.example.com'
aether mount --session <session-id> ./workdir
aether doctor
aether status
aether metrics show

Config example

[bridge]
server_endpoint = "https://your-service.example.com"

[logging]
format = "text"

metrics_addr = "127.0.0.1:9464"

Common use cases

Built for fleets where the file tree is the actual unit of work.

Agent-assisted software engineering

Feature work, bug fixes, modernization, security remediation, tests, and infrastructure changes all benefit from cheap forks and explicit review boundaries.

API-first agent fleets

Point agents at the hosted service over HTTP or gRPC, create sessions from a repo, mutate files, and persist only the results worth keeping.

Build-and-test agents with Aether

When an agent still needs local filesystem semantics for builds or tests, mount the remote session locally instead of copying the whole workspace onto a larger VM.

Review-heavy approval flows

Fork one baseline into several proposals, attach evidence and annotations, request approval, and only then commit or promote the winner.

Data and analytics workspaces

Notebooks, SQL migrations, pipeline configs, and structured file trees benefit from the same session and coordination model as codebases.

Short-lived task sandboxes

Create a session, do the work, checkpoint or commit if useful, and retire the live workspace quickly instead of carrying idle boxes forever.

Strong fit

Use AetherFS when the agent fleet economics start hurting.

  • You run or are building a fleet of file-centric agents
  • Per-agent VM cost is becoming a real budget line
  • You need HTTP or gRPC integration plus optional local mounts
  • You mix human review with automation
  • File trees are the primary unit of work

When to simplify

If none of this workflow exists, you may not need it.

  • You only need upload and download
  • There is no local mount requirement
  • You do not need review or approval state
  • There is no real session lifecycle to model

Positioning

Not another sandbox. The filesystem and coordination layer that sits below them.