The durable runtime for long-running AI agents.
Everruns combines durable execution, a clear system shape, integration surfaces, and an operator console for teams running long-lived AI workflows.
A system shape teams can reason about.
State, execution, and interfaces are separated cleanly so teams can understand how the platform behaves under real load.
REST API, agent definitions, session lifecycle, secrets, and event fan-out.
Stateless executors run the reason-act loop from persisted state.
PostgreSQL stores workflow state, events, configuration, and encrypted secrets.
Use the API, SDKs, CLI, or the management UI depending on the team and task.
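As a toy illustration of the executor model described above (a sketch, not Everruns source, and assuming nothing about its internal schema), a stateless worker runs the reason-act loop against externally persisted state, keeping nothing in memory between iterations:

```python
# Toy sketch of a stateless reason-act loop over persisted state.
# A dict stands in for PostgreSQL; none of this is Everruns' actual schema.

STORE = {"session-1": {"step": 0, "done": False}}  # persisted workflow state

def run_step(session_id):
    """One loop iteration: load state, act, persist. The worker holds nothing."""
    state = dict(STORE[session_id])   # load from the store
    state["step"] += 1                # "reason and act"
    if state["step"] >= 3:
        state["done"] = True
    STORE[session_id] = state         # persist progress before returning
    return state

# Any worker -- including a replacement after a crash -- can drive the loop.
while not STORE["session-1"]["done"]:
    run_step("session-1")

print(STORE["session-1"])  # {'step': 3, 'done': True}
```

Because each iteration begins by loading state and ends by persisting it, which process executes the next iteration is irrelevant.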
Built as a platform, not just a runtime.
Everruns spans harnesses, agents, skills, capabilities, MCP servers, apps, and the operator console around them.
Harnesses
Reusable durable execution units with their own dedicated surface in the product.
Agents
Configurable AI workers with optional model overrides, capabilities, and markdown prompts.
Skills
Instruction packages discovered from the workspace filesystem and activated per session.
Capabilities
A registry of tools and behaviors spanning execution, browser, network, storage, UI, and session control.
MCP Servers
Dedicated surface for external model-context providers and tool bridges.
Apps
Deployment layer that connects agents to channels such as Slack.
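To make the Agents card concrete, here is a hypothetical agent definition assembled in Python. Only `name` and `system_prompt` are confirmed fields (they appear in the API example further down this page); `model` and `capabilities` are illustrative guesses at the optional overrides described above:

```python
import json

# Hypothetical agent definition. "name" and "system_prompt" match the curl
# example on this page; "model" and "capabilities" are illustrative stand-ins
# for the optional model overrides and capability registry entries.
agent = {
    "name": "Assistant",
    "system_prompt": "You are helpful.",      # markdown prompts are supported
    "model": "gpt-4o",                        # optional override (assumed name)
    "capabilities": ["execution", "browser"], # assumed registry identifiers
}

payload = json.dumps(agent)
print(payload)
```

Check the Everruns docs for the real field names before relying on this shape.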
Integrations belong in the core story.
Execution sandboxes, evaluation tooling, model interfaces, and MCP bridges make Everruns useful inside real stacks, and that matters more than how quickly it starts locally.
Daytona
Use isolated cloud sandboxes for code execution without changing the durable runtime model.
Braintrust
Connect observability and evaluation workflows to agent traces and real runs.
Open Responses
Use a vendor-neutral model layer instead of rewriting integrations for every provider.
MCP servers
Attach external tools and context providers through a platform-level integration surface.
Try the stack locally in a few minutes.
Docker Compose is useful for evaluation and onboarding. Production deployments still depend on your own topology, providers, and operational model.
- Download the published Docker Compose example.
- Start the control plane, workers, UI, and database with local secrets configured.
- Create an agent, open a session, and stream events back to the client.
A good way to understand the stack quickly before wiring it into a larger environment.
Quick try locally
For most teams, the faster signal is the API: create an agent, start a session, and stream events.
curl -X POST http://localhost:9300/api/v1/agents \
-H "Content-Type: application/json" \
-d '{"name":"Assistant","system_prompt":"You are helpful."}'
curl -X POST http://localhost:9300/api/v1/sessions \
-H "Content-Type: application/json" \
-d '{"agent_id":"{agent_id}"}'
curl -N http://localhost:9300/api/v1/sessions/{session_id}/events
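If the event stream is server-sent events, as the `curl -N` usage suggests (an assumption; check the docs for the actual wire format), the payloads can be decoded with a few lines of Python:

```python
import json

def parse_sse(raw):
    """Split a raw text/event-stream body into decoded JSON payloads."""
    events = []
    for block in raw.split("\n\n"):                      # events are blank-line delimited
        data = [line[6:] for line in block.splitlines()  # strip the "data: " prefix
                if line.startswith("data: ")]
        if data:
            events.append(json.loads("\n".join(data)))
    return events

# Illustrative payloads; real Everruns event shapes may differ.
raw = 'data: {"type": "message.delta", "text": "hi"}\n\ndata: {"type": "done"}\n\n'
print(parse_sse(raw))  # [{'type': 'message.delta', 'text': 'hi'}, {'type': 'done'}]
```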
Durability keeps long-running work resumable.
Long-running work stays observable and resumable even when the infrastructure underneath it moves.
Service restart
Sessions resume from stored workflow state instead of replaying from scratch.
Worker loss
Execution continues because workers are stateless and progress already lives in PostgreSQL.
Long tool run
Event history and in-flight state remain observable over extended tasks.
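The three scenarios above reduce to one property: progress lives in the store, not in the worker. A compressed simulation of the worker-loss case (again with a dict standing in for PostgreSQL; a sketch, not Everruns code):

```python
# Simulate worker loss: worker A checkpoints each completed step to the store,
# dies mid-task, and a fresh worker B resumes without redoing finished work.

store = {"completed": []}        # durable state shared by all workers
TASK = ["plan", "call_tool", "summarize"]
executed = []                    # records real side effects, to prove no rework

def run(crash_after=None):
    """Run the remaining steps, checkpointing after each; optionally crash."""
    for step in TASK:
        if step in store["completed"]:
            continue                        # already done: skip on resume
        executed.append(step)               # the side effect happens once
        store["completed"].append(step)     # checkpoint before moving on
        if step == crash_after:
            return                          # worker dies here

run(crash_after="call_tool")  # worker A finishes two steps, then is lost
run()                         # worker B picks up from the store

print(executed)  # ['plan', 'call_tool', 'summarize'] -- each step ran exactly once
```

Worker B never repeats `plan` or `call_tool` because the checkpoint, not the worker's memory, decides what is left to do.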
Operate the platform with the same clarity you build on it.
Create agents with prompts and capabilities, then manage providers, API keys, members, and connections from the same system.
LLM Providers
Configure providers first, then manage the models available to agents.
API Keys
Programmatic access is managed inside the operator console, not hidden in separate tooling.
Members
Membership management lives in settings alongside providers, API keys, and connections.
Connections
Personal auth and integration surfaces are part of the real operator workflow.
Open source core, built for real systems.
MIT licensed and built in Rust. Start with the repo, then follow the docs into architecture, API, and operations.
github.com/everruns/everruns