# How the Agent Works
Architecture and lifecycle of the agent SDK.
The core loop: user sends a message, the model generates a response, tool calls execute in the sandbox, and results stream back to the client.
## The Core Loop
When a user sends a message, this is what happens:
- User sends message → `session.send("...")`
- Model generates response → the LLM streams tokens and may emit tool calls
- Tool calls execute in sandbox → `Read`, `Write`, `Bash`, etc. run in an isolated environment
- Results stream back → tool outputs feed back to the model; the cycle repeats until done
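The loop above can be sketched in a few lines. This is a self-contained illustration, not the SDK's implementation: the model and the sandbox tools are stubs, and every name here is invented for the example.

```ts
type ToolCall = { tool: string; input: string };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

// Stub model: emits one tool call on the first turn, then a final answer.
function fakeModel(history: string[]): ModelTurn {
  if (!history.some((entry) => entry.startsWith("tool:"))) {
    return { text: "Let me check.", toolCalls: [{ tool: "Read", input: "README.md" }] };
  }
  return { text: "Done.", toolCalls: [] };
}

// Stub sandbox: real tools would execute in an isolated environment.
const sandboxTools: Record<string, (input: string) => string> = {
  Read: (path) => `contents of ${path}`,
};

// The core loop: model turn → execute tool calls → feed results back,
// repeating until the model stops requesting tools.
function runLoop(userMessage: string): string[] {
  const history: string[] = [`user: ${userMessage}`];
  for (;;) {
    const turn = fakeModel(history);
    history.push(`assistant: ${turn.text}`);
    if (turn.toolCalls.length === 0) break;
    for (const call of turn.toolCalls) {
      history.push(`tool: ${sandboxTools[call.tool](call.input)}`);
    }
  }
  return history;
}
```

In the real SDK, each iteration is also the point where parts are persisted to storage and streamed to the client.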
Everything persists to storage as it happens. If the client disconnects, conversation history is safe. If using workflow, the entire run is durable.
## Agent = Model + System + Tools + Sandbox + Storage
An agent is a single configuration:
```ts
import { agent } from "experimental-agent";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  system: "You are a helpful coding assistant.",
});
```

Under the hood, an agent bundles:
| Component | Purpose |
|---|---|
| Model | The LLM (via Vercel AI Gateway) |
| System prompt | Instructions that shape behavior |
| Tools | Built-in tools + any custom tools you add |
| Sandbox | Isolated environment where tools execute |
| Storage | Persists sessions, messages, and parts |
See Tools for built-in and custom tools. See Sessions for conversation lifecycle.
## Workflow for Durability (Opt-In)
By default, the agent runs in-process — bound to the request lifetime. For durability, opt into Vercel Workflow by writing a "use workflow" function:
```ts
import { myAgent } from "@/agent";
import type { SessionSendArgs } from "experimental-agent";

export async function agentWorkflow(
  sessionId: string,
  ...args: SessionSendArgs<typeof myAgent>
) {
  "use workflow";
  return await myAgent.session(sessionId).send(...args);
}
```

Then start it from your route:
```ts
import { createUIMessageStreamResponse } from "ai";
import { start } from "workflow/api";
import { myAgent } from "@/agent";
import { agentWorkflow } from "./workflow";

const session = myAgent.session(chatId);
const result = await start(agentWorkflow, [chatId, message, opts]);
const stream = await session.stream(result);
return createUIMessageStreamResponse({ stream });
```

Workflows are durable: they survive crashes, timeouts, and deploys. `send()` detects that it is running inside a workflow and automatically wraps its work in `"use step"` boundaries for retryability; no changes to the agent itself are needed.
Without a workflow, everything still works; the run is simply bound to the request and ends with it. Use `waitUntil(done)` for background execution beyond the response timeout.
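The `waitUntil` pattern can be modeled in a few lines. The stub below mimics `waitUntil` from `@vercel/functions`, which accepts a promise and keeps the invocation alive until it settles; the handler and the `done` promise are illustrative, not the SDK's API.

```ts
// Stub of waitUntil: the real one (from @vercel/functions) tells the
// platform to keep the invocation alive until registered promises settle.
const pending: Promise<unknown>[] = [];
function waitUntil(promise: Promise<unknown>): void {
  pending.push(promise);
}

// Illustrative handler: the response returns immediately while the
// agent run (`done`) continues in the background.
function handler(): string {
  const done = new Promise<string>((resolve) =>
    setTimeout(() => resolve("run finished"), 10),
  );
  waitUntil(done);
  return "response sent";
}
```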
## Storage Persists Everything
Storage holds what the agent needs to function:
- Sessions — Conversation identity and metadata
- Messages — User and assistant turns
- Parts — Tool calls, tool results, text chunks
- Sandboxes — Sandbox records and provider metadata
By default, storage uses the local filesystem (`.agent/`). For production, provide your own `StorageHandlers` backed by any database. See Storage for details.
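An in-memory sketch of what a storage backend might look like. The handler names here (`saveMessage`, `listMessages`) are assumptions for illustration, not the SDK's actual `StorageHandlers` interface; a production backend would implement the same shape against Postgres, Redis, or any other store.

```ts
type StoredMessage = { sessionId: string; role: string; content: string };

// Hypothetical storage handlers backed by a Map, keyed by session ID.
function createMemoryStorage() {
  const messages = new Map<string, StoredMessage[]>();
  return {
    saveMessage(msg: StoredMessage): void {
      const list = messages.get(msg.sessionId) ?? [];
      list.push(msg);
      messages.set(msg.sessionId, list);
    },
    listMessages(sessionId: string): StoredMessage[] {
      return messages.get(sessionId) ?? [];
    },
  };
}
```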
## Sandbox Provides Isolation
Tools run in a sandbox — an isolated environment. The agent can read files, run bash commands, and start dev servers without touching your production system.
- Local — Uses your machine (dev)
- Vercel — Managed cloud sandbox (prod)
- Docker — Local container
- Custom — Your own backend
See Sandbox for setup and configuration.
## Approvals Gate Tools
Require human approval before sensitive tools run. Map tool names to `true`, `false`, or a function. The agent suspends until the user approves or denies. See Approvals for setup and frontend integration.
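That mapping might look like the sketch below. The policy shape and the `needsApproval` helper are assumptions drawn from the description above, not the SDK's API (assumption: `true` means the tool requires approval).

```ts
type ApprovalPolicy = boolean | ((input: unknown) => boolean);

// Hypothetical policy map: true = always ask, false = never ask,
// a function decides per call based on the tool's input.
const approvals: Record<string, ApprovalPolicy> = {
  Bash: true,   // always requires approval
  Read: false,  // never requires approval
  Write: (input) => // ask only for writes outside tmp/
    !String((input as { path?: string }).path ?? "").startsWith("tmp/"),
};

function needsApproval(tool: string, input: unknown): boolean {
  const policy = approvals[tool];
  if (typeof policy === "function") return policy(input);
  return policy ?? false; // unlisted tools run without approval (assumption)
}
```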
## Next Steps
- Sessions — Persistent conversations, context, and usage
- Tools — Built-in tools and custom tools with `tool()` from the AI SDK
- Sandbox — Sandbox types, setup, and lifecycle
- Storage — Handler-based storage with optional workflow support
- Approvals — Gate tools behind human approval
- Custom Storage — Implement your own storage backend
- Frontend — `useChat`, tool rendering, approvals, status
- Quickstart — Create your first agent in 5 minutes