---
title: How the Agent Works
description: Architecture and lifecycle of the agent SDK.
type: conceptual
summary: Understand the core architecture — AI SDK, storage, sandbox, and optional workflow — and how messages flow from user input to streamed response.
---

# How the Agent Works



The core loop: user sends a message, the model generates a response, tool calls execute in the sandbox, and results stream back to the client.

## The Core Loop

When a user sends a message, this is what happens:

1. **User sends message** → `session.send("...")`
2. **Model generates response** → The LLM streams tokens and may emit tool calls
3. **Tool calls execute in sandbox** → Read, Write, Bash, etc. run in an isolated environment
4. **Results stream back** → Tool outputs feed back to the model; the cycle repeats until done

Everything persists to storage as it happens. If the client disconnects, conversation history is safe. With workflow enabled, the entire run is durable.
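
The steps above can be sketched as a plain loop. This is a simplified illustration with a fake model and tool executor, not the SDK's actual implementation:

```ts
// Simplified sketch of the agent loop: the "model" either requests a
// tool call or returns final text; tool results are fed back in.
type ModelOutput =
  | { type: "tool-call"; tool: string; input: string }
  | { type: "text"; text: string };

// Fake model: asks for one tool call, then answers using its result.
function model(history: string[]): ModelOutput {
  const lastToolResult = history.find((m) => m.startsWith("tool:"));
  if (!lastToolResult) return { type: "tool-call", tool: "Bash", input: "echo hi" };
  return { type: "text", text: `done (${lastToolResult})` };
}

// Fake sandbox: executes a tool call in "isolation".
function runInSandbox(tool: string, input: string): string {
  return `${tool} output for "${input}"`;
}

export function runAgent(userMessage: string): string {
  const history: string[] = [`user: ${userMessage}`];
  // The cycle repeats until the model stops emitting tool calls.
  while (true) {
    const out = model(history);
    if (out.type === "text") return out.text; // final response streams back
    const result = runInSandbox(out.tool, out.input);
    history.push(`tool: ${result}`); // tool output feeds back to the model
  }
}
```

The real loop streams tokens and persists each part as it happens, but the control flow is the same: generate, execute, feed back, repeat.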

## Agent = Model + System + Tools + Sandbox + Storage

An agent is a single configuration:

```ts title="src/agent.ts"
import { agent } from "experimental-agent";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  system: "You are a helpful coding assistant.",
});
```

Under the hood, an agent bundles:

| Component         | Purpose                                   |
| ----------------- | ----------------------------------------- |
| **Model**         | The LLM (via Vercel AI Gateway)           |
| **System prompt** | Instructions that shape behavior          |
| **Tools**         | Built-in tools + any custom tools you add |
| **Sandbox**       | Isolated environment where tools execute  |
| **Storage**       | Persists sessions, messages, and parts    |

See [Tools](/docs/concepts/tools) for built-in and custom tools. See [Sessions](/docs/concepts/sessions) for conversation lifecycle.
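
For example, a custom tool can be attached alongside the built-ins. This is a sketch: the `tools` option name is an assumption, and the tool shape follows the AI SDK's `tool()` helper — check the Tools docs for the exact schema:

```ts
import { agent } from "experimental-agent";
import { tool } from "ai";
import { z } from "zod";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  system: "You are a helpful coding assistant.",
  // Hypothetical option: attach a custom tool next to the built-ins.
  tools: {
    getTime: tool({
      description: "Return the current server time.",
      inputSchema: z.object({}),
      execute: async () => new Date().toISOString(),
    }),
  },
});
```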

## Workflow for Durability (Opt-In)

By default, the agent runs in-process — bound to the request lifetime. For durability, opt into Vercel Workflow by writing a `"use workflow"` function:

```ts title="workflow.ts"
import { myAgent } from "@/agent";
import type { SessionSendArgs } from "experimental-agent";

export async function agentWorkflow(
  sessionId: string,
  ...args: SessionSendArgs<typeof myAgent>
) {
  "use workflow";
  return await myAgent.session(sessionId).send(...args);
}
```

Then start it from your route:

```ts title="app/api/chat/[chatId]/route.ts"
import { createUIMessageStreamResponse } from "ai";
import { start } from "workflow/api";
import { myAgent } from "@/agent";
import { agentWorkflow } from "./workflow";

export async function POST(
  req: Request,
  { params }: { params: Promise<{ chatId: string }> },
) {
  const { chatId } = await params;
  const { message, opts } = await req.json();

  const session = myAgent.session(chatId);
  const result = await start(agentWorkflow, [chatId, message, opts]);
  const stream = await session.stream(result);
  return createUIMessageStreamResponse({ stream });
}
```

Workflows are durable: they survive crashes, timeouts, and deploys. `send()` detects it's inside a workflow and automatically uses `"use step"` boundaries for retryability. No code changes needed in the agent itself.

Without workflow, everything still works — the run just dies with the request. Use `waitUntil(done)` for background execution beyond the response timeout.
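
A sketch of that non-workflow path, assuming `send()` returns a promise that resolves when the run finishes (`waitUntil` comes from `@vercel/functions`):

```ts
import { waitUntil } from "@vercel/functions";
import { createUIMessageStreamResponse } from "ai";
import { myAgent } from "@/agent";

export async function POST(req: Request) {
  const { chatId, message } = await req.json();
  const session = myAgent.session(chatId);
  const done = session.send(message); // starts the in-process run
  waitUntil(done); // let the run continue past the response
  return createUIMessageStreamResponse({ stream: await session.stream(done) });
}
```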

## Storage Persists Everything

Storage holds what the agent needs to function:

* **Sessions** — Conversation identity and metadata
* **Messages** — User and assistant turns
* **Parts** — Tool calls, tool results, text chunks
* **Sandboxes** — Sandbox records and provider metadata

By default, storage uses the local filesystem (`.agent/`). For production, provide your own `StorageHandlers` backed by any database. See [Storage](/docs/concepts/storage) for details.
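
As a loose illustration of the handler idea — the method names below are made up, and the real `StorageHandlers` interface is documented in [Storage](/docs/concepts/storage) — a backend is just a set of async functions over your own store:

```ts
// Hypothetical handler shape: swap the Map for any database.
interface SessionRecord {
  id: string;
  messages: string[];
}

export function createMemoryHandlers() {
  const sessions = new Map<string, SessionRecord>();
  return {
    async upsertSession(id: string): Promise<SessionRecord> {
      const existing = sessions.get(id) ?? { id, messages: [] };
      sessions.set(id, existing);
      return existing;
    },
    async appendMessage(id: string, message: string): Promise<void> {
      (await this.upsertSession(id)).messages.push(message);
    },
    async getMessages(id: string): Promise<string[]> {
      return sessions.get(id)?.messages ?? [];
    },
  };
}
```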

## Sandbox Provides Isolation

Tools run in a sandbox — an isolated environment. The agent can read files, run bash commands, and start dev servers without touching your production system.

* **Local** — Uses your machine (dev)
* **Vercel** — Managed cloud sandbox (prod)
* **Docker** — Local container
* **Custom** — Your own backend
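
Selecting a backend is part of the agent configuration. This is a sketch only — the `sandbox` option name and its accepted values are assumptions; see the Sandbox docs for the real shape:

```ts
import { agent } from "experimental-agent";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  system: "You are a helpful coding assistant.",
  // Hypothetical option: pick the sandbox backend per environment.
  sandbox: process.env.NODE_ENV === "production" ? "vercel" : "local",
});
```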

See [Sandbox](/docs/concepts/sandbox) for setup and configuration.

## Approvals Gate Tools

Require human approval before sensitive tools run. Map tool names to `true`, `false`, or a function. The agent suspends until the user approves or denies. See [Approvals](/docs/concepts/approvals) for setup and frontend integration.
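
A sketch of that mapping — the `approvals` option name and the function's input shape are assumptions; the tool names match the built-ins mentioned above:

```ts
import { agent } from "experimental-agent";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  system: "You are a helpful coding assistant.",
  // Hypothetical option name; the mapping shape follows the text above.
  approvals: {
    Read: false, // never requires approval
    Bash: true, // always requires approval
    // A function can decide per call (input shape is illustrative):
    Write: ({ path }: { path: string }) => path.startsWith("/etc"),
  },
});
```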

## Next Steps

* **[Sessions](/docs/concepts/sessions)** — Persistent conversations, context, and usage
* **[Tools](/docs/concepts/tools)** — Built-in tools and custom tools with `tool()` from the AI SDK
* **[Sandbox](/docs/concepts/sandbox)** — Sandbox types, setup, and lifecycle
* **[Storage](/docs/concepts/storage)** — Handler-based storage with optional workflow support
* **[Approvals](/docs/concepts/approvals)** — Gate tools behind human approval
* **[Custom Storage](/docs/guides/custom-storage)** — Implement your own storage backend
* **[Frontend](/docs/guides/frontend)** — useChat, tool rendering, approvals, status
* **[Quickstart](/docs/quickstart)** — Create your first agent in 5 minutes
