# Use with AI SDK
Integrate experimental-agent agents with the Vercel AI SDK: `useChat`, custom transports, typed messages, and streaming.

Tool definitions use `tool()` from the `ai` package. Streaming follows the `UIMessageStream` protocol. Frontend integration works through `useChat` from `@ai-sdk/react`.
## Tools
Custom tools are defined using `tool()` from the `ai` package:
```typescript
import { agent } from "experimental-agent";
import { tool } from "ai";
import { z } from "zod";

export const myAgent = agent("my-agent", {
  model: "anthropic/claude-opus-4.6",
  tools: {
    SearchDocs: tool({
      description: "Search the documentation",
      parameters: z.object({
        query: z.string(),
      }),
      execute: async ({ query }) => {
        return { results: [`Result for: ${query}`] };
      },
    }),
  },
});
```

`tool()` is the same function used throughout the AI SDK ecosystem. Any tool compatible with the AI SDK works with experimental-agent.
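Because `execute` is a plain async function, a tool's input/output contract can be exercised without a model call. As a minimal sketch, the standalone function below mirrors the `SearchDocs` execute body above (it is an illustration for testing shapes, not part of the API):

```typescript
// Standalone mirror of the SearchDocs execute body above, so its
// input/output shapes can be checked in isolation (illustration only).
async function searchDocsExecute({ query }: { query: string }) {
  return { results: [`Result for: ${query}`] };
}
```

The object returned here is what appears as the tool part's `output` once execution finishes.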
## Streaming
`session.stream()` returns a `ReadableStream<UIMessageChunk>` that follows the AI SDK's `UIMessageStream` protocol. Return it with `createUIMessageStreamResponse`:
```typescript
import { createUIMessageStreamResponse } from "ai";
import { myAgent } from "@/agent";

export async function sendAndStream({
  chatId,
  message,
}: {
  chatId: string;
  message: string;
}) {
  const session = myAgent.session(chatId);
  await session.send(message);
  const stream = await session.stream();
  return createUIMessageStreamResponse({ stream });
}
```

## useChat and DefaultChatTransport
Use `Chat` and `useChat` from `@ai-sdk/react` with `DefaultChatTransport` from `ai` to build React chat UIs:
"use client";
import { Chat, useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import { useMemo, useState } from "react";
export function ChatUI({ chatId }: { chatId: string }) {
const chat = useMemo(
() =>
new Chat({
id: chatId,
transport: new DefaultChatTransport({
api: `/api/chat/${chatId}`,
prepareSendMessagesRequest: ({ messages }) => ({
body: {
message: messages.at(-1),
interruptIfStreaming: true,
},
}),
prepareReconnectToStreamRequest: (request) => ({
...request,
api: `/api/chat/${chatId}/stream`,
}),
}),
}),
[chatId]
);
const { messages, sendMessage, status } = useChat({ chat });
const [input, setInput] = useState("");
return (
<div>
{messages.map((m) => (
<div key={m.id}>
<strong>{m.role}:</strong>
{m.parts.map((part, i) => {
if (part.type === "text") return <span key={i}>{part.text}</span>;
return null;
})}
</div>
))}
<form
onSubmit={async (e) => {
e.preventDefault();
if (input.trim()) {
await sendMessage({ parts: [{ type: "text", text: input }] });
setInput("");
}
}}
>
<input
value={input}
onChange={(e) => setInput(e.target.value)}
placeholder="Send a message..."
/>
<button type="submit" disabled={status !== "ready"}>
Send
</button>
</form>
</div>
);
}prepareSendMessagesRequestcontrols the request body — send only the last message and includeinterruptIfStreamingto cancel any in-progress response.prepareReconnectToStreamRequestpoints reconnection to a dedicated/streamendpoint.
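The body the transport produces can be seen in isolation. The sketch below mirrors the `prepareSendMessagesRequest` callback above; `UIMessageLike` is a simplified stand-in for the AI SDK's `UIMessage` type, used here only for illustration:

```typescript
// Simplified stand-in for the AI SDK's UIMessage type (illustration only).
type UIMessageLike = {
  id: string;
  role: "user" | "assistant";
  parts: unknown[];
};

// Mirrors the prepareSendMessagesRequest body above: only the last message
// is sent, plus a flag asking the server to cancel any in-progress response.
function buildSendBody(messages: UIMessageLike[]) {
  return {
    message: messages.at(-1),
    interruptIfStreaming: true,
  };
}
```

On the server, a body of this shape maps naturally onto `sendAndStream` from the Streaming section, which replays history from the agent's own storage rather than trusting the client's message list.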
See Frontend Integration for the full pattern with status handling, tool rendering, approvals, and `resumeStream()`.
## Rendering Tool Invocations
Tool invocation parts have a `state` field that tracks execution:
| State | Description |
|---|---|
| `input-available` | Tool called, executing |
| `approval-requested` | Waiting for human approval |
| `approval-responded` | User approved or denied, tool executing |
| `output-available` | Tool finished, result in `output` |
| `output-error` | Tool execution failed |
Render different UI based on state:
```tsx
import type { DynamicToolUIPart } from "ai";

function ToolPart({ part }: { part: DynamicToolUIPart }) {
  switch (part.state) {
    case "input-available":
      return <div>Running {part.toolName}...</div>;
    case "approval-requested":
      return (
        <div>
          {part.toolName} needs approval
          <button onClick={() => approve(part.approval.id)}>
            Approve
          </button>
          <button onClick={() => deny(part.approval.id)}>
            Deny
          </button>
        </div>
      );
    case "output-available":
      return <pre>{JSON.stringify(part.output, null, 2)}</pre>;
    case "output-error":
      return <div>Error: {part.errorText}</div>;
    default:
      return null;
  }
}
```

See Approvals for the approval resolution API.
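The `approve` and `deny` handlers in the example are left undefined; the Approvals page covers the actual resolution API. Purely as an illustration, they could be wired to an HTTP endpoint along these lines (the `/approvals` path and the JSON payload shape are assumptions, not part of experimental-agent):

```typescript
// Hypothetical wiring for the approve/deny handlers used above.
// The endpoint path and payload shape are assumptions for illustration;
// see Approvals for the real resolution API.
function makeApprovalHandlers(chatId: string, fetchFn: typeof fetch = fetch) {
  const respond = (approvalId: string, approved: boolean) =>
    fetchFn(`/api/chat/${chatId}/approvals`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ approvalId, approved }),
    });
  return {
    approve: (approvalId: string) => respond(approvalId, true),
    deny: (approvalId: string) => respond(approvalId, false),
  };
}
```

Injecting `fetchFn` keeps the handlers testable; in a component you would call `makeApprovalHandlers(chatId)` and pass the resulting functions to `ToolPart`.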
## Type Safety with InferUIMessage
Use `InferUIMessage` to get a typed `UIMessage` from your agent. This includes typed tool invocation parts with `input` and `output` types:
```typescript
import type { InferUIMessage } from "experimental-agent";
import type { myAgent } from "@/agent";

type Message = InferUIMessage<typeof myAgent>;
```

You can also use `typeof myAgent.$UIMessage` for the same result:
```typescript
type Message = typeof myAgent.$UIMessage;
```

## Status Updates
The agent emits `AgentStatus` updates during long operations. These arrive as data parts in the stream:
```typescript
type AgentStatus =
  | { type: "sandbox-setup" }
  | { type: "sandbox-setup-cold" }
  | { type: "loading-skills" }
  | { type: "processing-approvals" }
  | { type: "needs-approval" }
  | { type: "thinking" }
  | { type: "custom"; status: string };
```

Status updates are transient: they are not persisted to storage.
## Models
The agent uses the AI SDK's model gateway for model selection. Pass any model identifier:
agent("my-agent", { model: "anthropic/claude-opus-4.6" })
agent("my-agent", { model: "openai/gpt-4o" })
agent("my-agent", { model: "google/gemini-2.0-flash" })Set the corresponding API key as an environment variable (ANTHROPIC_API_KEY, OPENAI_API_KEY, GOOGLE_GENERATIVE_AI_API_KEY).
## Next Steps
- Frontend Integration — Building a complete chat UI with approval handling and reconnection
- Streaming — Stream protocol details, reconnection, and interruption
- API Reference — `session.stream()` and `session.history()` details