# Frontend Integration
Integrate experimental-agent agents with React using `@ai-sdk/react`: `useChat` with `DefaultChatTransport`, tool rendering, approval UI, status updates, stream reconnection, and type safety.

Use `@ai-sdk/react`'s `useChat` hook to build chat UIs that handle streaming, reconnection, tool rendering, and approval flows.
**Simpler alternative:** If you're using `handleRequest` on the server, the `useAgent` hook handles transport setup, reconnection, and interruption automatically. This guide covers the manual `useChat` setup for full control.
## Overview

- `agent()` from `experimental-agent`: define your agent
- `tool()` from `ai`: define tools (Vercel AI SDK)
- `useChat` from `@ai-sdk/react`: React hook for chat state and streaming
- `DefaultChatTransport` from `ai`: custom transport that hits your API routes
## Installation

```shell
npm i @ai-sdk/react ai
```

## Basic useChat Setup
```tsx
"use client";

import { Chat, useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import type { AgentStatus } from "experimental-agent";
import { useEffect, useMemo, useState } from "react";

export function ChatUI({
  chatId,
  streamingMessageId,
}: {
  chatId: string;
  streamingMessageId: string | null;
}) {
  const [status, setStatus] = useState<AgentStatus | null>(null);

  const chat = useMemo(
    () =>
      new Chat({
        // Key the Chat by the streaming message so a fresh instance is
        // created when a resumable stream exists.
        id: streamingMessageId ? `${chatId}-${streamingMessageId}` : chatId,
        transport: new DefaultChatTransport({
          api: `/api/chat/${chatId}`,
          prepareSendMessagesRequest: ({ messages }) => {
            const lastAssistant = messages.findLast(
              (m) => m.role === "assistant"
            );
            const lastPartContent = lastAssistant?.parts.at(-1);
            const lastPart =
              lastAssistant && lastPartContent != null
                ? {
                    index: lastAssistant.parts.length - 1,
                    part: lastPartContent,
                  }
                : undefined;
            return {
              body: {
                // Send only the newest message; the server holds history.
                message: messages.at(-1),
                interruptIfStreaming: lastPart ? { lastPart } : true,
              },
            };
          },
          prepareReconnectToStreamRequest: (request) => {
            return { ...request, api: `/api/chat/${chatId}/stream` };
          },
        }),
        onData: (part) => {
          if (part.type === "data-status") {
            setStatus(part.data as AgentStatus);
          }
        },
      }),
    [chatId, streamingMessageId]
  );

  const { messages, sendMessage, status: chatStatus, resumeStream } = useChat({
    chat,
  });
  const [input, setInput] = useState("");

  // Reconnect to an in-flight stream on mount.
  useEffect(() => {
    if (streamingMessageId) {
      resumeStream();
    }
  }, [streamingMessageId, resumeStream]);

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>{" "}
          {m.parts.map((p, i) =>
            p.type === "text" ? <span key={i}>{p.text}</span> : null
          )}
        </div>
      ))}
      {status && <StatusIndicator status={status} />}
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          if (input.trim()) {
            await sendMessage({ parts: [{ type: "text", text: input }] });
            setInput("");
          }
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Send a message..."
        />
        <button
          type="submit"
          disabled={chatStatus !== "ready" && chatStatus !== "streaming"}
        >
          Send
        </button>
      </form>
    </div>
  );
}
```

Key details:
- `prepareSendMessagesRequest` sends only the last message and includes `interruptIfStreaming` with the last part for clean interruption.
- `prepareReconnectToStreamRequest` points to `/api/chat/${chatId}/stream`, a dedicated reconnection endpoint (see API Routes).
- `onData` captures `AgentStatus` updates for loading indicators.
- `streamingMessageId` comes from your session page's server data. When present, `resumeStream()` reconnects to the active stream on mount.
Your backend should expose one endpoint for sending and another for reconnection. See API Routes for backend patterns.
## Rendering Tool Invocations

Tool parts have a `state` field that indicates progress. Render them appropriately:
| State | Meaning |
|---|---|
| `input-available` | Tool called, executing (no approval needed) |
| `approval-requested` | Waiting for human approval |
| `approval-responded` | User approved/denied, tool executing or skipped |
| `output-available` | Tool finished, result available |
| `output-error` | Tool execution failed |
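If you want compact summaries, the states map naturally to short display labels. A sketch; the label strings are illustrative and `toolStateLabel` is not part of the library:

```typescript
// Short display labels for each tool part state from the table above.
const TOOL_STATE_LABELS: Record<string, string> = {
  "input-available": "Running",
  "approval-requested": "Awaiting approval",
  "approval-responded": "Approval resolved",
  "output-available": "Done",
  "output-error": "Failed",
};

// Fall back to the raw state string for any value we don't recognize.
function toolStateLabel(state: string): string {
  return TOOL_STATE_LABELS[state] ?? state;
}
```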
```tsx
import type { UIMessage } from "ai";

type Part = UIMessage["parts"][number];

// Narrow a message part to a tool invocation part ("tool-<name>").
function isToolPart(
  part: Part
): part is Part & {
  state: string;
  input?: unknown;
  output?: unknown;
  approval?: { id: string };
} {
  return part.type.startsWith("tool-") && "state" in part;
}

function MessagePart({ part, chatId }: { part: Part; chatId: string }) {
  if (part.type === "text") return <span>{part.text}</span>;
  if (part.type === "reasoning") {
    return (
      <details>
        <summary>Thinking</summary>
        <pre>{part.text}</pre>
      </details>
    );
  }
  if (isToolPart(part)) {
    const toolName = part.type.replace("tool-", "");
    if (part.state === "approval-requested" && part.approval) {
      return <ToolApproval chatId={chatId} approvalId={part.approval.id} />;
    }
    return (
      <details>
        <summary>
          Tool: {toolName} ({part.state})
        </summary>
        <pre>
          {JSON.stringify({ input: part.input, output: part.output }, null, 2)}
        </pre>
      </details>
    );
  }
  return null;
}
```

## Approval UI
When a tool part is in `approval-requested`, show approve/deny buttons and call your approval route:
```tsx
function ToolApproval({
  chatId,
  approvalId,
}: {
  chatId: string;
  approvalId: string;
}) {
  const [pending, setPending] = useState(false);

  const respond = async (approved: boolean) => {
    setPending(true);
    try {
      await fetch(`/api/chat/${chatId}/approval`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ approvalId, approved }),
      });
    } finally {
      // Reset even if the request fails, so the buttons don't stay disabled.
      setPending(false);
    }
  };

  return (
    <div className="flex gap-2">
      <button disabled={pending} onClick={() => respond(true)}>
        Approve
      </button>
      <button disabled={pending} onClick={() => respond(false)}>
        Deny
      </button>
    </div>
  );
}
```

See Approvals for the backend approval endpoint.
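On the server side, the approval route should validate the JSON body before acting on it. A minimal parser sketch; the `{ approvalId, approved }` shape mirrors the fetch call above, and `parseApprovalBody` is a hypothetical helper, not a library export:

```typescript
// Validate an approval request body: { approvalId: string; approved: boolean }.
// Returns null for anything malformed so the route can respond 400.
function parseApprovalBody(
  body: unknown
): { approvalId: string; approved: boolean } | null {
  if (typeof body !== "object" || body === null) return null;
  const { approvalId, approved } = body as Record<string, unknown>;
  if (typeof approvalId !== "string" || typeof approved !== "boolean") {
    return null;
  }
  return { approvalId, approved };
}
```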
## Handling Status Updates

During long operations, the agent emits `AgentStatus` updates as `data-status` chunks in the stream. Use them for loading indicators:
| Status type | When |
|---|---|
| `sandbox-setup` | Sandbox is initializing |
| `sandbox-setup-cold` | Cold start (no snapshot) |
| `loading-skills` | Loading skill instructions |
| `processing-approvals` | Checking approval rules |
| `needs-approval` | Waiting for user approval |
| `thinking` | Model is generating |
| `custom` | Custom status from your code |
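The `onData` callback in the setup above casts `part.data` directly to `AgentStatus`. If you prefer a runtime check over the cast, a small guard works. This is a sketch with a simplified chunk shape; `statusFromPart` is not part of the library:

```typescript
type DataPart = { type: string; data?: unknown };

// Extract a status object from a stream chunk, or return null if the
// chunk is not a well-formed data-status part.
function statusFromPart(part: DataPart): { type: string } | null {
  if (part.type !== "data-status") return null;
  const data = part.data;
  if (
    typeof data === "object" &&
    data !== null &&
    typeof (data as { type?: unknown }).type === "string"
  ) {
    return data as { type: string };
  }
  return null;
}
```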
Status handling is wired up in the `Chat` setup above via the `onData` callback. Render indicators based on `status.type`:
```tsx
function StatusIndicator({ status }: { status: AgentStatus }) {
  switch (status.type) {
    case "sandbox-setup":
    case "sandbox-setup-cold":
      return <span>Setting up sandbox...</span>;
    case "loading-skills":
      return <span>Loading skills...</span>;
    case "thinking":
      return <span>Thinking...</span>;
    case "needs-approval":
      return <span>Waiting for approval...</span>;
    default:
      return null;
  }
}
```

## Stream Reconnection
When the client disconnects mid-stream (page refresh, tab close), `prepareReconnectToStreamRequest` sends the reconnection request to `/api/chat/${chatId}/stream`. The `resumeStream()` call in the `useEffect` above handles this on mount when `streamingMessageId` is present.

Your stream endpoint returns the existing stream, or an error if none is active:
```ts
import { createUIMessageStreamResponse } from "ai";
import { myAgent } from "@/agent";

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ chatId: string }> }
) {
  const { chatId } = await params;
  const session = myAgent.session(chatId);
  try {
    const stream = await session.stream();
    return createUIMessageStreamResponse({ stream });
  } catch (error) {
    return Response.json({ error: (error as Error).message }, { status: 404 });
  }
}
```

See Streaming for details.
## Loading Message History

Fetch existing messages on mount using `session.history()`. Create a helper that returns the UI payload:
```ts
import { myAgent } from "@/agent";

export async function loadMessages({ chatId }: { chatId: string }) {
  const session = myAgent.session(chatId);
  return await session.history();
}
```

Then fetch on mount and pass the messages to `useChat` as initial state, or use a separate data-fetching pattern. The `Chat` class and `useChat` may accept initial messages; check the AI SDK docs for the exact API.
## Interruption

Let users stop the current generation:
```tsx
const handleInterrupt = async () => {
  await fetch(`/api/chat/${chatId}/interrupt`, { method: "POST" });
};
```

Render a stop button while the chat is streaming, using `chatStatus` from the `useChat` setup above (the `status` state variable holds an `AgentStatus`, not the chat status):

```tsx
{chatStatus === "streaming" && (
  <button onClick={handleInterrupt}>Stop</button>
)}
```

See API Routes for the interrupt backend pattern.
## Type Safety with InferUIMessage

Get typed messages for your agent's tools and parts:
```ts
import type { InferUIMessage } from "experimental-agent";
import type { myAgent } from "@/agent";

type Message = InferUIMessage<typeof myAgent>;
```

Use `Message` with `Chat<Message>` and `useChat` for full type safety on tool parts and custom data.
## Next Steps

- React Hooks: `useAgent`, `useSessionHistory`, `useInterruptSession` for less boilerplate
- API Routes: send/reconnect patterns, approvals, interrupt, session management
- Streaming: reconnection and status handling
- Approvals: tool part states and resolution flow