Guides

Frontend Integration

Integrate experimental-agent agents with React using @ai-sdk/react: useChat with DefaultChatTransport, tool rendering, approval UI, status updates, stream reconnection, and type safety.

Use @ai-sdk/react's useChat to build chat UIs that handle streaming, reconnection, tool rendering, and approval flows.

Simpler alternative: If you're using handleRequest on the server, the useAgent hook handles transport setup, reconnection, and interruption automatically. This guide covers the manual useChat setup for full control.

Overview

  • agent() from experimental-agent — Define your agent
  • tool() from ai — Define tools (Vercel AI SDK)
  • useChat from @ai-sdk/react — React hook for chat state and streaming
  • DefaultChatTransport from ai — Custom transport that hits your API routes

Installation

npm i @ai-sdk/react ai

Basic useChat Setup

components/chat.tsx
"use client";

import { Chat, useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";
import type { AgentStatus } from "experimental-agent";
import { useEffect, useMemo, useState } from "react";

export function ChatUI({
  chatId,
  streamingMessageId,
}: {
  chatId: string;
  streamingMessageId: string | null;
}) {
  const [status, setStatus] = useState<AgentStatus | null>(null);

  const chat = useMemo(
    () =>
      new Chat({
        id: streamingMessageId
          ? `${chatId}-${streamingMessageId}`
          : chatId,
        transport: new DefaultChatTransport({
          api: `/api/chat/${chatId}`,
          prepareSendMessagesRequest: ({ messages }) => {
            const lastAssistant = messages.findLast(
              (m) => m.role === "assistant"
            );
            const lastPartContent = lastAssistant?.parts.at(-1);
            const lastPart =
              lastAssistant && lastPartContent != null
                ? {
                    index: lastAssistant.parts.length - 1,
                    part: lastPartContent,
                  }
                : undefined;
            return {
              body: {
                message: messages.at(-1),
                interruptIfStreaming: lastPart ? { lastPart } : true,
              },
            };
          },
          prepareReconnectToStreamRequest: (request) => {
            return { ...request, api: `/api/chat/${chatId}/stream` };
          },
        }),
        onData: (part) => {
          if (part.type === "data-status") {
            setStatus(part.data as AgentStatus);
          }
        },
      }),
    [chatId, streamingMessageId]
  );

  const { messages, sendMessage, status: chatStatus, resumeStream } = useChat({ chat });
  const [input, setInput] = useState("");

  useEffect(() => {
    if (streamingMessageId) {
      resumeStream();
    }
  }, [streamingMessageId, resumeStream]);

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          <strong>{m.role}:</strong>{" "}
          {m.parts.map((p, i) => (p.type === "text" ? <span key={i}>{p.text}</span> : null))}
        </div>
      ))}
      {status && <StatusIndicator status={status} />}
      <form
        onSubmit={async (e) => {
          e.preventDefault();
          if (input.trim()) {
            await sendMessage({ parts: [{ type: "text", text: input }] });
            setInput("");
          }
        }}
      >
        <input
          value={input}
          onChange={(e) => setInput(e.target.value)}
          placeholder="Send a message..."
        />
        <button type="submit" disabled={chatStatus !== "ready" && chatStatus !== "streaming"}>
          Send
        </button>
      </form>
    </div>
  );
}

Key details:

  • prepareSendMessagesRequest sends only the last message and includes interruptIfStreaming with the last part for clean interruption.
  • prepareReconnectToStreamRequest points to /api/chat/${chatId}/stream — a dedicated reconnection endpoint (see API Routes).
  • onData captures AgentStatus updates for loading indicators.
  • streamingMessageId comes from your session page's server data. When present, resumeStream() reconnects to the active stream on mount.

Your backend should expose one endpoint for sending and another for reconnection. See API Routes for backend patterns.
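The send endpoint receives the body built by prepareSendMessagesRequest above. A minimal sketch of validating that payload server-side before handing it to the agent; the SendBody type and parseSendBody helper are illustrative, not part of the SDK:

```typescript
// Shape of the body produced by prepareSendMessagesRequest above.
// Illustrative types; widen the part union to match your agent.
type SendBody = {
  message: { role: string; parts: Array<{ type: string }> };
  interruptIfStreaming:
    | true
    | { lastPart: { index: number; part: { type: string } } };
};

// Validate the payload before passing it to the agent session.
function parseSendBody(body: unknown): SendBody {
  const b = body as Partial<SendBody> | null;
  if (!b || typeof b !== "object" || !b.message || !Array.isArray(b.message.parts)) {
    throw new Error("Invalid body: missing message with parts");
  }
  if (b.interruptIfStreaming == null) {
    throw new Error("Invalid body: missing interruptIfStreaming");
  }
  return b as SendBody;
}
```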

Rendering Tool Invocations

Tool parts have a state that indicates progress. Render them appropriately:

  • input-available — Tool called, executing (no approval needed)
  • approval-requested — Waiting for human approval
  • approval-responded — User approved/denied, tool executing or skipped
  • output-available — Tool finished, result available
  • output-error — Tool execution failed
components/message-part.tsx
import type { UIMessage } from "ai";

type Part = UIMessage["parts"][number];

function isToolPart(
  part: Part
): part is Part & { state: string; input?: unknown; output?: unknown; approval?: { id: string } } {
  return part.type.startsWith("tool-") && "state" in part;
}

function MessagePart({ part, chatId }: { part: Part; chatId: string }) {
  if (part.type === "text") return <span>{part.text}</span>;

  if (part.type === "reasoning") {
    return (
      <details>
        <summary>Thinking</summary>
        <pre>{part.text}</pre>
      </details>
    );
  }

  if (isToolPart(part)) {
    const toolName = part.type.replace("tool-", "");
    if (part.state === "approval-requested" && part.approval) {
      return <ToolApproval chatId={chatId} approvalId={part.approval.id} />;
    }
    return (
      <details>
        <summary>Tool: {toolName} ({part.state})</summary>
        <pre>{JSON.stringify({ input: part.input, output: part.output }, null, 2)}</pre>
      </details>
    );
  }

  return null;
}

Approval UI

When a tool part is in approval-requested, show approve/deny buttons and call your approval route:

components/tool-approval.tsx
"use client";

import { useState } from "react";

function ToolApproval({ chatId, approvalId }: { chatId: string; approvalId: string }) {
  const [pending, setPending] = useState(false);

  const respond = async (approved: boolean) => {
    setPending(true);
    try {
      await fetch(`/api/chat/${chatId}/approval`, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ approvalId, approved }),
      });
    } finally {
      setPending(false);
    }
  };

  return (
    <div className="flex gap-2">
      <button disabled={pending} onClick={() => respond(true)}>Approve</button>
      <button disabled={pending} onClick={() => respond(false)}>Deny</button>
    </div>
  );
}

See Approvals for the backend approval endpoint.
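The approval route receives the `{ approvalId, approved }` body posted above. A minimal sketch of validating it server-side before resolving the approval on the session; the ApprovalBody type and helper name are illustrative, not part of the SDK:

```typescript
// Shape of the body posted by the ToolApproval component above.
type ApprovalBody = { approvalId: string; approved: boolean };

// Illustrative guard; the real route would then resolve the approval
// on the agent session (see Approvals).
function parseApprovalBody(body: unknown): ApprovalBody {
  const b = body as Partial<ApprovalBody> | null;
  if (!b || typeof b.approvalId !== "string" || typeof b.approved !== "boolean") {
    throw new Error("Invalid body: expected { approvalId, approved }");
  }
  return { approvalId: b.approvalId, approved: b.approved };
}
```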

Handling Status Updates

During long operations, the agent emits AgentStatus as data-status chunks in the stream. Use them for loading indicators:

  • sandbox-setup — Sandbox is initializing
  • sandbox-setup-cold — Cold start (no snapshot)
  • loading-skills — Loading skill instructions
  • processing-approvals — Checking approval rules
  • needs-approval — Waiting for user approval
  • thinking — Model is generating
  • custom — Custom status from your code

Status handling is included in the Chat setup above via the onData callback. Render indicators based on status.type:

function StatusIndicator({ status }: { status: AgentStatus }) {
  switch (status.type) {
    case "sandbox-setup":
    case "sandbox-setup-cold":
      return <span>Setting up sandbox...</span>;
    case "loading-skills":
      return <span>Loading skills...</span>;
    case "thinking":
      return <span>Thinking...</span>;
    case "needs-approval":
      return <span>Waiting for approval...</span>;
    default:
      return null;
  }
}

Stream Reconnection

When the client disconnects mid-stream (page refresh, tab close), prepareReconnectToStreamRequest sends the reconnection request to /api/chat/${chatId}/stream. The resumeStream() call in the useEffect above handles this on mount when streamingMessageId is present.

Your stream endpoint returns the existing stream or an error if none is active:

app/api/chat/[chatId]/stream/route.ts
import { createUIMessageStreamResponse } from "ai";
import { myAgent } from "@/agent";

export async function GET(
  _req: Request,
  { params }: { params: Promise<{ chatId: string }> }
) {
  const { chatId } = await params;
  const session = myAgent.session(chatId);
  try {
    const stream = await session.stream();
    return createUIMessageStreamResponse({ stream });
  } catch (error) {
    return Response.json({ error: (error as Error).message }, { status: 404 });
  }
}

See Streaming for details.

Loading Message History

Fetch existing messages on mount using session.history(). Create a helper that returns the UI payload:

src/chat.ts
import { myAgent } from "@/agent";

export async function loadMessages({ chatId }: { chatId: string }) {
  const session = myAgent.session(chatId);
  return await session.history();
}

Then fetch on mount and pass the messages to useChat as initial state, or use a separate data-fetching pattern. The Chat class and useChat may accept initial messages; check the AI SDK docs for the exact API.

Interruption

Let users stop the current generation:

const handleInterrupt = async () => {
  await fetch(`/api/chat/${chatId}/interrupt`, { method: "POST" });
};

{chatStatus === "streaming" && (
  <button onClick={handleInterrupt}>Stop</button>
)}

See API Routes for the interrupt backend pattern.

Type Safety with InferUIMessage

Get typed messages for your agent's tools and parts:

import type { InferUIMessage } from "experimental-agent";
import type { myAgent } from "@/agent";

type Message = InferUIMessage<typeof myAgent>;

Use Message with Chat<Message> and useChat for full type safety on tool parts and custom data.
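With Message inferred, helpers written over the part union stay aligned with your agent's tool names. For instance, tool parts follow the `tool-${name}` naming convention, so they can be selected by name; getToolParts here is an illustrative sketch, not an SDK export:

```typescript
// Illustrative helper: tool parts are typed as `tool-${name}`,
// so a name-based filter works on any inferred part union.
type PartLike = { type: string };

function getToolParts<P extends PartLike>(parts: P[], toolName: string): P[] {
  return parts.filter((p) => p.type === `tool-${toolName}`);
}
```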

Next Steps

  • React Hooks — useAgent, useSessionHistory, useInterruptSession for less boilerplate
  • API Routes — Send/reconnect patterns, approvals, interrupt, session management
  • Streaming — Reconnection and status handling
  • Approvals — Tool part states and resolution flow