Add logging to your Vercel AI SDK application with Humanloop.

Add Humanloop observability to a tool-calling chat agent built with the Vercel AI SDK. This guide builds on the AI SDK's Next.js example.

Looking for Node.js? See the guide here.

Prerequisites

Create a Humanloop Account

  1. Create an account or log in to Humanloop

  2. Get a Humanloop API key from Organization Settings.

Add an OpenAI API Key

If you’re the first person in your organization, you’ll need to add an API key to a model provider.

  1. Go to OpenAI and grab an API key.
  2. In Humanloop Organization Settings set up OpenAI as a model provider.

Using the Prompt Editor will use your OpenAI credits in the same way that the OpenAI playground does. Keep your API keys for Humanloop and the model providers private.

Create a new Next.js project.

$ pnpm create next-app@latest my-ai-app
$ cd my-ai-app

To follow this guide, you’ll also need Node.js 18+ installed on your machine.
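You can confirm your installed version from the terminal:

$ node --version # should print v18.0.0 or later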

Install ai, @ai-sdk/react, and @ai-sdk/openai, along with other necessary dependencies.

$ pnpm add ai @ai-sdk/react @ai-sdk/openai zod
$ pnpm add -D @types/node tsx typescript

Add a .env.local file to your project with your Humanloop and OpenAI API keys.

$ touch .env.local
.env.local
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>

Full code

If you’d like to immediately try out the full example, you can copy and paste the code below and run the app.

$ pnpm add ai @ai-sdk/react @ai-sdk/openai zod
$ pnpm add -D @types/node tsx typescript
$ pnpm add @vercel/otel @opentelemetry/api @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation
.env.local
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_KEY>" # Humanloop API key
instrumentation.ts
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({
    serviceName: "humanloop-ai-sdk-agent",
  });
}
app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        humanloopPromptPath: "path/to/prompt",
        humanloopFlowPath: "path/to/flow",
      },
    },
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        parameters: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((p, idx) =>
            p.type === "tool-invocation" || p.type === "source" ? (
              <pre key={idx}>{JSON.stringify(p, null, 2)}</pre>
            ) : p.type === "text" ? (
              <p key={idx}>{p.text}</p>
            ) : (
              <p key={idx}>{p.reasoning}</p>
            )
          )}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
$ pnpm run dev

Create the agent

1. Create a Route Handler

We start with a backend route that handles a chat request and streams back a response from a model. This model can call a function to get the weather in a given location.

app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        parameters: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}
2. Wire up the UI

Now that you have a Route Handler that can query an LLM, it’s time to set up your frontend. The AI SDK’s UI package abstracts the complexity of a chat interface into one hook, useChat. Update your root page to show a chat interface and provide a user message input.

The maxSteps prop allows the model to take multiple “steps” for a given generation, using tool calls to refine its response.

app/page.tsx
'use client';

import { useChat } from '@ai-sdk/react';

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit } = useChat({
    maxSteps: 5,
  });

  return (
    <div className="flex flex-col w-full max-w-md py-24 mx-auto stretch">
      {messages.map(m => (
        <div key={m.id} className="whitespace-pre-wrap">
          {m.role === 'user' ? 'User: ' : 'AI: '}
          {m.parts.map((p, idx) =>
            p.type === "tool-invocation" || p.type === "source" ? (
              <pre key={idx}>{JSON.stringify(p, null, 2)}</pre>
            ) : p.type === "text" ? (
              <p key={idx}>{p.text}</p>
            ) : (
              <p key={idx}>{p.reasoning}</p>
            )
          )}
        </div>
      ))}

      <form onSubmit={handleSubmit}>
        <input
          className="fixed dark:bg-zinc-900 bottom-0 w-full max-w-md p-2 mb-8 border border-zinc-300 dark:border-zinc-800 rounded shadow-xl"
          value={input}
          placeholder="Say something..."
          onChange={handleInputChange}
        />
      </form>
    </div>
  );
}
3. Run the agent

Start your app and give the agent a try.

$ pnpm run dev
The agent calls its tools to answer the user's question.
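You can also exercise the route directly from the command line. A quick sanity check, assuming the dev server is running on the default port 3000 (the reply streams back as raw AI SDK data-stream chunks rather than rendered text):

$ curl -X POST http://localhost:3000/api/chat \
    -H "Content-Type: application/json" \
    -d '{"messages":[{"role":"user","content":"What is the weather in San Francisco?"}]}'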

Log to Humanloop

The agent works and is capable of function calling. However, we can only reason about its behavior from its inputs and outputs.

Humanloop logging allows you to observe the steps taken by the agent, which we will demonstrate below.

We’ll use Vercel AI SDK’s built-in OpenTelemetry tracing to log to Humanloop.

1. Set up OpenTelemetry

Install dependencies.

$ pnpm add @vercel/otel @opentelemetry/sdk-logs @opentelemetry/api-logs @opentelemetry/instrumentation

Create a file called instrumentation.ts in your root or /src directory and add the following code:

instrumentation.ts
import { registerOTel } from "@vercel/otel";

export function register() {
  registerOTel({
    serviceName: "humanloop-ai-sdk-agent",
  });
}
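Depending on your Next.js version, you may need to opt in to the instrumentation hook: Next.js 15 loads instrumentation.ts by default, while Next.js 14 and earlier gate it behind an experimental flag in next.config.js:

next.config.js
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Required for instrumentation.ts on Next.js 14 and earlier
    instrumentationHook: true,
  },
};

module.exports = nextConfig;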

Configure the OpenTelemetry exporter to forward logs to Humanloop.

.env.local
HUMANLOOP_API_KEY=<YOUR_HUMANLOOP_KEY>
OPENAI_API_KEY=<YOUR_OPENAI_KEY>
OTEL_EXPORTER_OTLP_ENDPOINT=https://api.humanloop.com/v5/import/otel
OTEL_EXPORTER_OTLP_PROTOCOL=http/json
OTEL_EXPORTER_OTLP_HEADERS="X-API-KEY=<YOUR_HUMANLOOP_KEY>" # Humanloop API key
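If you prefer configuring the exporter in code rather than through environment variables, @vercel/otel also accepts a custom trace exporter. A minimal sketch, assuming the OTLPHttpJsonTraceExporter export from @vercel/otel and the standard /v1/traces suffix on the OTLP endpoint:

instrumentation.ts
import { registerOTel, OTLPHttpJsonTraceExporter } from "@vercel/otel";

export function register() {
  registerOTel({
    serviceName: "humanloop-ai-sdk-agent",
    // Send spans straight to Humanloop's OTLP trace endpoint
    traceExporter: new OTLPHttpJsonTraceExporter({
      url: "https://api.humanloop.com/v5/import/otel/v1/traces",
      headers: { "X-API-KEY": process.env.HUMANLOOP_API_KEY ?? "" },
    }),
  });
}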
2. Trace AI SDK calls

The telemetry metadata associates Logs with your Files on Humanloop.

We will use a Humanloop Prompt to log LLM calls, and a Humanloop Flow to group related generation calls into a trace.

The humanloopPromptPath specifies the path to a Prompt and the humanloopFlowPath specifies the path to a Flow.

app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, tool } from "ai";
import { z } from "zod";

export const maxDuration = 30;

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-4o"),
    messages,
    experimental_telemetry: {
      isEnabled: true,
      metadata: {
        humanloopPromptPath: "path/to/prompt",
        humanloopFlowPath: "path/to/flow", // Optional
      },
    },
    tools: {
      weather: tool({
        description: "Get the weather in a location (fahrenheit)",
        parameters: z.object({
          location: z.string().describe("The location to get the weather for"),
        }),
        execute: async ({ location }) => {
          const temperature = Math.round(Math.random() * (90 - 32) + 32);
          return {
            location,
            temperature,
          };
        },
      }),
    },
  });

  return result.toDataStreamResponse();
}

Restart your app, and have a conversation with the agent.

3. Explore logs on Humanloop

Now you can explore your logs on the Humanloop platform, and see the steps taken by the agent during your conversation.

The logs capture the agent calling its tools to answer the user's question.

Debugging

If you run into any issues, add OpenTelemetry debug logging to ensure the Exporter is working correctly.

$ pnpm add @opentelemetry/api
instrumentation.ts
import { registerOTel } from "@vercel/otel";
import { diag, DiagConsoleLogger, DiagLogLevel } from "@opentelemetry/api";

diag.setLogger(new DiagConsoleLogger(), DiagLogLevel.DEBUG);

export function register() {
  registerOTel({
    serviceName: "humanloop-ai-sdk-agent",
  });
}

Next steps

Logging is the first step to observing your AI product. To go further, read the Humanloop guides on evals.
