Vercel AI SDK

Use Confident AI for LLM observability and evals with the Vercel AI SDK in TypeScript

The AI SDK by Vercel is a powerful TypeScript framework for building AI applications with a wide range of LLM providers and models. Confident AI lets you trace and evaluate AI SDK-based LLM applications in just a few lines of code.

Tracing Quickstart

For users in the EU region, please set the OTEL endpoint to the EU version as shown below:

$export CONFIDENT_OTEL_URL="https://eu.otel.confident-ai.com"
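
If you prefer to set the endpoint in code rather than in your shell, you can assign it to process.env before tracing is configured (a minimal sketch; the variable name comes from the export command above):

// Must run before configureAiSdkTracing so the exporter picks up the EU endpoint.
process.env.CONFIDENT_OTEL_URL = "https://eu.otel.confident-ai.com";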

1. Install Dependencies

Run the following command to install the required packages:

$npm install ai deepeval-ts
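
Later steps run your script with npx ts-node; if ts-node and typescript are not already in your project, install them as dev dependencies (skip this if you use a different TypeScript runner):

$npm install -D ts-node typescript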

2. Configure AI SDK

Use DeepEval’s configureAiSdkTracing to trace your LLM operations.

import { generateText } from "ai";
import { configureAiSdkTracing } from "deepeval-ts";

const tracer = configureAiSdkTracing();

const { text } = await generateText({
  model: "openai/gpt-4o",
  prompt: "How to make the best coffee?",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer,
  },
});

console.log(text);

3. Run AI SDK Generation

Run your LLM application by executing your script, for example if it is saved as index.ts:

$npx ts-node index.ts

You can view the traces directly on Confident AI’s traces page inside the observatory.
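
The same experimental_telemetry option works with the AI SDK’s other generation functions, such as streamText (a sketch that configures the tracer the same way as above):

import { streamText } from "ai";
import { configureAiSdkTracing } from "deepeval-ts";

const tracer = configureAiSdkTracing();

// Streaming generations are traced the same way as generateText.
const { textStream } = streamText({
  model: "openai/gpt-4o",
  prompt: "How to make the best coffee?",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer,
  },
});

for await (const chunk of textStream) {
  process.stdout.write(chunk);
}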

Advanced Usage

Logging prompts

If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to configureAiSdkTracing.

import { generateText } from "ai";
import { configureAiSdkTracing, Prompt } from "deepeval-ts";

const prompt = new Prompt({ alias: "PROMPT_ALIAS" });
await prompt.pull();

const tracer = configureAiSdkTracing({
  confident_prompt: prompt,
});

const { text } = await generateText({
  model: "openai/gpt-4o",
  prompt: "How to make the best coffee?",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer,
  },
});

console.log(text);

Logging prompts lets you attribute specific prompts to AI SDK LLM spans. Be sure to pull the prompt before logging it; otherwise, the prompt will not be visible on Confident AI.

Setting trace attributes

Confident AI’s advanced LLM tracing features let teams set certain attributes on each trace when invoking your AI SDK application.

For example, threadId and userId are used to group related traces together, and are useful for chat apps, agents, or any multi-turn interactions. You can learn more about threads here.

You can set these attributes in configureAiSdkTracing from deepeval-ts:

import { generateText } from "ai";
import { configureAiSdkTracing } from "deepeval-ts";

const tracer = configureAiSdkTracing({
  threadId: "123",
  userId: "456",
});

const { text } = await generateText({
  model: "openai/gpt-4o",
  prompt: "How to make the best coffee?",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer,
  },
});

console.log(text);

configureAiSdkTracing supports the following trace attributes:

name (string): The name of the trace. Learn more.
tags (string[]): Tags are string labels that help you group related traces. Learn more.
metadata (object): Attach any metadata to the trace. Learn more.
threadId (string): Supply the thread or conversation ID to view and evaluate conversations. Learn more.
userId (string): Supply the user ID to enable user analytics. Learn more.

Each attribute is optional and works the same way as the native tracing features on Confident AI.
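
For example, a tracer configured with every attribute from the list above (a sketch; the values are illustrative placeholders):

import { configureAiSdkTracing } from "deepeval-ts";

// Every attribute below is optional; set only the ones you need.
const tracer = configureAiSdkTracing({
  name: "coffee-assistant", // the name of the trace
  tags: ["production", "coffee"], // labels that group related traces
  metadata: { appVersion: "1.0.0" }, // arbitrary metadata attached to the trace
  threadId: "123", // groups multi-turn interactions into a thread
  userId: "456", // enables user analytics
});

Pass this tracer to experimental_telemetry exactly as in the earlier examples.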

Evals Usage

Online evals

You can run online evals on your AI SDK application by setting a metricCollection, which will run evaluations on all incoming traces on Confident AI’s servers. This approach is recommended if your agent is in production.

1. Create metric collection

Create a metric collection on Confident AI with the metrics you wish to use to evaluate your AI SDK-based application.

Your metric collection must only contain metrics that evaluate the input and actual output of your AI SDK application.

2. Run evals

You can run evals at both the trace and span level. We recommend creating separate metric collections for each component, since each requires its own evaluation criteria and metrics. You can pass different metric collections, such as metricCollection (for the entire trace), llmMetricCollection (for LLM spans), and toolMetricCollection (for tool spans), to configureAiSdkTracing as shown below:

import { generateText } from "ai";
import { configureAiSdkTracing } from "deepeval-ts";

const tracer = configureAiSdkTracing({
  metricCollection: "metric-collection-name",
  llmMetricCollection: "llm-metric-collection-name",
  toolMetricCollection: "tool-metric-collection-name",
});

const { text } = await generateText({
  model: "openai/gpt-4o",
  prompt: "How to make the best coffee?",
  experimental_telemetry: {
    isEnabled: true,
    tracer: tracer,
  },
});

console.log(text);

All incoming traces will now be evaluated using metrics from your metric collection.

You can view eval results by visiting the traces page inside the observatory on the Confident AI platform.