Vercel AI SDK
The AI SDK by Vercel is a powerful TypeScript framework for building AI applications on top of a wide range of LLM providers and models. Confident AI lets you trace and evaluate AI SDK-based LLM applications in just a few lines of code.
Tracing Quickstart
For users in the EU region, please set the OTEL endpoint to the EU version as shown below:
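A sketch of the override, assuming the standard OpenTelemetry exporter environment variable is respected and that the EU endpoint lives at otel.eu.confident-ai.com (confirm the exact URL against your Confident AI dashboard):

```bash
# Assumed EU endpoint URL; confirm against your Confident AI dashboard.
export OTEL_EXPORTER_OTLP_ENDPOINT="https://otel.eu.confident-ai.com"
```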
Configure AI SDK
Use DeepEval’s configureAiSdkTracing to trace your LLM operations.
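A minimal setup sketch, assuming configureAiSdkTracing reads your Confident AI API key from the CONFIDENT_API_KEY environment variable (an assumption; check your environment configuration):

```ts
// tracing.ts
import { configureAiSdkTracing } from 'deepeval-ts';

// Call once at application startup, before any AI SDK calls, so that
// spans emitted by the AI SDK are exported to Confident AI.
configureAiSdkTracing();
```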
Generate Text
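A generateText sketch; the model ID and prompt are placeholders, and experimental_telemetry is the AI SDK's own OpenTelemetry flag:

```ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is the capital of France?',
  // Turns on the AI SDK's built-in OpenTelemetry instrumentation,
  // which the tracer configured above picks up.
  experimental_telemetry: { isEnabled: true },
});

console.log(text);
```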
Stream Text
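A streamText sketch under the same assumptions; the trace is finalized once the stream is fully consumed:

```ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

const result = streamText({
  model: openai('gpt-4o-mini'),
  prompt: 'Write a haiku about observability.',
  experimental_telemetry: { isEnabled: true },
});

// Consume the stream; the span completes when streaming ends.
for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
```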
Tool Calling
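A tool-calling sketch using the AI SDK's tool helper with a zod schema (AI SDK 4.x uses parameters; 5.x renames it to inputSchema). The getWeather tool and its stubbed response are illustrative:

```ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'What is the weather in Paris?',
  tools: {
    // Illustrative tool; replace with your own implementation.
    getWeather: tool({
      description: 'Get the current weather for a city',
      parameters: z.object({ city: z.string() }),
      execute: async ({ city }) => ({ city, temperatureC: 21 }),
    }),
  },
  maxSteps: 2, // allow the model to call the tool, then answer
  experimental_telemetry: { isEnabled: true },
});
```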
Generate Structured Data
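A generateObject sketch; the zod schema and prompt are placeholders:

```ts
import { generateObject } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const { object } = await generateObject({
  model: openai('gpt-4o-mini'),
  // The model's output is validated against this schema.
  schema: z.object({
    name: z.string(),
    ingredients: z.array(z.string()),
  }),
  prompt: 'Generate a simple pasta recipe.',
  experimental_telemetry: { isEnabled: true },
});
```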
Embedding Text
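An embed sketch, assuming an AI SDK 4.x-style embedding model factory (openai.embedding):

```ts
import { embed } from 'ai';
import { openai } from '@ai-sdk/openai';

const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
  experimental_telemetry: { isEnabled: true },
});
```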
Run AI SDK Generation
Run your LLM application by executing the following script:
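For example, assuming your entry point is index.ts (a placeholder file name):

```bash
npx tsx index.ts
```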
You can directly view the traces on Confident AI’s traces page inside the observatory.
Advanced Usage
Logging prompts
If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to configureAiSdkTracing.
Logging prompts lets you attribute specific prompts to AI SDK LLM spans. Be sure to pull the prompt before logging it; otherwise it will not be visible on Confident AI.
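A hedged sketch, assuming deepeval-ts exposes a Prompt class with a pull method and that configureAiSdkTracing accepts a prompt option (all three names are assumptions based on this page's description):

```ts
import { configureAiSdkTracing, Prompt } from 'deepeval-ts';

// Pull the prompt from Confident AI before logging it; an un-pulled
// prompt will not be visible on the platform.
const prompt = new Prompt({ alias: 'my-prompt-alias' }); // assumed constructor
await prompt.pull();                                     // assumed method

configureAiSdkTracing({ prompt });
```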
Setting trace attributes
Confident AI’s advanced LLM tracing features let teams set attributes on each trace when invoking an AI SDK application.
For example, thread_id and user_id group related traces together, which is useful for chat apps, agents, or any multi-turn interaction. You can learn more about threads here.
You can set these attributes through configureAiSdkTracing from deepeval-ts:
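A sketch of what this could look like; the option names below are assumptions that mirror the attributes described underneath:

```ts
import { configureAiSdkTracing } from 'deepeval-ts';

configureAiSdkTracing({
  name: 'my-ai-sdk-app',             // the name of the trace (assumed option)
  tags: ['production', 'chat'],      // string labels for grouping traces
  metadata: { appVersion: '1.2.0' }, // arbitrary key-value metadata
  threadId: 'thread-123',            // groups multi-turn conversations
  userId: 'user-456',                // enables user analytics
});
```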
View Trace Attributes
- The name of the trace. Learn more.
- Tags are string labels that help you group related traces. Learn more.
- Attach any metadata to the trace. Learn more.
- Supply the thread or conversation ID to view and evaluate conversations. Learn more.
- Supply the user ID to enable user analytics. Learn more.
Each attribute is optional, and works the same way as the native tracing features on Confident AI.
Evals Usage
Online evals
You can run online evals on your AI SDK application by setting a metricCollection, which runs evaluations on all incoming traces on Confident AI’s servers. This approach is recommended if your agent is in production.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your AI SDK-based application.
Your metric collection must contain only metrics that evaluate the input and actual output of your AI SDK application.
Run evals
You can run evals at both the trace and span level. We recommend creating separate metric collections for each component, since each requires its own evaluation criteria and metrics.
You can pass different metric collections, such as metricCollection (for the entire trace), llmMetricCollection, and toolMetricCollection, to configureAiSdkTracing as shown below:
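For example (the collection names are placeholders for collections you have created on Confident AI):

```ts
import { configureAiSdkTracing } from 'deepeval-ts';

configureAiSdkTracing({
  metricCollection: 'Trace Metrics',    // evaluates the entire trace
  llmMetricCollection: 'LLM Metrics',   // evaluates each LLM span
  toolMetricCollection: 'Tool Metrics', // evaluates each tool span
});
```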
All incoming traces will now be evaluated using metrics from your metric collection.
You can view evaluation results by visiting the traces page inside the observatory on the Confident AI platform.