OpenInference
Overview
OpenInference is an open standard for capturing and storing AI model inferences. Confident AI allows you to trace and evaluate any application instrumented with OpenInference in just a few lines of code.
Tracing Quickstart
For users in the EU region, set the OTEL endpoint to the EU version of the collector.
Instrument OpenInference
Call instrument_openinference once at startup, before your agent runs. It attaches to the active OpenTelemetry TracerProvider and begins forwarding spans to Confident AI.
You will also need the OpenInference instrumentor for your specific framework, such as openinference-instrumentation-langchain or openinference-instrumentation-openai in Python, or @arizeai/openinference-instrumentation-anthropic in TypeScript.
Python
TypeScript
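As a minimal sketch, the setup looks like the following. The import path for instrument_openinference is an assumption; check your Confident AI SDK version for the exact module. The LangChain instrumentor is used here only as an example framework.

```python
# Sketch: instrument a LangChain app with OpenInference and forward spans
# to Confident AI. The deepeval import path below is an assumption.
from openinference.instrumentation.langchain import LangChainInstrumentor
from deepeval.openinference import instrument_openinference  # assumed path

# 1. Attach Confident AI to the active OpenTelemetry TracerProvider.
#    Call this once at startup, before your agent runs.
instrument_openinference()

# 2. Register the framework-specific OpenInference instrumentor.
LangChainInstrumentor().instrument()

# 3. Run your agent as usual; spans are now exported to Confident AI.
```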
Confident AI strictly adheres to OpenInference semantic conventions and only captures the telemetry data your instrumentors explicitly expose. To trace your entire application flow, instrument every layer of your stack (for example, both your agent framework and the underlying LLM client) so that spans nest correctly.
Run your code
Run your code as usual. Your instrumented application will export traces automatically, and you can view them on Confident AI's observatory page.
Advanced Usage
Logging prompts
If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to instrument_openinference.
Python
TypeScript
Logging prompts lets you attribute specific prompts to OpenInference LLM spans. Be sure to pull the prompt before logging it; otherwise, it will not be visible on Confident AI.
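A sketch of prompt logging, assuming instrument_openinference accepts a prompt keyword and the deepeval import paths shown (verify both against your SDK version); the alias is a placeholder:

```python
# Sketch: log a Confident AI prompt so it is attributed to LLM spans.
# The deepeval import path and `prompt` keyword are assumptions.
from deepeval.prompt import Prompt
from deepeval.openinference import instrument_openinference  # assumed path

prompt = Prompt(alias="my-prompt-alias")  # placeholder alias
prompt.pull()  # pull first, or the prompt won't be visible on Confident AI

instrument_openinference(prompt=prompt)
```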
Logging threads
Threads group related traces together and are useful for chat apps, agents, or any multi-turn interaction. You can learn more about threads here. Pass the thread_id to instrument_openinference.
Python
TypeScript
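A sketch of thread grouping; the thread_id keyword comes from the text above, while the import path and ID value are assumptions:

```python
# Sketch: group all traces from one conversation under a single thread.
from deepeval.openinference import instrument_openinference  # assumed path

# Any stable identifier for the conversation works as the thread ID.
instrument_openinference(thread_id="chat-session-123")  # placeholder ID
```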
Trace attributes
Other trace attributes can also be passed to instrument_openinference.
Python
TypeScript
View Trace Attributes
The name of the trace. Learn more.
Tags are string labels that help you group related traces. Learn more.
Attach any metadata to the trace. Learn more.
Supply the thread or conversation ID to view and evaluate conversations. Learn more.
Supply the user ID to enable user analytics. Learn more.
The deployment environment. Accepted values: "production", "staging", "development", "testing". Defaults to "development".
Each attribute is optional and works the same way as the native tracing features on Confident AI.
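Putting the attributes above together might look like the following sketch. Aside from thread_id, the keyword names are assumptions inferred from the attribute list, and all values are placeholders:

```python
# Sketch: pass optional trace attributes to instrument_openinference.
# Keyword names besides thread_id are assumptions; values are placeholders.
from deepeval.openinference import instrument_openinference  # assumed path

instrument_openinference(
    name="customer-support-agent",     # trace name
    tags=["support", "v2"],            # string labels for grouping traces
    metadata={"region": "eu-west-1"},  # arbitrary metadata on the trace
    thread_id="chat-session-123",      # groups multi-turn conversations
    user_id="user-456",                # enables user analytics
    environment="staging",             # one of the accepted environments
)
```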
Evals Usage
Online evals
You can run online evals on your OpenInference instrumentation, which will run evaluations on all incoming traces on Confident AI’s servers.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your OpenInference-instrumented application.
Your metric collection must only contain metrics that evaluate the input and actual output of the component it is assigned to.
Run evals
You can run evals at both the trace and span level. We recommend creating separate metric collections for each component, since each requires its own evaluation criteria and metrics.
After instrumenting your application with OpenInference, pass the metric collection name to the respective component:
Trace
LLM Span
Tool Span
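As a purely hypothetical sketch, attaching a metric collection per component could look like this. The keyword names below are illustrative assumptions, not confirmed SDK parameters; the collection names are placeholders you would create on Confident AI first:

```python
# Hypothetical sketch: attach metric collection names at the trace, LLM
# span, and tool span levels. All keyword names here are assumptions.
from deepeval.openinference import instrument_openinference  # assumed path

instrument_openinference(
    trace_metric_collection="Agent Quality",     # evaluates whole traces
    llm_metric_collection="Answer Quality",      # evaluates LLM spans
    tool_metric_collection="Tool Correctness",   # evaluates tool spans
)
```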
All incoming traces will now be evaluated using metrics from your metric collection.
You can view evals on Confident AI by clicking on the link in the output printed in the console.