AgentCore
Overview
Amazon Bedrock AgentCore is AWS’s managed runtime for deploying and scaling AI agents. Confident AI lets you trace and evaluate AgentCore agents built on frameworks like Strands in just a few lines of code.
Tracing Quickstart
For users in the EU region, please set the OTEL endpoint to the EU version as shown below:
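For example, a minimal sketch using the standard OTLP environment variable (the EU endpoint URL shown here is an assumption; use the value from your Confident AI dashboard):

```python
import os

# Point the standard OTLP exporter variable at the EU collector before
# instrumenting. The exact URL is an assumption; confirm it in your
# Confident AI dashboard.
os.environ["OTEL_EXPORTER_OTLP_ENDPOINT"] = "https://otel.eu.confident-ai.com"
```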
Instrument AgentCore
Call instrument_agentcore once at startup, before your agent runs. It attaches to the active OpenTelemetry TracerProvider and begins forwarding spans to Confident AI.
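Here is a minimal sketch with a Strands agent running inside an AgentCore app (the deepeval import path is an assumption; adjust it to match your installed version):

```python
from strands import Agent
from bedrock_agentcore.runtime import BedrockAgentCoreApp
# Import path for the integration is an assumption; adjust as needed.
from deepeval.integrations.agentcore import instrument_agentcore

app = BedrockAgentCoreApp()
instrument_agentcore()  # call once at startup, before any agent invocations

agent = Agent()

@app.entrypoint
def invoke(payload):
    # Spans emitted by the agent during this call are forwarded to Confident AI.
    result = agent(payload.get("prompt", "Hello"))
    return {"result": result.message}

if __name__ == "__main__":
    app.run()
```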
instrument_agentcore is framework-agnostic. It works with any underlying agent framework that AgentCore supports — Strands, LangChain, LangGraph, and CrewAI are all detected automatically.
Run your agent
Invoke your agent by executing the script:
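For example, with the app from the previous step running locally, you can send a test request to the local AgentCore runtime endpoint (port 8080 and the /invocations route are AgentCore's local defaults):

```python
import requests

# Send a test invocation to the locally running AgentCore app.
resp = requests.post(
    "http://localhost:8080/invocations",
    json={"prompt": "What can you do?"},
)
print(resp.json())
```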
You can view the traces directly on Confident AI by clicking the link printed in the console output.
Advanced Usage
Logging prompts
If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to instrument_agentcore.
Logging prompts lets you attribute specific prompts to AgentCore LLM spans. Be sure to pull the prompt before logging it; otherwise, it will not be visible on Confident AI.
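A sketch, assuming instrument_agentcore accepts a prompt keyword argument (the alias below is a placeholder):

```python
from deepeval.prompt import Prompt
# Import path for the integration is an assumption; adjust as needed.
from deepeval.integrations.agentcore import instrument_agentcore

prompt = Prompt(alias="your-prompt-alias")  # placeholder alias
prompt.pull()  # pull first, or the prompt won't be visible on Confident AI

instrument_agentcore(prompt=prompt)  # `prompt` kwarg is an assumption
```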
Logging threads
Threads group related traces together and are useful for chat apps, agents, or any multi-turn interaction. You can learn more about threads here. Pass the thread_id to instrument_agentcore.
If your agent framework already sets a session.id attribute on spans, AgentCore integration will automatically use it as the thread_id when none is explicitly provided.
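For example (the thread_id value is a placeholder):

```python
# Group all traces from this conversation under one thread.
instrument_agentcore(thread_id="your-thread-id")
```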
Trace attributes
Other trace attributes can also be passed to instrument_agentcore.
- name: The name of the trace. Learn more.
- tags: Tags are string labels that help you group related traces. Learn more.
- metadata: Attach any metadata to the trace. Learn more.
- thread_id: Supply the thread or conversation ID to view and evaluate conversations. Learn more.
- user_id: Supply the user ID to enable user analytics. Learn more.
- environment: The deployment environment. Accepted values: "production", "staging", "development", "testing". Defaults to "development".
Each attribute is optional, and works the same way as the native tracing features on Confident AI.
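A sketch passing several attributes at once, assuming the keyword arguments mirror the attribute names above (all values are placeholders):

```python
instrument_agentcore(
    name="AgentCore Agent",
    tags=["internal", "beta"],
    metadata={"agent_version": "1.2.0"},
    thread_id="thread-123",
    user_id="user-456",
    environment="staging",
)
```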
Evals Usage
Online evals
You can run online evals on your AgentCore agent, which will run evaluations on all incoming traces on Confident AI’s servers. This approach is recommended if your agent is in production.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your AgentCore agent.
Your metric collection must only contain metrics that evaluate the input and actual output of the component it is assigned to.
Run evals
You can run evals at both the trace and span level. We recommend creating separate metric collections for each component, since each requires its own evaluation criteria and metrics.
After instrumenting your AgentCore agent, pass the metric collection name to the component you want evaluated. Metric collections can be attached at the trace level or to individual agent, LLM, and tool spans; for trace-level evals, pass the trace_metric_collection parameter to instrument_agentcore.
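For example (the collection name is a placeholder; it must match a metric collection you created on Confident AI):

```python
# Evaluate every incoming trace with the named metric collection.
instrument_agentcore(trace_metric_collection="My Metric Collection")
```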
All incoming traces will now be evaluated using metrics from your metric collection.
You can view the evals on Confident AI by clicking the link printed in the console output.