AgentCore

Use Confident AI for LLM observability and evals for Amazon AgentCore

Overview

Amazon AgentCore is AWS’s managed runtime for deploying and scaling AI agents. Confident AI allows you to trace and evaluate AgentCore agents built on frameworks like Strands — in just a few lines of code.

Tracing Quickstart

For users in the EU region, please set the OTEL endpoint to the EU version as shown below:

$ export CONFIDENT_OTEL_URL="https://eu.otel.confident-ai.com"
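If you prefer configuring this in code, the same endpoint can be set as an environment variable from Python instead (a minimal sketch; it assumes CONFIDENT_OTEL_URL is read when instrumentation starts, so set it before calling instrument_agentcore):

```python
import os

# Point the OTel exporter at the EU endpoint before instrumenting.
# Assumes deepeval reads CONFIDENT_OTEL_URL when instrument_agentcore() runs.
os.environ["CONFIDENT_OTEL_URL"] = "https://eu.otel.confident-ai.com"
```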
1. Install Dependencies

Run the following command to install the required packages:

$ pip install -U deepeval bedrock-agentcore strands-agents
2. Instrument AgentCore

Call instrument_agentcore once at startup, before your agent runs. It attaches to the active OpenTelemetry TracerProvider and begins forwarding spans to Confident AI.

main.py

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent
from deepeval.integrations.agentcore import instrument_agentcore

instrument_agentcore()

app = BedrockAgentCoreApp()
agent = Agent(model="amazon.nova-lite-v1:0")

@app.entrypoint
def invoke(payload):
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

if __name__ == "__main__":
    response = invoke({"prompt": "Explain OpenTelemetry in one sentence."})
    print(f"Agent Response: {response['result']}")

instrument_agentcore is framework-agnostic. It works with any underlying agent framework that AgentCore supports — Strands, LangChain, LangGraph, and CrewAI are all detected automatically.

3. Run your agent

Invoke your agent by executing the script:

$ python main.py

You can view the traces on Confident AI by clicking the link printed in the console output.

Advanced Usage

Logging prompts

If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to instrument_agentcore.

main.py

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent
from deepeval.prompt import Prompt
from deepeval.integrations.agentcore import instrument_agentcore

prompt = Prompt(alias="my-prompt")
prompt.pull(version="00.00.01")

system_prompt = prompt.interpolate()

instrument_agentcore(
    confident_prompt=prompt,
)

app = BedrockAgentCoreApp()
agent = Agent(model="amazon.nova-lite-v1:0", system_prompt=system_prompt)

@app.entrypoint
def invoke(payload):
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

Logging prompts lets you attribute specific prompts to AgentCore LLM spans. Be sure to pull the prompt before logging it; otherwise, the prompt will not be visible on Confident AI.
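The interpolate step fills any template variables in the pulled prompt before it is used as a system prompt. As a plain-Python analogy of what interpolation does (using the standard library's string.Template, not the deepeval API):

```python
from string import Template

# Analogy only: in deepeval, Prompt.interpolate() performs this step
# on the prompt template pulled from Confident AI.
template = Template("You are a support agent for $company. Answer in $tone tone.")
system_prompt = template.substitute(company="Acme", tone="a friendly")
print(system_prompt)
```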

Logging threads

Threads are used to group related traces together, and are useful for chat apps, agents, or any multi-turn interactions. Pass the thread_id to instrument_agentcore.

main.py

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent
from deepeval.integrations.agentcore import instrument_agentcore

instrument_agentcore(
    thread_id="thread_1",
    user_id="user_1",
)

app = BedrockAgentCoreApp()
agent = Agent(model="amazon.nova-lite-v1:0")

@app.entrypoint
def invoke(payload):
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

If your agent framework already sets a session.id attribute on spans, the AgentCore integration automatically uses it as the thread_id when none is explicitly provided.
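The fallback described above can be pictured as follows (a hypothetical sketch, not the integration's actual code; resolve_thread_id is an illustrative name):

```python
def resolve_thread_id(explicit_thread_id, span_attributes):
    # Prefer an explicitly provided thread_id; otherwise fall back to the
    # framework-set session.id span attribute, if present.
    if explicit_thread_id is not None:
        return explicit_thread_id
    return span_attributes.get("session.id")

print(resolve_thread_id(None, {"session.id": "sess-42"}))       # falls back to session.id
print(resolve_thread_id("thread_1", {"session.id": "sess-42"}))  # explicit value wins
```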

Trace attributes

Other trace attributes can also be passed to instrument_agentcore.

main.py

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent
from deepeval.integrations.agentcore import instrument_agentcore

instrument_agentcore(
    name="Name of Trace",
    tags=["Tag 1", "Tag 2"],
    metadata={"Key": "Value"},
    user_id="user_1",
    thread_id="conversation-abc123",
    environment="production",
)

app = BedrockAgentCoreApp()
agent = Agent(model="amazon.nova-lite-v1:0")

@app.entrypoint
def invoke(payload):
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}
- name (str): The name of the trace.
- tags (List[str]): String labels that help you group related traces.
- metadata (Dict): Attach any metadata to the trace.
- thread_id (str): The thread or conversation ID, used to view and evaluate conversations.
- user_id (str): The user ID, used to enable user analytics.
- environment (str): The deployment environment. Accepted values: "production", "staging", "development", "testing". Defaults to "development".

Each attribute is optional and works the same way as the native tracing features on Confident AI.
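For illustration, the accepted environment values above could be guarded in your own startup code like this (normalize_environment is a hypothetical helper, not part of deepeval):

```python
ACCEPTED_ENVIRONMENTS = {"production", "staging", "development", "testing"}

def normalize_environment(value=None):
    # Hypothetical helper: apply the documented default and reject values
    # outside the accepted set before passing it to instrument_agentcore.
    env = value or "development"
    if env not in ACCEPTED_ENVIRONMENTS:
        raise ValueError(f"unsupported environment: {env!r}")
    return env

print(normalize_environment())              # "development"
print(normalize_environment("production"))  # "production"
```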

Evals Usage

Online evals

You can run online evals on your AgentCore agent, which evaluates all incoming traces on Confident AI’s servers. This approach is recommended for agents in production.

1. Create metric collection

Create a metric collection on Confident AI with the metrics you wish to use to evaluate your AgentCore agent.


Your metric collection must only contain metrics that evaluate the input and actual output of the component it is assigned to.

2. Run evals

You can run evals at both the trace and span level. We recommend creating separate metric collections for each component, since each requires its own evaluation criteria and metrics.

After instrumenting your AgentCore agent, pass the metric collection name to the respective component. For trace-level evals, pass the trace_metric_collection parameter to instrument_agentcore.

main.py

from bedrock_agentcore import BedrockAgentCoreApp
from strands import Agent
from deepeval.integrations.agentcore import instrument_agentcore

instrument_agentcore(
    trace_metric_collection="my-trace-collection",
)

app = BedrockAgentCoreApp()
agent = Agent(model="amazon.nova-lite-v1:0")

@app.entrypoint
def invoke(payload):
    user_message = payload.get("prompt", "Hello! How can I help you today?")
    result = agent(user_message)
    return {"result": result.message}

All incoming traces will now be evaluated using metrics from your metric collection.

You can view eval results on Confident AI by clicking the link printed in the console output.