LangGraph

Use Confident AI for LLM observability and evals for LangGraph

Overview

LangGraph is a framework for building stateful, multi-agent applications with LLMs. Confident AI provides a CallbackHandler to trace and evaluate LangGraph agents.

Tracing Quickstart

1. Install Dependencies

Run the following command to install the required packages:

$ pip install -U deepeval langgraph langchain langchain-openai

2. Setup Confident AI Key

Log in to Confident AI using your Confident API key.

$ deepeval login

3. Configure LangGraph

Pass DeepEval’s CallbackHandler in the config of your LangGraph agent’s invoke method.

main.py
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

from deepeval.integrations.langchain import CallbackHandler

def get_weather(city: str) -> str:
    """Returns the weather in a city"""
    return f"It's always sunny in {city}!"

llm = ChatOpenAI(model="gpt-4o-mini")

agent = create_react_agent(
    model=llm,
    tools=[get_weather],
    prompt="You are a helpful assistant",
)

result = agent.invoke(
    input={"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"callbacks": [CallbackHandler()]},
)

DeepEval’s CallbackHandler extends LangChain’s BaseCallbackHandler.
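
Because it is a standard LangChain callback, you are not limited to passing it on every invoke call. The sketch below (not from the official docs, and assuming the agent defined above) binds the handler once with with_config so every subsequent call is traced:

from deepeval.integrations.langchain import CallbackHandler

# Sketch: compiled LangGraph agents are LangChain Runnables, so a callback can
# be bound once with `with_config` instead of being passed on every invoke.
traced_agent = agent.with_config(callbacks=[CallbackHandler()])

result = traced_agent.invoke(
    {"messages": [{"role": "user", "content": "what is the weather in sf"}]}
)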

4. Run LangGraph

Invoke your agent by executing the script:

$ python main.py

You can view the traces on Confident AI by clicking the link printed in the console output.

Advanced Features

Setting Trace Attributes

Confident AI’s LLM tracing lets you set certain attributes on each trace when invoking your LangGraph agent.

For example, thread_id and user_id group related traces together, which is useful for chat apps, agents, or any multi-turn interactions. You can learn more about threads here.

You can set these attributes on the CallbackHandler when invoking your LangGraph agent.

main.py
result = agent.invoke(
    input={"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"callbacks": [CallbackHandler(thread_id="123")]},
)
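
To group a whole conversation into a single thread, reuse the same thread_id (and optionally a user_id) across invocations. Here is a minimal sketch with placeholder IDs, reusing the weather agent from the quickstart:

from deepeval.integrations.langchain import CallbackHandler

# Sketch: "weather-chat-1" and "user-42" are placeholder values. Traces that
# share a thread_id are grouped into one conversation on Confident AI.
for question in ["what is the weather in sf", "what is the weather in nyc"]:
    agent.invoke(
        input={"messages": [{"role": "user", "content": question}]},
        config={
            "callbacks": [
                CallbackHandler(thread_id="weather-chat-1", user_id="user-42")
            ]
        },
    )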

The CallbackHandler accepts the following attributes:

name (str): The name of the trace.

tags (List[str]): String labels that help you group related traces.

metadata (Dict): Attach any metadata to the trace.

thread_id (str): The thread or conversation ID, used to view and evaluate conversations.

user_id (str): The user ID, used to enable user analytics.

Each attribute is optional, and works the same way as the native tracing features on Confident AI.
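
For instance, a fully configured handler might look like the following sketch (all values are placeholders):

from deepeval.integrations.langchain import CallbackHandler

# Sketch with placeholder values; every attribute below is optional.
handler = CallbackHandler(
    name="weather-agent-trace",
    tags=["langgraph", "weather"],
    metadata={"environment": "staging"},
    thread_id="chat-123",
    user_id="user-456",
)

result = agent.invoke(
    input={"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"callbacks": [handler]},
)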

Logging prompts

If you are managing prompts on Confident AI and wish to log them, pass your Prompt object to the language model instance’s metadata parameter.

main.py
from langchain_openai import ChatOpenAI
from deepeval.prompt import Prompt

prompt = Prompt(alias="<prompt-alias>")
prompt.pull(version="00.00.01")

llm = ChatOpenAI(
    model="gpt-4o-mini",
    metadata={"prompt": prompt}
)

Logging prompts lets you attribute specific prompt versions to the LLM spans in your traces. Be sure to pull the prompt before logging it; otherwise the prompt will not be visible on Confident AI.
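
Putting it together with the quickstart agent, a traced run that logs the pulled prompt might look like the following sketch (the alias and version are placeholders for your own prompt):

from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

from deepeval.integrations.langchain import CallbackHandler
from deepeval.prompt import Prompt

def get_weather(city: str) -> str:
    """Returns the weather in a city"""
    return f"It's always sunny in {city}!"

# Sketch: "<prompt-alias>" and the version are placeholders.
prompt = Prompt(alias="<prompt-alias>")
prompt.pull(version="00.00.01")

llm = ChatOpenAI(
    model="gpt-4o-mini",
    metadata={"prompt": prompt},  # logged against this model's LLM spans
)

agent = create_react_agent(
    model=llm,
    tools=[get_weather],
    prompt="You are a helpful assistant",
)

result = agent.invoke(
    input={"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={"callbacks": [CallbackHandler()]},
)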

Evals Usage

Online evals

If your LangGraph agent is in production and you still want to run evaluations on your traces, use online evals, which run evaluations on all incoming traces on Confident AI’s servers.

1. Create metric collection

Create a metric collection on Confident AI with the metrics you wish to use to evaluate your LangGraph agent. Copy the name of the metric collection.

The current LangChain integration supports only metrics that evaluate the input and actual output, plus the Task Completion metric.

2. Run evals

Set the metric_collection name on the CallbackHandler to evaluate your LangGraph agent.

The agent invocation is the top-level component of your LangGraph application, which makes it an ideal target for the Task Completion metric.

main.py
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

from deepeval.integrations.langchain import CallbackHandler

def get_weather(city: str) -> str:
    """Returns the weather in a city"""
    return f"It's always sunny in {city}!"

llm = ChatOpenAI(model="gpt-4o-mini")

agent = create_react_agent(
    model=llm,
    tools=[get_weather],
    prompt="You are a helpful assistant",
)

result = agent.invoke(
    input={"messages": [{"role": "user", "content": "what is the weather in sf"}]},
    config={
        "callbacks": [
            CallbackHandler(metric_collection="task_completion")
        ],
    },
)

All incoming traces will now be evaluated using metrics from your metric collection.

End-to-end evals

Running end-to-end evals on your LangGraph agent evaluates your agent locally, and is the recommended approach if your agent is in a development or testing environment.

1. Create metric

from deepeval.metrics import TaskCompletionMetric

task_completion = TaskCompletionMetric(
    threshold=0.7,
    model="gpt-4o-mini",
    include_reason=True
)

Similar to online evals, end-to-end evals on LangGraph currently only support the TaskCompletionMetric.

2. Run evals

Provide your metrics to the CallbackHandler. Then, use the dataset’s evals_iterator to invoke your LangGraph agent for each golden.

main.py
from langgraph.prebuilt import create_react_agent

from deepeval.dataset import EvaluationDataset, Golden
from deepeval.integrations.langchain import CallbackHandler
from deepeval.metrics import TaskCompletionMetric

def get_weather(city: str) -> str:
    """Returns the weather in a city"""
    return f"It's always sunny in {city}!"

agent = create_react_agent(
    model="openai:gpt-4o-mini",
    tools=[get_weather],
    prompt="You are a helpful assistant",
)

task_completion = TaskCompletionMetric(
    threshold=0.7,
    model="gpt-4o-mini",
    include_reason=True,
)

goldens = [
    Golden(input="What is the weather in Bogotá, Colombia?"),
    Golden(input="What is the weather in Paris, France?"),
]

dataset = EvaluationDataset(goldens=goldens)

for golden in dataset.evals_iterator():
    agent.invoke(
        input={"messages": [{"role": "user", "content": golden.input}]},
        config={"callbacks": [CallbackHandler(metrics=[task_completion])]},
    )

This will automatically generate a test run with evaluated traces using inputs from your dataset.

View on Confident AI

You can view the evaluation results on Confident AI by clicking the link printed in the console output.