LlamaIndex
Overview
LlamaIndex is an LLM framework that makes it easy to build knowledge agents from complex data. Confident AI allows you to trace and evaluate LlamaIndex agents in just a few lines of code.
Tracing Quickstart
Configure LlamaIndex
Instrument LlamaIndex using instrument_llama_index to enable Confident AI’s LlamaIndexHandler.
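A minimal sketch of this step is shown below. The deepeval.integrations.llama_index import path and the way the handler is hooked into LlamaIndex's instrumentation dispatcher are assumptions based on the integration described here; check your installed DeepEval version for the exact API.

```python
import llama_index.core.instrumentation as instrument
# Import path is an assumption; instrument_llama_index is the function named above
from deepeval.integrations.llama_index import instrument_llama_index

# Register Confident AI's LlamaIndexHandler on LlamaIndex's root dispatcher
instrument_llama_index(instrument.get_dispatcher())
```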
DeepEval will now collect traces from every LlamaIndex invocation and publish them to Confident AI.
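For example, running an ordinary LlamaIndex agent after instrumentation produces a trace with no further code changes; the LLM and tool below are purely illustrative.

```python
import asyncio
from llama_index.core.agent.workflow import FunctionAgent
from llama_index.llms.openai import OpenAI

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply numbers.",
)

async def main():
    # Each agent run is captured as a trace and published to Confident AI
    response = await agent.run("What is 3 * 12?")
    print(response)

asyncio.run(main())
```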
You can view the traces directly on Confident AI by clicking the link printed in the console output.
Evals Usage
Online evals
You can run online evals on your LlamaIndex agent, which evaluates all incoming traces on Confident AI’s servers. This approach is recommended if your agent is in production.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your LlamaIndex agent.
Your metric collection should only contain metrics that don’t require retrieval_context, context, expected_output, or expected_tools for evaluation.
Run evals
Confident AI supports online evals for LlamaIndex’s FunctionAgent, ReActAgent, and CodeActAgent. Replace your LlamaIndex agent with DeepEval’s drop-in equivalent and pass your metric collection as an argument to the agent.
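A sketch of what the swap might look like for a FunctionAgent, assuming DeepEval exposes the drop-in replacement under deepeval.integrations.llama_index and that it accepts a metric_collection argument; the collection name below is a placeholder for one you created on Confident AI.

```python
from llama_index.llms.openai import OpenAI
# Drop-in replacement for llama_index.core.agent.workflow.FunctionAgent;
# the import path is an assumption
from deepeval.integrations.llama_index import FunctionAgent

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply numbers.",
    metric_collection="My Metric Collection",  # placeholder: your collection's name on Confident AI
)
```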
All incoming traces will now be evaluated using metrics from your metric collection.
End-to-end evals
Running end-to-end evals on your LlamaIndex agent evaluates it locally, and is the recommended approach when your agent is in a development or testing environment.
As with online evals, end-to-end evals only support metrics that don’t require retrieval_context, context, expected_output, or expected_tools for evaluation.
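One possible shape for this, shown as a sketch: replay a small dataset of goldens through the same drop-in agent and let DeepEval evaluate each trace locally. The EvaluationDataset/Golden usage and the evals_iterator() loop are assumptions about the current deepeval API; consult the deepeval docs for the exact helpers.

```python
import asyncio
from deepeval.dataset import EvaluationDataset, Golden
from llama_index.llms.openai import OpenAI
# Same assumed drop-in agent as in the online evals sketch above
from deepeval.integrations.llama_index import FunctionAgent

def multiply(a: float, b: float) -> float:
    """Multiply two numbers."""
    return a * b

agent = FunctionAgent(
    tools=[multiply],
    llm=OpenAI(model="gpt-4o-mini"),
    system_prompt="You are a helpful assistant that can multiply numbers.",
    metric_collection="My Metric Collection",  # placeholder collection name
)

# Goldens hold the inputs you want to replay against your agent
dataset = EvaluationDataset(goldens=[Golden(input="What is 3 * 12?")])

# evals_iterator() is assumed from recent deepeval releases; it drives the
# event loop and evaluates each completed agent run locally
for golden in dataset.evals_iterator():
    task = asyncio.create_task(agent.run(golden.input))
    dataset.evaluate(task)
```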
View on Confident AI
You can view the evaluation results on Confident AI by clicking the link printed in the console output.