LlamaIndex
Overview
LlamaIndex is an LLM framework that makes it easy to build knowledge agents from complex data. Confident AI allows you to trace and evaluate LlamaIndex agents in just a few lines of code.
Tracing Quickstart
Configure LlamaIndex
Instrument LlamaIndex using instrument_llama_index to enable Confident AI’s LlamaIndexHandler.
Once instrumented, DeepEval will collect traces from every LlamaIndex invocation and publish them to Confident AI.
You can directly view the traces on Confident AI by clicking on the link in the output printed in the console.
Evals Usage
Online evals
You can run online evals on your LlamaIndex agent, which will run evaluations on all incoming traces on Confident AI’s servers. This approach is recommended if your agent is in production.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your LlamaIndex agent.
Your metric collection should only contain metrics that don't require retrieval_context, context, expected_output, or expected_tools for evaluation.
Run evals
Confident AI supports online evals for LlamaIndex applications. Evaluations are configured by passing metric_collection as an argument on the trace context, which applies the metric collection to all spans emitted during the trace.
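As a sketch, this might look like the following. The trace context-manager name is an assumption based on the description above (not a confirmed DeepEval API), "my-metric-collection" is a placeholder for a collection you created on Confident AI, and the OpenAI call assumes an OPENAI_API_KEY is configured.

```python
from deepeval.tracing import trace  # name assumed from the description above
from llama_index.llms.openai import OpenAI

llm = OpenAI(model="gpt-4o-mini")  # illustrative model choice

# Every span emitted during this trace (here, the LLM call) is evaluated on
# Confident AI's servers using the metrics in "my-metric-collection"
# (placeholder name).
with trace(metric_collection="my-metric-collection"):
    print(llm.complete("What is 3 * 12?"))
```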
All incoming traces will now be evaluated using metrics from your metric collection.
View on Confident AI
You can view the evals on Confident AI by clicking on the link in the output printed in the console.