Introduction to LLM Tracing

Trace your LLM applications and evaluate them at the component level.

Overview

Confident AI offers LLM tracing so teams can trace and monitor their LLM applications. Think Datadog for LLM apps, but with an additional suite of 30+ evaluation metrics for tracking performance continuously over time.

Get Started

Get LLM tracing for your LLM app with best-in-class evals.
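
To give a concrete feel for the flow, here is a minimal sketch of instrumenting a single function with the @observe decorator covered later on this page. The deepeval import path and the CONFIDENT_API_KEY variable name are assumptions about the SDK, not confirmed API:

```python
import os

# Assumed environment variable for authenticating with Confident AI.
os.environ["CONFIDENT_API_KEY"] = "<your-api-key>"

from deepeval.tracing import observe  # assumed import path

@observe()
def answer_question(query: str) -> str:
    # Your existing LLM call goes here, unchanged; the decorator
    # records the span in the background.
    return f"Answer to: {query}"

print(answer_question("What is LLM tracing?"))
```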

Advanced Features

You can configure tracing on Confident AI in virtually any way you wish.

Integrations

You can also set up tracing via 1-line integrations.

Only Python is supported for integrations, with TypeScript support coming soon.
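
As an illustration only, a 1-line integration might look like the following. The module and function names below are hypothetical stand-ins, since the exact integration API is not shown on this page:

```python
# Hypothetical 1-line integration: the names below are illustrative
# assumptions, not the SDK's confirmed API.
from deepeval.integrations.openai import instrument_openai  # hypothetical

instrument_openai()  # after this single line, OpenAI calls would be traced
```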

FAQs

You can run evaluations using metrics for RAG, agents, and chatbots on:

  1. Traces (end-to-end)
  2. Spans (individual components)
  3. Threads (multi-turn conversations)

These evaluations can be run either online (evals run as traces are ingested into the platform) or offline (evals run retrospectively).
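
For example, an online, span-level evaluation might be configured by attaching metrics where the component is instrumented. This sketch assumes the @observe decorator accepts a metrics argument, which is an assumption about the SDK:

```python
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.tracing import observe  # assumed import path

# Assumed: metrics attached here are evaluated online as the span
# is ingested into the platform.
@observe(metrics=[AnswerRelevancyMetric()])
def generate(query: str, context: list[str]) -> str:
    # Stand-in for the component (span) being evaluated.
    return f"Answer derived from {len(context)} retrieved chunks."
```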

Confident AI tracing is designed to be completely non-intrusive to your application. It:

  • Can be disabled/enabled anytime through the CONFIDENT_TRACING_ENABLED="YES"/"NO" environment variable.
  • Requires no rewrite of your existing code - just add the @observe decorator.
  • Runs asynchronously in the background with zero impact on latency.
  • Fails silently if there are any issues, ensuring your app keeps running.
  • Works with any function signature - you can set input/output at runtime, as shown in the sketch below.
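
Tying the last two points together, here is a sketch of toggling tracing via the environment variable and setting a span's input/output at runtime. The update_current_span helper and its keyword arguments are assumptions about the SDK:

```python
import os

# Disable tracing without touching application code.
os.environ["CONFIDENT_TRACING_ENABLED"] = "NO"

from deepeval.tracing import observe, update_current_span  # assumed imports

def run_pipeline(doc_id: str) -> str:
    # Stand-in for your real application logic.
    return f"summary of {doc_id}"

@observe()
def summarize(doc_id: str) -> str:
    summary = run_pipeline(doc_id)
    # Set the span's input/output at runtime, independent of the
    # function signature (these kwargs are an assumption).
    update_current_span(input=doc_id, output=summary)
    return summary
```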