When Should You Start Tracing?

Three weeks into a Confident AI trial, a number of customers hit a production failure, ask us to help debug it, and only then realize they never set up tracing. We have nothing to look at: no spans, no threads, no history, so we cannot even offer informed advice. They have to go back and instrument from scratch, losing a week they did not have.

The answer is: as early as possible. Before you build datasets. Before you annotate. Before you write your first eval. Get traces flowing first, because everything downstream depends on having the data.
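
Concretely, "flowing" can mean just a few decorated functions. Below is a minimal sketch of what instrumentation might look like with deepeval's `@observe` decorator, which Confident AI uses to capture spans. The function bodies, the model name, and the OpenAI call are illustrative stand-ins, and the decorator's exact parameters vary by version, so check the tracing docs rather than copying this verbatim.

```python
from openai import OpenAI
from deepeval.tracing import observe

# Assumes deepeval is logged in to Confident AI (e.g. via `deepeval login`
# or a CONFIDENT_API_KEY environment variable) so spans are exported.
client = OpenAI()

@observe()  # hypothetical app function: each decorated call becomes a span
def retrieve_context(query: str) -> list[str]:
    # Stand-in for your real retriever (vector DB, keyword search, etc.)
    return ["Tracing records each step your system takes."]

@observe()  # top-level span: one call here is one trace in the dashboard
def answer_question(query: str) -> str:
    context = retrieve_context(query)  # nested call appears as a child span
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "Answer using: " + " ".join(context)},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(answer_question("What does tracing give me?"))
```

Once the top-level function is running in production, every invocation produces a trace you can inspect, annotate, and later evaluate.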

Most teams think of tracing as a debugging tool — something you turn on when something breaks. That is backwards. Tracing is how you build the foundation:

  • Datasets come from traces. Real production inputs are better than anything you will synthesize on day one (see the sketch after this list).
  • Annotations happen on traces. You cannot label outcomes if you cannot see what happened.
  • Evals run against traces. Online evaluation needs spans and threads to attach to.
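
As a sketch of the first point, turning traced production inputs into a dataset can be a short script. The `fetch_recent_traces` helper below is hypothetical (substitute however you export traces), and `Golden` and `EvaluationDataset` follow deepeval's dataset API as we understand it, so verify field and method names against the current docs.

```python
from deepeval.dataset import EvaluationDataset, Golden

def fetch_recent_traces() -> list[dict]:
    """Hypothetical helper: replace with however you export traces
    (Confident AI's dashboard export, or your own trace store)."""
    return [
        {"input": "How do I reset my password?",
         "output": "Go to Settings > Account and click 'Reset password'."},
    ]

# Each real production input becomes a golden. expected_output is left
# empty on purpose: that is what annotation fills in later.
goldens = [
    Golden(input=trace["input"], actual_output=trace["output"])
    for trace in fetch_recent_traces()
]

dataset = EvaluationDataset(goldens=goldens)
dataset.push(alias="prod-inputs-week-1")  # assumed API for syncing to Confident AI
```

Annotation then becomes filling in expected outputs on goldens you have already collected, rather than inventing inputs from scratch.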

Without traces, you are guessing what your system does. With traces, you are looking at what it actually did.

The objection is always "we will add tracing later, we just want to get the product working first." The problem is that "later" arrives as a production incident you cannot diagnose. You spend the time either way — the question is whether you spend it calmly during setup or frantically during a fire drill.

Instrument first. Everything else gets easier once the data is in.