Troubleshooting
Common issues and fixes when using @observe for tracing
Overview
This page covers common tracing issues caused by Python’s concurrency model and process lifecycle. If you’re experiencing any of the following, check the relevant sections below:
- Unexpected new traces appearing instead of spans nesting under a parent trace.
- Traces not showing up on the Confident AI dashboard after execution.
- Trace attributes not set correctly — output, name, or metadata reflecting the wrong values.
- Missing output on streamed responses — trace appears but with no output.
Using @observe with ThreadPoolExecutor
Python’s concurrent.futures.ThreadPoolExecutor spawns new threads that do not inherit ContextVar values from the calling thread. Since deepeval tracing relies on ContextVar to track the active span, submitting an @observe-decorated function directly to an executor produces a separate, orphaned trace instead of nesting under the parent.
The fix is to snapshot the caller’s context with contextvars.copy_context() and use ctx.run when submitting work:
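A minimal sketch of the pattern, using a plain ContextVar to stand in for deepeval's internal tracing state (the variable and function names here are illustrative, not deepeval's API):

```python
import contextvars
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the ContextVar deepeval uses to track the active span
current_span = contextvars.ContextVar("current_span", default=None)

def child(item):
    # Inside ctx.run this sees the parent's value; with a bare
    # executor.submit(child, item) it would see the default (None).
    return (item, current_span.get())

def parent(items):
    current_span.set("parent-span")
    with ThreadPoolExecutor() as executor:
        # A fresh copy per task: a single Context object cannot be
        # entered by two threads at the same time.
        futures = [
            executor.submit(contextvars.copy_context().run, child, item)
            for item in items
        ]
        return [f.result() for f in futures]
```

In real code, parent and child would be @observe-decorated; the mechanics of the context copy are the same.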
copy_context() must be called inside the @observe-decorated parent function so that it captures the active tracing context. The snapshot is point-in-time: take it after the parent's span is active, and take a fresh copy for each task you submit, both because earlier snapshots go stale if the parent context changes between batches and because a single Context object cannot be entered by two threads at the same time.
Traces Not Showing Up
Confident AI uses batch ingestion for traces, so it is normal for a trace to take up to 30 seconds to appear on the dashboard after it has been posted. If your traces still don’t show up after that window, the most likely cause is your process exiting before the background worker finishes posting — common in serverless functions (AWS Lambda, Google Cloud Functions, etc.) and short-lived scripts.
To fix this, set the CONFIDENT_TRACE_FLUSH environment variable to force DeepEval to flush traces synchronously before the function returns:
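For example, in your deployment environment (the YES value is shown as in recent deepeval releases; confirm the accepted value for your version):

```shell
# Force traces to be flushed synchronously before the process exits
export CONFIDENT_TRACE_FLUSH=YES
```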
Or set it inline when running a script:
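For example (main.py is a placeholder for your own entry-point script):

```shell
CONFIDENT_TRACE_FLUSH=YES python main.py
```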
With synchronous flushing enabled, the process will not shut down until all pending traces have been posted. This does not add latency to individual function calls, but it may delay script or serverless function exit while traces are being flushed.
Using @observe with asyncio.run_in_executor()
loop.run_in_executor() delegates work to a thread pool under the hood, so it has the same ContextVar propagation issue as ThreadPoolExecutor — child spans will create orphaned traces instead of nesting under the parent.
Apply the same copy_context() fix:
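A minimal sketch, again with a plain ContextVar standing in for deepeval's internal tracing state (names are illustrative):

```python
import asyncio
import contextvars

current_span = contextvars.ContextVar("current_span", default=None)

def blocking_child():
    # Sees the parent's value only because we entered a copied context;
    # loop.run_in_executor(None, blocking_child) would see None.
    return current_span.get()

async def parent():
    current_span.set("parent-span")
    ctx = contextvars.copy_context()  # snapshot inside the parent coroutine
    loop = asyncio.get_running_loop()
    return await loop.run_in_executor(None, ctx.run, blocking_child)
```

On Python 3.9+, asyncio.to_thread() copies the current context automatically, so it does not need this workaround.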
Undecorated Parent Function
If the outermost calling function is not decorated with @observe, there is no parent trace for child spans to nest under. Each @observe-decorated function called inside it will create its own independent trace.
This is easy to miss on entry points like Flask route handlers, FastAPI endpoints, or task-queue workers — make sure the top-level function that kicks off your pipeline is decorated.
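For example, a sketch of a decorated entry point (the retrieve, generate, and answer functions and their bodies are placeholders; only @observe is deepeval's API):

```python
from deepeval.tracing import observe

@observe()
def retrieve(query: str) -> list:
    return ["chunk-1", "chunk-2"]  # placeholder retrieval step

@observe()
def generate(query: str, chunks: list) -> str:
    return f"answer using {len(chunks)} chunks"  # placeholder LLM call

@observe()  # decorate the entry point too, so the children nest under one trace
def answer(query: str) -> str:
    return generate(query, retrieve(query))
```

Without the decorator on answer(), calling it would produce two independent traces, one for retrieve() and one for generate().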
Using @observe with multiprocessing
multiprocessing.Process and concurrent.futures.ProcessPoolExecutor spawn entirely separate OS processes that do not share memory with the parent. Unlike threads, contextvars.copy_context() cannot propagate tracing context across process boundaries.
Traces created inside child processes will always be independent, top-level traces. There is no workaround for this — if you need child processes to produce spans that nest under a parent, consider switching to ThreadPoolExecutor with the copy_context() fix described above.
update_current_trace vs update_current_span
update_current_trace() updates the trace (the top-level unit), not the span of the function it’s called in. If you call it from a child @observe-decorated function expecting it to set that child’s span data, it will set the trace-level fields instead.
To update a child function’s own span, use update_current_span():
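A sketch of the pattern (the retrieve function and its body are placeholders; the keyword-argument form of update_current_span is assumed from recent deepeval versions, so check the signature your version supports):

```python
from deepeval.tracing import observe, update_current_span

@observe()
def retrieve(query: str) -> list:
    chunks = ["chunk-1", "chunk-2"]  # placeholder for a real retrieval call
    # Sets fields on retrieve()'s own span, not on the top-level trace
    update_current_span(output=chunks, metadata={"num_chunks": len(chunks)})
    return chunks
```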
Use update_current_trace() in the top-level function to set trace-level fields like name, tags, or metadata. Use update_current_span() everywhere else.
Streaming Functions Missing Trace Output
When an @observe-decorated function uses yield to stream its response (e.g. a FastAPI StreamingResponse), the trace output won’t be captured automatically because the return value is a generator — not the final assembled text.
To fix this, collect the streamed output and set it explicitly with update_current_trace():
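A sketch of the pattern (llm_stream is a placeholder for your real streaming LLM call; output= is assumed to be an accepted keyword on update_current_trace):

```python
from deepeval.tracing import observe, update_current_trace

def llm_stream(query: str):
    # placeholder for a real streaming LLM call
    yield from ["Hello", ", ", "world"]

@observe()
def stream_answer(query: str):
    collected = []
    for token in llm_stream(query):
        collected.append(token)
        yield token  # stream each chunk to the caller as before
    # Runs once the generator is exhausted: record the assembled text
    update_current_trace(output="".join(collected))
```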
Without this, the trace will appear on Confident AI with no output.