Log Prompts
Overview
When you use prompts managed on Confident AI, you can log the exact prompt version used in each LLM call. Prompt logging works by:
- Pulling a prompt from Confident AI
- Logging it to the LLM span via update_llm_span/updateLlmSpan
That’s it! This lets you monitor which prompts are running in production and which prompts perform best over time.
If you haven’t already, learn how prompt management works on Confident AI here.
Log a Prompt
Prompt logging is only available for LLM spans. Make sure your observed function has type="llm" set.
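For example, a minimal observed function might be declared like this (a sketch assuming DeepEval's @observe decorator is importable from deepeval.tracing; the function name is a placeholder):

```python
from deepeval.tracing import observe

# type="llm" marks this span as an LLM span, which is required for prompt logging
@observe(type="llm")
def generate(user_question: str) -> str:
    ...
```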
Pull and interpolate your prompt
Pull the prompt version from Confident AI and interpolate any variables.
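A minimal Python sketch of this step, assuming the DeepEval SDK's Prompt class with pull() and interpolate() (the alias, version, and variable names below are placeholders):

```python
from deepeval.prompt import Prompt

# Pull the prompt version from Confident AI
prompt = Prompt(alias="my-prompt")   # hypothetical alias
prompt.pull(version="00.00.01")      # omit the version to pull the latest

# Interpolate variables to get a usable copy of the prompt template
prompt_text = prompt.interpolate(user_question="How do I reset my password?")
```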
If you don’t have any variables, you must still call interpolate() to create a usable copy of your prompt template.
Use the prompt and log it to the span
Inside an observed LLM function, use the interpolated prompt for generation and log the original prompt object to the span.
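A Python sketch that puts this together, assuming update_llm_span is importable from deepeval.tracing and accepts a prompt argument (the parameter name is an assumption), and using the OpenAI client for generation (model and alias are illustrative):

```python
from deepeval.prompt import Prompt
from deepeval.tracing import observe, update_llm_span
from openai import OpenAI

client = OpenAI()

@observe(type="llm")
def generate(user_question: str) -> str:
    # Pull and interpolate, as in the previous step
    prompt = Prompt(alias="my-prompt")  # hypothetical alias
    prompt.pull()
    prompt_text = prompt.interpolate(user_question=user_question)

    # Generate using the interpolated prompt text
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt_text}],
    )

    # Log the original pulled Prompt object (not the interpolated string)
    # so Confident AI can link this span back to the versioned prompt
    update_llm_span(prompt=prompt)  # parameter name assumed

    return response.choices[0].message.content
```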
Always pass the original pulled prompt object (not the interpolated version) to update_llm_span / updateLlmSpan. Confident AI uses it to link the span back to the versioned prompt — passing the interpolated string would log a raw string instead.
Once logged, Confident AI will display the prompt alias and version directly on the LLM span in the trace view, making it easy to see exactly which prompt was used for each LLM call.
Next Steps
With prompts logged, set up cost tracking or refine what data your traces capture.