Log Prompts

Log prompts to LLM spans for version tracking in production

Overview

When you use prompts managed on Confident AI, you can log the exact prompt version used in each LLM call. Prompt logging works by:

  1. Pulling a prompt from Confident AI
  2. Logging it to the LLM span via update_llm_span / updateLlmSpan

That’s it! This lets you monitor which prompts are running in production and which perform best over time.

Prompt Observability & Performance

If you haven’t already, learn how prompt management works on Confident AI here.

Log a Prompt

Prompt logging is only available for LLM spans. Make sure your observed function has type="llm" set.
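As a quick reference, only functions observed with type="llm" create LLM spans. The sketch below shows the minimal decorator shape; the function name and model argument are illustrative.

from deepeval.tracing import observe

# Only functions observed with type="llm" produce LLM spans,
# which are the only spans that accept a logged prompt.
@observe(type="llm", model="gpt-4o")
def call_llm(user_input: str) -> str:
    ...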

1. Pull and interpolate your prompt

Pull the prompt version from Confident AI and interpolate any variables.

main.py
from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull()
interpolated_prompt = prompt.interpolate(name="Joe")

If you don’t have any variables, you must still call interpolate() to create a usable copy of your prompt template.
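For example, a template without variables is still pulled and interpolated the same way (a minimal sketch using the same placeholder alias):

from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull()

# No variables to fill in, but interpolate() is still required
# to produce a usable copy of the prompt template.
interpolated_prompt = prompt.interpolate()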

2. Use the prompt and log it to the span

Inside an observed LLM function, use the interpolated prompt for generation and log the original prompt object to the span.

main.py
from deepeval.tracing import observe, update_llm_span
from deepeval.prompt import Prompt
from openai import OpenAI

@observe(type="llm", model="gpt-4o")
def generate_response(user_input: str) -> str:
    # Pull the versioned prompt and interpolate its variables
    prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
    prompt.pull()
    interpolated_prompt = prompt.interpolate(name="Joe")

    response = OpenAI().chat.completions.create(
        model="gpt-4o",
        # The interpolated prompt is text here, so wrap it as a chat message
        messages=[{"role": "user", "content": interpolated_prompt}],
    )
    # Log the original Prompt object (not the interpolated output) to the span
    update_llm_span(prompt=prompt)
    return response.choices[0].message.content

Always pass the original pulled prompt object (not the interpolated version) to update_llm_span / updateLlmSpan. Confident AI uses it to link the span back to the versioned prompt — passing the interpolated string would log a raw string instead.
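In other words, reusing the names from the step above:

# Correct: log the pulled Prompt object so Confident AI can link
# the span to the prompt's alias and version.
update_llm_span(prompt=prompt)

# Not what you want: this would log a raw string with no version linkage.
# update_llm_span(prompt=interpolated_prompt)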

Once logged, Confident AI will display the prompt alias and version directly on the LLM span in the trace view, making it easy to see exactly which prompt was used for each LLM call.
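If a span should point at one specific version rather than the latest, pull() also appears to accept a version argument; a hedged sketch, with an illustrative version string:

from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
# Assumed usage: pin a specific prompt version instead of the latest.
prompt.pull(version="00.00.01")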

Next Steps

With prompts logged, set up cost tracking or refine what data your traces capture.