Working with Prompts

Learn how to test and use prompts in your LLM app

Overview

You can pull a prompt version from Confident AI just as you would pull a dataset. It works as follows:

  • You provide Confident AI with the alias, and optionally the version, of the prompt you wish to retrieve
  • Confident AI returns the non-interpolated prompt template
  • You then interpolate the variables in code

You should pull a prompt once and keep it in memory instead of pulling it every time you need to use it.
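For example, here is a minimal sketch of this pattern (handle_request and the name variable are illustrative placeholders), pulling the prompt once at startup and reusing the in-memory copy for every request:

from deepeval.prompt import Prompt

# Pull once at application startup and keep the Prompt object in memory
prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull()

def handle_request(user_name):
    # Interpolation works on the already-pulled template; no network call is made here
    return prompt.interpolate(name=user_name)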

Using Prompt Versions

1. Pull prompt with alias

Pull your prompt version by providing the alias you’ve defined:

from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull()

By default, Confident AI will return the latest version of your prompt. However, you can also specify the version to override this behavior.
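For example, to pin a specific version (the version string below is a placeholder, and this assumes pull() accepts a version argument as described above):

from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
# Pin to a specific version instead of defaulting to the latest
prompt.pull(version="00.00.01")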

2. Interpolate variables

Now that you have your prompt template, interpolate any dynamic variables you may have defined in your prompt version. For example, if this is your prompt version:

{
  "role": "system",
  "content": "You are a helpful assistant called {{ name }}. Speak normally like a human."
}

And your interpolation type is {{ variable }}, interpolating the name (e.g. "Joe") in code would give you a prompt that is ready for use:

interpolated_prompt = prompt.interpolate(name="Joe")

The resulting prompt:

{
  "role": "system",
  "content": "You are a helpful assistant called Joe. Speak normally like a human."
}

Even if you don't have any variables, you must still call the interpolate() method to create a copy of your prompt template for use in your LLM application.
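For a prompt with no variables, the call is simply:

interpolated_prompt = prompt.interpolate()  # no variables to fill in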

3. Use interpolated prompt

By now you should have an interpolated prompt version, for example:

{
  "role": "system",
  "content": "You are a helpful assistant called Joe. Speak normally like a human."
}

You can use this to generate text from your LLM provider of choice. Here is an example with OpenAI:

main.py
from deepeval.prompt import Prompt
from openai import OpenAI

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull()
interpolated_prompt = prompt.interpolate()  # interpolate prompt

response = OpenAI().chat.completions.create(
    model="gpt-4o-mini",
    messages=interpolated_prompt
)

print(response.choices[0].message.content)

Pull Prompts By Label

Previously, we saw how to pull a prompt by supplying a version number. You can also "deploy" a prompt using a label, which lets you select a specific version without defaulting to the latest one:

main.py
from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull(label="staging")

You must manually label each prompt version before it can be pulled by label.

Logging Prompts in Traces

1. Setup tracing

Attach the @observe decorator to the functions/methods that make up your agent, and specify type="llm" for your LLM-calling functions.

main.py
from deepeval.tracing import observe

@observe(type="llm", model="gpt-4.1")
def your_llm_component():
    ...

Specifying the type is necessary because logging prompts is only available for LLM spans.
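For instance, here is a sketch (function names are illustrative) where only the inner function, decorated as an LLM span, can have a prompt logged against it; the outer function is traced as a regular span:

from deepeval.tracing import observe

@observe(type="llm", model="gpt-4.1")
def generate(messages):
    ...  # prompts can be logged here because this is an LLM span

@observe()  # non-LLM span; prompts cannot be logged on it
def run_agent(user_input):
    return generate([{"role": "user", "content": user_input}])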

2. Pull and interpolate prompt

Pull and interpolate the prompt version to use it for LLM generation.

main.py
from deepeval.tracing import observe
from deepeval.prompt import Prompt
from openai import OpenAI

@observe(type="llm", model="gpt-4.1")
def your_llm_component():
    prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
    prompt.pull()
    interpolated_prompt = prompt.interpolate(name="Joe")
    response = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=interpolated_prompt)
    return response.choices[0].message.content

3. Execute your function

Then provide the prompt to the update_llm_span function inside your traced component, and execute your function as usual.

main.py
from deepeval.tracing import observe, update_llm_span
from deepeval.prompt import Prompt
from openai import OpenAI

@observe(type="llm", model="gpt-4.1")
def your_llm_component():
    prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
    prompt.pull()
    interpolated_prompt = prompt.interpolate(name="Joe")
    response = OpenAI().chat.completions.create(model="gpt-4o-mini", messages=interpolated_prompt)
    update_llm_span(prompt=prompt)
    return response.choices[0].message.content

Remember to pull the prompt before updating the span, otherwise the prompt will not be logged.

This will automatically attribute the prompt used to the LLM span.

Prompt Caching

Confident AI automatically caches prompts on the client side to minimize API call latency and ensure prompt availability, which is especially useful in production environments.

Customize Refresh Rate

By default, the cache is refreshed every 60 seconds: DeepEval automatically updates the cached prompt with the latest version from Confident AI. You can override this by setting the refresh parameter to a different value (in seconds). Fetching is done asynchronously, so it will not block your application.

main.py
from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull(refresh=60)
interpolated_prompt = prompt.interpolate(name="Joe")

Disable Caching

To disable caching, you can set refresh=0. This will force an API call every time you pull the prompt, which is particularly useful for development and testing.

main.py
from deepeval.prompt import Prompt

prompt = Prompt(alias="YOUR-PROMPT-ALIAS")
prompt.pull(refresh=0)
interpolated_prompt = prompt.interpolate(name="Joe")