Working with Prompts
Overview
You can pull a prompt version from Confident AI just as you would pull a dataset. It works by:
- Providing Confident AI with the alias, and optionally the version, of the prompt you wish to retrieve
- Confident AI will provide the non-interpolated version of the prompt
- You will then interpolate the variables in code
You should pull a prompt once and keep it in memory instead of pulling it every time you need to use it.
You can pull and manage your prompts in any project by configuring a `CONFIDENT_API_KEY`.
- For default usage, set `CONFIDENT_API_KEY` as an environment variable.
- To target a specific project, pass a `confident_api_key` directly when creating the `Prompt` object.
When both are provided, the `confident_api_key` passed to `Prompt` always takes precedence over the environment variable.
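For example, a minimal sketch of both options (the alias and key values are placeholders):

```python
import os

from deepeval.prompt import Prompt

# Default usage: rely on the CONFIDENT_API_KEY environment variable
os.environ["CONFIDENT_API_KEY"] = "your-confident-api-key"
prompt = Prompt(alias="my-prompt")

# Target a specific project: pass the key directly to Prompt;
# this takes precedence over the environment variable
prompt = Prompt(alias="my-prompt", confident_api_key="project-specific-api-key")
```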
Using Prompt Versions
Pull prompt with alias
Pull your prompt version by providing the alias you’ve defined:
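A minimal Python sketch, assuming a prompt with the alias "my-prompt" exists on Confident AI:

```python
from deepeval.prompt import Prompt

# Pulls the latest version of the prompt with this alias
prompt = Prompt(alias="my-prompt")
prompt.pull()
```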
By default, Confident AI will return the latest version of your prompt.
However, you can also specify the version to override this behavior.
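For example, assuming your prompt has a version such as "00.00.01":

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
# Pull a specific version instead of the latest
prompt.pull(version="00.00.01")
```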
Interpolate variables
Now that you have your prompt template, interpolate any dynamic variables you may have defined in your prompt version.
For example, if your prompt version contains a {{ name }} variable and your interpolation type is {{ variable }}, interpolating name (e.g. “Joe”) gives you a prompt that is ready for use.
And if you don’t have any variables, you must still use the interpolate() method to create a copy of your prompt template to be used in your LLM application.
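A sketch of both cases, assuming the prompt version defines a single {{ name }} variable:

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
prompt.pull()

# Interpolate the {{ name }} variable with a concrete value
prompt_to_use = prompt.interpolate(name="Joe")

# For a prompt version with no variables, still call interpolate()
# to get a copy of the template ready for your LLM application:
# prompt_to_use = prompt.interpolate()
```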
Accessing Tools from Prompts
After pulling a prompt, you can access any tools that were defined in the prompt version via the tools property. Each tool contains:
- name: The name of the tool
- description: A description of what the tool does
- input schema: The JSON schema defining the tool’s input parameters
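A Python sketch; the exact attribute name for the input schema (here `input_schema`) is an assumption:

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
prompt.pull()

# Iterate over the tools defined in this prompt version
for tool in prompt.tools:
    print(tool.name)          # the name of the tool
    print(tool.description)   # what the tool does
    print(tool.input_schema)  # JSON schema of the tool's inputs (assumed attribute name)
```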
Pull Prompts By Label
In addition to pulling a prompt from Confident AI using its alias, you can also pull specific prompt versions using a version or label.
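A sketch of both options, assuming version and label are passed to pull() (the values shown are hypothetical):

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")

# Pull a specific version
prompt.pull(version="00.00.01")

# Or pull the version tagged with a particular label
prompt.pull(label="production")
```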
How Are Prompts Pulled?
Confident AI automatically caches prompts on the client side to minimize API call latency and ensure prompt availability, which is especially useful in production environments.
Customize refresh rate
By default, the cache is refreshed every 60 seconds: DeepEval automatically updates the cached prompt with the up-to-date version from Confident AI. This can be overridden by setting the `refresh` parameter to a different value. Fetching is done asynchronously, so it will not block your application.
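A sketch, assuming `refresh` is passed (in seconds) when pulling the prompt:

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
# Refresh the cached prompt every 5 minutes instead of the default 60 seconds
prompt.pull(refresh=300)
```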
Disable Caching
To disable caching, you can set `refresh=0`. This will force an API call every time you pull the prompt, which is particularly useful for development and testing.
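For example, under the same assumption about where `refresh` is passed:

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
# refresh=0 disables caching and forces an API call on every pull
prompt.pull(refresh=0)
```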
Log Prompts During Evals
You can associate a prompt with your evals to get detailed insights into how each prompt and its versions are performing. It works by:
- Pulling or creating prompts in `deepeval` using the `Prompt` object
- Logging prompts as hyperparameters in your evaluations
Pull prompts
You should first pull a prompt from Confident AI using its alias, version, or label.
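For example (the alias and version are placeholders):

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
prompt.pull(version="00.00.01")
```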
You can now interpolate and use this prompt in your LLM app.
Logging prompt in evaluate
Now, simply add this prompt as a free-form key-value pair to the `hyperparameters` argument in the `evaluate()` function.
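A Python sketch, assuming a simple test case and metric (the hyperparameter key name is free-form):

```python
from deepeval import evaluate
from deepeval.test_case import LLMTestCase
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-prompt")
prompt.pull()

test_case = LLMTestCase(
    input="What is your return policy?",
    actual_output="You can return any item within 30 days of purchase.",
)

# Log the prompt as a hyperparameter so this test run is attributed to it
evaluate(
    test_cases=[test_case],
    metrics=[AnswerRelevancyMetric()],
    hyperparameters={"System Prompt": prompt},
)
```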
This will automatically attribute the prompt used during this test run, allowing you to get detailed insights on the Confident AI platform.
Log Prompts During Tracing
Associating prompts with LLM traces and spans is a great way to determine which prompts perform best in production.
Setup tracing
Attach the `@observe` decorator to functions/methods that make up your agent, and specify type `llm` for your LLM-calling functions.
Specifying the type is necessary because logging prompts is only available for LLM spans.
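A minimal sketch, assuming `@observe` is imported from deepeval's tracing module and OpenAI is used for generation:

```python
from openai import OpenAI
from deepeval.tracing import observe

client = OpenAI()

# type="llm" marks this function as an LLM span, which is required for logging prompts
@observe(type="llm")
def generate(user_input: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_input}],
    )
    return response.choices[0].message.content
```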