Version Prompts
Overview
Prompt versioning lets you test and optimize different versions of your prompts. Managing prompts on Confident AI allows you to:
- Collaborate and centralize where prompts are stored and edited, even for non-technical team members
- Pinpoint which version, or even combination of your prompt versions, performed best
- Optionally co-locate model settings, output type, and tools with prompts to version them as a single unit
There are a million places you can keep your prompts - on GitHub, in CSV files, in memory in code, Google Sheets, Notion, or even written in a diary hidden in your desk drawer. But only by keeping prompts on Confident AI can you fully leverage its evaluation features.
Prompts are a type of hyperparameter on Confident AI. Others include things like models, embedders, top-K, and max tokens. When you run evals against prompts kept on Confident AI, we can tell you which version performs best, and later automatically optimize it for you.
Prompts vs Prompts + Model Config: You can use Confident AI purely for prompt versioning — pull your prompts and use them with whatever model you configure in your code. Alternatively, if you want to manage prompts and model configurations together as a single versioned unit, you can attach model settings, output type, and tools to prompt versions.
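For example, here's a minimal sketch of that pull-and-use workflow, assuming deepeval's `Prompt` client and the OpenAI SDK (the alias, variable, and model choice are illustrative):

```python
from deepeval.prompt import Prompt
from openai import OpenAI

# Pull the latest commit of the prompt from Confident AI
prompt = Prompt(alias="MyPrompt")
prompt.pull()

# Interpolate any dynamic variables, then call whatever model you
# configure in your own code
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": prompt.interpolate(name="Alex")}],
)
print(response.choices[0].message.content)
```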
Types of Prompts
There are two types of prompts you can create:
- (Single) Text Prompt: Use this when you need a straightforward, one-off prompt for simple completions.
- Prompt Message List: Use this when you need to define multiple messages with specific roles (system, user, assistant) in an OpenAI messages format. This format is ideal for few-shot prompting, where you can start with a system message that sets the context.
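For instance, a message-list prompt in the OpenAI messages format might look like the following sketch (the roles are real; the content and the `{{ user_question }}` variable are illustrative):

```python
# An illustrative few-shot, message-list prompt in the OpenAI messages format
messages = [
    {"role": "system", "content": "You are a helpful customer support assistant."},
    # A few-shot example pair:
    {"role": "user", "content": "How do I reset my password?"},
    {"role": "assistant", "content": "Go to Settings > Security > Reset password."},
    # The actual user input, interpolated at runtime:
    {"role": "user", "content": "{{ user_question }}"},
]
```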
If you ever see a prompt mentioned without any mention of “message” or “list”, assume it is a single text prompt we’re talking about.
Understanding Prompt Versioning
In Confident AI, each prompt is identified by a unique alias. An alias refers to a single, specific prompt; different aliases refer to completely separate prompts.
Every change you make to a prompt is tracked as a commit. This ensures complete history and traceability of all prompt modifications. When you’re ready to mark a commit as a stable release, you can promote it to a version.
Example
Suppose you have a prompt with the alias MyPrompt. Every edit creates a new commit. You can then promote specific commits to versions.
- Commit: Every change to a prompt creates a new commit. Commits are automatically tracked and provide a complete history of all modifications.
- Version: A promoted commit that represents a stable release. Version numbers are controlled by Confident AI in the format `00.00.0X` (e.g., `00.00.01`, `00.00.02`).
- Label: Labels (like `staging` or `production`) can only be assigned to versions, not commits. This ensures that only stable, versioned prompts are deployed to different environments.
Commit a New Prompt
You can create a prompt in Project > Prompt Studio through two simple steps:
- Create a text or messages prompt
- Edit and commit your changes in the prompt editor
Don’t forget to commit your changes after you’re done editing. Every commit is tracked, and you can later promote any commit to a version. You can also create commits from code.
A new version can only be created for commits made after the most recently versioned commit. Commits made before an existing version cannot be promoted to a version.
For more advanced push options including model settings, output type, and tools, see Automate Prompt Management.
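As a sketch, creating a commit from code might look like this with deepeval (the `push` usage shown here is an assumption; the alias and prompt text are illustrative):

```python
from deepeval.prompt import Prompt

# Each push creates a new commit for this alias on Confident AI
prompt = Prompt(alias="MyPrompt")
prompt.push(text="You are a helpful assistant. Answer the user's question concisely.")
```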
Templating Options
Dynamic variables
You can include variables that are interpolated dynamically in your LLM application later on. There are five interpolation types available.
Note that variable names must not contain spaces.
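For example, assuming a prompt committed with a double-curly-brace variable, interpolating it at runtime might look like this (the alias and variable name are illustrative):

```python
from deepeval.prompt import Prompt

# Prompt text on Confident AI: "Summarize the following ticket: {{ ticket_body }}"
prompt = Prompt(alias="MyPrompt")
prompt.pull()

# Variables map to keyword arguments, which is why names cannot contain spaces
final_text = prompt.interpolate(ticket_body="Customer cannot log in after the update.")
```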
Conditional logic
Conditional logic can be added when using JINJA interpolation, which supports Jinja templates and lets you render more complex logic such as conditional if/else blocks:
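A representative if/else block in Jinja syntax (the variable and copy are illustrative):

```jinja
{% if user_tier == "premium" %}
Provide a detailed, step-by-step answer with examples.
{% else %}
Provide a brief answer and suggest upgrading for detailed support.
{% endif %}
```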
As well as for loops:
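For example (again with illustrative variables):

```jinja
Here are the documents to consider:
{% for doc in documents %}
- {{ doc.title }}: {{ doc.summary }}
{% endfor %}
```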
Including images
You can also include images by simply dragging and dropping an image into the text areas.
Model Configs
Beyond creating and editing prompts, you can also configure model settings, output type, and tools associated with your prompt. These configurations are included in each commit, allowing you to:
- Track not just prompt changes, but also model configuration changes
- Use model configurations directly when pulling prompts in code
- Compare the impact of models on the same prompt (and vice versa) when running experiments on your AI app
Keeping model configs with your prompts on Confident AI does no harm, but you aren't obligated to use them - either in code or on the platform (in the Arena or when running experiments). If model configs aren't useful to you, feel free to leave them out.
Model settings
You can configure the model provider, model name, and model parameters for each prompt. These settings are tracked with each commit, ensuring that when you pull a prompt in code, you also get the exact model configuration needed to run it.
Example
For an OpenAI GPT-4.1 configuration with custom temperature and max tokens:
- Provider: `openai`
- Model: `gpt-4.1`
- Parameters:
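The parameters might look something like this (values are illustrative):

```json
{
  "temperature": 0.7,
  "max_tokens": 1024
}
```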
Output type
You can specify the expected output format for your prompt by selecting an output type. This configuration is tracked with each commit:
- Text Output: Standard text response (default)
- JSON Output: Structured JSON response
- Schema Output: Structured response conforming to a defined schema
When using Schema Output, you can define a custom schema that your LLM response should conform to. This is useful for ensuring structured, predictable outputs.
To configure a schema:
- Click on Schema Output in the output type dropdown
- Enter a Schema Name (required)
- Add Schema Fields with their property names and types (String, Number, Boolean, etc.)
- Click Save Schema
The schema will be previewed as a Pydantic BaseModel class, making it easy to visualize how your structured output will look.
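For instance, a schema named `SupportAnswer` with a string field and a number field might preview along these lines (the schema and field names are illustrative):

```python
from pydantic import BaseModel

class SupportAnswer(BaseModel):
    answer: str
    confidence: float
```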
Attach tools
You can attach tools to your prompt for function calling capabilities. This allows your LLM to invoke external tools like web search, APIs, or custom functions. Tool configurations are tracked with each commit.
To attach tools:
- Click on the Tools button in the prompt editor
- Search for available tools
- Select the tools you want to enable for this prompt version
Assign Prompt Labels
Labels can only be assigned to versions, not commits. This ensures that only stable, versioned prompts are deployed to different environments. You can assign labels in the Version History page so no code changes are required to “deploy” a new version into a certain environment.
To assign a label, first promote a commit to a version, then assign the desired label (e.g., staging, production) to that version.
Only users with sufficient permissions are able to modify prompt labels.
The next section will dive deeper into this topic, but here is a sketch of how you can pull a prompt via its label in Python (the alias is illustrative):
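```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="MyPrompt")
# Pull the version currently assigned the "production" label
# (the `label` argument shown here is an assumption; see the next section)
prompt.pull(label="production")
```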
Next Steps
Now that you know how to version prompts on the platform, put them to work in evaluations.