Automate Prompt Management
Overview
Instead of manually creating and updating prompts on the platform, you can automate prompt management via the Evals API. This allows you to:
- Push new prompt commits from your codebase or CI/CD pipeline
- Promote commits to versions programmatically
- Optionally configure model settings, output type, and tools to track alongside prompts
- Integrate prompt management into your development workflow
Most of this page focuses on the core use case: tracking prompt changes via commits. Model settings, output types, and other configurations are optional add-ons for teams who want to manage their entire LLM configuration (prompt + model) as a single tracked unit.
If you haven’t already, get familiar with prompt commits and versions on the platform to understand the relationship between prompts, commits, versions, and labels.
Push Prompt Commits
Push a new commit of a prompt to Confident AI. If the prompt alias doesn’t exist, it will be created automatically. Every push creates a new commit that tracks your changes.
For message prompts:
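A minimal Python sketch using the deepeval SDK. The Prompt and PromptMessage names reflect deepeval's prompt module, but exact signatures should be verified against the current SDK reference:

```python
from deepeval.prompt import Prompt, PromptMessage

# Assumes CONFIDENT_API_KEY is set in your environment (or `deepeval login`).
prompt = Prompt(alias="my-chatbot-prompt")  # alias is created if it doesn't exist
prompt.push(
    messages=[
        PromptMessage(role="system", content="You are a helpful assistant."),
        PromptMessage(role="user", content="{user_question}"),
    ]
)
```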
For text prompts:
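The text form is the same call with a single template string (same caveats as above):

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-summarizer-prompt")
prompt.push(text="Summarize the following article in three sentences: {article}")
```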
You can also specify the interpolation type:
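For example (PromptInterpolationType and its member names are assumptions; check the SDK for the supported values):

```python
from deepeval.prompt import Prompt, PromptInterpolationType

prompt = Prompt(alias="my-summarizer-prompt")
prompt.push(
    text="Summarize the following article: {{article}}",
    interpolation_type=PromptInterpolationType.MUSTACHE,  # assumed member name
)
```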
Create a Version
When you’re ready to mark a commit as a stable release, you can promote it to a version. Version numbers are automatically assigned by Confident AI in the format 00.00.0X (e.g., 00.00.01, 00.00.02).
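A hedged sketch of promoting a commit over HTTP; the endpoint, auth header, and payload below are hypothetical, so consult the Confident AI API reference for the exact route and fields:

```python
import os

import requests

# Hypothetical endpoint and payload, shown for illustration only.
resp = requests.post(
    "https://api.confident-ai.com/v1/prompts/my-chatbot-prompt/versions",
    headers={"CONFIDENT_API_KEY": os.environ["CONFIDENT_API_KEY"]},
    json={"commitId": "<commit-id-to-promote>"},
)
resp.raise_for_status()
print(resp.json())  # e.g. {"version": "00.00.01"}
```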
A new version can only be created for commits made after the most recently versioned commit. Commits made before an existing version cannot be promoted to a version.
Once a commit is promoted to a version, you can assign labels (like staging or production) to it. Labels can only exist on versions, not commits.
Adding Model Configs
Model settings, output type, and tools are completely optional. You can track and use prompts without any of these — simply pull the prompt and use it with whatever model you choose in your code. These options are for teams who want to co-locate their model configuration with their prompts.
If you want to manage your model configuration alongside your prompts — tracking the prompt + model together in each commit — you can include model_settings and output_type when pushing.
This is useful when:
- You want to ensure a specific prompt always runs with a specific model and parameters
- You’re A/B testing different prompt + model combinations together
- You want to centralize both prompt and model configuration in one place
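A hedged sketch; model_settings and output_type match the parameter names above, but the exact object shapes are assumptions to verify against the SDK:

```python
from deepeval.prompt import Prompt, PromptMessage

prompt = Prompt(alias="my-chatbot-prompt")
prompt.push(
    messages=[
        PromptMessage(role="system", content="You are a helpful assistant."),
        PromptMessage(role="user", content="{user_question}"),
    ],
    # Hypothetical shapes: check the SDK for the exact model_settings and
    # output_type types/fields.
    model_settings={
        "provider": "openai",
        "model": "gpt-4o",
        "temperature": 0.7,
        "max_tokens": 1024,
    },
    output_type="TEXT",
)
```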
Reference
Model settings
Model settings include the provider, model name, and model parameters:
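For illustration, a settings payload might pair a provider and model name with parameters like this (field names are assumptions; see the sections below):

```python
# Hypothetical shape; verify field names against the SDK or API reference.
model_settings = {
    "provider": "openai",  # model provider (see Providers below)
    "model": "gpt-4o",     # provider-specific model name
    "temperature": 0.7,    # model parameters (see Parameters below)
    "max_tokens": 1024,
}
```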
Parameters
Here are all the available parameters you could set:
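As a non-exhaustive illustration (names follow common LLM-API conventions; which ones are accepted depends on the provider and model, per the note below):

```python
# Illustrative subset of common model parameters.
parameters = {
    "temperature": 0.7,
    "max_tokens": 2048,
    "top_p": 1.0,
    "frequency_penalty": 0.0,
    "presence_penalty": 0.0,
    "stop": ["\n\n"],
    "reasoning_effort": "medium",  # only certain OpenAI reasoning models
}
```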
Only include parameters that are valid for your chosen model provider and model name. For example, reasoning_effort may only apply to certain OpenAI models, while other parameters may not be supported by all providers. Confident AI does not exhaustively validate which parameter combinations are allowed; invalid configurations may result in runtime errors when using the prompt in your code.
Providers
Here is the list of available model providers:
Output configurations
You can also configure structured outputs:
Output Types:
- TEXT - Plain text output
- JSON - JSON-formatted output
- SCHEMA - Structured output validated against a Pydantic schema
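For example, a SCHEMA output can pair the prompt with a Pydantic model. The output_type and output_schema parameter names here are assumptions; check the SDK for the exact fields:

```python
from pydantic import BaseModel

from deepeval.prompt import Prompt


class Summary(BaseModel):
    title: str
    bullet_points: list[str]


prompt = Prompt(alias="my-summarizer-prompt")
prompt.push(
    text="Summarize the following article: {article}",
    output_type="SCHEMA",   # assumed value, per the list above
    output_schema=Summary,  # hypothetical parameter name
)
```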
Interpolation types
Specify how variables are interpolated in your prompts:
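For instance, the same variable reads differently under different interpolation styles; the enum members below are assumptions (see the SDK for the supported values):

```python
from deepeval.prompt import Prompt, PromptInterpolationType

# F-string style: variables written as {article}
Prompt(alias="fstring-example").push(
    text="Summarize: {article}",
    interpolation_type=PromptInterpolationType.FSTRING,  # assumed member
)

# Mustache style: variables written as {{article}}
Prompt(alias="mustache-example").push(
    text="Summarize: {{article}}",
    interpolation_type=PromptInterpolationType.MUSTACHE,  # assumed member
)
```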
What about Tools?
You can create and update tools by pushing prompts that include tool definitions. Tools in Confident AI are identified by name: pushing a tool with a new name creates it, while pushing a tool with an existing name updates it on the platform. Each push creates a new commit that tracks the tool configuration. Here's how you can create or update tools:
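A hedged sketch; the tools parameter and the tool fields below are assumptions modeled on common function-calling schemas, so verify the exact shape against the SDK:

```python
from deepeval.prompt import Prompt

prompt = Prompt(alias="my-agent-prompt")
prompt.push(
    text="Answer the user's question, calling tools if needed: {user_question}",
    # "get_weather" is created on the first push and updated on later pushes
    # that reuse the same name.
    tools=[
        {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
)
```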
Prompts in CI/CD
Automate prompt tracking as part of your CI/CD pipeline. A common pattern is to run a script that pushes prompt commits whenever your prompt files change. Your push_prompts.py script can read the prompt files and push them:
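A sketch of such a script, assuming prompt templates live in a prompts/ directory whose file names double as aliases (the layout is an assumption; adapt it to your repo):

```python
# push_prompts.py - assumes CONFIDENT_API_KEY is set in the CI environment.
from pathlib import Path

from deepeval.prompt import Prompt

PROMPTS_DIR = Path("prompts")  # assumed layout: prompts/<alias>.txt

for prompt_file in PROMPTS_DIR.glob("*.txt"):
    alias = prompt_file.stem             # file name (minus extension) as the alias
    text = prompt_file.read_text()
    Prompt(alias=alias).push(text=text)  # each run creates a new commit
    print(f"Pushed commit for prompt '{alias}'")
```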
Combine automated prompt pushing with prompt labeling to control which versions are deployed to different environments (e.g., staging, production). Remember that labels can only be assigned to versions, so you’ll need to promote commits to versions before labeling them.
Next Steps
Now that you can push prompts programmatically, learn how to pull them into your app for usage.