Code-Driven Assessments
5 min quickstart guide for code-driven AI red teaming
Overview
Confident AI’s red teaming capabilities let you test AI safety and security during development as part of a pre-deployment workflow, with features for:
- Vulnerability assessment: Systematically identify weaknesses like bias, toxicity, PII leakage, and prompt injection vulnerabilities.
- Adversarial testing: Simulate real-world attacks using jailbreaking, prompt injection, and other sophisticated attack methods.
- Risk profiling: Comprehensive evaluation across 40+ vulnerability types with detailed risk assessments and remediation guidance.
You can either run red teaming locally or remotely on Confident AI, both of which use DeepTeam and give you the same functionality:
- Run red teaming locally using deepteam with full control over vulnerabilities and attacks
- Support for custom vulnerabilities, attack methods, and advanced red teaming algorithms

Suitable for: Python users, development, and pre-deployment security workflows

- Run red teaming on the Confident AI platform with pre-built vulnerability frameworks
- Integrated with monitoring, risk assessments, and team collaboration features

Suitable for: Non-Python users, continuous monitoring, and production safety assessments
Create a Risk Assessment
This example walks through a comprehensive safety assessment that uses adversarial attacks to identify vulnerabilities in your AI system.
You’ll need to get your API key as shown in the setup and installation section before continuing.
Running red teaming locally executes attacks on your machine and uploads results to Confident AI. This gives full control over custom vulnerabilities and attack methods.
Install DeepTeam
First, install DeepTeam, Confident AI’s open-source LLM red teaming framework, by running the following command:
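```bash
pip install deepteam
```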
DeepTeam is powered by DeepEval’s evaluation framework, so you’ll also need to set up your API keys for the underlying LLM providers.
You can now set your API key for Confident AI by running the deepteam login command from the CLI or by saving it as CONFIDENT_API_KEY in your environment. This allows you to upload your red teaming results to Confident AI.
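For example, either of the following works from your terminal (the key value below is a placeholder):

```bash
deepteam login

# or save your key as an environment variable
export CONFIDENT_API_KEY="your-confident-api-key"
```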
Set up your target model
Define your AI system as a model callback function. This is the AI application you want to red team:
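Below is a minimal sketch of such a callback. The generate_response helper stands in for your own application logic, and the RTTurn import path is an assumption that may differ between deepteam versions:

```python
from deepteam.test_case import RTTurn  # assumed import path; check your deepteam version


async def model_callback(input: str) -> RTTurn:
    # Call into your actual AI application here; generate_response is a placeholder
    response = await generate_response(input)
    return RTTurn(role="assistant", content=response)
```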
The model callback must accept a single string parameter (the adversarial input) and return an RTTurn object with role set to assistant and content set to your AI system’s response.
You can also pass retrieval_context and tools_called in your RTTurn object when testing RAG or agentic systems. retrieval_context can be a list of strings and tools_called must be a list of ToolCall objects.
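For RAG or agentic systems, a sketch might look like the following, where my_retriever and my_agent are placeholders for your own components and the import paths are assumptions:

```python
from deepteam.test_case import RTTurn, ToolCall  # assumed import paths

async def model_callback(input: str) -> RTTurn:
    # Placeholder retrieval and generation steps from your own application
    retrieved_chunks = await my_retriever.search(input)              # list of strings
    answer, tool_names = await my_agent.run(input, retrieved_chunks)

    return RTTurn(
        role="assistant",
        content=answer,
        retrieval_context=retrieved_chunks,                          # list of strings
        tools_called=[ToolCall(name=name) for name in tool_names],   # list of ToolCall objects
    )
```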
Configure vulnerabilities and attacks
Choose which vulnerabilities to test for and which attack methods to use:
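As a sketch, the snippet below configures a few common built-in vulnerabilities and a single-turn attack method; the exact class names and module paths are assumptions based on typical deepteam usage and may vary by version:

```python
from deepteam.vulnerabilities import Bias, Toxicity, PIILeakage
from deepteam.attacks.single_turn import PromptInjection

# Vulnerabilities define what to test for
vulnerabilities = [Bias(), Toxicity(), PIILeakage()]

# Attack methods define how adversarial inputs are crafted
attacks = [PromptInjection()]
```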
Run the red team assessment
Execute the red teaming assessment with your configured parameters:
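A minimal sketch, assuming the model_callback, vulnerabilities, and attacks defined in the previous steps:

```python
from deepteam import red_team

risk_assessment = red_team(
    model_callback=model_callback,
    vulnerabilities=vulnerabilities,
    attacks=attacks,
)
```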
This runs red teaming against your model_callback using the configured vulnerabilities and attacks, and generates a risk assessment that is printed to your console and uploaded to Confident AI. You can view these risk assessments in the Risk Profile section of the Confident AI platform.
You need to run the deepteam login command from the CLI or save your API key as CONFIDENT_API_KEY in your environment for your risk assessments to be uploaded to the Confident AI platform.
Best Practices
- Start with frameworks: Use OWASP Top 10 or NIST AI RMF for comprehensive coverage
- Test early and often: Integrate red teaming into your development cycle
- Focus on your use case: Customize vulnerabilities based on your application’s risks
- Monitor continuously: Set up ongoing safety assessments for production systems
- Document and remediate: Keep detailed records of findings and remediation efforts
Next Steps
Use industry-standard frameworks like OWASP Top 10 and NIST AI RMF for comprehensive security assessments
Create custom vulnerabilities and attack methods tailored to your specific use case and industry requirements
Red teaming works seamlessly with your existing LLM evaluation and tracing workflows on Confident AI.