Quickstart
Confident AI red teaming is in private beta.
Overview
Confident AI’s red teaming capabilities help you test AI safety and security in development as part of a pre-deployment workflow, offering a wide range of features for:
- Vulnerability assessment: Systematically identify weaknesses like bias, toxicity, PII leakage, and prompt injection vulnerabilities.
- Adversarial testing: Simulate real-world attacks using jailbreaking, prompt injection, and other sophisticated attack methods.
- Risk profiling: Comprehensive evaluation across 40+ vulnerability types with detailed risk assessments and remediation guidance.
You can run red teaming either locally or remotely on Confident AI. Both options use deepteam and give you the same functionality:
- Run red teaming locally using deepteam, with full control over vulnerabilities and attacks
  - Support for custom vulnerabilities, attack methods, and advanced red teaming algorithms
  - Suitable for: Python users, development, and pre-deployment security workflows
- Run red teaming on the Confident AI platform with pre-built vulnerability frameworks
  - Integrated with monitoring, risk assessments, and team collaboration features
  - Suitable for: non-Python users, continuous monitoring, and production safety assessments
Create a Risk Assessment
This example walks through a comprehensive safety assessment that uses adversarial attacks to identify vulnerabilities in your AI system.
You’ll need to get your API key as shown in the setup and installation section before continuing.
Running red teaming locally executes attacks on your machine and uploads results to Confident AI. This gives full control over custom vulnerabilities and attack methods.
Install DeepTeam
First, install DeepTeam by reaching out to your representative at Confident AI to get access. The OSS version does not currently support Confident AI red teaming.
DeepTeam is powered by DeepEval’s evaluation framework, so you’ll also need to set up your API keys for the underlying LLM providers.
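As a minimal sketch, assuming an OpenAI-backed setup, you could configure keys via environment variables before running anything. The CONFIDENT_API_KEY variable name and the use of os.environ here are assumptions; use whichever providers and configuration method apply to your setup, and confirm the exact private-beta login steps with your representative.

```python
import os

# A minimal sketch, assuming an OpenAI-backed setup. Replace with the provider
# keys your target model and DeepEval metrics actually use.
os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"

# Assumed variable name for uploading results to Confident AI; your representative
# can confirm the exact configuration steps for the private beta.
os.environ["CONFIDENT_API_KEY"] = "<your-confident-api-key>"
```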
Set up your target model
Define your AI system as a model callback function. This is the system you want to red team:
The model callback must accept a single string parameter (the adversarial input), return a single string (your AI system’s response), and can be async for better performance.
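As a minimal sketch, assuming an OpenAI-backed application and DeepTeam's red_team entry point (import paths and constructor arguments may differ slightly between versions), the callback and a basic scan could look like this; swap the body of model_callback for your own system:

```python
from openai import AsyncOpenAI

from deepteam import red_team
from deepteam.vulnerabilities import Bias
from deepteam.attacks.single_turn import PromptInjection

client = AsyncOpenAI()

# The callback wraps the AI system under test: it receives a single adversarial
# input string and must return your system's response as a string.
async def model_callback(input: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # stand-in for your own application or model
        messages=[{"role": "user", "content": input}],
    )
    return response.choices[0].message.content

# Pick the vulnerabilities and attack methods to probe, then run the scan.
# The results form a risk assessment you can review on Confident AI.
risk_assessment = red_team(
    model_callback=model_callback,
    vulnerabilities=[Bias(types=["race"])],
    attacks=[PromptInjection()],
)
```

Because the callback is just a function, the same pattern works for RAG pipelines, agents, or any other stack you want to assess.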
Best Practices
- Start with frameworks: Use OWASP Top 10 or NIST AI RMF for comprehensive coverage
- Test early and often: Integrate red teaming into your development cycle
- Focus on your use case: Customize vulnerabilities based on your application’s risks
- Monitor continuously: Set up ongoing safety assessments for production systems
- Document and remediate: Keep detailed records of findings and remediation efforts
Next Steps
Use industry-standard frameworks like OWASP Top 10 and NIST AI RMF for comprehensive security assessments
Create custom vulnerabilities and attack methods tailored to your specific use case and industry requirements
Red teaming works seamlessly with your existing LLM evaluation and tracing workflows on Confident AI.