Portkey
Overview
Confident AI lets you trace and evaluate Portkey LLM calls, whether standalone or used as a component within a larger application.
Tracing Quickstart
Configure Portkey
To begin tracing your Portkey LLM calls, import DeepEval's OpenAI client and route it through Portkey by setting its base_url to PORTKEY_GATEWAY_URL.
Tracing works with both the Chat Completions and Responses APIs, in synchronous and asynchronous variants.
DeepEval’s Portkey client traces the chat.completions.create method.
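As a sketch, configuration might look like the following. The model name and API-key placeholders are assumptions, and this requires valid OpenAI and Portkey credentials to run:

```python
from deepeval.openai import OpenAI  # DeepEval's drop-in replacement for openai.OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Route the traced client through the Portkey gateway.
client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",
    default_headers=createHeaders(
        provider="openai",
        api_key="YOUR_PORTKEY_API_KEY",
    ),
)

# This call is traced automatically and sent to Confident AI.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Portkey?"}],
)
print(response.choices[0].message.content)
```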
Run Portkey
Invoke your agent by executing the script:
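Assuming the tracing snippet above is saved as main.py (a placeholder filename):

```shell
python main.py
```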
You can view the traces directly on Confident AI by clicking the link printed in the console output.
Advanced Usage
Logging prompts
If you manage prompts on Confident AI and wish to log them, pass your Prompt to the create method.
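A minimal sketch, assuming a prompt with the placeholder alias "my-prompt" exists on Confident AI, that it takes a question variable, and that the traced client accepts a prompt keyword:

```python
from deepeval.prompt import Prompt
from deepeval.openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

# Pull the versioned prompt from Confident AI ("my-prompt" is a placeholder alias).
prompt = Prompt(alias="my-prompt")
prompt.pull()

client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",
    default_headers=createHeaders(provider="openai", api_key="YOUR_PORTKEY_API_KEY"),
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": prompt.interpolate(question="What is Portkey?")}
    ],
    prompt=prompt,  # logs which prompt version produced this trace
)
```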
Logging threads
Threads are used to group related traces together, and are useful for chat apps, agents, or any multi-turn interactions. Learn more about threads here. You can set the thread_id in the trace context.
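One way to sketch this, assuming deepeval's observe decorator and update_current_trace helper are available in your deepeval version (the exact helper name may differ):

```python
from deepeval.tracing import observe, update_current_trace
from deepeval.openai import OpenAI

client = OpenAI()  # configure with the Portkey gateway as shown earlier

@observe()
def chat(user_message: str, thread_id: str) -> str:
    # Group this trace with others from the same conversation.
    update_current_trace(thread_id=thread_id)
    res = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_message}],
    )
    return res.choices[0].message.content

chat("Hello!", thread_id="conversation-123")  # placeholder thread ID
```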
This is an example of using STRING type prompt interpolation.
Evals Usage
Online evals
If your OpenAI application is in production and you still want to run evaluations on your traces, use online evals. They run evaluations on all incoming traces on Confident AI’s servers.
Create metric collection
Create a metric collection on Confident AI with the metrics you wish to use to evaluate your OpenAI agent. Copy the name of the metric collection.
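With the collection name copied, you might reference it on each traced call. This is a sketch, assuming the traced client accepts a metric_collection keyword and that a collection named "My Collection" exists on Confident AI:

```python
from deepeval.openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",
    default_headers=createHeaders(provider="openai", api_key="YOUR_PORTKEY_API_KEY"),
)

# Incoming traces are evaluated server-side against this collection's metrics.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What is Portkey?"}],
    metric_collection="My Collection",  # placeholder: paste your collection's name
)
```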
End-to-end evals
Confident AI allows you to run end-to-end evals on your OpenAI client to evaluate your Portkey calls directly. This is recommended if you are testing your Portkey calls in isolation.
Create metric
You can only run end-to-end evals on Portkey using metrics that evaluate input, output, or tools_called. You can pass parameters like expected_output, expected_tools, context, and retrieval_context to the trace context.
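For example, AnswerRelevancyMetric is a built-in deepeval metric that evaluates only input and output, so it qualifies (this assumes an evaluation model is configured for deepeval):

```python
from deepeval.metrics import AnswerRelevancyMetric

# Evaluates only the trace's input and output, making it
# compatible with end-to-end evals on Portkey calls.
metric = AnswerRelevancyMetric(threshold=0.7)
```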
Run evals
Replace your OpenAI client with DeepEval’s. Then, use the dataset’s evals_iterator to invoke your OpenAI client for each golden. Remember to replace base_url and api_key with the Portkey gateway URL and API key.
The same pattern works for the Chat Completions and Responses APIs, in both synchronous and asynchronous variants.
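A sketch of the synchronous Chat Completions variant, assuming a dataset with the placeholder alias "My Dataset" exists on Confident AI:

```python
from deepeval.dataset import EvaluationDataset
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

dataset = EvaluationDataset()
dataset.pull(alias="My Dataset")  # placeholder alias

# DeepEval's client, routed through the Portkey gateway.
client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",
    default_headers=createHeaders(provider="openai", api_key="YOUR_PORTKEY_API_KEY"),
)

# Invoke the client once per golden; deepeval collects the results into a test run.
for golden in dataset.evals_iterator():
    client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": golden.input}],
        metrics=[AnswerRelevancyMetric()],
    )
```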
This will automatically generate a test run with evaluated Portkey traces using inputs from your dataset.
Using OpenAI in component-level evals
You can also evaluate Portkey calls through component-level evals. This approach is recommended if you are testing your Portkey calls as a component in a larger application system.
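A minimal sketch of the component-level pattern, assuming deepeval's observe decorator wraps your component and the client is configured for the Portkey gateway as shown earlier:

```python
from deepeval.tracing import observe
from deepeval.openai import OpenAI
from portkey_ai import PORTKEY_GATEWAY_URL, createHeaders

client = OpenAI(
    base_url=PORTKEY_GATEWAY_URL,
    api_key="YOUR_OPENAI_API_KEY",
    default_headers=createHeaders(provider="openai", api_key="YOUR_PORTKEY_API_KEY"),
)

@observe()  # marks this function as a traced component
def answer(query: str) -> str:
    # The Portkey call is traced as a child span of this component.
    res = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": query}],
    )
    return res.choices[0].message.content
```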