Conversation Completeness

Conversation Completeness is a multi-turn metric that determines whether a conversation is complete.

Overview

The conversational completeness metric is a multi-turn metric that uses LLM-as-a-judge to evaluate whether your chatbot satisfies the user’s requirements at each turn throughout the conversation.

Required Parameters

These are the parameters you must supply in your test case to run evaluations with the conversation completeness metric:

turns
list of Turn — Required

A list of Turns representing the exchanges between the user and the assistant.

Parameters of Turn:

role
user | assistant — Required

The role of the speaker for the turn; either user or assistant.

content
string — Required

The content of the turn, provided by the corresponding role.
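As a rough illustration, the turn structure described above can be modeled with plain Python dataclasses (a hypothetical sketch for clarity, not deepeval's actual classes):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Turn:
    # role must be either "user" or "assistant"
    role: str
    # the content spoken by that role for this turn
    content: str

    def __post_init__(self):
        if self.role not in ("user", "assistant"):
            raise ValueError(f"invalid role: {self.role}")

# A conversation is simply an ordered list of turns
turns: List[Turn] = [
    Turn(role="user", content="Can you help me book a flight to Tokyo?"),
    Turn(role="assistant", content="Sure! What dates are you flying?"),
]
```

In deepeval itself, you would supply this list of turns through a conversational test case rather than constructing your own classes.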

How Is It Calculated?

The conversation completeness metric first extracts distinct user intentions from all turns using an LLM, then uses the same LLM to check if the corresponding assistant turns have satisfied those intentions.


\text{Conversation Completeness} = \frac{\text{Number of Satisfied User Intentions in Conversation}}{\text{Total Number of User Intentions in Conversation}}

The final score is the proportion of satisfied user intentions found in the conversation.
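To make the scoring concrete, here is a minimal sketch of the final computation, assuming the intention extraction and satisfaction checks (normally performed by the judge LLM) have already produced one verdict per distinct user intention. The handling of the zero-intention case is an assumption, not specified by the docs:

```python
from typing import List

def conversation_completeness(satisfied: List[bool]) -> float:
    """Score = satisfied user intentions / total user intentions.

    `satisfied` holds one boolean verdict per distinct user
    intention extracted from the conversation.
    """
    if not satisfied:
        # assumption: no user intentions found -> nothing left unsatisfied
        return 1.0
    return sum(satisfied) / len(satisfied)

# e.g. 2 of 3 user intentions satisfied -> score of 2/3
score = conversation_completeness([True, True, False])
```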

Create Locally

You can create the ConversationCompletenessMetric in deepeval as follows:

from deepeval.metrics import ConversationCompletenessMetric

metric = ConversationCompletenessMetric()

Here’s a list of parameters you can configure when creating a ConversationCompletenessMetric:

threshold
number — Defaults to 0.5

A float to represent the minimum passing threshold.

model
string | Object — Defaults to gpt-4.1

A string specifying which of OpenAI’s GPT models to use OR any custom LLM model of type DeepEvalBaseLLM.

include_reason
boolean — Defaults to true

A boolean to enable the inclusion of a reason alongside the evaluation score.

async_mode
boolean — Defaults to true

A boolean to enable concurrent execution within the measure() method.

strict_mode
boolean — Defaults to false

A boolean to enforce a binary metric score: 1 for perfection, 0 otherwise.

verbose_mode
boolean — Defaults to false

A boolean to print the intermediate steps used to calculate the metric score.
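To illustrate how threshold and strict_mode plausibly interact, here is a hypothetical sketch of the pass/fail logic implied by the parameter descriptions above (not deepeval's internal code):

```python
def finalize_score(raw_score: float, threshold: float = 0.5,
                   strict_mode: bool = False) -> tuple[float, bool]:
    """Return the reported score and whether the metric passes.

    In strict mode the score collapses to binary:
    1 for a perfect raw score, 0 otherwise.
    """
    if strict_mode:
        score = 1.0 if raw_score == 1.0 else 0.0
        # strict mode effectively requires perfection to pass
        return score, score == 1.0
    return raw_score, raw_score >= threshold

finalize_score(0.75)                    # passes: 0.75 >= default threshold 0.5
finalize_score(0.75, strict_mode=True)  # fails: collapsed to 0.0
```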

This metric can be used for multi-turn end-to-end (E2E) testing.

Create Remotely

If you are not using deepeval in Python, or want to run evals remotely on Confident AI, you can use the conversation completeness metric by adding it to a multi-turn metric collection. This allows you to use the conversation completeness metric for:

  • Multi-turn E2E testing
  • Online and offline evals for traces and spans