Collect Feedback
Incorporate real user feedback into your evaluation pipeline
Overview
Confident AI allows you to collect feedback from end users interacting with your LLM app. End-user feedback can be left on:
- Traces
- Spans, and
- Threads
When you send user feedback as an annotation, you'll have the opportunity to incorporate it into a dataset.
User feedback can be ingested via the Evals API, or through DeepEval for those using Python or TypeScript.
How It Works
To collect feedback, you need to:
- Set up a custom UI for users to enter their rating (thumbs up/down or a 5-star system), and optionally an expected outcome/output and explanation
- Collect the trace UUID, span UUID, or thread ID you'd like to leave feedback for
- Send the feedback to Confident AI via the Evals API, as shown in the sketch below
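For example, here is a minimal Python sketch of the last step. The endpoint path, header, and payload field names are illustrative placeholders rather than the exact Evals API schema, so check the API reference for the payload your version expects:

```python
import os
import requests

CONFIDENT_API_KEY = os.environ["CONFIDENT_API_KEY"]

def send_thread_feedback(
    thread_id: str,
    rating: int,
    expected_outcome: str | None = None,
    explanation: str | None = None,
):
    """Send end-user feedback for a thread to Confident AI (illustrative sketch)."""
    payload = {
        "threadId": thread_id,           # or a trace UUID / span UUID for trace/span feedback
        "rating": rating,                # e.g. 1-5 stars, or 0/1 for thumbs down/up
        "expectedOutcome": expected_outcome,
        "explanation": explanation,
    }
    response = requests.post(
        "https://api.confident-ai.com/v1/feedback",  # placeholder URL - use the documented Evals API endpoint
        json=payload,
        headers={"CONFIDENT_API_KEY": CONFIDENT_API_KEY},  # placeholder auth header
    )
    response.raise_for_status()
    return response.json()
```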
Since the thread ID is something you provide yourself (click here if unsure) during LLM tracing, it is generally easier to set up feedback collection on threads than on traces and spans.
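Because the thread ID is minted by your application, the feedback call can reuse it directly, with no trace or span UUID lookup. A sketch, reusing the hypothetical `send_thread_feedback` helper from above (`generate_reply` is a stand-in for your traced LLM call, not a DeepEval function):

```python
import uuid

def generate_reply(message: str, thread_id: str) -> str:
    # Your real implementation would call the LLM here, passing thread_id
    # to your tracing setup so all traces in this conversation share one thread.
    return f"(reply to: {message})"

def handle_feedback(session_id: str, thumbs_up: bool, explanation: str | None = None):
    # The session ID doubles as the thread ID, so it can be sent as-is.
    send_thread_feedback(
        thread_id=session_id,
        rating=1 if thumbs_up else 0,
        explanation=explanation,
    )

# One thread ID per chat session, created by your app:
session_id = str(uuid.uuid4())
reply = generate_reply("How do I reset my password?", thread_id=session_id)
handle_feedback(session_id, thumbs_up=True, explanation="Clear and correct answer")
```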