Set Input/Output

Learn how to supply input and output of your LLM application in a trace

Overview

Both traces and spans have inputs and outputs, which you can set dynamically within your application using the update_current_trace/updateCurrentTrace and update_current_span/updateCurrentSpan functions respectively.

Setting the I/O on traces is also important for the threads view on Confident AI.

Set Trace I/O

By default, the input and output of a trace are set to the input arguments of the first span and the output of the last span you've wrapped/decorated. You can, however, override a trace's input and output at runtime.

main.py
from openai import OpenAI
from deepeval.tracing import observe, update_current_trace

client = OpenAI()

@observe()
def llm_app(query: str):
    res = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}]
    ).choices[0].message.content

    update_current_trace(input=query, output=res)
    return res

llm_app("Write me a poem.")

The input and output can be ANY TYPE, and are useful for visualization on the UI (even more so if you're using conversation threads).
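For instance, here's a minimal sketch of passing structured (non-string) I/O to update_current_trace; the dictionary shapes and the user_id field are purely illustrative assumptions, not a required schema:

from openai import OpenAI
from deepeval.tracing import observe, update_current_trace

client = OpenAI()

@observe()
def llm_app(query: str):
    res = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}]
    ).choices[0].message.content

    # Input and output can be any structure, not just strings
    update_current_trace(
        input={"query": query, "user_id": "user-123"},  # illustrative fields
        output={"answer": res, "model": "gpt-4o"},
    )
    return res

llm_app("Write me a poem.")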

Set Span I/O

By default, the input and output of a span are set to the input arguments and output of the function/method you're wrapping/decorating. You can, however, override the input and output on spans at runtime.

Be careful when doing so for default span types: the "retriever" span type, for example, expects a string as the input and a list of strings as the output, which you might violate when setting the I/O yourself. Sticking to the expected types decreases the chances that you run into an error.
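To illustrate, here is a minimal sketch of a retriever span that respects those types; it assumes the span type is declared via observe(type="retriever"), and search_vector_store is a hypothetical stand-in for your actual vector database lookup:

from typing import List

from deepeval.tracing import observe, update_current_span

def search_vector_store(query: str, top_k: int = 3) -> List[str]:
    # Hypothetical stand-in for your actual vector database lookup
    return [f"chunk {i} relevant to: {query}" for i in range(top_k)]

@observe(type="retriever")
def retrieve(query: str) -> List[str]:
    chunks = search_vector_store(query)

    # "retriever" spans expect a string as input and a list of strings as output
    update_current_span(input=query, output=chunks)
    return chunks

The example below shows the more general case, where the span's I/O simply mirrors what the function receives and returns.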

main.py
from openai import OpenAI
from deepeval.tracing import observe, update_current_span

client = OpenAI()

@observe()
def llm_app(query: str):
    res = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}]
    ).choices[0].message.content

    update_current_span(input=query, output=res)
    return res

llm_app("Write me a poem.")

This example is the same as the one for traces except that it calls update_current_span instead of update_current_trace, and that's not a mistake. You set inputs and outputs on spans the same way you do for traces, and if a trace's I/O is not set it defaults to the I/O of the root span.

The input and output can be ANY TYPE for custom span types, and are useful for visualization on the UI.

I/O for Threads

For multi-turn AI apps that create a thread from their traces, it is highly recommended that you provide plain strings instead, where the input represents the user's message and the output represents the AI-generated reply. You can also leave out the input or output for consecutive user/LLM turns.
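For example, here's a minimal sketch of per-turn string I/O; it assumes update_current_trace also accepts a thread_id argument for grouping traces into a single thread, so check the threads documentation for the exact parameter:

from openai import OpenAI
from deepeval.tracing import observe, update_current_trace

client = OpenAI()

@observe()
def chat(user_message: str, thread_id: str) -> str:
    res = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": user_message}]
    ).choices[0].message.content

    # Plain strings: input is what the user said, output is what the AI replied.
    # Assumes update_current_trace accepts a thread_id for grouping traces
    # into a single conversation thread.
    update_current_trace(thread_id=thread_id, input=user_message, output=res)
    return res

chat("Write me a poem.", thread_id="thread-123")
chat("Make it rhyme.", thread_id="thread-123")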

You will also need the input and output to run online evaluations on a thread, as these will be used as the turns for a conversational test case.

Next Steps

With your trace and span I/O configured, connect traces into conversations or start evaluating them.