Bug Report: Citations/Annotations Lost When Using LitellmModel with Perplexity Models
Describe the bug
When using LitellmModel with Perplexity models (such as perplexity/sonar-reasoning), citations/annotations are not preserved in the final output. The raw litellm response contains citations, but when the same request is processed through the agents framework, the annotations field is empty.
Debug information
- Agents SDK version: v0.2.4 (latest)
- Python version: 3.12.7
Repro steps
1. Raw litellm code (works correctly):
from litellm import completion
import os

os.environ["PERPLEXITYAI_API_KEY"] = "pplx.."

messages = [
    {"role": "user", "content": "What is the capital of France?"}
]

response = completion(
    model="perplexity/sonar-reasoning",
    messages=messages,
    reasoning_effort="high",
    allowed_openai_params=["reasoning_effort"],
)

print(response.citations)
Output (correct):
['https://home.adelphi.edu/~ca19535/page%204.html',
'https://en.wikipedia.org/wiki/Paris',
'https://www.iroamly.com/france-travel/what-is-the-capital-of-france.html',
'https://www.coe.int/en/web/interculturalcities/paris',
'https://www.instagram.com/reel/DEVmGsIMGRM/']
2. Agents SDK code (buggy):
from agents.extensions.models.litellm_model import LitellmModel
import asyncio
from agents import Agent, Runner

# Define the research agent
research_agent1 = Agent(
    name="Research Agent",
    model=LitellmModel(model="perplexity/sonar-reasoning", api_key="pplx..."),
    instructions="You perform deep empirical research based on the user's question.",
)

# Async function to run the research and print streaming progress
async def basic_research(query):
    print(f"Researching: {query}")
    result_stream = Runner.run_streamed(research_agent1, query)
    count = 1
    async for item in result_stream.stream_events():
        agent_name = getattr(item.agent, "name", "Unknown Agent") if hasattr(item, "agent") else "Unknown Agent"
        if item.type == "agent_updated_stream_event":
            print(f"\n--- switched to agent: {item.new_agent.name} ---")
            print("\n--- RESEARCHING ---")
        elif item.type == "raw_response_event":
            print(f"{count}. [{item.data}] → Raw Response")
            if hasattr(item.data, "item") and hasattr(item.data.item, "action"):
                action = item.data.item.action or {}
                if action.get("type") == "search":
                    print(f"[Web search] query={action.get('query')!r}")
            elif item.data.type == "response.reasoning_summary_part.done":
                print(f"{item.data.part.text[0]}")
                count += 1
            elif item.data.type == "response.output_item.done":
                if item.data.item.type == "function_call":
                    print(f"Function Name: {item.data.item.name}")
                    print(f"Function Arguments: {item.data.item.arguments}")
                elif item.data.item.type == "mcp_list_tools":
                    print(f"Successfully called MCP: {item.data.item.server_label} and it has {len(item.data.item.tools)} tools")
                elif item.data.item.type == "message":
                    print(f"Output Text: {item.data.item.content[0].annotations}")
                count += 1
        elif item.type == "message_output_item":
            print(f"{count}. [{agent_name}] → Message Output")
            count += 1

    with open("mcp_events1.txt", "w") as f:
        f.write(str(result_stream))
    return result_stream.final_output

asyncio.run(basic_research("What is the capital of France?"))
Output (buggy):
Output Text: []
Expected behavior
The annotations field should contain the citations from the Perplexity model, similar to what the raw litellm response provides. The output should show the citation URLs instead of an empty list.
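To make the expected result easy to check, a small helper along these lines could be added to the repro script (purely hypothetical code, not part of the SDK; the attribute lookups are assumptions based on the content[0].annotations access in the repro above, and they tolerate either a nested url_citation object or a flat url attribute):

def citation_urls(message_item):
    # Hypothetical helper for the repro script: collect cited URLs from a
    # message output item, whichever of the two annotation shapes it carries.
    urls = []
    for content in getattr(message_item, "content", []) or []:
        for a in getattr(content, "annotations", []) or []:
            url = getattr(getattr(a, "url_citation", None), "url", None) or getattr(a, "url", None)
            if url:
                urls.append(url)
    return urls

With the current bug it returns an empty list; once the converter preserves citations it should return the same URLs that response.citations shows in the raw litellm call.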
Root Cause Analysis
The issue is in the LitellmConverter.convert_annotations_to_openai method in src/agents/extensions/models/litellm_model.py. The method doesn't properly handle citations from Perplexity models because it expects annotations in a specific format that doesn't match what Perplexity returns.
The current implementation:
@classmethod
def convert_annotations_to_openai(
    cls, message: litellm.types.utils.Message
) -> list[Annotation] | None:
    annotations: list[litellm.types.llms.openai.ChatCompletionAnnotation] | None = message.get(
        "annotations", None
    )
    if not annotations:
        return None

    return [
        Annotation(
            type="url_citation",
            url_citation=AnnotationURLCitation(
                start_index=annotation["url_citation"]["start_index"],
                end_index=annotation["url_citation"]["end_index"],
                url=annotation["url_citation"]["url"],
                title=annotation["url_citation"]["title"],
            ),
        )
        for annotation in annotations
    ]
Problems:
- It only looks for annotations in the "annotations" field, but Perplexity may return citations in a different field structure
- It assumes all annotations are URL citations with specific fields (start_index, end_index, url, title)
- It doesn't handle the case where citations might be returned as a simple list of URLs (like response.citations); see the illustrative shapes below
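For concreteness, the converter expects each entry of message["annotations"] to look like the first shape below, whereas a Perplexity response effectively exposes the second (illustrative shapes only, not verbatim payloads; the index values and title are made up):

# Shape the current converter expects in message["annotations"]:
{
    "type": "url_citation",
    "url_citation": {
        "start_index": 10,
        "end_index": 35,
        "url": "https://en.wikipedia.org/wiki/Paris",
        "title": "Paris - Wikipedia",
    },
}

# What Perplexity effectively provides (a plain list of URL strings):
["https://en.wikipedia.org/wiki/Paris", "https://www.coe.int/en/web/interculturalcities/paris"]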
The LitellmConverter.convert_message_to_openai method calls this converter, but since the annotations aren't in the expected format, they're not converted and end up empty in the final output.
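One possible direction for the fix (a minimal sketch, not the final implementation; it assumes litellm surfaces Perplexity's citation list via a "citations" field on the message object as plain URL strings with no character offsets, which may not hold — the lookup might need to move up to the response level instead) would be to keep the existing annotations path and add a fallback:

@classmethod
def convert_annotations_to_openai(
    cls, message: litellm.types.utils.Message
) -> list[Annotation] | None:
    annotations = message.get("annotations", None)
    if annotations:
        # Existing path: structured url_citation annotations.
        return [
            Annotation(
                type="url_citation",
                url_citation=AnnotationURLCitation(
                    start_index=annotation["url_citation"]["start_index"],
                    end_index=annotation["url_citation"]["end_index"],
                    url=annotation["url_citation"]["url"],
                    title=annotation["url_citation"]["title"],
                ),
            )
            for annotation in annotations
        ]

    # Assumed fallback for Perplexity: a plain list of URL strings, with no
    # character offsets or titles available, so placeholder values are used.
    citations = message.get("citations", None)
    if not citations:
        return None
    return [
        Annotation(
            type="url_citation",
            url_citation=AnnotationURLCitation(
                start_index=0,
                end_index=0,
                url=url,
                title=url,
            ),
        )
        for url in citations
    ]

Whether litellm attaches the citations to the message or only to the top-level response would determine where this fallback actually belongs; that is something to pin down in the PR.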
I would like to work on this issue and can provide a PR with the fix.