Please read this first
- Have you read the docs? (Agents SDK docs) - Yes
- Have you searched for related issues? Others may have had similar requests - Yes
Question
Based on the docs (https://openai.github.io/openai-agents-python/sessions/), when using sessions, memory between runs is managed as follows (a minimal sketch of this setup follows the list below):
- Before each run: The runner automatically retrieves the conversation history for the session and prepends it to the input items.
- After each run: All new items generated during the run (user input, assistant responses, tool calls, etc.) are automatically stored in the session.
- Context preservation: Each subsequent run with the same session includes the full conversation history, allowing the agent to maintain context.
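For reference, here is roughly the setup I'm describing, based on the sessions docs (I'm assuming `SQLiteSession` and the `session=` parameter on `Runner.run` as shown there):

```python
import asyncio

from agents import Agent, Runner, SQLiteSession


async def main():
    agent = Agent(name="Assistant", instructions="Reply very concisely.")
    session = SQLiteSession("conversation_123", "conversations.db")

    # First run: no stored history yet, so only this input reaches the model.
    await Runner.run(agent, "What city is the Golden Gate Bridge in?", session=session)

    # Second run: the runner prepends the stored history to this input,
    # then appends the new items (user input, assistant response, tool calls)
    # back into the session after the run completes.
    result = await Runner.run(agent, "How tall is it?", session=session)
    print(result.final_output)


asyncio.run(main())
```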
Upon closer inspection, I noticed that session.get_items() returns a list of dicts containing fields such as:
type, id, call_id, arguments, name, content, role.
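Concretely, this is how I'm inspecting the stored items (get_items() is async, so I'm wrapping it; the keys shown in the comments are just what I observed, not an exhaustive list):

```python
import asyncio

from agents import SQLiteSession


async def dump_items():
    session = SQLiteSession("conversation_123", "conversations.db")
    items = await session.get_items()
    for item in items:
        # Plain user/assistant messages look roughly like
        #   {"role": "user", "content": "..."}
        # while tool calls carry extra bookkeeping, e.g.
        #   {"type": "function_call", "id": "...", "call_id": "...",
        #    "name": "...", "arguments": "..."}
        print(item)


asyncio.run(dump_items())
```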
- My question is: are all of these items pushed in their entirety to the LLM as part of the input?
- If so, doesn't it seem a little overloaded (though useful for debugging) to pass things like id and call_id to the model just to manage conversation state?
- Also, is there a simple way, without writing a custom memory implementation, to only save certain types of items? (A rough sketch of the kind of custom wrapper I'd like to avoid writing is below.)
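For context on that last question, the kind of custom memory I'm hoping to avoid is roughly the following: a wrapper that mirrors the session methods and filters what gets persisted. FilteredSession is just a hypothetical name, and I'm assuming the session protocol details (get_items/add_items/pop_item/clear_session and a session_id attribute) from my reading of the docs:

```python
from agents import SQLiteSession


class FilteredSession:
    """Hypothetical wrapper that only persists plain messages and drops
    tool-call bookkeeping items. It delegates everything to an underlying
    SQLiteSession and filters inside add_items()."""

    def __init__(self, inner: SQLiteSession):
        self._inner = inner
        self.session_id = inner.session_id  # assumed to exist on SQLiteSession

    async def get_items(self, limit=None):
        return await self._inner.get_items(limit)

    async def add_items(self, items):
        # Keep only items that look like user/assistant messages; drop
        # function_call / function_call_output entries that carry
        # id, call_id, arguments, etc.
        kept = [item for item in items if item.get("role") in ("user", "assistant")]
        await self._inner.add_items(kept)

    async def pop_item(self):
        return await self._inner.pop_item()

    async def clear_session(self):
        await self._inner.clear_session()
```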