Initial Checks
I confirm that I'm using the latest version of Pydantic AI
Description
Hi everybody,
This is not necessarily a bug, but I am having trouble figuring something out: We allow users to chat with agents. A user enters a message, the agent processes it, and a response is sent back to the user. The user may then enter follow-up questions, which means the conversation history fills up and is also passed back to the agent. So far, so good.
Now, with the first user message, the Pydantic AI Agent uses the provided system prompt that I pass to Agent(system_prompt=...).
However, this system message is never exposed to the user. So, when the user receives the result from the agent, they only see their question and the result (one user message and one assistant message).
Follow-up messages now contain a history without a system prompt. The agent then ignores my initial system prompt because the history is not empty (see the Pydantic AI code below).
How can I ensure it always uses my system prompt? Without it, all follow-up requests fail in my case. I understand the reasoning for not using the system message if there is already a history, but is there a way to enforce it in such Human-in-the-Loop cases?
Thanks!!
From _agent_graph.py:
if message_history:
    # Shallow copy messages
    messages.extend(message_history)
    # Reevaluate any dynamic system prompt parts
    await self._reevaluate_dynamic_prompts(messages, run_context)
    return messages, _messages.ModelRequest([_messages.UserPromptPart(user_prompt)])
else:
    parts = await self._sys_parts(run_context)
    parts.append(_messages.UserPromptPart(user_prompt))
    return messages, _messages.ModelRequest(parts)
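In the meantime, the workaround I am considering is to re-insert the system prompt into the reconstructed history before calling the agent. This is only a rough sketch: ensure_system_prompt is my own helper, and the model name and prompt text are placeholders.

from pydantic_ai import Agent
from pydantic_ai.messages import ModelMessage, ModelRequest, SystemPromptPart

SYSTEM_PROMPT = 'You are a helpful support assistant.'  # placeholder text
agent = Agent('openai:gpt-4o', system_prompt=SYSTEM_PROMPT)  # placeholder model


def ensure_system_prompt(history: list[ModelMessage]) -> list[ModelMessage]:
    # Prepend a SystemPromptPart when the reconstructed history has none,
    # so follow-up requests still carry the agent's instructions.
    has_system = any(
        isinstance(part, SystemPromptPart)
        for message in history
        if isinstance(message, ModelRequest)
        for part in message.parts
    )
    if has_system:
        return history
    return [ModelRequest(parts=[SystemPromptPart(content=SYSTEM_PROMPT)]), *history]


# result = await agent.run(follow_up_question, message_history=ensure_system_prompt(history))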
Example Code
Python, Pydantic AI & LLM client version
0.0.30
Thanks, GitHub Bot, that is the same issue. Our users are less experienced with Python, and they love Pydantic AI for its simplicity. I see that dynamic system prompt reevaluation is a solution, thanks!
Just to make sure: If there is a message history without a system prompt, why wouldn't the agent's system prompt be applied? Dynamic system prompt reevaluation is a bit more advanced 😅
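For anyone else who lands here, this is roughly how I understand the dynamic system prompt approach. It is a minimal sketch with placeholder model name and prompt text, and as far as I can tell it only helps if the system prompt part is kept in the stored history (e.g. by persisting result.all_messages()), because the reevaluation rewrites existing parts rather than adding new ones.

from pydantic_ai import Agent, RunContext

# Placeholder model name and prompt text.
agent = Agent('openai:gpt-4o', system_prompt='You are a helpful support assistant.')


@agent.system_prompt(dynamic=True)
def extra_instructions(ctx: RunContext[None]) -> str:
    # Marked dynamic so _reevaluate_dynamic_prompts (see the snippet above)
    # refreshes this part when a message_history is passed in.
    return 'Always answer in the tone of our support team.'


# First turn:
# result = await agent.run('How do I reset my password?')
# Follow-up, passing the full history back so the system prompt parts survive:
# result = await agent.run('And my email address?', message_history=result.all_messages())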
Proposal: Change the logic from:
"If it is the first message without history, apply the Agent's system prompt, do not touch otherwise"
to:
"If there is no system prompt, apply the Agent's system prompt."
There is also a technique of passing synthetic history so that the agent learns from example interactions and responds in a certain way; that case would also be covered by the new logic.
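To illustrate, this is the kind of synthetic history I mean (the example turn is made up); under the proposed logic the Agent's system prompt would still be applied, because this history contains no system prompt part.

from pydantic_ai.messages import ModelRequest, ModelResponse, TextPart, UserPromptPart

# A hand-written example turn that shows the agent the desired answer style.
synthetic_history = [
    ModelRequest(parts=[UserPromptPart(content='How do I reset my password?')]),
    ModelResponse(parts=[TextPart(content='Go to Settings -> Security -> Reset password.')]),
]

# result = await agent.run('And how do I change my email address?',
#                          message_history=synthetic_history)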