
System Prompt for Human In The Loop Use Cases #1023

Open
1 task done
jkuehn opened this issue Mar 1, 2025 · 2 comments
Comments

jkuehn commented Mar 1, 2025

Initial Checks

  • I confirm that I'm using the latest version of Pydantic AI

Description

Hi everybody,

This is not necessarily a bug, but I am having trouble figuring something out: We allow users to chat with agents. A user enters a message, the agent processes it, and a response is sent back to the user. The user may then enter follow-up questions, which means the conversation history fills up and is also passed back to the agent. So far, so good.

Now, with the first user message, the Pydantic AI Agent uses the provided system prompt that I pass to Agent(system_prompt=...).

However, this system message is never exposed to the user. So, when the user receives the result from the agent, they only see their question and the result (one user message and one assistant message).

Follow-up messages now contain a history without a system prompt. The agent then ignores my initial system prompt because the history is not empty (see the Pydantic AI code below).

How can I ensure it always uses my system prompt? Without it, all follow-up requests fail in my case. I understand the reasoning for not using the system message if there is already a history, but is there a way to enforce it in such Human-in-the-Loop cases?

Thanks!!

From _agent_graph.py:

        if message_history:
            # Shallow copy messages
            messages.extend(message_history)
            # Reevaluate any dynamic system prompt parts
            await self._reevaluate_dynamic_prompts(messages, run_context)
            return messages, _messages.ModelRequest([_messages.UserPromptPart(user_prompt)])
        else:
            parts = await self._sys_parts(run_context)
            parts.append(_messages.UserPromptPart(user_prompt))
            return messages, _messages.ModelRequest(parts)
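To make the branch above concrete, here is a minimal, self-contained sketch of its behavior in plain Python (the dataclasses and `build_request_parts` are hypothetical stand-ins, not Pydantic AI's actual types):

```python
from dataclasses import dataclass

@dataclass
class SystemPromptPart:
    content: str

@dataclass
class UserPromptPart:
    content: str

AGENT_SYSTEM_PROMPT = "You are a helpful assistant."

def build_request_parts(message_history, user_prompt):
    """Simplified model of the branch quoted from _agent_graph.py."""
    if message_history:
        # History is non-empty: only the new user prompt is sent;
        # the agent's system prompt is NOT re-applied.
        return [UserPromptPart(user_prompt)]
    # First turn: system prompt + user prompt.
    return [SystemPromptPart(AGENT_SYSTEM_PROMPT), UserPromptPart(user_prompt)]

first = build_request_parts([], "Hello")
followup = build_request_parts(first, "And a follow-up question?")
```

With this model, `first` contains a `SystemPromptPart`, but `followup` does not, which is exactly the behavior described above: any turn that carries history loses the system prompt unless a dynamic system prompt is reevaluated.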

Example Code

Python, Pydantic AI & LLM client version

0.0.30
pydanticai-bot bot commented Mar 1, 2025

PydanticAI GitHub Bot found 1 issue similar to this one:

  1. "Add system prompts independently of history of messages #531" (90% similar)

jkuehn commented Mar 1, 2025

Thanks, GitHub Bot, that is the same issue. Our users are less experienced with Python, and they love Pydantic AI for its simplicity. I see that dynamic system prompt reevaluation is a solution, thanks!

Just to make sure: If there is a message history without a system prompt, why wouldn't the agent's system prompt be applied? Dynamic system prompt reevaluation is a bit more advanced 😅

Proposal: Change the logic from:
"If it is the first message without history, apply the Agent's system prompt, do not touch otherwise"
to:
"If there is no system prompt, apply the Agent's system prompt."

There is also a technique of seeding an agent with synthetic history so that it learns from prior interactions and responds in a certain way; that case would be covered by the new logic as well.
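The proposed check could be sketched like this (plain Python with hypothetical stand-in types, not Pydantic AI's actual implementation): apply the agent's system prompt whenever the supplied history does not already contain one.

```python
from dataclasses import dataclass

@dataclass
class SystemPromptPart:
    content: str

@dataclass
class UserPromptPart:
    content: str

def ensure_system_prompt(history, agent_system_prompt):
    """Proposed logic: prepend the agent's system prompt only when
    the history does not already contain a system prompt part."""
    has_system = any(isinstance(p, SystemPromptPart) for p in history)
    if has_system:
        return list(history)
    return [SystemPromptPart(agent_system_prompt), *history]

# Synthetic history without a system prompt still gets one:
history = [UserPromptPart("Hi"), UserPromptPart("Follow-up")]
fixed = ensure_system_prompt(history, "You are a helpful assistant.")
```

This covers both cases from the proposal: a follow-up request whose history lacks a system prompt gets the agent's prompt injected, while a history that already carries one (including synthetic histories built by the caller) is left untouched.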
