Bug in create_react_agent with response_format?? #3845
Courvoisier13 asked this question in Q&A · Unanswered · 1 comment, 3 replies
-
It's not a bug, but I can see how this can be confusing. You can pass a dedicated prompt for the structured-output LLM call by passing a tuple as `response_format`.
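A minimal sketch of what the reply suggests, assuming a recent langgraph where `response_format` accepts a `(prompt, schema)` tuple whose first element is used as the system prompt for the structured-output call (the schema and prompt text here are illustrative, not from the original thread):

```python
from typing import Literal
from typing_extensions import TypedDict
from langchain_openai import ChatOpenAI
from langgraph.prebuilt import create_react_agent

class Router(TypedDict):
    # illustrative routing schema
    next: Literal["option 1", "option 2", "option 3", "option 4"]

agent = create_react_agent(
    ChatOpenAI(model="gpt-4o"),
    tools=[],
    # (prompt, schema): the prompt is used as the system message for the
    # separate structured-output LLM call, so the instructions no longer
    # disappear in that second call
    response_format=(
        "Pick exactly one of the options listed in `next`.",
        Router,
    ),
)
```

With the tuple form, the second LLM call (the one that produces `structured_response`) sees the same routing instructions as the first.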
-
Hi,

I have been getting an issue with `create_react_agent` with `response_format` enabled. The issue occurs when I use `create_react_agent` with no tools, a prompt, and a `response_format`. The prompt asks the agent to pick one of the enums in `next`.

I am getting a response with a message equal to, say, "option 1": by virtue of the prompt, the answer is restricted to one of the enums in the `Router`. However, in the response I get a `structured_response` equal to "option 4". I thought it was weird that the LLM generates an exact answer equal to one of the enums but the parsed one is something else.
I debugged and found that two calls were happening in `chat_agent_executor.py` in langgraph/prebuilt: the first to `call_model`, which gives the right answer, and the second to `generate_structured_response`, which produces the `structured_response` field. Following the trail of `generate_structured_response`, I reached `_create_chat_result` in langchain_openai/chat_models/base.py. However, there is no prompt anymore: the LLM is simply getting the messages, with no system prompt, and it's just picking an option, which makes no sense.

Adding the system prompt to the front of the messages solves the issue. But somewhere along the way, the system prompt given to the LLM when using `response_format` disappears from the final LLM call.
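A simplified sketch of the behavior and the workaround described above, assuming you make the structured-output call yourself with `with_structured_output`; this is my approximation of what the second call does, not the actual `generate_structured_response` code:

```python
from typing import Literal
from typing_extensions import TypedDict
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI

class Router(TypedDict):
    next: Literal["option 1", "option 2", "option 3", "option 4"]

prompt = "Answer with exactly one of: option 1, option 2, option 3, option 4."
messages = [HumanMessage("Where should this go?")]

structured_llm = ChatOpenAI(model="gpt-4o").with_structured_output(Router)

# Without the system prompt, the structured call re-decides on its own,
# which is how "option 1" in the message can become "option 4" here.
structured_llm.invoke(messages)

# Workaround: prepend the system prompt so the structured-output call
# sees the same instructions as the main agent call.
structured_llm.invoke([SystemMessage(prompt), *messages])
```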
This seems like a bug, doesn't it?