Right now we make only one LLM call, which decides whether or not to send the question to the query tool. This works well for core questions where we know what we are looking for, but it falls short on more open-ended questions, e.g. "return ETHDenver 2024 MEV-related videos".
Answering the latter would require giving the agent more ReAct freedom: break the high-level question down into sub-questions, perform the query-engine look-ups, and then aggregate the results back.
As of writing, the query engine's response is sent directly back to the user, because the agent performed very poorly, stripping all the substance from the otherwise very thorough query-engine answer. Perhaps a router that can break the question down into sub-questions and aggregate back could work for this type of question.
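A minimal sketch of that router idea, with hypothetical stand-ins (`decompose`, `query_engine`, `aggregate` are placeholders for the actual LLM calls and query tool, not existing code):

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class SubQuestionRouter:
    # All three callables are assumptions standing in for real components:
    decompose: Callable[[str], List[str]]   # LLM call: split an open-ended question into sub-questions
    query_engine: Callable[[str], str]      # the existing query tool
    aggregate: Callable[[List[str]], str]   # LLM call: merge sub-answers without losing their substance

    def answer(self, question: str) -> str:
        sub_questions = self.decompose(question)
        if len(sub_questions) <= 1:
            # Core question: route straight to the query engine,
            # matching today's single-call behaviour.
            return self.query_engine(question)
        # Open-ended question: fan out to the query engine per
        # sub-question, then aggregate the answers back.
        sub_answers = [self.query_engine(q) for q in sub_questions]
        return self.aggregate(sub_answers)
```

The key design point is that the query-engine answers feed the aggregation step verbatim, so the final LLM call can summarize across sub-answers without the chance to strip out the substance of any single one.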