As mentioned in the Quick Start Guide, when the kernel is set to chat completion and the execution of functions from planning is automated, is there a way to configure the prompt used during the planning phase? I am aware there is a method that uses a planner, as shown in the following notebook (which may include a prompt configuration), but I've come across comments stating that using the planner is not recommended.
Replies: 1 comment
@ntmkkc The best suggestion is to perform "vanilla" function calling as shown here: https://github.com/microsoft/semantic-kernel/blob/main/python/samples/concepts/auto_function_calling/chat_completion_with_auto_function_calling.py.

As you pointed out, we recommend staying away from the existing planners (which will be deprecated in the future), as they take too much of a "one-way" approach to function calling. Using function calling in a chat-based scenario saves tokens, reduces latency, and can be better for your application: if extra user input is required, the model will ask, and you or your user can provide it.

You can check out the other samples in that directory to see auto function calling with streaming, as well as manual function calling in both streaming and non-streaming modes.