Hello,
I'm currently building a chatbot with llama.cpp, and I want the model to respond multiple times to a single user query. My plan is to decide whether the model should continue its response based on the probabilities of the generated tokens (sketched below). For low-latency responses I'm using stream=True, but I've noticed that when streaming is enabled, I can't get the logprobs back.
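Concretely, the continuation check I have in mind looks something like this (just a sketch; `should_continue` and the threshold value are my own placeholders, not from any library):

```python
# Sketch of the continuation heuristic I'm planning (the function
# name and threshold are placeholders I'm experimenting with).
def should_continue(token_logprobs: list[float], threshold: float = -1.5) -> bool:
    """Generate another response while the model's average token
    log-probability for the last response stays above a
    confidence threshold."""
    if not token_logprobs:
        return False
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    return avg_logprob > threshold
```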
Is there any way to enable logprobs while streaming responses? If not, are there any workarounds that would give me both streaming and access to token probabilities, particularly for ?
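For reference, here's roughly what I'm doing, assuming the Python bindings (llama-cpp-python); the model path is a placeholder, and behavior may differ if you're hitting the llama.cpp server API instead:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path

# Non-streaming: top-5 logprobs per token come back as expected.
out = llm.create_completion(
    "Hello, how are you?",
    max_tokens=32,
    logprobs=5,
)
print(out["choices"][0]["logprobs"])

# Streaming: chunks arrive with low latency, but in my runs the
# logprobs field seems to be missing/None in each chunk.
for chunk in llm.create_completion(
    "Hello, how are you?",
    max_tokens=32,
    logprobs=5,
    stream=True,
):
    print(chunk["choices"][0].get("logprobs"))
```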
Thank you for your assistance!