🚀 The feature, motivation and pitch
Well, I am surprised that the "main" and "great" new feature of the new OpenAI o1 model is essentially a more sophisticated inference workflow that employs something like a chain-of-thought process. As I understand it, even a "dumb" model can perform much better when it "thinks more" during inference. The great news they are telling us is that by "thinking more" you can get smarter, which is probably very true for humans as well.
The o1 model is probably trained to come up with its own CoT workflow for any given prompt, but I think it could be interesting to hardcode some kind of workflow that any standard LLM could try to follow during inference. Basically, let the model analyze the prompt from various perspectives first and then decide what type of "inference workflow" it should employ.
The hardcoded workflow could look like this:
1. The prompt is submitted to the model.
2. The model asks itself a couple of hard-coded questions about the prompt, for example:
   - Is this light conversation (needing soft skills like empathy etc.)?
   - Does it look like a science problem (math, physics etc.)?
   - Can the prompt be broken down into subtasks? If yes, the workflow feeds each subtask into the model separately and then combines the results.
   - Is the problem easy or hard?
   - Do I have all the information I need (or do I need to ask the user for further input/clarification)?
3. The workflow runs, possibly in multiple iterations at various levels, perhaps applying some "quality checks" to the answer.
4. The output is presented to the user (the "hidden" thinking may optionally be viewed by the user).
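Just to make the idea concrete, here is a minimal sketch of what such a hardcoded workflow could look like on top of any server exposing the OpenAI-compatible API. The base URL, model name, and the exact self-analysis prompts below are placeholders/assumptions for illustration, not a proposed implementation:

```python
# Rough sketch of a hardcoded "think first" inference workflow.
# Assumes an OpenAI-compatible endpoint and the `openai` Python client;
# base_url and MODEL are placeholders, not real defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
MODEL = "my-model"  # hypothetical model name


def ask(system: str, user: str) -> str:
    """Single round-trip to the model."""
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "system", "content": system},
                  {"role": "user", "content": user}],
    )
    return resp.choices[0].message.content


def answer_with_workflow(prompt: str) -> str:
    # Step 2: the model asks itself a couple of hard-coded questions first.
    analysis = ask(
        "Analyze the user prompt and answer briefly: "
        "(a) is it light conversation needing soft skills, "
        "(b) does it look like a science problem, "
        "(c) can it be broken into subtasks, "
        "(d) is it easy or hard, "
        "(e) is any information missing?",
        prompt,
    )

    # If information is missing, ask the user for clarification instead of answering.
    if "missing" in analysis.lower():
        return ask("Ask the user one concise clarification question.", prompt)

    # Optional decomposition: solve each subtask separately, then combine.
    subtasks = ask(
        "If the prompt can be split into subtasks, output one per line; "
        "otherwise output NONE.",
        prompt,
    )
    if subtasks.strip().upper() != "NONE":
        partials = [ask("Solve this subtask.", t)
                    for t in subtasks.splitlines() if t.strip()]
        draft = ask(
            "Combine these partial results into one answer for the original prompt:\n"
            + "\n".join(partials),
            prompt,
        )
    else:
        draft = ask("Answer the prompt, thinking step by step.", prompt)

    # Step 3: a simple "quality check" iteration before showing the answer.
    verdict = ask(
        "Does this draft fully answer the prompt? Reply OK, or REVISE plus "
        "the problems found.\n\nDraft:\n" + draft,
        prompt,
    )
    if not verdict.strip().upper().startswith("OK"):
        draft = ask(
            "Revise the draft to fix these problems:\n" + verdict
            + "\n\nDraft:\n" + draft,
            prompt,
        )

    # Step 4: only the final answer is shown; `analysis` is the hidden thinking.
    return draft
```

The point is not the exact prompts but that the routing, decomposition, and verification steps are fixed in code rather than learned, so any standard model could be driven through them.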
Does anyone share my feelings about the CoT thing? It looks like even a hard-coded process could give some interesting results.
Alternatives
No response
Additional context
No response
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.