Describe the bug
I don't seem to have access to "o1", and "o1-mini" seemed to fail, so I tried claude-3.5-sonnet-xxxx. I had to turn reasoning_effort=high off and set the token output to 8192, and I got some… modest output.
Has anyone been able to get reasoning_effort=high working? For which models? I have Ollama, so I can use that too.
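For reference, a minimal sketch of the workaround described above, going through LiteLLM (which the traceback below shows the example uses). The model id, prompt, and parameter values here are illustrative assumptions, not the example's actual configuration:

```python
# Hypothetical workaround sketch: call a Claude model through LiteLLM with
# reasoning_effort omitted and the output capped at 8192 tokens.
# The model id and prompt are placeholders, not taken from run.py.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20241022",  # assumed model id; requires ANTHROPIC_API_KEY
    messages=[{"role": "user", "content": "Give a short research summary on topic X."}],
    max_tokens=8192,  # the output cap mentioned above
    # reasoning_effort="high",  # left out: rejected by non-reasoning models
)
print(response.choices[0].message.content)
```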
Code to reproduce the error
Just run run.py in the deep research example.
Error logs (if any)
raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404 - {'error': {'message': 'The model `o1` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/julian/.local/share/virtualenvs/open_deep_research-hTz85m-6/lib/python3.11/site-packages/litellm/main.py", line 1724, in completion
raise e
File "/Users/julian/.local/share/virtualenvs/open_deep_research-hTz85m-6/lib/python3.11/site-packages/litellm/main.py", line 1697, in completion
response = openai_chat_completions.completion(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/julian/.local/share/virtualenvs/open_deep_research-hTz85m-6/lib/python3.11/site-packages/litellm/llms/openai/openai.py", line 736, in completion
raise OpenAIError(
litellm.llms.openai.common_utils.OpenAIError: Error code: 404 - {'error': {'message': 'The model `o1` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}
Expected behavior
That the example would use a model that is widely available. I assume o1 is tier-restricted?
@boxabirds it's normal not to be able to use reasoning_effort="high" on any model other than o1. I don't know about the model being tier-restricted; I've never seen this! I have no problem using "o1" as the model_id.
Yes, when I go to OpenAI's playground, o1 is not a model available to me. I don't use OpenAI much; I understand it is a tier-restriction issue. I bring this up because, out of the box, you can't use this example unless you have o1 access, and that access isn't universal, at least for now.
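As a hedged illustration of the Ollama route mentioned earlier, a locally served model can be reached through the same LiteLLM interface. The model name below is a placeholder and assumes the model has already been pulled locally:

```python
# Hypothetical sketch: route the same LiteLLM call to a local Ollama model so
# the example can run without access to o1. Assumes an Ollama server is
# running locally and the model has been pulled (e.g. `ollama pull llama3.1`).
import litellm

response = litellm.completion(
    model="ollama/llama3.1",  # placeholder model name
    messages=[{"role": "user", "content": "Test prompt"}],
    max_tokens=4096,
    # api_base="http://localhost:11434",  # default Ollama endpoint; override if needed
)
print(response.choices[0].message.content)
```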