
Dev Branch: Ollama - After successful litellm model selection, unable to move beyond splash screen #79

Closed
jupiter80 opened this issue Feb 12, 2025 · 1 comment

jupiter80 commented Feb 12, 2025

What I planned and tested works for all LLMs; only the UI part for model selection needs to be updated accordingly in client_utils.py.

Originally posted by @Chittaranjans in #49
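For context, the quoted idea is that since requests already go through litellm, any provider it supports can be dispatched uniformly, and only the model-selection UI needs to catch up. Below is a minimal sketch of such a dispatch, using a config dict shaped like the one the server log prints further down; the function name and config fields are illustrative, not Data Formulator's actual client_utils.py API:

```python
import litellm  # pip install litellm

def complete(model_config: dict, messages: list[dict]) -> str:
    """Route a chat completion through litellm for any supported provider.

    `model_config` mirrors the dict the server log prints below, e.g.
    {'endpoint': 'ollama', 'model': 'llama3.2', ...}.
    """
    # litellm expects provider-prefixed model names, e.g. "ollama/llama3.2".
    response = litellm.completion(
        model=f"{model_config['endpoint']}/{model_config['model']}",
        messages=messages,
        api_base=model_config.get("api_base"),  # e.g. http://localhost:11434 for Ollama
    )
    return response.choices[0].message.content

# Example (hypothetical config):
# complete({"endpoint": "ollama", "model": "llama3.2",
#           "api_base": "http://localhost:11434"},
#          [{"role": "user", "content": "hello"}])
```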

This project looks amazing. However, I'm having trouble after model selection in the web app. I've installed the dev branch (as of Feb 11th at 11:21 PM ET) along with the Ollama llama3.2 model locally on Windows. After a successful installation and development build of the backend and frontend, I'm able to run the app and select the Ollama model. The app reflects the selection in the header and on the splash screen but does not allow next steps, such as selecting files/data. The screen continues to say
"Data Formulator
Let's MODEL: LLAMA3.2 [config icon]
Specify an OpenAI or Azure OpenAI endpoint to run Data Formulator."
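The persistent prompt suggests the splash screen's readiness check only accepts OpenAI/Azure endpoints, so a selected (and working) Ollama model never unlocks the next step. Here is a hypothetical sketch of the kind of gate that would produce this symptom; this is a guess at the behavior, not the project's actual code:

```python
# Hypothetical readiness check reproducing the observed symptom: the model
# test succeeds, but the app stays on the splash screen because "ollama"
# is not in the endpoint whitelist. Illustrative only.
SUPPORTED_ENDPOINTS = {"openai", "azure"}  # assumed buggy whitelist

def app_ready(selected_model: dict | None) -> bool:
    # A selected, working Ollama model still fails this check.
    return (
        selected_model is not None
        and selected_model["endpoint"] in SUPPORTED_ENDPOINTS
    )

print(app_ready({"endpoint": "ollama", "model": "llama3.2"}))  # False -> stuck on splash
```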

Debugging:
  • I've tried stopping and restarting the servers.
  • I've tried removing and reselecting the model.

Browser: Edge

[Two screenshots attached.]

local_server.bat:

  * Serving Flask app 'py-src/data_formulator/app.py'
  * Debug mode: off
    WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
  * Running on all addresses (0.0.0.0)
  * Running on http://127.0.0.1:5000
  * Running on xxxxxxxxxxxxxx
    Press CTRL+C to quit
    127.0.0.1 - - [11/Feb/2025 23:58:57] "OPTIONS /check-available-models HTTP/1.1" 200 -
    127.0.0.1 - - [11/Feb/2025 23:58:57] "POST /check-available-models HTTP/1.1" 200 -
    127.0.0.1 - - [11/Feb/2025 23:58:57] "POST /check-available-models HTTP/1.1" 200 -
    127.0.0.1 - - [11/Feb/2025 23:59:18] "OPTIONS /test-model HTTP/1.1" 200 -
    content------------------------------
    {'model': {'endpoint': 'ollama', 'model': 'llama3.2', 'id': 'ollama-llama3.2-undefined-undefined-undefined'}}
    model: {'endpoint': 'ollama', 'model': 'llama3.2', 'id': 'ollama-llama3.2-undefined-undefined-undefined'}
    welcome message: I can hear you.
    127.0.0.1 - - [11/Feb/2025 23:59:29] "POST /test-model HTTP/1.1" 200 -
    127.0.0.1 - - [12/Feb/2025 00:00:25] "OPTIONS /test-model HTTP/1.1" 200 -
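Note that the log shows /test-model returning 200 along with a welcome message, so the backend-to-Ollama connection works and the hang appears to be on the UI side. The call can be replayed outside the browser to confirm; the payload shape below is inferred from the printed dict above, so treat it as an assumption:

```python
import requests

# Replay the /test-model request seen in the log; the payload shape is
# inferred from the server's printed dict and may not match the frontend's
# actual request exactly.
payload = {
    "model": {
        "endpoint": "ollama",
        "model": "llama3.2",
        "id": "ollama-llama3.2-undefined-undefined-undefined",
    }
}
resp = requests.post("http://127.0.0.1:5000/test-model", json=payload)
print(resp.status_code)
print(resp.text)  # per the log, expect a success response with the "I can hear you." welcome message
```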
@Chenglong-MS Chenglong-MS self-assigned this Feb 12, 2025
Chenglong-MS (Collaborator) commented Feb 12, 2025

Thanks a lot for figuring this out -- it was a bug related to model selection indeed.

The new dev branch should have addressed it: https://github.com/microsoft/data-formulator/tree/dev

The new release 0.1.5.1 has also addressed this issue. Have fun playing with Data Formulator!
