This project looks amazing. However, I'm having trouble after model selection in the web app. I've installed the Dev branch (as of Feb 11th at 11:21 PM ET) along with the Ollama llama3.2 model locally on Windows. After a successful installation and development build of the backend and frontend, I'm able to run the app and select the Ollama model. The app reflects the selection in the header and on the splash screen but does not allow the next steps, such as selecting files/data. The screen continues to say:
"Data Formulator
Let's MODEL: LLAMA3.2 [config icon]
Specify an OpenAI or Azure OpenAI endpoint to run Data Formulator."
Debugging so far (a direct check of the local Ollama install is sketched below):
I've stopped and restarted the servers.
I've removed and reselected the model.
Browser: Edge
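One more check worth trying: confirming that the local Ollama server is reachable and actually has llama3.2 pulled. This is only a minimal sketch, assuming Ollama is listening on its default port 11434; adjust the URL if that was changed.

# Minimal check that the local Ollama server is up and has llama3.2 pulled.
# Assumption: Ollama is on its default port 11434.
import requests

resp = requests.get("http://localhost:11434/api/tags", timeout=5)
resp.raise_for_status()
names = [m["name"] for m in resp.json().get("models", [])]
print("models known to Ollama:", names)
print("llama3.2 present:", any(n.startswith("llama3.2") for n in names))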
local_server.bat:
Serving Flask app 'py-src/data_formulator/app.py'
Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
Press CTRL+C to quit
127.0.0.1 - - [11/Feb/2025 23:58:57] "OPTIONS /check-available-models HTTP/1.1" 200 -
127.0.0.1 - - [11/Feb/2025 23:58:57] "POST /check-available-models HTTP/1.1" 200 -
127.0.0.1 - - [11/Feb/2025 23:58:57] "POST /check-available-models HTTP/1.1" 200 -
127.0.0.1 - - [11/Feb/2025 23:59:18] "OPTIONS /test-model HTTP/1.1" 200 -
content------------------------------
{'model': {'endpoint': 'ollama', 'model': 'llama3.2', 'id': 'ollama-llama3.2-undefined-undefined-undefined'}}
model: {'endpoint': 'ollama', 'model': 'llama3.2', 'id': 'ollama-llama3.2-undefined-undefined-undefined'}
welcome message: I can hear you.
127.0.0.1 - - [11/Feb/2025 23:59:29] "POST /test-model HTTP/1.1" 200 -
127.0.0.1 - - [12/Feb/2025 00:00:25] "OPTIONS /test-model HTTP/1.1" 200 -
Originally posted by @Chittaranjans in #49
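The server log above shows /test-model returning 200 with a welcome message, so the backend can apparently reach Ollama. To rule out a frontend or browser issue, the same request the UI makes could be replayed outside the browser. A minimal sketch, assuming the Flask backend is running on its default port 5000; the payload mirrors the one printed in the log.

# Replay the /test-model call from the server log, bypassing the browser.
# Assumption: the Flask backend listens on its default port 5000; adjust if needed.
import requests

payload = {"model": {"endpoint": "ollama", "model": "llama3.2",
                     "id": "ollama-llama3.2-undefined-undefined-undefined"}}
resp = requests.post("http://127.0.0.1:5000/test-model", json=payload, timeout=30)
print(resp.status_code)
print(resp.text)  # the log above shows a welcome message when this call succeeds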