OllamaLit is a Streamlit-based user interface for interacting with Ollama. This project provides a simple yet powerful chat interface for querying models hosted on an Ollama server.
- User-Friendly Chat Interface: Engage in real-time conversations with models hosted on Ollama, directly through a Streamlit-based UI.
- Dynamic Model Selection: Select from multiple models hosted on your Ollama server.
- Live Streaming Responses: Model output is streamed back as it is generated, for a more interactive experience.
- Session Memory: Maintains a history of interactions in the current session (a minimal sketch of how these features fit together follows this list).
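For orientation, here is a minimal sketch of how a Streamlit chat loop with session memory is typically wired up. It is illustrative only and not the project's actual app.py; the stream_reply helper is a hypothetical placeholder.

```python
# Minimal sketch of a Streamlit chat loop with session memory.
# Illustrative only; not the actual app.py. stream_reply() is a
# hypothetical placeholder for a call to the Ollama server.
import streamlit as st

def stream_reply(messages):
    # Placeholder generator; the real app would stream tokens from
    # the Ollama server here (see the streaming sketch under Usage).
    yield "(model reply would stream here)"

if "messages" not in st.session_state:
    st.session_state.messages = []  # chat history for this session

# Replay the conversation so far
for msg in st.session_state.messages:
    with st.chat_message(msg["role"]):
        st.markdown(msg["content"])

prompt = st.chat_input("Ask the model something")
if prompt:
    st.session_state.messages.append({"role": "user", "content": prompt})
    with st.chat_message("user"):
        st.markdown(prompt)
    with st.chat_message("assistant"):
        # st.write_stream (Streamlit 1.31+) renders chunks as they arrive
        # and returns the concatenated text.
        reply = st.write_stream(stream_reply(st.session_state.messages))
    st.session_state.messages.append({"role": "assistant", "content": reply})
```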
Ensure you have the following installed:
- Python 3.7+
- pip for Python package management
- Ollama LLM API server set up locally or remotely

To install and run OllamaLit:

- Clone the repository:

git clone https://github.com/prady00/ollamalit.git
cd ollamalit

- Install dependencies:

pip install -r requirements.txt

- Run the app:

streamlit run app.py
By default, OllamaLit assumes the Ollama server is hosted locally at http://localhost:11434. You can modify the host address directly within the app's interface under the Ollama Host input field.
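As a rough illustration, the host field and model dropdown could be wired up along the following lines. This is a sketch under the assumption that the standard Ollama endpoint GET /api/tags is used to list available models; it is not the project's actual code.

```python
# Sketch: configurable host plus a model dropdown filled from the server.
# Assumes Ollama's GET /api/tags endpoint; not the project's actual code.
import requests
import streamlit as st

host = st.text_input("Ollama Host", value="http://localhost:11434")

def list_models(host):
    """Return the names of models available on the Ollama server."""
    resp = requests.get(f"{host}/api/tags", timeout=5)
    resp.raise_for_status()
    return [m["name"] for m in resp.json().get("models", [])]

try:
    model = st.selectbox("Model", list_models(host))
except requests.RequestException as exc:
    st.error(f"Could not reach Ollama at {host}: {exc}")
```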
- Select a Model: Choose a model from the dropdown list of available models on your Ollama server.
- Enter a Prompt: Type your message in the chat input field and press Enter.
- View Responses: Responses from the model stream in real time below the chat input (see the streaming sketch below).
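Under the hood, streaming from Ollama is usually done by POSTing to the /api/chat endpoint with "stream": true and reading newline-delimited JSON chunks. The sketch below shows one way to do that; it reflects the public Ollama API rather than the project's exact implementation, and the model name is only an example.

```python
# Sketch: stream a chat completion from Ollama's /api/chat endpoint.
# Each streamed line is a JSON object; text arrives under message.content.
# Illustrative only; not necessarily how app.py does it.
import json
import requests

def stream_chat(host, model, messages):
    payload = {"model": model, "messages": messages, "stream": True}
    with requests.post(f"{host}/api/chat", json=payload,
                       stream=True, timeout=120) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if not line:
                continue
            chunk = json.loads(line)
            if chunk.get("done"):
                break
            yield chunk.get("message", {}).get("content", "")

# Example usage: the model name must already be pulled on your server.
for token in stream_chat("http://localhost:11434", "llama3",
                         [{"role": "user", "content": "Hello!"}]):
    print(token, end="", flush=True)
```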
- app.py: Main application file for running the Streamlit app.
- requirements.txt: Lists the necessary Python packages for the project.
- No models available in the dropdown: Ensure the Ollama server is running and accessible.
- Connection errors: Verify the host URL and check for network/firewall restrictions; a quick connectivity check is sketched below.
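If problems persist, a quick reachability check from Python can help narrow down whether the server or the network is at fault. A minimal example, assuming the default host and the /api/tags endpoint:

```python
# Quick reachability check for an Ollama server (adjust the host as needed).
import requests

host = "http://localhost:11434"
try:
    resp = requests.get(f"{host}/api/tags", timeout=5)
    resp.raise_for_status()
    names = [m["name"] for m in resp.json().get("models", [])]
    print(f"Ollama is reachable at {host}; models: {names or 'none pulled yet'}")
except requests.RequestException as exc:
    print(f"Could not reach Ollama at {host}: {exc}")
```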
This project is licensed under the MIT License. See the LICENSE file for details.