A chat interface for Ollama/ChatGPT built in Streamlit, created as an example for those wanting to build local LLM chat applications.
Made by @jmemcc
- Ollama installed with at least one model, like llama3.2 (if installing manually; see the example pull command below)
- For OpenAI models: an API key from the OpenAI developer dashboard
- Python 3.10 or higher (if installing manually)
- Docker (if using containerised deployment)
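If you have Ollama installed but no model downloaded yet, you can pull the model mentioned above with Ollama's pull command:
ollama pull llama3.2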
Below are the steps to set up with pyenv; an environment created with venv also works (see the example at the end of this section).
- Create a Python environment (Python 3.10):
pyenv virtualenv 3.10 python3.10.14
pyenv local python3.10.14
- Install requirements:
pip install -r requirements.txt
- Launch the app:
streamlit run app.py
Note: If you are already using port 8501, run the streamlit command above with --server.port=XXXX, replacing XXXX with a free port of your choice.
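For example, to serve the app on port 8502 instead (the port number here is arbitrary):
streamlit run app.py --server.port=8502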
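If you prefer venv over pyenv, a minimal equivalent sketch (assuming a Python 3.10 interpreter is available as python3.10 on your PATH) is:
python3.10 -m venv .venv
source .venv/bin/activate
Then install the requirements and launch the app as above.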
From the project directory, build and run with the compose file:
docker compose up -d
or build and run with the Dockerfile:
docker build -t streamlit_llm .
docker run -p 8501:8501 streamlit_llm
Note: If you are already using port 8501, change the port in the Docker files to another port.
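Alternatively, when using docker run directly, you can map a different host port without editing any files (assuming the container itself listens on 8501):
docker run -p 8502:8501 streamlit_llm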
Select an LLM provider and model in the sidebar, then you can interact with the model.
Roles can be selected for more tailored responses - you can add your own in the data/prompts.toml file.
Contributions are welcome. Please feel free to submit a pull request. For major changes, please open an issue first to discuss the change.
- Fork the repository
- Create your feature branch (git checkout -b feature/AmazingFeature)
- Commit your changes (git commit -m 'Add some AmazingFeature')
- Push to the branch (git push origin feature/AmazingFeature)
- Open a pull request
This project is licensed under the MIT License - see the LICENSE file for details.