Releases · jasonacox/TinyLLM
v0.12.4 - Chatbot Fixes
- Add encoding of user prompts to correctly display HTML code in the Chatbot (see the sketch below).
- Fix `chat.py` CLI chatbot to handle user/assistant prompts for vLLM.
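As a rough illustration of the encoding fix (not the Chatbot's actual code), escaping the user's prompt before rendering makes pasted HTML display as text instead of being interpreted by the browser:

```python
# Hedged sketch: escape HTML metacharacters in the user's prompt so
# pasted markup like "<b>hi</b>" renders literally in the chat window.
import html

def encode_user_prompt(prompt: str) -> str:
    # html.escape converts &, <, > (and quotes) to HTML entities
    return html.escape(prompt)

print(encode_user_prompt("<script>alert(1)</script>"))
# -> &lt;script&gt;alert(1)&lt;/script&gt;
```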
Full Changelog: v0.12.3...v0.12.4
v0.12.3 - Extract from URL
- Bug fix for `handle_url_prompt()` to extract text from a URL.
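A minimal sketch of this kind of extraction, assuming a requests + BeautifulSoup approach (the real `handle_url_prompt()` may differ):

```python
# Hypothetical helper illustrating text extraction from a URL;
# not TinyLLM's actual implementation.
import requests
from bs4 import BeautifulSoup

def extract_text_from_url(url: str) -> str:
    # Fetch the page, then strip tags and return visible text only
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    return soup.get_text(separator="\n", strip=True)
```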
Full Changelog: v0.12.2...v0.12.3
v0.12.2 - Misc Improvements
- Speed up command functions using `aiohttp` (see the concurrency sketch after this list).
- Fix `prompt_expand` for the rag command.
- Added topic option to the `/news` command.
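To illustrate why `aiohttp` speeds things up (a sketch under the assumption that command functions fetch external URLs), requests can run concurrently on one event loop instead of blocking one another:

```python
# Illustrative only: concurrent fetches with aiohttp and asyncio.
import asyncio
import aiohttp

async def fetch(session: aiohttp.ClientSession, url: str) -> str:
    async with session.get(url) as resp:
        return await resp.text()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        # gather() awaits all requests concurrently
        return await asyncio.gather(*(fetch(session, u) for u in urls))

# results = asyncio.run(fetch_all(["https://example.com"]))
```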
v0.12.1 - Performance Improvements
- Speed up user prompt echo: immediately send the prompt to the chat window instead of waiting for the LLM stream to start.
- Optimize message handling and dispatching using async.
- Use AsyncOpenAI for non-streamed queries.
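A minimal sketch of a non-streamed query with `AsyncOpenAI` (the model name and endpoint are placeholders, not TinyLLM's settings):

```python
# Hedged example: await a complete response instead of a stream.
import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(base_url="http://localhost:8000/v1", api_key="none")

async def ask(prompt: str) -> str:
    resp = await client.chat.completions.create(
        model="local-model",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# answer = asyncio.run(ask("Hello"))
```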
Full Changelog: v0.12.0...v0.12.2
v0.12.0 - FastAPI and Uvicorn
- Ported the Chatbot to the async FastAPI and Uvicorn ASGI high-speed web server implementation (#3). A minimal sketch of the pattern follows this list.
- Added `/stats` page to display configuration settings and current stats (optional `?format=json`).
- UI updated to help enforce focus on the text entry box.
- Moved `prompts.json` and the Sentence Transformer model location to `./.tinyllm` for Docker support.
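A minimal sketch of the FastAPI/Uvicorn pattern with an optional JSON stats view (the stats fields here are invented for illustration, not the Chatbot's actual state):

```python
# Illustrative FastAPI app, not the Chatbot's actual code.
from typing import Optional
from fastapi import FastAPI
from fastapi.responses import JSONResponse, PlainTextResponse

app = FastAPI()
STATS = {"version": "0.12.0", "queries": 0}  # placeholder fields

@app.get("/stats")
async def stats(format: Optional[str] = None):
    if format == "json":  # /stats?format=json
        return JSONResponse(STATS)
    return PlainTextResponse("\n".join(f"{k}: {v}" for k, v in STATS.items()))

# Run with: uvicorn app:app --host 0.0.0.0 --port 5000
```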
v0.11.4 - Stats Page
- Add `/stats` URL to Chatbot for settings and current status information.
- Update Chatbot HTML to set focus on the user textbox.
- Move `prompts.json` and Sentence Transformer models into the `.tinyllm` directory.
Full Changelog: v0.11.3...v0.11.4
v0.11.3 - Optimize for Docker
What's Changed
- Fix docker PWD and network by @the023 in #2
- Improve Chatbot for Docker
- Added admin alert broadcast feature (`POST /alert`).
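A hedged sketch of what an admin broadcast endpoint can look like, written here with FastAPI for brevity even though v0.11.3 predates the FastAPI port; the token check, payload shape, and client list are assumptions:

```python
# Hypothetical /alert endpoint; TinyLLM's actual handler may differ.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
connected_clients = []    # e.g. open WebSocket sessions (placeholder)
ADMIN_TOKEN = "changeme"  # assumed shared secret

class Alert(BaseModel):
    token: str
    message: str

@app.post("/alert")
async def alert(payload: Alert):
    if payload.token != ADMIN_TOKEN:
        raise HTTPException(status_code=403, detail="bad token")
    for ws in connected_clients:  # broadcast to every open session
        await ws.send_text(payload.message)
    return {"delivered": len(connected_clients)}
```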
New Contributors
- @the023 made their first contribution in #2
Full Changelog: https://github.com/jasonacox/TinyLLM/commits/v0.11.3