Pull requests: abetlen/llama-cpp-python
#1821 chore(deps): bump conda-incubator/setup-miniconda from 3.0.4 to 3.1.0, opened Nov 4, 2024 by dependabot[bot] (labels: dependencies [updates a dependency file], github_actions [updates GitHub Actions code])
#1819 docs: Remove ref to llama_eval in llama_cpp.py docs, opened Nov 2, 2024 by richdougherty
#1817 Support LoRA hotswapping and multiple LoRAs at a time, opened Oct 30, 2024 by richdougherty (draft; 9 of 13 tasks complete)
#1807 fix: make content not required in ChatCompletionRequestAssistantMessage, opened Oct 21, 2024 by feloy
#1798 fix: Avoid thread starvation on many concurrent requests by making use of asyncio to lock llama_proxy context, opened Oct 15, 2024 by gjpower
#1796 fix: added missing exit_stack.close() to /v1/chat/completions, opened Oct 14, 2024 by Ian321
#1793 Fix: Refactor Batching notebook to use new sampler chain API, opened Oct 13, 2024 by lukestanley
#1790 chore(deps): bump pypa/cibuildwheel from 2.21.1 to 2.21.3, opened Oct 9, 2024 by dependabot[bot] (labels: dependencies, github_actions)
#1786 server types: Move 'model' parameter to clarify it is used, opened Oct 5, 2024 by domdomegg
#1721 Resync llama_grammar with llama.cpp implementation and use curly braces quantities instead of repetitions, opened Aug 31, 2024 by gbloisi-openaire
#1716 feat: adding support for external chat format contribution, opened Aug 29, 2024 by axel7083
#1677 Allow server to accept openai's new structured output "json_schema" format, opened Aug 13, 2024 by cerealbox