refactor: Update build/install commands
- Updates README.md with docs
- Adds some lua dev conveniences

Signed-off-by: John McBride <[email protected]>
Showing 7 changed files with 206 additions and 50 deletions.
The updated README.md:
# 🦙 nvim-llama

_[Llama 2](https://ai.meta.com/llama/) and [llama.cpp](https://github.com/ggerganov/llama.cpp/) interfaces for Neovim_

🏗️ 👷 Warning! Under active development!! 👷 🚧

# Installation

Use your favorite package manager to install the plugin:

### Packer

```lua
use 'jpmcb/nvim-llama'
```

### lazy.nvim

```lua
{
    'jpmcb/nvim-llama'
}
```
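If you prefer to have lazy.nvim call the plugin's `setup` for you, a spec like the following should also work (a sketch using standard lazy.nvim options, not something prescribed by this plugin):

```lua
{
    'jpmcb/nvim-llama',
    config = function()
        require('nvim-llama').setup {}
    end,
}
```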
### vim-plug

```lua
Plug 'jpmcb/nvim-llama'
```

# Setup & configuration

In your `init.vim`, set up the plugin:

```lua
require('nvim-llama').setup {}
```

You can provide the following optional configuration table to the `setup` function:

```lua
local defaults = {
    -- See plugin debugging logs
    debug = false,

    -- Build llama.cpp for GPU acceleration on Apple M chip devices.
    -- If you are using an Apple M1/M2 laptop, it is highly recommended to
    -- use this since, depending on the model, it may drastically increase
    -- performance.
    build_metal = false,
}
```
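For example, to enable debug logs and the Metal build on an Apple Silicon machine, pass overrides to `setup`; they are merged over the defaults:

```lua
require('nvim-llama').setup {
    debug = true,       -- print plugin debugging logs
    build_metal = true, -- build llama.cpp with Metal GPU acceleration
}
```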
# Models

Llama.cpp supports an incredible number of models.

To start using one, you'll need to download an appropriately sized model that
is supported by llama.cpp. The 13B GGUF CodeLlama model is a really good place
to start: https://huggingface.co/TheBloke/CodeLlama-13B-GGUF

In order to use a model, it must be in the `llama.cpp/models/` directory, which
is expected to be found at `~/.local/share/nvim/llama.cpp/models`.
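If that directory doesn't exist yet, you can create it from inside Neovim; a minimal sketch (assuming the path above):

```lua
-- Create the expected models directory; "p" also creates missing parents
local models_dir = vim.fn.expand("~/.local/share/nvim/llama.cpp/models")
vim.fn.mkdir(models_dir, "p")
```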
The following script can be useful for downloading a model to that directory:

```sh
LLAMA_CPP="${HOME}/.local/share/nvim/llama.cpp"
MODEL="codellama-13b.Q4_K_M.gguf"

pushd "${LLAMA_CPP}"
if [ ! -f "models/${MODEL}" ]; then
    curl -L "https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/${MODEL}" -o "models/${MODEL}"
fi
popd
```
In the future, this project may provide the capability to download models automatically.
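As a rough sketch of what that could look like (hypothetical, not current plugin functionality), the shell script above could be mirrored in Lua:

```lua
-- Hypothetical sketch: fetch the model with curl if it isn't already present
local model = "codellama-13b.Q4_K_M.gguf"
local dest = vim.fn.expand("~/.local/share/nvim/llama.cpp/models/" .. model)
if vim.fn.filereadable(dest) == 0 then
    vim.fn.system({
        "curl", "-L",
        "https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/" .. model,
        "-o", dest,
    })
end
```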
# License

This project is dual licensed under [MIT](./LICENSE.txt) (first-party plugin code)
and the [Llama 2 license](./LICENSE.llama.txt).
By using this plugin, you agree to both sets of terms and assert that you already have
[your own non-transferable license for Llama 2 from Meta AI](https://ai.meta.com/resources/models-and-libraries/llama-downloads/).
A new convenience script for downloading the example model:
```sh
#!/bin/bash

# This is an example convenience script to demonstrate downloading the 13B GGUF
# model for llama.cpp from Huggingface.
#
# It drops the model into the expected directory for nvim-llama and llama.cpp
# to be able to utilize it.

LLAMA_CPP_CLONE="${HOME}/.local/share/nvim/llama.cpp"
MODEL="codellama-13b.Q4_K_M.gguf"

pushd "${LLAMA_CPP_CLONE}"
if [ ! -f "models/${MODEL}" ]; then
    curl -L "https://huggingface.co/TheBloke/CodeLlama-13B-GGUF/resolve/main/${MODEL}" -o "models/${MODEL}"
fi
popd
```
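Among the "lua dev conveniences" is a workspace diagnostics config (typically a `.luarc.json`; the exact filename is not shown here) declaring `vim` as a known global so lua-language-server doesn't flag it as undefined: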
```json
{
    "diagnostics.globals": [
        "vim"
    ]
}
```
One file was deleted (its contents are not shown here).
A new Lua module holding the plugin's settings:
```lua
local M = {}

M.namespace = vim.api.nvim_create_namespace("nvim-llama")

local defaults = {
    -- See plugin debugging logs
    debug = false,

    -- Build llama.cpp for GPU acceleration on Apple M chip devices.
    -- If you are using an Apple M1/M2 laptop, it is highly recommended to
    -- use this since, depending on the model, it may drastically increase
    -- performance.
    build_metal = false,
}

M.current = defaults

-- Merge user-supplied options over the defaults.
function M.set(opts)
    M.current = vim.tbl_deep_extend("force", defaults, opts or {})
end

return M
```
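A quick sketch of how this settings module would be used (the `require` path is an assumption based on the plugin's name, not shown in this diff):

```lua
-- Hypothetical usage; assumes the module lives at lua/nvim-llama/settings.lua
local settings = require('nvim-llama.settings')
settings.set({ build_metal = true })
print(settings.current.build_metal) -- true
print(settings.current.debug)       -- false (default preserved)
```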