AI plugin that works with new DeepSeek models? #2980
-
I've tried getting Avante to work with the new DeepSeek models based on the instructions given by DeepSeek here, but it seems the Avante plugin has not been updated to accept DeepSeek as a provider option, and I was having trouble using the extraLuaConfig and avante.luaConfig properties to make it work. Has anyone found an AI plugin available in NixVim that currently works with the DeepSeek models, or a way to get DeepSeek set up with the version of Avante currently available in NixVim? Installing Avante without using Lazy seems like quite a hassle, so I'm not sure what to do besides wait for Avante to be updated.
-
Avante supports declaring your own custom provider. I have set up ollama to run DeepSeek recently, so I can try setting up an example if you need it: https://github.com/yetone/avante.nvim/wiki/Custom-providers

```lua
provider = "deepseek",
vendors = {
  deepseek = {
    __inherited_from = "openai",
    api_key_name = "DEEPSEEK_API_KEY",
    endpoint = "https://api.deepseek.com",
    model = "deepseek-coder",
  },
},
```
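For NixVim specifically, the same table should translate into the module's settings roughly like this. This is only a sketch: it assumes plugins.avante.settings is a freeform attribute set that NixVim forwards to avante.nvim's setup(), and it assumes the Home-Manager module (drop the programs.nixvim prefix in a standalone build); adjust the option path if your NixVim version exposes Avante differently.

```nix
{
  programs.nixvim.plugins.avante = {
    enable = true;
    # Freeform settings forwarded to avante.nvim's setup() (assumed option path).
    settings = {
      provider = "deepseek";
      vendors.deepseek = {
        __inherited_from = "openai";
        api_key_name = "DEEPSEEK_API_KEY";
        endpoint = "https://api.deepseek.com";
        model = "deepseek-coder";
      };
    };
  };
}
```

With that in place, exporting DEEPSEEK_API_KEY in your environment should be all Avante needs to reach the hosted API.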
-
Oh this doesn't look too bad to get set up locally. I'll give this a go.
Thanks a bunch, I'll try to get it running this way.
On Wed, Feb 5, 2025, 2:34 PM Austin Horstman wrote:
Ah, I didn't realize I could use the __inherited_from property in the nixvim configuration for Avante. I also believe I misunderstood that deepseek-coder was free via the API, and I'm guessing that's why you mentioned you are running it locally via ollama? Would you know of any documentation or a guide for getting it set up that way? I have never really integrated AI models into neovim until now, and I assume that is much more complicated than just using the API.
Mostly just a desire to have the LLM running locally instead of using
their service.
Home-Manager module:

```nix
{
  config,
  lib,
  namespace,
  osConfig,
  pkgs,
  ...
}:
let
  inherit (lib.${namespace}) mkBoolOpt;
  cfg = config.${namespace}.services.ollama;
  amdCfg = osConfig.khanelinix.hardware.gpu.amd;
  hasHardwareConfig = lib.hasAttr "hardware" osConfig.khanelinix;
in
{
  options.${namespace}.services.ollama = {
    enable = mkBoolOpt false "Whether to enable ollama.";
    enableDebug = lib.mkEnableOption "debug";
  };

  config = lib.mkIf cfg.enable {
    services.ollama = {
      enable = true;
      # Bind to all interfaces on Darwin.
      host = lib.mkIf pkgs.stdenv.hostPlatform.isDarwin "0.0.0.0";
      environmentVariables =
        lib.optionalAttrs cfg.enableDebug {
          OLLAMA_DEBUG = "1";
        }
        # ROCm-related variables, only when the host config enables AMD GPU support.
        // lib.optionalAttrs (hasHardwareConfig && amdCfg.enable && amdCfg.enableRocmSupport) {
          HCC_AMDGPU_TARGET = "gfx1100";
          HSA_OVERRIDE_GFX_VERSION = "11.0.0";
          AMD_LOG_LEVEL = lib.mkIf cfg.enableDebug "3";
        };
    };
  };
}
```
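Once that ollama service is running, pointing Avante at it is mostly a matter of swapping the endpoint for Ollama's OpenAI-compatible API on its default port. A sketch with the same caveats as above: the option path, the deepseek-coder model name (pull whichever DeepSeek model you actually want first, e.g. with ollama pull), and leaving api_key_name empty for a local endpoint are all assumptions to verify against your Avante version.

```nix
{
  programs.nixvim.plugins.avante = {
    enable = true;
    settings = {
      provider = "deepseek_local";
      vendors.deepseek_local = {
        __inherited_from = "openai";
        # Ollama's OpenAI-compatible endpoint on its default port.
        endpoint = "http://localhost:11434/v1";
        # Assumes this model has been pulled into ollama already.
        model = "deepseek-coder";
        # Assumption: no API key is needed for the local endpoint.
        api_key_name = "";
      };
    };
  };
}
```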