"llama-gpt_llama-gpt-api_1" never starts on a fresh install of Umbrel v.5.4.0.
Here are the logs:
`llama-gpt
Attaching to llama-gpt_llama-gpt-api_1, llama-gpt_llama-gpt-ui_1, llama-gpt_app_proxy_1
app_proxy_1 | yarn run v1.22.19
app_proxy_1 | $ node ./bin/www
app_proxy_1 | [HPM] Proxy created: / -> http://llama-gpt-ui:3000
app_proxy_1 | Waiting for llama-gpt-ui:3000 to open...
llama-gpt-api_1 | File "", line 189, in _run_module_as_main
llama-gpt-api_1 | File "", line 112, in _get_module_details
llama-gpt-api_1 | File "/app/llama_cpp/__init__.py", line 1, in
llama-gpt-api_1 | from .llama_cpp import *
llama-gpt-api_1 | File "/app/llama_cpp/llama_cpp.py", line 80, in
llama-gpt-api_1 | _lib = _load_shared_library(_lib_base_name)
llama-gpt-api_1 |        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
llama-gpt-api_1 | File "/app/llama_cpp/llama_cpp.py", line 71, in _load_shared_library
llama-gpt-api_1 | raise FileNotFoundError(
llama-gpt-api_1 | FileNotFoundError: Shared library with base name 'llama' not found
llama-gpt-ui_1 | [INFO wait] Host [llama-gpt-api:8000] not yet available...`
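For context on what this traceback means: `_load_shared_library` in `llama_cpp.py` looks for a compiled `libllama` shared object in a set of candidate directories and raises `FileNotFoundError` if none is found, so the error usually indicates the native library was never built (or was built for the wrong architecture) inside the image. A rough sketch of that loading logic, not the actual llama-cpp-python source (the function name, signature, and search behavior here are illustrative):

```python
import ctypes
import pathlib
import sys


def load_shared_library(base_name, search_dirs):
    """Illustrative loader: find lib<base_name>.<ext> in the given directories."""
    if sys.platform.startswith("linux"):
        candidates = [f"lib{base_name}.so"]
    elif sys.platform == "darwin":
        candidates = [f"lib{base_name}.so", f"lib{base_name}.dylib"]
    else:  # assume Windows
        candidates = [f"{base_name}.dll"]

    for directory in search_dirs:
        for name in candidates:
            path = pathlib.Path(directory) / name
            if path.exists():
                # Found the compiled library; load it via ctypes.
                return ctypes.CDLL(str(path))

    # This is the branch the container is hitting: no compiled library found
    # in any search directory.
    raise FileNotFoundError(
        f"Shared library with base name '{base_name}' not found"
    )
```

So if `libllama.so` genuinely is not present under `/app/llama_cpp/` in the image, the Python package is intact but the native build step failed or was skipped, and rebuilding the image is the usual fix.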
System Information:
I have looked at the other issues; mine is similar, but `docker-compose up` and some of the other suggested fixes do not work.
Logs - umbrel-1702211544820-debug(1).log
Blew the VM away and reinstalled; same exact issue. Logs - umbrel
Where is '/app/llama_cpp/llama_cpp.py' supposed to be?
Both directories have `llama_cpp.py`: