FileNotFoundError: Shared library with base name 'llama' not found #568

Open · mghaoui-interpulse opened this issue Aug 4, 2023 · 11 comments

@mghaoui-interpulse

Prerequisites

Please answer the following questions for yourself before submitting an issue.

  • I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • I carefully followed the README.md.
  • I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

I'm following the instructions in the README. llama_cpp builds on my machine with cuBLAS support (libraries and paths are correct).

> python3 -m venv .venv
> source .venv/bin/activate
(.venv) > CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

The installation seems to go well:

Collecting llama-cpp-python
  Downloading llama_cpp_python-0.1.77.tar.gz (1.6 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.6/1.6 MB 12.2 MB/s eta 0:00:00
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting typing-extensions>=4.5.0 (from llama-cpp-python)
  Downloading typing_extensions-4.7.1-py3-none-any.whl (33 kB)
Collecting numpy>=1.20.0 (from llama-cpp-python)
  Downloading numpy-1.25.2-cp311-cp311-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (18.2 MB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 18.2/18.2 MB 15.5 MB/s eta 0:00:00
Collecting diskcache>=5.6.1 (from llama-cpp-python)
  Downloading diskcache-5.6.1-py3-none-any.whl (45 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 45.6/45.6 kB 306.0 MB/s eta 0:00:00
Building wheels for collected packages: llama-cpp-python
  Building wheel for llama-cpp-python (pyproject.toml) ... done
  Created wheel for llama-cpp-python: filename=llama_cpp_python-0.1.77-cp311-cp311-linux_x86_64.whl size=1386177 sha256=67bb0d8316976217d7638216027ad89c76bc58241d7d64f49a1b6b76a40f0c74
  Stored in directory: /tmp/pip-ephem-wheel-cache-q0i3qayl/wheels/e2/67/cb/481cfaabbb5fd5edab627c5b475de63e1b6f7d4d7b678d4d25
Successfully built llama-cpp-python
Installing collected packages: typing-extensions, numpy, diskcache, llama-cpp-python
  Attempting uninstall: typing-extensions
    Found existing installation: typing_extensions 4.7.1
    Uninstalling typing_extensions-4.7.1:
      Successfully uninstalled typing_extensions-4.7.1
  Attempting uninstall: numpy
    Found existing installation: numpy 1.25.2
    Uninstalling numpy-1.25.2:
      Successfully uninstalled numpy-1.25.2
  Attempting uninstall: diskcache
    Found existing installation: diskcache 5.6.1
    Uninstalling diskcache-5.6.1:
      Successfully uninstalled diskcache-5.6.1
  Attempting uninstall: llama-cpp-python
    Found existing installation: llama-cpp-python 0.1.77
    Uninstalling llama-cpp-python-0.1.77:
      Successfully uninstalled llama-cpp-python-0.1.77
Successfully installed diskcache-5.6.1 llama-cpp-python-0.1.77 numpy-1.25.2 typing-extensions-4.7.1

I expected to be able to import the library, but that doesn't work.

Current Behavior

> python3
Python 3.11.4 (main, Jun 28 2023, 19:51:46) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from llama_cpp import Llama
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/moni/samples/llama-cpp-python/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/moni/samples/llama-cpp-python/llama_cpp/llama_cpp.py", line 80, in <module>
    _lib = _load_shared_library(_lib_base_name)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/samples/llama-cpp-python/llama_cpp/llama_cpp.py", line 71, in _load_shared_library
    raise FileNotFoundError(
FileNotFoundError: Shared library with base name 'llama' not found

Environment and Context

  • Physical (or virtual) hardware you are using, e.g. for Linux:

$ lscpu

Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         48 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               AuthenticAMD
  Model name:            AMD Ryzen 7 5800X 8-Core Processor
    CPU family:          25
    Model:               33
    Thread(s) per core:  2
    Core(s) per socket:  8
    Socket(s):           1
    Stepping:            0
    Frequency boost:     disabled
    CPU(s) scaling MHz:  52%
    CPU max MHz:         4850.1948
    CPU min MHz:         2200.0000
    BogoMIPS:            7588.01
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy ab
                         m sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzer
                         o irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm
Virtualization features:
  Virtualization:        AMD-V
Caches (sum of all):
  L1d:                   256 KiB (8 instances)
  L1i:                   256 KiB (8 instances)
  L2:                    4 MiB (8 instances)
  L3:                    32 MiB (1 instance)
NUMA:
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
  Srbds:                 Not affected
  Tsx async abort:       Not affected
  • Operating System, e.g. for Linux:

$ uname -a

Linux moni-opensuse-bp 6.4.6-1-default #1 SMP PREEMPT_DYNAMIC Tue Jul 25 04:42:30 UTC 2023 (55520bc) x86_64 x86_64 x86_64 GNU/Linux
  • SDK version, e.g. for Linux:
$ python3 --version
$ make --version
$ g++ --version
Python 3.11.4

GNU Make 4.4.1
Built for x86_64-suse-linux-gnu

g++ (SUSE Linux) 13.1.1 20230720 [revision 9aac37ab8a7b919a89c6d64bc7107a8436996e93]

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

  1. step 1
  2. step 2
  3. step 3
  4. etc.

Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python setup.py develop
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp

I tried, and I get this:

/usr/lib/python3.11/site-packages/setuptools/command/develop.py:40: EasyInstallDeprecationWarning: easy_install command is deprecated.
!!

        ********************************************************************************
        Please avoid running ``setup.py`` and ``easy_install``.
        Instead, use pypa/build, pypa/installer or
        other standards-based tools.

        See https://github.com/pypa/setuptools/issues/917 for details.
        ********************************************************************************

!!
  easy_install.initialize_options(self)
Traceback (most recent call last):
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/setuptools_wrap.py", line 645, in setup
    cmkr = cmaker.CMaker(cmake_executable)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/cmaker.py", line 148, in __init__
    self.cmake_version = get_cmake_version(self.cmake_executable)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/moni/.local/lib/python3.11/site-packages/skbuild/cmaker.py", line 105, in get_cmake_version
    raise SKBuildError(msg) from err

Problem with the CMake installation, aborting build. CMake executable is cmake
@mghaoui-interpulse (Author)

The repo is cloned recursively, and I am able to go into the vendor directory, compile llama.cpp, and run it.

cd ./vendor/llama.cpp
make clean && make LLAMA_CUBLAS=1 -j
./main -i --interactive-first -m /run/media/moni/T7/samples/llama.cpp/models/13B-chat/ggml-model-q4_0.bin -n 128 -ngl 999

and that works fine.

Going back to llama-cpp-python and trying to load the library didn't work.

I even tried:

CMAKE_ARGS="-DLLAMA_CUBLAS=on -DBUILD_SHARED_LIBS=ON" FORCE_CMAKE=1 pip install llama-cpp-python --force-reinstall --upgrade --no-cache-dir

That turns the shared-libs build option on explicitly, but no dice.

@mghaoui-interpulse (Author) commented Aug 4, 2023

Hm, weird. When I run it in a Jupyter notebook, it works perfectly?

(screenshot: the import working inside the notebook)

So why doesn't it work in the command line?

@mghaoui-interpulse (Author) commented Aug 4, 2023

Ok, weird. If I create a test.py file with

from llama_cpp import Llama

and launch it:

python test.py

It works perfectly.

So it's only the interactive Python that is having a problem. Ok...

@gjmulder added the build label Aug 4, 2023
@mapa17 commented Aug 14, 2023

Hi, I had similar issues. As it turns out, the problem was that the "wrong" llama_cpp.py was used to perform the import: instead of the llama_cpp.py located in the Python site-packages folder after install, the llama_cpp.py within my current folder/repo was used.

That's a problem because llama_cpp.py::_load_shared_library() uses _base_path = pathlib.Path(__file__).parent.resolve() to locate the shared library, so it looks for the library in the folder of whichever llama_cpp.py gets imported first.

I am not sure, but I think that instead of using __file__ one could use site.getsitepackages() to get the path of the current site-packages folder and look for the .so file there.

@corv89 commented Aug 27, 2023

You're exactly right: when I move out of the project directory, I can suddenly do from llama_cpp import Llama just fine.

@abetlen Please take a look at this issue

@tgmerritt

Thank you both - I had the same experience. Within the llama-cpp-python project directory it wouldn't work; as soon as I did cd .. and tried again, it worked fine. Big thanks to @mapa17 for figuring this out

@mghaoui-interpulse (Author)

Sounds like something needs to be modified in the code ...

@chaddwick25

Thanks to all that posted. This bug was driving me crazy.

@ghevge commented Jan 30, 2024

I am seeing a similar error when trying to start the llama-cpp-python container for llama-gpt: getumbrel/llama-gpt#144. Any idea if it is caused by the same problem? The stack trace looks similar...

@sdoshi commented Jul 29, 2024

Thank you for posting the workaround!

@HAOYON-666 commented Sep 26, 2024

FileNotFoundError: Shared library with base name 'llama' not found. Please tell me how to deal with this? Thanks!!!
