
I can't install. Please help. #845

Closed
kgboyko opened this issue Feb 14, 2025 · 2 comments

kgboyko commented Feb 14, 2025

Environment: Kaggle notebook, GPU: Tesla T4 x2.

import os

# Identify the Kaggle container image: prefer CONTAINER_NAME if set,
# otherwise map the BUILD_DATE prefix to a known release tag.
container_name = os.environ.get('CONTAINER_NAME', '')
container_date = os.environ.get('BUILD_DATE', '').split('-')[0]
container_vers = {'20250109': 'v156', '20250205': 'v157'}
if not container_name:
    container_name = container_vers.get(container_date, '')

! cat /etc/os-release | grep -oP "PRETTY_NAME=\"\K([^\"]*)" && uname -r
print(f"CONTAINER_NAME={container_name}, BUILD_DATE={container_date}")
! free -h
! nv_version="$(nvidia-smi --query-gpu=driver_version --format=csv,noheader)" && echo "My NVIDIA driver version is '${nv_version}'."
! ls -l /usr/local | grep cuda

! python -c "import torch; print(torch.__version__, torch.cuda.is_available(), torch.version.cuda, torch.cuda.get_arch_list());"
! python -c "import torch; print(torch.cuda.get_device_capability(), torch.cuda.get_device_properties(0));"
! python -c "import torch; print(torch.version.cuda)"
! python -c "import torch; print(torch.backends.cudnn.enabled, torch.backends.cudnn.version());"
#! python -c "import torch; print(torch.cuda.memory_summary(device=None, abbreviated=False));"
! pip list | grep torch

! pip install --target=/kaggle/working  -U flashinfer-python==0.2.1.post1 -i https://flashinfer.ai/whl/cu121/torch2.5/
#! pip wheel --wheel-dir=/kaggle/working flashinfer-python==0.2.1.post1 -i https://flashinfer.ai/whl/cu121/torch2.5/

Ubuntu 22.04.3 LTS
6.6.56+
CONTAINER_NAME=v157, BUILD_DATE=20250205
total used free shared buff/cache available
Mem: 31Gi 871Mi 21Gi 1.0Mi 8.7Gi 30Gi
Swap: 0B 0B 0B
My NVIDIA driver version is '560.35.03
560.35.03'.
lrwxrwxrwx 1 root root 22 Nov 10 2023 cuda -> /etc/alternatives/cuda
lrwxrwxrwx 1 root root 25 Nov 10 2023 cuda-12 -> /etc/alternatives/cuda-12
drwxr-xr-x 1 root root 4096 Nov 10 2023 cuda-12.2
2.5.1+cu121 True 12.1 ['sm_50', 'sm_60', 'sm_70', 'sm_75', 'sm_80', 'sm_86', 'sm_90']
(7, 5) _CudaDeviceProperties(name='Tesla T4', major=7, minor=5, total_memory=15095MB, multi_processor_count=40, uuid=60e7409d-d0f8-c0a8-4de6-2eee400354ad, L2_cache_size=4MB)
12.1
True 90100
pytorch-ignite 0.5.1
pytorch-lightning 2.5.0.post0
torch 2.5.1+cu121
torchaudio 2.5.1+cu121
torchinfo 1.8.0
torchmetrics 1.6.1
torchsummary 1.5.1
torchtune 0.5.0
torchvision 0.20.1+cu121
Looking in indexes: https://flashinfer.ai/whl/cu121/torch2.5/
Collecting flashinfer-python==0.2.1.post1
Using cached https://github.com/flashinfer-ai/flashinfer/releases/download/v0.2.1.post1/flashinfer_python-0.2.1.post1%2Bcu121torch2.5-cp38-abi3-linux_x86_64.whl (527.1 MB)
INFO: pip is looking at multiple versions of flashinfer-python to determine which version is compatible with other requirements. This could take a while.
ERROR: Could not find a version that satisfies the requirement torch==2.5.* (from flashinfer-python) (from versions: none)
ERROR: No matching distribution found for torch==2.5.*
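The failure is consistent with how pip treats these two flags together: `-i` replaces PyPI as the only package index, and `--target` makes pip resolve every dependency fresh into the target directory, ignoring the torch 2.5.1+cu121 already installed in site-packages. Since the flashinfer wheel index hosts no torch wheels, the `torch==2.5.*` dependency cannot be satisfied from the only index pip is allowed to consult. A toy sketch of this failure mode (illustration only, not pip's actual resolver):

```python
# Toy illustration: with `-i`, pip consults ONLY the given index, and with
# `--target` it ignores already-installed packages, so every dependency
# must come from that single index.
flashinfer_index = {"flashinfer-python": ["0.2.1.post1"]}  # hosts no torch wheels


def resolve(package, index):
    """Return the versions of `package` available on `index`, or raise."""
    versions = index.get(package, [])
    if not versions:
        raise LookupError(f"No matching distribution found for {package}")
    return versions


print(resolve("flashinfer-python", flashinfer_index))  # found on the index
try:
    resolve("torch", flashinfer_index)  # the torch==2.5.* dependency is not
except LookupError as err:
    print("ERROR:", err)
```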

yzh119 (Collaborator) commented Feb 14, 2025

Try pip install with the --no-dependencies flag:

pip install --no-dependencies --target=/kaggle/working  -U flashinfer-python==0.2.1.post1 -i https://flashinfer.ai/whl/cu121/torch2.5/
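Skipping dependency resolution is reasonable here because the log above already shows a compatible torch (2.5.1+cu121) in the base image. One general caveat with a `--target` install, not specific to this issue: the target directory is not on Python's default import path, so it must be added before the package can be imported. A minimal sketch, assuming the install landed in /kaggle/working:

```python
import sys

# Packages installed with `pip install --target=/kaggle/working ...` are not
# importable by default; prepend the directory to the module search path.
target_dir = "/kaggle/working"
if target_dir not in sys.path:
    sys.path.insert(0, target_dir)

# import flashinfer  # should now resolve from /kaggle/working
print(sys.path[0])  # → /kaggle/working
```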


kgboyko commented Feb 14, 2025

Thank you very much, it worked!

@kgboyko kgboyko closed this as completed Feb 14, 2025