Created February 19, 2025 00:49
$ python -m xformers.info
xFormers 0.0.29.post3
memory_efficient_attention.ckF: unavailable
memory_efficient_attention.ckB: unavailable
memory_efficient_attention.ck_decoderF: unavailable
memory_efficient_attention.ck_splitKF: unavailable
memory_efficient_attention.cutlassF-pt: available
memory_efficient_attention.cutlassB-pt: available
[email protected]: available
[email protected]: available
[email protected]: unavailable
[email protected]: unavailable
memory_efficient_attention.triton_splitKF: available
indexing.scaled_index_addF: available
indexing.scaled_index_addB: available
indexing.index_select: available
sp24.sparse24_sparsify_both_ways: available
sp24.sparse24_apply: available
sp24.sparse24_apply_dense_output: available
sp24._sparse24_gemm: available
[email protected]: available
[email protected]: available
swiglu.dual_gemm_silu: available
swiglu.gemm_fused_operand_sum: available
swiglu.fused.p.cpp: available
is_triton_available: True
pytorch.version: 2.6.0+cu126
pytorch.cuda: available
gpu.compute_capability: 8.6
gpu.name: NVIDIA GeForce RTX 3070 Ti Laptop GPU
dcgm_profiler: unavailable
build.info: available
build.cuda_version: 1206
build.hip_version: None
build.python_version: 3.12.8
build.torch_version: 2.6.0+cu126
build.env.TORCH_CUDA_ARCH_LIST: 6.0+PTX 7.0 7.5 8.0+PTX 9.0a
build.env.PYTORCH_ROCM_ARCH: None
build.env.XFORMERS_BUILD_TYPE: Release
build.env.XFORMERS_ENABLE_DEBUG_ASSERTIONS: None
build.env.NVCC_FLAGS: -allow-unsupported-compiler
build.env.XFORMERS_PACKAGE_FROM: wheel-v0.0.29.post3
build.nvcc_version: 12.6.85
source.privacy: open source
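The `build.cuda_version: 1206` line above appears to pack the CUDA release as `major * 100 + minor` (12.6 → 1206), matching `build.nvcc_version: 12.6.85`. A minimal sketch under that assumption (the helper name is mine, not part of xFormers):

```python
def decode_cuda_version(packed: int) -> str:
    """Decode a packed CUDA version as printed by xformers.info.

    Assumes the encoding is major * 100 + minor, which is consistent
    with build.cuda_version: 1206 alongside nvcc 12.6.85 above.
    """
    major, minor = divmod(packed, 100)
    return f"{major}.{minor}"

print(decode_cuda_version(1206))  # 12.6
```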
---
$ python -m bitsandbytes
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++ BUG REPORT INFORMATION ++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++++++ OTHER +++++++++++++++++++++++++++
CUDA specs: CUDASpecs(highest_compute_capability=(8, 6), cuda_version_string='126', cuda_version_tuple=(12, 6))
PyTorch settings found: CUDA_VERSION=126, Highest Compute Capability: (8, 6).
To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/docs/source/nonpytorchcuda.mdx
Found duplicate CUDA runtime files (see below).
We select the PyTorch default CUDA runtime, which is 12.6,
but this might mismatch with the CUDA version that is needed for bitsandbytes.
To override this behavior set the `BNB_CUDA_VERSION=<version string, e.g. 122>` environmental variable.
For example, if you want to use the CUDA version 122,
BNB_CUDA_VERSION=122 python ...
OR set the environmental variable in your .bashrc:
export BNB_CUDA_VERSION=122
In the case of a manual override, make sure you set LD_LIBRARY_PATH, e.g.
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.2,
* Found CUDA runtime at: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.6\bin\cudart64_110.dll
* Found CUDA runtime at: C:\Windows\system32\nvcuda.dll
* Found CUDA runtime at: C:\Windows\system32\nvcudadebugger.dll
* Found CUDA runtime at: C:\Program Files (x86)\NVIDIA Corporation\PhysX\Common\cudart64_65.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcuda.dll
* Found CUDA runtime at: C:\WINDOWS\system32\nvcudadebugger.dll
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
++++++++++++++++++++++ DEBUG INFO END ++++++++++++++++++++++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Checking that the library is importable and CUDA is callable...
SUCCESS!
Installation was successful!
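The diagnostic above explains how to pin bitsandbytes to a specific CUDA runtime via `BNB_CUDA_VERSION`. A sketch following that hint, using the value matching this machine's PyTorch runtime (`126` for CUDA 12.6, per `cuda_version_string='126'` above); whether an override is actually needed here depends on which duplicate runtime gets loaded:

```shell
# Pin bitsandbytes to the CUDA 12.6 runtime PyTorch already selected.
# The version string format ("126") is taken from the diagnostic output above.
export BNB_CUDA_VERSION=126
# Then re-run the diagnostic to confirm the pick:
#   python -m bitsandbytes
```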
---
$ pip list
Package                  Version
------------------------ ------------
accelerate               1.4.0
aiohappyeyeballs         2.4.6
aiohttp                  3.11.12
aiosignal                1.3.2
attrs                    25.1.0
bitsandbytes             0.45.2
certifi                  2025.1.31
charset-normalizer       3.4.1
colorama                 0.4.6
cut-cross-entropy        25.1.1
datasets                 3.3.1
dill                     0.3.8
docstring_parser         0.16
filelock                 3.13.1
frozenlist               1.5.0
fsspec                   2024.6.1
hf_transfer              0.1.9
huggingface-hub          0.28.1
idna                     3.10
Jinja2                   3.1.4
markdown-it-py           3.0.0
MarkupSafe               2.1.5
mdurl                    0.1.2
mpmath                   1.3.0
multidict                6.1.0
multiprocess             0.70.16
networkx                 3.3
numpy                    2.1.2
nvidia-cublas-cu12       12.8.3.14
nvidia-cuda-nvcc-cu12    12.8.61
nvidia-cuda-nvrtc-cu12   12.8.61
nvidia-cuda-runtime-cu12 12.8.57
nvidia-cudnn-cu12        9.5.1.17
packaging                24.2
pandas                   2.2.3
peft                     0.14.0
pillow                   11.0.0
pip                      24.3.1
propcache                0.2.1
protobuf                 3.20.3
psutil                   7.0.0
pyarrow                  19.0.1
Pygments                 2.19.1
python-dateutil          2.9.0.post0
pytz                     2025.1
PyYAML                   6.0.2
regex                    2024.11.6
requests                 2.32.3
rich                     13.9.4
safetensors              0.5.2
sentencepiece            0.2.0
setuptools               70.2.0
shtab                    1.7.1
six                      1.17.0
sympy                    1.13.1
tokenizers               0.21.0
torch                    2.6.0+cu126
torchaudio               2.6.0+cu126
torchvision              0.21.0+cu126
tqdm                     4.67.1
transformers             4.49.0
triton                   3.1.0
trl                      0.15.1
typeguard                4.4.2
typing_extensions        4.12.2
tyro                     0.9.14
tzdata                   2025.1
unsloth                  2025.2.12
unsloth_zoo              2025.2.5
urllib3                  2.3.0
wheel                    0.45.1
xformers                 0.0.29.post3
xxhash                   3.5.0
yarl                     1.18.3