README.md (changed): Add comprehensive README with usage guide, FAQ, and wheel reference
tags:
- llama-cpp-python
- wheels
- prebuilt
- cpu
- gpu
- manylinux
- gguf
- inference
pretty_name: "llama-cpp-python Prebuilt Wheels"
# llama-cpp-python Prebuilt Wheels

**The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux x86_64.**

Stop compiling. Start inferencing.

```bash
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```
## What's Inside

|  | Count |
|---|---|
| **Total Wheels** | 3,794+ |
| **Versions** | 0.3.0 → 0.3.16 (17 versions) |
| **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
| **Platform** | `manylinux_2_31_x86_64` |
| **Backends** | 8 |
| **CPU Profiles** | 13+ flag combinations |
## Backends

| Backend | Tag | Description |
|---------|-----|-------------|
| **OpenBLAS** | `openblas` | CPU BLAS acceleration, best general-purpose choice |
| **Intel MKL** | `mkl` | Intel Math Kernel Library, fastest on Intel CPUs |
| **Basic** | `basic` | No BLAS, maximum compatibility, no extra dependencies |
| **Vulkan** | `vulkan` | Universal GPU acceleration, works on NVIDIA, AMD, and Intel |
| **CLBlast** | `clblast` | OpenCL GPU acceleration |
| **SYCL** | `sycl` | Intel GPU acceleration (Data Center, Arc, iGPU) |
| **OpenCL** | `opencl` | Generic OpenCL GPU backend |
| **RPC** | `rpc` | Distributed inference over the network |
## CPU Optimization Profiles

Wheels are built with specific CPU instruction sets enabled. Pick the one that matches your hardware:

| CPU Tag | Instructions | Best For |
|---------|--------------|----------|
| `basic` | None | Any x86-64 CPU (maximum compatibility) |
| `avx` | AVX | Sandy Bridge+ (2011) |
| `avx_f16c` | AVX + F16C | Ivy Bridge+ (2012) |
| `avx2_fma_f16c` | AVX2 + FMA + F16C | **Haswell+ (2013), most common** |
| `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | Alder Lake+ (2021) |
| `avx512_fma_f16c` | AVX-512 + FMA + F16C | Skylake-X+ (2017) |
| `avx512_fma_f16c_vnni` | + AVX512-VNNI | Cascade Lake+ (2019) |
| `avx512_fma_f16c_vnni_vbmi` | + AVX512-VBMI | Ice Lake+ (2019) |
| `avx512_fma_f16c_vnni_vbmi_bf16_amx` | + BF16 + AMX | Sapphire Rapids+ (2023) |

### How to Pick the Right Wheel

**Don't know your CPU?** Start with `avx2_fma_f16c`: it works on any CPU from 2013 onwards (Intel Haswell, AMD Ryzen, and newer).

**Want maximum compatibility?** Use `basic`: it works on literally any x86-64 CPU.

**Have a server CPU?** Check whether it supports AVX-512:

```bash
grep -o 'avx[^ ]*\|fma\|f16c\|bmi2\|sse4_2' /proc/cpuinfo | sort -u
```
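The manual lookup can also be scripted. Below is a minimal Python sketch that maps `/proc/cpuinfo` flags to a CPU tag; the flag names are the ones the Linux kernel reports, but the helper `pick_cpu_tag` and its selection order are our own illustration, not part of this repo.

```python
import os

def pick_cpu_tag(flags: set) -> str:
    """Pick the most optimized CPU tag this flag set supports (sketch)."""
    if {"avx512f", "fma", "f16c"} <= flags:
        tag = "avx512_fma_f16c"
        if "avx512_vnni" in flags:
            tag += "_vnni"
            if "avx512_vbmi" in flags:
                tag += "_vbmi"
                if {"avx512_bf16", "amx_tile"} <= flags:
                    tag += "_bf16_amx"
        return tag
    if {"avx2", "fma", "f16c"} <= flags:
        # Alder Lake+ reports avx_vnni (AVX-VNNI without AVX-512)
        return "avx2_fma_f16c_avxvnni" if "avx_vnni" in flags else "avx2_fma_f16c"
    if "avx" in flags:
        return "avx_f16c" if "f16c" in flags else "avx"
    return "basic"

if os.path.exists("/proc/cpuinfo"):
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                print(pick_cpu_tag(set(line.split(":", 1)[1].split())))
                break
```

Run it on the target machine and use the printed tag when choosing a wheel.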
## Filename Format

All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:

```
llama_cpp_python-{version}+{backend}_{cpu_flags}-{python}-{python}-{platform}.whl
```

Examples:
```
llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
llama_cpp_python-0.3.16+basic-cp310-cp310-manylinux_2_31_x86_64.whl
```

The local version label (`+openblas_avx2_fma_f16c`) encodes:
- **Backend**: `openblas`, `mkl`, `basic`, `vulkan`, `clblast`, `sycl`, `opencl`, `rpc`
- **CPU flags** (in order): `avx`, `avx2`, `avx512`, `fma`, `f16c`, `vnni`, `vbmi`, `bf16`, `avxvnni`, `amx`
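The format can be split back into its parts programmatically. A sketch, assuming only filenames of the shape shown above (the regex and `parse_wheel` helper are ours, not part of this repo):

```python
import re

# Matches llama_cpp_python-{version}+{backend}[_{cpu_flags}]-{python}-{abi}-{platform}.whl
WHEEL_RE = re.compile(
    r"llama_cpp_python-(?P<version>[\d.]+)"
    r"\+(?P<backend>[a-z]+)(?:_(?P<cpu_flags>[a-z0-9_]+))?"
    r"-(?P<python>cp\d+)-cp\d+-(?P<platform>.+)\.whl"
)

def parse_wheel(name: str) -> dict:
    """Split a wheel filename from this repo into its components."""
    m = WHEEL_RE.fullmatch(name)
    if m is None:
        raise ValueError(f"unrecognized wheel name: {name}")
    return m.groupdict()

info = parse_wheel(
    "llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl"
)
# info["backend"] == "openblas", info["cpu_flags"] == "avx2_fma_f16c"
```

GPU-only wheels such as the `vulkan` build have no CPU flag suffix, so `cpu_flags` comes back as `None`.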
## Quick Start

### CPU (OpenBLAS + AVX2, recommended for most users)

```bash
sudo apt-get install libopenblas-dev

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
```
### GPU (Vulkan, works with any GPU vendor)

```bash
sudo apt-get install libvulkan1

pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl
```
### Basic (zero dependencies)

```bash
pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic-cp311-cp311-manylinux_2_31_x86_64.whl
```
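All of the install commands above share one URL pattern, so the download URL for any variant can be assembled mechanically. A small sketch (the `wheel_url` helper is ours; the base URL is this repo's `resolve/main` path):

```python
BASE = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main"

def wheel_url(version: str, backend: str, cpu_flags: str = "", py: str = "cp311") -> str:
    """Build the download URL for one wheel variant in this repo."""
    local = f"{backend}_{cpu_flags}" if cpu_flags else backend  # PEP 440 local label
    name = f"llama_cpp_python-{version}+{local}-{py}-{py}-manylinux_2_31_x86_64.whl"
    return f"{BASE}/{name}"

print(wheel_url("0.3.16", "openblas", "avx2_fma_f16c"))
```

Pass the result straight to `pip install`. Note that some tools require the `+` in the filename to be percent-encoded as `%2B` when it appears in a URL.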
### Example Usage

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",
    filename="*q4_k_m.gguf",
    n_ctx=4096,
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Python hello world"}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```
## Runtime Dependencies

| Backend | Required Packages |
|---------|-------------------|
| OpenBLAS | `libopenblas0` (runtime) or `libopenblas-dev` (build) |
| MKL | Intel oneAPI MKL |
| Vulkan | `libvulkan1` |
| CLBlast | `libclblast1` |
| OpenCL | `ocl-icd-libopencl1` |
| Basic | **None** |
| SYCL | Intel oneAPI DPC++ runtime |
| RPC | Network access to an RPC server |
## How These Wheels Are Built

These wheels are built by the **Ultimate Llama Wheel Factory**, a distributed build system running entirely on free HuggingFace Spaces:

| Component | Link |
|-----------|------|
| Dispatcher | [wheel-factory-dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher) |
| Workers 1-4 | [wheel-factory-worker-1](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1) ... 4 |
| Auditor | [wheel-factory-auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor) |

The factory uses explicit CMake flags matching llama.cpp's official CPU variant builds:

```
CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_AVX=OFF -DGGML_AVX512=OFF -DGGML_NATIVE=OFF"
```

Every flag is set explicitly (no CMake defaults) to ensure reproducible, deterministic builds.
## FAQ

**Q: Which wheel should I use?**
For most people: `openblas_avx2_fma_f16c` with your Python version. It's fast, works on 90%+ of modern CPUs, and only needs `libopenblas`.

**Q: Can I use these on Ubuntu / Debian / Fedora / Arch?**
Yes: `manylinux_2_31` wheels work on any Linux distro with glibc 2.31 or newer (Ubuntu 20.04+, Debian 11+, Fedora 34+, Arch).

**Q: What about Windows / macOS / CUDA wheels?**
This repo focuses on manylinux x86_64. For other platforms, see:
- [abetlen's official wheel index](https://abetlen.github.io/llama-cpp-python/whl/) (CPU, CUDA 12.1-12.5, Metal)
- [jllllll's CUDA wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels) (cuBLAS + AVX combos)

**Q: These wheels don't work on Alpine Linux.**
Alpine uses musl, not glibc, and these are `manylinux` (glibc) wheels. Build from source or use `musllinux` wheels.

**Q: I get "illegal instruction" errors.**
You're using a wheel with CPU flags your processor doesn't support. Try `basic` (no SIMD) or check your CPU flags with:
```bash
grep -o 'avx[^ ]*\|fma\|f16c' /proc/cpuinfo | sort -u
```

**Q: Can I contribute more wheels?**
Yes! The factory source code is open; see the Dispatcher and Worker Spaces linked above.
## License

MIT, same as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python).

## Credits

- [llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov and the ggml community
- [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) by Andrei Betlen
- Built by [AIencoder](https://huggingface.co/AIencoder)