AIencoder committed on
Commit 288d898 · verified · 1 Parent(s): 4f9884f

Add comprehensive README with usage guide, FAQ, and wheel reference

Files changed (1):
  README.md +124 -133

README.md CHANGED
@@ -7,14 +7,9 @@ tags:
  - llama-cpp-python
  - wheels
  - prebuilt
- - manylinux
  - cpu
  - gpu
- - vulkan
- - openblas
- - mkl
- - avx2
- - avx512
  - gguf
  - inference
  pretty_name: "llama-cpp-python Prebuilt Wheels"
@@ -24,185 +19,181 @@ size_categories:
 
  # 🏭 llama-cpp-python Prebuilt Wheels
 
- **The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux.**
 
- Never compile `llama-cpp-python` from source again. Just `pip install` the exact wheel you need.
 
- ## 📊 Stats
 
  | | Count |
- |--|-------|
- | **Total Wheels** | 3,795+ |
  | **Versions** | 0.3.0 — 0.3.16 (17 versions) |
  | **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
- | **CPU Backends** | OpenBLAS, Intel MKL, Basic (no BLAS) |
- | **GPU Backends** | Vulkan, CLBlast, OpenCL, SYCL, RPC |
- | **CPU Optimizations** | AVX, AVX2, AVX512, FMA, F16C, VNNI, VBMI, BF16, AMX, AVX-VNNI |
  | **Platform** | `manylinux_2_31_x86_64` |
 
- ## ⚡ Quick Install
 
- ```bash
- # Direct install — just replace the filename with the wheel you need
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/WHEEL_FILENAME.whl
- ```
 
- ### Examples
 
- ```bash
- # OpenBLAS + AVX2/FMA/F16C (most modern desktops, 2013+)
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
 
- # AVX512 + VNNI + VBMI (Ice Lake servers, 2019+)
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx512_fma_f16c_vnni_vbmi-cp311-cp311-manylinux_2_31_x86_64.whl
 
- # Vulkan GPU acceleration
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl
 
- # Basic — maximum compatibility (any x86-64 CPU)
- pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic_basic-cp311-cp311-manylinux_2_31_x86_64.whl
- ```
 
- ### In a requirements.txt
 
- ```
- # URL-encode the + as %2B
- https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
 
- ### In a Dockerfile
 
- ```dockerfile
- RUN pip install --no-cache-dir \
-     https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
- ```
 
- ### In a HuggingFace Space (packages.txt + requirements.txt)
 
- **packages.txt:**
  ```
- libopenblas-dev
  ```
 
- **requirements.txt:**
  ```
- https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16%2Bopenblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
 
- ## 🔧 Which Wheel Do I Need?
 
- ### Step 1: Choose Your Backend
 
- | Backend | Best For | Tag |
- |---------|----------|-----|
- | **OpenBLAS** | General CPU inference, good default | `openblas` |
- | **Intel MKL** | Intel CPUs, potentially faster BLAS | `mkl` |
- | **Basic** | Maximum compatibility, no external deps | `basic` |
- | **Vulkan** | GPU acceleration (NVIDIA, AMD, Intel) | `vulkan` |
- | **CLBlast** | OpenCL GPU acceleration | `clblast` |
- | **OpenCL** | Generic OpenCL devices | `opencl` |
- | **SYCL** | Intel GPU (Arc, Flex, Data Center) | `sycl` |
- | **RPC** | Distributed inference over network | `rpc` |
 
- ### Step 2: Choose Your CPU Optimization
 
- Check what your CPU supports:
 
  ```bash
- # Linux — check CPU flags
- grep -o 'avx[a-z0-9_]*\|fma\|f16c\|sse4_2' /proc/cpuinfo | sort -u
  ```
 
- | CPU Era | Example CPUs | Recommended Tag |
- |---------|-------------|-----------------|
- | **2013+ Desktop** | Haswell, Ryzen 1st gen | `avx2_fma_f16c` |
- | **2017+ Server** | Skylake-X, EPYC | `avx512_fma_f16c` |
- | **2019+ Server** | Ice Lake, EPYC 3rd gen | `avx512_fma_f16c_vnni_vbmi` |
- | **2021+ Desktop** | Alder Lake, 12th gen Intel | `avx2_fma_f16c_avxvnni` |
- | **2023+ Server** | Sapphire Rapids, 4th gen Xeon | `avx512_fma_f16c_vnni_vbmi_bf16_amx` |
- | **2012+ Legacy** | Ivy Bridge | `avx_f16c` |
- | **2011+ Legacy** | Sandy Bridge | `avx` |
- | **Any x86-64** | Anything 64-bit | `basic` |
 
- ### Step 3: Build the Filename
 
- ```
- llama_cpp_python-{VERSION}+{BACKEND}_{CPU_TAG}-{PYTHON}-{PYTHON}-manylinux_2_31_x86_64.whl
  ```
 
- **Example:** Python 3.12 + OpenBLAS + AVX2/FMA/F16C + version 0.3.16:
- ```
- llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp312-cp312-manylinux_2_31_x86_64.whl
- ```
 
- GPU backends don't need a CPU tag:
- ```
- llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
  ```
 
- ## 📋 CPU Optimization Tags Reference
 
- | Tag | CPU Instructions Enabled | CMake Flags |
- |-----|-------------------------|-------------|
- | `basic` | None (pure x86-64) | — |
- | `avx` | AVX | `-DGGML_AVX=ON` |
- | `avx_f16c` | AVX + F16C | `-DGGML_AVX=ON -DGGML_F16C=ON` |
- | `avx2_fma_f16c` | AVX2 + FMA + F16C | `-DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON` |
- | `avx512_fma_f16c` | AVX512 + FMA + F16C | `-DGGML_AVX512=ON -DGGML_FMA=ON -DGGML_F16C=ON` |
- | `avx512_fma_f16c_vnni_vbmi` | AVX512 + FMA + F16C + VNNI + VBMI | `+ -DGGML_AVX512_VNNI=ON -DGGML_AVX512_VBMI=ON` |
- | `avx512_fma_f16c_vnni_vbmi_bf16_amx` | Full server (Sapphire Rapids) | `+ -DGGML_AVX512_BF16=ON -DGGML_AMX_TILE/INT8/BF16=ON` |
- | `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | `+ -DGGML_AVX_VNNI=ON` |
 
- ## 🐍 Python Version Support
 
- | Python | Tag | Status |
- |--------|-----|--------|
- | 3.8 | `cp38` | ✅ Full coverage |
- | 3.9 | `cp39` | ✅ Full coverage |
- | 3.10 | `cp310` | ✅ Full coverage |
- | 3.11 | `cp311` | ✅ Full coverage |
- | 3.12 | `cp312` | ✅ Full coverage |
- | 3.13 | `cp313` | ✅ Full coverage |
- | 3.14 | `cp314` | ✅ Full coverage |
 
- ## 📦 Naming Convention (PEP 440)
 
- All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:
 
  ```
- llama_cpp_python-{VERSION}+{LOCAL_TAG}-{PYTHON}-{ABI}-{PLATFORM}.whl
-                           ^
-                           └── Local version label (backend + CPU flags)
- ```
 
- The `+` separates the upstream version from the local build variant. The local tag uses `_` to separate components. This is fully PEP 440 compliant and works with `pip`, `requirements.txt`, and all standard Python packaging tools.
 
  ## 🏭 How These Wheels Are Built
 
- These wheels are built automatically by the **Ultimate Llama Wheel Factory** — a distributed build system running on HuggingFace Spaces:
 
- - **[Dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher)** — Command center for creating and managing build jobs
- - **[Workers 1-4](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1)** — Autonomous Docker-based build agents
- - **[Auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor)** — Validates filenames and repo health
 
- Each wheel is compiled from source with explicit cmake flags — no `-march=native` — ensuring the exact instruction set advertised in the filename.
 
  ## ❓ FAQ
 
- **Q: Do I need to install OpenBLAS separately?**
- A: For `openblas` wheels on Linux, yes: `sudo apt install libopenblas-dev`. For `basic` wheels, no external dependencies are needed. For HuggingFace Spaces, add `libopenblas-dev` to `packages.txt`.
 
- **Q: Which wheel is fastest?**
- A: Use the most specific wheel your CPU supports. `avx2_fma_f16c` is the sweet spot for most modern hardware. If your CPU has AVX512, use the `avx512` variants for potentially better performance at large batch sizes.
 
- **Q: Can I use these on Ubuntu/Debian/Fedora/etc.?**
- A: Yes! `manylinux_2_31` works on any Linux distro with glibc ≥ 2.31. That includes Ubuntu 20.04+, Debian 11+, Fedora 34+, RHEL 9+, and most other modern distros.
 
- **Q: What about Windows/macOS/CUDA wheels?**
- A: This repo currently focuses on manylinux. For other platforms, check [abetlen's wheel index](https://abetlen.github.io/llama-cpp-python/whl/) or [jllllll's cuBLAS wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels).
 
- **Q: A wheel doesn't work / crashes with SIGILL?**
- A: You're probably using a wheel with CPU instructions your hardware doesn't support (e.g., AVX512 on a non-AVX512 CPU). Try a less specific wheel like `avx2_fma_f16c` or `basic`.
  ## 📄 License
 
@@ -210,6 +201,6 @@ MIT — same as [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)
 
  ## 🙏 Credits
 
- - [Georgi Gerganov](https://github.com/ggerganov) — llama.cpp
- - [Andrei Betlen](https://github.com/abetlen) — llama-cpp-python
- - Built by [AIencoder](https://huggingface.co/AIencoder) with the Ultimate Llama Wheel Factory
  - llama-cpp-python
  - wheels
  - prebuilt
  - cpu
  - gpu
+ - manylinux
  - gguf
  - inference
  pretty_name: "llama-cpp-python Prebuilt Wheels"
 
  # 🏭 llama-cpp-python Prebuilt Wheels
 
+ **The most complete collection of prebuilt `llama-cpp-python` wheels for manylinux x86_64.**
+
+ Stop compiling. Start inferencing.
 
+ ```bash
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
+ ```
 
+ ## 📊 What's Inside
 
  | | Count |
+ |---|---|
+ | **Total Wheels** | 3,794+ |
  | **Versions** | 0.3.0 — 0.3.16 (17 versions) |
  | **Python** | 3.8, 3.9, 3.10, 3.11, 3.12, 3.13, 3.14 |
  | **Platform** | `manylinux_2_31_x86_64` |
+ | **Backends** | 8 |
+ | **CPU Profiles** | 13+ flag combinations |
 
+ ## ⚡ Backends
 
+ | Backend | Tag | Description |
+ |---------|-----|-------------|
+ | **OpenBLAS** | `openblas` | CPU BLAS acceleration — best general-purpose choice |
+ | **Intel MKL** | `mkl` | Intel Math Kernel Library — fastest on Intel CPUs |
+ | **Basic** | `basic` | No BLAS — maximum compatibility, no extra dependencies |
+ | **Vulkan** | `vulkan` | Universal GPU acceleration — works on NVIDIA, AMD, Intel |
+ | **CLBlast** | `clblast` | OpenCL GPU acceleration |
+ | **SYCL** | `sycl` | Intel GPU acceleration (Data Center, Arc, iGPU) |
+ | **OpenCL** | `opencl` | Generic OpenCL GPU backend |
+ | **RPC** | `rpc` | Distributed inference over network |
 
+ ## 🖥️ CPU Optimization Profiles
 
+ Wheels are built with specific CPU instruction sets enabled. Pick the one that matches your hardware:
+ | CPU Tag | Instructions | Best For |
+ |---------|-------------|----------|
+ | `basic` | None | Any x86-64 CPU (maximum compatibility) |
+ | `avx` | AVX | Sandy Bridge+ (2011) |
+ | `avx_f16c` | AVX + F16C | Ivy Bridge+ (2012) |
+ | `avx2_fma_f16c` | AVX2 + FMA + F16C | **Haswell+ (2013) — most common** |
+ | `avx2_fma_f16c_avxvnni` | AVX2 + FMA + F16C + AVX-VNNI | Alder Lake+ (2021) |
+ | `avx512_fma_f16c` | AVX-512 + FMA + F16C | Skylake-X+ (2017) |
+ | `avx512_fma_f16c_vnni` | + AVX512-VNNI | Cascade Lake+ (2019) |
+ | `avx512_fma_f16c_vnni_vbmi` | + AVX512-VBMI | Ice Lake+ (2019) |
+ | `avx512_fma_f16c_vnni_vbmi_bf16_amx` | + BF16 + AMX | Sapphire Rapids+ (2023) |
 
+ ### How to Pick the Right Wheel
 
+ **Don't know your CPU?** Start with `avx2_fma_f16c` — it works on any CPU from 2013 onwards (Intel Haswell, AMD Ryzen, and newer).
 
+ **Want maximum compatibility?** Use `basic` — works on literally any x86-64 CPU.
 
+ **Have a server CPU?** Check if it supports AVX-512:
+ ```bash
+ grep -o 'avx[^ ]*\|fma\|f16c\|bmi2\|sse4_2' /proc/cpuinfo | sort -u
  ```
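The CPU-tag tables above can also be applied programmatically. Below is a minimal sketch, assuming Linux-style `/proc/cpuinfo` flag names; the `suggest_cpu_tag` helper is illustrative and not part of this repo:

```python
# Illustrative helper (not shipped with these wheels): map a /proc/cpuinfo
# "flags" line to the closest CPU tag from the tables above.
def suggest_cpu_tag(flags_line: str) -> str:
    flags = set(flags_line.lower().split())
    if {"avx512f", "fma", "f16c"} <= flags:
        tag = "avx512_fma_f16c"
        if "avx512_vnni" in flags:
            tag += "_vnni"
            if "avx512_vbmi" in flags:
                tag += "_vbmi"
        return tag
    if {"avx2", "fma", "f16c"} <= flags:
        return "avx2_fma_f16c_avxvnni" if "avx_vnni" in flags else "avx2_fma_f16c"
    if "avx" in flags:
        return "avx_f16c" if "f16c" in flags else "avx"
    return "basic"

# Example: a Haswell-era flag set
print(suggest_cpu_tag("fpu sse4_2 avx avx2 fma f16c"))  # avx2_fma_f16c
```

On a real machine you would feed it the `flags` line read from `/proc/cpuinfo` instead of the literal string.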
 
+ ## 📦 Filename Format
 
+ All wheels follow the [PEP 440](https://peps.python.org/pep-0440/) local version identifier standard:
 
  ```
+ llama_cpp_python-{version}+{backend}_{cpu_flags}-{python}-{python}-{platform}.whl
  ```
 
+ Examples:
  ```
+ llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
+ llama_cpp_python-0.3.16+vulkan-cp312-cp312-manylinux_2_31_x86_64.whl
+ llama_cpp_python-0.3.16+basic-cp310-cp310-manylinux_2_31_x86_64.whl
  ```
 
+ The local version label (`+openblas_avx2_fma_f16c`) encodes:
+ - **Backend**: `openblas`, `mkl`, `basic`, `vulkan`, `clblast`, `sycl`, `opencl`, `rpc`
+ - **CPU flags** (in order): `avx`, `avx2`, `avx512`, `fma`, `f16c`, `vnni`, `vbmi`, `bf16`, `avxvnni`, `amx`
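Given this convention, a download URL can be assembled mechanically. A small sketch; the `wheel_filename` helper is hypothetical, and `urllib.parse.quote` handles the `+`, which must appear as `%2B` inside a URL or a `requirements.txt` line:

```python
from urllib.parse import quote

BASE = "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/"

# Hypothetical helper that follows the naming scheme documented above.
def wheel_filename(version: str, backend: str, cpu_flags: str = "",
                   python: str = "cp311",
                   platform: str = "manylinux_2_31_x86_64") -> str:
    # GPU backends carry no CPU tag, so the local label is just the backend.
    local = f"{backend}_{cpu_flags}" if cpu_flags else backend
    return f"llama_cpp_python-{version}+{local}-{python}-{python}-{platform}.whl"

name = wheel_filename("0.3.16", "openblas", "avx2_fma_f16c")
print(name)                # llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
print(BASE + quote(name))  # "+" becomes %2B, safe for requirements.txt
```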
 
+ ## 🚀 Quick Start
 
+ ### CPU (OpenBLAS + AVX2 — recommended for most users)
 
  ```bash
+ sudo apt-get install libopenblas-dev
+
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+openblas_avx2_fma_f16c-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
 
+ ### GPU (Vulkan — works on any GPU vendor)
 
+ ```bash
+ sudo apt-get install libvulkan1
+
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+vulkan-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
 
+ ### Basic (zero dependencies)
 
+ ```bash
+ pip install https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.16+basic-cp311-cp311-manylinux_2_31_x86_64.whl
  ```
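After any of the installs above, a quick sanity check that pip registered the wheel (standard library only; prints the installed version or a notice):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(dist: str = "llama-cpp-python"):
    """Return the installed version string, or None if the package is absent."""
    try:
        return version(dist)
    except PackageNotFoundError:
        return None

print(installed_version() or "llama-cpp-python is not installed")
```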
 
+ ### Example Usage
 
+ ```python
+ from llama_cpp import Llama
+
+ llm = Llama.from_pretrained(
+     repo_id="Qwen/Qwen2.5-Coder-7B-Instruct-GGUF",
+     filename="*q4_k_m.gguf",
+     n_ctx=4096,
+ )
+
+ output = llm.create_chat_completion(
+     messages=[{"role": "user", "content": "Write a Python hello world"}],
+     max_tokens=256,
+ )
+ print(output["choices"][0]["message"]["content"])
  ```
 
 
 
 
 
+ ## 🔧 Runtime Dependencies
 
+ | Backend | Required Packages |
+ |---------|------------------|
+ | OpenBLAS | `libopenblas0` (runtime) or `libopenblas-dev` (build) |
+ | MKL | Intel oneAPI MKL |
+ | Vulkan | `libvulkan1` |
+ | CLBlast | `libclblast1` |
+ | OpenCL | `ocl-icd-libopencl1` |
+ | Basic | **None** |
+ | SYCL | Intel oneAPI DPC++ runtime |
+ | RPC | Network access to RPC server |
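A quick way to check which of these runtime libraries are already present, using only the standard library; the base library names below are assumptions based on common Linux sonames:

```python
from ctypes.util import find_library

# Base names passed to the loader; e.g. "openblas" matches libopenblas.so.*
RUNTIME_LIBS = {
    "openblas": "openblas",
    "vulkan": "vulkan",
    "clblast": "clblast",
    "opencl": "OpenCL",
}

found = {backend: find_library(name) for backend, name in RUNTIME_LIBS.items()}
for backend, soname in found.items():
    print(f"{backend}: {soname or 'missing'}")
```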
 
  ## 🏭 How These Wheels Are Built
 
+ These wheels are built by the **Ultimate Llama Wheel Factory** — a distributed build system running entirely on free HuggingFace Spaces:
+
+ | Component | Link |
+ |-----------|------|
+ | 🏭 Dispatcher | [wheel-factory-dispatcher](https://huggingface.co/spaces/AIencoder/wheel-factory-dispatcher) |
+ | ⚙️ Workers 1-4 | [wheel-factory-worker-1](https://huggingface.co/spaces/AIencoder/wheel-factory-worker-1) ... 4 |
+ | 🔍 Auditor | [wheel-factory-auditor](https://huggingface.co/spaces/AIencoder/wheel-factory-auditor) |
+
+ The factory uses explicit cmake flags matching llama.cpp's official CPU variant builds:
 
+ ```
+ CMAKE_ARGS="-DGGML_BLAS=ON -DGGML_BLAS_VENDOR=OpenBLAS -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_AVX=OFF -DGGML_AVX512=OFF -DGGML_NATIVE=OFF"
+ ```
 
+ Every flag is set explicitly (no cmake defaults) to ensure reproducible, deterministic builds.
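The CPU tags map mechanically onto these GGML flags. Below is a minimal sketch of that expansion; the mapping is reconstructed from the flags listed in this README and is not the factory's actual code:

```python
# Illustrative tag-component -> cmake flag mapping (reconstructed, not official).
FLAG_MAP = {
    "avx": "-DGGML_AVX=ON",
    "avx2": "-DGGML_AVX2=ON",
    "avx512": "-DGGML_AVX512=ON",
    "fma": "-DGGML_FMA=ON",
    "f16c": "-DGGML_F16C=ON",
    "vnni": "-DGGML_AVX512_VNNI=ON",
    "vbmi": "-DGGML_AVX512_VBMI=ON",
    "bf16": "-DGGML_AVX512_BF16=ON",
    "avxvnni": "-DGGML_AVX_VNNI=ON",
    "amx": "-DGGML_AMX_TILE=ON -DGGML_AMX_INT8=ON -DGGML_AMX_BF16=ON",
}

def cmake_args(cpu_tag: str) -> str:
    # "basic" enables nothing; every build pins -DGGML_NATIVE=OFF so the
    # compiled instruction set matches the advertised tag exactly.
    parts = [FLAG_MAP[p] for p in cpu_tag.split("_") if p != "basic"]
    return " ".join(parts + ["-DGGML_NATIVE=OFF"])

print(cmake_args("avx2_fma_f16c"))
# -DGGML_AVX2=ON -DGGML_FMA=ON -DGGML_F16C=ON -DGGML_NATIVE=OFF
```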
 
  ## ❓ FAQ
 
+ **Q: Which wheel should I use?**
+ For most people: `openblas_avx2_fma_f16c` with your Python version. It's fast, works on 90%+ of modern CPUs, and only needs `libopenblas`.
+
+ **Q: Can I use these on Ubuntu / Debian / Fedora / Arch?**
+ Yes — `manylinux_2_31` wheels work on any Linux distro with glibc 2.31 or newer (Ubuntu 20.04+, Debian 11+, Fedora 34+, Arch).
 
+ **Q: What about Windows / macOS / CUDA wheels?**
+ This repo focuses on manylinux x86_64. For other platforms, see:
+ - [abetlen's official wheel index](https://abetlen.github.io/llama-cpp-python/whl/) — CPU, CUDA 12.1-12.5, Metal
+ - [jllllll's CUDA wheels](https://github.com/jllllll/llama-cpp-python-cuBLAS-wheels) — cuBLAS + AVX combos
 
+ **Q: These wheels don't work on Alpine Linux.**
+ Alpine uses musl, not glibc. These are `manylinux` (glibc) wheels. Build from source or use `musllinux` wheels.
 
+ **Q: I get "illegal instruction" errors.**
+ You're using a wheel with CPU flags your processor doesn't support. Try `basic` (no SIMD) or check your CPU flags with:
+ ```bash
+ grep -o 'avx[^ ]*\|fma\|f16c' /proc/cpuinfo | sort -u
+ ```
 
+ **Q: Can I contribute more wheels?**
+ Yes! The factory source code is open. See the Dispatcher and Worker Spaces linked above.
 
  ## 📄 License
 
 
  ## 🙏 Credits
 
+ - [llama.cpp](https://github.com/ggml-org/llama.cpp) by Georgi Gerganov and the ggml community
+ - [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) by Andrei Betlen
+ - Built with 🏭 by [AIencoder](https://huggingface.co/AIencoder)