---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- code
- llama-cpp
- llama-cpp-python
- wheels
- pre-built
- binary
- linux
- windows
- macos
pretty_name: llama-cpp-python Pre-Built Wheels
size_categories:
- 1K<n<10K
---
# llama-cpp-python Mega-Factory Wheels
> **"Stop waiting for `pip` to compile. Just install and run."**
The most complete collection of pre-built `llama-cpp-python` wheels in existence: **8,333 wheels** across every platform, Python version, backend, and CPU optimization level.
No more `cmake`, `gcc`, or compilation hell. No more waiting 10 minutes for a build that might fail. Just find your wheel and `pip install` it directly.
---
## Why These Wheels?
Standard wheels target the "lowest common denominator" to avoid crashes on old hardware. This collection goes further: the manylinux wheels are built using a massive **Everything Preset** targeting specific CPU instruction sets, maximizing your **tokens per second (T/s)**.
- **Zero Dependencies:** No `cmake`, `gcc`, or `nvcc` required on your target machine.
- **Every Platform:** Linux (manylinux, aarch64, i686, RISC-V), Windows (amd64, 32-bit), macOS (Intel + Apple Silicon).
- **Server-Grade Power:** Optimized builds for `Sapphire Rapids`, `Ice Lake`, `Alder Lake`, `Haswell`, and more.
- **Full Backend Support:** `OpenBLAS`, `MKL`, `Vulkan`, `CLBlast`, `OpenCL`, `RPC`, and plain CPU builds.
- **Cutting Edge:** Python `3.8` through experimental `3.14`, plus PyPy `pp38`–`pp310`.
- **GPU Too:** CUDA wheels (cu121–cu124) and macOS Metal wheels included.
---
## Collection Stats
| Platform | Wheels |
|:---|---:|
| Linux x86_64 (manylinux) | 4,940 |
| macOS Intel (x86\_64) | 1,040 |
| Windows (amd64) | 1,010 |
| Windows (32-bit) | 634 |
| macOS Apple Silicon (arm64) | 289 |
| Linux i686 | 214 |
| Linux aarch64 | 120 |
| Linux x86\_64 (plain) | 81 |
| Linux RISC-V | 5 |
| **Total** | **8,333** |
The manylinux builds alone cover **3,600+ combinations** across versions, backends, Python versions, and CPU profiles.
---
## How to Install
### Quick Install
Find your wheel filename (see naming convention below), then:
```bash
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/YOUR_WHEEL_NAME.whl"
```
### Common Examples
```bash
# Linux x86_64, Python 3.11, OpenBLAS, Haswell CPU (most common Linux setup)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+openblas_haswell-cp311-cp311-manylinux_2_31_x86_64.whl"
# Linux x86_64, Python 3.12, Basic CPU (maximum compatibility)
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18+basic_basic-cp312-cp312-manylinux_2_31_x86_64.whl"
# Windows, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-win_amd64.whl"
# macOS Apple Silicon, Python 3.12
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp312-cp312-macosx_11_0_arm64.whl"
# macOS Intel, Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-macosx_10_9_x86_64.whl"
# Linux ARM64 (Raspberry Pi, AWS Graviton), Python 3.11
pip install "https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main/llama_cpp_python-0.3.18-cp311-cp311-linux_aarch64.whl"
```
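Whichever wheel you install, it's worth confirming that the compiled binary actually loads on your machine (a wheel built for the wrong CPU profile can fail at import time rather than at install time). A quick sanity check:

```bash
# Confirm the module imports cleanly and report which version was installed.
python3 -c "import llama_cpp; print(llama_cpp.__version__)" \
  || echo "Import failed: check your Python tag, platform tag, and CPU profile"
```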
---
## Wheel Naming Convention
### manylinux wheels (custom-built)
```
llama_cpp_python-{version}+{backend}_{profile}-{pytag}-{pytag}-{platform}.whl
```
**Versions covered:** `0.3.0` through `0.3.18+`
**Backends:**
| Backend | Description |
|:---|:---|
| `openblas` | OpenBLAS acceleration; best general-purpose CPU performance |
| `mkl` | Intel MKL acceleration; best on Intel CPUs |
| `basic` | No BLAS, maximum compatibility |
| `vulkan` | Vulkan GPU backend |
| `clblast` | CLBlast OpenCL GPU backend |
| `opencl` | Generic OpenCL GPU backend |
| `rpc` | Distributed inference over network |
**CPU Profiles:**
| Profile | Instruction Sets | Era | Notes |
|:---|:---|:---|:---|
| `basic` | x86-64 baseline | Any | Maximum compatibility |
| `sse42` | SSE 4.2 | 2008+ | Nehalem |
| `sandybridge` | AVX | 2011+ | |
| `ivybridge` | AVX + F16C | 2012+ | |
| `haswell` | AVX2 + FMA + BMI2 | 2013+ | **Most common** |
| `skylakex` | AVX-512 | 2017+ | |
| `icelake` | AVX-512 + VNNI + VBMI | 2019+ | |
| `alderlake` | AVX-VNNI | 2021+ | |
| `sapphirerapids` | AVX-512 BF16 + AMX | 2023+ | Highest performance |
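On Linux, you can read your CPU's instruction-set flags straight from `/proc/cpuinfo` to decide which profile to pick. A rough sketch (the flag names are the kernel's; e.g. `avx512_bf16` and `amx_tile` correspond to the `sapphirerapids` profile, `avx2`/`fma`/`bmi2` to `haswell`):

```bash
# List the instruction-set flags relevant to the profiles above.
# avx2 + fma + bmi2 -> haswell; avx512f -> skylakex or newer;
# avx512_bf16 / amx_tile -> sapphirerapids
grep -o -w -E 'sse4_2|avx2|fma|bmi2|avx512f|avx512_vnni|avx512_bf16|amx_tile' \
  /proc/cpuinfo | sort -u
```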
**Python tags:** `cp38`, `cp39`, `cp310`, `cp311`, `cp312`, `cp313`, `cp314`, `pp38`, `pp39`, `pp310`
**Platform:** `manylinux_2_31_x86_64` (glibc 2.31+, compatible with Ubuntu 20.04+, Debian 11+)
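Because these wheels require glibc 2.31+, it's worth checking your system's glibc before installing. One way to do this, assuming a glibc-based distro:

```bash
# Print the glibc version; manylinux_2_31 wheels need 2.31 or newer.
# (getconf GNU_LIBC_VERSION works too on glibc systems.)
ldd --version | head -n 1
```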
### Windows / macOS / Linux ARM wheels (from abetlen)
```
llama_cpp_python-{version}-{pytag}-{pytag}-{platform}.whl
```
These are the official pre-built wheels from the upstream maintainer, covering versions `0.2.82` through `0.3.18+`.
---
## How to Find Your Wheel
1. **Identify your Python version:** `python --version` → e.g. `3.11` → tag `cp311`
2. **Identify your platform:**
   - Linux x86\_64 → `manylinux_2_31_x86_64`
   - Windows 64-bit → `win_amd64`
   - macOS Apple Silicon → `macosx_11_0_arm64`
   - macOS Intel → `macosx_10_9_x86_64`
3. **Pick a backend** (manylinux only): `openblas` for most use cases
4. **Pick a CPU profile** (manylinux only): `haswell` works on virtually all modern CPUs
5. **Browse the files** in this repo or construct the filename directly
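The steps above can be sketched as a small shell helper that derives the CPython tag from the running interpreter and prints a candidate manylinux URL. The backend, profile, and version values are illustrative placeholders; adjust them and verify the file actually exists in the repo before installing:

```bash
# Derive the CPython tag (e.g. cp311) and build a candidate wheel URL.
PYTAG="cp$(python3 -c 'import sys; print("%d%d" % sys.version_info[:2])')"
VERSION="0.3.18"; BACKEND="openblas"; PROFILE="haswell"   # illustrative choices
BASE="https://huggingface.co/datasets/AIencoder/llama-cpp-wheels/resolve/main"
echo "${BASE}/llama_cpp_python-${VERSION}+${BACKEND}_${PROFILE}-${PYTAG}-${PYTAG}-manylinux_2_31_x86_64.whl"
```

You can then pass the printed URL straight to `pip install`.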
---
## Sources & Credits
### manylinux Wheels – Built by AIencoder
The 4,940 manylinux x86\_64 wheels were built by a distributed **4-worker HuggingFace Space factory** (`AIencoder/wheel-factory-*`), a custom automated pipeline covering every possible llama.cpp cmake option on manylinux:
- Every backend: OpenBLAS, MKL, Basic, Vulkan, CLBlast, OpenCL, RPC
- Every CPU hardware profile from baseline x86-64 up to Sapphire Rapids AMX
- Python 3.8 through 3.14
- llama-cpp-python versions 0.3.0 through 0.3.18+
### Windows / macOS / Linux ARM Wheels – abetlen
The remaining 3,393 wheels (Windows, macOS, Linux aarch64/i686/riscv64, PyPy) were sourced from the official releases by **Andrei Betlen ([@abetlen](https://github.com/abetlen))**, the original author and maintainer of `llama-cpp-python`. These include:
- CPU wheels for all platforms via `https://abetlen.github.io/llama-cpp-python/whl/cpu/`
- Metal wheels for macOS GPU acceleration
- CUDA wheels (cu121–cu124) for Windows and Linux
> All credit for the underlying library goes to **Georgi Gerganov ([@ggerganov](https://github.com/ggerganov))** and the [llama.cpp](https://github.com/ggml-org/llama.cpp) team, and to **Andrei Betlen** for the Python bindings.
---
## Notes
- All wheels are **MIT licensed** (same as llama-cpp-python upstream)
- manylinux wheels require **glibc 2.31+** (Ubuntu 20.04+, Debian 11+)
- `manylinux` and `linux_x86_64` are **not the same thing**: manylinux wheels have broad distro compatibility, plain linux wheels do not
- CUDA wheels require the matching CUDA toolkit to be installed
- Metal wheels require macOS 11.0+ and an Apple Silicon or AMD GPU
- This collection is updated periodically as new versions are released
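For the CUDA wheels in particular, a quick way to check whether a matching toolkit is on your `PATH` before downloading (a sketch; the reported CUDA version still has to match the wheel's cu121–cu124 tag):

```bash
# Report the installed CUDA toolkit version, or fall back to a CPU-backend hint.
if command -v nvcc >/dev/null 2>&1; then
  nvcc --version | grep -i release
else
  echo "No CUDA toolkit on PATH; pick a CPU, Vulkan, or OpenCL wheel instead"
fi
```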