SmolLM-Smashed is a collection of optimized language models. Each model is quantized and compiled for maximum efficiency while preserving performance.
Parag Ekbote
AINovice2005
AI & ML interests
ML engineer passionate about taking models from research to production, with one year of experience supporting tech startups. Active OSS contributor.
Recent Activity
upvoted an article 2 days ago: "GGML and llama.cpp join HF to ensure the long-term progress of Local AI"
upvoted an article 9 days ago: "Custom Kernels for All from Codex and Claude"
reacted to danielhanchen's post with 🔥 12 days ago:
"We collaborated with Hugging Face to enable you to train MoE models 12× faster with 35% less VRAM via our new Triton kernels (no accuracy loss). 🤗 Train gpt-oss locally on 12.8GB VRAM with our free notebooks: https://unsloth.ai/docs/new/faster-moe"