All HF Hub posts

SeaWolf-AI posted an update 2 days ago
🏟️ Smol AI WorldCup: A 4B Model Just Beat 8B – Here's the Data

We evaluated 18 small language models from 12 makers on 125 questions across 7 languages. The results challenge the assumption that bigger is always better.

Community Article: https://huggingface.co/blog/FINAL-Bench/smol-worldcup
Live Leaderboard: ginigen-ai/smol-worldcup
Dataset: ginigen-ai/smol-worldcup

What we found:

→ Gemma-3n-E4B (4B, 2GB RAM) outscores Qwen3-8B (8B, 5.5GB). Doubling parameters gained only 0.4 points. RAM cost: 2.75x more.

→ GPT-OSS-20B fits in 1.5GB yet matches Champions-league dense models requiring 8.5GB. MoE architecture is the edge AI game-changer.

→ Thinking models hurt structured output. DeepSeek-R1-7B scores 8.7 points below same-size Qwen3-8B and runs 2.7x slower.

→ A 1.3B model fabricates confident fake content 80% of the time when prompted with nonexistent entities. The Qwen3 family hits 100% trap detection across all sizes.

→ Qwen3-1.7B (1.2GB) outscores Mistral-7B, Llama-3.1-8B, and DeepSeek-R1-14B. The latest architecture at 1.7B beats older architecture at 14B.

What makes this benchmark different?

Most benchmarks ask "how smart?" We measure five axes simultaneously: Size, Honesty, Intelligence, Fast, Thrift (SHIFT). Our ranking metric WCS = sqrt(SHIFT x PIR_norm) rewards models that are both high-quality AND efficient. Smart but massive? Low rank. Tiny but poor? Also low.

Top 5 by WCS:
1. GPT-OSS-20B – WCS 82.6 – 1.5GB – Raspberry Pi tier
2. Gemma-3n-E4B – WCS 81.8 – 2.0GB – Smartphone tier
3. Llama-4-Scout – WCS 79.3 – 240 tok/s – Fastest model
4. Qwen3-4B – WCS 76.6 – 2.8GB – Smartphone tier
5. Qwen3-1.7B – WCS 76.1 – 1.2GB – IoT tier
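The WCS formula can be sketched in a few lines. This is a minimal illustration only: the leaderboard's actual SHIFT and PIR_norm computations aren't reproduced here, so the inputs below are hypothetical values assumed to be on a 0-100 scale.

```python
import math

def wcs(shift: float, pir_norm: float) -> float:
    """World Cup Score: geometric mean of the SHIFT efficiency score
    and the normalized performance index (both assumed 0-100)."""
    return math.sqrt(shift * pir_norm)

# Hypothetical inputs: a smart-but-massive model vs. a balanced small one.
print(wcs(50.0, 98.0))  # -> 70.0: high quality, poor efficiency
print(wcs(85.0, 80.0))  # balanced model ranks higher
```

The geometric mean is what enforces the "both high-quality AND efficient" property: a near-zero score on either axis drags the whole rank down, which an arithmetic mean would not.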

Built in collaboration with the FINAL Bench research team. Interoperable with ALL Bench Leaderboard for full small-to-large model comparison.

Dataset is open under Apache 2.0 (125 questions, 7 languages). We welcome new model submissions.
SeaWolf-AI posted an update 3 days ago
🚀 Introducing MARL: Runtime Middleware That Reduces LLM Hallucination Without Fine-Tuning

Now available on PyPI · GitHub · ClawHub · HuggingFace
AI models sense they could be wrong, but they can't actually fix what's broken.

🤗 Live A/B test: VIDraft/MARL

We evaluated 9 SOTA models (GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, etc.) across 1,800 assessments in FINAL Bench and found a 39.2-percentage-point gap between recognizing potential errors (MA = 0.694) and actually finding and fixing them (ER = 0.302).

MARL (Model-Agnostic Runtime Middleware for LLMs) was built to close this metacognitive gap. It decomposes a single LLM call into a 5-stage expert pipeline (Hypothesis → Solver → Auditor → Adversarial Verifier → Synthesizer), transforming "answer in one shot" into "think, doubt, correct, and rewrite."

No weight modification: works instantly with GPT-5.4, Claude, Gemini, Llama, or any OpenAI API-compatible LLM by changing one line, base_url. Ships with 9 domain-specific emergence engines (invention, pharma, genomics, chemistry, ecology, law, and more: 5,538 expert data items) activated by a simple tag like model="gpt-5.4::pharma".
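The staged decomposition can be illustrated with a minimal skeleton. These stage functions and field names are illustrative stand-ins, not the package's actual API; in the real middleware each stage is backed by an LLM call through the configured base_url.

```python
from functools import reduce

# Hypothetical stand-ins for MARL's five stages, threading a state dict.
def hypothesis(state):
    return {**state, "plan": f"decompose: {state['question']}"}

def solver(state):
    return {**state, "draft": "candidate answer"}

def auditor(state):
    return {**state, "issues": ["unsupported claim in draft"]}

def adversarial_verifier(state):
    return {**state, "challenged": bool(state["issues"])}

def synthesizer(state):
    fixed = " (revised)" if state["challenged"] else ""
    return {**state, "answer": state["draft"] + fixed}

PIPELINE = [hypothesis, solver, adversarial_verifier, synthesizer]
PIPELINE.insert(2, auditor)  # Hypothesis -> Solver -> Auditor -> Verifier -> Synthesizer

def run(question):
    # Fold the state through each stage in order.
    return reduce(lambda s, stage: stage(s), PIPELINE, {"question": question})

print(run("Which enzyme does aspirin inhibit?")["answer"])
# -> candidate answer (revised)
```

The point of the shape: the draft is never returned directly; it must survive an audit and an adversarial challenge before the synthesizer rewrites it.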

pip install marl-middleware

MARL is also officially registered on ClawHub, the skill marketplace of OpenClaw, an AI agent platform with 260K+ developers and 3,200+ skills. It's the first middleware in the Reasoning Enhancement category. One command, clawhub install marl-middleware, gives your AI agent a metacognition upgrade.

πŸ“ Technical deep dive: https://huggingface.co/blog/FINAL-Bench/marl-middleware
πŸ“¦ PyPI: https://pypi.org/project/marl-middleware/
πŸ™ GitHub: https://github.com/Vidraft/MARL
πŸ¦€ ClawHub: https://clawhub.ai/Cutechicken99/marl-middleware

#MARL #LLM #Hallucination #Metacognition #MultiAgent #AIMiddleware #FINALBench #OpenClaw #ClawHub #PyPI #AGI #HuggingFace #ReasoningAI #SelfCorrection #GlassBoxAI
DavidAU posted an update 3 days ago
21 Qwen 3.5 fine-tunes (thinking and instruct); regular and uncensored (2B to 27B) exceed benchmarks and work better than the original models.

All are benchmarked against the original model.
Many exceed all benchmarks of the original model.
Claude, GLM, Gemini, and other distills.
Thinking AND dedicated Instruct versions.

Core goal: Increase benchmarks, and address long thinking blocks.

Highlights:

9B and 27B instruct "Claude" versions hit 624 and 675 on ARC-C (the hard challenge set).

Thinking fine tunes exceed org model performance (in thinking mode).

In many cases there is a drastic reduction in thinking block size.

9B Claude Heretic Uncensored, GGUF:
- Neo, Code Imatrix (dual imatrix)
- Updated Jinja template
- Custom tensor enhancements

DavidAU/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING-MAX-NEOCODE-Imatrix-GGUF

COLLECTION [21 models]:
https://huggingface.co/collections/DavidAU/qwen-35-08-2-4-9-27-35b-regular-uncensored

UPDATE:
Now 31 models, including experimental 21B and new 13B models.
JonnaMat posted an update about 14 hours ago
🚀 FlashHead: Efficient Drop-In Replacement for the Classification Head in Language Model Inference

🔎 Check out our latest FlashHead-enabled model: embedl/Cosmos-Reason2-2B-W4A16-Edge2-FlashHead

🧩 Seamless integration with vLLM:
docker run --rm -it \
  --network host \
  --shm-size=8g \
  --ulimit memlock=-1 \
  --ulimit stack=67108864 \
  --runtime=nvidia \
  --name=vllm-serve \
  -e HF_TOKEN=hf_*** \
  -e HF_HOME=/root/.cache/huggingface \
  embedl/vllm:latest-jetson-orin-flashhead \
  vllm serve "embedl/Cosmos-Reason2-2B-W4A16-Edge2-FlashHead" \
    --max-model-len 8192 \
    --gpu-memory-utilization 0.75 \
    --max-num-seqs 2 \
    --trust-remote-code


branikita posted an update 1 day ago
Testing a parallel gripper with a MaixSense-A010 ToF depth camera (100-point sensor) and pressure sensors.

By combining depth data with force feedback, the gripper closes only when the object is in a graspable position. If the object slips or leaves the grasp zone before closing, the system can automatically retry, as shown in the video.

Gripper repository (version without camera and sensors):
https://github.com/roboninecom/SO-ARM100-101-Parallel-Gripper
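The close-only-when-graspable logic can be sketched as a simple control loop. The sensor and actuator callables below are hypothetical stand-ins, not the repository's actual API; thresholds are made-up example values.

```python
# Hypothetical grasp loop: combine ToF depth with pressure feedback, close
# only when the object sits inside the grasp zone, and retry on slip.
def attempt_grasp(read_depth_mm, read_pressure, close, open_,
                  zone=(30, 80), min_pressure=0.2, max_retries=3):
    for attempt in range(1, max_retries + 1):
        depth = read_depth_mm()
        if zone[0] <= depth <= zone[1]:          # object in graspable position
            close()
            if read_pressure() >= min_pressure:  # contact force confirms hold
                return attempt
            open_()                              # slipped out: release, retry
    return None  # gave up after max_retries

# Simulated run: object out of zone, then in zone but slipping, then held.
depths = iter([120, 55, 55])
pressures = iter([0.0, 0.5])
n = attempt_grasp(lambda: next(depths), lambda: next(pressures),
                  lambda: None, lambda: None)
print(n)  # -> 3 (succeeded on the third attempt)
```

Gating the close command on depth first and pressure second is what keeps the gripper from clamping empty air or holding a slipped object.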
BestWishYsh posted an update 7 days ago
🚀 Introducing Helios: a 14B real-time long-video generation model!

It's completely wild: faster than 1.3B models, and it achieves this without using self-forcing. Welcome to the new era of video generation! 😎👇

💻 Code: https://github.com/PKU-YuanGroup/Helios
🏠 Page: https://pku-yuangroup.github.io/Helios-Page
📄 Paper: Helios: Real Real-Time Long Video Generation Model (2603.04379)

🔹 True Single-GPU Extreme Speed ⚡️
No need to rely on traditional workarounds like KV-cache, quantization, sparse/linear attention, or TinyVAE. Helios hits an end-to-end 19.5 FPS on a single H100!

Training is also highly accessible: four 14B models fit in 80GB of VRAM.

🔹 Solving Long-Video "Drift" from the Core 🎥
Tired of visual drift and repetitive loops? We ditched traditional hacks (like error banks, self-forcing, or keyframe sampling).

Instead, our innovative training strategy simulates and eliminates drift directly, keeping minute-long videos incredibly coherent with stunning quality. ✨

🔹 3 Model Variants for Full Coverage 🛠️
With a unified architecture natively supporting T2V, I2V, and V2V, we are open-sourcing 3 flavors:

1️⃣ Base: Single-stage denoising for extremely high fidelity.
2️⃣ Mid: Pyramid denoising + CFG-Zero for the perfect balance of quality and throughput.
3️⃣ Distilled: Adversarial Distillation (DMD) for ultra-fast, few-step generation.

🔹 Day-0 Ecosystem Ready 🌍
We wanted deployment to be a breeze from the second we launched. Helios drops with comprehensive Day-0 hardware and framework support:

✅ Huawei Ascend-NPU
✅ HuggingFace Diffusers
✅ vLLM-Omni
✅ SGLang-Diffusion

Try it out and let us know what you think!
AbstractPhil posted an update about 5 hours ago
geolip-captionbert-8192

This BERT is currently being distilled from 5 BERT teachers on the Conceptual Captions dataset. Recall accuracy is measured via whitened Procrustes alignment, and the losses keep that rotation correctly aligned.

Results from the smaller prototypes suggest this model will reach 100% recall accuracy by aligning specifically to the correct answers, weighting the teachers' most reliable opinions in conjunction with all the geometric losses.

No joke, this may be the smallest, cheapest-to-compute, most accurate, and fastest BERT I've trained thus far, and it will be based entirely on five teachers simultaneously feeding opinions through a relay hub.
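For context, Procrustes alignment solves for the orthogonal rotation that best maps one embedding space onto another; whitening the embeddings first gives the "whitened" variant used for the recall metric. A minimal un-whitened numpy version (not AbstractPhil's actual training code):

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Orthogonal R minimizing ||X @ R - Y||_F: the classic Procrustes
    solution, R = U @ Vt from the SVD of the cross-covariance X.T @ Y."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Sanity check: recover a known rotation from paired embeddings.
rng = np.random.default_rng(0)
R_true, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # random orthogonal matrix
X = rng.normal(size=(100, 16))                       # 100 paired embeddings
R = procrustes_rotation(X, X @ R_true)
print(np.allclose(R, R_true, atol=1e-6))  # -> True
```

Because R is constrained to be orthogonal, the map preserves distances and angles, so recall measured after alignment reflects genuine geometric agreement between student and teacher spaces rather than an arbitrary learned transform.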
Teen-Different posted an update about 18 hours ago
Adaptive Attention at Inference Time: Does It Actually Work?

A hypernetwork that rewires GPT's value heads on every forward pass. The answer: not a clean win, but not a failure either.

Blog post: https://teendifferent.substack.com/p/adaptive-attention-at-inference-time
Code: https://github.com/REDDITARUN/a-gpt
Weights: Teen-Different/adaptive-gpts


What This Is

Five small language model variants trained for 12k steps on a 300M token mixed corpus, answering one question: can the residual stream be used to slightly rewrite the model's own computation while it's running?

Instead of a fixed W_v for every context, a TinyHeadTransformer hypernetwork generates low-rank (LoRA-style) updates to the value projection of each attention head, conditioned on the current residual stream. Each token gets a dynamically adapted value transformation.


The Five Models

Base GPT – 28.9M params, 139 tok/s, val loss ~3.82
Matched GPT (+2 layers) – 30.5M params, 204 tok/s, val loss ~3.80
Adaptive GPT – 30.5M params, 38.7 tok/s, val loss ~3.88–3.92
Diffusion GPT – 28.9M params, 110 tok/s, val loss ~5.0–5.2
Adaptive Diffusion GPT – 30.5M params, 40.4 tok/s, val loss ~5.0–5.2

Architecture: 4 layers, 4 heads, d_model=256, context=256, RoPE, GPT-2 tokenizer.


How the Hypernetwork Works

For each attention head, a TinyHeadTransformer encodes the head's residual stream slice, mean-pools it to a conditioning vector, then projects into low-rank factors A (d×r) and B (r×d) at rank=8. The dynamic value update follows LoRA conventions with alpha/r scaling. B is zero-initialized so the adaptive path starts inert and the model begins as a vanilla GPT, which is critical for training stability.
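A stripped-down numpy sketch of that per-head dynamic update; the projection names and shapes are illustrative, not the repo's actual modules, and the full TinyHeadTransformer encoder is collapsed into a mean-pool for brevity.

```python
import numpy as np

d, r, alpha, seq = 256, 8, 16, 32

rng = np.random.default_rng(0)
W_v    = rng.normal(scale=0.02, size=(d, d))    # static value projection
A_proj = rng.normal(scale=0.02, size=(d, d * r))  # hypernetwork head for A
B_proj = np.zeros((d, r * d))  # zero-init => the delta starts at exactly 0

def adaptive_value_weight(h):
    """h: (seq, d) residual-stream slice for one attention head."""
    c = h.mean(axis=0)                  # mean-pool to a conditioning vector
    A = (c @ A_proj).reshape(d, r)      # low-rank factor A (d x r)
    B = (c @ B_proj).reshape(r, d)      # low-rank factor B (r x d)
    return W_v + (alpha / r) * (A @ B)  # LoRA-style alpha/r scaled update

h = rng.normal(size=(seq, d))
print(np.allclose(adaptive_value_weight(h), W_v))  # -> True: inert at init
```

Zero-initializing only B's projection is the same trick LoRA uses: A @ B is identically zero at the start, so gradients can flow into both factors while the forward pass remains exactly the vanilla model until training moves B away from zero.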

The diffusion variant uses bidirectional attention, RMSNorm, squared ReLU, and a learned timestep embedding.

sergiopaniego posted an update about 18 hours ago
Nemotron 3 Super by @nvidia is here! NVIDIA's hybrid Mamba2/Transformer models are now natively supported in transformers (no trust_remote_code needed)

Fine-tune them with TRL in just a few lines of code. Notebook + script included to get started right away. goooo!

- Notebook: https://colab.research.google.com/github/huggingface/trl/blob/main/examples/notebooks/sft_nemotron_3.ipynb
- Script: https://github.com/huggingface/trl/blob/main/examples/scripts/sft_nemotron_3.py
- Collection with all the models: https://huggingface.co/collections/nvidia/nvidia-nemotron-v3
nightmedia posted an update about 18 hours ago
The Qwen3.5-27B performance landscape

I started gathering some numbers on the 27Bs.

You might have noticed that reported metrics differ between Thinking and Instruct models; this is expected. The mxfp8/mxfp4 are the most stable quants I could measure, and I provided Deckard (qx) quants where possible.

Converting a Thinking model to Instruct

The model is thinking/instruct, and instruct mode can be forced by setting the first line of the jinja template to:
{%- set enable_thinking = false %}


Qwen3.5-27B-Text

This is a model I tested with the vision tower removed; its performance is the same as the VL model's.
nightmedia/Qwen3.5-27B-Text-qx86-hi-mlx
quant     arc    arc/e  boolq  hswag  obkqa  piqa   wino
qx86-hi   0.443  0.498  0.857  0.701  0.372  0.770  0.752
mxfp4     0.460  0.527  0.871  0.694  0.370  0.772  0.752


DavidAU/Qwen3.5-27B-Claude-4.6-OS-INSTRUCT

At the top of the heap of the models I tested, as far as metrics go, is this model created by DavidAU. Samples of its output are provided on the model card.
nightmedia/Qwen3.5-27B-Claude-4.6-OS-INSTRUCT-mxfp8-mlx
quant     arc    arc/e  boolq  hswag  obkqa  piqa   wino
mxfp8     0.675  0.827  0.900  0.750  0.496  0.800  0.721
qx86-hi   0.667  0.824  0.902  0.752  0.502  0.791  0.725
qx64-hi   0.664  0.820  0.902
mxfp4     0.653  0.815  0.899

For the Thinking version, see nightmedia/Qwen3.5-27B-Architect-Claude-qx86-hi-mlx

More metrics in comments.

-G

P.S. I will update this as soon as I have new numbers or I find a typo, whichever comes first. The models that show just the arc-check numbers are in the test queue and will be updated soon.
