Tom (TomLucidor)
0 followers · 7 following
AI & ML interests
None yet
Recent Activity
- New activity about 7 hours ago on nightmedia/Qwen3.5-35B-A3B-Text-qx64-hi-mlx: "How is this different from the other quants?"
- New activity 2 days ago on mlx-community/LFM2-8B-A1B-4bit: "ValueError: Model type lfm2_moe not supported."
- Replied to SeaWolf-AI's post 2 days ago:
FINAL Bench Released: The Real Bottleneck to AGI Is Self-Correction

We release FINAL Bench, the first benchmark for measuring functional metacognition in LLMs: the ability to detect and correct one's own reasoning errors. Every existing benchmark measures final-answer accuracy. None measures whether AI knows it is wrong.

Dataset: [FINAL-Bench/Metacognitive](https://huggingface.co/datasets/FINAL-Bench/Metacognitive) | 100 Tasks | 15 Domains | 8 TICOS Types | Apache 2.0
Leaderboard: https://huggingface.co/spaces/FINAL-Bench/Leaderboard
Article: https://huggingface.co/blog/FINAL-Bench/metacognitive

Core Innovation

Our 5-axis rubric separates what no prior benchmark could: MA (Metacognitive Accuracy), the ability to say "I might be wrong", and ER (Error Recovery), the ability to actually fix it. This maps directly to the monitoring-control model of Nelson & Narens (1990) in cognitive psychology.

Three Findings Across 9 SOTA Models

We evaluated GPT-5.2, Claude Opus 4.6, Gemini 3 Pro, DeepSeek-V3.2, Kimi K2.5, and others across 100 expert-level tasks:

1. ER Dominance. 94.8% of MetaCog gain comes from Error Recovery alone. The bottleneck to AGI is not knowledge or reasoning; it is self-correction.
2. Declarative-Procedural Gap. All 9 models can verbalize uncertainty (MA = 0.694) but cannot act on it (ER = 0.302). They sound humble yet fail to self-correct, the most dangerous AI safety profile.
3. Difficulty Effect. Harder tasks benefit dramatically more from metacognition (Pearson r = -0.777, p < 0.001).

```python
from datasets import load_dataset

dataset = load_dataset("FINAL-Bench/Metacognitive", split="train")
```

Paper: FINAL Bench: Measuring Functional Metacognitive Reasoning in LLMs

FINAL Bench is the first tool to tell apart what AI truly knows from what it merely pretends to know.
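The Difficulty Effect finding above rests on a Pearson correlation between task difficulty and the benefit gained from metacognition. As a minimal sketch of that kind of analysis (not code from the FINAL Bench repository; the per-task records and field names here are entirely hypothetical made-up numbers, not the benchmark's data):

```python
# Hedged sketch: correlate task difficulty with "MetaCog gain" the way the
# Difficulty Effect finding does. All task records below are invented for
# illustration; they are NOT FINAL Bench results.

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient for two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-task records: (difficulty level, gain from metacognition).
tasks = [(1, 0.05), (2, 0.08), (3, 0.15), (4, 0.22), (5, 0.31)]
difficulty = [d for d, _ in tasks]
gain = [g for _, g in tasks]

print(round(pearson_r(difficulty, gain), 3))  # → 0.987
```

With real benchmark data the sign and magnitude would of course depend on which two quantities are paired; the sketch only shows the mechanics of the correlation, not the benchmark's reported r = -0.777.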
Organizations
Models (6)
TomLucidor/Qwen3-Coder-Next-REAM-mlx-3Bit · Text Generation · 60B · Updated 11 days ago · 427
TomLucidor/AI21-Jamba2-3B-mlx-4Bit · Text Generation · 0.5B · Updated 11 days ago · 36
TomLucidor/Qwen3-Coder-Next-REAM-mlx-4Bit · Text Generation · 60B · Updated 12 days ago · 372
TomLucidor/Ring-mini-sparse-2.0-exp-mlx-6Bit · Text Generation · 16B · Updated 19 days ago · 59
TomLucidor/Ring-mini-sparse-2.0-exp-mlx-4Bit · Text Generation · 16B · Updated 19 days ago · 44
TomLucidor/Ring-mini-sparse-2.0-exp-mlx-8Bit · Text Generation · 16B · Updated 19 days ago · 53
Datasets (0)
None public yet