---
annotations_creators:
- expert-generated
language:
- en
license: mit
pretty_name: "ALL Bench Leaderboard 2026"
size_categories:
- n<1K
source_datasets:
- original
tags:
- benchmark
- leaderboard
- llm
- vlm
- ai-evaluation
- gpt-5
- claude
- gemini
- final-bench
- metacognition
- multimodal
- ai-agent
- image-generation
- video-generation
- music-generation
- union-eval
task_categories:
- text-generation
- visual-question-answering
- text-to-image
- text-to-video
- text-to-audio
configs:
- config_name: llm
  data_files:
  - split: train
    path: data/llm.jsonl
- config_name: vlm_flagship
  data_files:
  - split: train
    path: data/vlm_flagship.jsonl
- config_name: agent
  data_files:
  - split: train
    path: data/agent.jsonl
- config_name: image
  data_files:
  - split: train
    path: data/image.jsonl
- config_name: video
  data_files:
  - split: train
    path: data/video.jsonl
- config_name: music
  data_files:
  - split: train
    path: data/music.jsonl
models:
- Qwen/Qwen3.5-122B-A10B
- Qwen/Qwen3.5-27B
- Qwen/Qwen3.5-35B-A3B
- Qwen/Qwen3.5-9B
- Qwen/Qwen3.5-4B
- Qwen/Qwen3-Next-80B-A3B-Thinking
- deepseek-ai/DeepSeek-V3
- deepseek-ai/DeepSeek-R1
- zai-org/GLM-5
- meta-llama/Llama-4-Scout-17B-16E-Instruct
- meta-llama/Llama-4-Maverick-17B-128E-Instruct
- microsoft/phi-4
- upstage/Solar-Open-100B
- K-intelligence/Midm-2.0-Base-Instruct
- Nanbeige/Nanbeige4.1-3B
- MiniMaxAI/MiniMax-M2.5
- stepfun-ai/Step-3.5-Flash

- OpenGVLab/InternVL3-78B
- Qwen/Qwen2.5-VL-72B-Instruct
- Qwen/Qwen3-VL-30B-A3B

- black-forest-labs/FLUX.1-dev
- stabilityai/stable-diffusion-3.5-large

- Lightricks/LTX-Video

- facebook/musicgen-large
- facebook/jasco-chords-drums-melody-1B
---

# 🏆 ALL Bench Leaderboard 2026

**The only AI benchmark dataset covering LLM · VLM · Agent · Image · Video · Music in a single unified dataset.**

<p align="center">
  <a href="https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard"><img src="https://img.shields.io/badge/🏆_Live_Leaderboard-ALL_Bench-6366f1?style=for-the-badge" alt="Live Leaderboard"></a>
</p>

<p align="center">
  <a href="https://github.com/final-bench/ALL-Bench-Leaderboard"><img src="https://img.shields.io/badge/GitHub-Repo-black?style=flat-square&logo=github" alt="GitHub"></a>
  <a href="https://huggingface.co/datasets/FINAL-Bench/Metacognitive"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Dataset-blueviolet?style=flat-square" alt="FINAL Bench"></a>
  <a href="https://huggingface.co/spaces/FINAL-Bench/Leaderboard"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Leaderboard-teal?style=flat-square" alt="FINAL Leaderboard"></a>
</p>


## Dataset Summary

ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **90+ AI models** across 6 modalities. Every numerical score is tagged with a confidence level (`cross-verified`, `single-source`, or `self-reported`) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.

| Category | Models | Benchmarks | Description |
|----------|--------|------------|-------------|
| **LLM** | 41 | 32 fields | MMLU-Pro, GPQA, AIME, HLE, ARC-AGI-2, Metacog, SWE-Pro, IFEval, LCB, **Union Eval**, etc. |
| **VLM Flagship** | 11 | 10 fields | MMMU, MMMU-Pro, MathVista, AI2D, OCRBench, MMStar, HallusionBench, etc. |
| **Agent** | 10 | 8 fields | OSWorld, τ²-bench, BrowseComp, Terminal-Bench 2.0, GDPval-AA, SWE-Pro |
| **Image Gen** | 10 | 7 fields | Photo realism, text rendering, instruction following, style, aesthetics |
| **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
| **Music Gen** | 8 | 6 fields | Quality, vocals, instrumental, lyrics, duration |


## What's New — v2.2.1

### 🏅 Union Eval ★NEW

**ALL Bench's proprietary integrated benchmark.** It fuses the discriminative core of 10 existing benchmarks (GPQA, AIME, HLE, MMLU-Pro, IFEval, LiveCodeBench, BFCL, ARC-AGI, SWE, FINAL Bench) into a single 1000-question pool with a season-based rotation system.

**Key features:**
- **100% JSON auto-graded** — every question requires mandatory JSON output with verifiable fields. Zero keyword matching.
- **Fuzzy JSON matching** — tolerates key-name variants and fraction formats, and falls back to text extraction when JSON parsing fails.
- **Season rotation** — 70% new questions each season, 30% anchor questions for cross-season IRT calibration.
- **8 rounds of empirical testing** — v2 (82.4%) → v3 (82.0%) → Final (79.5%) → S2 (81.8%) → S3 (75.0%) → Fuzzy (69.9/69.3%).
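
The fuzzy-matching behavior described above can be sketched as follows. This is a minimal illustration, not the actual grader: the key-variant map, helper names, and fallback regex are all assumptions.

```python
import json
import re
from fractions import Fraction

# Hypothetical key variants a lenient grader might accept (illustrative only).
KEY_VARIANTS = {"answer": {"answer", "final_answer", "result"}}

def normalize_value(v):
    """Coerce '3/4', '0.75', and 0.75 into one comparable form."""
    try:
        return float(Fraction(str(v).strip()))
    except (ValueError, ZeroDivisionError):
        return str(v).strip().lower()

def fuzzy_grade(raw_output: str, expected_key: str, expected_value) -> bool:
    """Parse model output as JSON; fall back to regex text extraction."""
    try:
        obj = json.loads(raw_output)
    except json.JSONDecodeError:
        # Text fallback: grab the first value-like token after the key.
        m = re.search(rf'"{expected_key}"\s*:\s*"?([^",}}\s]+)', raw_output)
        if not m:
            return False
        return normalize_value(m.group(1)) == normalize_value(expected_value)
    for key in KEY_VARIANTS.get(expected_key, {expected_key}):
        if key in obj:
            return normalize_value(obj[key]) == normalize_value(expected_value)
    return False
```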

**Key discovery:** *"The bottleneck in benchmarking is not question difficulty — it's grading methodology."*

**Empirically confirmed LLM weakness map:**
- 🔴 Poetry + code cross-constraints: 18-28%
- 🔴 Complex JSON structure (10+ constraints): 0%
- 🔴 Pure series computation (Σk²/3ᵏ): 0%
- 🟢 Metacognitive reasoning (Bayes, proof errors): 95%
- 🟢 Revised science detection: 86%

**Current scores (S3, 20Q sample, Fuzzy JSON):**

| Model | Union Eval |
|-------|-----------|
| Claude Sonnet 4.6 | **69.9** |
| Claude Opus 4.6 | **69.3** |

### Other v2.2 changes
- Fair Coverage Correction: composite scoring exponent raised from ^0.5 to ^0.7
- +7 FINAL Bench scores (15 total)
- Columns sorted by fill rate
- Model Card popup (click a model name) · FINAL Bench detail popup (click a Metacog score)
- 🔥 Heatmap and 💰 Price vs Performance scatter tools


## Live Leaderboard

👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**

Interactive features: composite ranking, dark mode, advanced search (`GPQA > 90 open`, `price < 1`), Model Finder, Head-to-Head comparison, Trust Map heatmap, Bar Race animation, Model Card popup, FINAL Bench detail popup, and downloadable Intelligence Report (PDF/DOCX).

## Data Structure

```
data/
├── llm.jsonl           # 41 LLMs × 32 fields (incl. unionEval ★NEW)
├── vlm_flagship.jsonl  # 11 flagship VLMs × 10 benchmarks
├── agent.jsonl         # 10 agent models × 8 benchmarks
├── image.jsonl         # 10 image gen models × S/A/B/C ratings
├── video.jsonl         # 10 video gen models × S/A/B/C ratings
└── music.jsonl         # 8 music gen models × S/A/B/C ratings
```
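
Because each config is plain JSON Lines, the files can also be read without the `datasets` library. A minimal sketch, assuming the repository has been cloned locally so the relative `data/` paths resolve:

```python
import json
from pathlib import Path

def load_jsonl(path):
    """Read one JSON object per line, skipping blank lines."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Example (path assumes a local clone of the repo):
# models = load_jsonl(Path("data") / "llm.jsonl")
```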

## LLM Field Schema

| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Model name |
| `provider` | string | Organization |
| `type` | string | `open` or `closed` |
| `group` | string | `flagship`, `open`, `korean`, etc. |
| `released` | string | Release date (YYYY.MM) |
| `mmluPro` | float \| null | MMLU-Pro score (%) |
| `gpqa` | float \| null | GPQA Diamond (%) |
| `aime` | float \| null | AIME 2025 (%) |
| `hle` | float \| null | Humanity's Last Exam (%) |
| `arcAgi2` | float \| null | ARC-AGI-2 (%) |
| `metacog` | float \| null | FINAL Bench Metacognitive score |
| `swePro` | float \| null | SWE-bench Pro (%) |
| `bfcl` | float \| null | Berkeley Function Calling (%) |
| `ifeval` | float \| null | IFEval instruction following (%) |
| `lcb` | float \| null | LiveCodeBench (%) |
| `sweV` | float \| null | SWE-bench Verified (%) — deprecated |
| `mmmlu` | float \| null | Multilingual MMLU (%) |
| `termBench` | float \| null | Terminal-Bench 2.0 (%) |
| `sciCode` | float \| null | SciCode (%) |
| `unionEval` | float \| null | **★NEW** Union Eval S3 — ALL Bench integrated benchmark (100% JSON auto-graded) |
| `priceIn` / `priceOut` | float \| null | USD per 1M tokens |
| `elo` | int \| null | Arena Elo rating |
| `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
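
An illustrative record following this schema — the model name and every value below are invented for illustration. The point is that a missing benchmark is encoded as `null` (Python `None`) and should be excluded from averages, never treated as zero:

```python
# Hypothetical record; all values invented for illustration only.
record = {
    "name": "Example-LLM-70B",
    "provider": "Example Org",
    "type": "open",
    "group": "flagship",
    "released": "2026.01",
    "gpqa": 71.2,
    "unionEval": None,   # null -> benchmark not yet run
    "priceIn": 0.5,
    "priceOut": 1.5,
    "license": "Apache2",
}

# Keep only confirmed benchmark values (absent key or None both count as missing).
benchmarks = ["gpqa", "aime", "unionEval"]
confirmed = {b: record[b] for b in benchmarks if record.get(b) is not None}
```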

## Composite Score

```
Score = Avg(confirmed benchmarks) × (N/10)^0.7
```

10 core benchmarks across the **5-Axis Intelligence Framework**: Knowledge · Expert Reasoning · Abstract Reasoning · Metacognition · Execution.

**v2.2 change:** Exponent adjusted from 0.5 to 0.7 for fairer coverage weighting. Models with 7/10 benchmarks receive ×0.78 (was ×0.84), while 4/10 receives ×0.53 (was ×0.63).
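
The formula can be sketched in a few lines (the function name and dict-based input are assumptions, not the leaderboard's actual implementation):

```python
def composite_score(scores, total_core=10, exponent=0.7):
    """Average of confirmed core benchmarks, penalized for missing coverage.

    `scores` maps benchmark name -> float or None; None means unconfirmed.
    """
    confirmed = [v for v in scores.values() if v is not None]
    if not confirmed:
        return 0.0
    coverage = len(confirmed) / total_core
    return sum(confirmed) / len(confirmed) * coverage ** exponent
```

With the v2.2 exponent, a model averaging 80.0 on 7 of 10 core benchmarks scores 80.0 × (0.7)^0.7 ≈ 62.3, i.e. it keeps about 78% of its raw average.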

## Confidence System

Each benchmark score in the `confidence` object is tagged:

| Level | Badge | Meaning |
|-------|-------|---------|
| `cross-verified` | ✓✓ | Confirmed by 2+ independent sources |
| `single-source` | ✓ | One official or third-party source |
| `self-reported` | ~ | Provider's own claim, unverified |

Example:
```json
"Claude Opus 4.6": {
  "gpqa": { "level": "cross-verified", "source": "Anthropic + Vellum + DataCamp" },
  "arcAgi2": { "level": "cross-verified", "source": "Vellum + llm-stats + NxCode + DataCamp" },
  "metacog": { "level": "single-source", "source": "FINAL Bench dataset" },
  "unionEval": { "level": "single-source", "source": "Union Eval S3 — ALL Bench official" }
}
```
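
Filtering a model's scores down to well-sourced numbers might look like this sketch. The ordering of levels follows the table above; the function name and threshold parameter are assumptions:

```python
# Higher value = more trustworthy, matching the confidence table above.
TRUST = {"cross-verified": 2, "single-source": 1, "self-reported": 0}

def trusted_scores(scores, confidence, min_level="single-source"):
    """Keep only scores whose confidence tag meets the threshold.

    Scores with no confidence entry at all are dropped.
    """
    floor = TRUST[min_level]
    return {b: v for b, v in scores.items()
            if TRUST.get(confidence.get(b, {}).get("level"), -1) >= floor}
```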

## Usage

```python
from datasets import load_dataset

# Load LLM data
ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")
df = ds["train"].to_pandas()

# Top 5 LLMs by GPQA
ranked = df.dropna(subset=["gpqa"]).sort_values("gpqa", ascending=False)
for _, m in ranked.head(5).iterrows():
    print(f"{m['name']:25s} GPQA={m['gpqa']}")

# Union Eval scores
union = df.dropna(subset=["unionEval"]).sort_values("unionEval", ascending=False)
for _, m in union.iterrows():
    print(f"{m['name']:25s} Union Eval={m['unionEval']}")
```


## Union Eval — Integrated AI Assessment

Union Eval is ALL Bench's proprietary benchmark, designed to address three fundamental problems with existing AI evaluations:

1. **Contamination** — public benchmarks leak into training data. Union Eval rotates 70% of its questions each season.
2. **Single-axis measurement** — AIME tests only math, IFEval only instruction following. Union Eval integrates arithmetic, poetry constraints, metacognition, coding, calibration, and myth detection.
3. **Score inflation via keyword matching** — traditional rubric grading awards 100% to well-written answers even when the content is wrong. Union Eval enforces mandatory JSON output with zero keyword matching.
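
The 70/30 season rotation in point 1 can be sketched as a sampling step — a toy illustration with all identifiers assumed, not the actual rotation code:

```python
import random

def build_season(pool, anchors, n_questions=100, anchor_share=0.3, seed=0):
    """Assemble one season's paper: ~30% fixed anchor questions for
    cross-season IRT calibration, ~70% fresh questions from the pool."""
    rng = random.Random(seed)
    n_anchor = round(n_questions * anchor_share)
    fresh = [q for q in pool if q not in set(anchors)]
    return (rng.sample(anchors, n_anchor)
            + rng.sample(fresh, n_questions - n_anchor))
```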

**Structure (S3 — 100 questions drawn from the 1000-question pool):**

| Category | Questions | Role | Expected Score |
|----------|-----------|------|---------------|
| Pure Arithmetic | 10 | Confirmed Killer #1 | 0-57% |
| Poetry/Verse IFEval | 8 | Confirmed Killer #2 | 18-28% |
| Structured Data IFEval | 7 | JSON/CSV verification | 0-70% |
| FINAL Bench Metacognition | 20 | Core brand | 50-95% |
| Union Complex Synthesis | 15 | Extreme multi-domain | 40-73% |
| Revised Science / Myths | 5 | Calibration traps | 50-86% |
| Code I/O, GPQA, HLE | 19 | Expert + execution | 50-100% |
| BFCL Tool Use, Anchors | 16 | Cross-season calibration | varies |

Note: the 100-question set is **not publicly released**, to prevent contamination. Only scores are published.


## FINAL Bench — Metacognitive Benchmark

FINAL Bench measures an AI model's capacity for self-correction. Error Recovery (ER) explains 94.8% of the variance in metacognitive performance. 15 frontier models have been evaluated.

- 🧬 [FINAL-Bench/Metacognitive Dataset](https://huggingface.co/datasets/FINAL-Bench/Metacognitive)
- 🏆 [FINAL-Bench/Leaderboard](https://huggingface.co/spaces/FINAL-Bench/Leaderboard)


## Changelog

| Version | Date | Changes |
|---------|------|---------|
| **v2.2.1** | 2026-03-10 | 🏅 **Union Eval ★NEW** — integrated benchmark column (`unionEval` field). Claude Opus 4.6: 69.3 · Sonnet 4.6: 69.9 |
| v2.2 | 2026-03-10 | Fair Coverage (^0.7), +7 Metacog scores, Model Cards, FINAL Bench popup, Heatmap, Price-Perf |
| v2.1 | 2026-03-08 | Confidence badges, Intelligence Report, source tracking |
| v2.0 | 2026-03-07 | All blanks filled, Korean AI data, 42 LLMs cross-verified |
| v1.9 | 2026-03-05 | +3 LLMs, dark mode, mobile responsive |

## Citation

```bibtex
@misc{allbench2026,
  title={ALL Bench Leaderboard 2026: Unified Multi-Modal AI Evaluation},
  author={ALL Bench Team},
  year={2026},
  url={https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard}
}
```

---

`#AIBenchmark` `#LLMLeaderboard` `#GPT5` `#Claude` `#Gemini` `#ALLBench` `#FINALBench` `#Metacognition` `#UnionEval` `#VLM` `#AIAgent` `#MultiModal` `#HuggingFace` `#ARC-AGI` `#AIEvaluation` `#VIDRAFT.net`