MM-JudgeBias: A Benchmark for Evaluating Compositional Biases in MLLM-as-a-Judge
Abstract
Multimodal Large Language Models (MLLMs) are increasingly used as automatic evaluators, a paradigm known as MLLM-as-a-Judge. However, their reliability and vulnerability to biases remain underexplored. We find that many MLLM judges fail to reliably integrate key visual or textual cues, yielding unreliable evaluations when evidence is missing or mismatched and exhibiting instability under semantically irrelevant perturbations. To address this, we systematically define Compositional Bias in MLLM-as-a-Judge systems and introduce MM-JudgeBias, a benchmark for evaluating it. MM-JudgeBias applies controlled perturbations across the Query, Image, and Response, and evaluates model behavior via two complementary metrics: Bias-Deviation (BD) for sensitivity and Bias-Conformity (BC) for stability. Our dataset of over 1,800 curated and refined multimodal samples, drawn from 29 source benchmarks, enables fine-grained diagnosis of nine bias types across diverse tasks and domains. Experiments on 26 state-of-the-art MLLMs reveal systematic modality neglect and asymmetric evaluation tendencies, underscoring the need for more reliable judges.
Community
ACL 2026 Main
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- When Vision-Language Models Judge Without Seeing: Exposing Informativeness Bias (2026)
- Toward Robust LLM-Based Judges: Taxonomic Bias Evaluation and Debiasing Optimization (2026)
- Advancing Multimodal Judge Models through a Capability-Oriented Benchmark and MCTS-Driven Data Generation (2026)
- Mitigating Translationese Bias in Multilingual LLM-as-a-Judge via Disentangled Information Bottleneck (2026)
- RubricBench: Aligning Model-Generated Rubrics with Human Standards (2026)
- LFQA-HP-1M: A Large-Scale Human Preference Dataset for Long-Form Question Answering (2026)
- Instinct vs. Reflection: Unifying Token and Verbalized Confidence in Multimodal Large Models (2026)