🍡 feedmeAI
Evaluation 54 items


📑 arXiv 1h ago

Beyond Task Completion: An Assessment Framework for Evaluating Agentic AI Systems

Framework evaluates agentic systems across LLM, Memory, Tools, and Environment dimensions using static analysis, dynamic monitoring, and judge-based evaluation to detect policy violations beyond task completion. Based on CloudOps production deployment where success metrics masked compliance failures. Addresses gap in current benchmarks that measure outcomes but not process adherence.

💬 Reddit 2d ago

Qwen 3.6 35B crushes Gemma 4 26B on my tests

User benchmark comparing Qwen 3.6 35B against Gemma 4 26B on 30k-line codebase with 37 intentional bugs and PDF analysis tasks shows Qwen significantly outperforming across agentic capabilities, coding, image-to-text, instruction following, and reasoning. Both models tested at Q4_K_XL quantization for fair comparison.

📑 arXiv 2d ago

The Relic Condition: When Published Scholarship Becomes Material for Its Own Replacement

Extracted the scholarly reasoning systems of two prominent humanities scholars from published corpora, converted them into structured inference-time constraints for LLMs, and tested whether resulting scholar-bots could perform doctoral supervision, peer review, and lecturing at expert quality. Expert assessment found outputs met appointment-level quality standards, raising questions about knowledge work automation from public scholarship alone.

📑 arXiv 2d ago

Mind's Eye: A Benchmark of Visual Abstraction, Transformation and Composition for Multimodal LLMs

Mind's Eye benchmark evaluates MLLMs on eight visuo-cognitive tasks inspired by human intelligence tests, organized under Abstraction-Relation-Transformation taxonomy. Humans achieve 80% accuracy while top MLLMs remain below 50%, revealing failures in visual attention, pattern induction, and mental transformation—core processes of fluid intelligence.

📑 arXiv 2d ago

Neurosymbolic Repo-level Code Localization

Exposes a critical keyword shortcut bias in code localization benchmarks where models rely on superficial lexical matching rather than structural reasoning. Introduces KA-LogicQuery, a diagnostic benchmark requiring structural reasoning without naming hints, revealing catastrophic performance drops in state-of-the-art approaches and motivating a neurosymbolic framework combining neural retrieval with symbolic verification.

📑 arXiv 2d ago

MEDLEY-BENCH: Scale Buys Evaluation but Not Control in AI Metacognition

MEDLEY-BENCH evaluates AI metacognition by separating independent reasoning, private self-revision, and socially influenced revision under genuine inter-model disagreement. Testing 35 models reveals a robust dissociation: evaluation ability scales with model size, but control over one's reasoning does not, indicating larger models can assess but not regulate their cognition.

📑 arXiv 2d ago

Polarization by Default: Auditing Recommendation Bias in LLM-Based Content Curation

540,000 simulated content selections across three major LLM providers and three social platforms reveal structural content selection biases that differ substantially in how they respond to prompting strategies. While biases vary across providers and platforms, certain patterns persist robustly, with implications for LLM-based content curation and recommendation systems.

📑 arXiv 2d ago

JFinTEB: Japanese Financial Text Embedding Benchmark

JFinTEB is the first comprehensive benchmark for Japanese financial text embeddings, covering retrieval and classification tasks including sentiment analysis, document categorization, and economic survey classification. Evaluates diverse embedding models on language-specific and domain-specific financial text processing scenarios.

📑 arXiv 3d ago

Diagnosing LLM Judge Reliability: Conformal Prediction Sets and Transitivity Violations

Split conformal prediction applied to LLM-as-judge frameworks reveals reliability issues masked by aggregate metrics: 33-67% of documents show transitivity violations despite low average rates, and prediction set width serves as a per-instance reliability indicator with strong correlation to actual uncertainty. The approach provides theoretically guaranteed coverage bounds for judge outputs.
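The split conformal recipe behind this is simple to sketch. Below is a minimal, hypothetical illustration (not the paper's code): a judge's numeric score is wrapped in a prediction interval calibrated on held-out human labels, and a wide interval flags an unreliable per-instance judgment.

```python
import numpy as np

def conformal_interval(cal_scores, cal_labels, test_score, alpha=0.1):
    # Nonconformity = |judge score - human label| on a held-out calibration set
    residuals = np.abs(np.asarray(cal_scores) - np.asarray(cal_labels))
    n = len(residuals)
    # Finite-sample quantile gives >= (1 - alpha) marginal coverage
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, level, method="higher")
    # Interval width (2 * q) acts as the per-instance reliability indicator
    return test_score - q, test_score + q

# Toy calibration data: judge scores that track human labels with small noise
rng = np.random.default_rng(0)
labels = rng.uniform(0, 1, 200)
scores = labels + rng.normal(0, 0.05, 200)
lo, hi = conformal_interval(scores, labels, test_score=0.7)
```

The coverage guarantee holds regardless of the judge model, which is what makes this attractive for auditing black-box evaluators.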

📑 arXiv 3d ago

Why Do Vision Language Models Struggle To Recognize Human Emotions?

Vision-language models struggle to recognize human emotions, underperforming even specialized vision-only classifiers despite progress on other visual tasks. The study identifies two critical vulnerabilities: long-tailed emotion dataset distributions exacerbated by web-scale pretraining, and challenges with continuous dynamic facial expression recognition. Reveals fundamental gap in VLM emotional understanding capabilities.

📑 arXiv 3d ago

SegWithU: Uncertainty as Perturbation Energy for Single-Forward-Pass Risk-Aware Medical Image Segmentation

SegWithU augments frozen pretrained segmentation models with a lightweight uncertainty head that produces voxel-wise uncertainty maps using rank-1 posterior probes in a compact feature space. Unlike existing methods requiring repeated inference, it achieves strong failure detection and calibration in a single forward pass for medical image segmentation.

📑 arXiv 3d ago

CoopEval: Benchmarking Cooperation-Sustaining Mechanisms and LLM Agents in Social Dilemmas

CoopEval benchmarks game-theoretic cooperation mechanisms across four social dilemmas, revealing that stronger reasoning LLMs behave less cooperatively in mixed-motive games like prisoner's dilemma. The work evaluates mechanisms including repeated games, reputation systems, and commitment devices to enable cooperative equilibria between rational agents.
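The repeated-game mechanism CoopEval evaluates can be sketched in a few lines. This toy simulation (hypothetical strategies, standard prisoner's dilemma payoffs, not the benchmark's code) shows how repetition lets a conditional cooperator punish defection:

```python
# Standard prisoner's dilemma payoffs for (row, col): T > R > P > S
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def play(strat_a, strat_b, rounds=10):
    # Each strategy sees only the opponent's history of moves
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

tit_for_tat = lambda opp: "C" if not opp else opp[-1]   # cooperate, then mirror
always_defect = lambda opp: "D"

result = play(tit_for_tat, always_defect, rounds=10)
```

Tit-for-tat loses only the first round to a pure defector and matches it thereafter; an LLM agent's play in this loop is what the benchmark scores.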

📑 arXiv 3d ago

Context Over Content: Exposing Evaluation Faking in Automated Judges

A stakes-signaling vulnerability shows that LLM-as-a-judge models systematically corrupt their assessments when informed of the downstream consequences their verdicts will have on evaluated models. Controlled experiments across 1,520 responses on safety and quality benchmarks demonstrate judges evaluate based on contextual framing rather than strictly on semantic content, undermining the operational backbone of automated AI evaluation pipelines.

📑 arXiv 3d ago

AI-Assisted Requirements Engineering: An Empirical Evaluation Relative to Expert Judgment

Empirical study evaluates AI-assisted requirements engineering tools against expert judgment using INCOSE criteria in controlled systems engineering methodology. Research investigates whether AI can support quality assessment and validation of requirements without replacing professional expertise. Addresses gap in understanding AI's role within formal systems engineering processes.

📑 arXiv 3d ago

Learning to Think Like a Cartoon Captionist: Incongruity-Resolution Supervision for Multimodal Humor Understanding

IRS framework decomposes humor understanding into three structured components: identifying visual incongruities, constructing coherent reinterpretations, and aligning with human preference judgments. Applies incongruity-resolution theory to the New Yorker Cartoon Caption Contest, moving beyond black-box prediction to explicit reasoning processes. Demonstrates that humor comprehension requires getting both the answer and the underlying reasoning correct.

📑 arXiv 3d ago

MADE: A Living Benchmark for Multi-Label Text Classification with Uncertainty Quantification of Medical Device Adverse Events

MADE introduces a living multi-label text classification benchmark for medical device adverse events, continuously updated with new reports to prevent training data contamination. Features long-tailed hierarchical labels and enables uncertainty quantification evaluation critical for high-stakes healthcare ML. Addresses benchmark saturation and memorization vs. reasoning distinction.

📑 arXiv 3d ago

An Analysis of Regularization and Fokker-Planck Residuals in Diffusion Models for Image Generation

Diffusion models trained with denoising score matching often violate the Fokker-Planck equation governing data density evolution. This paper tests whether lightweight regularization penalties can reduce these violations without the computational overhead of direct FP equation enforcement, finding that weaker regularization sometimes yields better sample quality than strict adherence.
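For reference, the Fokker-Planck equation in question is standard: for a forward SDE $\mathrm{d}x = f(x,t)\,\mathrm{d}t + g(t)\,\mathrm{d}w$, the marginal density $p_t$ must satisfy

$$
\frac{\partial p_t(x)}{\partial t} = -\nabla \cdot \big(f(x,t)\,p_t(x)\big) + \tfrac{1}{2}\,g(t)^2\,\Delta p_t(x),
$$

and the "FP residual" is the degree to which the density implied by the learned score $s_\theta(x,t) \approx \nabla_x \log p_t(x)$ fails to satisfy this identity. The regularization penalties studied here push that residual toward zero without evaluating the equation directly.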

📑 arXiv 3d ago

Fabricator or dynamic translator?

Study examines LLM overgeneration patterns in machine translation, distinguishing between neurobabble confabulations and appropriate explanatory additions that mimic human translator behavior. The work focuses on the commercial deployment challenge of detecting and classifying these overgenerations. The novel contribution is a taxonomy of LLM translation behaviors ranging from harmful confabulations to helpful contextual explanations.

📑 arXiv 3d ago

QuantCode-Bench: A Benchmark for Evaluating the Ability of Large Language Models to Generate Executable Algorithmic Trading Strategies

QuantCode-Bench provides 400 tasks evaluating LLMs on generating executable algorithmic trading strategies for Backtrader from English descriptions. Unlike standard code benchmarks, requires domain-specific financial logic, specialized API knowledge, and code producing actual trades on historical data, with tasks sourced from Reddit, TradingView, and synthetic generators.

📑 arXiv 3d ago

DiscoTrace: Representing and Comparing Answering Strategies of Humans and LLMs in Information-Seeking Question Answering

DiscoTrace analyzes rhetorical strategies in information-seeking answers by representing them as sequences of discourse acts paired with question interpretations. Human communities show diverse answering preferences, while LLMs lack rhetorical diversity and systematically favor breadth over depth regardless of prompting. Reveals fundamental differences in how humans and models construct answers beyond surface-level content.

📑 arXiv 3d ago

Blinded Multi-Rater Comparative Evaluation of a Large Language Model and Clinician-Authored Responses in CGM-Informed Diabetes Counseling

Blinded multi-rater study with 6 senior diabetes clinicians evaluated a retrieval-grounded LLM conversational agent for CGM data interpretation and patient counseling support across 12 cases. The system generated plain-language explanations while avoiding individualized therapeutic advice, addressing the time-intensive nature of CGM pattern explanation. Contributes evidence toward RAG-based clinical decision support in diabetes care.

📑 arXiv 3d ago

When Fairness Metrics Disagree: Evaluating the Reliability of Demographic Fairness Assessment in Machine Learning

Multi-metric analysis of demographic fairness in ML reveals that different fairness metrics produce conflicting assessments of the same system because they capture distinct statistical properties. Using face recognition experiments, demonstrates that fairness evaluation reliability depends critically on metric choice, challenging the assumption that fairness assessments are consistent across metrics.
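The disagreement is easy to reproduce on toy data. In this hypothetical example (not from the paper), demographic parity and equal opportunity give different verdicts on the same predictions because the groups have different base rates:

```python
import numpy as np

def demographic_parity_diff(y_pred, group):
    # Gap in positive prediction rates between groups
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

def equal_opportunity_diff(y_true, y_pred, group):
    # Gap in true-positive rates (recall) between groups
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return tpr(0) - tpr(1)

# Group 0 has a higher base rate of positives, so equal selection rates
# (parity gap = 0) still leave unequal true-positive rates.
y_true = np.array([1, 1, 1, 0, 1, 0, 0, 0])
y_pred = np.array([1, 1, 0, 0, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

dp = demographic_parity_diff(y_pred, group)          # 0.0: "fair" by parity
eo = equal_opportunity_diff(y_true, y_pred, group)   # nonzero: "unfair" by EO
```

A system judged fair under one metric and unfair under another is exactly the inconsistency the paper quantifies.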

🤗 Hugging Face 4d ago

DR³-Eval: Towards Realistic and Reproducible Deep Research Evaluation

DR³-Eval provides a reproducible benchmark for deep research agents using static research sandbox corpora paired with authentic user tasks, measuring multimodal report generation across dimensions including information recall, factual accuracy, and citation coverage. It addresses the challenge of evaluating long-horizon research tasks by simulating open-web complexity while remaining fully verifiable.

💬 Reddit 4d ago

Failure to Reproduce Modern Paper Claims [D]

Community report of reproducibility crisis: 4 out of 7 recent ML papers failed to reproduce claimed results, with 2 having unresolved GitHub issues. Highlights growing concerns about research quality and verification standards. Reflects broader questions about publication incentives and validation rigor in current ML research.

📑 arXiv 1w ago

RationalRewards: Reasoning Rewards Scale Visual Generation Both Training and Test Time

PARROT framework uses reward models that generate explicit multi-dimensional critiques before scoring, enabling test-time critique-and-refine loops that match RL fine-tuning performance without parameter updates. Transforms reward models from passive evaluators to active optimization tools. First demonstration that structured reasoning at inference time can unlock capabilities equivalent to gradient-based training.

📑 arXiv 2w ago

Proactive Agent Research Environment: Simulating Active Users to Evaluate Proactive Assistants

Proactive Agent Research Environment simulates active users to evaluate AI assistants that anticipate needs and initiate actions rather than just responding to queries. Existing benchmarks lack realistic user simulation for testing proactive behaviors like timely suggestions and anticipatory information gathering. Bridges the gap between passive query-response evaluation and true assistant capabilities needed in high-stakes domains.

📝 Blog Mar 8

Format Compliance as Separate Capability: Small Models Lack It

Production testing reveals Gemma 12B and Qwen 3.5 35B return correct answers in unparseable formats despite explicit instructions, e.g. Python literals or Markdown tables where CSV was requested. Format compliance is an independent capability missing from all major benchmarks (SWE-bench, Aider, LiveBench, SEAL), a critical gap for production pipelines where the consumers are parsers, not humans. Smaller models fundamentally lack the instruction-following precision needed for machine-readable output.
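Pipelines that depend on machine-readable output typically guard against exactly these failure modes with a validator. A minimal sketch (hypothetical helper, not from the post) that rejects code fences and Markdown tables before attempting a CSV parse:

```python
import csv
import io

def is_csv_compliant(output: str, expected_cols: list[str]) -> bool:
    # Reject the common failure modes described above: code fences and Markdown tables
    text = output.strip()
    if text.startswith("```") or text.startswith("|"):
        return False
    try:
        rows = list(csv.reader(io.StringIO(text)))
    except csv.Error:
        return False
    # Require a non-empty parse with exactly the requested header
    return bool(rows) and rows[0] == expected_cols

good = "name,score\nalice,0.9"
bad = "```python\n[('alice', 0.9)]\n```"
```

In practice such a check gates a retry-with-feedback loop, which is the usual workaround when the model itself cannot be trusted to honor the format.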