Zero-shot World Model (ZWM) achieves state-of-the-art performance on visual-cognitive tasks using only the visual experience data of a single child, requiring orders of magnitude less training data than current AI systems. The model demonstrates zero-shot transfer without task-specific training, offering a blueprint for human-scale data efficiency.
Hugging Face tutorial on building a fast multilingual OCR model using synthetic data generation. Demonstrates techniques for creating training data without manual annotation. Practical guide for scaling OCR across multiple languages efficiently.
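The core idea of the tutorial, generating (label, image-like) training pairs programmatically instead of annotating scans by hand, can be sketched as follows. This is a minimal illustration, not the tutorial's actual pipeline: the alphabets, noise model, and function names are all assumptions, and a real pipeline would render text to images with varied fonts and degradations.

```python
import random

# Hypothetical per-language alphabets; a real pipeline would cover full scripts.
ALPHABETS = {
    "en": "abcdefghijklmnopqrstuvwxyz",
    "de": "abcdefghijklmnopqrstuvwxyz\u00e4\u00f6\u00fc\u00df",
    "fr": "abcdefghijklmnopqrstuvwxyz\u00e9\u00e8\u00ea\u00e7",
}

def synth_line(lang: str, length: int = 8, rng=random) -> tuple[str, str]:
    """Return a (ground_truth, degraded) pair: the ground truth is a free,
    automatically known label; the degraded copy simulates OCR-style noise
    (random character substitutions) that the model learns to undo."""
    chars = ALPHABETS[lang]
    truth = "".join(rng.choice(chars) for _ in range(length))
    degraded = "".join(
        c if rng.random() > 0.1 else rng.choice(chars) for c in truth
    )
    return truth, degraded

rng = random.Random(0)
# Nine labeled samples across three languages, with zero manual annotation.
pairs = [synth_line(lang, rng=rng) for lang in ALPHABETS for _ in range(3)]
```

Because labels come for free at generation time, scaling to a new language only requires adding its alphabet (or fonts), not a new annotation effort.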
JFinTEB is the first comprehensive benchmark for Japanese financial text embeddings, covering retrieval and classification tasks including sentiment analysis, document categorization, and economic survey classification. Evaluates diverse embedding models on language-specific and domain-specific financial text processing scenarios.
CoEvolve is an agent-data mutual evolution framework enabling LLM agents to improve through closed-loop, interaction-driven training. It extracts feedback signals such as forgetting and uncertainty to identify failure-prone patterns, then uses LLM-based task synthesis to adapt the training data distribution alongside the agent.
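The closed loop described above can be sketched in a few lines. Everything here is illustrative, not CoEvolve's actual implementation: `agent_attempt` stands in for a real agent rollout, `synthesize_variants` stands in for LLM-based task synthesis, and the difficulty heuristic is invented for the demo.

```python
import random

def agent_attempt(task: str, rng) -> tuple[bool, float]:
    """Stand-in for an agent rollout: returns (success, uncertainty).
    Hypothetical heuristic: longer tasks fail more and look less certain."""
    difficulty = len(task) / 20
    success = rng.random() > difficulty
    uncertainty = min(1.0, difficulty + rng.random() * 0.2)
    return success, uncertainty

def synthesize_variants(task: str) -> list[str]:
    """Stand-in for LLM-based task synthesis: spawn related tasks
    around a failure-prone pattern."""
    return [task + " variant-a", task + " variant-b"]

def coevolve_step(tasks: list[str], rng, thresh: float = 0.5) -> list[str]:
    """One closed-loop step: roll out, read feedback signals,
    and grow the task pool where the agent struggles."""
    new_tasks = list(tasks)
    for t in tasks:
        ok, unc = agent_attempt(t, rng)
        if not ok or unc > thresh:  # failure-prone pattern detected
            new_tasks.extend(synthesize_variants(t))
    return new_tasks

rng = random.Random(0)
pool = coevolve_step(["navigate menu", "fill long checkout form"], rng)
```

The point of the loop is that the data distribution shifts toward the agent's current weaknesses, so agent and dataset evolve together rather than the agent training on a fixed corpus.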
MADE introduces a living multi-label text classification benchmark for medical device adverse events, continuously updated with new reports to prevent training data contamination. It features long-tailed hierarchical labels and enables the uncertainty quantification evaluation critical for high-stakes healthcare ML, addressing benchmark saturation and the memorization-versus-reasoning distinction.
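One common way to evaluate uncertainty quantification in a high-stakes setting is selective prediction: score the model only on predictions it is confident enough to make, and report coverage alongside accuracy. The sketch below is a generic illustration of that idea, not MADE's official protocol; the threshold and data are invented.

```python
def selective_accuracy(preds, labels, confidences, tau):
    """Keep only predictions with confidence >= tau, then report
    (accuracy on kept predictions, fraction of inputs answered).
    A well-calibrated model trades coverage for accuracy gracefully."""
    kept = [(p, y) for p, y, c in zip(preds, labels, confidences) if c >= tau]
    if not kept:
        return None, 0.0
    acc = sum(p == y for p, y in kept) / len(kept)
    coverage = len(kept) / len(preds)
    return acc, coverage

# Toy example: the one wrong prediction is also the least confident,
# so abstaining below tau=0.5 removes the error.
acc, cov = selective_accuracy(
    preds=[1, 0, 1, 1],
    labels=[1, 0, 0, 1],
    confidences=[0.9, 0.8, 0.4, 0.95],
    tau=0.5,
)
```

In a triage setting, the abstained cases would be routed to a human reviewer, which is why coverage matters as much as accuracy.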
OpenMobile is an open-source framework for synthesizing high-quality mobile agent task instructions and trajectories, achieving nearly 70% success on AndroidWorld. It features scalable task synthesis using a global environment memory and a policy-switching strategy that alternates between learner and expert models during trajectory rollout. Unlike leading closed models, it makes its training recipes transparent.
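A minimal sketch of per-step policy switching during rollout, assuming a probabilistic switch: the expert occasionally takes over so that trajectories the learner would derail still produce usable training data. The switching rule, probability, and policy signatures here are illustrative, not OpenMobile's actual strategy.

```python
import random

def rollout(task_steps: int, learner, expert, switch_p: float, rng) -> list[str]:
    """Roll out one trajectory, choosing the acting policy per step.
    With probability switch_p the expert acts; otherwise the learner does."""
    traj = []
    for step in range(task_steps):
        policy = expert if rng.random() < switch_p else learner
        traj.append(policy(step))
    return traj

# Toy stand-in policies that just tag which model produced each action.
learner = lambda s: f"learner_action_{s}"
expert = lambda s: f"expert_action_{s}"

rng = random.Random(1)
traj = rollout(6, learner, expert, switch_p=0.5, rng=rng)
```

Mixing policies this way keeps the collected trajectories close to states the learner actually visits while still demonstrating expert recoveries, a common motivation for interleaved rollout schemes.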
VCR-Agent is a multi-agent framework that generates mechanistic action graphs to represent biological reasoning in virtual cells, enabling verification and falsification of LLM-generated explanations. The authors release VC-TRACES, a dataset of verified biological mechanisms, addressing the challenge of producing factually grounded scientific explanations from LLMs in open-ended domains like biology.
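The appeal of an action-graph representation is that each edge is a discrete, checkable claim. The sketch below shows that idea with a graph of (subject, action, object) triples verified against a trusted fact set; the schema, fact set, and gene names are illustrative, not VCR-Agent's actual representation.

```python
# A mechanistic explanation decomposed into atomic (subject, action, object)
# edges, each of which can be verified or falsified independently.
graph = [
    ("TP53", "activates", "p21"),
    ("p21", "inhibits", "CDK2"),
]

# Stand-in for a curated knowledge source; a real system would query a
# pathway database rather than a hard-coded set.
known_facts = {
    ("TP53", "activates", "p21"),
    ("p21", "inhibits", "CDK2"),
}

def verify(graph, facts):
    """Return edges lacking support in the fact set: these are the
    candidates for falsification or further evidence gathering."""
    return [edge for edge in graph if edge not in facts]

unsupported = verify(graph, known_facts)  # empty: every edge is grounded
```

Free-text explanations cannot be checked this way; decomposing them into edges is what makes falsification tractable.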