📑 arXiv 3d ago
An Axiomatic Benchmark for Evaluation of Scientific Novelty Metrics
Proposes an axiomatic benchmark for scientific novelty metrics that avoids confounded proxies such as citation counts or peer-review scores. It addresses a fundamental evaluation challenge for AI-scientist systems by enabling reliable, automated novelty assessment without conflating novelty with impact, quality, or reviewer preference.