NeurIPS 2025: Reward Reasoning Model
Description
The source details the development and evaluation of Reward Reasoning Models (RRMs), which improve Large Language Model (LLM) alignment by performing an explicit chain-of-thought reasoning process before emitting a final reward. This structure lets RRMs adaptively spend additional test-time compute on evaluation tasks that require nuanced judgment. The models are trained with a reinforcement learning framework that encourages reasoning skills to self-evolve, without requiring explicit reasoning traces as initial training data. Experimental results show that RRMs achieve superior performance across diverse reward modeling and reasoning benchmarks, often outperforming competing models with far more parameters. The document further validates the practical effectiveness of RRMs in tasks such as reward-guided best-of-N response selection and robust LLM post-training alignment. Overall, the work demonstrates that coupling explicit reasoning with reward prediction yields a scalable, state-of-the-art approach to reward modeling.
Source: https://openreview.net/pdf?id=V8Kbz7l2cr
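To make the reward-guided best-of-N selection mentioned above concrete, here is a minimal Python sketch. Everything in it is an assumption for illustration, not taken from the paper: rrm_judge is a hypothetical stand-in for a call to a reward reasoning model that first produces its reasoning and then a scalar reward, and best_of_n simply keeps the candidate with the highest reward.

```python
from dataclasses import dataclass


@dataclass
class Judgement:
    reasoning: str  # chain-of-thought produced before the score
    reward: float   # final scalar reward


def rrm_judge(prompt: str, response: str) -> Judgement:
    """Hypothetical stand-in for a reward reasoning model call:
    the model reasons first, then emits a reward."""
    # Placeholder heuristic so the sketch runs; a real RRM would be an LLM call.
    reasoning = f"Evaluating a {len(response)}-character response to: {prompt!r}"
    reward = min(len(response) / 100.0, 1.0)
    return Judgement(reasoning=reasoning, reward=reward)


def best_of_n(prompt: str, candidates: list[str]) -> str:
    """Reward-guided best-of-N selection: score every candidate with the
    judge and return the highest-reward response."""
    judgements = [(rrm_judge(prompt, c), c) for c in candidates]
    _, best_response = max(judgements, key=lambda jc: jc[0].reward)
    return best_response


if __name__ == "__main__":
    prompt = "Explain why the sky is blue."
    candidates = [
        "Rayleigh scattering.",
        "Shorter wavelengths of sunlight scatter more strongly in the atmosphere, so blue light reaches us from all directions.",
    ]
    print(best_of_n(prompt, candidates))
```

Because the judge reasons before scoring, more test-time compute (longer reasoning, or scoring more candidates) can be traded for better selection quality, which is the scaling behavior the paper reports.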




