DiscoverAI: post transformers
NeurIPS 2025: Reward Reasoning Model

Update: 2025-11-29

Description

The source details the development and evaluation of Reward Reasoning Models (RRMs), which improve Large Language Model (LLM) alignment by performing an explicit chain-of-thought reasoning process before emitting a final reward. This structure lets RRMs adaptively allocate inference-time compute to complex evaluation tasks that require nuanced judgment. The models are trained with a reinforcement learning framework that promotes the self-evolution of reasoning skills without requiring explicit reasoning traces as training data. Experiments show that RRMs achieve superior performance across diverse reward modeling and reasoning benchmarks, often outperforming competing models with far more parameters. The document further validates the practical effectiveness of RRMs in tasks such as reward-guided best-of-N response selection (sketched below) and robust LLM post-training alignment. Overall, the work demonstrates the scalable benefits of coupling reasoning capabilities with reward prediction, establishing a new state of the art.
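
To make the best-of-N setting concrete, here is a minimal Python sketch. It assumes a hypothetical `rrm_score(prompt, response)` helper standing in for the RRM's two-stage judging (produce a reasoning trace, then a scalar reward); neither the function nor its signature comes from the paper.

```python
from typing import Callable, List, Tuple

def best_of_n(
    prompt: str,
    candidates: List[str],
    rrm_score: Callable[[str, str], Tuple[str, float]],
) -> str:
    """Return the candidate response with the highest RRM reward.

    `rrm_score` is assumed to return (reasoning_trace, reward): the judge
    "thinks" in text before committing to a score, which is what lets it
    spend extra inference-time compute on harder comparisons.
    """
    best_response, best_reward = candidates[0], float("-inf")
    for response in candidates:
        _trace, reward = rrm_score(prompt, response)  # reasoning happens inside
        if reward > best_reward:
            best_response, best_reward = response, reward
    return best_response

if __name__ == "__main__":
    # Toy judge for demonstration: reward is just response length,
    # a placeholder for a real reward reasoning model.
    toy_judge = lambda p, r: (f"critique of {r!r}", float(len(r)))
    print(best_of_n("Explain RRMs.", ["short answer", "a longer, fuller answer"], toy_judge))
```

The selection loop itself is trivial; the adaptive compute lives inside the reward call, where a harder prompt can simply elicit a longer reasoning trace before the scalar reward is emitted.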


Source: https://openreview.net/pdf?id=V8Kbz7l2cr



mcgrof