NeurIPS 2025: SeRL: Self-Play Reinforcement Learning for Large Language Models with Limited Data

Update: 2025-11-29

Description

The paper introduces Self-play Reinforcement Learning (SeRL), a framework for improving the reasoning capabilities of Large Language Models (LLMs) when extensive, high-quality labeled data is unavailable. SeRL comprises two complementary modules. The self-instruction module generates new, diverse training problems from a small seed dataset, using an online filtering strategy to maintain data quality and appropriate difficulty. The self-rewarding module removes the need for external supervision by estimating response rewards with a majority-voting mechanism over sampled outputs. Together, the two modules enable sustained, label-free reinforcement learning across multiple training iterations. Experiments show that SeRL consistently outperforms existing self-play methods and matches the performance of models trained on full datasets with verifiable rewards.


Source:

https://openreview.net/pdf?id=ZF93vyH9He
