Daily Paper Cast

DuPO: Enabling Reliable LLM Self-Verification via Dual Preference Optimization

Update: 2025-08-22

Description

🤗 Upvotes: 57 | cs.LG, cs.CL

Authors:

Shuaijie She, Yu Bao, Yu Lu, Lu Xu, Tao Li, Wenhao Zhu, Shujian Huang, Shanbo Cheng, Lu Lu, Yuxuan Wang

Title:

DuPO: Enabling Reliable LLM Self-Verification via Dual Preference Optimization

Arxiv:

http://arxiv.org/abs/2508.14460v1

Abstract:

We present DuPO, a dual learning-based preference optimization framework that generates annotation-free feedback via a generalized duality. DuPO addresses two key limitations: the reliance of Reinforcement Learning with Verifiable Rewards (RLVR) on costly labels and its restriction to verifiable tasks, and traditional dual learning's restriction to strictly dual task pairs (e.g., translation and back-translation). Specifically, DuPO decomposes a primal task's input into known and unknown components, then constructs its dual task to reconstruct the unknown part using the primal output and the known information (e.g., reversing a math solution to recover hidden variables), broadening applicability to non-invertible tasks. The quality of this reconstruction serves as a self-supervised reward for optimizing the primal task, synergizing with an LLM's ability to instantiate both tasks in a single model. Empirically, DuPO achieves substantial gains across diverse tasks: it improves average translation quality by 2.13 COMET over 756 directions, boosts mathematical reasoning accuracy by an average of 6.4 points on three challenging benchmarks, and lifts performance by 9.3 points when used as an inference-time reranker (trading computation for accuracy). These results position DuPO as a scalable, general, and annotation-free paradigm for LLM optimization.
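The loop the abstract describes is straightforward to sketch: sample several primal outputs, run the dual (reconstruction) task on each, score how well the hidden input component is recovered, and turn the scores into preference pairs. Below is a minimal, hypothetical Python sketch of that idea; the `generate` callable, the prompt wording, and the string-similarity reward are all illustrative assumptions, not the authors' implementation (the paper likely uses task-specific scoring, e.g., COMET for translation).

```python
# Minimal, hypothetical sketch of DuPO's annotation-free reward loop.
# `generate` stands in for any LLM call (e.g., a chat/completion API);
# the prompts and the similarity-based reward are illustrative assumptions.
from difflib import SequenceMatcher
from typing import Callable, List, Tuple


def dual_reward(
    generate: Callable[[str], str],
    known: str,           # visible part of the primal input
    unknown: str,         # hidden component the dual task must recover
    primal_prompt: str,   # primal-task instruction built from the full input
) -> Tuple[str, float]:
    """Run the primal task, then score how well the dual task reconstructs
    the hidden component from the primal output plus the known part."""
    primal_output = generate(primal_prompt)
    # Dual task: recover the unknown part from the output + known info
    # (e.g., recover a hidden variable from a math solution).
    dual_prompt = (
        f"Known context:\n{known}\n\n"
        f"Output of the original task:\n{primal_output}\n\n"
        "Reconstruct the missing input component."
    )
    reconstruction = generate(dual_prompt)
    # Self-supervised reward: reconstruction quality as a proxy
    # (string similarity here; task-specific metrics in practice).
    reward = SequenceMatcher(None, reconstruction.strip(), unknown.strip()).ratio()
    return primal_output, reward


def build_preference_pair(
    generate: Callable[[str], str],
    known: str,
    unknown: str,
    primal_prompt: str,
    n_samples: int = 4,
) -> Tuple[str, str]:
    """Sample several primal outputs, rank them by dual reward, and return
    a (chosen, rejected) pair for DPO-style preference optimization."""
    scored: List[Tuple[float, str]] = []
    for _ in range(n_samples):
        output, reward = dual_reward(generate, known, unknown, primal_prompt)
        scored.append((reward, output))
    scored.sort(reverse=True)
    return scored[0][1], scored[-1][1]  # best vs. worst by dual reward
```

The (chosen, rejected) pair produced this way can feed any standard DPO-style trainer, which is what makes the feedback annotation-free; the same dual reward can also rank candidates at inference time, the "trading computation for accuracy" reranking mode the abstract mentions.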

Hosts: Jingwen Liang, Gengyu Wang