LiveTalk: Real-Time Multimodal Interactive Video Diffusion via Improved On-Policy Distillation

Update: 2025-12-31

Description

🤗 Upvotes: 51 | cs.CV



Authors:

Ethan Chern, Zhulin Hu, Bohao Tang, Jiadi Su, Steffi Chern, Zhijie Deng, Pengfei Liu



Title:

LiveTalk: Real-Time Multimodal Interactive Video Diffusion via Improved On-Policy Distillation



Arxiv:

http://arxiv.org/abs/2512.23576v1



Abstract:

Real-time video generation via diffusion is essential for building general-purpose multimodal interactive AI systems. However, diffusion models denoise all video frames simultaneously with bidirectional attention through an iterative process, which prevents real-time interaction. Existing distillation methods can mitigate this by making the model autoregressive and reducing the number of sampling steps, but they focus primarily on text-to-video generation, leaving human-AI interaction unnatural and inefficient. This paper targets real-time interactive video diffusion conditioned on a multimodal context, including text, image, and audio, to bridge that gap. Observing that the leading on-policy distillation approach, Self Forcing, encounters challenges with multimodal conditioning (visual artifacts such as flickering, black frames, and quality degradation), we investigate an improved distillation recipe that emphasizes the quality of the condition inputs as well as the initialization and schedule of the on-policy optimization. On benchmarks for multimodal-conditioned (audio, image, and text) avatar video generation, including HDTF, AVSpeech, and CelebV-HQ, our distilled model matches the visual quality of full-step, bidirectional baselines of similar or larger size at 20x lower inference cost and latency. Further, we integrate our model with audio language models and the long-form video inference technique Anchor-Heavy Identity Sinks to build LiveTalk, a real-time multimodal interactive avatar system. System-level evaluation on our curated multi-turn interaction benchmark shows that LiveTalk outperforms state-of-the-art models (Sora2, Veo3) in multi-turn video coherence and content quality while reducing response latency from 1-2 minutes to real time, enabling seamless human-AI multimodal interaction.
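
For readers unfamiliar with the technique the abstract names: the core idea of on-policy distillation here is that a few-step, causal (autoregressive) student is trained on video it generates itself, with a frozen full-step teacher supervising those samples. The sketch below is a minimal, illustrative PyTorch rendering of that loop. The ToyDenoiser modules, tensor shapes, noise level, and plain MSE matching loss are all assumptions made for clarity, not the paper's actual architecture or objective, which additionally conditions on audio, image, and text and tunes the initialization and optimization schedule.

```python
# Minimal, runnable sketch of on-policy distillation for a causal video
# generator, in the spirit of Self Forcing-style recipes. Everything below
# (module design, shapes, loss) is a toy stand-in, not the paper's method.
import torch
import torch.nn as nn

FRAME_DIM, STEPS = 64, 2  # toy latent size per frame; student sampling steps

class ToyDenoiser(nn.Module):
    """Stand-in for a video diffusion backbone (teacher or student)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FRAME_DIM + 1, 128), nn.SiLU(),
                                 nn.Linear(128, FRAME_DIM))

    def forward(self, x, t):
        # Predict the clean frame latent from a noisy latent and timestep t.
        return self.net(torch.cat([x, t.expand(x.shape[0], 1)], dim=-1))

teacher = ToyDenoiser().eval()   # frozen full-step, bidirectional teacher
student = ToyDenoiser()          # few-step, causal student being distilled
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

def student_rollout(batch=8, chunks=3):
    """On-policy: the student autoregressively generates its OWN frames,
    so training sees the same distribution it will face at inference."""
    frames, prev = [], torch.zeros(batch, FRAME_DIM)
    for _ in range(chunks):
        x = prev + torch.randn(batch, FRAME_DIM)   # causal: start from last frame
        for k in range(STEPS, 0, -1):              # few-step denoising
            x = student(x, torch.tensor([k / STEPS]))
        frames.append(x)
        prev = x
    return torch.stack(frames, dim=1)              # (batch, chunks, FRAME_DIM)

# One distillation step: re-noise the student's own samples and pull the
# student's prediction toward the frozen teacher's prediction on them.
video = student_rollout()
noisy = video + 0.5 * torch.randn_like(video)
t = torch.tensor([0.5])
with torch.no_grad():
    teacher_pred = teacher(noisy.flatten(0, 1), t)
student_pred = student(noisy.flatten(0, 1), t)
loss = nn.functional.mse_loss(student_pred, teacher_pred)
opt.zero_grad(); loss.backward(); opt.step()
```

The on-policy aspect lives in student_rollout: gradients flow through frames the student produced itself, so the distillation signal is computed under the same exposure conditions the model meets at deployment, rather than on teacher- or data-generated frames.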
