DeContext as Defense: Safe Image Editing in Diffusion Transformers

Update: 2025-12-20

Description

🤗 Upvotes: 22 | cs.CV



Authors:

Linghui Shen, Mingyue Cui, Xingyi Yang



Title:

DeContext as Defense: Safe Image Editing in Diffusion Transformers



Arxiv:

http://arxiv.org/abs/2512.16625v1



Abstract:

In-context diffusion models allow users to modify images with remarkable ease and realism. However, the same power raises serious privacy concerns: personal images can be easily manipulated for identity impersonation, misinformation, or other malicious uses, all without the owner's consent. While prior work has explored input perturbations to protect against misuse in personalized text-to-image generation, the robustness of modern, large-scale in-context DiT-based models remains largely unexamined. In this paper, we propose DeContext, a new method to safeguard input images from unauthorized in-context editing. Our key insight is that contextual information from the source image propagates to the output primarily through multimodal attention layers. By injecting small, targeted perturbations that weaken these cross-attention pathways, DeContext breaks this flow, effectively decoupling the link between input and output. This simple defense is both efficient and robust. We further show that early denoising steps and specific transformer blocks dominate context propagation, which allows us to concentrate perturbations where they matter most. Experiments on Flux Kontext and Step1X-Edit show that DeContext consistently blocks unwanted image edits while preserving visual quality. These results highlight the effectiveness of attention-based perturbations as a powerful defense against image manipulation.
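The abstract does not include implementation details, but the core idea (optimizing a small, budget-limited perturbation of the source image so that it receives less attention from the edited output) can be illustrated with a PGD-style sketch. Everything below is a hypothetical stand-in, not the authors' code: `ToyMMAttention`, the `tokenize` placeholder, the loss (mean attention mass from output tokens to context tokens), and all hyperparameters are illustrative assumptions in place of a real DiT such as Flux Kontext.

```python
# Minimal sketch of an attention-targeted protective perturbation (assumed, not from the paper).
import torch
import torch.nn as nn


class ToyMMAttention(nn.Module):
    """Stand-in for one multimodal attention block of a diffusion transformer."""
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)

    def attn_to_context(self, out_tokens, ctx_tokens):
        # Attention weights from output-image tokens (queries) to source/context
        # tokens (keys): the pathway an attention-based defense aims to suppress.
        q, k = self.q(out_tokens), self.k(ctx_tokens)
        return torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)


def tokenize(img, dim=64):
    # Placeholder "encoder": split the flattened image into fixed-size tokens.
    return img.flatten(1).unfold(1, dim, dim).float()


def attention_suppressing_perturb(src_img, out_tokens, block, eps=8 / 255, steps=50, lr=1 / 255):
    """PGD-style search for a small perturbation that reduces the attention mass
    flowing from the (perturbed) source image into the edited output."""
    delta = torch.zeros_like(src_img, requires_grad=True)
    for _ in range(steps):
        ctx_tokens = tokenize(src_img + delta)
        attn = block.attn_to_context(out_tokens, ctx_tokens)
        loss = attn.mean()                      # context -> output attention mass
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()     # descend to weaken the pathway
            delta.clamp_(-eps, eps)             # keep the change imperceptible
            delta.grad = None
    return (src_img + delta).detach()


if __name__ == "__main__":
    src = torch.rand(1, 3 * 64 * 64)            # toy "image"
    block = ToyMMAttention(dim=64)
    out_tokens = torch.rand(1, 16, 64)          # toy output-stream tokens
    protected = attention_suppressing_perturb(src, out_tokens, block)
    print("max |delta|:", (protected - src).abs().max().item())
```

In a real setting the loss would be accumulated over the early denoising steps and the specific transformer blocks the paper identifies as dominating context propagation, rather than over a single toy block as here.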
