Tracing Copied Pixels and Regularizing Patch Affinity in Copy Detection


#Image Copy Detection #PixTrace #CopyNCE #Self‑Supervised Learning #Contrastive Loss #Patch Affinity #DISC21 dataset #Geometric Traceability #Pixel‑Coordinate Tracking

📌 Key Takeaways

  • Introduced PixTrace, a pixel‑coordinate tracking module that preserves spatial information through editing transformations.
  • Developed CopyNCE, a geometrically‑guided contrastive loss that regularizes patch affinity via verified overlap ratios.
  • Demonstrated state‑of‑the‑art accuracy on DISC21 (88.7% uAP / 83.9% RP90 for matchers, 72.6% uAP / 68.4% RP90 for descriptors).
  • Showed enhanced interpretability compared to previous view‑level contrastive ICD methods.
  • Validated that geometric traceability improves self‑supervised learning for fine‑grained correspondence tasks.

📖 Full Retelling

In February 2026, a research team of Yichen Lu, Siwei Nie, Minlong Lu, Xudong Yang, Xiaobo Zhang and Peng Zhang published the paper "Tracing Copied Pixels and Regularizing Patch Affinity in Copy Detection" on arXiv in the computer vision category. The authors propose a novel framework for Image Copy Detection (ICD), a subfield of computer vision that aims to identify copied or manipulated content across image pairs. Their approach harnesses the geometric traceability inherent in editing operations to overcome a key limitation of existing self‑supervised, view‑level contrastive methods: insufficient fine‑grained correspondence learning. By integrating a pixel‑coordinate tracking module, PixTrace, with a geometry‑guided contrastive loss, CopyNCE, the team achieves new state‑of‑the‑art performance on the DISC21 benchmark (88.7% uAP / 83.9% RP90 for the matcher and 72.6% uAP / 68.4% RP90 for the descriptor) and offers improved interpretability over prior methods.

The paper's key innovation lies in bridging pixel‑level traceability with patch‑level similarity learning. PixTrace explicitly maintains spatial mappings across editing transformations, allowing the model to identify which pixels correspond after a copy or edit. CopyNCE leverages overlap ratios derived from PixTrace's verified mappings to regularize patch affinity, suppressing the noisy supervision signals that traditionally plague self‑supervised training.

Extensive experiments on the DISC21 dataset confirm that this geometry‑aware strategy not only surpasses previous benchmarks but also provides clearer explanations of why certain patches are deemed matched. Overall, the work demonstrates that grounding contrastive learning in explicit geometry can significantly enhance the robustness of copy detection in image forensics, and it sets the stage for future research that further marries spatial reasoning with deep feature representations.
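The overlap‑ratio idea at the heart of this pairing can be made concrete with a small sketch. The snippet below is illustrative rather than the paper's implementation: it assumes the tracking stage has already produced a per‑pixel map assigning each pixel of the edited image to a source patch index (or -1 where no verified mapping exists), and it computes, for each destination patch, the fraction of its pixels tracing back to each source patch.

```python
import numpy as np

def patch_overlap_ratios(src_patch_map, patch=4):
    """For each `patch` x `patch` block of the edited image, compute the
    fraction of its pixels that trace back to each source patch.

    src_patch_map[y, x] holds the source-patch index a pixel was traced to,
    or -1 when the mapping could not be verified."""
    h, w = src_patch_map.shape
    n_src = int(src_patch_map.max()) + 1
    rows, cols = h // patch, w // patch
    ratios = np.zeros((rows * cols, n_src))
    for i in range(rows):
        for j in range(cols):
            block = src_patch_map[i*patch:(i+1)*patch, j*patch:(j+1)*patch]
            for s in block.ravel():
                if s >= 0:  # skip unverified pixels
                    ratios[i * cols + j, s] += 1
    return ratios / (patch * patch)  # normalize by pixels per patch

# A 4x4 edited image: left half traced to source patch 0, right half to 1
demo = np.repeat([[0, 0, 1, 1]], 4, axis=0)
ratios = patch_overlap_ratios(demo)  # one destination patch, 50/50 split
```

Under this reading, a destination patch whose pixels trace half to one source patch and half to another would receive soft affinity targets of 0.5 each, rather than a single hard positive.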

🏷️ Themes

Computer Vision, Image Forensics, Self‑Supervised Learning, Contrastive Representation Learning, Geometric Reasoning


Deep Analysis

Why It Matters

The paper introduces PixTrace and CopyNCE, improving image copy detection by tracking pixel coordinates and regularizing patch similarity. These innovations enable more accurate detection of sophisticated edits, which is critical for media forensics and copyright enforcement.

Context & Background

  • Image copy detection is vital for identifying manipulated media
  • Existing contrastive methods lack fine-grained correspondence
  • PixTrace tracks pixel mappings across edits
  • CopyNCE uses geometric overlap to guide contrastive learning

What Happens Next

Researchers may adopt PixTrace and CopyNCE in larger forensic pipelines, and the methods could be extended to video copy detection. Further studies may explore integration with deep learning frameworks for real-time applications.

Frequently Asked Questions

What is PixTrace?

PixTrace is a pixel coordinate tracking module that maintains explicit spatial mappings across editing transformations, allowing the system to know where pixels moved.
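As a rough illustration of what such an explicit spatial mapping might look like (a minimal sketch with a hypothetical `track_coords` helper, not the paper's module), one can push every source pixel coordinate through a known 2x3 affine edit:

```python
import numpy as np

def track_coords(h, w, affine):
    """Map every pixel coordinate of an h x w source image through a 2x3
    affine edit, returning a per-pixel (x', y') destination coordinate --
    the kind of explicit spatial mapping PixTrace is described as keeping."""
    ys, xs = np.mgrid[0:h, 0:w]
    homo = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (x, y, 1)
    mapped = affine @ homo                                     # shape (2, h*w)
    return mapped.T.reshape(h, w, 2)

# A pure translation by (+2, +1) standing in for the "edit"
shift = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0]])
m = track_coords(4, 4, shift)
# Pixel (x=0, y=0) lands at (2, 1); pixel (x=3, y=3) lands at (5, 4)
```

Because the map is kept per pixel, any later stage can ask exactly where a given pixel ended up after the transformation, rather than inferring correspondence from features alone.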

How does CopyNCE differ from traditional contrastive loss?

CopyNCE incorporates geometric overlap ratios derived from PixTrace to regularize patch affinity, reducing supervision noise compared to view-level contrastive methods.
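One plausible way to realize such a geometry‑regularized loss, sketched here with hypothetical names and not the paper's exact formulation, is a soft‑target variant of InfoNCE: the target distribution over candidate patches is the row‑normalized overlap ratio rather than a single hard positive.

```python
import numpy as np

def log_softmax(x):
    # Numerically stable row-wise log-softmax
    x = x - x.max(axis=1, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=1, keepdims=True))

def soft_nce(sim, overlap, tau=0.1):
    """Cross-entropy between temperature-scaled softmax patch similarities
    and overlap-derived soft targets. `sim` is an (n, m) patch-similarity
    matrix and `overlap` the matching (n, m) matrix of overlap ratios."""
    targets = overlap / overlap.sum(axis=1, keepdims=True)
    return float(-(targets * log_softmax(sim / tau)).sum(axis=1).mean())

# Similarities aligned with the geometric overlap incur a much lower
# loss than similarities that contradict it
aligned    = soft_nce(10.0 * np.eye(2), np.eye(2))
misaligned = soft_nce(10.0 * (1.0 - np.eye(2)), np.eye(2))
```

The effect is that patches with partial geometric overlap contribute graded, rather than all‑or‑nothing, supervision, which is one way the noise reduction described above could arise.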

What datasets were used to evaluate the method?

The method was evaluated on the DISC21 dataset, achieving state-of-the-art performance with 88.7% uAP for matchers and 72.6% uAP for descriptors.

Original Source
Computer Science > Computer Vision and Pattern Recognition

arXiv:2602.17484 [cs.CV] — Submitted on 19 Feb 2026

Title: Tracing Copied Pixels and Regularizing Patch Affinity in Copy Detection

Authors: Yichen Lu, Siwei Nie, Minlong Lu, Xudong Yang, Xiaobo Zhang, Peng Zhang

Abstract: Image Copy Detection aims to identify manipulated content between image pairs through robust feature representation learning. While self-supervised learning has advanced ICD systems, existing view-level contrastive methods struggle with sophisticated edits due to insufficient fine-grained correspondence learning. We address this limitation by exploiting the inherent geometric traceability in edited content through two key innovations. First, we propose PixTrace, a pixel coordinate tracking module that maintains explicit spatial mappings across editing transformations. Second, we introduce CopyNCE, a geometrically-guided contrastive loss that regularizes patch affinity using overlap ratios derived from PixTrace's verified mappings. Our method bridges pixel-level traceability with patch-level similarity learning, suppressing supervision noise in SSL training. Extensive experiments demonstrate not only state-of-the-art performance (88.7% uAP / 83.9% RP90 for matcher, 72.6% uAP / 68.4% RP90 for descriptor on DISC21 dataset) but also better interpretability over existing methods.

Subjects: Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI)
DOI: https://doi.org/10.48550/arXiv.2602.17484
Submission history: [v1] Thu, 19 Feb 2026 15:54:55 UTC (13,380 KB), from Yichen Lu

Source

arxiv.org
