INRetouch: Context Aware Implicit Neural Representation for Photography Retouching

Computer Vision Lab, University of Würzburg


INRetouch transfers edits from an image to a video without any visual or temporal artifacts.

We train our model on image pairs captured before and after editing. Designed to be lightweight and efficient, the model enables affordable inference, which motivated us to extend its application to video editing. As shown in the video, our method effectively learns edits from images and applies them to videos, producing visually pleasing results with excellent temporal consistency and no noticeable artifacts. We attribute this to the clarity provided by using before-and-after image pairs as a reference, and to the design of our method, which focuses on color modification through local awareness.
Unlike existing methods, such as style transfer and generative-based models, which often struggle with temporal consistency and introduce significant noise, our approach overcomes these limitations. This demonstrates both the effectiveness of our network and the controllability of the learned edits.
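One reason a learned per-pixel edit stays temporally consistent is that it is a deterministic function of pixel color (and local context): identical pixels produce identical outputs in every frame, so static regions cannot flicker. The sketch below illustrates this property with a hypothetical stand-in for the learned edit (a simple tone curve), not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for the learned edit: any deterministic
# per-pixel color mapping (here, a gamma lift plus gain).
def learned_edit(frame):
    return np.clip(frame ** 0.8 * 1.1, 0.0, 1.0)

# Two video frames sharing a static top half; the bottom half "moves".
frame_a = rng.uniform(0.0, 1.0, (8, 8, 3))
frame_b = frame_a.copy()
frame_b[4:, :, :] = rng.uniform(0.0, 1.0, (4, 8, 3))

# Edit each frame independently, as one would for a video.
out_a, out_b = learned_edit(frame_a), learned_edit(frame_b)
# The static region maps identically in both frames: no temporal flicker.
```

This is why frame-by-frame application suffices here, whereas stylization networks with global or stochastic components can produce frame-to-frame inconsistencies.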

Image Teaser.

We propose INRetouch, a novel implicit neural representation method for one-shot image retouching transfer.

Abstract

Professional photo editing remains challenging, requiring extensive knowledge of imaging pipelines and significant expertise. While recent deep learning approaches, particularly style transfer methods, have attempted to automate this process, they often struggle with output fidelity, editing control, and complex retouching capabilities. We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs, enabling precise replication of complex editing operations. We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context, and is capable of learning from a single example. Our method extracts implicit transformations from reference edits and adaptively applies them to new images. To facilitate this research direction, we introduce a comprehensive Photo Retouching Dataset comprising 100,000 high-quality images edited using over 170 professional Adobe Lightroom presets. Through extensive evaluation, we demonstrate that our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks like Gamut Mapping and Raw Reconstruction. By bridging the gap between professional editing capabilities and automated solutions, our work presents a significant step toward making sophisticated photo editing more accessible while maintaining high-fidelity results.
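The core idea described above, fitting a representation to a single before/after pair and applying the learned edit to new content, can be illustrated with a minimal sketch: a tiny MLP maps each pixel's color plus a local context feature to an edited color, is fit by gradient descent on one reference pair, and is then evaluated on an unseen image. Everything here (the network size, the mean-color context feature, the synthetic per-channel "edit") is a hypothetical simplification for illustration, not the paper's actual architecture or training recipe.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_context(img, k=3):
    """Mean color over a k x k neighborhood: a crude stand-in for the
    context features that make the representation 'context aware'."""
    H, W, _ = img.shape
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    ctx = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            ctx += padded[dy:dy + H, dx:dx + W]
    return ctx / (k * k)

class TinyINR:
    """Two-layer MLP: (pixel color, local context) -> edited color."""
    def __init__(self, d_in=6, d_hidden=32, d_out=3):
        self.W1 = rng.normal(0.0, 0.1, (d_in, d_hidden))
        self.b1 = np.zeros(d_hidden)
        self.W2 = rng.normal(0.0, 0.1, (d_hidden, d_out))
        self.b2 = np.zeros(d_out)

    def forward(self, x):
        self.x = x
        self.h = np.maximum(0.0, x @ self.W1 + self.b1)  # ReLU
        return self.h @ self.W2 + self.b2

    def sgd_step(self, grad_out, lr=0.05):
        # Manual backprop through the two layers, then a plain SGD update.
        gW2 = self.h.T @ grad_out
        gb2 = grad_out.sum(0)
        gh = grad_out @ self.W2.T
        gh[self.h <= 0.0] = 0.0
        gW1 = self.x.T @ gh
        gb1 = gh.sum(0)
        self.W1 -= lr * gW1; self.b1 -= lr * gb1
        self.W2 -= lr * gW2; self.b2 -= lr * gb2

# Synthetic before/after pair; the "professional edit" is a per-channel gain.
before = rng.uniform(0.0, 1.0, (16, 16, 3))
after = np.clip(before * np.array([1.2, 1.0, 0.8]), 0.0, 1.0)

feats = np.concatenate([before, local_context(before)], -1).reshape(-1, 6)
target = after.reshape(-1, 3)

# Fit the INR to the single reference pair (one-shot training).
model = TinyINR()
losses = []
for _ in range(400):
    err = model.forward(feats) - target
    losses.append(float((err ** 2).mean()))
    model.sgd_step(2.0 * err / len(err))

# Apply the fitted edit to an unseen image.
new_img = rng.uniform(0.0, 1.0, (16, 16, 3))
new_feats = np.concatenate([new_img, local_context(new_img)], -1).reshape(-1, 6)
edited = model.forward(new_feats).reshape(16, 16, 3)
```

Concatenating a neighborhood statistic to the pixel color is what lets the fitted function apply different edits to similar colors in different contexts, which a plain color-to-color lookup could not do.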

INRetouch Pipeline

Arch

Our proposed INRetouch pipeline and INR architecture.

Qualitative Results

Qualitative

Transferring retouches from professionally edited images collected online.

Qualitative

Comparison between different methods on the retouching transfer benchmark.

Local Modification Capabilities

Qualitative

Importance of context awareness for local and region-specific modifications.

Dataset Style Variety

Qualitative

Visualization of the variety of edits produced by the dataset presets when applied to a natural image (original highlighted top-left).

Contacts

Omar Elezabi: omar.elezabi@uni-wuerzburg.de
Marcos V. Conde: marcos.conde@uni-wuerzburg.de

BibTeX


      @article{elezabi2024inretouch,
        title={INRetouch: Context Aware Implicit Neural Representation for Photography Retouching},
        author={Elezabi, Omar and Conde, Marcos V and Wu, Zongwei and Timofte, Radu},
        journal={arXiv preprint arXiv:2412.03848},
        year={2024}
      }