INRetouch: Context Aware Implicit Neural Representation for Photography Retouching

1Computer Vision Lab, University of Würzburg, 2Sony PlayStation, FTG


INRetouch transfers edits applied to an image onto a video without any visual or temporal artifacts.

We train our model on image pairs captured before and after editing. Designed to be lightweight and efficient, our model enables affordable inference, which motivated us to extend its application to video editing. As shown in the video, our method effectively learns edits from images and applies them to videos, producing visually pleasing results with excellent temporal consistency and no noticeable artifacts. This can be attributed to the clear supervision provided by the before-and-after image pairs, and to our method's design, which focuses on color modification with local awareness.
Unlike existing methods, such as style transfer and generative-based models, which often struggle with temporal consistency and introduce significant noise, our approach overcomes these limitations. This demonstrates both the effectiveness of our network and the controllability of the learned edits.

Abstract

Professional photo editing remains challenging, requiring extensive knowledge of imaging pipelines and significant expertise. With the ubiquity of smartphone photography, there is an increasing demand for accessible yet sophisticated image editing solutions. While recent deep learning approaches, particularly style transfer methods, have attempted to automate this process, they often struggle with output fidelity, editing control, and complex retouching capabilities. We propose a novel retouch transfer approach that learns from professional edits through before-after image pairs, enabling precise replication of complex editing operations. To facilitate this research direction, we introduce a comprehensive Photo Retouching Dataset comprising 100,000 high-quality images edited using over 170 professional Adobe Lightroom presets. We develop a context-aware Implicit Neural Representation that learns to apply edits adaptively based on image content and context, requiring no pretraining and capable of learning from a single example. Our method extracts implicit transformations from reference edits and adaptively applies them to new images. Through extensive evaluation, we demonstrate that our approach not only surpasses existing methods in photo retouching but also enhances performance in related image reconstruction tasks like Gamut Mapping and Raw Reconstruction. By bridging the gap between professional editing capabilities and automated solutions, our work presents a significant step toward making sophisticated photo editing more accessible while maintaining high-fidelity results.

Image Teaser.

Comparison of different approaches for style transfer in the context of reference-based image editing.

INR Architecture

Arch

Our proposed INR Pipeline in comparison with the MLP INR Pipeline.

Qualitative Results

Qualitative

Comparison between different methods adapted for our retouching transfer task.

Contacts

Omar Elezabi: omar.elezabi@uni-wuerzburg.de
Marcos V. Conde: marcos.conde@uni-wuerzburg.de

BibTeX


      @article{elezabi2024inretouch,
        title={INRetouch: Context Aware Implicit Neural Representation for Photography Retouching},
        author={Elezabi, Omar and Conde, Marcos V and Wu, Zongwei and Timofte, Radu},
        journal={arXiv preprint arXiv:2412.03848},
        year={2024}
      }