Physically Controllable Relighting of Photographs
Chris Careaga and Yağız Aksoy
Proc. SIGGRAPH, 2025
We present a photograph relighting method that enables explicit control over light sources akin to CG pipelines. Users can insert different types of light sources, such as spot lights, point lights, or environmental illumination, into the scene. We achieve this with a pipeline that combines mid-level computer vision, physically-based rendering, and neural rendering. We introduce a self-supervised training methodology that uses differentiable rendering to train our neural renderer on real-world photograph collections for in-the-wild generalization.
Abstract
We present a self-supervised approach to in-the-wild image relighting that enables fully controllable, physically based illumination editing.
We achieve this by combining the physical accuracy of traditional rendering with the photorealistic appearance made possible by neural rendering.
Our pipeline works by inferring a colored mesh representation of a given scene using monocular estimates of geometry and intrinsic components.
This representation allows users to define their desired illumination configuration in 3D. The scene under the new lighting can then be rendered using a path-tracing engine.
We send this approximate rendering of the scene through a feed-forward neural renderer to predict the final photorealistic relighting result.
We develop a differentiable rendering process to reconstruct in-the-wild scene illumination, enabling self-supervised training of our neural renderer on raw image collections.
Our method represents a significant step in bringing the explicit physical control over lights available in typical 3D computer graphics tools, such as Blender, to in-the-wild relighting.
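To make the flow of the pipeline concrete, the sketch below walks through the stages described above in Python. It is only an illustration, not the authors' implementation; the callables passed in (estimate_depth, estimate_albedo, path_trace, neural_renderer) are hypothetical stand-ins for the monocular estimators, the path-tracing engine, and the learned renderer.

from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class Light:
    kind: str              # "point", "spot", or "environment"
    position: np.ndarray   # 3D position; unused for environment lights
    color: np.ndarray      # RGB intensity

def relight(photo: np.ndarray,
            lights: list[Light],
            estimate_depth: Callable,
            estimate_albedo: Callable,
            path_trace: Callable,
            neural_renderer: Callable) -> np.ndarray:
    # 1. Mid-level vision: monocular geometry and intrinsic albedo.
    depth = estimate_depth(photo)
    albedo = estimate_albedo(photo)
    # 2. Colored mesh representation the user can light in 3D (stand-in here).
    mesh = (depth, albedo)
    # 3. Physically-based preview render under the user-defined lights.
    approx_render = path_trace(mesh, lights)
    # 4. Neural renderer maps the approximate render to a photorealistic result.
    return neural_renderer(photo, approx_render)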
BibTeX
@INPROCEEDINGS{careagaRelighting,
author={Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Physically Controllable Relighting of Photographs},
booktitle={Proc. SIGGRAPH},
year={2025},
}
License
The methodology presented in this work is protected intellectual property. For licensing inquiries, please contact the SFU Technology Licensing Office <tlo_dir ατ sfu δøτ ca> and Dr. Yağız Aksoy <yagiz ατ sfu δøτ ca>.
Colorful Diffuse Intrinsic Image Decomposition in the Wild
Chris Careaga and Yağız Aksoy
ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 2024
Best Paper Award Honorable Mention
Intrinsic image decomposition aims to separate the surface reflectance and the effects of illumination given a single photograph.
Due to the complexity of the problem, most prior works assume a single-color illumination and a Lambertian world, which limits their use in illumination-aware image editing applications.
In this work, we separate an input image into its diffuse albedo, colorful diffuse shading, and specular residual components.
We arrive at our result by gradually removing first the single-color illumination and then the Lambertian-world assumptions.
We show that by dividing the problem into easier sub-problems, in-the-wild colorful diffuse shading estimation can be achieved despite the limited ground-truth datasets.
Our extended intrinsic model enables illumination-aware analysis of photographs and can be used for image editing applications such as specularity removal and per-pixel white balancing.
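As a rough formalization of the decomposition described above (our reading of the abstract, not necessarily the paper's exact notation), the image is recomposed from the three components as albedo times RGB diffuse shading plus an additive specular residual:

import numpy as np

def recompose(albedo: np.ndarray,
              diffuse_shading: np.ndarray,
              specular_residual: np.ndarray) -> np.ndarray:
    # All arrays are (H, W, 3). Under the classic single-color, Lambertian
    # model the shading would be one channel and the residual zero; keeping
    # RGB shading and an additive residual gives the extended model.
    return albedo * diffuse_shading + specular_residual

# Example edits the decomposition enables (illustrative only):
# - specularity removal: recompose with specular_residual set to zero.
# - per-pixel white balance: normalize the chromaticity of diffuse_shading
#   before recomposing, neutralizing the illumination color at each pixel.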
@ARTICLE{careagaColorful,
author={Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Colorful Diffuse Intrinsic Image Decomposition in the Wild},
journal={ACM Trans. Graph.},
year={2024},
volume = {43},
number = {6},
articleno = {178},
numpages = {12},
}
Intrinsic Image Decomposition via Ordinal Shading
Chris Careaga and Yağız Aksoy
ACM Transactions on Graphics, 2023
Intrinsic decomposition is a fundamental mid-level vision problem that plays a crucial role in various inverse rendering and computational photography pipelines.
Generating highly accurate intrinsic decompositions is an inherently under-constrained task that requires precisely estimating continuous-valued shading and albedo.
In this work, we achieve high-resolution intrinsic decomposition by breaking the problem into two parts.
First, we present a dense ordinal shading formulation using a shift- and scale-invariant loss in order to estimate ordinal shading cues without restricting the predictions to obey the intrinsic model.
We then combine low- and high-resolution ordinal estimations using a second network to generate a shading estimate with both global coherency and local details.
We encourage the model to learn an accurate decomposition by computing losses on the estimated shading as well as the albedo implied by the intrinsic model.
We develop a straightforward method for generating dense pseudo ground truth using our model's predictions and multi-illumination data, enabling generalization to in-the-wild imagery.
We present exhaustive qualitative and quantitative analysis of our predicted intrinsic components against state-of-the-art methods.
Finally, we demonstrate the real-world applicability of our estimations by performing otherwise difficult editing tasks such as recoloring and relighting.
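Two ingredients mentioned above can be made concrete with a short sketch: one common form of a shift- and scale-invariant loss (the paper's exact formulation may differ), and the albedo implied by the intrinsic model I = A · S, on which additional losses are computed.

import numpy as np

def scale_shift_invariant_mse(pred: np.ndarray, target: np.ndarray) -> float:
    # Align the prediction to the target with a least-squares scale and shift,
    # then measure the mean squared error. This supervises ordinal structure
    # without constraining the prediction's absolute scale.
    p, t = pred.reshape(-1), target.reshape(-1)
    A = np.stack([p, np.ones_like(p)], axis=1)
    (scale, shift), *_ = np.linalg.lstsq(A, t, rcond=None)
    return float(np.mean((scale * p + shift - t) ** 2))

def implied_albedo(image: np.ndarray, shading: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    # Albedo implied by the intrinsic model I = A * S.
    return image / np.maximum(shading, eps)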
@ARTICLE{careagaIntrinsic,
author={Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Intrinsic Image Decomposition via Ordinal Shading},
journal={ACM Trans. Graph.},
year={2023},
volume = {43},
number = {1},
articleno = {12},
numpages = {24},
}
Intrinsic Harmonization for Illumination-Aware Compositing
Chris Careaga, S. Mahdi H. Miangoleh, and Yağız Aksoy
SIGGRAPH Asia, 2023
Despite significant advancements in network-based image harmonization techniques, there still exists a domain disparity between typical training pairs and real-world composites encountered during inference.
Most existing methods are trained to reverse global edits made on segmented image regions, which fail to accurately capture the lighting inconsistencies between the foreground and background found in composited images.
In this work, we introduce a self-supervised illumination harmonization approach formulated in the intrinsic image domain.
First, we estimate a simple global lighting model from mid-level vision representations to generate a rough shading for the foreground region.
A network then refines this inferred shading to generate a harmonious re-shading that aligns with the background scene.
In order to match the color appearance of the foreground and background, we utilize ideas from prior harmonization approaches to perform parameterized image edits in the albedo domain.
To validate the effectiveness of our approach, we present results from challenging real-world composites and conduct a user study to objectively measure the enhanced realism achieved compared to state-of-the-art harmonization methods.
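The steps above can be summarized in a minimal sketch under our reading of the abstract; the callables are hypothetical stand-ins, not the authors' API.

import numpy as np

def harmonize(composite: np.ndarray,
              fg_mask: np.ndarray,
              decompose,                   # image -> (albedo, shading)
              fit_rough_shading,           # global lighting fit -> rough foreground shading
              reshade_network,             # rough shading -> refined, scene-consistent shading
              albedo_edit) -> np.ndarray:  # parameterized color edit in albedo space
    albedo, shading = decompose(composite)
    # Rough foreground shading from a simple global lighting model
    # estimated on the background region.
    rough = fit_rough_shading(composite, shading, fg_mask)
    # Network refines the rough shading into a harmonious re-shading.
    refined = np.where(fg_mask[..., None], reshade_network(rough), shading)
    # Match foreground color appearance to the background in the albedo domain.
    edited_albedo = albedo_edit(albedo, fg_mask)
    return edited_albedo * refined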
@INPROCEEDINGS{careagaCompositing,
author={Chris Careaga and S. Mahdi H. Miangoleh and Ya\u{g}{\i}z Aksoy},
title={Intrinsic Harmonization for Illumination-Aware Compositing},
booktitle={Proc. SIGGRAPH Asia},
year={2023},
}