A Dataset of Flash and Ambient Illumination Pairs from the Crowd

Yağız Aksoy, Changil Kim, Petr Kellnhofer, Sylvain Paris, Mohamed Elgharib, Marc Pollefeys and Wojciech Matusik
ECCV, 2018

We introduce a diverse dataset of thousands of photograph pairs with flash-only and ambient-only illuminations, collected via crowdsourcing.

Abstract

Illumination is a critical element of photography and is essential for many computer vision tasks. Flash light is unique in the sense that it is a widely available tool for easily manipulating the scene illumination. We present a dataset of thousands of ambient and flash illumination pairs to enable studying flash photography and other applications that can benefit from having separate illuminations. Different than the typical use of crowdsourcing in generating computer vision datasets, we make use of the crowd to directly take the photographs that make up our dataset. As a result, our dataset covers a wide variety of scenes captured by many casual photographers. We detail the advantages and challenges of our approach to crowdsourcing as well as the computational effort to generate completely separate flash illuminations from the ambient light in an uncontrolled setup. We present a brief examination of illumination decomposition, a challenging and underconstrained problem in flash photography, to demonstrate the use of our dataset in a data-driven approach.
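The core computational step, separating the flash contribution from the ambient light, rests on the additivity of illumination: in linear color space, a photograph taken with flash is the sum of the ambient-only and flash-only images. The snippet below is a minimal illustration of this principle only, assuming an already aligned pair, a simple gamma model, and hypothetical file names; the processing described in the paper handles exposure, alignment, and color in an uncontrolled capture setup and is considerably more involved.

import numpy as np
from PIL import Image

def load_linear(path):
    # Approximate conversion from 8-bit sRGB to linear RGB (gamma 2.2);
    # the authors' actual processing pipeline may differ.
    return (np.asarray(Image.open(path), dtype=np.float64) / 255.0) ** 2.2

with_flash = load_linear("pair_0001_withflash.jpg")  # hypothetical file name
ambient    = load_linear("pair_0001_ambient.jpg")    # hypothetical file name

# Illumination is additive in linear space, so the flash-only image is
# the per-pixel difference between the flash and ambient-only exposures.
flash_only = np.clip(with_flash - ambient, 0.0, 1.0)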

The Dataset


The Flash and Ambient Illuminations Dataset (FAID) consists of aligned flash-only and ambient-only illumination pairs captured with mobile devices by the many crowd workers who participated in our collection effort. This dataset accompanies our ECCV 2018 paper.
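Because each pair provides the two illuminations separately and in alignment, a photograph can be recomposited with an arbitrary flash strength by adding a scaled flash image onto the ambient image in linear space. The sketch below illustrates this use; the directory layout and file names are hypothetical and do not necessarily match the released archive.

import numpy as np
from PIL import Image

def load_linear(path):
    # Hypothetical loader: 8-bit sRGB to approximately linear RGB (gamma 2.2).
    return (np.asarray(Image.open(path), dtype=np.float64) / 255.0) ** 2.2

def linear_to_srgb(img):
    return (np.clip(img, 0.0, 1.0) ** (1.0 / 2.2) * 255.0).astype(np.uint8)

# Hypothetical file layout; the released dataset may be organized differently.
ambient = load_linear("FAID/ambient/0001.jpg")
flash   = load_linear("FAID/flash/0001.jpg")

# The two illuminations add linearly, so any flash strength can be simulated.
for strength in (0.0, 0.5, 1.0):
    relit = linear_to_srgb(ambient + strength * flash)
    Image.fromarray(relit).save(f"relit_flash_{strength:.1f}.jpg")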

Manuscript

BibTeX

@INPROCEEDINGS{flashambient,
author={Ya\u{g}{\i}z Aksoy and Changil Kim and Petr Kellnhofer and Sylvain Paris and Mohamed Elgharib and Marc Pollefeys and Wojciech Matusik},
booktitle={Proc. ECCV},
title={A Dataset of Flash and Ambient Illumination Pairs from the Crowd},
year={2018},
}

Related Publications


Computational Flash Photography through Intrinsics
Sepideh Sarajian Maralan, Chris Careaga, and Yağız Aksoy
CVPR, 2023
Flash is an essential tool as it often serves as the sole controllable light source in everyday photography. However, the use of flash is a binary decision at the time a photograph is captured with limited control over its characteristics such as strength or color. In this work, we study the computational control of the flash light in photographs taken with or without flash. We present a physically motivated intrinsic formulation for flash photograph formation and develop flash decomposition and generation methods for flash and no-flash photographs, respectively. We demonstrate that our intrinsic formulation outperforms alternatives in the literature and allows us to computationally control flash in in-the-wild images.
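As a sketch of what an intrinsic formation model for a flash photograph can look like (an assumed form for illustration, not necessarily the exact equations of the paper), the observed image splits into ambient and flash components that share the same albedo and differ only in shading:

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Assumed intrinsic formation model for a flash photograph (illustrative sketch;
% the paper's formulation may additionally model white balance and flash color).
\begin{align}
  I &= A + F,\\
  A &= R \odot S_{\mathrm{amb}}, \qquad F = R \odot S_{\mathrm{flash}},\\
  I &= R \odot \bigl(S_{\mathrm{amb}} + S_{\mathrm{flash}}\bigr),
\end{align}
where $R$ is the albedo, $S_{\mathrm{amb}}$ and $S_{\mathrm{flash}}$ are the
ambient and flash shadings, and $\odot$ denotes per-pixel multiplication.
\end{document}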
@INPROCEEDINGS{Maralan2023Flash,
author={Sepideh Sarajian Maralan and Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Computational Flash Photography through Intrinsics},
booktitle={Proc. CVPR},
year={2023},
}

Computational Flash Photography
Sepideh Sarajian Maralan
MSc Thesis, Simon Fraser University, 2022
The majority of common cameras have an integrated flash that improves lighting in a variety of situations, particularly in low-light environments. Before capturing an image, the photographer must decide whether to use flash. However, flash strength cannot be adjusted once it has been used in an image. In this work, we target two application scenarios in computational flash photography: decomposition of a flash photograph into its illumination components, and generation of the flash illumination from a given single no-flash photograph. Two distinct approaches based on image-to-image transfer and intrinsic decomposition with the use of convolutional neural networks are employed to address these tasks. An additional network boosts and upscales the estimated results to generate the final illuminations. Key advantages of our approach include the preparation of a large flash/no-flash dataset and the presentation of models based on state-of-the-art methods to address subtasks specific to our problem.
@MASTERSTHESIS{cfp-msc,
author={Sepideh Sarajian Maralan},
title={Computational Flash Photography},
year={2022},
school={Simon Fraser University},
}

Crowd-Guided Ensembles: How Can We Choreograph Crowd Workers for Video Segmentation?
Alexandre Kaspar, Geneviève Patterson, Changil Kim, Yağız Aksoy, Wojciech Matusik and Mohamed Elgharib
ACM CHI Conference on Human Factors in Computing Systems, 2018
In this work, we propose two ensemble methods leveraging a crowd workforce to improve video annotation, with a focus on video object segmentation. Their shared principle is that while individual candidate results may likely be insufficient, they often complement each other so that they can be combined into something better than any of the individual results - the very spirit of collaborative working. For one, we extend a standard polygon-drawing interface to allow workers to annotate negative space, and combine the work of multiple workers instead of relying on a single best one as commonly done in crowdsourced image segmentation. For the other, we present a method to combine multiple automatic propagation algorithms with the help of the crowd. Such combination requires an understanding of where the algorithms fail, which we gather using a novel coarse scribble video annotation task. We evaluate our ensemble methods, discuss our design choices for them, and make our web-based crowdsourcing tools and results publicly available.
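The simplest instance of combining several workers' segmentations of the same frame is a per-pixel majority vote, sketched below for illustration; the ensemble methods in the paper go further, exploiting negative-space annotations and crowd-guided selection among propagation algorithms.

import numpy as np

def majority_vote(masks):
    # Fuse binary masks (H x W arrays of 0/1) from several workers by
    # per-pixel majority vote. Illustration only; the paper's ensembles
    # are more elaborate than a plain vote.
    stack = np.stack(masks).astype(np.float64)
    return (stack.mean(axis=0) >= 0.5).astype(np.uint8)

# Toy example: three workers disagree on two of the four pixels.
workers = [
    np.array([[1, 0], [1, 1]]),
    np.array([[1, 0], [0, 1]]),
    np.array([[1, 1], [0, 1]]),
]
print(majority_vote(workers))  # [[1 0]
                               #  [0 1]]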
@INPROCEEDINGS{crowdensembles,
author={Alexandre Kaspar and Genevi\`eve Patterson and Changil Kim and Ya\u{g}{\i}z Aksoy and Wojciech Matusik and Mohamed Elgharib},
title={Crowd-Guided Ensembles: How Can We Choreograph Crowd Workers for Video Segmentation?},
booktitle={Proc. ACM CHI},
year={2018},
}