Designing Effective Inter-Pixel Information Flow for Natural Image Matting

Yağız Aksoy, Tunç Ozan Aydın and Marc Pollefeys
CVPR, 2017

Extended version on arXiv

For an input image and trimap (a), we define several forms of information flow inside the image. We begin with color-mixture flow (b), then add direct channels of information flow from the known regions to the unknown region (c), and enable effective sharing of information inside the unknown region (d) to increase matte quality in challenging areas. We finally add local information flow to obtain our spatially smooth result (e).

Abstract

We present a novel, purely affinity-based natural image matting algorithm. Our method relies on carefully defined pixel-to-pixel connections that enable effective use of information available in the image. We control the information flow from the known-opacity regions into the unknown region, as well as within the unknown region itself, by utilizing multiple definitions of pixel affinities. Among other forms of information flow, we introduce color-mixture flow, which builds upon local linear embedding and effectively encapsulates the relation between different pixel opacities. Our resulting novel linear system formulation can be solved in closed-form and is robust against several fundamental challenges of natural matting such as holes and remote intricate structures. Our evaluation using the alpha matting benchmark suggests a significant performance improvement over the current methods. While our method is primarily designed as a standalone matting tool, we show that it can also be used for regularizing mattes obtained by sampling-based methods. We extend our formulation to layer color estimation and show that the use of multiple channels of flow increases the layer color quality. We also demonstrate our performance in green-screen keying and further analyze the characteristics of the affinities used in our method.
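The color-mixture flow mentioned above builds on local linear embedding: each pixel's color is reconstructed as a weighted combination of its neighbors' colors, with the weights constrained to sum to one. A minimal sketch of such an LLE-style weight solve, assuming simple least squares with a small regularizer (the function name, neighbor selection, and regularization constant are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def color_mixture_weights(pixel_color, neighbor_colors, reg=1e-3):
    """LLE-style weights: reconstruct a pixel's color from its neighbors'
    colors, with the weights constrained to sum to one.
    Illustrative sketch; names and regularization are assumptions."""
    Z = neighbor_colors - pixel_color    # (K, 3): neighbors relative to the pixel
    K = len(Z)
    G = Z @ Z.T                          # local Gram matrix
    G = G + reg * max(np.trace(G), 1e-12) * np.eye(K)  # regularize for stability
    w = np.linalg.solve(G, np.ones(K))   # solve G w = 1
    return w / w.sum()                   # enforce the sum-to-one constraint
```

In the full method, such weights define one of several affinity terms whose combination yields the final linear system.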

Manuscript

BibTeX

@INPROCEEDINGS{ifm,
author={Aksoy, Ya\u{g}{\i}z and Ayd{\i}n, Tun\c{c} Ozan and Pollefeys, Marc},
booktitle={Proc. CVPR},
title={Designing Effective Inter-Pixel Information Flow for Natural Image Matting},
year={2017},
}

@ARTICLE{ifmExt,
author={Aksoy, Ya\u{g}{\i}z and Ayd{\i}n, Tun\c{c} Ozan and Pollefeys, Marc},
journal = {\tt arXiv:1707.05055 [cs.CV]},
title={Designing Effective Inter-Pixel Information Flow for Natural Image Matting},
year={2017},
}

Benchmark results

Implementation

We cannot release the original source code. However, a reimplementation of the method is available as part of the affinity-based matting toolbox. While this reimplementation does not reproduce the results of the original implementation exactly, we recommend its use in comparisons and extensions.
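As the abstract notes, the matte follows from a linear system solved in closed form. A generic dense sketch of such an affinity-based solve, assuming a graph Laplacian built from pixel affinities and soft trimap constraints with an assumed weight `lam` (this is the standard closed-form formulation, not the toolbox's API; real implementations use sparse matrices and sparse solvers):

```python
import numpy as np

def solve_alpha(L, is_known, known_alpha, lam=100.0):
    """Minimize a^T L a + lam * sum over known pixels of (a_i - known_alpha_i)^2,
    whose closed-form solution is (L + lam*D) a = lam * D @ known_alpha,
    where D is a diagonal indicator of trimap-known pixels.
    Dense toy version for illustration only."""
    D = np.diag(is_known.astype(float))
    A = L + lam * D
    b = lam * (is_known * known_alpha)
    return np.linalg.solve(A, b)
```

On a toy 4-pixel chain with the endpoints constrained to 0 and 1, the solve yields the expected smooth ramp across the unknown interior.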

Related Publications


Semantic Soft Segmentation
Yağız Aksoy, Tae-Hyun Oh, Sylvain Paris, Marc Pollefeys and Wojciech Matusik
ACM Transactions on Graphics (Proc. SIGGRAPH), 2018
Accurate representation of soft transitions between image regions is essential for high-quality image editing and compositing. Current techniques for generating such representations depend heavily on interaction by a skilled visual artist, as creating such accurate object selections is a tedious task. In this work, we introduce semantic soft segments, a set of layers that correspond to semantically meaningful regions in an image with accurate soft transitions between different objects. We approach this problem from a spectral segmentation angle and propose a graph structure that embeds texture and color features from the image as well as higher-level semantic information generated by a neural network. The soft segments are generated fully automatically via eigendecomposition of the carefully constructed Laplacian matrix. We demonstrate that otherwise complex image editing tasks can be done with little effort using semantic soft segments.
@ARTICLE{sss,
author={Ya\u{g}{\i}z Aksoy and Tae-Hyun Oh and Sylvain Paris and Marc Pollefeys and Wojciech Matusik},
title={Semantic Soft Segmentation},
journal={ACM Trans. Graph. (Proc. SIGGRAPH)},
year={2018},
pages = {72:1-72:13},
volume = {37},
number = {4}
}
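The spectral segmentation idea behind Semantic Soft Segmentation can be illustrated on a toy graph: for two pixel clusters joined by a weak affinity, the Laplacian's second eigenvector (the Fiedler vector) separates the clusters by sign. The affinity matrix below is an assumption for illustration, not the paper's Laplacian:

```python
import numpy as np

# Two tightly connected triangles of "pixels", joined by one weak edge.
W = np.array([[0,   1, 1, .01, 0, 0],
              [1,   0, 1, 0,   0, 0],
              [1,   1, 0, 0,   0, 0],
              [.01, 0, 0, 0,   1, 1],
              [0,   0, 0, 1,   0, 1],
              [0,   0, 0, 1,   1, 0]], dtype=float)
Lap = np.diag(W.sum(axis=1)) - W    # graph Laplacian
vals, vecs = np.linalg.eigh(Lap)    # eigenvalues in ascending order
fiedler = vecs[:, 1]                # soft cluster indicator: opposite signs per cluster
```

In the paper, the Laplacian additionally embeds texture, color, and semantic features, and several eigenvectors together yield the soft segments.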

Learning-based Sampling for Natural Image Matting
Jingwei Tang, Yağız Aksoy, Cengiz Öztireli, Markus Gross and Tunç Ozan Aydın
CVPR, 2019
The goal of natural image matting is the estimation of the opacities of a user-defined foreground object, which is essential in creating realistic composite imagery. Natural matting is a challenging process due to the high number of unknowns in the mathematical modeling of the problem, namely the opacities as well as the foreground and background layer colors, while the original image serves as the single observation. In this paper, we propose the estimation of the layer colors through the use of deep neural networks prior to the opacity estimation. The layer color estimation is a better match for the capabilities of neural networks, and the availability of these colors substantially increases the performance of opacity estimation due to the reduced number of unknowns in the compositing equation. A prominent approach to matting in parallel to ours is called sampling-based matting, which involves gathering color samples from known-opacity regions to predict the layer colors. Our approach outperforms not only the previous hand-crafted sampling algorithms, but also current data-driven methods. We hence classify our method as a hybrid sampling- and learning-based approach to matting, and demonstrate the effectiveness of our approach through detailed ablation studies using alternative network architectures.
@INPROCEEDINGS{samplenet,
author={Tang, Jingwei and Aksoy, Ya\u{g}{\i}z and \"Oztireli, Cengiz and Gross, Markus and Ayd{\i}n, Tun\c{c} Ozan},
booktitle={Proc. CVPR},
title={Learning-based Sampling for Natural Image Matting},
year={2019},
}
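The "reduced number of unknowns" argument above can be made concrete: once the foreground color F and background color B are known, the compositing equation I = αF + (1 − α)B leaves α as the only unknown per pixel, recoverable by a least-squares projection. A minimal sketch under that assumption (the function name and clamping are illustrative, not the paper's network):

```python
import numpy as np

def alpha_from_layers(I, F, B, eps=1e-8):
    """With layer colors F and B known, project I - B onto F - B to
    recover alpha per pixel in the least-squares sense.
    Illustrative sketch of the reduced-unknown estimation."""
    d = F - B
    a = np.sum((I - B) * d, axis=-1) / (np.sum(d * d, axis=-1) + eps)
    return np.clip(a, 0.0, 1.0)   # opacities live in [0, 1]
```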

Soft Segmentation of Images
Yağız Aksoy
PhD Thesis, ETH Zurich, 2019
@phdthesis{ssi,
author={Ya\u{g}{\i}z Aksoy},
title={Soft Segmentation of Images},
year={2019},
school={ETH Zurich},
}