Interactive Editing of Monocular Depth

Obumneme Stanley Dukor, S. Mahdi H. Miangoleh, Mahesh Kumar Krishna Reddy, Long Mai, and Yağız Aksoy
SIGGRAPH Posters, 2022

We propose an interactive web-based depth editing and visualization tool that performs local and global depth editing operations. From left to right, we apply iterative edits to the input depth using our tool to refine its 3D geometric properties.

Abstract

Recent advances in computer vision have made 3D structure-aware editing of still photographs a reality. Such computational photography applications use a depth map that is automatically generated by monocular depth estimation methods to represent the scene structure. In this work, we present a lightweight, web-based interactive depth editing and visualization tool that adapts low-level conventional image editing operations for geometric manipulation to enable artistic control in the 3D photography workflow. Our tool provides real-time feedback on the geometry through a 3D scene visualization to make the depth map editing process more intuitive for artists. Our web-based tool is open-source and platform-independent to support wider adoption of 3D photography techniques in everyday digital photography.
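To make the adapted low-level editing operations concrete, below is a minimal numpy sketch of a local brush edit and a global scale-and-shift on a depth map. The function names, the Gaussian brush falloff, and the depth convention are our assumptions for illustration, not the tool's actual API.

import numpy as np

def brush_depth_edit(depth, cx, cy, radius, offset):
    """Locally push or pull depth inside a soft circular brush.

    depth  : (H, W) float array, larger = farther (assumed convention)
    cx, cy : brush center in pixels
    radius : brush radius in pixels
    offset : signed depth change applied at the brush center
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.sqrt((xs - cx) ** 2 + (ys - cy) ** 2)
    # Gaussian falloff so the edit blends smoothly into its surroundings.
    falloff = np.exp(-0.5 * (dist / (radius / 2.0)) ** 2) * (dist < radius)
    return depth + offset * falloff

def global_depth_edit(depth, scale=1.0, shift=0.0):
    """Globally rescale and shift the whole depth map."""
    return depth * scale + shift

# Example: flatten a region around (120, 80), then compress the depth range.
depth = np.random.rand(256, 256).astype(np.float32)
depth = brush_depth_edit(depth, cx=120, cy=80, radius=30, offset=-0.2)
depth = global_depth_edit(depth, scale=0.9, shift=0.05)

In the tool, operations of this kind are applied interactively, with the 3D visualization updating after every edit.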

Interface and Implementation

Interactive depth editing interface
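The real-time geometric feedback described in the abstract amounts to unprojecting the edited depth map into a point cloud that the browser can render. Below is a minimal numpy sketch under a pinhole-camera assumption; the focal length and centered principal point are assumptions, and the actual WebGL rendering path is not shown.

import numpy as np

def depth_to_point_cloud(depth, focal=500.0):
    """Unproject an (H, W) depth map into an (H*W, 3) point cloud.

    Assumes a pinhole camera with the principal point at the image
    center; 'focal' is in pixels. Re-running this after each edit gives
    the artist immediate feedback on the scene geometry.
    """
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    z = depth
    x = (xs - w / 2.0) * z / focal  # back-project along camera rays
    y = (ys - h / 2.0) * z / focal
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

points = depth_to_point_cloud(np.random.rand(240, 320).astype(np.float32))
print(points.shape)  # (76800, 3)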


BibTeX

@INPROCEEDINGS{interactiveDepth,
  author={Obumneme Stanley Dukor and S. Mahdi H. Miangoleh and Mahesh Kumar Krishna Reddy and Long Mai and Ya\u{g}{\i}z Aksoy},
  title={Interactive Editing of Monocular Depth},
  booktitle={SIGGRAPH Posters},
  year={2022},
}

Related Publications


Scale-Invariant Monocular Depth Estimation via SSI Depth
S. Mahdi H. Miangoleh, Mahesh Reddy, and Yağız Aksoy
SIGGRAPH, 2024
Existing methods for scale-invariant monocular depth estimation (SI MDE) often struggle due to the complexity of the task and limited, non-diverse datasets, hindering generalization to real-world scenarios. In contrast, shift-and-scale-invariant (SSI) depth estimation simplifies the task and enables training on abundant stereo datasets, achieving high performance. We present a novel approach that leverages SSI inputs to enhance SI depth estimation, streamlining the network's role and facilitating in-the-wild generalization for SI depth estimation while training only on a synthetic dataset. Emphasizing the generation of high-resolution details, we introduce a novel sparse ordinal loss that substantially improves detail generation in SSI MDE, addressing critical limitations in existing approaches. Through in-the-wild qualitative examples and zero-shot evaluation, we substantiate the practical utility of our approach in computational photography applications, showcasing its ability to generate highly detailed SI depth maps and achieve generalization in diverse scenarios.
@INPROCEEDINGS{miangolehSIDepth,
  author={S. Mahdi H. Miangoleh and Mahesh Reddy and Ya\u{g}{\i}z Aksoy},
  title={Scale-Invariant Monocular Depth Estimation via SSI Depth},
  booktitle={Proc. SIGGRAPH},
  year={2024},
}
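As rough intuition for the sparse ordinal idea above, a pairwise ranking loss on sparsely sampled pixel pairs can be sketched in numpy as below; this is a generic formulation for illustration, not the paper's exact sparse ordinal loss.

import numpy as np

def sparse_ordinal_loss(pred, gt, n_pairs=1000, margin=0.05, rng=None):
    """Generic pairwise ordinal loss on sparsely sampled pixel pairs.

    For each sampled pair (a, b), if the ground truth says a is farther
    than b by more than 'margin', the prediction is penalized unless it
    preserves that ordering. (A sketch; the paper's loss may differ.)
    """
    rng = rng or np.random.default_rng(0)
    p, g = pred.ravel(), gt.ravel()
    a = rng.integers(0, p.size, n_pairs)
    b = rng.integers(0, p.size, n_pairs)
    # +1 if a is farther, -1 if b is farther, 0 if roughly equal in gt.
    order = np.sign(g[a] - g[b]) * (np.abs(g[a] - g[b]) > margin)
    # Hinge on pairs with a definite ordering; pull equal pairs together.
    ranked = np.maximum(0.0, -order * (p[a] - p[b]) + margin) * (order != 0)
    equal = np.abs(p[a] - p[b]) * (order == 0)
    return (ranked + equal).mean()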

Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging
S. Mahdi H. Miangoleh*, Sebastian Dille*, Long Mai, Sylvain Paris, and Yağız Aksoy
CVPR, 2021
Neural networks have shown great abilities in estimating depth from a single image. However, the inferred depth maps are well below one-megapixel resolution and often lack fine-grained details, which limits their practicality. Our method builds on our analysis of how input resolution and scene structure affect depth estimation performance. We demonstrate that there is a trade-off between a consistent scene structure and high-frequency details, and merge low- and high-resolution estimations to take advantage of this duality using a simple depth merging network. We present a double estimation method that improves the whole-image depth estimation and a patch selection method that adds local details to the final result. We demonstrate that by merging estimations at different resolutions with changing context, we can generate multi-megapixel depth maps with a high level of detail using a pre-trained model.
@INPROCEEDINGS{Miangoleh2021Boosting,
  author={S. Mahdi H. Miangoleh and Sebastian Dille and Long Mai and Sylvain Paris and Ya\u{g}{\i}z Aksoy},
  title={Boosting Monocular Depth Estimation Models to High-Resolution via Content-Adaptive Multi-Resolution Merging},
  booktitle={Proc. CVPR},
  year={2021},
}
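The core double-estimation idea can be sketched as follows. Here estimate_depth stands in for any pre-trained monocular depth model, and the simple high-frequency transfer stands in for the paper's learned merging network; both are assumptions for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def estimate_depth(img):
    """Placeholder for a pre-trained monocular depth model."""
    return gaussian_filter(img.mean(axis=-1), sigma=3)  # dummy stand-in

def double_estimation(img, low_size=384, high_size=1024, sigma=5.0):
    """Merge a structurally consistent low-res estimate with a more
    detailed high-res estimate. The paper uses a learned merging
    network; transferring high frequencies is a simple stand-in."""
    h, w = img.shape[:2]
    def run_at(size):
        s = size / max(h, w)
        small = zoom(img, (s, s, 1), order=1)   # resize input
        d = estimate_depth(small)
        return zoom(d, (h / d.shape[0], w / d.shape[1]), order=1)
    base = run_at(low_size)     # consistent overall structure
    detail = run_at(high_size)  # sharper, but structurally less reliable
    return base + (detail - gaussian_filter(detail, sigma))

img = np.random.rand(480, 640, 3)
depth = double_estimation(img)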

Boosting Monocular Depth Estimation to High Resolution
Seyed Mahdi Hosseini Miangoleh
MSc Thesis, Simon Fraser University, 2022
Convolutional neural networks have shown a remarkable ability to estimate depth from a single image. However, the estimated depth maps are low resolution due to network structure and hardware limitations, only showing the overall scene structure and lacking fine details, which limits their applicability. We demonstrate that there is a trade-off between the consistency of the scene structure and the high-frequency details with respect to the input content and resolution. Building upon this duality, we present a double estimation framework to improve the depth estimation of the whole image and a patch selection step to add more local details. Our approach obtains multi-megapixel depth estimations with sharp details by merging estimations at different resolutions based on image content. A key strength of our approach is that we can employ any off-the-shelf pre-trained CNN-based monocular depth estimation model without requiring further finetuning.
@MASTERSTHESIS{bmd-msc,
  author={Seyed Mahdi Hosseini Miangoleh},
  title={Boosting Monocular Depth Estimation to High Resolution},
  year={2022},
  school={Simon Fraser University},
}
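The content-based patch selection step above can be approximated by picking tiles whose edge density exceeds the image average, since gradient-rich regions are where a higher-resolution estimate adds detail. The thresholding rule below is a simplification of the thesis' content-adaptive criterion, for illustration only.

import numpy as np
from scipy.ndimage import sobel

def select_detail_patches(img_gray, patch=192, stride=96):
    """Return (y, x) corners of patches with above-average edge density.

    A simplified stand-in for content-adaptive patch selection:
    gradient-rich tiles get re-estimated at high resolution and merged
    back into the base depth map.
    """
    mag = np.hypot(sobel(img_gray, axis=0), sobel(img_gray, axis=1))
    mean_density = mag.mean()
    corners = []
    h, w = img_gray.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            if mag[y:y + patch, x:x + patch].mean() > mean_density:
                corners.append((y, x))
    return corners

patches = select_detail_patches(np.random.rand(480, 640))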