Parallax Background Texture Generation

Brigham Okano, Shao Yu Shen, Sebastian Dille, and Yağız Aksoy
SIGGRAPH Posters, 2022

We present a framework that turns a landscape photograph (left) into a layered game texture (right) with a parallax effect.

Abstract

Art assets for games can be time-intensive to produce. Whether it is a full 3D world or a simpler 2D background, creating good-looking assets takes time and skills that are not always readily available. Time can be saved by reusing assets, but visible repetition hurts immersion. Procedural generation techniques can make repetition less uniform, but do not remove it entirely. Both approaches leave noticeable repetition in the image and still require significant investments of time and skill. Game developers in hobby, game-jam, or early prototyping situations may not have that time and skill available. We propose a framework for producing layered 2D backgrounds without significant artist time or skill. In our pipeline, the user provides segmented photographic input instead of creating traditional art, and receives game-ready assets in return. By using photographs as input, we achieve both a high level of realism in the resulting background texture and a shift away from manual work towards computational run-time, freeing developers for other tasks.
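The poster does not include code, but the parallax effect that the layered output enables is easy to sketch. Below is a minimal Python/NumPy illustration that scrolls horizontally tiling RGBA layers at speeds inversely proportional to their depths and alpha-composites them back to front. The function names, the inverse-depth scroll rule, and the layer format are illustrative assumptions, not the paper's pipeline.

import numpy as np

def parallax_offsets(layer_depths, camera_x):
    # Assumed rule: scroll speed is inversely proportional to depth, so
    # distant layers (sky, mountains) move slower than the foreground.
    return [camera_x / d for d in layer_depths]

def composite_frame(layers, layer_depths, camera_x, view_width):
    # 'layers' are float RGBA arrays in [0, 1], ordered back to front,
    # all sharing the same height; each layer tiles horizontally.
    height = layers[0].shape[0]
    frame = np.zeros((height, view_width, 3), dtype=np.float32)
    for layer, offset in zip(layers, parallax_offsets(layer_depths, camera_x)):
        cols = (np.arange(view_width) + int(offset)) % layer.shape[1]  # wrap for seamless tiling
        view = layer[:, cols, :]
        alpha = view[:, :, 3:4]
        frame = view[:, :, :3] * alpha + frame * (1.0 - alpha)  # "over" compositing
    return frame

Rendering successive frames with an increasing camera_x produces the scrolling parallax motion that the layered texture is designed for.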

This work was developed by Brigham and Shao Yu as an undergraduate class project for CMPT 461 - Computational Photography at SFU.

Implementation

GitHub Repository for CMPT 461 - Computational Photography

Paper

Video

BibTeX

@INPROCEEDINGS{parallaxBG,
author={Brigham Okano and Shao Yu Shen and Sebastian Dille and Ya\u{g}{\i}z Aksoy},
title={Parallax Background Texture Generation},
booktitle={SIGGRAPH Posters},
year={2022},
}

Other Posters at SIGGRAPH 2022 from Our Group


Interactive Editing of Monocular Depth

Obumneme Stanley Dukor, S. Mahdi H. Miangoleh, Mahesh Kumar Krishna Reddy, Long Mai, and Yağız Aksoy
SIGGRAPH Posters, 2022
Recent advances in computer vision have made 3D structure-aware editing of still photographs a reality. Such computational photography applications use a depth map that is automatically generated by monocular depth estimation methods to represent the scene structure. In this work, we present a lightweight, web-based interactive depth editing and visualization tool that adapts low-level conventional image editing operations for geometric manipulation to enable artistic control in the 3D photography workflow. Our tool provides real-time feedback on the geometry through a 3D scene visualization to make the depth map editing process more intuitive for artists. Our web-based tool is open-source and platform-independent to support wider adoption of 3D photography techniques in everyday digital photography.
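As a rough illustration of adapting low-level image-editing operations to depth maps, the sketch below restricts a Gaussian blur to a brush mask and back-projects the edited depth to a point cloud of the kind a live 3D preview could re-render after each stroke. This is a hedged sketch of the concept, not the tool's implementation; the pinhole camera model and the field-of-view parameter are assumptions.

import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_depth_region(depth, brush_mask, sigma=3.0):
    # A conventional image-editing operation (Gaussian blur) applied only
    # where the user's brush mask is set, acting as geometric smoothing.
    blurred = gaussian_filter(depth, sigma=sigma)
    return np.where(brush_mask, blurred, depth)

def depth_to_points(depth, fov_deg=60.0):
    # Back-project the depth map to a 3D point cloud under a simple pinhole
    # model; a 3D visualization would redraw these points after every edit.
    h, w = depth.shape
    f = 0.5 * w / np.tan(0.5 * np.radians(fov_deg))  # assumed horizontal FOV
    u, v = np.meshgrid(np.arange(w) - w / 2, np.arange(h) - h / 2)
    return np.stack([u * depth / f, v * depth / f, depth], axis=-1)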
@INPROCEEDINGS{interactiveDepth,
author={Obumneme Stanley Dukor and S. Mahdi H. Miangoleh and Mahesh Kumar Krishna Reddy and Long Mai and Ya\u{g}{\i}z Aksoy},
title={Interactive Editing of Monocular Depth},
booktitle={SIGGRAPH Posters},
year={2022},
}

DynaPix: Normal Map Pixelization for Dynamic Lighting

Gerardo Gandeaga, Denys Iliash, Chris Careaga, and Yağız Aksoy
SIGGRAPH Posters, 2022
This work introduces DynaPix, a Krita extension that automatically generates pixelated images and surface normals from an input image. DynaPix helps pixel artists and game developers develop 8-bit-style games more efficiently and bring them to life with dynamic lighting through normal maps that can be used in modern game engines such as Unity. The extension offers artists a degree of flexibility and allows for further refinement of the generated artwork. Powered by out-of-the-box solutions, DynaPix integrates seamlessly into the artistic workflow.
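As a rough sketch of the two steps such a tool automates, the Python/NumPy snippet below pixelates an image by block averaging and derives a normal map by treating luminance as a height field, a common off-the-shelf approach. The block size, strength parameter, and luminance-as-height assumption are illustrative choices, not DynaPix's actual method.

import numpy as np

def pixelate(img, block=8):
    # Pixelate by averaging non-overlapping blocks for an 8-bit look.
    h, w, c = img.shape
    h, w = h - h % block, w - w % block
    return img[:h, :w].reshape(h // block, block, w // block, block, c).mean((1, 3))

def normal_map(img, strength=2.0):
    # Treat luminance as a height field and derive per-pixel normals from
    # its gradients; 'strength' scales how pronounced the relief appears.
    height = img[..., :3].mean(-1)
    dy, dx = np.gradient(height)
    n = np.stack([-dx * strength, -dy * strength, np.ones_like(height)], axis=-1)
    n /= np.linalg.norm(n, axis=-1, keepdims=True)
    return 0.5 * (n + 1.0)  # encode [-1, 1] normals as [0, 1] RGB

Running normal_map(pixelate(img)) yields a coarse normal map that matches the pixel-art resolution and can be imported into an engine such as Unity for dynamic lighting.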
@INPROCEEDINGS{dynapix,
author={Gerardo Gandeaga and Denys Iliash and Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Dyna{P}ix: Normal Map Pixelization for Dynamic Lighting},
booktitle={SIGGRAPH Posters},
year={2022},
}