DynaPix - Normal Map Pixelization for Dynamic Lighting

Gerardo Gandeaga, Denys Iliash, Chris Careaga, and Yağız Aksoy
SIGGRAPH Posters, 2022

DynaPix is a Krita extension that uses an existing pixelization engine and a neural network surface normal estimator to generate pixelated images and their corresponding normal maps. These pixelated representations can be easily integrated into modern game engines for dynamic relighting.

Abstract

This work introduces DynaPix, a Krita extension that automatically generates pixelated images and surface normals from an input image. DynaPix helps pixel artists and game developers more efficiently develop 8-bit-style games and bring them to life with dynamic lighting through normal maps that can be used in modern game engines such as Unity. The extension offers artists a degree of flexibility and allows for further refinement of the generated artwork. Powered by out-of-the-box solutions, DynaPix integrates seamlessly into the artistic workflow.
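
To make the pipeline concrete, the following is a minimal Python sketch of the normal-map pixelization step, assuming block averaging as the pixelizer and a unit-length (H, W, 3) normal map from the estimator; the block size `factor` and the function name are illustrative, not the extension's actual API.

import numpy as np

def pixelize_normals(normals: np.ndarray, factor: int = 8) -> np.ndarray:
    """Pixelize a unit-length (H, W, 3) normal map with block averaging."""
    h, w, _ = normals.shape
    h, w = h - h % factor, w - w % factor          # crop to a multiple of the block size
    blocks = normals[:h, :w].reshape(h // factor, factor, w // factor, factor, 3)
    avg = blocks.mean(axis=(1, 3))                 # one normal per pixel-art block
    # Averaging shortens the vectors, so renormalize before shading.
    avg /= np.linalg.norm(avg, axis=-1, keepdims=True) + 1e-8
    return np.repeat(np.repeat(avg, factor, axis=0), factor, axis=1)

Renormalizing after averaging matters: a game engine treats each texel as a unit surface normal, and non-unit vectors would dim or distort the dynamic lighting.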

This work was developed by Gerardo and Denys as an undergraduate class project for CMPT 461 - Computational Photography at SFU.

Krita Extension and Implementation

GitHub Repository for CMPT 461 - Computational Photography

Paper

Video

BibTeX

@INPROCEEDINGS{dynapix,
author={Gerardo Gandeaga and Denys Iliash and Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Dyna{P}ix: Normal Map Pixelization for Dynamic Lighting},
booktitle={SIGGRAPH Posters},
year={2022},
}

More posters from CMPT 461/769: Computational Photography


Datamoshing with Optical Flow
Chris Careaga, Mahesh Kumar Krishna Reddy, and Yağız Aksoy
SIGGRAPH Asia Posters, 2023
We propose a simple method for emulating the effect of data moshing, without relying on the corruption of encoded video, and explore its use in different application scenarios. Like traditional data moshing, we apply motion information to mismatched visual data. Our approach uses off-the-shelf optical flow estimation to generate motion vectors for each pixel. Our core algorithm can be implemented in a handful of lines but unlocks multiple video editing effects. The use of accurate optical flow rather than compression data also creates a more natural transition without block artifacts. We hope our method provides artists and content creators with more creative freedom over the process of data moshing.
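
As a rough illustration of the core algorithm, here is a short Python sketch assuming OpenCV's Farneback method as the off-the-shelf flow estimator (the poster does not prescribe a specific one); `mosh_step` and its arguments are illustrative names.

import cv2
import numpy as np

def mosh_step(donor: np.ndarray, prev: np.ndarray, nxt: np.ndarray) -> np.ndarray:
    """Warp the mismatched `donor` frame by the motion from `prev` to `nxt`."""
    g0 = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(nxt, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = flow.shape[:2]
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    # Pull each output pixel from where the flow says its content came from.
    map_x = (xs - flow[..., 0]).astype(np.float32)
    map_y = (ys - flow[..., 1]).astype(np.float32)
    return cv2.remap(donor, map_x, map_y, cv2.INTER_LINEAR)

Feeding each warped output back in as the next donor frame accumulates motion over time, reproducing the smeared look of traditional datamoshing without block artifacts.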
@INPROCEEDINGS{datamosh,
author={Chris Careaga and Mahesh Kumar Krishna Reddy and Ya\u{g}{\i}z Aksoy},
title={Datamoshing with Optical Flow},
booktitle={SIGGRAPH Asia Posters},
year={2023},
}

Parallax Background Texture Generation
Brigham Okano, Shao Yu Shen, Sebastian Dille, and Yağız Aksoy
SIGGRAPH Posters, 2022
Art assets for games can be time-intensive to produce. Whether it is a full 3D world or a simpler 2D background, creating good-looking assets takes time and skills that are not always readily available. Time can be saved by using repeating assets, but visible repetition hurts immersion. Procedural generation techniques can help make repetition less uniform, but do not remove it entirely. Both approaches leave noticeable levels of repetition in the image and still require significant investments of time and skill. Video game developers in hobby, game jam, or early prototyping situations may not have access to the required time and skill. We propose a framework to produce layered 2D backgrounds without the need for significant artist time or skill. In our pipeline, the user provides segmented photographic input, instead of creating traditional art, and receives game-ready assets. By utilizing photographs as input, we achieve a high level of realism in the resulting background texture as well as a shift away from manual work towards computational run-time, which frees up developers for other work.
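
As a sketch of how such layered output could be consumed at run time, here is a minimal Python example that scrolls background layers with per-layer parallax factors; the Layer structure and its depth values are illustrative assumptions, not the framework's actual output format.

from dataclasses import dataclass
from PIL import Image

@dataclass
class Layer:
    image: Image.Image   # game-ready, horizontally tileable layer texture
    depth: float         # parallax factor in [0, 1]: far layers scroll slower

def compose(layers: list[Layer], camera_x: int, view_w: int, view_h: int) -> Image.Image:
    """Paint layers back to front, each offset by its parallax factor."""
    frame = Image.new("RGBA", (view_w, view_h))
    for layer in sorted(layers, key=lambda l: l.depth):    # far layers first
        tile = layer.image.convert("RGBA")
        offset = int(camera_x * layer.depth) % tile.width
        x = -offset
        while x < view_w:                                  # tile horizontally across the view
            frame.paste(tile, (x, 0), tile)                # tile doubles as its own alpha mask
            x += tile.width
    return frame

Because each layer moves at its own rate, the visible overlap between repeats shifts as the camera pans, which helps mask the repetition the abstract describes.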
@INPROCEEDINGS{parallaxBG,
author={Brigham Okano and Shao Yu Shen and Sebastian Dille and Ya\u{g}{\i}z Aksoy},
title={Parallax Background Texture Generation},
booktitle={SIGGRAPH Posters},
year={2022},
}