Datamoshing with Optical Flow

Chris Careaga, Mahesh Kumar Krishna Reddy, Yağız Aksoy
SIGGRAPH Asia Posters, 2023

We propose an algorithm for datamoshing using optical flow. Our algorithm is general and has various applications. Using multiple video sequences, we can create perplexing video transitions in which the visual information of one video is distorted or constructed using the motion of another (bottom row). Using a single video clip, we can create seamlessly looping GIFs with interesting glitch-art effects (top row).

Abstract

We propose a simple method for emulating the effect of datamoshing without relying on the corruption of encoded video, and we explore its use in different application scenarios. Like traditional datamoshing, we apply motion information to mismatched visual data. Our approach uses off-the-shelf optical flow estimation to generate motion vectors for each pixel. Our core algorithm can be implemented in a handful of lines but unlocks multiple video editing effects. The use of accurate optical flow rather than compression data also creates a more natural transition without block artifacts. We hope our method provides artists and content creators with more creative freedom over the process of datamoshing.
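The core idea above can be sketched in a few lines: take dense flow fields estimated from one video (by any off-the-shelf method, e.g. Farnebäck or a learned estimator) and repeatedly warp a frame from a different video along that motion. The sketch below is a minimal NumPy illustration under our own assumptions, not the authors' implementation; the function names `warp` and `datamosh` and the nearest-neighbor sampling are our simplifications.

```python
import numpy as np

def warp(image, flow):
    """Backward-warp `image` (H, W) or (H, W, C) by a dense flow field (H, W, 2).

    Output pixel (y, x) samples image[y - flow[y, x, 1], x - flow[y, x, 0]]
    using nearest-neighbor lookup, clamped to the image bounds.
    """
    h, w = image.shape[:2]
    grid_y, grid_x = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.round(grid_x - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(grid_y - flow[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def datamosh(style_frame, flows):
    """Drag a single frame's pixels along a sequence of flow fields.

    `style_frame` supplies the visual data; `flows` is a list of per-frame
    optical-flow fields estimated from a *different* (or the same) video,
    which supplies the motion. Returns the moshed frame sequence.
    """
    frames = [style_frame]
    for flow in flows:
        frames.append(warp(frames[-1], flow))
    return frames
```

A production version would use subpixel (bilinear) sampling and a real flow estimator, but the warp-and-accumulate loop is the whole effect.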

This work was developed by Chris and Mahesh as a class project for CMPT 769 - Computational Photography at SFU.

Paper

Video (coming soon)

BibTeX

@INPROCEEDINGS{datamosh,
author={Chris Careaga and Mahesh Kumar Krishna Reddy and Ya\u{g}{\i}z Aksoy},
title={Datamoshing with Optical Flow},
booktitle={SIGGRAPH Asia Posters},
year={2023},
}

More posters from CMPT 461/769: Computational Photography


Gerardo Gandeaga, Denys Iliash, Chris Careaga, and Yağız Aksoy
SIGGRAPH Posters, 2022
This work introduces DynaPix, a Krita extension that automatically generates pixelated images and surface normals from an input image. DynaPix helps pixel artists and game developers more efficiently develop 8-bit-style games and bring them to life with dynamic lighting through normal maps that can be used in modern game engines such as Unity. The extension offers artists a degree of flexibility and allows further refinement of the generated artwork. Powered by out-of-the-box solutions, DynaPix is a tool that integrates seamlessly into the artistic workflow.
@INPROCEEDINGS{dynapix,
author={Gerardo Gandeaga and Denys Iliash and Chris Careaga and Ya\u{g}{\i}z Aksoy},
title={Dyna{P}ix: Normal Map Pixelization for Dynamic Lighting},
booktitle={SIGGRAPH Posters},
year={2022},
}

Brigham Okano, Shao Yu Shen, Sebastian Dille, and Yağız Aksoy
SIGGRAPH Posters, 2022
Art assets for games can be time-intensive to produce. Whether it is a full 3D world or a simpler 2D background, creating good-looking assets takes time and skills that are not always readily available. Time can be saved by using repeating assets, but visible repetition hurts immersion. Procedural generation techniques can make repetition less uniform, but they do not remove it entirely. Both approaches leave noticeable levels of repetition in the image and require significant investments of time and skill. Video game developers in hobby, game-jam, or early prototyping situations may not have access to the required time and skill. We propose a framework to produce layered 2D backgrounds without significant artist time or skill. In our pipeline, the user provides segmented photographic input instead of creating traditional art, and receives game-ready assets. By using photographs as input, we achieve a high level of realism in the resulting background texture while shifting manual work towards computation, freeing developers for other tasks.
@INPROCEEDINGS{parallaxBG,
author={Brigham Okano and Shao Yu Shen and Sebastian Dille and Ya\u{g}{\i}z Aksoy},
title={Parallax Background Texture Generation},
booktitle={SIGGRAPH Posters},
year={2022},
}