A Sparse Sampling-Based Framework for Semantic Fast-Forward of First-Person Videos

Abstract

Technological advances in sensors have paved the way for digital cameras to become increasingly ubiquitous, which, in turn, has led to the popularity of the self-recording culture. As a result, the amount of visual data on the Internet is growing in the opposite direction of users' available time and patience. Thus, most uploaded videos are doomed to be forgotten and left unwatched, stashed away in some computer folder or website. In this paper, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem. By smoothing frame transitions and filling visual gaps between segments, our approach accelerates first-person videos, emphasizing the relevant segments while avoiding visual discontinuities. Experiments conducted on controlled videos and on an unconstrained dataset of First-Person Videos (FPVs) show that, when creating fast-forward videos, our method retains as much relevant information and smoothness as state-of-the-art techniques, but in less processing time.
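For readers unfamiliar with this kind of formulation, the sketch below gives one generic instance of a weighted minimum reconstruction objective for frame selection. The notation (X, W, s_j, lambda) is ours, introduced only for illustration; the paper's exact objective and weighting terms may differ.

```latex
% A generic weighted row-sparse reconstruction objective for frame selection.
% Assumed notation (not necessarily the paper's):
%   X in R^{d x n} stacks one feature vector per frame of the input video;
%   W in R^{n x n} holds reconstruction coefficients;
%   s_j > 0 is a semantic relevance weight for frame j;
%   lambda > 0 trades off reconstruction fidelity against sparsity.
\begin{equation}
  \min_{W} \; \frac{1}{2}\,\lVert X - X W \rVert_F^2
  \;+\; \lambda \sum_{j=1}^{n} \frac{1}{s_j}\,\lVert W_{j,\cdot} \rVert_2
\end{equation}
% Rows of W with nonzero norm indicate the selected frames; shrinking the
% penalty of semantically relevant frames (large s_j) makes them more likely
% to survive the selection and appear in the fast-forward video.
```

In this kind of formulation, the row-wise group penalty forces most rows of W to vanish, so the video is "reconstructed" from a sparse subset of its own frames, and the per-frame weights bias that subset toward the semantically relevant segments.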

Publication
In IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 4, pp. 1438-1444, 1 April 2021