Our journal paper has been accepted to the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)!

Our paper is titled “A Sparse Sampling-based Framework for Semantic Fast-Forward of First-Person Videos”. For more details about the paper, please visit the Publications Section.


Our paper has been accepted to the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)! Our RL agent is guided by documents to take you straight to the point in videos!

Our paper is titled “Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data”. For more details about the paper, please visit the Publications Section.


Our paper titled “Personalizing Fast-Forward Videos Based on Visual and Textual Features from Social Network” has been accepted to the IEEE Winter Conference on Applications of Computer Vision (WACV) 2020!

For more details about the paper, please visit the Publications Section. You can get more info about the project on our page: https://www.verlab.dcc.ufmg.br/semantic-hyperlapse/


Great News! Our tutorial proposal titled “A Hands-on Tutorial on Fast Forwarding First-Person Videos” was accepted and presented at the 32nd Conference on Graphics, Patterns and Images (SIBGRAPI) 2019!

Our survey paper titled “Fast-Forward Methods for Egocentric Videos: A Review” is now available through this DOI link.

For more details about the paper, please visit the Publications Section. You can get more info about the project on our page: https://www.verlab.dcc.ufmg.br/semantic-hyperlapse/


Our paper titled “Making a long story short: A Multi-Importance fast-forwarding egocentric videos with the emphasis on relevant objects” has been accepted for publication in the Journal of Visual Communication and Image Representation (JVCI) 2018.

For more details about the paper, please visit the Publications Section. You can get more info about the project on our page: https://www.verlab.dcc.ufmg.br/semantic-hyperlapse/


Selected Publications

In this paper, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem.
In TPAMI, to appear, 2020

In this paper, we present a novel methodology based on a reinforcement learning formulation to accelerate instructional videos. Our agent is guided by textual and visual cues to select which frames to remove in order to shrink the input video. Additionally, we propose a novel network, the Visually-guided Document Attention Network (VDAN), which generates a highly discriminative embedding space to represent both textual and visual data.
In CVPR, to appear, 2020

In this work, we propose a novel methodology to compose personalized fast-forward videos by selecting frames based on semantic information extracted from images and text in social networks.
In WACV, 2020

In this paper, we review representative methods for both fast-forward and semantic fast-forward of egocentric videos and discuss future directions for the area.
In SIBGRAPI-T, 2019

In this work, we address the problem of creating smooth fast-forward videos without losing the relevant content. We present a new adaptive frame selection formulated as a weighted minimum reconstruction problem which, combined with a smoothing frame transition method, accelerates first-person videos while emphasizing the relevant segments and avoiding visual discontinuities (a toy sketch of this selection idea follows this list).
In CVPR, 2018

In this work, we present the Multi-Importance Fast-Forward (MIFF), a fully automatic methodology to fast-forward egocentric videos while emphasizing the relevant objects.
In JVCI, Volume 53, Pages 55-64, 2018

In this work, we propose a novel methodology to compose the new fast-forward video by selecting frames based on semantic information extracted from images.
In ICIP, 2016
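To make the “weighted minimum reconstruction” formulation mentioned above a bit more concrete, here is a small toy sketch. It is not the algorithm from our papers: it is a hypothetical greedy stand-in that picks a subset of frames whose features reconstruct, in a weighted least-squares sense, the features of every frame of the video, with the weights encoding per-frame semantic relevance. The names X, weights, and select_frames are illustrative assumptions, not code from the project.

    # Toy sketch only (hypothetical, not the papers' method): greedy weighted
    # minimum-reconstruction frame selection.
    import numpy as np

    def select_frames(X, weights, n_keep):
        """Greedily pick n_keep frames whose feature vectors best reconstruct,
        in a weighted least-squares sense, the features of all frames."""
        selected = []
        for _ in range(n_keep):
            best_idx, best_err = None, np.inf
            for i in range(X.shape[0]):
                if i in selected:
                    continue
                D = X[selected + [i]]  # features of the candidate kept frames
                # Least-squares reconstruction of every frame from the kept ones
                coeffs, *_ = np.linalg.lstsq(D.T, X.T, rcond=None)
                residual = X - coeffs.T @ D
                err = np.sum(weights * np.sum(residual ** 2, axis=1))
                if err < best_err:
                    best_idx, best_err = i, err
            selected.append(best_idx)
        return sorted(selected)

    # Example: keep 10 of 100 frames, weighting the middle of the video as
    # semantically relevant (in practice the weights would come from a detector).
    X = np.random.rand(100, 64)  # one feature vector per frame
    weights = np.exp(-((np.arange(100) - 50) / 20.0) ** 2)
    print(select_frames(X, weights, 10))

In the actual papers the selection is posed as a sparse optimization and is combined with a smoothing frame-transition step; this greedy loop is only meant to convey the intuition of reconstructing the whole video from a few semantically weighted frames.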

All Publications

(2020). A Sparse Sampling-based Framework for Semantic Fast-Forward of First-Person Videos. In TPAMI, to appear.


(2020). Straight to the Point: Fast-forwarding Videos via Reinforcement Learning Using Textual Data. In CVPR, to appear.


(2019). Personalizing Fast-Forward Videos Based on Visual and Textual Features from Social Network. In WACV, 2020.


(2019). Fast-Forward Methods for Egocentric Videos: A Review. In SIBGRAPI-T, 2019.


(2018). A Weighted Sparse Sampling and Smoothing Frame Transition Approach for Semantic Fast-Forward First-Person Videos. In CVPR, 2018.


(2018). Making a long story short: A Multi-Importance fast-forwarding egocentric videos with the emphasis on relevant objects. In JVCI, Volume 53, Pages 55-64, 2018.


(2017). Semantic Hyperlapse for Egocentric Videos. In WTD@SIBGRAPI, 2017.


(2016). Towards Semantic Fast-Forward and Stabilized Egocentric Videos. In ECCVW, 2016.


(2016). Fast-Forward Video Based on Semantic Extraction. In ICIP, 2016.


Awards

My M.Sc. Dissertation, titled “Semantic Hyperlapse for Egocentric Videos”, was awarded Best Computer Vision/Image Processing/Pattern Recognition M.Sc. Dissertation at the Workshop of Theses and Dissertations of SIBGRAPI 2017, the 30th Conference on Graphics, Patterns and Images.

Details about the paper can be found in the Publications Section. The full dissertation is also available for download.


My presentation was awarded Best Master’s Dissertation Presentation at the Week of Graduate Seminars held by the Departamento de Ciência da Computação (DCC-UFMG).

My M.Sc. Dissertation is titled “Semantic Hyperlapse for Egocentric Videos” and it can be downloaded here.

Click here for more info about the award (in Portuguese).


Contact