At Texas Instruments to give a Talk on “Video Stabilization and Rolling Shutter Removal on YouTube”

August 22nd, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Presentations, Vivek Kwatra

Video Stabilization and Rolling Shutter Removal on YouTube

Abstract

In this talk, I will go over a variety of approaches my group is working on for video analysis and enhancement. In particular, I will describe our approach to video stabilization (currently deployed on YouTube) and its extensions. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google. The method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions [1]. We compute camera paths that are optimally partitioned into constant, linear, and parabolic segments, mimicking the camera motions employed by professional cinematographers. To this end, we propose a linear programming framework that minimizes the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot on mobile phones is rolling shutter distortion. Modern CMOS cameras capture a frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. I will demonstrate a solution based on a novel mixture model of homographies, parametrized by scanline blocks, that corrects these rolling shutter distortions [2]. Our method relies neither on a priori knowledge of the readout time nor on prior camera calibration. A thorough evaluation based on a user study demonstrates a general preference for our algorithm.

I will conclude the talk by showcasing a live demo of the stabilizer and, time permitting, I will discuss some other projects we are working on.

[1] M. Grundmann, V. Kwatra, and I. Essa, “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” CVPR 2011, www.cc.gatech.edu/cpl/projects/videostabilization

[2] M. Grundmann, V. Kwatra, D. Castro, and I. Essa, “Calibration-Free Rolling Shutter Removal,” ICCP 2012 (Best Paper Award), www.cc.gatech.edu/cpl/projects/rollingshutter


At CVPR 2012, in Providence, RI, June 16 – 21, 2012

June 17th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Presentations, Vivek Kwatra

IEEE CVPR 2012 is in Providence, RI, June 16–21, 2012.

Busy week ahead meeting good friends and colleagues. Here are some highlights of what my group is involved with.

Paper in Main Conference

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [Poster on Tuesday 6/19/2012]

Demo in Main Conference

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal” (Paper in ICCP 2012). [WEBSITE] [VIDEO] [Demo on Monday and Tuesday (6/18–19) at the Google Booth]

Invited Talk in Workshop


Video Stabilization on YouTube

May 6th, 2012 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra

Here is an excerpt from the Google Research Blog about our video stabilization on YouTube, which has now been further improved.

One thing we have been working on within Research at Google is developing methods for making casual videos look more professional, thereby providing users with a better viewing experience. Professional videos have several characteristics that differentiate them from casually shot videos. For example, in order to tell a story, cinematographers carefully control lighting and exposure and use specialized equipment to plan camera movement.

We have developed a technique that mimics professional camera moves and applies them to videos recorded by handheld devices. Cinematographers use specialized equipment such as tripods and dollies to plan their camera paths and hold them steady. In contrast, think of a video you shot using a mobile phone camera. How steady was your hand and were you able to anticipate an interesting moment and smoothly pan the camera to capture that moment? To bridge these differences, we propose an algorithm that automatically determines the best camera path and recasts the video as if it were filmed using stabilization equipment.

Via Video Stabilization on YouTube.


Paper in IEEE ICCP 2012: “Calibration-Free Rolling Shutter Removal”

April 28th, 2012 Irfan Essa Posted in Computational Photography and Video, Daniel Castro, ICCP, Matthias Grundmann, Vivek Kwatra

Calibration-Free Rolling Shutter Removal

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal,” in Proceedings of IEEE Conference on Computational Photography (ICCP), 2012. (Best Paper Award) [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @InProceedings{    2012-Grundmann-CRSR,
      author  = {Matthias Grundmann and Vivek Kwatra and Daniel
          Castro and Irfan Essa},
      awards  = {(Best Paper Award)},
      blog    = {http://prof.irfanessa.com/2012/04/28/paper-iccp12/},
      booktitle  = {Proceedings of IEEE Conference on Computational
          Photography (ICCP)},
      doi    = {10.1109/ICCPhot.2012.6215213},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2012-Grundmann-CRSR.pdf},
      publisher  = {IEEE Computer Society},
      title    = {Calibration-Free Rolling Shutter Removal},
      url    = {http://www.cc.gatech.edu/cpl/projects/rollingshutter/},
      video    = {http://www.youtube.com/watch?v=_Pr_fpbAok8},
      year    = {2012}
    }

Abstract

We present a novel algorithm for efficient removal of rolling shutter distortions in uncalibrated streaming videos. Our proposed method is calibration free as it does not need any knowledge of the camera used, nor does it require calibration using specially recorded calibration sequences. Our algorithm can perform rolling shutter removal under varying focal lengths, as in videos from CMOS cameras equipped with an optical zoom. We evaluate our approach across a broad range of cameras and video sequences demonstrating robustness, scalability, and repeatability. We also conducted a user study, which demonstrates a preference for the output of our algorithm over other state-of-the-art methods. Our algorithm is computationally efficient, easy to parallelize, and robust to challenging artifacts introduced by various cameras with differing technologies.
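To make the scanline-block idea concrete, here is a toy numeric sketch (not our implementation; the block homographies, blending scheme, and values below are purely illustrative). Each block of scanlines gets its own homography, and a per-row homography is obtained by blending the two nearest block estimates; applying such per-row homographies is the core operation both for modeling a rolling shutter shear and for undoing it.

```python
import numpy as np

def row_homography(H_blocks, row, height):
    """Linearly interpolate a per-row homography from per-block estimates.

    H_blocks: (k, 3, 3) array of homographies, one per scanline block,
    with block centers assumed evenly spaced over the frame height.
    """
    k = len(H_blocks)
    pos = row / height * (k - 1)   # fractional block position of this row
    i = int(np.floor(pos))
    j = min(i + 1, k - 1)
    a = pos - i
    return (1 - a) * H_blocks[i] + a * H_blocks[j]

def warp_point(H, x, y):
    """Apply a homography to a 2D point with homogeneous normalization."""
    p = H @ np.array([x, y, 1.0])
    return p[0] / p[2], p[1] / p[2]

# Toy example: two blocks, identity at the top of the frame and a small
# horizontal shear at the bottom, mimicking the shear a rolling shutter
# introduces during a fast pan.
H_top = np.eye(3)
H_bot = np.array([[1.0, 0.1, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
H_blocks = np.stack([H_top, H_bot])

# The middle scanline of a 480-row frame gets half the bottom shear.
H_mid = row_homography(H_blocks, row=240, height=480)
x, y = warp_point(H_mid, 100.0, 240.0)
```

In the actual method the block homographies are estimated jointly from feature tracks rather than given, but the per-row mixture applied here is the same basic parametrization.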

Presented at the IEEE International Conference on Computational Photography (ICCP), Seattle, WA, April 27–29, 2012.

Winner of the Best Paper Award.

 


Award (2012): Best Computer Vision Paper Award by Google Research

March 22nd, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Papers, Vivek Kwatra

Our following paper was just recognized as an Excellent Paper for 2011 in Computer Vision by Google Research.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @InProceedings{    2011-Grundmann-AVSWROCP,
      author  = {M. Grundmann and V. Kwatra and I. Essa},
      blog    = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      booktitle  = {Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)},
      demo    = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      doi    = {10.1109/CVPR.2011.5995525},
      month    = {June},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      publisher  = {IEEE Computer Society},
      title    = {Auto-Directed Video Stabilization with Robust L1
          Optimal Camera Paths},
      url    = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      video    = {http://www.youtube.com/watch?v=i5keG1Y810U},
      year    = {2011}
    }

Casually shot videos captured by handheld or mobile cameras suffer from a significant amount of shake. Existing in-camera stabilization methods dampen high-frequency jitter but do not suppress low-frequency movements and bounces, such as those observed in videos captured by a walking person. On the other hand, most professionally shot videos usually consist of carefully designed camera configurations, using specialized equipment such as tripods or camera dollies, and employ ease-in and ease-out for transitions. Our stabilization technique automatically converts casual shaky footage into more pleasant and professional looking videos by mimicking these cinematographic principles. The original, shaky camera path is divided into a set of segments, each approximated by either constant, linear, or parabolic motion, using an algorithm based on robust L1 optimization. The stabilizer has been part of the YouTube Editor (youtube.com/editor) since March 2011.

via Research Blog.


In the News (2011): “Shake it like an Instagram picture — Online Video News”

September 15th, 2011 Irfan Essa Posted in Collaborators, Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra, WWW

Our work, as described in the following paper, is now showcased on YouTube.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @InProceedings{    2011-Grundmann-AVSWROCP,
      author  = {M. Grundmann and V. Kwatra and I. Essa},
      blog    = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      booktitle  = {Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)},
      demo    = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      doi    = {10.1109/CVPR.2011.5995525},
      month    = {June},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      publisher  = {IEEE Computer Society},
      title    = {Auto-Directed Video Stabilization with Robust L1
          Optimal Camera Paths},
      url    = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      video    = {http://www.youtube.com/watch?v=i5keG1Y810U},
      year    = {2011}
    }

YouTube effects: Shake it like an Instagram picture

via YouTube effects: Shake it like an Instagram picture — Online Video News.

YouTube users can now apply a number of Instagram-like effects to their videos, giving them a cartoonish or Lomo-like look with the click of a button. The effects are part of a new editing feature that also includes cropping and advanced image stabilization.

Taking the shaking out of video uploads should go a long way towards making some of the amateur footage captured on mobile phones more watchable, but it can also be resource-intensive — which is why Google’s engineers invented an entirely new approach toward image stabilization.

The new editing functionality will be part of YouTube’s video page, where a new “Edit video” button will offer access to filters and other editing functionality. This type of post-processing is separate from YouTube’s video editor, which allows users to produce new videos based on existing clips.


DEMO (2011): Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths – from Google Research Blog

June 20th, 2011 Irfan Essa Posted in Computational Photography and Video, In The News, Matthias Grundmann, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Vivek Kwatra

via Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths – Google Research Blog.

Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths
Posted by Matthias Grundmann, Vivek Kwatra, and Irfan Essa

Earlier this year, we announced the launch of new features on the YouTube Video Editor, including stabilization for shaky videos, with the ability to preview them in real-time. The core technology behind this feature is detailed in this paper, which will be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011).

Casually shot videos captured by handheld or mobile cameras suffer from a significant amount of shake. Existing in-camera stabilization methods dampen high-frequency jitter but do not suppress low-frequency movements and bounces, such as those observed in videos captured by a walking person. On the other hand, most professionally shot videos usually consist of carefully designed camera configurations, using specialized equipment such as tripods or camera dollies, and employ ease-in and ease-out for transitions. Our goal was to devise a completely automatic method for converting casual shaky footage into more pleasant and professional looking videos.

Our technique mimics the cinematographic principles outlined above by automatically determining the best camera path using a robust optimization technique. The original, shaky camera path is divided into a set of segments, each approximated by either a constant, linear or parabolic motion. Our optimization finds the best of all possible partitions using a computationally efficient and stable algorithm.

To achieve real-time performance on the web, we distribute the computation across multiple machines in the cloud. This enables us to provide users with a real-time preview and interactive control of the stabilized result. Above we provide a video demonstration of how to use this feature on the YouTube Editor. We will also demo this live at Google’s exhibition booth in CVPR 2011.

For more details, see the Project Site, the YouTube video of the system, the paper in PDF, and a technical video of the work.


 


Paper (2011) in IEEE CVPR: “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths”

June 19th, 2011 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra

Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [Google Research Blog] [BIBTEX]
    @InProceedings{    2011-Grundmann-AVSWROCP,
      author  = {M. Grundmann and V. Kwatra and I. Essa},
      booktitle  = {Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)},
      month    = {June},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP},
      publisher  = {IEEE Computer Society},
      title    = {Auto-Directed Video Stabilization with Robust L1
          Optimal Camera Paths},
      url    = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      video    = {http://www.youtube.com/watch?v=i5keG1Y810U},
      year    = {2011}
    }

Abstract

We present a novel algorithm for automatically applying constrainable, L1-optimal camera paths to generate stabilized videos by removing undesired motions. Our goal is to compute camera paths that are composed of constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, our algorithm is based on a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering of camera paths that only suppresses high-frequency jitter. We incorporate additional constraints on the path of the camera directly in our algorithm, allowing for stabilized and retargeted videos. Our approach accomplishes this without the need for user interaction or costly 3D reconstruction of the scene, and works as a post-process for videos from any camera or from an online source.
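For readers who want to experiment, the L1 path objective can be sketched in one dimension with an off-the-shelf LP solver. This is a simplified illustration, not the production system: the weights, the single proximity constraint, and the 1D path are hypothetical choices, and the real method optimizes 2D camera transforms with crop-window inclusion and saliency constraints. The standard trick is to introduce slack variables so that each L1 derivative term becomes linear.

```python
import numpy as np
from scipy.optimize import linprog

def diff_matrix(n, order):
    """Dense finite-difference operator of the given order."""
    D = np.eye(n)
    for _ in range(order):
        D = D[1:] - D[:-1]
    return D

def l1_optimal_path(shaky, radius=1.0, weights=(10.0, 50.0, 100.0)):
    """Minimize w1*|Dp|_1 + w2*|D^2 p|_1 + w3*|D^3 p|_1 subject to
    |p_t - c_t| <= radius (the smoothed path must stay near the
    original path, standing in for the crop-window constraint)."""
    n = len(shaky)
    Ds = [diff_matrix(n, k) for k in (1, 2, 3)]
    sizes = [D.shape[0] for D in Ds]
    n_vars = n + sum(sizes)          # path values followed by slacks

    cost = np.zeros(n_vars)
    off = n
    for w, m in zip(weights, sizes):
        cost[off:off + m] = w        # slacks carry the L1 costs
        off += m

    A, b = [], []
    off = n
    for D, m in zip(Ds, sizes):
        E = np.zeros((m, n_vars)); E[np.arange(m), off + np.arange(m)] = 1.0
        Dp = np.zeros((m, n_vars)); Dp[:, :n] = D
        A.append(Dp - E);  b.append(np.zeros(m))   #  D p <= e
        A.append(-Dp - E); b.append(np.zeros(m))   # -D p <= e
        off += m
    P = np.zeros((n, n_vars)); P[:, :n] = np.eye(n)
    A.append(P);  b.append(shaky + radius)         #  p <= c + r
    A.append(-P); b.append(radius - shaky)         # -p <= r - c

    res = linprog(cost, A_ub=np.vstack(A), b_ub=np.concatenate(b),
                  bounds=[(None, None)] * n_vars, method="highs")
    return res.x[:n]

# Toy example: a linear dolly move corrupted by sinusoidal shake.
t = np.linspace(0.0, 10.0, 30)
shaky = t + 0.8 * np.sin(3.0 * t)
smooth = l1_optimal_path(shaky, radius=1.0)
```

With these weights the optimizer essentially recovers the underlying linear move: the smoothed path stays within the proximity radius of the shaky one while its second differences collapse toward zero.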


Presentation (2011) at IBPRIA 2011: “Spatio-Temporal Video Analysis and Visual Activity Recognition”

June 8th, 2011 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Presentations

A presentation on “Spatio-Temporal Video Analysis and Visual Activity Recognition” at the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA) 2011 in Las Palmas de Gran Canaria, Spain, June 8–10.

Abstract

My research group is focused on a variety of approaches to (a) low-level video analysis and synthesis and (b) recognizing activities in videos. In this talk, I will concentrate on two of our recent efforts: one aimed at robust spatio-temporal segmentation of video, and another at using motion and flow to recognize and predict actions from video.

In the first part of the talk, I will present an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. In this work, we begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. I will demonstrate a variety of examples of how this robust segmentation works, and will show additional examples of video retargeting that use spatio-temporal saliency derived from this segmentation approach. (Matthias Grundmann, Vivek Kwatra, Mei Han, Irfan Essa, CVPR 2010, in collaboration with Google Research.)
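The core grouping step can be illustrated with a toy single-level sketch: treat each space-time voxel as a graph node, connect it to its temporal and spatial neighbours, and merge across edges whose appearance difference is small. This is a deliberately simplified stand-in (the actual algorithm uses adaptive, Felzenszwalb-Huttenlocher-style merge criteria, flow-guided temporal edges, and repeated merging on the region graph; the fixed threshold `tau` here is hypothetical).

```python
import numpy as np

class UnionFind:
    """Minimal union-find for merging voxel regions."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def segment_volume(video, tau=0.1):
    """Group space-time voxels whose appearance difference is below tau.

    video: (T, H, W) array of intensities. Each voxel is linked to its
    +t, +y, and +x neighbours; edges cheaper than tau are merged.
    This is one level of the hierarchy; re-running the merge on the
    resulting region graph with looser criteria would build the tree.
    """
    T, H, W = video.shape
    idx = np.arange(T * H * W).reshape(T, H, W)
    uf = UnionFind(T * H * W)
    flat = video.ravel()
    for axis in range(3):                       # edges along t, y, x
        a = np.moveaxis(idx, axis, 0)
        src, dst = a[:-1].ravel(), a[1:].ravel()
        w = np.abs(flat[src] - flat[dst])
        for s, d in zip(src[w < tau], dst[w < tau]):
            uf.union(s, d)
    labels = np.array([uf.find(i) for i in range(T * H * W)])
    return labels.reshape(T, H, W)

# Toy example: a 2-frame clip whose left half is dark and right half is
# bright should segment into exactly two temporally coherent regions.
clip = np.zeros((2, 4, 4))
clip[:, :, 2:] = 1.0
labels = segment_volume(clip, tau=0.5)
```

Because temporal edges are merged with the same criterion as spatial ones, each region extends through time, which is what gives the segmentation its temporal coherence.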

In the second part of this talk, I will show that constrained multi-agent events can be analyzed and even predicted from video. Such analysis requires estimating the global movements of all players in the scene at any time, and is needed for modeling and predicting how the multi-agent play evolves over time on the playing field. To this end, we propose a novel approach to detect the locations where the play evolution will proceed, e.g., where interesting events will occur, by tracking player positions and movements over time. To achieve this, we extract the ground-level sparse movement of players in each time step, and then generate a dense motion field. Using this field, we detect locations where the motion converges, implying positions toward which the play is evolving. I will show examples of how we have tested this approach for soccer, basketball, and hockey. (Kihwan Kim, Matthias Grundmann, Ariel Shamir, Iain Matthews, Jessica Hodgins, Irfan Essa, CVPR 2010, in collaboration with Disney Research.)
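The sparse-to-dense step and the convergence test can be sketched as follows. This is an illustrative toy (the interpolation kernel, its width, and the convergence measure below are assumptions for the sketch, not the paper's exact formulation): sparse player velocities are spread onto a ground-plane grid with Gaussian weights, and locations where the resulting field converges show up as peaks of negative divergence.

```python
import numpy as np

def dense_motion_field(points, vels, grid, sigma=2.0):
    """Gaussian-weighted interpolation of sparse player velocities
    into a dense ground-plane motion field (shape H x W x 2)."""
    gy, gx = grid
    field = np.zeros(gy.shape + (2,))
    wsum = np.zeros(gy.shape)
    for (px, py), v in zip(points, vels):
        w = np.exp(-((gx - px) ** 2 + (gy - py) ** 2) / (2 * sigma ** 2))
        field += w[..., None] * np.asarray(v)
        wsum += w
    return field / np.maximum(wsum, 1e-9)[..., None]

def convergence_map(field):
    """Negative divergence of the field; peaks mark locations the
    motion is converging toward."""
    dudx = np.gradient(field[..., 0], axis=1)
    dvdy = np.gradient(field[..., 1], axis=0)
    return -(dudx + dvdy)

# Toy example: four players on a 20x20 pitch, all moving toward the
# center, so the convergence peak should land near (10, 10).
gy, gx = np.mgrid[0:20, 0:20].astype(float)
points = [(5.0, 10.0), (15.0, 10.0), (10.0, 5.0), (10.0, 15.0)]
center = np.array([10.0, 10.0])
vels = [center - np.array(p) for p in points]

field = dense_motion_field(points, vels, (gy, gx))
conv = convergence_map(field)
peak = np.unravel_index(np.argmax(conv), conv.shape)
```

In the real system the sparse velocities come from tracked player positions projected to the ground plane, and the detected convergence regions are what drive the prediction of where the play will evolve.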

Time permitting, I will show some more videos of our recent work on video analysis and synthesis. For more information, papers, and videos, see my website.


PhD Fellowship from Google Research for Matthias Grundmann

May 16th, 2011 Irfan Essa Posted in Awards, In The News, Matthias Grundmann

Congratulations to Matthias Grundmann, winner of the Google PhD Fellowship in Computer Vision for 2011.

via PhD Fellowships – Google Research.

Google PhD Fellowship Program Overview

Nurturing and maintaining strong relations with the academic community is a top priority at Google. The Google U.S./Canada PhD Student Fellowship Program was created to recognize outstanding graduate students doing exceptional work in computer science, related disciplines, or promising research areas. Last year we awarded 14 unique fellowships to some amazing students in the US and Canada:

  • Matthias Grundmann, Google U.S./Canada Fellowship in Computer Vision (Georgia Institute of Technology)