
At ICVSS (International Computer Vision Summer School) 2013, in Calabria, ITALY (July 2013)

July 11th, 2013 Irfan Essa Posted in Computational Photography, Computational Photography and Video, Daniel Castro, Matthias Grundmann, Presentations, S. Hussain Raza, Vivek Kwatra No Comments »

Teaching at the ICVSS 2013, in Calabria, Italy, July 2013 (Programme)

Computational Video: Post-processing Methods for Stabilization, Retargeting and Segmentation

Irfan Essa
(This work is in collaboration with
Matthias Grundmann, Daniel Castro, Vivek Kwatra, Mei Han, and S. Hussain Raza.)

Abstract

We address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professional and casually shot videos, the kind mostly seen on online video sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques.

We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube, where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer.
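For intuition, the core optimization above can be sketched as a small linear program. The sketch below is a deliberately simplified, scalar version (one path coordinate, illustrative weights and crop bound; the deployed stabilizer operates on full 2D camera transforms): it minimizes weighted L1 norms of the first, second, and third differences of the smoothed path, subject to the path staying within a fixed crop-window distance of the original.

```python
import numpy as np
from scipy.optimize import linprog

def smooth_path_l1(c, eps=3.0, w=(10.0, 1.0, 100.0)):
    """L1-optimal smoothing of a 1D camera path c (one value per frame).
    Minimizes w1*||D1 p||_1 + w2*||D2 p||_1 + w3*||D3 p||_1 subject to
    |p - c| <= eps (a stand-in for the virtual crop-window constraint)."""
    c = np.asarray(c, float)
    n = len(c)

    def diff_op(k):  # k-th order finite-difference matrix, shape (n-k, n)
        D = np.eye(n)
        for _ in range(k):
            D = D[1:] - D[:-1]
        return D

    Ds = [diff_op(k) for k in (1, 2, 3)]
    m = [D.shape[0] for D in Ds]
    nv = n + sum(m)  # variables: path p, then one slack per difference row

    cost = np.zeros(nv)
    off = n
    for wk, mk in zip(w, m):
        cost[off:off + mk] = wk
        off += mk

    # L1 via slacks s:  D p <= s  and  -D p <= s
    A, b = [], []
    off = n
    for D, mk in zip(Ds, m):
        S = np.zeros((mk, nv)); S[:, off:off + mk] = np.eye(mk)
        P = np.zeros((mk, nv)); P[:, :n] = D
        A.append(P - S);  b.append(np.zeros(mk))
        A.append(-P - S); b.append(np.zeros(mk))
        off += mk

    bounds = [(ci - eps, ci + eps) for ci in c] + [(0.0, None)] * sum(m)
    res = linprog(cost, A_ub=np.vstack(A), b_ub=np.concatenate(b),
                  bounds=bounds, method="highs")
    return res.x[:n]
```

Because the objective is an L1 norm, the optimum is driven to piecewise constant, linear, and parabolic segments rather than merely low-pass filtered, which is the property the abstract highlights.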

We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen's pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach: we effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints.
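As background, the per-frame building block of seam carving is a simple dynamic program over a gradient-based energy. The sketch below removes one vertical seam from a single grayscale frame; it omits the appearance-based temporal coherence term that is the actual contribution above, and the function name and energy choice are illustrative:

```python
import numpy as np

def remove_vertical_seam(img):
    """Remove one minimum-energy vertical seam from a 2D grayscale image
    using the classic dynamic-programming formulation."""
    h, w = img.shape
    gy, gx = np.gradient(img.astype(float))
    energy = np.abs(gx) + np.abs(gy)          # simple gradient-magnitude energy

    # Cumulative minimum energy M, filled top to bottom.
    M = energy.copy()
    for i in range(1, h):
        left = np.r_[np.inf, M[i - 1, :-1]]   # upper-left neighbor (inf at edge)
        right = np.r_[M[i - 1, 1:], np.inf]   # upper-right neighbor
        M[i] += np.minimum(np.minimum(left, M[i - 1]), right)

    # Backtrack the connected seam from the bottom row.
    seam = np.empty(h, int)
    seam[-1] = int(np.argmin(M[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(M[i, lo:hi]))

    # Drop one pixel per row.
    mask = np.ones((h, w), bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1), seam
```

The discontinuous variant described above replaces the hard spatial-connectivity constraint with a per-frame temporal coherence cost, so seams may jump between frames.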

Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a region graph over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.
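The first (over-segmentation) level of such a hierarchy can be sketched with a Felzenszwalb-Huttenlocher-style merge over a space-time voxel graph. This is a much-simplified illustration (grayscale voxels, intensity-difference edge weights, a single level), not the paper's implementation:

```python
import numpy as np

def spacetime_oversegment(video, tau=0.5):
    """Graph-based over-segmentation of a (T, H, W) grayscale video:
    6-connected space-time voxel graph, edges weighted by intensity
    difference, merged with the Felzenszwalb-Huttenlocher criterion."""
    T, H, W = video.shape
    idx = np.arange(T * H * W).reshape(T, H, W)
    v = video.reshape(-1).astype(float)

    # Build edges along the time, row, and column axes.
    edges = []
    for axis in range(3):
        a = idx.take(range(video.shape[axis] - 1), axis=axis).ravel()
        b = idx.take(range(1, video.shape[axis]), axis=axis).ravel()
        edges.append(np.stack([a, b], 1))
    E = np.concatenate(edges)
    w = np.abs(v[E[:, 0]] - v[E[:, 1]])

    # Union-find with per-component size and max internal edge weight.
    parent = np.arange(T * H * W)
    size = np.ones(T * H * W, int)
    internal = np.zeros(T * H * W)

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for e in np.argsort(w, kind="stable"):  # process edges by ascending weight
        a, b = find(E[e, 0]), find(E[e, 1])
        if a == b:
            continue
        if w[e] <= min(internal[a] + tau / size[a],
                       internal[b] + tau / size[b]):
            parent[b] = a
            size[a] += size[b]
            internal[a] = max(internal[a], internal[b], w[e])

    labels = np.array([find(i) for i in range(T * H * W)])
    return labels.reshape(T, H, W)
```

The hierarchy described above would then build a region graph over these labels and repeat the merge at coarser thresholds.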

Part of this talk will show attendees how to use the Video Stabilizer on YouTube and the video segmentation system at videosegmentation.com. Please find appropriate videos to test the systems.

Part of the work described above was done at Google, where Matthias Grundmann, Vivek Kwatra, and Mei Han work and Professor Essa serves as a consultant. The remainder was carried out by Matthias Grundmann, Daniel Castro, and S. Hussain Raza as part of their research as students at Georgia Tech.


Paper in IEEE CVPR 2013 “Geometric Context from Videos”

June 27th, 2013 Irfan Essa Posted in Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza No Comments »

  • S. H. Raza, M. Grundmann, and I. Essa (2013), “Geometric Context from Video,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @inproceedings{2013-Raza-GCFV,
      Author = {Syed Hussain Raza and Matthias Grundmann and Irfan Essa},
      Booktitle = {{Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
      Date-Added = {2013-06-25 11:46:01 +0000},
      Date-Modified = {2014-04-28 17:09:08 +0000},
      Doi = {10.1109/CVPR.2013.396},
      Month = {June},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Raza-GCFV.pdf},
      Title = {Geometric Context from Video},
      Url = {http://www.cc.gatech.edu/cpl/projects/videogeometriccontext/},
      Video = {http://www.youtube.com/watch?v=EXPmgKHPJ64},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/abow/},
      Bdsk-Url-2 = {http://www.cc.gatech.edu/cpl/projects/videogeometriccontext/},
      Bdsk-Url-3 = {http://dx.doi.org/10.1109/CVPR.2013.396}}

Abstract

We present a novel algorithm for estimating the broad 3D geometric structure of outdoor video scenes. Leveraging spatio-temporal video segmentation, we decompose a dynamic scene captured by a video into geometric classes, based on predictions made by region-classifiers that are trained on appearance and motion features. By examining the homogeneity of the prediction, we combine predictions across multiple segmentation hierarchy levels, alleviating the need to determine the granularity a priori. We built a novel, extensive dataset on geometric context of video to evaluate our method, consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames. To further scale beyond this dataset, we propose a semi-supervised learning framework to expand the pool of labeled data with high-confidence predictions obtained from unlabeled data. Our system produces an accurate prediction of the geometric context of video, achieving 96% accuracy across main geometric classes.
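The semi-supervised expansion step can be illustrated with a toy self-training loop. Everything below is a hypothetical stand-in (a nearest-centroid classifier on generic feature vectors rather than the paper's region classifiers over appearance and motion features): high-confidence predictions on unlabeled data are promoted into the labeled pool, and the classifier is retrained.

```python
import numpy as np

def self_train_centroids(X_lab, y_lab, X_unl, rounds=5, conf=0.9):
    """Toy self-training: retrain a nearest-centroid classifier while
    promoting high-confidence unlabeled samples into the labeled pool."""
    X_lab = [np.asarray(x, float) for x in X_lab]
    y_lab = list(y_lab)
    classes = np.unique(y_lab)
    pool = list(range(len(X_unl)))        # indices still unlabeled
    cents = None
    for _ in range(rounds):
        Xl, yl = np.stack(X_lab), np.asarray(y_lab)
        cents = np.stack([Xl[yl == c].mean(0) for c in classes])
        if not pool:
            break
        Xu = np.stack([np.asarray(X_unl[i], float) for i in pool])
        d = np.linalg.norm(Xu[:, None] - cents[None], axis=2)
        p = np.exp(-d)
        p /= p.sum(1, keepdims=True)      # softmax over negative distance
        confs, preds = p.max(1), p.argmax(1)
        # Promote confident samples; pop in reverse so indices stay valid.
        for i in sorted(np.where(confs >= conf)[0].tolist(), reverse=True):
            X_lab.append(Xu[i])
            y_lab.append(classes[preds[i]])
            pool.pop(i)
    return cents, classes
```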

via IEEE Xplore – Geometric Context from Videos.


Google I/O 2013: Secrets of Video Stabilization on YouTube

May 28th, 2013 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Presentations, Vivek Kwatra 1 Comment »

Presentation at Google I/0 2013 by Matthias Grundmann, John Gregg, and Vivek Kwatra on our Video Stabilizer on YouTube

Video stabilization is a key component of YouTube's video enhancement tools and youtube.com/editor. All YouTube uploads are automatically analyzed for shakiness, and stabilization is suggested when needed. This talk will describe the technical details behind our fully automatic one-click stabilization technology, including aspects such as camera path optimization, rolling shutter detection and removal, distributed computing for real-time previews, and camera shake detection. More info: http://googleresearch.blogspot.com/2012/05/video-stabilization-on-youtube.html

via Secrets of Video Stabilization on YouTube — Google I/O 2013.


Paper in ICCP 2013 “Post-processing approach for radiometric self-calibration of video”

April 19th, 2013 Irfan Essa Posted in Computational Photography and Video, ICCP, Matthias Grundmann, Papers, Sing Bing Kang No Comments »

  • M. Grundmann, C. McClanahan, S. B. Kang, and I. Essa (2013), “Post-processing Approach for Radiometric Self-Calibration of Video,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @inproceedings{2013-Grundmann-PARSV,
      Author = {Matthias Grundmann and Chris McClanahan and Sing Bing Kang and Irfan Essa},
      Booktitle = {{Proceedings of IEEE International Conference on Computational Photography (ICCP)}},
      Date-Added = {2013-06-25 11:54:57 +0000},
      Date-Modified = {2014-04-28 17:09:49 +0000},
      Doi = {10.1109/ICCPhot.2013.6528307},
      Month = {April},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Grundmann-PARSV.pdf},
      Title = {Post-processing Approach for Radiometric Self-Calibration of Video},
      Url = {http://www.cc.gatech.edu/cpl/projects/radiometric},
      Video = {http://www.youtube.com/watch?v=sC942ZB4WuM},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/radiometric},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICCPhot.2013.6528307}}

Abstract

We present a novel data-driven technique for radiometric self-calibration of video from an unknown camera. Our approach self-calibrates radiometric variations in video and is applied as a post-process; there is no need to access the camera, and in particular it is applicable to internet videos. This technique builds on empirical evidence that in video the camera response function (CRF) should be regarded as time-variant, as it changes with scene content and exposure, rather than being modeled as a single fixed response. We show that a time-varying mixture of responses produces better accuracy and consistently reduces the error in mapping intensity to irradiance when compared to a single response model. Furthermore, our mixture model counteracts the effects of possible nonlinear exposure-dependent intensity perturbations and white-balance changes caused by proprietary camera firmware. We further show how radiometrically calibrated video improves the performance of other video analysis algorithms, enabling a video segmentation algorithm to be invariant to exposure and gain variations over the sequence. We validate our data-driven technique on videos from a variety of cameras and demonstrate the generality of our approach by applying it to internet video.
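To make the intensity-to-irradiance relation concrete, here is a toy, purely parametric example: recovering a gamma-type response f(E) = E^(1/g) from two registered frames whose exposures differ by a known ratio k. The paper itself is data-driven and fits a time-varying mixture of responses; this sketch only illustrates the underlying radiometric identity (under the gamma model, I2 = k^(1/g) · I1, so g = log k / mean(log I2 − log I1)):

```python
import numpy as np

def estimate_gamma(I1, I2, k):
    """Estimate g in the gamma response f(E) = E**(1/g) from two registered
    images of the same scene with known exposure ratio k (E2 = k * E1)."""
    I1 = np.asarray(I1, float)
    I2 = np.asarray(I2, float)
    # Keep only pixels safely inside the response range (avoid clipping).
    valid = (I1 > 1e-3) & (I2 > 1e-3) & (I1 < 1.0) & (I2 < 1.0)
    r = np.log(I2[valid]) - np.log(I1[valid])   # equals (1/g) * log(k)
    return np.log(k) / r.mean()
```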

via IEEE Xplore – Post-processing approach for radiometric self-calibration of video.


Matthias Grundmann’s PhD Thesis Defense (2013): “Computational Video: Post-processing Methods for Stabilization, Retargeting and Segmentation”

February 4th, 2013 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, PhD No Comments »

Title: Computational Video: Post-processing Methods for Stabilization, Retargeting and Segmentation

Matthias Grundmann
School of Interactive Computing
College of Computing
Georgia Institute of Technology

Date: February 04, 2013 (Monday)
Time: 3:00p – 6:00p EST
Location: Nano building, 116-118

Abstract:


In this thesis, we address a variety of challenges for analysis and enhancement of Computational Video. We present novel post-processing methods to bridge the gap between professional and casually shot videos, the kind mostly seen on online video sites. Our research presents solutions to three well-defined problems: (1) video stabilization and rolling shutter removal in casually shot, uncalibrated videos; (2) content-aware video retargeting; and (3) spatio-temporal video segmentation to enable efficient video annotation. We showcase several real-world applications building on these techniques.

We start by proposing a novel algorithm for video stabilization that generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To achieve this, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond conventional filtering, which only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. We propose a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. Our novel video stabilization and calibration-free rolling shutter removal have been deployed on YouTube where they have successfully stabilized millions of videos. We also discuss several extensions to the stabilization algorithm and present technical details behind the widely used YouTube Video Stabilizer.

We address the challenge of changing the aspect ratio of videos by proposing algorithms that retarget videos to fit the form factor of a given device without stretching or letter-boxing. Our approaches use all of the screen’s pixels, while striving to deliver as much of the original video content as possible. First, we introduce a new algorithm that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. Second, we present a technique that builds on the above-mentioned video stabilization approach: we effectively automate classical pan-and-scan techniques by smoothly guiding a virtual crop window via saliency constraints.

Finally, we introduce an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high-quality segmentations and allows subsequent applications to choose from varying levels of granularity. We demonstrate the use of spatio-temporal segmentation as users interact with the video, enabling efficient annotation of objects within the video.

Committee:

  • Dr. Irfan Essa (Advisor, School of Interactive Computing, Georgia Tech)
  • Dr. Jim Rehg (School of Interactive Computing, Georgia Tech)
  • Dr. Frank Dellaert (School of Interactive Computing, Georgia Tech)
  • Dr. Michael Black (Perceiving Systems Department, Max Planck Institute for Intelligent Systems)
  • Dr. Sing Bing Kang (Adjunct Faculty, Georgia Tech; Microsoft Research, Microsoft Corp.)
  • Dr. Vivek Kwatra (Google Research, Google Inc.)

Presentation (2012): CMU Robotics Institute Seminar

October 19th, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Presentations, Vivek Kwatra No Comments »

Video Analysis and Enhancement: Video Stabilization and Rolling Shutter Removal on YouTube

Irfan Essa
Georgia Tech
School of Interactive Computing
GVU and RIM @ GT Centers

October 19, 2012, 3:30 PM, NSH 1305

Abstract

In this talk, I will discuss a variety of approaches my group is working on for video analysis and enhancement. In particular, I will describe our approach for a video stabilizer, currently implemented and running on YouTube, and its extensions.

This method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions [1]. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. We propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scan-line at a time, which results in non-rigid image distortions such as shear and wobble. I will demonstrate a solution based on a novel mixture model of homographies parametrized by scan-line blocks to correct these rolling shutter distortions [2]. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. A thorough evaluation based on a user study and direct comparisons to other approaches demonstrates a general preference for our algorithm.
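The warping side of the scan-line-block homography mixture can be sketched as follows: each image point is mapped through a blend of the two block homographies nearest to its row. The function name and the linear blending scheme are illustrative simplifications (the paper additionally estimates the block homographies themselves from feature tracks):

```python
import numpy as np

def warp_points_homography_mixture(pts, block_H, height):
    """Map (x, y) points through a mixture of homographies parametrized by
    scanline blocks. pts: (N, 2); block_H: (B, 3, 3); height: image height."""
    pts = np.asarray(pts, float)
    block_H = np.asarray(block_H, float)
    B = len(block_H)
    bh = height / B                               # scanline-block height
    out = np.empty_like(pts)
    for n, (x, y) in enumerate(pts):
        t = np.clip(y / bh - 0.5, 0.0, B - 1.0)   # fractional block index
        i = int(np.floor(t))
        a = t - i
        j = min(i + 1, B - 1)
        H = (1.0 - a) * block_H[i] + a * block_H[j]  # blend parameters
        v = H @ np.array([x, y, 1.0])
        out[n] = v[:2] / v[2]                     # back from homogeneous coords
    return out
```

With all block homographies equal the warp reduces to a single global homography; rolling shutter correction arises when the blocks differ, letting each scanline band move rigidly on its own.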

I will conclude the talk by showcasing a live demo of the stabilizer. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google, and appears in the following two papers.

Time permitting, I will discuss some other projects we are working on, including video segmentation and retargeting.

[1] Matthias Grundmann, Vivek Kwatra, Irfan Essa, CVPR 2011, www.cc.gatech.edu/cpl/projects/videostabilization

[2] Matthias Grundmann, Vivek Kwatra, Daniel Castro, Irfan Essa, ICCP 2012, Best Paper Award, www.cc.gatech.edu/cpl/projects/rollingshutter

Host: Takeo Kanade

via Robotics Institute: Talks and Seminars.


Paper in ECCV Workshop 2012: “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos”

October 7th, 2012 Irfan Essa Posted in Activity Recognition, Awards, Google, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra, WWW No Comments »

Weakly Supervised Learning of Object Segmentations from Web-Scale Videos

  • G. Hartmann, M. Grundmann, J. Hoffman, D. Tsai, V. Kwatra, O. Madani, S. Vijayanarasimhan, I. Essa, J. Rehg, and R. Sukthankar (2012), “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos,” in Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Hartmann-WSLOSFWV,
      Author = {Glenn Hartmann and Matthias Grundmann and Judy Hoffman and David Tsai and Vivek Kwatra and Omid Madani and Sudheendra Vijayanarasimhan and Irfan Essa and James Rehg and Rahul Sukthankar},
      Booktitle = {Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media},
      Date-Added = {2012-10-23 15:03:18 +0000},
      Date-Modified = {2013-10-22 18:57:10 +0000},
      Doi = {10.1007/978-3-642-33863-2_20},
      Pdf = {http://www.cs.cmu.edu/~rahuls/pub/eccv2012wk-cp-rahuls.pdf},
      Title = {Weakly Supervised Learning of Object Segmentations from Web-Scale Videos},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1007/978-3-642-33863-2_20}}

Abstract

We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as “dog”, without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.

Presented at: ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012, October 7-12, 2012, in Florence, ITALY.

Awarded the BEST PAPER AWARD!

 


Presentation (2012): Distinguished Seminar Series in Computer Science at the Imperial College, London

October 4th, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Presentations, Vivek Kwatra No Comments »

Video Enhancement and Analysis: From Content Analysis to Video Stabilization for YouTube

Irfan Essa
Georgia Institute of Technology

October 14, 2012, 15:00 – 16:00, Huxley Room, South Kensington Campus, Imperial College, London

Abstract

The talk will describe a variety of efforts on the analysis, enhancement, and synthesis of video. An overview of past work on representing and analyzing videos as a stochastic process, and its use in the form of Video Textures, will be provided. The majority of the talk will then focus on the recent effort that resulted in a widely used video stabilizer currently deployed on YouTube, and its extensions. This method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. We demonstrate a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google.

Via Distinguished Seminar Series in Computer Science Irfan Essa – GA Tech.


At Texas Instruments to give a Talk on “Video Stabilization and Rolling Shutter Removal on YouTube”

August 22nd, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Presentations, Vivek Kwatra No Comments »

Video Stabilization and Rolling Shutter Removal on YouTube

Abstract

In this talk, I will cover a variety of approaches my group is working on for video analysis and enhancement. In particular, I will describe our approach for a video stabilizer (currently implemented on YouTube) and its extensions. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google. This method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions [1]. We compute camera paths that are optimally partitioned into constant, linear and parabolic segments mimicking the camera motions employed by professional cinematographers. To this end, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot from mobile phones is rolling shutter distortion. Modern CMOS cameras capture the frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. I will demonstrate a solution based on a novel mixture model of homographies parametrized by scanline blocks to correct these rolling shutter distortions [2]. Our method does not rely on a priori knowledge of the readout time, nor does it require prior camera calibration. A thorough evaluation based on a user study demonstrates a general preference for our algorithm.

I will conclude the talk by showcasing a live demo of the stabilizer and time permitting, I will discuss some other projects we are working on.

[1] Matthias Grundmann, Vivek Kwatra, Irfan Essa, CVPR 2011, www.cc.gatech.edu/cpl/projects/videostabilization

[2] Matthias Grundmann, Vivek Kwatra, Daniel Castro, Irfan Essa, ICCP 2012, Best Paper Award, www.cc.gatech.edu/cpl/projects/rollingshutter


At CVPR 2012, in Providence, RI, June 16 – 21, 2012

June 17th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Presentations, Vivek Kwatra No Comments »

IEEE CVPR 2012 is in Providence, RI, from June 16 – 21, 2012.

Busy week ahead meeting good friends and colleagues. Here are some highlights of what my group is involved with.

Paper in Main Conference

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [Poster on Tuesday 6/19/2012]

Demo in Main Conference

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal” (paper in ICCP 2012). [WEBSITE] [VIDEO] [Demo on Monday and Tuesday (6/18-19) at the Google Booth]

Invited Talk in Workshop
