
Google I/O 2013: Secrets of Video Stabilization on YouTube

May 28th, 2013 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Presentations, Vivek Kwatra

Presentation at Google I/O 2013 by Matthias Grundmann, John Gregg, and Vivek Kwatra on our video stabilizer on YouTube.

Video stabilization is a key component of YouTube's video enhancement tools and youtube.com/editor. All YouTube uploads are automatically analyzed for shakiness, and stabilization is suggested when needed. This talk describes the technical details behind our fully automatic one-click stabilization technology, including camera path optimization, rolling shutter detection and removal, distributed computing for real-time previews, and camera shake detection. More info: http://googleresearch.blogspot.com/2012/05/video-stabilization-on-youtube.html

via Secrets of Video Stabilization on YouTube — Google I/O 2013.
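
The camera shake detection step can be made concrete with a small sketch. The following is not YouTube's detector, just one plausible way to score a clip's shakiness with OpenCV: track features between consecutive frames, fit a per-frame similarity transform, and measure how much the resulting translation signal jitters. The feature parameters, the jitter metric, and the suggested threshold are all assumptions made for illustration.

    # Hedged sketch of shakiness scoring; not YouTube's actual detector.
    import cv2
    import numpy as np

    def shakiness_score(video_path, max_frames=300):
        """Track features between consecutive frames and measure how much the
        estimated inter-frame translation jitters (high jitter = shaky)."""
        cap = cv2.VideoCapture(video_path)
        ok, prev = cap.read()
        if not ok:
            return 0.0
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        translations = []
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                          qualityLevel=0.01, minDistance=10)
            if pts is None:
                prev_gray = gray
                continue
            nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
            good_old = pts[status.ravel() == 1]
            good_new = nxt[status.ravel() == 1]
            if len(good_old) >= 3:
                # Approximate per-frame camera motion with a similarity transform.
                m, _ = cv2.estimateAffinePartial2D(good_old, good_new)
                if m is not None:
                    translations.append(m[:, 2])  # (dx, dy) for this frame pair
            prev_gray = gray
        cap.release()
        if len(translations) < 2:
            return 0.0
        t = np.array(translations)
        # Jitter = high-frequency component of the translation signal.
        jitter = np.diff(t, axis=0)
        return float(np.linalg.norm(jitter, axis=1).mean())

    # score = shakiness_score("clip.mp4"); suggest stabilization if score exceeds a threshold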


Paper in ECCV Workshop 2012: “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos”

October 7th, 2012 Irfan Essa Posted in Activity Recognition, Awards, Google, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra, WWW

Weakly Supervised Learning of Object Segmentations from Web-Scale Videos

  • G. Hartmann, M. Grundmann, J. Hoffman, D. Tsai, V. Kwatra, O. Madani, S. Vijayanarasimhan, I. Essa, J. Rehg, and R. Sukthankar (2012), “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos,” in Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Hartmann-WSLOSFWV,
      Author = {Glenn Hartmann and Matthias Grundmann and Judy Hoffman and David Tsai and Vivek Kwatra and Omid Madani and Sudheendra Vijayanarasimhan and Irfan Essa and James Rehg and Rahul Sukthankar},
      Booktitle = {Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media},
      Date-Added = {2012-10-23 15:03:18 +0000},
      Date-Modified = {2013-10-22 18:57:10 +0000},
      Doi = {10.1007/978-3-642-33863-2_20},
      Pdf = {http://www.cs.cmu.edu/~rahuls/pub/eccv2012wk-cp-rahuls.pdf},
      Title = {Weakly Supervised Learning of Object Segmentations from Web-Scale Videos},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1007/978-3-642-33863-2_20}}

Abstract

We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as “dog”, without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.

Presented at: ECCV 2012 Workshop on Web-scale Vision and Social Media, October 7-12, 2012, Florence, Italy.

Awarded the BEST PAPER AWARD!
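
For readers who want a concrete picture of the weakly supervised step, here is a minimal sketch, not the authors' implementation: video-level tags are propagated to all spatio-temporal segments of a video, a linear classifier is trained on the resulting noisy segment labels, and confidently scored segments become object seeds (the graph-cut refinement from the paper is omitted). The segment features and the logistic-regression choice below are assumptions for illustration.

    # Hedged sketch of the weak-label training idea; illustrative only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_segment_classifier(videos):
        """videos: list of (segment_features [n_seg x d], video_tag) pairs,
        where video_tag is 1 if the class tag is present and 0 otherwise."""
        X, y = [], []
        for feats, tag in videos:
            X.append(feats)
            # Weak supervision: every segment inherits its video's tag.
            y.append(np.full(len(feats), tag))
        X, y = np.vstack(X), np.concatenate(y)
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X, y)
        return clf

    def object_seeds(clf, feats, threshold=0.8):
        """Segments scored confidently positive seed the finer segmentation step."""
        probs = clf.predict_proba(feats)[:, 1]
        return probs > threshold

    # Toy usage with random features (illustration only):
    rng = np.random.default_rng(0)
    videos = [(rng.normal(size=(50, 16)), tag) for tag in [1, 0, 1, 0]]
    clf = train_segment_classifier(videos)
    seeds = object_seeds(clf, videos[0][0])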

 


Paper in IROS 2012: “Linguistic Transfer of Human Assembly Tasks to Robots”

October 7th, 2012 Irfan Essa Posted in 0205507, Activity Recognition, IROS/ICRA, Mike Stilman, Robotics

Linguistic Transfer of Human Assembly Tasks to Robots

  • N. Dantam, I. Essa, and M. Stilman (2012), “Linguistic Transfer of Human Assembly Tasks to Robots,” in Proceedings of Intelligent Robots and Systems (IROS), 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Dantam-LTHATR,
      Author = {N. Dantam and I. Essa and M. Stilman},
      Booktitle = {Proceedings of Intelligent Robots and Systems (IROS)},
      Date-Added = {2012-10-23 15:07:46 +0000},
      Date-Modified = {2013-10-22 18:58:04 +0000},
      Doi = {10.1109/IROS.2012.6385749},
      Pdf = {http://www.cc.gatech.edu/~ndantam3/papers/dantam2012assembly.pdf},
      Title = {Linguistic Transfer of Human Assembly Tasks to Robots},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1109/IROS.2012.6385749}}

Abstract

We demonstrate the automatic transfer of an assembly task from human to robot. This work extends efforts showing the utility of linguistic models in verifiable robot control policies by now performing real visual analysis of human demonstrations to automatically extract a policy for the task. This method tokenizes each human demonstration into a sequence of object connection symbols, then transforms the set of sequences from all demonstrations into an automaton, which represents the task-language for assembling a desired object. Finally, we combine this assembly automaton with a kinematic model of a robot arm to reproduce the demonstrated task.

Presented at: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), October 7-12, 2012, Vilamoura, Algarve, Portugal.
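
As a rough illustration of the tokenize-then-build-an-automaton idea, here is a minimal sketch in which each demonstration, already tokenized into object-connection symbols, is folded into a simple prefix-tree acceptor. The paper's construction is more involved, and the symbol names below are made up for illustration.

    # Hedged sketch: a prefix-tree acceptor over demonstration symbol sequences.
    class TaskAutomaton:
        def __init__(self):
            self.transitions = {}   # (state, symbol) -> next state
            self.accepting = set()
            self.num_states = 1     # state 0 is the start state

        def add_demonstration(self, symbols):
            """Fold one demonstration (a sequence of object-connection symbols)
            into the automaton, sharing prefixes with earlier demonstrations."""
            state = 0
            for sym in symbols:
                key = (state, sym)
                if key not in self.transitions:
                    self.transitions[key] = self.num_states
                    self.num_states += 1
                state = self.transitions[key]
            self.accepting.add(state)

        def accepts(self, symbols):
            """True if the sequence is in the learned task language."""
            state = 0
            for sym in symbols:
                state = self.transitions.get((state, sym))
                if state is None:
                    return False
            return state in self.accepting

    # Example: two demonstrations of the same assembly, differing in order.
    auto = TaskAutomaton()
    auto.add_demonstration(["grasp_A", "connect_A_B", "connect_B_C"])
    auto.add_demonstration(["grasp_A", "connect_B_C", "connect_A_B"])
    print(auto.accepts(["grasp_A", "connect_A_B", "connect_B_C"]))  # True

In the paper, the resulting task automaton is combined with a kinematic model of the robot arm to reproduce the demonstrated assembly; the sketch only covers the language-learning half.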

 


Paper in IEEE CVPR 2012: “Detecting Regions of Interest in Dynamic Scenes with Camera Motions”

June 16th, 2012 Irfan Essa Posted in Activity Recognition, Kihwan Kim, Numerical Machine Learning, PAMI/ICCV/CVPR/ECCV, Papers, PERSEAS, Visual Surveillance

Detecting Regions of Interest in Dynamic Scenes with Camera Motions

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2012-Kim-DRIDSWCM,
      Author = {Kihwan Kim and Dongryeol Lee and Irfan Essa},
      Blog = {http://prof.irfanessa.com/2012/04/09/paper-cvpr2012/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Added = {2012-04-09 22:37:06 +0000},
      Date-Modified = {2013-10-22 18:53:11 +0000},
      Doi = {10.1109/CVPR.2012.6247809},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2012-Kim-DRIDSWCM.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Detecting Regions of Interest in Dynamic Scenes with Camera Motions},
      Url = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Video = {http://www.youtube.com/watch?v=19BMwDMCSp8},
      Year = {2012},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2012.6247809}}

Abstract

We present a method to detect regions of interest in moving camera views of dynamic scenes with multiple moving objects. We start by extracting a global motion tendency that reflects the scene context by tracking movements of objects in the scene. We then use Gaussian process regression to represent the extracted motion tendency as a stochastic vector field. The generated stochastic field is robust to noise and can handle a video from an uncalibrated moving camera. We use the stochastic field for predicting important future regions of interest as the scene evolves dynamically.

We evaluate our approach on a variety of videos of team sports and compare the detected regions of interest to the camera motion generated by actual camera operators. Our experimental results demonstrate that our approach is computationally efficient and provides better predictions than previously proposed RBF-based approaches.

Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2012, Providence, RI, June 16-21, 2012.
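
A minimal sketch of the core representation, under the assumption of an independent Gaussian process per velocity component: sparse tracked motion vectors are turned into a dense, uncertainty-aware vector field. The kernel choice and data layout here are assumptions, not the paper's exact settings.

    # Hedged sketch: GP regression from sparse motion vectors to a dense stochastic field.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    def fit_motion_field(points, velocities):
        """points: (N, 2) image locations of tracked objects,
        velocities: (N, 2) their motion vectors. Returns one GP per component."""
        kernel = RBF(length_scale=50.0) + WhiteKernel(noise_level=1.0)
        gps = []
        for dim in range(2):
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            gp.fit(points, velocities[:, dim])
            gps.append(gp)
        return gps

    def predict_field(gps, grid_points):
        """Dense stochastic field: mean flow and per-point uncertainty on a grid."""
        means, stds = [], []
        for gp in gps:
            m, s = gp.predict(grid_points, return_std=True)
            means.append(m)
            stds.append(s)
        return np.stack(means, axis=1), np.stack(stds, axis=1)

In this toy form, the predicted mean gives the motion tendency at any image location and the predictive standard deviation expresses how reliable that tendency is away from observed tracks.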


Video Stabilization on YouTube

May 6th, 2012 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra

Here is an excerpt from a post on the Google Research Blog about our video stabilization on YouTube, which has now been further improved.

One thing we have been working on within Research at Google is developing methods for making casual videos look more professional, thereby providing users with a better viewing experience. Professional videos have several characteristics that differentiate them from casually shot videos. For example, in order to tell a story, cinematographers carefully control lighting and exposure and use specialized equipment to plan camera movement.

We have developed a technique that mimics professional camera moves and applies them to videos recorded by handheld devices. Cinematographers use specialized equipment such as tripods and dollies to plan their camera paths and hold them steady. In contrast, think of a video you shot using a mobile phone camera. How steady was your hand and were you able to anticipate an interesting moment and smoothly pan the camera to capture that moment? To bridge these differences, we propose an algorithm that automatically determines the best camera path and recasts the video as if it were filmed using stabilization equipment.

Via Video Stabilization on YouTube.
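
The optimization behind "determining the best camera path" can be sketched in one dimension: fit a new path that stays within a crop margin of the original while penalizing the L1 norms of its first, second, and third differences, which favors piecewise static, panning, and smoothly easing segments. This toy uses cvxpy with made-up weights; the published method operates on 2-D/affine camera paths with additional constraints.

    # Hedged 1-D sketch of an L1-optimal camera path; not the published formulation.
    import numpy as np
    import cvxpy as cp

    def smooth_path(original, crop_margin=30.0, w1=10.0, w2=1.0, w3=100.0):
        """original: 1-D array of per-frame camera positions (e.g. x-translation)."""
        p = cp.Variable(len(original))
        d1 = p[1:] - p[:-1]      # velocity
        d2 = d1[1:] - d1[:-1]    # acceleration
        d3 = d2[1:] - d2[:-1]    # jerk
        objective = cp.Minimize(w1 * cp.norm1(d1) + w2 * cp.norm1(d2) + w3 * cp.norm1(d3))
        # The smoothed path must stay close enough to the original for the crop to be valid.
        constraints = [cp.abs(p - original) <= crop_margin]
        cp.Problem(objective, constraints).solve()
        return p.value

    # Toy usage: a jittery pan becomes a clean, mostly linear path.
    t = np.arange(200)
    shaky = 0.5 * t + np.random.default_rng(0).normal(scale=5.0, size=t.size)
    smooth = smooth_path(shaky)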


Paper in ICCV 2011: “Gaussian Process Regression Flow for Analysis of Motion Trajectories”

October 28th, 2011 Irfan Essa Posted in Activity Recognition, DARPA, Kihwan Kim, PAMI/ICCV/CVPR/ECCV, Papers

Gaussian Process Regression Flow for Analysis of Motion Trajectories

  • K. Kim, D. Lee, and I. Essa (2011), “Gaussian Process Regression Flow for Analysis of Motion Trajectories,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), 2011. [PDF] [WEBSITE] [VIDEO] [BIBTEX]
    @inproceedings{Kim2011-GPRF,
      Author = {K. Kim and D. Lee and I. Essa},
      Booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
      Month = {November},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Kim-GPRFAMT.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Gaussian Process Regression Flow for Analysis of Motion Trajectories},
      Url = {http://www.cc.gatech.edu/cpl/projects/gprf/},
      Video = {http://www.youtube.com/watch?v=UtLr37hDQz0},
      Year = {2011}}

Abstract

Analysis and recognition of the motions and activities of objects in videos require effective representations for analysis and matching of motion trajectories. In this paper, we introduce a new representation specifically aimed at matching motion trajectories. We model a trajectory as a continuous dense flow field from a sparse set of vector sequences using Gaussian Process Regression. Furthermore, we introduce a random sampling strategy for learning stable classes of motions from limited data.

Our representation allows for incrementally predicting possible paths and detecting anomalous events from online trajectories. This representation also supports matching of complex motions with acceleration changes and pauses or stops within a trajectory. We use the proposed approach for classifying and predicting motion trajectories in traffic monitoring domains and test on several data sets. We show that our approach works well on various types of complete and incomplete trajectories from a variety of video data sets with different frame rates.
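
One way to make the "detecting anomalous events from online trajectories" step concrete: assuming the flow field is represented by one GaussianProcessRegressor per velocity component (as in the sketch under the CVPR 2012 post above), an incoming trajectory can be scored by the log-likelihood of its observed velocities under the field's predictive distribution. This is an illustrative criterion, not the paper's exact GPRF matching.

    # Hedged sketch: score an observed trajectory against a learned GP flow field.
    import numpy as np

    def trajectory_score(gps, trajectory):
        """trajectory: (T, 2) positions. Returns the mean per-step log-likelihood
        of the observed velocities under the GP flow field (one GP per component)."""
        positions = trajectory[:-1]
        observed_v = np.diff(trajectory, axis=0)  # finite-difference velocities
        log_liks = []
        for dim, gp in enumerate(gps):
            mean, std = gp.predict(positions, return_std=True)
            var = np.maximum(std ** 2, 1e-6)
            ll = -0.5 * ((observed_v[:, dim] - mean) ** 2 / var + np.log(2 * np.pi * var))
            log_liks.append(ll)
        return float(np.mean(np.sum(log_liks, axis=0)))

    # Scores well below those of trajectories seen in training suggest an anomalous event.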


In the News (2011): “Shake it like an Instagram picture — Online Video News”

September 15th, 2011 Irfan Essa Posted in Collaborators, Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra, WWW

Our work, as described in the following paper, is now showcased on YouTube.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

YouTube effects: Shake it like an Instagram picture

via YouTube effects: Shake it like an Instagram picture — Online Video News.

YouTube users can now apply a number of Instagram-like effects to their videos, giving them a cartoonish or Lomo-like look with the click of a button. The effects are part of a new editing feature that also includes cropping and advanced image stabilization.

Taking the shaking out of video uploads should go a long way towards making some of the amateur footage captured on mobile phones more watchable, but it can also be resource-intensive — which is why Google’s engineers invented an entirely new approach toward image stabilization.

The new editing functionality will be part of YouTube’s video page, where a new “Edit video” button will offer access to filters and other editing functionality. This type of post-processing is separate from YouTube’s video editor, which allows users to produce new videos based on existing clips.


Funding (2011) NSF (1146352) “EAGER: Linguistic Task Transfer for Humans and Cyber Systems”

September 1st, 2011 Irfan Essa Posted in Activity Recognition, Mike Stilman, NSF, Robotics

EAGER: Linguistic Task Transfer for Humans and Cyber Systems (Mike Stilman, Irfan Essa) NSF/RI

This project investigates formal languages as a general methodology for task transfer between distinct cyber-physical systems, such as humans and robots. It aims to expand the science of cyber-physical systems by developing Motion Grammars that enable task transfer between such systems.

Formal languages are tools for encoding, describing and transferring structured knowledge. In natural language, the latter process is called communication. Similarly, we will develop a formal language through which arbitrary cyber-physical systems communicate tasks via structured actions. This investigation of Motion Grammars will contribute to the science of human cognition and the engineering of cyber-physical algorithms. By observing human activities during manipulation we will develop a novel class of hybrid control algorithms based on linguistic representations of task execution. These algorithms will broaden the capabilities of man-made systems and provide the infrastructure for motion transfer between humans, robots and broader systems in a generic context. Furthermore, the representation in a rigorous grammatical context will enable formal verification and validation in future work.
Broader Impacts: The proposed research has direct applications to new solutions for manufacturing, medical treatments such as surgery, logistics and food processing. In turn, each of these areas has a significant impact on the efficiency and convenience of our daily lives. The PIs serve as coordinators of graduate/undergraduate programs and mentors to community schools. In order to guarantee that women and minorities have a significant role in the research, the PIs will annually invite K-12 students from Atlanta schools with primarily African American populations to the laboratories. One-day robot classes will be conducted that engage students in the excitement of hands-on science by interactively using lab equipment to transfer their manipulation skills to a robot arm.

Via Award#1146352 – EAGER: Linguistic Task Transfer for Humans and Cyber Systems.


Going Live on YouTube (2011): Lights, Camera… EDIT! New Features for the YouTube Video Editor

March 21st, 2011 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Multimedia, Vivek Kwatra, WWW

via YouTube Blog: Lights, Camera… EDIT! New Features for the YouTube Video Editor.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

Lights, Camera… EDIT! New Features for the YouTube Video Editor

Nine months ago we launched our cloud-based video editor. It was a simple product built to provide our users with simple editing tools. Although it didn’t have all the features available on paid desktop editing software, the idea was that the vast majority of people’s video editing needs are pretty basic and straightforward, and we could provide these features with a free editor available on the Web. Since launch, hundreds of thousands of videos have been published using the YouTube Video Editor and we’ve regularly pushed out new feature enhancements to the product, including:

  • Video transitions (crossfade, wipe, slide)
  • The ability to save projects across sessions
  • Increased clips allowed in the editor from 6 to 17
  • Video rotation (from portrait to landscape and vice versa – great for videos shot on mobile)
  • Shape transitions (heart, star, diamond, and Jack-O-Lantern for Halloween)
  • Audio mixing (AudioSwap track mixed with original audio)
  • Effects (brightness/contrast, black & white)

  • A new user interface and project menu for multiple saved projects

While many of these are familiar features also available on desktop software, today, we’re excited to unveil two new features that the team has been working on over the last couple of months that take unique advantage of the cloud:

Stabilizer

Ever shoot a shaky video that’s so jittery, it’s actually hard to watch? Professional cinematographers use stabilization equipment such as tripods or camera dollies to keep their shots smooth and steady. Our team mimicked these cinematographic principles by automatically determining the best camera path for you through a unified optimization technique. In plain English, you can smooth some of those unsteady videos with the click of a button.

We also wanted you to be able to preview these results in real time, before publishing the finished product to the Web. We do this by harnessing the power of the cloud: the computation required to stabilize the video is split into chunks and distributed across different servers, which allows us to use the power of many machines in parallel, computing and streaming the stabilized results quickly into the preview (a toy single-machine sketch of this chunking idea appears at the end of this post). You can check out the paper we’re publishing, entitled “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths.” Want to see the stabilizer in action? You can test it out for yourself, or check out these two videos. The first is without the stabilizer.

And now, with the stabilizer:
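
As promised above, here is a toy, single-machine sketch of the chunking idea using Python's multiprocessing; the production system distributes chunks across many servers and streams results into the preview, which this sketch does not attempt. The stabilize_chunk function is a placeholder for the real per-chunk stabilization work.

    # Hedged sketch: split frames into chunks and process them in parallel.
    from multiprocessing import Pool

    def stabilize_chunk(chunk):
        """Placeholder per-chunk work: in the real system each chunk of frames is
        stabilized independently and streamed back for the preview."""
        start, end = chunk
        return (start, end, f"stabilized frames {start}-{end}")

    def parallel_preview(num_frames, chunk_size=250, workers=4):
        chunks = [(s, min(s + chunk_size, num_frames)) for s in range(0, num_frames, chunk_size)]
        with Pool(workers) as pool:
            # Chunks finish independently, so early results could be streamed to the preview.
            return pool.map(stabilize_chunk, chunks)

    if __name__ == "__main__":
        for start, end, result in parallel_preview(2000):
            print(result)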


Funding (2011): NSF (1059362): “II-New: Motion Grammar Laboratory”

March 1st, 2011 Irfan Essa Posted in Henrik Christensen, Mike Stilman, NSF

II-New: Motion Grammar Laboratory (Stilman, Essa, Egerstedt, Christensen, Ueda) Division of Computer and Network Systems Instrumentation Grant.

An anthropomorphic robot arm and a human capture system enable the autonomous performance of assembly tasks with significant uncertainty in problem specifications and environments. This line of work is investigated through sequences of manipulation actions, where the guarantee of completing task-level objectives is rooted in discovering the semantic structure of human manipulation. New research directions in anthropomorphic robotics are explored, including programming by demonstration, activity recognition, control, estimation, and planning.

The Motion Grammar Laboratory infrastructure provides a great opportunity for research and education. New classroom experiences for undergraduates and graduates offer practical experience in human-robot interaction and activity process sharing. This opens possibilities for human training and rehabilitation, as well as assistive personal robotics, and opens the door to a host of technological innovations.

via Award#1059362 – II-New: Motion Grammar Laboratory.
