
AT High Museum/Lumière’s Fall Lecture and Panel Discussion on “Art In The Digital Culture… Threat or Opportunity?”

September 8th, 2012 Irfan Essa Posted in Computational Photography and Video, In The News, Presentations

Wednesday, September 19, 2012, 7:00pm, in the Hill Auditorium, High Museum, Atlanta.

In this sixth installment of Lumière’s Fall Lecture Series, Shannon Perich, curator of the photographic history collection at the National Museum of American History, Smithsonian Institution, and Irfan Essa of the Georgia Institute of Technology will each speak to the future of art in a rapidly expanding digital culture. Their commentary will be followed by a panel discussion with audience participation. The panel will address the threats and opportunities created by a growing range of capabilities to create, distribute, and interact with art. Additional information is available at www.lumieregallery.net. This lecture is a collaborative event with the Atlanta Celebrates Photography 2012 Festival.

via Lumière’s Fall Lecture and Panel Discussion.

SLIDES now available here


AT Texas Instruments to give a Talk on “Video Stabilization and Rolling Shutter Removal on YouTube”

August 22nd, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Presentations, Vivek Kwatra

Video Stabilization and Rolling Shutter Removal on YouTube

Abstract

In this talk, I will go over a variety of approaches my group is working on for video analysis and enhancement. In particular, I will describe our approach for a video stabilizer (currently implemented on YouTube) and its extensions. This work is in collaboration with Matthias Grundmann and Vivek Kwatra at Google. This method generates stabilized videos by employing L1-optimal camera paths to remove undesirable motions [1]. We compute camera paths that are optimally partitioned into constant, linear, and parabolic segments, mimicking the camera motions employed by professional cinematographers. To this end, we propose a linear programming framework to minimize the first, second, and third derivatives of the resulting camera path. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. An additional challenge in videos shot on mobile phones is rolling shutter distortion. Modern CMOS cameras capture each frame one scanline at a time, which results in non-rigid image distortions such as shear and wobble. I will demonstrate a solution based on a novel mixture model of homographies, parametrized by scanline blocks, that corrects these rolling shutter distortions [2]. Our method relies neither on a priori knowledge of the readout time nor on prior camera calibration. A thorough evaluation based on a user study demonstrates a general preference for our algorithm.

I will conclude the talk by showcasing a live demo of the stabilizer and, time permitting, I will discuss some other projects we are working on.

[1] Matthias Grundmann, Vivek Kwatra, and Irfan Essa, CVPR 2011, www.cc.gatech.edu/cpl/projects/videostabilization

[2] Matthias Grundmann, Vivek Kwatra, Daniel Castro, and Irfan Essa, ICCP 2012, Best Paper Award, www.cc.gatech.edu/cpl/projects/rollingshutter
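As a rough illustration of the camera-path idea in [1], here is a toy sketch. This is not the YouTube implementation: it fits a single parabolic segment by ordinary least squares rather than solving the paper's L1 linear program over path derivatives, and the shaky path is synthetic. It only shows how replacing a jittery path with a smooth constant/linear/parabolic path yields per-frame stabilizing offsets.

```python
import numpy as np

def smooth_path(shaky_path):
    """Fit one parabolic segment (degree-2 polynomial) to a 1-D camera path."""
    t = np.arange(len(shaky_path))
    coeffs = np.polyfit(t, shaky_path, deg=2)  # parabolic segment
    return np.polyval(coeffs, t)

# Synthetic shaky path: a steady pan plus high-frequency jitter.
rng = np.random.default_rng(0)
t = np.arange(100)
true_path = 0.5 * t                              # steady pan
shaky = true_path + rng.normal(0, 2.0, size=t.shape)

smooth = smooth_path(shaky)
# The stabilizing crop transform per frame is the difference between
# the smoothed path and the original shaky path.
crop_offsets = smooth - shaky
```

The actual method instead partitions the path into multiple segments and penalizes the L1 norm of the first, second, and third path derivatives, which yields exactly constant, linear, or parabolic pieces rather than a least-squares compromise.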


At CVPR 2012, in Providence, RI, June 16 – 21, 2012

June 17th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Presentations, Vivek Kwatra

IEEE CVPR 2012 is in Providence, RI, from June 16 to 21, 2012.

Busy week ahead meeting good friends and colleagues. Here are some highlights of what my group is involved with.

Paper in Main Conference

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [Poster on Tuesday 6/19/2012]

Demo in Main Conference

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal.” [WEBSITE] [VIDEO] (Paper in IEEE ICCP 2012) [Demo on Monday and Tuesday (6/18-19) at the Google Booth]

Invited Talk in Workshop


AT IWCV 2012: “Video Understanding: Extracting Content and Context from Video.”

May 24th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Presentations, Visual Surveillance

Video Understanding: Extracting Content and Context from Video.

(Presentation at the International Workshop on Computer Vision 2012, Ortigia, Siracusa, Sicily, May 22-24, 2012.)

Irfan Essa
Georgia Tech

Abstract

In this talk, I will describe various efforts aimed at extracting context and content from video. I will highlight some of our recent work in extracting spatio-temporal features, and the related saliency information, from video, which can be used to detect and localize regions of interest. Then I will describe approaches that use structured and unstructured representations to recognize complex and extended-time actions. I will also discuss the need for unsupervised activity discovery and the detection of anomalous activities in videos. I will show a variety of examples, including online videos, mobile videos, surveillance and home-monitoring video, and sports videos. Finally, I will pose a series of questions and make observations about how we need to extend our current paradigms of video understanding to go beyond local spatio-temporal features and standard time-series and bag-of-words models.
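For readers unfamiliar with the bag-of-words baseline mentioned at the end, here is a toy sketch of it (the codebook and descriptors are random placeholders; this is not the talk's actual pipeline): local spatio-temporal descriptors are quantized against a codebook and each clip becomes a histogram of codeword counts.

```python
import numpy as np

def nearest_codeword(descriptors, codebook):
    # Squared Euclidean distance from every descriptor to every codeword.
    d = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def bag_of_words(descriptors, codebook):
    """Represent one clip as a normalized histogram of codeword counts."""
    words = nearest_codeword(descriptors, codebook)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
codebook = rng.normal(size=(8, 16))           # 8 codewords, 16-D descriptors
clip_descriptors = rng.normal(size=(50, 16))  # 50 local features in one clip
h = bag_of_words(clip_descriptors, codebook)
```

The resulting fixed-length histogram can be fed to any standard classifier; the talk's point is that such histograms discard the spatial and temporal structure that complex, extended-time actions require.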


Video Stabilization on YouTube

May 6th, 2012 Irfan Essa Posted in Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra

Here is an excerpt from a Google Research Blog post on our Video Stabilization on YouTube, which has now been further improved.

One thing we have been working on within Research at Google is developing methods for making casual videos look more professional, thereby providing users with a better viewing experience. Professional videos have several characteristics that differentiate them from casually shot videos. For example, in order to tell a story, cinematographers carefully control lighting and exposure and use specialized equipment to plan camera movement.

We have developed a technique that mimics professional camera moves and applies them to videos recorded by handheld devices. Cinematographers use specialized equipment such as tripods and dollies to plan their camera paths and hold them steady. In contrast, think of a video you shot using a mobile phone camera. How steady was your hand and were you able to anticipate an interesting moment and smoothly pan the camera to capture that moment? To bridge these differences, we propose an algorithm that automatically determines the best camera path and recasts the video as if it were filmed using stabilization equipment.

Via Video Stabilization on YouTube.


Paper in IEEE ICCP 2012: “Calibration-Free Rolling Shutter Removal”

April 28th, 2012 Irfan Essa Posted in Computational Photography and Video, Daniel Castro, ICCP, Matthias Grundmann, Vivek Kwatra

Calibration-Free Rolling Shutter Removal

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal,” in Proceedings of IEEE Conference on Computational Photography (ICCP), 2012. [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2012-Grundmann-CRSR,
      Author = {Matthias Grundmann and Vivek Kwatra and Daniel Castro and Irfan Essa},
      Blog = {http://prof.irfanessa.com/2012/04/28/paper-iccp12/},
      Booktitle = {Proceedings of IEEE Conference on Computational Photography (ICCP)},
      Date-Added = {2012-04-09 22:40:38 +0000},
      Date-Modified = {2013-10-22 13:54:12 +0000},
      Doi = {10.1109/ICCPhot.2012.6215213},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2012-Grundmann-CRSR.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Calibration-Free Rolling Shutter Removal},
      Url = {http://www.cc.gatech.edu/cpl/projects/rollingshutter/},
      Video = {http://www.youtube.com/watch?v=_Pr_fpbAok8},
      Year = {2012},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/rollingshutter/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICCPhot.2012.6215213}}

Abstract

We present a novel algorithm for efficient removal of rolling shutter distortions in uncalibrated streaming videos. Our proposed method is calibration free, as it does not need any knowledge of the camera used, nor does it require calibration using specially recorded calibration sequences. Our algorithm can perform rolling shutter removal under varying focal lengths, as in videos from CMOS cameras equipped with an optical zoom. We evaluate our approach across a broad range of cameras and video sequences, demonstrating robustness, scalability, and repeatability. We also conducted a user study, which demonstrates a preference for the output of our algorithm over other state-of-the-art methods. Our algorithm is computationally efficient, easy to parallelize, and robust to challenging artifacts introduced by various cameras with differing technologies.
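A toy sketch of the core idea may help: instead of one global homography per frame pair, assign a homography per block of scanlines, so each row's warp reflects when that row was exposed. The homographies below are invented translations for illustration; the paper estimates a mixture model of homographies from feature tracks.

```python
import numpy as np

def apply_homography(H, pts):
    """Apply a 3x3 homography to an Nx2 array of points."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def blockwise_warp(pts, homographies, frame_height):
    """Warp each point with the homography of its scanline block."""
    n_blocks = len(homographies)
    out = np.empty_like(pts, dtype=float)
    for i, (x, y) in enumerate(pts):
        block = min(int(y / frame_height * n_blocks), n_blocks - 1)
        out[i] = apply_homography(homographies[block], np.array([[x, y]]))[0]
    return out

# Two blocks: top of the frame shifted by (1, 0), bottom by (3, 0),
# mimicking the shear a rolling shutter introduces under a fast pan.
H_top = np.array([[1, 0, 1], [0, 1, 0], [0, 0, 1]], dtype=float)
H_bot = np.array([[1, 0, 3], [0, 1, 0], [0, 0, 1]], dtype=float)
pts = np.array([[10.0, 5.0], [10.0, 95.0]])  # one point in each block
warped = blockwise_warp(pts, [H_top, H_bot], frame_height=100)
```

Inverting such a blockwise model straightens the sheared image; the real method additionally smooths between blocks so adjacent scanlines do not jump.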

Presented at IEEE International Conference on Computational Photography, Seattle, WA, April 27-29, 2012.

Winner of BEST PAPER AWARD.

 


Award (2012): Best Computer Vision Paper Award by Google Research

March 22nd, 2012 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, Papers, Vivek Kwatra

Our paper, listed below, was just awarded the Excellent Paper Award for 2011 in Computer Vision by Google Research.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

Casually shot videos captured by handheld or mobile cameras suffer from a significant amount of shake. Existing in-camera stabilization methods dampen high-frequency jitter but do not suppress low-frequency movements and bounces, such as those observed in videos captured by a walking person. On the other hand, most professionally shot videos use carefully designed camera configurations, rely on specialized equipment such as tripods or camera dollies, and employ ease-in and ease-out for transitions. Our stabilization technique automatically converts casual shaky footage into more pleasant and professional-looking videos by mimicking these cinematographic principles. The original, shaky camera path is divided into a set of segments, each approximated by either constant, linear, or parabolic motion, using an algorithm based on robust L1 optimization. The stabilizer has been part of the YouTube Editor (youtube.com/editor) since March 2011.

via Research Blog.


Teaching: Spring 2012

January 11th, 2012 Irfan Essa Posted in CnJ, Computational Journalism, Computational Photography and Video, DVFX

In Spring 2012, I am teaching two classes.

Advanced Computational Photography (CS 8803 PHO) [with Grant Schindler]

This is an advanced topics class in Computational Photography. It builds on my intro class and explores technical aspects of pictures, and more precisely the capture and depiction of reality on a 2D medium. The scientific, perceptual, and artistic principles behind image-making will be emphasized. Topics include the relationship between pictorial techniques and the human visual system; intrinsic limitations of 2D representations and their possible compensations; and technical issues involving depiction. Technical aspects of image capture and rendering, and exploration of how such a medium can be used to its maximum potential, will be examined. Students are strongly encouraged (but not required) to bring their digital cameras and a laptop to facilitate experiments. The class will explore recent, state-of-the-art papers in Computational Photography from leading conferences and journals in the area, and students will do projects on a variety of topics.

Computation + Journalism (CS 4464 / CS 6465)

This class is aimed at understanding the computational and technological advancements in the area of journalism. The primary focus is on the study of technologies for developing new tools for (a) sense-making from diverse news information sources, (b) the impact of more and cheaper networked sensors, (c) collaborative human models for information aggregation and sense-making, (d) mashups and the use of programming in journalism, (e) the impact of mobile computing and data gathering, (f) computational approaches to information quality, (g) data mining for personalization and aggregation, and (h) citizen journalism. The complete schedule and other information will be on the t-square site, available only to students taking the class.


Kihwan Kim’s Thesis Defense (2011): “Spatio-temporal Data Interpolation for Dynamic Scene Analysis”

December 6th, 2011 Irfan Essa Posted in Computational Photography and Video, Kihwan Kim, Modeling and Animation, Multimedia, PhD, Security, Visual Surveillance, WWW

Spatio-temporal Data Interpolation for Dynamic Scene Analysis

Kihwan Kim, PhD Candidate

School of Interactive Computing, College of Computing, Georgia Institute of Technology

Date: Tuesday, December 6, 2011

Time: 1:00 pm – 3:00 pm EST

Location: Technology Square Research Building (TSRB) Room 223

Abstract

Analysis and visualization of dynamic scenes is often constrained by the amount of spatio-temporal information available from the environment. In most scenarios, we have to account for incomplete information and sparse motion data, requiring us to employ interpolation and approximation methods to fill in the missing information. Scattered data interpolation and approximation techniques have been widely used to complete surfaces and images with incomplete input data. We bring such interpolation and approximation approaches, applied to data from limited sensors, into the domain of analyzing and visualizing dynamic scenes. Data from dynamic scenes is subject to constraints due to the spatial layout of the scene and/or the configurations of the video cameras in use. Such constraints include: (1) sparsely available cameras observing the scene, (2) limited field of view provided by the cameras in use, (3) incomplete motion at a specific moment, and (4) varying frame rates due to different exposures and resolutions.

In this thesis, we characterize these forms of incompleteness in the scene as spatio-temporal uncertainties, and propose solutions for resolving the uncertainties by applying scattered data approximation in the spatio-temporal domain.

The main contributions of this research are as follows. First, we provide an efficient framework to visualize large-scale dynamic scenes from distributed static videos. Second, we adapt Radial Basis Function (RBF) interpolation to the spatio-temporal domain to generate a global motion tendency. The tendency, represented by a dense flow field, is used to optimally pan and tilt a video camera. Third, we propose a method to represent motion trajectories using stochastic vector fields. Gaussian Process Regression (GPR) is used to generate a dense vector field along with the certainty of each vector in the field. The generated stochastic fields are used for recognizing motion patterns under varying frame rates and incompleteness of the input videos. Fourth, we show that the stochastic representation of the vector field can also be used to model global tendency and detect regions of interest in dynamic scenes with camera motion. We evaluate and demonstrate our approaches in several applications: visualizing virtual cities, automating sports broadcasting, and recognizing traffic patterns in surveillance videos.
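The RBF step above can be illustrated with a minimal sketch. This assumes Gaussian kernels and made-up sample locations and vectors; the thesis works in the spatio-temporal domain, whereas this shows only the basic idea of densifying sparse motion vectors.

```python
import numpy as np

def rbf_interpolate(centers, values, queries, sigma=1.0):
    """Interpolate scattered vector-valued data with Gaussian RBFs."""
    def kernel(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = kernel(centers, centers)
    # Solve for RBF weights (one weight vector per output dimension);
    # the tiny ridge term keeps the system well conditioned.
    w = np.linalg.solve(K + 1e-8 * np.eye(len(centers)), values)
    return kernel(queries, centers) @ w

# Sparse motion vectors observed at four scene locations.
centers = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vectors = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]])
# Sample the dense flow field at a new location.
dense = rbf_interpolate(centers, vectors, np.array([[0.5, 0.5]]))
```

Evaluating the interpolant back at the observation sites reproduces the input vectors, and evaluating it on a grid yields the dense flow field used to drive camera pan/tilt.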

Committee:

  • Prof. Irfan Essa (Advisor, School of Interactive Computing, Georgia Institute of Technology)
  • Prof. James M. Rehg (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Thad Starner (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Greg Turk (School of Interactive Computing, Georgia Institute of Technology)
  • Prof. Jessica K. Hodgins (Robotics Institute, Carnegie Mellon University, and Disney Research Pittsburgh)

In the News (2011): “Shake it like an Instagram picture — Online Video News”

September 15th, 2011 Irfan Essa Posted in Collaborators, Computational Photography and Video, Google, In The News, Matthias Grundmann, Vivek Kwatra, WWW

Our work, as described in the following paper, is now showcased on YouTube.

  • M. Grundmann, V. Kwatra, and I. Essa (2011), “Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2011. [PDF] [WEBSITE] [VIDEO] [DEMO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2011-Grundmann-AVSWROCP,
      Author = {M. Grundmann and V. Kwatra and I. Essa},
      Blog = {http://prof.irfanessa.com/2011/06/19/videostabilization/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Modified = {2013-10-22 13:55:15 +0000},
      Demo = {http://www.youtube.com/watch?v=0MiY-PNy-GU},
      Doi = {10.1109/CVPR.2011.5995525},
      Month = {June},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Grundmann-AVSWROCP.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Auto-Directed Video Stabilization with Robust L1 Optimal Camera Paths},
      Url = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Video = {http://www.youtube.com/watch?v=i5keG1Y810U},
      Year = {2011},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videostabilization/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2011.5995525}}

YouTube effects: Shake it like an Instagram picture

via YouTube effects: Shake it like an Instagram picture — Online Video News.

YouTube users can now apply a number of Instagram-like effects to their videos, giving them a cartoonish or Lomo-like look with the click of a button. The effects are part of a new editing feature that also includes cropping and advanced image stabilization.

Taking the shaking out of video uploads should go a long way towards making some of the amateur footage captured on mobile phones more watchable, but it can also be resource-intensive — which is why Google’s engineers invented an entirely new approach toward image stabilization.

The new editing functionality will be part of YouTube’s video page, where a new “Edit video” button will offer access to filters and other editing functionality. This type of post-processing is separate from YouTube’s video editor, which allows users to produce new videos based on existing clips.
