Presentation (2011) at IbPRIA 2011: “Spatio-Temporal Video Analysis and Visual Activity Recognition”

June 8th, 2011 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Presentations

“Spatio-Temporal Video Analysis and Visual Activity Recognition,” presented at the Iberian Conference on Pattern Recognition and Image Analysis (IbPRIA) 2011 in Las Palmas de Gran Canaria, Spain, June 8-10.

Abstract

My research group works on a variety of approaches for (a) low-level video analysis and synthesis and (b) recognizing activities in videos. In this talk, I will concentrate on two of our recent efforts: one aimed at robust spatio-temporal segmentation of video, and another at using motion and flow to recognize and predict actions from video.

In the first part of the talk, I will present an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. In this work, we begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph. I will demonstrate a variety of examples of how this robust segmentation works, and will show additional examples of video-retargeting that use spatio-temporal saliency derived from this segmentation approach. (Matthias Grundmann, Vivek Kwatra, Mei Han, Irfan Essa, CVPR 2010, in collaboration with Google Research).

In the second part of this talk, I will show that constrained multi-agent events can be analyzed and even predicted from video. Such analysis requires estimating the global movements of all players in the scene at any time, and is needed for modeling and predicting how the multi-agent play evolves over time on the playing field. To this end, we propose a novel approach to detect the locations where the play evolution will proceed, e.g. where interesting events will occur, by tracking player positions and movements over time. To achieve this, we extract the ground level sparse movement of players in each time-step, and then generate a dense motion field. Using this field we detect locations where the motion converges, implying positions towards which the play is evolving. I will show examples of how we have tested this approach for soccer, basketball, and hockey. (Kihwan Kim, Matthias Grundmann, Ariel Shamir, Iain Matthews, Jessica Hodgins, Irfan Essa, CVPR 2010, in collaboration with Disney Research).

Time permitting, I will show some more videos of our recent work on video analysis and synthesis. For more information, papers, and videos, see my website.


Paper (2011) in IEEE PAMI: “Bilayer Segmentation of Webcam Videos Using Tree-Based Classifiers”

January 12th, 2011 Irfan Essa Posted in Antonio Criminisi, Computational Photography and Video, John Winn, Machine Learning, PAMI/ICCV/CVPR/ECCV, Papers, Pei Yin

Bilayer Segmentation of Webcam Videos Using Tree-Based Classifiers

Pei Yin, A. Criminisi, J. Winn, I. Essa (2011), “Bilayer Segmentation of Webcam Videos Using Tree-Based Classifiers,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 1, January 2011, ISSN 0162-8828, DOI 10.1109/TPAMI.2010.65, IEEE Computer Society [Project Page|DOI]

ABSTRACT

This paper presents an automatic segmentation algorithm for video frames captured by a (monocular) webcam that closely approximates depth segmentation from a stereo camera. The frames are segmented into foreground and background layers that comprise a subject (participant) and other objects and individuals. The algorithm produces correct segmentations even in the presence of large background motion with a nearly stationary foreground. This research makes three key contributions: First, we introduce a novel motion representation, referred to as “motons,” inspired by research in object recognition. Second, we propose estimating the segmentation likelihood from the spatial context of motion. The estimation is efficiently learned by random forests. Third, we introduce a general taxonomy of tree-based classifiers that facilitates both theoretical and experimental comparisons of several known classification algorithms and generates new ones. In our bilayer segmentation algorithm, diverse visual cues such as motion, motion context, color, contrast, and spatial priors are fused by means of a conditional random field (CRF) model. Segmentation is then achieved by binary min-cut. Experiments on many sequences of our videochat application demonstrate that our algorithm, which requires no initialization, is effective in a variety of scenes, and the segmentation results are comparable to those obtained by stereo systems.
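The fusion-and-min-cut step can be illustrated on a toy 1D row of pixels. Everything below is a sketch, not the paper's system: the foreground probabilities stand in for the learned moton/color/spatial cues, and a small self-contained Edmonds-Karp max-flow replaces an optimized graph-cut solver.

```python
import numpy as np
from collections import deque

def min_cut_labels(n_pixels, edges, source, sink):
    """Edmonds-Karp max-flow; returns per-pixel source-side membership."""
    cap = {}
    for u, v, c in edges:
        cap[(u, v)] = cap.get((u, v), 0.0) + c
        cap.setdefault((v, u), 0.0)           # residual (reverse) edge
    adj = {}
    for u, v in cap:
        adj.setdefault(u, []).append(v)
    while True:
        # BFS for an augmenting path in the residual graph.
        parent = {source: None}
        q = deque([source])
        while q and sink not in parent:
            u = q.popleft()
            for v in adj.get(u, []):
                if v not in parent and cap[(u, v)] > 1e-12:
                    parent[v] = u
                    q.append(v)
        if sink not in parent:
            break
        # Push the bottleneck flow along the path.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        f = min(cap[e] for e in path)
        for u, v in path:
            cap[(u, v)] -= f
            cap[(v, u)] += f
    # Pixels still reachable from the source are labelled foreground.
    reach, q = {source}, deque([source])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in reach and cap[(u, v)] > 1e-12:
                reach.add(v)
                q.append(v)
    return [1 if p in reach else 0 for p in range(n_pixels)]

def bilayer_cut(intensity, fg_prob, pair_weight=2.0, beta=5.0):
    eps, n = 1e-6, len(intensity)
    edges = []
    for p in range(n):
        edges.append(('s', p, -np.log(1 - fg_prob[p] + eps)))  # paid if p = bg
        edges.append((p, 't', -np.log(fg_prob[p] + eps)))      # paid if p = fg
    for p in range(n - 1):
        # Contrast-modulated Potts term: cheap to cut across strong edges.
        w = pair_weight * np.exp(-beta * (intensity[p] - intensity[p + 1]) ** 2)
        edges += [(p, p + 1, w), (p + 1, p, w)]
    return min_cut_labels(n, edges, 's', 't')

# Bright foreground blob (pixels 2-4) on a dark background:
I = np.array([0.1, 0.1, 0.9, 0.95, 0.9, 0.1])
fg = np.array([0.1, 0.2, 0.9, 0.9, 0.9, 0.1])
labels = bilayer_cut(I, fg)
```

Note how pixel 1 is pulled to the background despite its middling unary score: the strong pairwise bond with its similar-colored neighbour dominates, which is the smoothing behaviour the CRF fusion is after.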

via IEEE Xplore – Abstract Page.


Paper in CVPR (2010): “Motion Fields to Predict Play Evolution in Dynamic Sport Scenes”

June 13th, 2010 Irfan Essa Posted in Activity Recognition, Jessica Hodgins, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, Sports Visualization

Kihwan Kim, Matthias Grundmann, Ariel Shamir, Iain Matthews, Jessica Hodgins, Irfan Essa (2010) “Motion Fields to Predict Play Evolution in Dynamic Sport Scenes” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, June 2010 [PDF][Website][DOI][Video (YouTube)].

Abstract

Videos of multi-player team sports provide a challenging domain for dynamic scene analysis. Player actions and interactions are complex as they are driven by many factors, such as the short-term goals of the individual player, the overall team strategy, the rules of the sport, and the current context of the game. We show that constrained multi-agent events can be analyzed and even predicted from video. Such analysis requires estimating the global movements of all players in the scene at any time, and is needed for modeling and predicting how the multi-agent play evolves over time on the field. To this end, we propose a novel approach to detect the locations where the play evolution will proceed, e.g. where interesting events will occur, by tracking player positions and movements over time. We start by extracting the ground level sparse movement of players in each time-step, and then generate a dense motion field. Using this field we detect locations where the motion converges, implying positions towards which the play is evolving. We evaluate our approach by analyzing videos of a variety of complex soccer plays.
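The sparse-to-dense step can be sketched as follows. This is our simplified illustration, not the paper's exact formulation: sparse per-player ground-plane velocities are interpolated into a dense field with Gaussian weights, and grid cells where the field converges (negative divergence) mark locations the play is evolving toward.

```python
import numpy as np

def dense_motion_field(positions, velocities, grid_w=20, grid_h=12, sigma=2.0):
    """positions: (N, 2) player ground coordinates; velocities: (N, 2)."""
    ys, xs = np.mgrid[0:grid_h, 0:grid_w]
    grid = np.stack([xs, ys], axis=-1).astype(float)          # (H, W, 2)
    diff = grid[:, :, None, :] - positions[None, None, :, :]  # (H, W, N, 2)
    w = np.exp(-(diff ** 2).sum(-1) / (2 * sigma ** 2))       # (H, W, N)
    w_sum = w.sum(-1, keepdims=True) + 1e-9
    # Gaussian-weighted average of player velocities at each grid cell.
    return (w[..., None] * velocities[None, None]).sum(2) / w_sum

def convergence_map(field):
    # Negative divergence: positive where motion flows *into* a cell.
    dudx = np.gradient(field[..., 0], axis=1)
    dvdy = np.gradient(field[..., 1], axis=0)
    return -(dudx + dvdy)

# Two players running toward each other, converging near x = 10:
pos = np.array([[5.0, 6.0], [15.0, 6.0]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
conv = convergence_map(dense_motion_field(pos, vel))
```

In this toy setup the convergence map peaks between the two players, which is the kind of location the paper flags as where an interesting event is likely to occur.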

CVPR 2010 Paper on Play Evolution


Paper in CVPR (2010): “Discontinuous Seam-Carving for Video Retargeting”

June 13th, 2010 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra

Discontinuous Seam-Carving for Video Retargeting

  • M. Grundmann, V. Kwatra, M. Han, and I. Essa (2010), “Discontinuous Seam-Carving for Video Retargeting,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @InProceedings{    2010-Grundmann-DSVR,
      author  = {M. Grundmann and V. Kwatra and M. Han and I. Essa},
      booktitle  = {Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)},
      doi    = {10.1109/CVPR.2010.5540165},
      month    = {June},
      pdf    = {http://www.cc.gatech.edu/cpl/projects/videoretargeting/cvpr2010_videoretargeting.pdf},
      publisher  = {IEEE Computer Society},
      title    = {Discontinuous Seam-Carving for Video Retargeting},
      url    = {http://www.cc.gatech.edu/cpl/projects/videoretargeting/},
      year    = {2010}
    }

Abstract

We introduce a new algorithm for video retargeting that uses discontinuous seam-carving in both space and time for resizing videos. Our algorithm relies on a novel appearance-based temporal coherence formulation that allows for frame-by-frame processing and results in temporally discontinuous seams, as opposed to geometrically smooth and continuous seams. This formulation optimizes the difference in appearance of the resultant retargeted frame to the optimal temporally coherent one, and allows for carving around fast moving salient regions.

Additionally, we generalize the idea of appearance-based coherence to the spatial domain by introducing piece-wise spatial seams. Our spatial coherence measure minimizes the change in gradients during retargeting, which preserves spatial detail better than minimization of color difference alone. We also show that per-frame saliency (gradient-based or feature-based) does not always produce desirable retargeting results and propose a novel automatically computed measure of spatio-temporal saliency. As needed, a user may also augment the saliency by interactive region-brushing. Our retargeting algorithm processes the video sequentially, making it conducive for streaming applications.
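The per-frame core of seam carving is a simple dynamic program. The sketch below is the classic single-frame vertical-seam DP, with a hypothetical `temporal_cost` hook standing in for the paper's appearance-based coherence term (which would penalize pixels whose removal changes the frame relative to the previous retargeted frame).

```python
import numpy as np

def find_vertical_seam(energy, temporal_cost=None):
    """DP over an energy map; returns one seam column index per row."""
    cost = energy.astype(float).copy()
    if temporal_cost is not None:
        cost += temporal_cost              # hook for a coherence penalty
    h, w = cost.shape
    dp = cost.copy()
    for y in range(1, h):
        left = np.r_[np.inf, dp[y - 1, :-1]]
        right = np.r_[dp[y - 1, 1:], np.inf]
        dp[y] += np.minimum(np.minimum(left, dp[y - 1]), right)
    # Backtrack the cheapest 8-connected path from bottom to top.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(dp[-1]))
    for y in range(h - 2, -1, -1):
        x = seam[y + 1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam[y] = lo + int(np.argmin(dp[y, lo:hi]))
    return seam

def remove_seam(frame, seam):
    h, _ = frame.shape
    return np.array([np.delete(frame[y], seam[y]) for y in range(h)])

# The seam threads through the low-energy (zero) entries:
E = np.array([[5, 0, 5, 5],
              [5, 5, 0, 5],
              [5, 0, 5, 5]], dtype=float)
s = find_vertical_seam(E)
```

Running this DP frame by frame, with the temporal term tying each frame's result to the previous one, is what yields the temporally discontinuous (rather than geometrically continuous) seams the paper argues for.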

Examples from our CVPR 2010 Paper


Paper in CVPR (2010): “Efficient Hierarchical Graph-Based Video Segmentation”

June 13th, 2010 Irfan Essa Posted in Computational Photography and Video, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Vivek Kwatra

Matthias Grundmann, Vivek Kwatra, Mei Han, Irfan Essa (2010) “Efficient Hierarchical Graph-Based Video Segmentation” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, June 2010 [PDF][Website][DOI][Video (YouTube)].

Abstract

We present an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. We begin by over-segmenting a volumetric video graph into space-time regions grouped by appearance. We then construct a “region graph” over the obtained segmentation and iteratively repeat this process over multiple levels to create a tree of spatio-temporal segmentations. This hierarchical approach generates high quality segmentations, which are temporally coherent with stable region boundaries, and allows subsequent applications to choose from varying levels of granularity. We further improve segmentation quality by using dense optical flow to guide temporal connections in the initial graph.

We also propose two novel approaches to improve the scalability of our technique: (a) a parallel out-of-core algorithm that can process volumes much larger than an in-core algorithm, and (b) a clip-based processing algorithm that divides the video into overlapping clips in time, and segments them successively while enforcing consistency.

We demonstrate hierarchical segmentations on video shots as long as 40 seconds, and even support a streaming mode for arbitrarily long videos, albeit without the ability to process them hierarchically.
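The region-graph hierarchy can be illustrated with a heavily simplified toy version (our assumptions, not the paper's: appearance is just the mean gray value, the merge criterion is a plain threshold per level, and the "video" is a single 2D frame rather than a space-time volume). Each level merges adjacent regions whose appearance difference falls under a growing threshold, yielding successively coarser segmentations.

```python
import numpy as np

def region_adjacency(labels):
    """All pairs of distinct region labels that touch (4-connectivity)."""
    pairs, (h, w) = set(), labels.shape
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                if y + dy < h and x + dx < w:
                    a, b = labels[y, x], labels[y + dy, x + dx]
                    if a != b:
                        pairs.add((min(a, b), max(a, b)))
    return pairs

def hierarchy(labels, image, thresholds):
    """One merge pass per threshold; returns the list of level labelings."""
    levels, cur = [labels.copy()], labels.copy()
    for tau in thresholds:
        parent = {r: r for r in np.unique(cur)}   # union-find forest
        def find(r):
            while parent[r] != r:
                r = parent[r]
            return r
        means = {r: image[cur == r].mean() for r in np.unique(cur)}
        sizes = {r: int((cur == r).sum()) for r in np.unique(cur)}
        # Cheapest (most similar) region pairs merge first, Kruskal-style.
        for a, b in sorted(region_adjacency(cur),
                           key=lambda p: abs(means[p[0]] - means[p[1]])):
            ra, rb = find(a), find(b)
            if ra != rb and abs(means[ra] - means[rb]) < tau:
                total = sizes[ra] + sizes[rb]
                means[ra] = (means[ra] * sizes[ra] + means[rb] * sizes[rb]) / total
                sizes[ra] = total
                parent[rb] = ra
        cur = np.vectorize(find)(cur)
        levels.append(cur.copy())
    return levels

# Three initial regions; the two dark ones merge at the first level:
img = np.array([[0.0, 0.1, 0.9],
                [0.0, 0.1, 0.9]])
labs = np.array([[0, 1, 2],
                 [0, 1, 2]])
lv = hierarchy(labs, img, thresholds=[0.2])
```

Keeping every level, rather than just the final labeling, is what gives downstream applications the choice of granularity the abstract describes.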

Video Segmentation Teaser


Paper in CVPR (2010): “Player Localization Using Multiple Static Cameras for Sports Visualization”

June 13th, 2010 Irfan Essa Posted in Activity Recognition, Jessica Hodgins, Kihwan Kim, Machine Learning, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Raffay Hamid, Sports Visualization

Raffay Hamid, Ram Krishan Kumar, Matthias Grundmann, Kihwan Kim, Irfan Essa, Jessica Hodgins (2010), “Player Localization Using Multiple Static Cameras for Sports Visualization” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA, June 2010 [PDF][Website][DOI][Video (YouTube)].

Abstract

We present a novel approach for robust localization of multiple people observed using multiple cameras. We use this location information to generate sports visualizations, which include displaying a virtual offside line in soccer games and showing players’ positions and motion patterns. Our main contribution is modeling and analyzing the problem of fusing corresponding players’ positional information as finding minimum-weight K-length cycles in complete K-partite graphs. To this end, we use a dynamic-programming-based approach that varies over a continuum from maximally to minimally greedy in terms of the number of paths explored at each iteration. We present an end-to-end sports visualization framework that employs our proposed algorithm class. We demonstrate the robustness of our framework by testing it on 60,000 frames of soccer footage captured over 5 different illumination conditions, play types, and team attire.
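The cycle formulation can be sketched in miniature (a simplified reading of the abstract, not the paper's algorithm class): each camera contributes one "part" of a complete K-partite graph of candidate detections, and a good cross-camera correspondence is a minimum-weight K-length cycle visiting every part. Fixing a start node, a Viterbi-style DP finds the best path; trying each start node and adding the closing edge yields the best cycle.

```python
import numpy as np

def min_weight_cycle(parts, weight):
    """parts: list of K lists of candidate nodes; weight(u, v): edge cost."""
    best_cost, best_cycle = np.inf, None
    for start in parts[0]:
        # dp[v] = (cost of best path start -> ... -> v, that path)
        dp = {start: (0.0, [start])}
        for layer in parts[1:]:
            nxt = {}
            for v in layer:
                c, path = min(((dp[u][0] + weight(u, v), dp[u][1])
                               for u in dp), key=lambda t: t[0])
                nxt[v] = (c, path + [v])
            dp = nxt
        for v, (c, path) in dp.items():
            total = c + weight(v, start)     # close the cycle
            if total < best_cost:
                best_cost, best_cycle = total, path
    return best_cost, best_cycle

# Three cameras, two candidate ground-plane positions each; cost = distance,
# so consistent detections of the same player form the cheapest cycle.
cams = [[(0.0, 0.0), (5.0, 5.0)],
        [(0.1, 0.0), (4.0, 4.0)],
        [(0.0, 0.2), (9.0, 9.0)]]
dist = lambda u, v: ((u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2) ** 0.5
cost, cycle = min_weight_cycle(cams, dist)
```

The full DP over all start nodes corresponds to the "minimally greedy" end of the paper's continuum; pruning to fewer surviving paths per layer moves toward the maximally greedy end.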

Teaser Image from CVPR 2010 paper


CVPR 2010: Accepted Papers

April 1st, 2010 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Jessica Hodgins, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra

We have the following 4 papers accepted for publication at IEEE CVPR 2010. More details and links forthcoming.
  • Matthias Grundmann, Vivek Kwatra, Mei Han, and Irfan Essa (2010) “Discontinuous Seam-Carving for Video Retargeting” (a GA Tech, Google Collaboration)
  • Matthias Grundmann, Vivek Kwatra, Mei Han, and Irfan Essa (2010) “Efficient Hierarchical Graph-Based Video Segmentation” (a GA Tech, Google Collaboration)
  • Kihwan Kim, Matthias Grundmann, Ariel Shamir, Iain Matthews, Jessica Hodgins, and Irfan Essa (2010) “Motion Fields to Predict Play Evolution in Dynamic Sport Scenes” (a GA Tech, Disney Collaboration)
  • Raffay Hamid, Ramkrishan Kumar, Matthias Grundmann, Kihwan Kim, Irfan Essa, and Jessica Hodgins (2010) “Player Localization Using Multiple Static Cameras for Sports Visualization” (a GA Tech, Disney Collaboration)

EVENT: CVPR 2009 Decisions are Announced.

February 25th, 2009 Irfan Essa Posted in Events, PAMI/ICCV/CVPR/ECCV


Paper: ICPR (2008) “3D Shape Context and Distance Transform for Action Recognition”

December 8th, 2008 Irfan Essa Posted in Activity Recognition, Aware Home, Face and Gesture, Franzi Meier, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers

M. Grundmann, F. Meier, and I. Essa (2008) “3D Shape Context and Distance Transform for Action Recognition”, In Proceedings of International Conference on Pattern Recognition (ICPR) 2008, Tampa, FL. [Project Page | DOI | PDF]

ABSTRACT

We propose the use of 3D (2D+time) Shape Context to recognize the spatial and temporal details inherent in human actions. We represent an action in a video sequence by a 3D point cloud extracted by sampling 2D silhouettes over time. A non-uniform sampling method is introduced that gives preference to fast moving body parts using a Euclidean 3D Distance Transform. Actions are then classified by matching the extracted point clouds. Our proposed approach is based on global matching and does not require specific training to learn the model. We test the approach thoroughly on two publicly available datasets and compare to several state-of-the-art methods. The achieved classification accuracy is on par with or superior to the best results reported to date.
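A toy version of the descriptor can be sketched as follows. The binning here is our simplification (log-radius, image-plane angle, and a coarse past/present/future time offset); the paper additionally weights the sampling of silhouette points by a 3D Euclidean distance transform to favour fast-moving parts, which is omitted here.

```python
import numpy as np

def shape_context_3d(points, ref, n_r=4, n_theta=8, n_t=3, r_max=100.0):
    """Histogram the positions of all points relative to `ref` over
    (log-radius, angle, time-offset) bins; points are (x, y, t) rows."""
    rel = points - ref
    rel = rel[np.any(rel != 0, axis=1)]          # drop the reference itself
    r = np.linalg.norm(rel[:, :2], axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])     # in [-pi, pi]
    r_bin = np.clip((np.log1p(r) / np.log1p(r_max) * n_r).astype(int),
                    0, n_r - 1)
    th_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    # Coarse temporal bin: earlier frame / same frame / later frame.
    t_bin = np.clip(np.sign(rel[:, 2]).astype(int) + 1, 0, n_t - 1)
    hist = np.zeros((n_r, n_theta, n_t))
    np.add.at(hist, (r_bin, th_bin, t_bin), 1)
    return hist / max(1, len(rel))               # normalized descriptor

# Four silhouette samples across three frames; describe the first point:
pts = np.array([[0, 0, 0], [3, 0, 0], [0, 4, 1], [-2, 0, -1]], dtype=float)
h = shape_context_3d(pts, ref=pts[0])
```

Matching two actions then reduces to comparing sets of such histograms (e.g., by chi-squared distance), which is what makes the approach training-free global matching.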


Paper: ICASSP (2008) “Discriminative Feature Selection for Hidden Markov Models using Segmental Boosting”

April 3rd, 2008 Irfan Essa Posted in 0205507, Face and Gesture, Funding, James Rehg, Machine Learning, PAMI/ICCV/CVPR/ECCV, Papers, Pei Yin, Thad Starner

Pei Yin, Irfan Essa, James Rehg, Thad Starner (2008) “Discriminative Feature Selection for Hidden Markov Models using Segmental Boosting”, ICASSP 2008 – March 30 – April 4, 2008 – Las Vegas, Nevada, U.S.A. (Paper: MLSP-P3.D8, Session: Pattern Recognition and Classification II, Time: Thursday, April 3, 15:30 – 17:30, Topic: Machine Learning for Signal Processing: Learning Theory and Modeling) (PDF|Project Site)

ABSTRACT

We address the feature selection problem for hidden Markov models (HMMs) in sequence classification. Temporal correlation in sequences often causes difficulty in applying feature selection techniques. Inspired by segmental k-means segmentation (SKS), we propose Segmentally Boosted HMMs (SBHMMs), where the state-optimized features are constructed in a segmental and discriminative manner. The contributions are twofold. First, we introduce a novel feature selection algorithm, where the temporal dynamics are decoupled from the static learning procedure by assuming that the sequential data are piecewise independent and identically distributed. Second, we show that the SBHMM consistently improves traditional HMM recognition in various domains. The reduction of error compared to traditional HMMs ranges from 17% to 70% in American Sign Language recognition, human gait identification, lip reading, and speech recognition.
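The piecewise-i.i.d. idea can be illustrated with a tiny sketch (simplified: the segment labels below stand in for the HMM state alignment that segmental k-means would produce, and a minimal decision-stump AdaBoost picks discriminative feature dimensions; the actual SBHMM then re-trains HMMs on the boosted features).

```python
import numpy as np

def stump_adaboost_features(X, y, n_rounds=5):
    """X: (n, d) frames, y: {-1, +1} segment labels. Returns the feature
    dimension chosen by the best decision stump in each boosting round."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    chosen = []
    for _ in range(n_rounds):
        best = None
        # Exhaustive stump search: feature, threshold, and polarity.
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        err = min(max(err, 1e-9), 1 - 1e-9)
        alpha = 0.5 * np.log((1 - err) / err)
        # Reweight frames: mistakes gain weight, as in standard AdaBoost.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        chosen.append(j)
    return chosen

# Frames from two "states": feature 0 is discriminative, feature 1 is noise.
rng = np.random.default_rng(0)
X = np.vstack([np.c_[rng.normal(0, 0.3, 20), rng.normal(0, 1, 20)],
               np.c_[rng.normal(3, 0.3, 20), rng.normal(0, 1, 20)]])
y = np.r_[-np.ones(20), np.ones(20)]
dims = stump_adaboost_features(X, y, n_rounds=3)
```

Treating frames within a segment as i.i.d. is exactly the decoupling assumption above: it lets a static learner like boosting operate per state without modeling the temporal dynamics.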
