
Paper in BMCV (2014): “Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries”

September 5th, 2014 Irfan Essa Posted in Computational Photography and Video, PAMI/ICCV/CVPR/ECCV, S. Hussain Raza

  • S. H. Raza, O. Javed, A. Das, H. Sawhney, H. Cheng, and I. Essa (2014), “Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries,” in Proceedings of British Machine Vision Conference (BMVC), Nottingham, UK, 2014. [PDF] [WEBSITE] [BIBTEX]
    @inproceedings{2014-Raza-DEFVUGCOBDEFVUGCOB,
      Address = {Nottingham, UK},
      Author = {Syed Hussain Raza and Omar Javed and Aveek Das and Harpreet Sawhney and Hui Cheng and Irfan Essa},
      Booktitle = {{Proceedings of British Machine Vision Conference (BMVC)}},
      Date-Added = {2014-08-30 12:56:03 +0000},
      Date-Modified = {2014-11-10 16:10:07 +0000},
      Month = {September},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2014-Raza-DEFVUGCOBDEFVUGCOB.pdf},
      Title = {Depth Extraction from Videos Using Geometric Context and Occlusion Boundaries},
      Url = {http://www.cc.gatech.edu/cpl/projects/videodepth/},
      Year = {2014},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videodepth/}}

We present an algorithm to estimate depth in dynamic video scenes.

We propose to learn and infer depth in videos from appearance, motion, occlusion boundaries, and geometric context of the scene. Using our method, depth can be estimated from unconstrained videos with no requirement of camera pose estimation, and with significant background/foreground motions. We start by decomposing a video into spatio-temporal regions. For each spatio-temporal region, we learn the relationship of depth to visual appearance, motion, and geometric classes. Then we infer the depth information of new scenes using piecewise planar parametrization estimated within a Markov random field (MRF) framework by combining appearance to depth learned mappings and occlusion boundary guided smoothness constraints. Subsequently, we perform temporal smoothing to obtain temporally consistent depth maps.

To evaluate our depth estimation algorithm, we provide a novel dataset with ground truth depth for outdoor video scenes. We present a thorough evaluation of our algorithm on our new dataset and the publicly available Make3d static image dataset.
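The inference step described above (appearance-to-depth unaries combined with occlusion-boundary-gated smoothness) can be sketched in miniature. This is not the authors' code: it uses a toy 1D chain of regions, an invented discrete set of depth labels, and iterated conditional modes in place of a full MRF solver.

```python
# Hedged sketch of MRF-style depth inference: per-region depth
# predictions act as unary costs, and a smoothness term couples
# neighboring regions unless an occlusion boundary separates them
# (depth is allowed to jump across occlusions).

DEPTH_LABELS = [1.0, 2.0, 4.0, 8.0, 16.0]  # candidate depths (illustrative)

def unary_cost(region_pred, depth):
    """Cost of assigning `depth` to a region whose learned
    appearance-to-depth mapping suggested `region_pred`."""
    return (region_pred - depth) ** 2

def pairwise_cost(d1, d2, occlusion_boundary, weight=0.5):
    """Smoothness between neighbors, switched off across occlusion
    boundaries."""
    if occlusion_boundary:
        return 0.0
    return weight * abs(d1 - d2)

def infer_depths(predictions, boundaries, n_iters=10):
    """Iterated conditional modes over a chain of regions.
    predictions[i]: appearance-based depth estimate for region i;
    boundaries[i]: True if an occlusion boundary lies between
    regions i and i+1."""
    depths = [min(DEPTH_LABELS, key=lambda d: unary_cost(p, d))
              for p in predictions]
    for _ in range(n_iters):
        for i in range(len(depths)):
            def total(d):
                c = unary_cost(predictions[i], d)
                if i > 0:
                    c += pairwise_cost(depths[i - 1], d, boundaries[i - 1])
                if i + 1 < len(depths):
                    c += pairwise_cost(d, depths[i + 1], boundaries[i])
                return c
            depths[i] = min(DEPTH_LABELS, key=total)
    return depths
```

With predictions `[1.2, 1.9, 7.5]` and an occlusion boundary before the last region, the smoothness term keeps the first two regions close while letting the third jump to a far depth.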


Spring 2014 term begins; teaching CS 4464/6465 (Computational Journalism) and CS 4001 (Computerization and Society)

January 6th, 2014 Irfan Essa Posted in IROS/ICRA, ISWC, PAMI/ICCV/CVPR/ECCV

Welcome to the Spring 2014 term. Happy 2014 to all. This term I am teaching CS 4464/6465 (Computational Journalism) and CS 4001 (Computerization and Society) at Georgia Tech. The following links provide more information on both classes.

  • CS 4464 / CS 6465 Computational Journalism: This class is aimed at understanding the computational and technological advancements in the area of journalism. Primary focus is on the study of technologies for developing new tools for (a) sense-making from diverse news information sources, (b) the impact of more and cheaper networked sensors (c) collaborative human models for information aggregation and sense-making, (d) mashups and the use of programming in journalism, (e) the impact of mobile computing and data gathering, (f) computational approaches to information quality, (g) data mining for personalization and aggregation, and (h) citizen journalism.
  • CS 4001 Computerization and Society: Although Computing, Society and Professionalism is a required course for CS majors, it is not a typical computer science course. Rather than dealing with the technical content of computing, it addresses the effects of computing on individuals, organizations, and society, and on what your responsibilities are as a computing professional in light of those impacts. The topic is a very broad one and one that you will have to deal with almost every day of your professional life. The issues are sometimes as intellectually deep as some of the greatest philosophical writings in history – and sometimes as shallow as a report on the evening TV news. This course can do little more than introduce you to the topics, but, if successful, will change the way you view the technology with which you work. You will do a lot of reading, analyzing, and communicating (verbally and in writing) in this course. It will require your active participation throughout the semester and should be fun and enlightening.

Paper in CVIU 2013 “A Visualization Framework for Team Sports Captured using Multiple Static Cameras”

October 3rd, 2013 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Jessica Hodgins, PAMI/ICCV/CVPR/ECCV, Papers, Raffay Hamid, Sports Visualization

  • R. Hamid, R. Kumar, J. Hodgins, and I. Essa (2013), “A Visualization Framework for Team Sports Captured using Multiple Static Cameras,” Computer Vision and Image Understanding, 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @article{2013-Hamid-VFTSCUMSC,
      Author = {Raffay Hamid and Ramkrishan Kumar and Jessica Hodgins and Irfan Essa},
      Date-Added = {2013-10-22 13:42:46 +0000},
      Date-Modified = {2014-04-28 17:09:21 +0000},
      Doi = {10.1016/j.cviu.2013.09.006},
      Issn = {1077-3142},
      Journal = {{Computer Vision and Image Understanding}},
      Number = {0},
      Pages = {-},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Hamid-VFTSCUMSC.pdf},
      Title = {A Visualization Framework for Team Sports Captured using Multiple Static Cameras},
      Url = {http://raffayhamid.com/sports_viz.shtml},
      Video = {http://www.youtube.com/watch?v=VwzAMi9pUDQ},
      Year = {2013},
      Bdsk-Url-1 = {http://www.sciencedirect.com/science/article/pii/S1077314213001768},
      Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.cviu.2013.09.006},
      Bdsk-Url-3 = {http://raffayhamid.com/sports_viz.shtml}}

Abstract

We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this location information to generate a visualization of the virtual offside line in soccer games. To compute the position of the offside line, we need to localize players' positions, and identify their team roles. We solve the problem of fusing corresponding players' positional information by finding minimum weight K-length cycles in a complete K-partite graph. Each partite of the graph corresponds to one of the K cameras, whereas each node of a partite encodes the position and appearance of a player observed from a particular camera. To find the minimum weight cycles in this graph, we use a dynamic programming based approach that varies over a continuum from maximally to minimally greedy in terms of the number of graph-paths explored at each iteration. We present proofs for the efficiency and performance bounds of our algorithms. Finally, we demonstrate the robustness of our framework by testing it on 82,000 frames of soccer footage captured over eight different illumination conditions, play types, and team attire. Our framework runs in near-real time, and processes video from 3 full HD cameras in about 0.4 seconds for each set of 3 corresponding frames.
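The correspondence step lends itself to a small illustration. The code below is a hedged brute-force baseline, not the paper's dynamic program: it enumerates every K-length cycle (one node per partite) and keeps the cheapest, which is what the tunable greedy DP in the paper approximates at scale. The node format and cost function are assumptions for the sketch.

```python
# Hedged baseline: minimum-weight K-length cycle in a complete
# K-partite graph. Each partite holds candidate detections of the
# same player from one of K cameras; a cheap cycle through all K
# partites fuses one detection per camera into a single player.

from itertools import product

def cycle_weight(cycle, edge_cost):
    """Sum of edge costs around the cycle (last node links back to first)."""
    return sum(edge_cost(cycle[i], cycle[(i + 1) % len(cycle)])
               for i in range(len(cycle)))

def min_weight_cycle(partites, edge_cost):
    """partites: list of K lists of nodes (one list per camera).
    Exhaustively scores every one-node-per-partite selection as a
    K-length cycle and returns (weight, cycle) for the cheapest."""
    best = None
    for cycle in product(*partites):
        w = cycle_weight(cycle, edge_cost)
        if best is None or w < best[0]:
            best = (w, cycle)
    return best
```

With nodes as ground-plane positions and Euclidean distance as the edge cost, mutually nearby detections across cameras form the cheapest cycle.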

via ScienceDirect: A Visualization Framework for Team Sports Captured using Multiple Static Cameras.


Paper in IEEE CVPR 2013 “Geometric Context from Videos”

June 27th, 2013 Irfan Essa Posted in Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza

  • S. H. Raza, M. Grundmann, and I. Essa (2013), “Geometric Context from Videos,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @inproceedings{2013-Raza-GCFV,
      Author = {Syed Hussain Raza and Matthias Grundmann and Irfan Essa},
      Booktitle = {{Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
      Date-Added = {2013-06-25 11:46:01 +0000},
      Date-Modified = {2014-04-28 17:09:08 +0000},
      Doi = {10.1109/CVPR.2013.396},
      Month = {June},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Raza-GCFV.pdf},
      Title = {Geometric Context from Videos},
      Url = {http://www.cc.gatech.edu/cpl/projects/videogeometriccontext/},
      Video = {http://www.youtube.com/watch?v=EXPmgKHPJ64},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/videogeometriccontext/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2013.396}}

Abstract

We present a novel algorithm for estimating the broad 3D geometric structure of outdoor video scenes. Leveraging spatio-temporal video segmentation, we decompose a dynamic scene captured by a video into geometric classes, based on predictions made by region-classifiers that are trained on appearance and motion features. By examining the homogeneity of the prediction, we combine predictions across multiple segmentation hierarchy levels, alleviating the need to determine the granularity a priori. We built a novel, extensive dataset on geometric context of video to evaluate our method, consisting of over 100 ground-truth annotated outdoor videos with over 20,000 frames. To further scale beyond this dataset, we propose a semi-supervised learning framework to expand the pool of labeled data with high confidence predictions obtained from unlabeled data. Our system produces an accurate prediction of the geometric context of video, achieving 96% accuracy across the main geometric classes.
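The cross-hierarchy combination can be illustrated with a short sketch (my own, not the paper's code): for each region, prefer the hierarchy level whose class posterior is most homogeneous, measured here by entropy. Class names and scores are invented.

```python
# Hedged sketch: choosing among segmentation hierarchy levels by
# prediction homogeneity, instead of fixing one granularity a priori.

import math

def entropy(probs):
    """Shannon entropy of a discrete distribution (lower = more homogeneous)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def combine_levels(level_predictions):
    """level_predictions: one class-probability dict per hierarchy
    level, all describing the same region. Returns the class label
    from the most homogeneous (lowest-entropy) level."""
    best = min(level_predictions, key=lambda pred: entropy(pred.values()))
    return max(best, key=best.get)
```

A coarse level that is unsure (e.g. 50/50 between "sky" and "ground") loses to a finer level whose classifier is confident.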

via IEEE Xplore – Geometric Context from Videos.


Paper in IEEE CVPR 2013 “Decoding Children’s Social Behavior”

June 27th, 2013 Irfan Essa Posted in Affective Computing, Behavioral Imaging, Denis Lantsman, Gregory Abowd, James Rehg, PAMI/ICCV/CVPR/ECCV, Papers, Thomas Ploetz

  • J. M. Rehg, G. D. Abowd, A. Rozga, M. Romero, M. A. Clements, S. Sclaroff, I. Essa, O. Y. Ousley, Y. Li, C. Kim, H. Rao, J. C. Kim, L. L. Presti, J. Zhang, D. Lantsman, J. Bidwell, and Z. Ye (2013), “Decoding Children’s Social Behavior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @inproceedings{2013-Rehg-DCSB,
      Author = {James M. Rehg and Gregory D. Abowd and Agata Rozga and Mario Romero and Mark A. Clements and Stan Sclaroff and Irfan Essa and Opal Y. Ousley and Yin Li and Chanho Kim and Hrishikesh Rao and Jonathan C. Kim and Liliana Lo Presti and Jianming Zhang and Denis Lantsman and Jonathan Bidwell and Zhefan Ye},
      Booktitle = {{Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
      Date-Added = {2013-06-25 11:47:42 +0000},
      Date-Modified = {2014-04-28 17:08:51 +0000},
      Doi = {10.1109/CVPR.2013.438},
      Month = {June},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~rehg/Papers/Rehg_CVPR13.pdf},
      Title = {Decoding Children's Social Behavior},
      Url = {http://www.cbi.gatech.edu/mmdb/},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cbi.gatech.edu/mmdb/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2013.438}}

Abstract

We introduce a new problem domain for activity recognition: the analysis of children’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.

Full database available from http://www.cbi.gatech.edu/mmdb/

via IEEE Xplore – Decoding Children’s Social Behavior.


Paper in IEEE CVPR 2013 “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition”

June 27th, 2013 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Grant Schindler, PAMI/ICCV/CVPR/ECCV, Papers, Sports Visualization, Thomas Ploetz, Vinay Bettadapura

  • V. Bettadapura, G. Schindler, T. Ploetz, and I. Essa (2013), “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @inproceedings{2013-Bettadapura-ABDDTSIAR,
      Author = {Vinay Bettadapura and Grant Schindler and Thomas Ploetz and Irfan Essa},
      Booktitle = {{Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}},
      Date-Added = {2013-06-25 11:42:31 +0000},
      Date-Modified = {2014-04-28 17:10:00 +0000},
      Doi = {10.1109/CVPR.2013.338},
      Month = {June},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Bettadapura-ABDDTSIAR.pdf},
      Title = {Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition},
      Url = {http://www.cc.gatech.edu/cpl/projects/abow/},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/abow/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2013.338}}

Abstract

We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. In addition, we also propose the use of randomly sampled regular expressions to discover and encode patterns in activities. We demonstrate the effectiveness of our approach in experimental evaluations where we successfully recognize activities and detect anomalies in four complex datasets.
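The core idea of augmenting an orderless BoW histogram with temporal structure can be sketched as follows. This is a simplified stand-in for the paper's data-driven n-gram discovery; the event names are invented.

```python
# Hedged sketch: a plain bag-of-words histogram discards ordering,
# so two different activities with the same events look identical.
# Adding temporal n-gram counts restores local ordering information.

from collections import Counter

def bag_of_words(events):
    """Standard orderless histogram -- the baseline being augmented."""
    return Counter(events)

def augmented_bow(events, n=2):
    """BoW histogram plus counts of temporal n-grams (tuples of n
    consecutive events), which preserve local ordering."""
    feats = Counter(events)
    for i in range(len(events) - n + 1):
        feats[tuple(events[i:i + n])] += 1
    return feats
```

Two event sequences with identical histograms but different orderings become distinguishable once bigrams are added, which is the property the paper exploits for long-term activities.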

via IEEE Xplore – Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition.


Paper in ECCV Workshop 2012: “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos”

October 7th, 2012 Irfan Essa Posted in Activity Recognition, Awards, Google, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra, WWW

Weakly Supervised Learning of Object Segmentations from Web-Scale Videos

  • G. Hartmann, M. Grundmann, J. Hoffman, D. Tsai, V. Kwatra, O. Madani, S. Vijayanarasimhan, I. Essa, J. Rehg, and R. Sukthankar (2012), “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos,” in Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Hartmann-WSLOSFWV,
      Author = {Glenn Hartmann and Matthias Grundmann and Judy Hoffman and David Tsai and Vivek Kwatra and Omid Madani and Sudheendra Vijayanarasimhan and Irfan Essa and James Rehg and Rahul Sukthankar},
      Booktitle = {Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media},
      Date-Added = {2012-10-23 15:03:18 +0000},
      Date-Modified = {2013-10-22 18:57:10 +0000},
      Doi = {10.1007/978-3-642-33863-2_20},
      Pdf = {http://www.cs.cmu.edu/~rahuls/pub/eccv2012wk-cp-rahuls.pdf},
      Title = {Weakly Supervised Learning of Object Segmentations from Web-Scale Videos},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1007/978-3-642-33863-2_20}}

Abstract

We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatiotemporal masks for each object, such as "dog", without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graph-cuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.

Presented at: ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012, October 7-12, 2012, in Florence, ITALY.

Awarded the BEST PAPER AWARD!



At CVPR 2012, in Providence, RI, June 16 – 21, 2012

June 17th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Presentations, Vivek Kwatra

IEEE CVPR 2012 is in Providence, RI, from June 16-21, 2012.

Busy week ahead meeting good friends and colleagues. Here are some highlights of what my group is involved with.

Paper in Main Conference

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [Poster on Tuesday 6/19/2012]

Demo in Main Conference

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal,” in Proceedings of IEEE International Conference on Computational Photography (ICCP), 2012. [WEBSITE] [VIDEO] [Demo on Monday and Tuesday (6/18-19) at the Google Booth]

Invited Talk in Workshop


Paper in IEEE CVPR 2012: “Detecting Regions of Interest in Dynamic Scenes with Camera Motions”

June 16th, 2012 Irfan Essa Posted in Activity Recognition, Kihwan Kim, Numerical Machine Learning, PAMI/ICCV/CVPR/ECCV, Papers, PERSEAS, Visual Surviellance

Detecting Regions of Interest in Dynamic Scenes with Camera Motions

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2012-Kim-DRIDSWCM,
      Author = {Kihwan Kim and Dongryeol Lee and Irfan Essa},
      Blog = {http://prof.irfanessa.com/2012/04/09/paper-cvpr2012/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Added = {2012-04-09 22:37:06 +0000},
      Date-Modified = {2013-10-22 18:53:11 +0000},
      Doi = {10.1109/CVPR.2012.6247809},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2012-Kim-DRIDSWCM.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Detecting Regions of Interest in Dynamic Scenes with Camera Motions},
      Url = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Video = {http://www.youtube.com/watch?v=19BMwDMCSp8},
      Year = {2012},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2012.6247809}}

Abstract

We present a method to detect the regions of interests in moving camera views of dynamic scenes with multiple moving objects. We start by extracting a global motion tendency that reflects the scene context by tracking movements of objects in the scene. We then use Gaussian process regression to represent the extracted motion tendency as a stochastic vector field. The generated stochastic field is robust to noise and can handle a video from an uncalibrated moving camera. We use the stochastic field for predicting important future regions of interest as the scene evolves dynamically.

We evaluate our approach on a variety of videos of team sports and compare the detected regions of interest to the camera motion generated by actual camera operators. Our experimental results demonstrate that our approach is computationally efficient and provides better predictions than previously proposed RBF-based approaches.
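The idea of densifying sparse tracked motions into a smooth field can be sketched compactly. The paper uses Gaussian process regression; the stand-in below uses Nadaraya-Watson kernel regression with a squared-exponential kernel, which shares the smoothing intuition in far less code (but gives no uncertainty estimates). All sample data are invented.

```python
# Hedged sketch: turning sparse, noisy object-motion samples into a
# smooth dense vector field that can be queried anywhere in the frame.

import math

def kernel(p, q, bandwidth=1.0):
    """Squared-exponential weight between two 2D points."""
    d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    return math.exp(-d2 / (2 * bandwidth ** 2))

def motion_field(query, samples, bandwidth=1.0):
    """Estimate the motion vector at `query` from (position, velocity)
    samples obtained by tracking objects in the scene. Nearby samples
    dominate; distant ones are smoothly down-weighted."""
    wsum, vx, vy = 0.0, 0.0, 0.0
    for pos, vel in samples:
        w = kernel(query, pos, bandwidth)
        wsum += w
        vx += w * vel[0]
        vy += w * vel[1]
    return (vx / wsum, vy / wsum) if wsum > 0 else (0.0, 0.0)
```

Querying the field on a dense grid yields the kind of smooth tendency map that can then be scanned for regions of converging motion.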

Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2012, Providence, RI, June 16-21, 2012


Presentation at CVPR 2012 workshop on Large Scale Video Search and Mining “Extracting Content and Context from Video.”

June 5th, 2012 Irfan Essa Posted in Activity Recognition, PAMI/ICCV/CVPR/ECCV

Extracting Content and Context from Video.

(Presentation at CVPR 2012 workshop on Large Scale Video Search and Mining 2012, June 21, 2012)

Irfan Essa
Georgia Tech
prof.irfanessa.com

Abstract

In this talk, I will describe various efforts aimed at extracting context and content from video. I will highlight some of our recent work in extracting spatio-temporal features and the related saliency information from the video, which can be used to detect and localize regions of interest in video. Then I will describe approaches that use structured and unstructured representations to recognize complex and extended-time actions. I will also discuss the need for unsupervised activity discovery, and detection of anomalous activities from videos. I will show a variety of examples, which will include online videos, mobile videos, surveillance and home monitoring video, and sports videos. Finally, I will pose a series of questions and make observations about how we need to extend our current paradigms of video understanding to go beyond local spatio-temporal features, and standard time-series and bag-of-words models.

AddThis Social Bookmark Button