
Two Ph.D. defenses on the same day. A first for me!

April 2nd, 2014 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Health Systems, PhD, S. Hussain Raza, Students, Yachna Sharma

Today, two of my Ph.D. students defended their dissertations, back to back. Congratulations to both on finishing.

Thesis title: Surgical Skill Assessment Using Motion Texture Analysis
Student: Yachna Sharma, Ph.D. Candidate in ECE
http://users.ece.gatech.edu/~ysharma3/
Date/Time: April 2nd, 1:00 pm

Thesis title: Temporally Consistent Semantic Segmentation in Videos
Student: S. Hussain Raza, Ph.D. Candidate in ECE
https://sites.google.com/site/shussainraza5/
Date/Time: April 2nd, 1:00 pm

Location: CSIP Library, Room 5186, Centergy One Building

 


Paper in CVIU 2013 “A Visualization Framework for Team Sports Captured using Multiple Static Cameras”

October 3rd, 2013 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Jessica Hodgins, PAMI/ICCV/CVPR/ECCV, Papers, Raffay Hamid, Sports Visualization

  • R. Hamid, R. Kumar, J. Hodgins, and I. Essa (2013), “A Visualization Framework for Team Sports Captured using Multiple Static Cameras,” Computer Vision and Image Understanding, 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @article{2013-Hamid-VFTSCUMSC,
      Author = {Raffay Hamid and Ramkrishan Kumar and Jessica Hodgins and Irfan Essa},
      Date-Added = {2013-10-22 13:42:46 +0000},
      Date-Modified = {2013-10-22 13:51:43 +0000},
      Doi = {10.1016/j.cviu.2013.09.006},
      Issn = {1077-3142},
      Journal = {Computer Vision and Image Understanding},
      Number = {0},
      Pages = {-},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Hamid-VFTSCUMSC.pdf},
      Title = {A Visualization Framework for Team Sports Captured using Multiple Static Cameras},
      Url = {http://raffayhamid.com/sports_viz.shtml},
      Video = {http://www.youtube.com/watch?v=VwzAMi9pUDQ},
      Year = {2013},
      Bdsk-Url-1 = {http://www.sciencedirect.com/science/article/pii/S1077314213001768},
      Bdsk-Url-2 = {http://dx.doi.org/10.1016/j.cviu.2013.09.006},
      Bdsk-Url-3 = {http://raffayhamid.com/sports_viz.shtml}}

Abstract

We present a novel approach for robust localization of multiple people observed using a set of static cameras. We use this location information to generate a visualization of the virtual offside line in soccer games. To compute the position of the offside line, we need to localize players' positions and identify their team roles. We solve the problem of fusing corresponding players' positional information by finding minimum weight K-length cycles in a complete K-partite graph. Each partite of the graph corresponds to one of the K cameras, whereas each node of a partite encodes the position and appearance of a player observed from a particular camera. To find the minimum weight cycles in this graph, we use a dynamic programming based approach that varies over a continuum from maximally to minimally greedy in terms of the number of graph-paths explored at each iteration. We present proofs for the efficiency and performance bounds of our algorithms. Finally, we demonstrate the robustness of our framework by testing it on 82,000 frames of soccer footage captured over eight different illumination conditions, play types, and team attire. Our framework runs in near-real time, and processes video from 3 full-HD cameras in about 0.4 seconds for each set of 3 corresponding frames.
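The cycle search described above can be sketched as a chain dynamic program: fix a start node (detection) in the first partite, sweep through the remaining partites keeping the cheapest path into each node, then close the cycle back to the start. The sketch below is a minimal stand-in for the paper's family of algorithms, assuming a fixed camera ordering, K ≥ 3 cameras, and a precomputed dictionary of pairwise appearance/position dissimilarities; the names and data layout are illustrative, not the authors' implementation.

```python
def min_weight_k_cycle(partites, weight):
    """Find a minimum-weight cycle visiting exactly one node per partite.

    partites : list of K lists of node labels (one list per camera); each
               node is one detected player in that camera's view.
    weight   : dict mapping ordered (node_a, node_b) pairs to dissimilarity.
    Returns (total_weight, cycle) where cycle lists one node per partite.
    """
    K = len(partites)
    best = (float("inf"), None)
    # Fix the start node in partite 0, then run a chain DP over partites 1..K-1.
    for start in partites[0]:
        # cost[v] = cheapest path start -> ... -> v, one node per partite so far
        cost = {v: weight[start, v] for v in partites[1]}
        back = {v: [start, v] for v in partites[1]}
        for i in range(2, K):
            new_cost, new_back = {}, {}
            for v in partites[i]:
                u = min(cost, key=lambda u: cost[u] + weight[u, v])
                new_cost[v] = cost[u] + weight[u, v]
                new_back[v] = back[u] + [v]
            cost, back = new_cost, new_back
        # Close the cycle from the last partite back to the start node.
        for v in partites[K - 1]:
            total = cost[v] + weight[v, start]
            if total < best[0]:
                best = (total, back[v])
    return best
```

Each returned cycle groups one detection per camera as the same physical player; the paper's continuum would additionally bound how many candidate paths are kept per step.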

via ScienceDirect: A Visualization Framework for Team Sports Captured using Multiple Static Cameras.


Paper in ACM Ubicomp 2013 “Technological approaches for addressing privacy concerns when recognizing eating behaviors with wearable cameras”

September 14th, 2013 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Edison Thomaz, Gregory Abowd, ISWC, Mobile Computing, Papers, Ubiquitous Computing

  • E. Thomaz, A. Parnami, J. Bidwell, I. Essa, and G. D. Abowd (2013), “Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp ’13), 2013. [PDF] [DOI] [BIBTEX]
    @inproceedings{2013-Thomaz-TAAPCWREBWWC,
      Author = {Edison Thomaz and Aman Parnami and Jonathan Bidwell and Irfan Essa and Gregory D. Abowd},
      Booktitle = {Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '13)},
      Date-Added = {2013-10-22 18:31:23 +0000},
      Date-Modified = {2013-10-22 19:19:14 +0000},
      Doi = {10.1145/2493432.2493509},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Thomaz-TAAPCWREBWWC.pdf},
      Title = {Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras.},
      Year = {2013},
      Bdsk-Url-1 = {http://dx.doi.org/10.1145/2493432.2493509}}

Abstract

First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people’s eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.
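The privacy-saliency idea can be pictured as a 2x2 tally: each image either does or does not show eating evidence, and either is or is not privacy-sensitive; a good filter empties the privacy-sensitive cells while preserving the eating-evidence ones. The sketch below is a hypothetical encoding of that bookkeeping (the function names, boolean labels, and effectiveness measure are illustrative assumptions, not the paper's definitions).

```python
from collections import Counter

def privacy_saliency_matrix(images):
    """Tally images into the four cells of a privacy-saliency matrix.

    images: iterable of (has_eating_evidence, is_privacy_sensitive) booleans,
    one pair per image (e.g. produced by human coders or detectors).
    """
    counts = Counter(images)
    return {
        ("eating", "private"): counts[(True, True)],
        ("eating", "ok"): counts[(True, False)],
        ("no-eating", "private"): counts[(False, True)],
        ("no-eating", "ok"): counts[(False, False)],
    }

def filter_effectiveness(before, after):
    """Compare matrices before/after an automated filter (e.g. face detection).

    Returns (privacy_images_removed, fraction_of_eating_evidence_retained):
    a filter is better when the first number is high and the second stays near 1.
    """
    privacy_removed = (before[("eating", "private")] + before[("no-eating", "private")]
                       - after[("eating", "private")] - after[("no-eating", "private")])
    eating_kept = after[("eating", "private")] + after[("eating", "ok")]
    eating_total = before[("eating", "private")] + before[("eating", "ok")]
    retained = (eating_kept / eating_total) if eating_total else 1.0
    return privacy_removed, retained
```

Running the two tallies for each of the four filters in the paper would then let them be compared on the same two axes.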

via ACM DL Technological approaches for addressing privacy concerns when recognizing eating behaviors with wearable cameras.


Paper in IEEE CVPR 2013 “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition”

June 27th, 2013 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Grant Schindler, PAMI/ICCV/CVPR/ECCV, Papers, Sports Visualization, Thomas Ploetz, Vinay Bettadapura

  • V. Bettadapura, G. Schindler, T. Ploetz, and I. Essa (2013), “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @inproceedings{2013-Bettadapura-ABDDTSIAR,
      Author = {Vinay Bettadapura and Grant Schindler and Thomas Ploetz and Irfan Essa},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Added = {2013-06-25 11:42:31 +0000},
      Date-Modified = {2013-10-22 18:39:15 +0000},
      Doi = {10.1109/CVPR.2013.338},
      Month = {June},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Bettadapura-ABDDTSIAR.pdf},
      Title = {Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition},
      Url = {http://www.cc.gatech.edu/cpl/projects/abow/},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/abow/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2013.338}}

Abstract

We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. We also propose the use of randomly sampled regular expressions to discover and encode patterns in activities. We demonstrate the effectiveness of our approach in experimental evaluations where we successfully recognize activities and detect anomalies in four complex datasets.
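The randomly-sampled-regular-expression idea can be sketched concretely: discretize an activity stream into a string of symbols, sample small random patterns over that alphabet, and use per-pattern match counts as extra features alongside the BoW histogram. The sampler and feature extractor below are loose illustrations under assumed single-character activity "words"; the paper's actual pattern grammar and sampling scheme may differ.

```python
import random
import re

def sample_regex(alphabet, rng, max_len=3):
    """Sample a small random regular expression over activity symbols.

    Each position is a literal symbol, a wildcard '.', or a symbol with a
    Kleene star -- a hypothetical grammar standing in for the paper's
    randomly sampled regular expressions.
    """
    parts = []
    for _ in range(rng.randint(1, max_len)):
        sym = rng.choice(alphabet)
        parts.append(rng.choice([sym, ".", sym + "*"]))
    return "".join(parts)

def regex_features(sequence, patterns):
    """Count non-overlapping matches of each pattern in a symbol string;
    the counts augment the plain bag-of-words histogram."""
    return [len(re.findall(p, sequence)) for p in patterns]

rng = random.Random(0)
alphabet = ["a", "b", "c"]          # discretized activity 'words'
patterns = [sample_regex(alphabet, rng) for _ in range(5)]
features = regex_features("abcabca", patterns)
```

In practice one would sample many such patterns and keep only those whose match counts discriminate between activity classes.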

via IEEE Xplore – Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition.


Paper in ECCV Workshop 2012: “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos”

October 7th, 2012 Irfan Essa Posted in Activity Recognition, Awards, Google, Matthias Grundmann, Multimedia, PAMI/ICCV/CVPR/ECCV, Papers, Vivek Kwatra, WWW

Weakly Supervised Learning of Object Segmentations from Web-Scale Videos

  • G. Hartmann, M. Grundmann, J. Hoffman, D. Tsai, V. Kwatra, O. Madani, S. Vijayanarasimhan, I. Essa, J. Rehg, and R. Sukthankar (2012), “Weakly Supervised Learning of Object Segmentations from Web-Scale Videos,” in Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media, 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Hartmann-WSLOSFWV,
      Author = {Glenn Hartmann and Matthias Grundmann and Judy Hoffman and David Tsai and Vivek Kwatra and Omid Madani and Sudheendra Vijayanarasimhan and Irfan Essa and James Rehg and Rahul Sukthankar},
      Booktitle = {Proceedings of ECCV 2012 Workshop on Web-scale Vision and Social Media},
      Date-Added = {2012-10-23 15:03:18 +0000},
      Date-Modified = {2013-10-22 18:57:10 +0000},
      Doi = {10.1007/978-3-642-33863-2_20},
      Pdf = {http://www.cs.cmu.edu/~rahuls/pub/eccv2012wk-cp-rahuls.pdf},
      Title = {Weakly Supervised Learning of Object Segmentations from Web-Scale Videos},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1007/978-3-642-33863-2_20}}

Abstract

We propose to learn pixel-level segmentations of objects from weakly labeled (tagged) internet videos. Specifically, given a large collection of raw YouTube content, along with potentially noisy tags, our goal is to automatically generate spatio-temporal masks for each object, such as “dog”, without employing any pre-trained object detectors. We formulate this problem as learning weakly supervised classifiers for a set of independent spatio-temporal segments. The object seeds obtained using segment-level classifiers are further refined using graphcuts to generate high-precision object masks. Our results, obtained by training on a dataset of 20,000 YouTube videos weakly tagged into 15 classes, demonstrate automatic extraction of pixel-level object masks. Evaluated against a ground-truthed subset of 50,000 frames with pixel-level annotations, we confirm that our proposed methods can learn good object masks just by watching YouTube.
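The weak-supervision step can be sketched simply: segments from videos carrying the target tag are treated as noisy positives, segments from all other videos as negatives, and a segment-level classifier scores each segment; high-scoring segments become object seeds for the graph-cut refinement. The nearest-centroid scorer below is a deliberately simple stand-in for the paper's actual segment classifiers, with hypothetical names and data layout.

```python
import numpy as np

def weak_segment_seeds(segments, video_tags, target_tag, thresh=0.0):
    """Score spatio-temporal segments for one tag under weak video-level labels.

    segments   : list of (video_id, feature_vector) pairs.
    video_tags : dict video_id -> tag (one noisy tag per video).
    Segments from videos tagged target_tag are noisy positives, the rest
    negatives; scores above `thresh` become object seeds.
    """
    pos = np.array([f for vid, f in segments if video_tags[vid] == target_tag])
    neg = np.array([f for vid, f in segments if video_tags[vid] != target_tag])
    mu_pos, mu_neg = pos.mean(axis=0), neg.mean(axis=0)
    seeds = []
    for vid, f in segments:
        # Positive score: closer to the positive centroid than the negative one.
        score = np.linalg.norm(f - mu_neg) - np.linalg.norm(f - mu_pos)
        if video_tags[vid] == target_tag and score > thresh:
            seeds.append((vid, f, score))
    return seeds
```

Because video-level tags are noisy, the thresholding matters: only segments that look much more like the positive class than the background survive as seeds.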

Presented at: ECCV 2012 Workshop on Web-scale Vision and Social Media, October 7-12, 2012, in Florence, Italy.

Awarded the BEST PAPER AWARD!

 


Paper in IROS 2012: “Linguistic Transfer of Human Assembly Tasks to Robots”

October 7th, 2012 Irfan Essa Posted in 0205507, Activity Recognition, IROS/ICRA, Mike Stilman, Robotics

Linguistic Transfer of Human Assembly Tasks to Robots

  • N. Dantam, I. Essa, and M. Stilman (2012), “Linguistic Transfer of Human Assembly Tasks to Robots,” in Proceedings of Intelligent Robots and Systems (IROS), 2012. [PDF] [DOI] [BIBTEX]
    @inproceedings{2012-Dantam-LTHATR,
      Author = {N. Dantam and I. Essa and M. Stilman},
      Booktitle = {Proceedings of Intelligent Robots and Systems (IROS)},
      Date-Added = {2012-10-23 15:07:46 +0000},
      Date-Modified = {2013-10-22 18:58:04 +0000},
      Doi = {10.1109/IROS.2012.6385749},
      Pdf = {http://www.cc.gatech.edu/~ndantam3/papers/dantam2012assembly.pdf},
      Title = {Linguistic Transfer of Human Assembly Tasks to Robots},
      Year = {2012},
      Bdsk-Url-1 = {http://dx.doi.org/10.1109/IROS.2012.6385749}}

Abstract

We demonstrate the automatic transfer of an assembly task from human to robot. This work extends efforts showing the utility of linguistic models in verifiable robot control policies by now performing real visual analysis of human demonstrations to automatically extract a policy for the task. This method tokenizes each human demonstration into a sequence of object connection symbols, then transforms the set of sequences from all demonstrations into an automaton, which represents the task-language for assembling a desired object. Finally, we combine this assembly automaton with a kinematic model of a robot arm to reproduce the demonstrated task.
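The pipeline above, tokenized demonstrations turned into an automaton, can be sketched as a prefix-tree acceptor: each demonstration traces a path of object-connection symbols, and the states reached at the ends of demonstrations accept. This is a simplified stand-in for the paper's task-language automaton (no state merging or minimization), with hypothetical token names like 'A+B' for "connect parts A and B".

```python
def build_task_automaton(demonstrations):
    """Build a finite acceptor for the observed assembly sequences.

    demonstrations: list of demos, each a list of object-connection tokens.
    Returns (transitions, accepting): transitions maps state -> {symbol: state},
    accepting is the set of states where a demonstrated assembly is complete.
    """
    transitions = {0: {}}
    accepting = set()
    next_state = 1
    for demo in demonstrations:
        state = 0
        for symbol in demo:
            if symbol not in transitions[state]:
                transitions[state][symbol] = next_state
                transitions[next_state] = {}
                next_state += 1
            state = transitions[state][symbol]
        accepting.add(state)
    return transitions, accepting

def accepts(transitions, accepting, sequence):
    """Check whether a token sequence is in the demonstrated task language."""
    state = 0
    for symbol in sequence:
        if symbol not in transitions[state]:
            return False
        state = transitions[state][symbol]
    return state in accepting
```

A robot controller can then restrict itself to action sequences the automaton accepts, which is what makes the resulting policy verifiable.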

Presented at: IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2012), October 7-12, 2012, in Vilamoura, Algarve, Portugal.

 


At CVPR 2012, in Providence, RI, June 16 – 21, 2012

June 17th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Kihwan Kim, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Presentations, Vivek Kwatra

IEEE CVPR 2012 is in Providence, RI, June 16-21, 2012.

Busy week ahead meeting good friends and colleagues. Here are some highlights of what my group is involved with.

Paper in Main Conference

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [Poster on Tuesday 6/19/2012]

Demo in Main Conference

  • M. Grundmann, V. Kwatra, D. Castro, and I. Essa (2012), “Calibration-Free Rolling Shutter Removal” (Paper in ICCP 2012). [WEBSITE] [VIDEO] [Demo on Monday and Tuesday (6/18-19) at the Google Booth]

Invited Talk in Workshop


Paper in IEEE CVPR 2012: “Detecting Regions of Interest in Dynamic Scenes with Camera Motions”

June 16th, 2012 Irfan Essa Posted in Activity Recognition, Kihwan Kim, Numerical Machine Learning, PAMI/ICCV/CVPR/ECCV, Papers, PERSEAS, Visual Surveillance

Detecting Regions of Interest in Dynamic Scenes with Camera Motions

  • K. Kim, D. Lee, and I. Essa (2012), “Detecting Regions of Interest in Dynamic Scenes with Camera Motions,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012. [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @inproceedings{2012-Kim-DRIDSWCM,
      Author = {Kihwan Kim and Dongryeol Lee and Irfan Essa},
      Blog = {http://prof.irfanessa.com/2012/04/09/paper-cvpr2012/},
      Booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      Date-Added = {2012-04-09 22:37:06 +0000},
      Date-Modified = {2013-10-22 18:53:11 +0000},
      Doi = {10.1109/CVPR.2012.6247809},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2012-Kim-DRIDSWCM.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Detecting Regions of Interest in Dynamic Scenes with Camera Motions},
      Url = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Video = {http://www.youtube.com/watch?v=19BMwDMCSp8},
      Year = {2012},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/roi/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/CVPR.2012.6247809}}

Abstract

We present a method to detect regions of interest in moving camera views of dynamic scenes with multiple moving objects. We start by extracting a global motion tendency that reflects the scene context by tracking movements of objects in the scene. We then use Gaussian process regression to represent the extracted motion tendency as a stochastic vector field. The generated stochastic field is robust to noise and can handle video from an uncalibrated moving camera. We use the stochastic field to predict important future regions of interest as the scene evolves dynamically.

We evaluate our approach on a variety of videos of team sports and compare the detected regions of interest to the camera motion generated by actual camera operators. Our experimental results demonstrate that our approach is computationally efficient, and provides better prediction than those of previously proposed RBF-based approaches.
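The Gaussian-process step above can be sketched with plain NumPy: treat each tracked motion vector as a noisy observation of an underlying 2D velocity field, regress each component independently with an RBF kernel, and read off a predictive mean (the field) and variance (the "stochastic" part) at query locations. The kernel and hyperparameters below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def gp_vector_field(points, velocities, query, length_scale=0.5, noise=1e-2):
    """Predict a dense motion field from sparse object tracks via GP regression.

    points     : (N, 2) positions where motion was observed.
    velocities : (N, 2) motion vectors at those positions.
    query      : (M, 2) positions to predict.
    Returns (mean, var): (M, 2) predicted vectors and (M,) predictive variance.
    """
    def rbf(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * length_scale ** 2))

    K = rbf(points, points) + noise * np.eye(len(points))
    Ks = rbf(query, points)
    alpha = np.linalg.solve(K, velocities)   # (N, 2): one column per component
    mean = Ks @ alpha                        # predictive mean field at queries
    # Predictive variance with unit-amplitude RBF: k(x*, x*) = 1.
    var = 1.0 - (Ks * np.linalg.solve(K, Ks.T).T).sum(-1)
    return mean, var
```

High predictive variance marks parts of the field far from any track, which is one way such a stochastic field stays robust to sparse, noisy observations.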

Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2012, Providence, RI, June 16-21, 2012


Presentation at CVPR 2012 workshop on Large Scale Video Search and Mining “Extracting Content and Context from Video.”

June 5th, 2012 Irfan Essa Posted in Activity Recognition, PAMI/ICCV/CVPR/ECCV

Extracting Content and Context from Video.

(Presentation at CVPR 2012 workshop on Large Scale Video Search and Mining 2012, June 21, 2012)

Irfan Essa
Georgia Tech
prof.irfanessa.com

Abstract

In this talk, I will describe various efforts aimed at extracting context and content from video. I will highlight some of our recent work in extracting spatio-temporal features and the related saliency information from video, which can be used to detect and localize regions of interest. Then I will describe approaches that use structured and unstructured representations to recognize complex, extended-time actions. I will also discuss the need for unsupervised activity discovery and detection of anomalous activities in videos. I will show a variety of examples, including online videos, mobile videos, surveillance and home-monitoring video, and sports videos. Finally, I will pose a series of questions and make observations about how we need to extend our current paradigms of video understanding to go beyond local spatio-temporal features and standard time-series and bag-of-words models.


At IWCV 2012: “Video Understanding: Extracting Content and Context from Video.”

May 24th, 2012 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Presentations, Visual Surveillance

Video Understanding: Extracting Content and Context from Video.

(Presentation at the International Workshop on Computer Vision 2012, Ortigia, Siracusa, Sicily, May 22-24, 2012.)

Irfan Essa
Georgia Tech

Abstract

In this talk, I will describe various efforts aimed at extracting context and content from video. I will highlight some of our recent work in extracting spatio-temporal features and the related saliency information from video, which can be used to detect and localize regions of interest. Then I will describe approaches that use structured and unstructured representations to recognize complex, extended-time actions. I will also discuss the need for unsupervised activity discovery and detection of anomalous activities in videos. I will show a variety of examples, including online videos, mobile videos, surveillance and home-monitoring video, and sports videos. Finally, I will pose a series of questions and make observations about how we need to extend our current paradigms of video understanding to go beyond local spatio-temporal features and standard time-series and bag-of-words models.
