Paper in MICCAI (2015): “Automated Assessment of Surgical Skills Using Frequency Analysis”

October 6th, 2015 Irfan Essa Posted in Activity Recognition, Aneeq Zia, Eric Sarin, Mark Clements, Medical, MICCAI, Papers, Vinay Bettadapura, Yachna Sharma

Paper

  • A. Zia, Y. Sharma, V. Bettadapura, E. Sarin, M. Clements, and I. Essa (2015), “Automated Assessment of Surgical Skills Using Frequency Analysis,” in International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI), 2015. [PDF] [BIBTEX]
    @InProceedings{2015-Zia-AASSUFA,
      author    = {A. Zia and Y. Sharma and V. Bettadapura and E. Sarin and M. Clements and I. Essa},
      booktitle = {International Conference on Medical Image Computing and Computer Assisted Interventions (MICCAI)},
      month     = {October},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Zia-AASSUFA.pdf},
      title     = {Automated Assessment of Surgical Skills Using Frequency Analysis},
      year      = {2015}
    }

Abstract

We present an automated framework for visual assessment of the expertise level of surgeons using the OSATS (Objective Structured Assessment of Technical Skills) criteria. A video analysis technique for extracting motion quality via frequency coefficients is introduced. The framework is tested in a case study that involved analysis of videos of medical students with different expertise levels performing basic surgical tasks in a surgical training lab setting. We demonstrate that transforming the sequential time data into frequency components effectively extracts the useful information differentiating between different skill levels of surgeons. The results show significant performance improvements using DFT and DCT coefficients over known state-of-the-art techniques.
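The core idea, transforming a motion time series into frequency coefficients before classification, can be sketched with a DFT (the paper also uses the DCT). This is a minimal illustration assuming a 1-D motion signal such as per-frame tool speed; the function name, signals, and coefficient count are illustrative, not the paper's setup.

```python
import numpy as np

def frequency_features(motion_signal, num_coeffs=8):
    """Magnitudes of the lowest-frequency DFT coefficients of a 1-D
    motion signal (illustrative feature extractor, not the paper's)."""
    spectrum = np.abs(np.fft.rfft(motion_signal))
    return spectrum[:num_coeffs]

t = np.linspace(0, 1, 256, endpoint=False)
smooth = np.sin(2 * np.pi * 2 * t)                   # expert-like motion
jittery = smooth + 0.5 * np.sin(2 * np.pi * 40 * t)  # novice-like jitter

f_smooth = frequency_features(smooth)
full_jittery = np.abs(np.fft.rfft(jittery))
```

A smooth, expert-like motion concentrates its spectral energy in low frequencies, while tremor-like jitter shows up as energy in higher bands; that separation is what a classifier over such coefficients can exploit.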


Paper in Ubicomp 2015: “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing”

September 8th, 2015 Irfan Essa Posted in ACM UIST/CHI, Activity Recognition, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Mobile Computing, Papers, UBICOMP, Ubiquitous Computing

Paper

  • E. Thomaz, I. Essa, and G. D. Abowd (2015), “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2015. [PDF] [BIBTEX]
    @InProceedings{2015-Thomaz-PAREMWWIS,
      author    = {Edison Thomaz and Irfan Essa and Gregory D. Abowd},
      booktitle = {Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-PAREMWWIS.pdf},
      title     = {A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing},
      year      = {2015}
    }

Abstract

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall) and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
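A pipeline like this needs a windowing step that turns raw 3-axis accelerometry into fixed-length feature vectors for a classifier. Here is a rough sketch of that step; the sampling rate, window and hop lengths, and statistics are illustrative, not the paper's configuration.

```python
import numpy as np

def window_features(accel, fs=25, win_s=6.0, hop_s=3.0):
    """Slide a fixed-length window over (n_samples, 3) accelerometer
    data and compute simple per-axis statistics (mean, std, RMS)."""
    win, hop = int(fs * win_s), int(fs * hop_s)
    feats = []
    for start in range(0, len(accel) - win + 1, hop):
        seg = accel[start:start + win]
        mean = seg.mean(axis=0)
        std = seg.std(axis=0)
        rms = np.sqrt((seg ** 2).mean(axis=0))
        feats.append(np.concatenate([mean, std, rms]))
    return np.array(feats)  # shape: (n_windows, 9)

accel = np.random.default_rng(0).normal(size=(25 * 60, 3))  # 1 min at 25 Hz
X = window_features(accel)
```

Each row of `X` would then be scored by a trained classifier to flag likely eating moments.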


Paper in ISWC 2015: “Predicting Daily Activities from Egocentric Images Using Deep Learning”

September 7th, 2015 Irfan Essa Posted in Activity Recognition, Daniel Castro, Gregory Abowd, Henrik Christensen, ISWC, Machine Learning, Papers, Steven Hickson, Ubiquitous Computing, Vinay Bettadapura

Paper

  • D. Castro, S. Hickson, V. Bettadapura, E. Thomaz, G. Abowd, H. Christensen, and I. Essa (2015), “Predicting Daily Activities from Egocentric Images Using Deep Learning,” in Proceedings of International Symposium on Wearable Computers (ISWC), 2015. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{2015-Castro-PDAFEIUDL,
      arxiv     = {http://arxiv.org/abs/1510.01576},
      author    = {Daniel Castro and Steven Hickson and Vinay Bettadapura and Edison Thomaz and Gregory Abowd and Henrik Christensen and Irfan Essa},
      booktitle = {Proceedings of International Symposium on Wearable Computers (ISWC)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Castro-PDAFEIUDL.pdf},
      title     = {Predicting Daily Activities from Egocentric Images Using Deep Learning},
      url       = {http://www.cc.gatech.edu/cpl/projects/dailyactivities/},
      year      = {2015}
    }

Abstract

We present a method to analyze images taken from a passive egocentric wearable camera along with contextual information, such as time and day of the week, to learn and predict the everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person’s activity across the 19 activity classes. We also demonstrate some promising results from two additional users by fine-tuning the classifier with one day of training data.
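The late-fusion idea, combining per-class probabilities from an image model with those from a context model, can be sketched as a weighted average. Note the paper's ensemble is learned; the fixed weight and the three-class scores below are purely illustrative stand-ins.

```python
import numpy as np

def late_fusion(p_image, p_context, w=0.7):
    """Fuse per-class probabilities from an image model and a context
    model by weighted averaging, renormalized to sum to one."""
    fused = w * p_image + (1 - w) * p_context
    return fused / fused.sum()

p_img = np.array([0.4, 0.35, 0.25])  # e.g., CNN softmax over 3 activities
p_ctx = np.array([0.1, 0.8, 0.1])    # e.g., a time-of-day prior
fused = late_fusion(p_img, p_ctx)
```

Here the context prior flips the decision: the image model alone favors class 0, but the fused distribution favors class 1, illustrating how context can correct visually ambiguous frames.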


Paper in WACV (2015): “Semantic Instance Labeling Leveraging Hierarchical Segmentation”

January 6th, 2015 Irfan Essa Posted in Computer Vision, Henrik Christensen, PAMI/ICCV/CVPR/ECCV, Papers, Robotics, Steven Hickson

Paper

  • S. Hickson, I. Essa, and H. Christensen (2015), “Semantic Instance Labeling Leveraging Hierarchical Segmentation,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{2015-Hickson-SILLHS,
      author    = {Steven Hickson and Irfan Essa and Henrik Christensen},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.147},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Hickson-SILLHS.pdf},
      publisher = {IEEE Computer Society},
      title     = {Semantic Instance Labeling Leveraging Hierarchical Segmentation},
      year      = {2015}
    }

Abstract

Most approaches for indoor RGBD semantic labeling focus on using pixels or superpixels to train a classifier. In this paper, we implement a higher-level segmentation using a hierarchy of superpixels to obtain a better segmentation for training our classifier. By focusing on meaningful segments that conform more directly to objects, regardless of size, we train a random forest of decision trees as a classifier using simple features such as the 3D size, LAB color histogram, width, height, and shape as specified by a histogram of surface normals. We test our method on the NYU V2 depth dataset, a challenging dataset of cluttered indoor environments. Our experiments show that our method achieves state-of-the-art results on both the general semantic labeling introduced by the dataset (floor, structure, furniture, and objects) and a more object-specific semantic labeling. We show that training a classifier on a segmentation from a hierarchy of superpixels yields better results than training directly on superpixels, patches, or pixels as in previous work.
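The per-segment descriptor can be sketched from the cues the abstract names: 3D size, a LAB color histogram, and a histogram of surface normals as a shape cue. The bin counts and the elevation-angle shape descriptor below are illustrative choices, not the paper's exact features.

```python
import numpy as np

def segment_features(points, colors_lab, normals, bins=8):
    """Assemble a fixed-length descriptor for one segment from its 3-D
    points, LAB colors, and unit surface normals (illustrative sketch)."""
    extent = points.max(axis=0) - points.min(axis=0)      # 3-D size
    color_hist, _ = np.histogramdd(
        colors_lab, bins=bins // 2,
        range=[(0, 100), (-128, 127), (-128, 127)])       # L, a, b ranges
    color_hist = color_hist.ravel() / max(color_hist.sum(), 1)
    elev = np.arccos(np.clip(normals[:, 2], -1, 1))       # normal tilt angle
    shape_hist, _ = np.histogram(elev, bins=bins, range=(0, np.pi))
    shape_hist = shape_hist / max(shape_hist.sum(), 1)
    return np.concatenate([extent, color_hist, shape_hist])

rng = np.random.default_rng(0)
points = rng.normal(size=(200, 3))                        # a segment's points
colors = np.column_stack([rng.uniform(0, 100, 200),
                          rng.uniform(-128, 127, 200),
                          rng.uniform(-128, 127, 200)])
v = rng.normal(size=(200, 3))
normals = v / np.linalg.norm(v, axis=1, keepdims=True)    # unit normals
desc = segment_features(points, colors, normals)          # length 3 + 64 + 8
```

A random forest would then be trained on one such vector per segment of the superpixel hierarchy.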


Paper in IEEE WACV (2015): “Finding Temporally Consistent Occlusion Boundaries using Scene Layout”

January 6th, 2015 Irfan Essa Posted in Computational Photography and Video, Computer Vision, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza

Paper

  • S. H. Raza, A. Humayun, M. Grundmann, D. Anderson, and I. Essa (2015), “Finding Temporally Consistent Occlusion Boundaries using Scene Layout,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{2015-Raza-FTCOBUSL,
      author    = {Syed Hussain Raza and Ahmad Humayun and Matthias Grundmann and David Anderson and Irfan Essa},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.141},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Raza-FTCOBUSL.pdf},
      publisher = {IEEE Computer Society},
      title     = {Finding Temporally Consistent Occlusion Boundaries using Scene Layout},
      year      = {2015}
    }

Abstract

We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatiotemporal edge being an occlusion boundary using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities using a random forest. Then, we temporally smooth boundaries to remove temporal inconsistencies in occlusion boundary estimation. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by utilizing causality, redundancy in videos, and the semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries of over 30 videos (∼5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much-needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in videos of dynamic scenes.
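As a much-simplified stand-in for the temporal smoothing step, consider exponentially smoothing the per-frame occlusion probability of a single tracked boundary edge; the smoothing factor is an illustrative parameter, not the paper's learned model.

```python
def smooth_boundary_probs(probs, alpha=0.5):
    """Exponential temporal smoothing of a sequence of per-frame
    occlusion-boundary probabilities for one edge (toy illustration)."""
    out = [probs[0]]
    for p in probs[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

# A one-frame dropout (1.0 -> 0.0 -> 1.0) is damped rather than trusted.
smoothed = smooth_boundary_probs([1.0, 0.0, 1.0])
```

The point of the toy example: a boundary that flickers off for a single frame keeps a substantial probability after smoothing, which is the kind of temporal inconsistency the paper's pipeline removes.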


Paper in IEEE WACV (2015): “Leveraging Context to Support Automated Food Recognition in Restaurants”

January 6th, 2015 Irfan Essa Posted in Activity Recognition, Computer Vision, Edison Thomaz, First Person Computing, Gregory Abowd, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Papers, Ubiquitous Computing, Vinay Bettadapura

Paper

  • V. Bettadapura, E. Thomaz, A. Parnami, G. Abowd, and I. Essa (2015), “Leveraging Context to Support Automated Food Recognition in Restaurants,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2015-Bettadapura-LCSAFRR,
      arxiv     = {http://arxiv.org/abs/1510.02078},
      author    = {Vinay Bettadapura and Edison Thomaz and Aman Parnami and Gregory Abowd and Irfan Essa},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.83},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-LCSAFRR.pdf},
      publisher = {IEEE Computer Society},
      title     = {Leveraging Context to Support Automated Food Recognition in Restaurants},
      url       = {http://www.vbettadapura.com/egocentric/food/},
      year      = {2015}
    }


Abstract

The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant available online, coupled with state-of-the-art computer vision techniques, to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants’ online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican, and Thai).
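One way location context helps, sketched here in a toy form: once you know which restaurant the photo was taken in, you can restrict the classifier's hypotheses to dishes on that restaurant's menu. The dish names and scores below are made up for illustration.

```python
def classify_with_menu(scores, menu):
    """Pick the highest-scoring dish, restricted to the menu of the
    restaurant the photo was taken in (toy sketch of location context)."""
    on_menu = {dish: s for dish, s in scores.items() if dish in menu}
    pool = on_menu if on_menu else scores  # fall back if no overlap
    return max(pool, key=pool.get)

scores = {"pad thai": 0.30, "spaghetti": 0.45, "green curry": 0.25}
thai_menu = {"pad thai", "green curry", "tom yum"}
best = classify_with_menu(scores, thai_menu)
```

Without context the classifier would answer "spaghetti"; knowing the photo came from a Thai restaurant prunes that hypothesis and yields "pad thai".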


Paper in WACV (2015): “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices”

January 6th, 2015 Irfan Essa Posted in Activity Recognition, Caroline Pantofaru, Computer Vision, First Person Computing, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Papers, Vinay Bettadapura

Paper

  • V. Bettadapura, I. Essa, and C. Pantofaru (2015), “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. (Best Paper Award) [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2015-Bettadapura-EFLUFPD,
      arxiv     = {http://arxiv.org/abs/1510.02073},
      author    = {Vinay Bettadapura and Irfan Essa and Caroline Pantofaru},
      awards    = {(Best Paper Award)},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.89},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-EFLUFPD.pdf},
      publisher = {IEEE Computer Society},
      title     = {Egocentric Field-of-View Localization Using First-Person Point-of-View Devices},
      url       = {http://www.vbettadapura.com/egocentric/localization/},
      year      = {2015}
    }

Abstract

We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person’s field-of-view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the first-person’s head orientation information obtained using the device sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions.
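The sensor-refinement step can be sketched as follows: keep only the reference-corpus matches whose known viewing direction agrees with the wearer's compass heading, then take the best survivor. The match fields, scores, and tolerance here are illustrative, not the paper's actual matching machinery.

```python
def filter_matches_by_heading(matches, heading_deg, tol_deg=30.0):
    """Keep corpus matches whose viewing direction agrees with the
    device's heading (within a tolerance), then return the top match."""
    def ang_diff(a, b):
        # Smallest absolute difference between two compass angles.
        return abs((a - b + 180.0) % 360.0 - 180.0)
    ok = [m for m in matches if ang_diff(m["heading"], heading_deg) <= tol_deg]
    pool = ok if ok else matches  # fall back if the sensor disagrees with all
    return max(pool, key=lambda m: m["score"])

matches = [
    {"id": "ref_a", "score": 0.9, "heading": 170.0},  # strong, but wrong way
    {"id": "ref_b", "score": 0.7, "heading": 10.0},
]
best = filter_matches_by_heading(matches, heading_deg=355.0)
```

A visually stronger match facing the opposite direction is rejected because the head-orientation sensor says the wearer was looking roughly north, which is the kind of disambiguation the paper's refinement provides.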


Four Papers at IEEE Winter Conference on Applications of Computer Vision (WACV 2015)

January 5th, 2015 Irfan Essa Posted in Computational Photography and Video, Computer Vision, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza, Steven Hickson, Vinay Bettadapura

Four papers accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2015. See you at Waikoloa Beach, Hawaii!

  • V. Bettadapura, E. Thomaz, A. Parnami, G. Abowd, and I. Essa (2015), “Leveraging Context to Support Automated Food Recognition in Restaurants,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2015-Bettadapura-LCSAFRR,
      arxiv     = {http://arxiv.org/abs/1510.02078},
      author    = {Vinay Bettadapura and Edison Thomaz and Aman Parnami and Gregory Abowd and Irfan Essa},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.83},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-LCSAFRR.pdf},
      publisher = {IEEE Computer Society},
      title     = {Leveraging Context to Support Automated Food Recognition in Restaurants},
      url       = {http://www.vbettadapura.com/egocentric/food/},
      year      = {2015}
    }
  • S. Hickson, I. Essa, and H. Christensen (2015), “Semantic Instance Labeling Leveraging Hierarchical Segmentation,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{2015-Hickson-SILLHS,
      author    = {Steven Hickson and Irfan Essa and Henrik Christensen},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.147},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Hickson-SILLHS.pdf},
      publisher = {IEEE Computer Society},
      title     = {Semantic Instance Labeling Leveraging Hierarchical Segmentation},
      year      = {2015}
    }
  • S. H. Raza, A. Humayun, M. Grundmann, D. Anderson, and I. Essa (2015), “Finding Temporally Consistent Occlusion Boundaries using Scene Layout,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{2015-Raza-FTCOBUSL,
      author    = {Syed Hussain Raza and Ahmad Humayun and Matthias Grundmann and David Anderson and Irfan Essa},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.141},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Raza-FTCOBUSL.pdf},
      publisher = {IEEE Computer Society},
      title     = {Finding Temporally Consistent Occlusion Boundaries using Scene Layout},
      year      = {2015}
    }
  • V. Bettadapura, I. Essa, and C. Pantofaru (2015), “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. (Best Paper Award) [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2015-Bettadapura-EFLUFPD,
      arxiv     = {http://arxiv.org/abs/1510.02073},
      author    = {Vinay Bettadapura and Irfan Essa and Caroline Pantofaru},
      awards    = {(Best Paper Award)},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.89},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-EFLUFPD.pdf},
      publisher = {IEEE Computer Society},
      title     = {Egocentric Field-of-View Localization Using First-Person Point-of-View Devices},
      url       = {http://www.vbettadapura.com/egocentric/localization/},
      year      = {2015}
    }

The last of these also won the Best Paper Award (see http://wacv2015.org/). More details coming soon.



Paper in M2CAI 2014: “Video Based Assessment of OSATS Using Sequential Motion Textures”

September 14th, 2014 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Computer Vision, Medical, MICCAI, Papers, Thomas Ploetz, Vinay Bettadapura, Yachna Sharma

Paper

  • Y. Sharma, V. Bettadapura, T. Ploetz, N. Hammerla, S. Mellor, R. McNaney, P. Olivier, S. Deshmukh, A. Mccaskie, and I. Essa (2014), “Video Based Assessment of OSATS Using Sequential Motion Textures,” in Proceedings of Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), 2014. (Best Paper Honorable Mention Award) [PDF] [BIBTEX]
    @InProceedings{2014-Sharma-VBAOUSMT,
      author    = {Yachna Sharma and Vinay Bettadapura and Thomas Ploetz and Nils Hammerla and Sebastian Mellor and Roisin McNaney and Patrick Olivier and Sandeep Deshmukh and Andrew Mccaskie and Irfan Essa},
      awards    = {(Best Paper Honorable Mention Award)},
      booktitle = {Proceedings of Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2014-Sharma-VBAOUSMT.pdf},
      title     = {Video Based Assessment of OSATS Using Sequential Motion Textures},
      year      = {2014}
    }

Abstract

A fully automated framework for video-based surgical skill assessment is presented that incorporates the sequential and qualitative aspects of surgical motion in a data-driven manner. The Objective Structured Assessment of Technical Skills (OSATS) assessment is replicated, which provides both an overall and an in-detail evaluation of the basic suturing skills required of surgeons. Video analysis techniques are introduced that incorporate sequential motion aspects into motion textures. Significant performance improvement over standard bag-of-words and motion analysis approaches is demonstrated. The framework is evaluated in a case study that involved medical students with varying levels of expertise performing basic surgical tasks in a surgical training lab setting.
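The "sequential" part of the idea, describing temporal windows of a video separately rather than pooling the whole video into one bag, can be sketched in a simplified form; the window count and per-window statistics below are illustrative, not the paper's motion-texture construction.

```python
import numpy as np

def sequential_window_features(frame_feats, n_windows=4):
    """Split per-frame motion features (n_frames, d) into sequential
    windows and describe each window separately, preserving temporal
    order (a simplified take on sequential motion descriptors)."""
    windows = np.array_split(frame_feats, n_windows)
    return np.concatenate([np.concatenate([w.mean(axis=0), w.std(axis=0)])
                           for w in windows])

feats = np.random.default_rng(1).normal(size=(120, 5))  # 120 frames, 5 features
smt_desc = sequential_window_features(feats)            # 4 windows x (5 + 5)
```

Pooling over the whole video would lose the order of surgical gestures; keeping one descriptor per window retains it, which is the property the paper's motion textures exploit.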


Paper in CVPR 2014 “Efficient Hierarchical Graph-Based Segmentation of RGBD Videos”

June 22nd, 2014 Irfan Essa Posted in Computer Vision, Henrik Christensen, Papers, Steven Hickson

  • S. Hickson, S. Birchfield, I. Essa, and H. Christensen (2014), “Efficient Hierarchical Graph-Based Segmentation of RGBD Videos,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014. [PDF] [WEBSITE] [BIBTEX]
    @InProceedings{2014-Hickson-EHGSRV,
      author    = {Steven Hickson and Stan Birchfield and Irfan Essa and Henrik Christensen},
      booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      month     = {June},
      organization = {IEEE Computer Society},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2014-Hickson-EHGSRV.pdf},
      title     = {Efficient Hierarchical Graph-Based Segmentation of RGBD Videos},
      url       = {http://www.cc.gatech.edu/cpl/projects/4dseg},
      year      = {2014}
    }

Abstract

We present an efficient and scalable algorithm for segmenting 3D RGBD point clouds by combining depth, color, and temporal information using a multistage, hierarchical graph-based approach. Our algorithm processes a moving window over several point clouds to group similar regions over a graph, resulting in an initial over-segmentation. These regions are then merged to yield a dendrogram using agglomerative clustering via a minimum spanning tree algorithm. Bipartite graph matching at a given level of the hierarchical tree yields the final segmentation of the point clouds by maintaining region identities over arbitrarily long periods of time. We show that a multistage segmentation with depth then color yields better results than a linear combination of depth and color. Due to its incremental processing, our algorithm can process videos of any length in a streaming pipeline. The algorithm’s ability to produce robust, efficient segmentation is demonstrated with numerous experimental results on challenging sequences from our own as well as public RGBD data sets.
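The minimum-spanning-tree flavor of agglomerative merging can be sketched with a union-find structure: process region-to-region edges in ascending order of dissimilarity and merge until a threshold, which corresponds to cutting the dendrogram at one level. The edge weights and threshold here are arbitrary illustrative values.

```python
class UnionFind:
    """Disjoint-set structure with path compression for region merging."""
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # compress path
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def merge_regions(n_regions, edges, threshold):
    """Kruskal-style merging: take (weight, a, b) edges in ascending
    order and merge regions until weights exceed the threshold."""
    uf = UnionFind(n_regions)
    for w, a, b in sorted(edges):
        if w > threshold:
            break
        uf.union(a, b)
    return [uf.find(i) for i in range(n_regions)]

# Four initial regions; regions 0-1-2 are similar, region 3 is distinct.
edges = [(0.1, 0, 1), (0.2, 1, 2), (0.9, 2, 3)]
labels = merge_regions(4, edges, threshold=0.5)
```

Raising the threshold corresponds to moving up the dendrogram: more edges are accepted and fewer, larger regions survive.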
