Dagstuhl Workshop 2015: “Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training”

September 13th, 2015 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Computer Vision, Human Factors, Modeling and Animation, Presentations

Participated in the Dagstuhl Workshop on “Modeling and Simulation of Sport Games, Sport Movements, and Adaptations to Training” at Dagstuhl Castle, September 13–16, 2015.

Motivation

Computational modeling and simulation are essential for analyzing human motion and interaction in sports science. Applications range from game analysis and issues in training science, such as the training load-adaptation relationship and motor control and learning, to biomechanical analysis. The motivation of this seminar is to enable an interdisciplinary exchange between sports scientists and computer scientists to advance modeling and simulation technologies in selected fields of application: sport games, sport movements, and adaptations to training. In addition, contributions to the epistemic foundations of modeling and simulation are welcome.

Source: Schloss Dagstuhl Seminar Homepage

Past Seminars on this topic include


Paper in Ubicomp 2015: “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing”

September 8th, 2015 Irfan Essa Posted in ACM UIST/CHI, Activity Recognition, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Mobile Computing, Papers, UBICOMP, Ubiquitous Computing

Paper

  • E. Thomaz, I. Essa, and G. D. Abowd (2015), “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2015. [PDF] [BIBTEX]
    @InProceedings{    2015-Thomaz-PAREMWWIS,
      author  = {Edison Thomaz and Irfan Essa and Gregory D. Abowd},
      booktitle  = {Proceedings of ACM International Conference on
          Ubiquitous Computing (UBICOMP)},
      month    = {September},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-PAREMWWIS.pdf},
      title    = {A Practical Approach for Recognizing Eating Moments
          with Wrist-Mounted Inertial Sensing},
      year    = {2015}
    }

Abstract

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday usage, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% Precision, 88.8% Recall) and 71.3% (65.2% Precision, 78.6% Recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
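As a rough, hypothetical illustration of this kind of pipeline (sliding windows over 3-axis accelerometer data, simple per-window statistics, and an off-the-shelf classifier), the sketch below shows one way such a detector could be structured in Python; the window length, features, classifier, and variable names are assumptions for illustration, not the configuration used in the paper. The final lines also check that the reported F-scores follow from the stated precision and recall values.

    # Hypothetical sketch only: window length, features, and classifier are
    # illustrative assumptions, not the paper's exact configuration.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, fs=25, win_s=6.0):
        """Summarize each fixed-length window of (N, 3) accelerometer data
        with simple per-axis statistics."""
        win = int(fs * win_s)
        feats = []
        for start in range(0, len(acc) - win + 1, win):
            w = acc[start:start + win]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                         w.min(axis=0), w.max(axis=0)]))
        return np.array(feats)

    # Train on labeled lab data, then flag eating vs. non-eating windows in
    # free-living data (acc_train, window_labels, acc_free are placeholders):
    # clf = RandomForestClassifier(n_estimators=100)
    # clf.fit(window_features(acc_train), window_labels)
    # eating_windows = clf.predict(window_features(acc_free))

    # Sanity check: the reported F-scores follow from precision and recall.
    def f1(p, r):
        return 2 * p * r / (p + r)

    print(f1(0.667, 0.888))  # ~0.762, i.e. the reported 76.1% up to rounding
    print(f1(0.652, 0.786))  # ~0.713, i.e. the reported 71.3%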


Paper in ACM IUI 2015: “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study”

April 1st, 2015 Irfan Essa Posted in ACM ICMI/IUI, Activity Recognition, Audio Analysis, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Multimedia

Paper

  • E. Thomaz, C. Zhang, I. Essa, and G. D. Abowd (2015), “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study,” in Proceedings of ACM Conference on Intelligent User Interfaces (IUI), 2015. (Best Short Paper Award) [PDF] [BIBTEX]
    @InProceedings{    2015-Thomaz-IMEARWSFASFS,
      author  = {Edison Thomaz and Cheng Zhang and Irfan Essa and
          Gregory D. Abowd},
      awards  = {(Best Short Paper Award)},
      booktitle  = {Proceedings of ACM Conference on Intelligent User
          Interfaces (IUI)},
      month    = {May},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-IMEARWSFASFS.pdf},
      title    = {Inferring Meal Eating Activities in Real World
          Settings from Ambient Sounds: A Feasibility Study},
      year    = {2015}
    }

Abstract

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
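A minimal sketch of how audio-based meal detection of this sort might be prototyped with off-the-shelf tools: pooled MFCC statistics per audio segment feeding a standard classifier. The feature choice, segment length, classifier, and file/variable names here are assumptions for illustration and are not taken from the paper.

    # Hypothetical sketch: MFCC statistics per segment plus a generic
    # classifier; not the paper's exact features or model.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def segment_features(path, sr=16000, seg_s=10.0, n_mfcc=13):
        """Summarize each seg_s-second chunk of an audio file by the mean
        and standard deviation of its MFCC frames."""
        y, sr = librosa.load(path, sr=sr)
        seg = int(sr * seg_s)
        feats = []
        for start in range(0, len(y) - seg + 1, seg):
            mfcc = librosa.feature.mfcc(y=y[start:start + seg], sr=sr,
                                        n_mfcc=n_mfcc)
            feats.append(np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)]))
        return np.array(feats)

    # Train on segments labeled eating / non-eating, then score new recordings
    # ("train.wav" and "wrist_audio.wav" are placeholder file names):
    # X_train = segment_features("train.wav")
    # clf = RandomForestClassifier(n_estimators=200).fit(X_train, segment_labels)
    # meal_segments = clf.predict(segment_features("wrist_audio.wav"))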


Paper in M2CAI 2014: “Video Based Assessment of OSATS Using Sequential Motion Textures”

September 14th, 2014 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Computer Vision, Medical, MICCAI, Papers, Thomas Ploetz, Vinay Bettadapura, Yachna Sharma

Paper

  • Y. Sharma, V. Bettadapura, T. Ploetz, N. Hammerla, S. Mellor, R. McNaney, P. Olivier, S. Deshmukh, A. McCaskie, and I. Essa (2014), “Video Based Assessment of OSATS Using Sequential Motion Textures,” in Proceedings of Workshop on Modeling and Monitoring of Computer Assisted Interventions (M2CAI), 2014. (Best Paper Honorable Mention Award) [PDF] [BIBTEX]
    @InProceedings{    2014-Sharma-VBAOUSMT,
      author  = {Yachna Sharma and Vinay Bettadapura and Thomas
          Ploetz and Nils Hammerla and Sebastian Mellor and
          Roisin McNaney and Patrick Olivier and Sandeep
          Deshmukh and Andrew McCaskie and Irfan Essa},
      awards  = {(Best Paper Honorable Mention Award)},
      booktitle  = {{Proceedings of Workshop on Modeling and Monitoring
          of Computer Assisted Interventions (M2CAI)}},
      month    = {September},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2014-Sharma-VBAOUSMT.pdf},
      title    = {Video Based Assessment of OSATS Using Sequential
          Motion Textures},
      year    = {2014}
    }

Abstract

A fully automated framework for video-based surgical skill assessment is presented that incorporates the sequential and qualitative aspects of surgical motion in a data-driven manner. The Objective Structured Assessment of Technical Skills (OSATS) assessment is replicated, providing both an overall and a detailed evaluation of the basic suturing skills required of surgeons. Video analysis techniques are introduced that incorporate sequential motion aspects into motion textures. Significant performance improvement over standard bag-of-words and motion analysis approaches is demonstrated. The framework is evaluated in a case study that involved medical students with varying levels of expertise performing basic surgical tasks in a surgical training lab setting.


PhD Thesis by Zahoor Zafrulla: “Automatic recognition of American Sign Language Classifiers”

May 2nd, 2014 Irfan Essa Posted in Affective Computing, Behavioral Imaging, Face and Gesture, PhD, Thad Starner, Zahoor Zafrulla

Title: Automatic recognition of American Sign Language Classifiers

Zahoor Zafrulla
School of Interactive Computing
College of Computing
Georgia Institute of Technology
http://www.cc.gatech.edu/grads/z/zahoor/

Committee:

Dr. Thad Starner (Advisor, School of Interactive Computing, Georgia Tech)
Dr. Irfan Essa (Co-Advisor, School of Interactive Computing, Georgia Tech)
Dr. Jim Rehg (School of Interactive Computing, Georgia Tech)
Dr. Harley Hamilton (School of Interactive Computing, Georgia Tech)
Dr. Vassilis Athitsos (Computer Science and Engineering Department, University of Texas at Arlington)

Summary:

Automatically recognizing classifier-based grammatical structures of American Sign Language (ASL) is a challenging problem. Classifiers in ASL utilize surrogate hand shapes for people or “classes” of objects and provide information about their location, movement and appearance. In the past researchers have focused on recognition of finger spelling, isolated signs, facial expressions and interrogative words like WH-questions (e.g. Who, What, Where, and When). Challenging problems such as recognition of ASL sentences and classifier-based grammatical structures remain relatively unexplored in the field of ASL recognition.

One application of recognition of classifiers is toward creating educational games to help young deaf children acquire language skills. Previous work developed CopyCat, an educational ASL game that requires children to engage in a progressively more difficult expressive signing task as they advance through the game.

We have shown that by leveraging context we can use verification, in place of recognition, to boost machine performance for determining if the signed responses in an expressive signing task, like in the CopyCat game, are correct or incorrect. We have demonstrated that the quality of a machine verifier’s ability to identify the boundary of the signs can be improved by using a novel two-pass technique that combines signed input in both forward and reverse directions. Additionally, we have shown that we can reduce CopyCat’s dependency on custom manufactured hardware by using an off-the-shelf Microsoft Kinect depth camera to achieve similar verification performance. Finally, we show how we can extend our ability to recognize sign language by leveraging depth maps to develop a method using improved hand detection and hand shape classification to recognize selected classifier-based grammatical structures of ASL.


Paper in IEEE ISBI 2014: “Automated Surgical OSATS Prediction from Videos”

April 28th, 2014 Irfan Essa Posted in Behavioral Imaging, Health Systems, Medical, Papers, Thomas Ploetz, Yachna Sharma

  • Y. Sharma, T. Ploetz, N. Hammerla, S. Mellor, R. McNaney, P. Olivier, S. Deshmukh, A. McCaskie, and I. Essa (2014), “Automated Surgical OSATS Prediction from Videos,” in Proceedings of IEEE International Symposium on Biomedical Imaging, Beijing, CHINA, 2014. [PDF] [BIBTEX]
    @InProceedings{    2014-Sharma-ASOPFV,
      address  = {Beijing, CHINA},
      author  = {Yachna Sharma and Thomas Ploetz and Nils Hammerla
          and Sebastian Mellor and Roisin McNaney and Patrick
          Olivier and Sandeep Deshmukh and Andrew McCaskie and
          Irfan Essa},
      booktitle  = {{Proceedings of IEEE International Symposium on
          Biomedical Imaging}},
      month    = {April},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2014-Sharma-ASOPFV.pdf},
      title    = {Automated Surgical {OSATS} Prediction from Videos},
      year    = {2014}
    }

Abstract

The assessment of surgical skills is an essential part of medical training. The prevalent manual evaluations by expert surgeons are time-consuming, and their outcomes often vary substantially from one observer to another. We present a video-based framework for automated evaluation of surgical skills based on the Objective Structured Assessment of Technical Skills (OSATS) criteria. We encode the motion dynamics via frame kernel matrices, and represent the motion granularity by texture features. Linear discriminant analysis is used to derive a reduced-dimensionality feature space, followed by linear regression to predict OSATS skill scores. We achieve statistically significant correlation (p-value < 0.01) between the ground truth (given by domain experts) and the OSATS scores predicted by our framework.
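The second half of the abstract describes a concrete pipeline: texture features per video, dimensionality reduction with linear discriminant analysis, and linear regression onto the OSATS scores. A minimal sketch of that stage using scikit-learn is shown below; the feature extraction itself (frame kernel matrices and texture features) is omitted, and the variable names are placeholders rather than quantities from the paper.

    # Minimal sketch of the LDA + linear regression stage; X is assumed to be
    # one texture-feature vector per video, skill_class are expertise labels,
    # and osats are expert-assigned OSATS scores. Feature extraction omitted.
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LinearRegression
    from scipy.stats import pearsonr

    def predict_osats(X_train, skill_class_train, osats_train, X_test):
        # LDA projects the high-dimensional texture features into a
        # low-dimensional space that separates expertise levels.
        lda = LinearDiscriminantAnalysis()
        Z_train = lda.fit_transform(X_train, skill_class_train)
        # Linear regression in the reduced space predicts continuous scores.
        reg = LinearRegression().fit(Z_train, osats_train)
        return reg.predict(lda.transform(X_test))

    # Evaluation as in the abstract: correlation between predicted and
    # expert scores, considered significant if the p-value is below 0.01.
    # r, p = pearsonr(predicted_scores, expert_scores)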


Paper in IEEE CVPR 2013 “Decoding Children’s Social Behavior”

June 27th, 2013 Irfan Essa Posted in Affective Computing, Behavioral Imaging, Denis Lantsman, Gregory Abowd, James Rehg, PAMI/ICCV/CVPR/ECCV, Papers, Thomas Ploetz

  • J. M. Rehg, G. D. Abowd, A. Rozga, M. Romero, M. A. Clements, S. Sclaroff, I. Essa, O. Y. Ousley, Y. Li, C. Kim, H. Rao, J. C. Kim, L. L. Presti, J. Zhang, D. Lantsman, J. Bidwell, and Z. Ye (2013), “Decoding Children’s Social Behavior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @InProceedings{    2013-Rehg-DCSB,
      author  = {James M. Rehg and Gregory D. Abowd and Agata Rozga
          and Mario Romero and Mark A. Clements and Stan
          Sclaroff and Irfan Essa and Opal Y. Ousley and Yin
          Li and Chanho Kim and Hrishikesh Rao and Jonathan C.
          Kim and Liliana Lo Presti and Jianming Zhang and
          Denis Lantsman and Jonathan Bidwell and Zhefan Ye},
      booktitle  = {{Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)}},
      doi    = {10.1109/CVPR.2013.438},
      month    = {June},
      organization  = {IEEE Computer Society},
      pdf    = {http://www.cc.gatech.edu/~rehg/Papers/Rehg_CVPR13.pdf},
      title    = {Decoding Children's Social Behavior},
      url    = {http://www.cbi.gatech.edu/mmdb/},
      year    = {2013}
    }

Abstract

We introduce a new problem domain for activity recognition: the analysis of children’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.

Full database available from http://www.cbi.gatech.edu/mmdb/

via IEEE Xplore – Decoding Children’s Social Behavior.


Paper in IEEE CVPR 2013 “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition”

June 27th, 2013 Irfan Essa Posted in Activity Recognition, Behavioral Imaging, Grant Schindler, PAMI/ICCV/CVPR/ECCV, Papers, Sports Visualization, Thomas Ploetz, Vinay Bettadapura

  • V. Bettadapura, G. Schindler, T. Ploetz, and I. Essa (2013), “Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{    2013-Bettadapura-ABDDTSIAR,
      arxiv    = {http://arxiv.org/abs/1510.02071},
      author  = {Vinay Bettadapura and Grant Schindler and Thomas
          Ploetz and Irfan Essa},
      booktitle  = {{Proceedings of IEEE Conference on Computer Vision
          and Pattern Recognition (CVPR)}},
      doi    = {10.1109/CVPR.2013.338},
      month    = {June},
      organization  = {IEEE Computer Society},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2013-Bettadapura-ABDDTSIAR.pdf},
      title    = {Augmenting Bag-of-Words: Data-Driven Discovery of
          Temporal and Structural Information for Activity
          Recognition},
      url    = {http://www.cc.gatech.edu/cpl/projects/abow/},
      year    = {2013}
    }

Abstract

We present data-driven techniques to augment Bag of Words (BoW) models, which allow for more robust modeling and recognition of complex long-term activities, especially when the structure and topology of the activities are not known a priori. Our approach specifically addresses the limitations of standard BoW approaches, which fail to represent the underlying temporal and causal information that is inherent in activity streams. We also propose the use of randomly sampled regular expressions to discover and encode patterns in activities. We demonstrate the effectiveness of our approach in experimental evaluations where we successfully recognize activities and detect anomalies in four complex datasets.

via IEEE Xplore – Augmenting Bag-of-Words: Data-Driven Discovery of Temporal and Structural Information for Activity Recognition.
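As a simplified illustration of why a plain BoW representation loses temporal information and how augmentation can restore some of it, the toy sketch below counts ordered n-grams of discrete event codewords in addition to the usual unigram histogram. The paper's actual augmentation (data-driven temporal patterns and randomly sampled regular expressions) is richer than this simplification, and the event names are made up for the example.

    # Toy illustration: a plain bag of words ignores event order, whereas
    # adding n-gram counts encodes which events follow which.
    from collections import Counter

    def bow(events):
        """Standard bag of words: order-free histogram of event codewords."""
        return Counter(events)

    def augmented_bow(events, n=2):
        """BoW plus ordered n-gram counts up to length n."""
        hist = Counter(events)
        for k in range(2, n + 1):
            hist.update(tuple(events[i:i + k])
                        for i in range(len(events) - k + 1))
        return hist

    seq = ["open_door", "sit_down", "type", "type", "stand_up"]
    print(bow(seq))            # same histogram for any ordering of seq
    print(augmented_bow(seq))  # additionally counts ordered pairs of events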


Paper in AISTATS 2013 “Beyond Sentiment: The Manifold of Human Emotions”

April 29th, 2013 Irfan Essa Posted in AAAI/IJCAI/UAI, Behavioral Imaging, Computational Journalism, Machine Learning, Papers, WWW

  • S. Kim, F. Li, G. Lebanon, and I. A. Essa (2013), “Beyond Sentiment: The Manifold of Human Emotions,” in Proceedings of AISTATS, 2013. [PDF] [BIBTEX]
    @InProceedings{    2012-Kim-BSMHE,
      author  = {Seungyeon Kim and Fuxin Li and Guy Lebanon and
          Irfan A. Essa},
      booktitle  = {Proceedings of AISTATS},
      pdf    = {http://arxiv.org/pdf/1202.1568v1},
      title    = {Beyond Sentiment: The Manifold of Human Emotions},
      year    = {2013}
    }

Abstract

Sentiment analysis predicts the presence of positive or negative emotions in a text document. In this paper we consider higher-dimensional extensions of the sentiment concept, which represent a richer set of human emotions. Our approach goes beyond previous work in that our model contains a continuous manifold rather than a finite set of human emotions. We investigate the resulting model, compare it to psychological observations, and explore its predictive capabilities. Besides obtaining significant improvements over a baseline without the manifold, we are also able to visualize different notions of positive sentiment in different domains.

via [arXiv.org 1202.1568] Beyond Sentiment: The Manifold of Human Emotions.
