Paper in UBICOMP 2015: “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing”

September 8th, 2015 Irfan Essa Posted in ACM UIST/CHI, Activity Recognition, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Mobile Computing, Papers, UBICOMP, Ubiquitous Computing

Paper

  • E. Thomaz, I. Essa, and G. D. Abowd (2015), “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2015. [PDF] [BIBTEX]
    @InProceedings{2015-Thomaz-PAREMWWIS,
      author    = {Edison Thomaz and Irfan Essa and Gregory D. Abowd},
      booktitle = {Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-PAREMWWIS.pdf},
      title     = {A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing},
      year      = {2015}
    }

Abstract

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday use, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% precision, 88.8% recall) and 71.3% (65.2% precision, 78.6% recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
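
The sketch below gives a concrete feel for this kind of pipeline: fixed-length windows over the 3-axis signal, simple per-axis statistics, and an off-the-shelf classifier. The sampling rate, window length, feature set, and random-forest model are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch, assuming a windowed statistical-feature design;
    # all parameters below are placeholders, not the paper's settings.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(acc, fs=25, win_s=6):
        """Slice a (N, 3) accelerometer stream into fixed windows and
        compute per-axis mean, std, min, max as a 12-dim feature vector."""
        win = fs * win_s
        feats = []
        for i in range(len(acc) // win):
            w = acc[i * win:(i + 1) * win]
            feats.append(np.concatenate([w.mean(0), w.std(0), w.min(0), w.max(0)]))
        return np.array(feats)

    rng = np.random.default_rng(0)
    acc = rng.standard_normal((15000, 3))    # placeholder wrist accelerometer data
    X = window_features(acc)                 # one feature row per window
    y = rng.integers(0, 2, size=len(X))      # placeholder eating / non-eating labels
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)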


Paper in ACM IUI15: “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study”

April 1st, 2015 Irfan Essa Posted in ACM ICMI/IUI, Activity Recognition, Audio Analysis, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Multimedia

Paper

  • E. Thomaz, C. Zhang, I. Essa, and G. D. Abowd (2015), “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study,” in Proceedings of ACM Conference on Intelligent User Interfaces (IUI), 2015. (Best Short Paper Award) [PDF] [BIBTEX]
    @InProceedings{2015-Thomaz-IMEARWSFASFS,
      author    = {Edison Thomaz and Cheng Zhang and Irfan Essa and Gregory D. Abowd},
      awards    = {(Best Short Paper Award)},
      booktitle = {Proceedings of ACM Conference on Intelligent User Interfaces (IUI)},
      month     = {May},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-IMEARWSFASFS.pdf},
      title     = {Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study},
      year      = {2015}
    }

Abstract

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for an average of 5 hours during one day while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
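
As a rough illustration of this audio-based approach, the sketch below pools MFCC statistics over fixed-length segments and feeds them to a classifier. The segment length, the 13 MFCCs, and the random-forest model are assumptions for illustration, not the study's implementation.

    # Hedged sketch of an ambient-sound eating detector; feature and model
    # choices are illustrative assumptions, not the paper's pipeline.
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def segment_features(audio, sr, seg_s=10.0):
        """Mean and std of 13 MFCCs over consecutive fixed-length segments."""
        seg = int(seg_s * sr)
        feats = []
        for start in range(0, len(audio) - seg + 1, seg):
            m = librosa.feature.mfcc(y=audio[start:start + seg], sr=sr, n_mfcc=13)
            feats.append(np.concatenate([m.mean(axis=1), m.std(axis=1)]))
        return np.array(feats)

    sr = 16000
    audio = np.random.default_rng(0).standard_normal(sr * 300).astype(np.float32)
    X = segment_features(audio, sr)          # one 26-dim row per 10 s segment
    y = np.random.default_rng(1).integers(0, 2, size=len(X))  # placeholder labels
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)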


Paper in ISBI 2014: “Automated Surgical OSATS Prediction from Videos”

April 28th, 2014 Irfan Essa Posted in Behavioral Imaging, Health Systems, Medical, Papers, Thomas Ploetz, Yachna Sharma

  • Y. Sharma, T. Ploetz, N. Hammerla, S. Mellor, R. McNaney, P. Oliver, S. Deshmukh, A. McCaskie, and I. Essa (2014), “Automated Surgical OSATS Prediction from Videos,” in Proceedings of IEEE International Symposium on Biomedical Imaging, Beijing, CHINA, 2014. [PDF] [BIBTEX]
    @InProceedings{2014-Sharma-ASOPFV,
      address   = {Beijing, CHINA},
      author    = {Yachna Sharma and Thomas Ploetz and Nils Hammerla and Sebastian Mellor and Roisin McNaney and Patrick Oliver and Sandeep Deshmukh and Andrew McCaskie and Irfan Essa},
      booktitle = {{Proceedings of IEEE International Symposium on Biomedical Imaging}},
      month     = {April},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2014-Sharma-ASOPFV.pdf},
      title     = {Automated Surgical {OSATS} Prediction from Videos},
      year      = {2014}
    }

Abstract

The assessment of surgical skills is an essential part of medical training. The prevalent manual evaluations by expert surgeons are time-consuming, and their outcomes often vary substantially from one observer to another. We present a video-based framework for automated evaluation of surgical skills based on the Objective Structured Assessment of Technical Skills (OSATS) criteria. We encode the motion dynamics via frame kernel matrices and represent the motion granularity by texture features. Linear discriminant analysis is used to derive a reduced-dimensionality feature space, followed by linear regression to predict OSATS skill scores. We achieve statistically significant correlation (p-value < 0.01) between the ground truth (given by domain experts) and the OSATS scores predicted by our framework.
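
To make the last two stages concrete, the sketch below runs LDA for dimensionality reduction and linear regression for score prediction on mocked texture features, then checks a rank correlation against ground truth. The feature dimensions and the 1-5 score scale are placeholders, not the paper's data or exact evaluation.

    # Hedged sketch of the LDA + linear-regression stages on mocked features.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    X = rng.standard_normal((40, 30))        # placeholder: 40 videos x 30 texture features
    scores = np.tile(np.arange(1, 6), 8)     # placeholder expert OSATS scores (1-5)

    # LDA treats each discrete score as a class to learn a low-dim projection.
    Z = LinearDiscriminantAnalysis(n_components=3).fit_transform(X, scores)
    pred = LinearRegression().fit(Z, scores).predict(Z)

    rho, p = spearmanr(scores, pred)         # correlation with ground-truth scores
    print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")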


Two Ph.D. defenses on the same day. A first for me!

April 2nd, 2014 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Health Systems, PhD, S. Hussain Raza, Students, Yachna Sharma

Today, two of my Ph.D. students defended their dissertations, back to back. Congratulations to both; they are both done.

Thesis title: Surgical Skill Assessment Using Motion Texture Analysis
Student: Yachna Sharma, Ph.D. Candidate in ECE
http://users.ece.gatech.edu/~ysharma3/
Date/Time: 2nd April, 1:00 pm

Thesis title: Temporally Consistent Semantic Segmentation in Videos
Student: S. Hussain Raza, Ph.D. Candidate in ECE
https://sites.google.com/site/shussainraza5/
Date/Time: 2nd April, 1:00 pm

Location: CSIP Library, Room 5186, Centergy One Building


Poster STS 2011: “3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education”

February 2nd, 2011 Irfan Essa Posted in Computational Photography and Video, Eric Sarin, Health Systems, Kihwan Kim, Papers, Uncategorized, William Cooper


  • E. L. Sarin, K. Kim, I. Essa, and W. A. Cooper (2011), “3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education,” in Proceedings of Society of Thoracic Surgeons Annual Meeting, Society of Thoracic Surgeons, 2011. [BLOG] [BIBTEX]
    
    @InCollection{2011-Sarin-3VORUAMCNPESSE,
      author    = {E. L. Sarin and K. Kim and I. Essa and W. A. Cooper},
      blog      = {http://prof.irfanessa.com/2011/02/02/sts-2011/},
      booktitle = {Proceedings of Society of Thoracic Surgeons Annual Meeting},
      month     = {January},
      publisher = {Society of Thoracic Surgeons},
      title     = {3-Dimensional Visualization of the Operating Room Using Advanced Motion Capture: A Novel Paradigm to Expand Simulation-Based Surgical Education},
      type      = {Poster and Video Presentation},
      year      = {2011}
    }

A collaborative project between the School of Interactive Computing, Georgia Institute of Technology, Atlanta, Georgia; the Division of Cardiothoracic Surgery, Emory University School of Medicine, Atlanta, Georgia; and the Inova Heart and Vascular Institute, Fairfax, Virginia. This was a video and poster presentation at the Society of Thoracic Surgeons Annual Meeting in San Diego, CA, January 2011.

Poster for the Society of Thoracic Surgeons Annual Meeting


Paper in MICCAI 2007: “A Boosted Segmentation Method for Surgical Workflow Analysis”

October 21st, 2007 Irfan Essa Posted in Activity Recognition, Health Systems, Medical, MICCAI

  • N. Padoy, T. Blum, I. Essa, H. Feussner, M. O. Berger, and N. Navab (2007), “A Boosted Segmentation Method for Surgical Workflow Analysis,” in Proceedings of International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Brisbane, Australia, 2007. [PDF] [DOI] [BIBTEX]
    @InProceedings{2007-Padoy-BSMSWA,
      address   = {Brisbane, Australia},
      author    = {N. Padoy and T. Blum and I. Essa and H. Feussner and M. O. Berger and N. Navab},
      booktitle = {Proceedings of International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)},
      doi       = {10.1007/978-3-540-75757-3_13},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2007-Padoy-BSMSWA.pdf},
      publisher = {Springer Lecture Notes in Computer Science (LNCS) series},
      title     = {A Boosted Segmentation Method for Surgical Workflow Analysis},
      year      = {2007}
    }

Abstract

As demands on hospital efficiency increase, there is a stronger need for automatic analysis, recovery, and modification of surgical workflows. Even though most previous work has dealt with higher-level and hospital-wide workflow, including issues like document management, workflow is also an important issue within the surgery room. Its study has high potential, e.g., for building context-sensitive operating rooms, evaluating and training surgical staff, optimizing surgeries, and generating automatic reports. In this paper, we propose an approach to segment the surgical workflow into phases based on temporal synchronization of multidimensional state vectors. Our method is evaluated on the example of laparoscopic cholecystectomy, with state vectors representing tool usage during the surgeries. The discriminative power of each instrument with regard to each phase is estimated using AdaBoost. A boosted version of the Dynamic Time Warping (DTW) algorithm is used to create a surgical reference model and to segment a newly observed surgery. Full cross-validation on ten surgeries is performed, and the method is compared to standard DTW and to Hidden Markov Models.
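
The core alignment idea can be sketched as DTW over tool-usage state vectors, with per-instrument weights standing in for the AdaBoost-derived ones. Everything below (instrument count, sequences, uniform weights) is a placeholder for illustration, not the paper's model.

    # Hedged sketch of weighted DTW over binary tool-usage state vectors.
    import numpy as np

    def weighted_dtw(a, b, w):
        """DTW cost between sequences a (n, d) and b (m, d), with
        per-dimension weights w (d,) scaling a weighted L1 distance."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.sum(w * np.abs(a[i - 1] - b[j - 1]))
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    d = 6                                    # placeholder number of instruments
    rng = np.random.default_rng(1)
    ref = rng.integers(0, 2, size=(50, d))   # placeholder reference surgery
    obs = rng.integers(0, 2, size=(60, d))   # placeholder observed surgery
    w = np.ones(d) / d                       # stand-in for AdaBoost-derived weights
    print(weighted_dtw(ref, obs, w))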

Presented at the 10th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), 29 October to 2 November 2007, Brisbane, Australia.



GT Research Horizons — Fall 2003

October 30th, 2003 Irfan Essa Posted in Aware Home, Health Systems, Human Factors, In The News, Intelligent Environments, Research

