Paper in Ubicomp 2015: “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing”

September 8th, 2015 Irfan Essa Posted in ACM UIST/CHI, Activity Recognition, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Mobile Computing, Papers, UBICOMP, Ubiquitous Computing

Paper

  • E. Thomaz, I. Essa, and G. D. Abowd (2015), “A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2015. [PDF] [BIBTEX]
    @InProceedings{2015-Thomaz-PAREMWWIS,
      author    = {Edison Thomaz and Irfan Essa and Gregory D. Abowd},
      booktitle = {Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-PAREMWWIS.pdf},
      title     = {A Practical Approach for Recognizing Eating Moments with Wrist-Mounted Inertial Sensing},
      year      = {2015}
    }

Abstract

Recognizing when eating activities take place is one of the key challenges in automated food intake monitoring. Despite progress over the years, most proposed approaches have been largely impractical for everyday use, requiring multiple on-body sensors or specialized devices such as neck collars for swallow detection. In this paper, we describe the implementation and evaluation of an approach for inferring eating moments based on 3-axis accelerometry collected with a popular off-the-shelf smartwatch. Trained with data collected in a semi-controlled laboratory setting with 20 subjects, our system recognized eating moments in two free-living condition studies (7 participants, 1 day; 1 participant, 31 days), with F-scores of 76.1% (66.7% precision, 88.8% recall) and 71.3% (65.2% precision, 78.6% recall). This work represents a contribution towards the implementation of a practical, automated system for everyday food intake monitoring, with applicability in areas ranging from health research to food journaling.
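The abstract does not spell out the recognition pipeline, but a common pattern for wrist-mounted inertial sensing is to slice the 3-axis accelerometer stream into overlapping windows and compute simple per-window statistics before handing them to a classifier. A minimal sketch of that generic pattern (the window size, feature set, and `window_features` helper are illustrative assumptions, not the paper's implementation):

```python
import math

# Hypothetical sketch: turn a 3-axis accelerometer stream into per-window
# feature vectors, which a classifier (e.g. a random forest) could then
# label as "eating" / "not eating". Not the paper's actual pipeline.

def window_features(samples, window=50, step=25):
    """samples: list of (x, y, z) tuples; returns one feature vector per window."""
    feats = []
    for start in range(0, len(samples) - window + 1, step):
        win = samples[start:start + window]
        vec = []
        for axis in range(3):
            vals = [s[axis] for s in win]
            mean = sum(vals) / len(vals)
            var = sum((v - mean) ** 2 for v in vals) / len(vals)
            vec += [mean, math.sqrt(var)]  # per-axis mean and std dev
        # overall motion energy: mean magnitude of the acceleration vector
        vec.append(sum(math.sqrt(x * x + y * y + z * z) for x, y, z in win) / len(win))
        feats.append(vec)
    return feats

# Toy stream: 100 samples of a resting wrist (gravity on the z axis only)
stream = [(0.0, 0.0, 1.0)] * 100
features = window_features(stream)
print(len(features), len(features[0]))  # 3 windows, 7 features each
```

With 50%-overlapping windows, each eating gesture tends to fall wholly inside at least one window, which is why this windowing scheme is a common default for gesture-scale activities.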


Paper in ISWC 2015: “Predicting Daily Activities from Egocentric Images Using Deep Learning”

September 7th, 2015 Irfan Essa Posted in Activity Recognition, Daniel Castro, Gregory Abowd, Henrik Christensen, ISWC, Machine Learning, Papers, Steven Hickson, Ubiquitous Computing, Vinay Bettadapura

Paper

  • D. Castro, S. Hickson, V. Bettadapura, E. Thomaz, G. Abowd, H. Christensen, and I. Essa (2015), “Predicting Daily Activities from Egocentric Images Using Deep Learning,” in Proceedings of International Symposium on Wearable Computers (ISWC), 2015. [PDF] [WEBSITE] [arXiv] [BIBTEX]
    @InProceedings{2015-Castro-PDAFEIUDL,
      arxiv     = {http://arxiv.org/abs/1510.01576},
      author    = {Daniel Castro and Steven Hickson and Vinay Bettadapura and Edison Thomaz and Gregory Abowd and Henrik Christensen and Irfan Essa},
      booktitle = {Proceedings of International Symposium on Wearable Computers (ISWC)},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Castro-PDAFEIUDL.pdf},
      title     = {Predicting Daily Activities from Egocentric Images Using Deep Learning},
      url       = {http://www.cc.gatech.edu/cpl/projects/dailyactivities/},
      year      = {2015}
    }

Abstract

We present a method to analyze images taken from a passive egocentric wearable camera, along with contextual information such as time of day and day of the week, to learn and predict the everyday activities of an individual. We collected a dataset of 40,103 egocentric images over a 6-month period with 19 activity classes and demonstrate the benefit of state-of-the-art deep learning techniques for learning and predicting daily activities. Classification is conducted using a Convolutional Neural Network (CNN) with a classification method we introduce called a late fusion ensemble. This late fusion ensemble incorporates relevant contextual information and increases our classification accuracy. Our technique achieves an overall accuracy of 83.07% in predicting a person's activity across the 19 activity classes. We also demonstrate promising results for two additional users by fine-tuning the classifier with one day of training data.
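For readers curious what "late fusion" looks like in code: one common realization is to let the image model and a context model each produce per-class probabilities independently, then merge them afterwards. A toy sketch under that assumption (the fixed weights and the `late_fusion` helper are hypothetical, not the paper's ensemble):

```python
# Hypothetical sketch of a late-fusion step: per-class probabilities from
# an image model and from contextual features (time of day, day of week)
# are combined only after each model has made its own prediction.

def late_fusion(image_probs, context_probs, w_image=0.7, w_context=0.3):
    """Weighted average of two per-class probability lists, renormalized."""
    fused = [w_image * p + w_context * q for p, q in zip(image_probs, context_probs)]
    total = sum(fused)
    return [f / total for f in fused]

# Toy example with 3 activity classes: the image model is unsure between
# classes 0 and 1, but the context model (say, "weekday morning") favors 1.
image_probs = [0.45, 0.45, 0.10]
context_probs = [0.10, 0.80, 0.10]
fused = late_fusion(image_probs, context_probs)
print(fused.index(max(fused)))  # context breaks the tie in favor of class 1
```

The appeal of late fusion is modularity: the image model and the context model can be trained, tuned, or swapped independently, with only the merge step tying them together.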


Paper in ACM IUI 2015: “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study”

April 1st, 2015 Irfan Essa Posted in ACM ICMI/IUI, Activity Recognition, Audio Analysis, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Multimedia

Paper

  • E. Thomaz, C. Zhang, I. Essa, and G. D. Abowd (2015), “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study,” in Proceedings of ACM Conference on Intelligent User Interfaces (IUI), 2015. (Best Short Paper Award) [PDF] [BIBTEX]
    @InProceedings{2015-Thomaz-IMEARWSFASFS,
      author    = {Edison Thomaz and Cheng Zhang and Irfan Essa and Gregory D. Abowd},
      awards    = {(Best Short Paper Award)},
      booktitle = {Proceedings of ACM Conference on Intelligent User Interfaces (IUI)},
      month     = {May},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-IMEARWSFASFS.pdf},
      title     = {Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study},
      year      = {2015}
    }

Abstract

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in the wild, where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device during one day for an average of 5 hours while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
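The abstract leaves the audio pipeline unspecified, but a common first step for ambient-sound activity inference is to frame the signal and compute short-time energy, flagging frames with acoustic activity (clinking cutlery, chewing) before any classification. A minimal sketch under that assumption (the frame length, threshold, and both helper functions are illustrative, not the paper's method):

```python
# Hypothetical sketch: short-time energy segmentation of an audio signal,
# a generic pre-filtering step before sound classification.

def frame_energy(signal, frame_len=4):
    """Mean squared amplitude per non-overlapping frame."""
    return [
        sum(s * s for s in signal[i:i + frame_len]) / frame_len
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

def active_frames(signal, frame_len=4, threshold=0.1):
    """Indices of frames whose energy exceeds the threshold."""
    return [i for i, e in enumerate(frame_energy(signal, frame_len)) if e > threshold]

# Toy signal: silence, a burst of activity, silence again
signal = [0.0] * 4 + [0.8, -0.7, 0.9, -0.6] + [0.0] * 4
print(active_frames(signal))  # only the middle frame is active
```

Only the flagged frames would then need the (more expensive) classification step, which matters on battery-powered wrist devices like the one described here.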


Paper in IEEE WACV (2015): “Leveraging Context to Support Automated Food Recognition in Restaurants”

January 6th, 2015 Irfan Essa Posted in Activity Recognition, Computer Vision, Edison Thomaz, First Person Computing, Gregory Abowd, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Papers, Ubiquitous Computing, Vinay Bettadapura

Paper

  • V. Bettadapura, E. Thomaz, A. Parnami, G. Abowd, and I. Essa (2015), “Leveraging Context to Support Automated Food Recognition in Restaurants,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{2015-Bettadapura-LCSAFRR,
      arxiv     = {http://arxiv.org/abs/1510.02078},
      author    = {Vinay Bettadapura and Edison Thomaz and Aman Parnami and Gregory Abowd and Irfan Essa},
      booktitle = {Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV)},
      doi       = {10.1109/WACV.2015.83},
      month     = {January},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-LCSAFRR.pdf},
      publisher = {IEEE Computer Society},
      title     = {Leveraging Context to Support Automated Food Recognition in Restaurants},
      url       = {http://www.vbettadapura.com/egocentric/food/},
      year      = {2015}
    }

 

Abstract

The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used to automate food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant available online, coupled with state-of-the-art computer vision techniques, to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants' online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican, and Thai).
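One simple way to see how location context helps, as the abstract describes: if the GPS fix identifies the restaurant and its online menu is known, the classifier only has to distinguish dishes actually served there. A toy sketch of that idea (the dish names, scores, and the `apply_menu_context` helper are made up for illustration, not the paper's system):

```python
# Hypothetical sketch: keep only the classifier scores for dishes on the
# current restaurant's menu, then renormalize. This shrinks a hard
# many-way recognition problem down to a few menu items.

def apply_menu_context(scores, menu):
    """scores: dict dish -> classifier confidence; menu: dishes served here."""
    kept = {dish: s for dish, s in scores.items() if dish in menu}
    total = sum(kept.values())
    return {dish: s / total for dish, s in kept.items()}

# Toy scores over a global dish vocabulary; location context says we are
# at a (hypothetical) Thai restaurant serving only two of these dishes.
scores = {"pad thai": 0.30, "lasagna": 0.40, "green curry": 0.20, "tacos": 0.10}
menu = {"pad thai", "green curry"}
ranked = apply_menu_context(scores, menu)
print(max(ranked, key=ranked.get))  # "pad thai" wins once context is applied
```

Note that without the menu context the classifier's top guess was "lasagna"; restricting to the menu flips the decision, which is the benefit the abstract attributes to contextual information.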


Paper in ACM Ubicomp 2013 “Technological approaches for addressing privacy concerns when recognizing eating behaviors with wearable cameras”

September 14th, 2013 Irfan Essa Posted in Activity Recognition, Computational Photography and Video, Edison Thomaz, Gregory Abowd, ISWC, Mobile Computing, Papers, UBICOMP, Ubiquitous Computing

  • E. Thomaz, A. Parnami, J. Bidwell, I. Essa, and G. D. Abowd (2013), “Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras,” in Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp), 2013. [PDF] [DOI] [BIBTEX]
    @InProceedings{2013-Thomaz-TAAPCWREBWWC,
      author    = {Edison Thomaz and Aman Parnami and Jonathan Bidwell and Irfan Essa and Gregory D. Abowd},
      booktitle = {Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp)},
      doi       = {10.1145/2493432.2493509},
      month     = {September},
      pdf       = {http://www.cc.gatech.edu/~irfan/p/2013-Thomaz-TAAPCWREBWWC.pdf},
      title     = {Technological Approaches for Addressing Privacy Concerns when Recognizing Eating Behaviors with Wearable Cameras},
      year      = {2013}
    }

Abstract

First-person point-of-view (FPPOV) images taken by wearable cameras can be used to better understand people’s eating habits. Human computation is a way to provide effective analysis of FPPOV images in cases where algorithmic approaches currently fail. However, privacy is a serious concern. We provide a framework, the privacy-saliency matrix, for understanding the balance between the eating information in an image and its potential privacy concerns. Using data gathered by 5 participants wearing a lanyard-mounted smartphone, we show how the framework can be used to quantitatively assess the effectiveness of four automated techniques (face detection, image cropping, location filtering and motion filtering) at reducing the privacy-infringing content of images while still maintaining evidence of eating behaviors throughout the day.
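The privacy-saliency matrix described above pairs each image's eating evidence against its privacy risk. A minimal sketch of how such a 2x2 matrix could be applied (the scores, threshold, and quadrant action names are hypothetical stand-ins; the paper defines the framework, not this code):

```python
# Hypothetical sketch: classify each first-person image into a quadrant of
# a 2x2 matrix using two scores -- eating evidence (saliency) and privacy
# risk. The scores would come from detectors such as face detection or
# location filtering; here they are just numbers in [0, 1].

def quadrant(saliency, privacy, threshold=0.5):
    """Map an image's two scores to one of four (illustrative) actions."""
    s_high = saliency >= threshold
    p_high = privacy >= threshold
    if s_high and not p_high:
        return "keep"       # eating evidence, low privacy risk: ideal
    if s_high and p_high:
        return "transform"  # useful but risky: crop or blur before analysis
    if not s_high and p_high:
        return "discard"    # privacy-infringing with no eating evidence
    return "ignore"         # neither informative nor risky

print(quadrant(0.9, 0.2))  # -> "keep"
print(quadrant(0.9, 0.8))  # -> "transform"
```

The four automated techniques evaluated in the paper (face detection, image cropping, location filtering, motion filtering) can then be compared by how many images they move out of the high-privacy cells while keeping images in the high-saliency ones.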



Paper in IEEE CVPR 2013 “Decoding Children’s Social Behavior”

June 27th, 2013 Irfan Essa Posted in Affective Computing, Behavioral Imaging, Denis Lantsman, Gregory Abowd, James Rehg, PAMI/ICCV/CVPR/ECCV, Papers, Thomas Ploetz

  • J. M. Rehg, G. D. Abowd, A. Rozga, M. Romero, M. A. Clements, S. Sclaroff, I. Essa, O. Y. Ousley, Y. Li, C. Kim, H. Rao, J. C. Kim, L. L. Presti, J. Zhang, D. Lantsman, J. Bidwell, and Z. Ye (2013), “Decoding Children’s Social Behavior,” in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013. [PDF] [WEBSITE] [DOI] [BIBTEX]
    @InProceedings{2013-Rehg-DCSB,
      author    = {James M. Rehg and Gregory D. Abowd and Agata Rozga and Mario Romero and Mark A. Clements and Stan Sclaroff and Irfan Essa and Opal Y. Ousley and Yin Li and Chanho Kim and Hrishikesh Rao and Jonathan C. Kim and Liliana Lo Presti and Jianming Zhang and Denis Lantsman and Jonathan Bidwell and Zhefan Ye},
      booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
      doi       = {10.1109/CVPR.2013.438},
      month     = {June},
      organization = {IEEE Computer Society},
      pdf       = {http://www.cc.gatech.edu/~rehg/Papers/Rehg_CVPR13.pdf},
      title     = {Decoding Children's Social Behavior},
      url       = {http://www.cbi.gatech.edu/mmdb/},
      year      = {2013}
    }

Abstract

We introduce a new problem domain for activity recognition: the analysis of children’s social and communicative behaviors based on video and audio data. We specifically target interactions between children aged 1-2 years and an adult. Such interactions arise naturally in the diagnosis and treatment of developmental disorders such as autism. We introduce a new publicly-available dataset containing over 160 sessions of a 3-5 minute child-adult interaction. In each session, the adult examiner followed a semi-structured play interaction protocol which was designed to elicit a broad range of social behaviors. We identify the key technical challenges in analyzing these behaviors, and describe methods for decoding the interactions. We present experimental results that demonstrate the potential of the dataset to drive interesting research questions, and show preliminary results for multi-modal activity recognition.

Full database available from http://www.cbi.gatech.edu/mmdb/



At the UBICOMP 2012 Conference in Pittsburgh, PA, September 5-7, 2012

September 4th, 2012 Irfan Essa Posted in Edison Thomaz, Grant Schindler, Gregory Abowd, Papers, Presentations, Thomas Ploetz, UBICOMP, Ubiquitous Computing, Vinay Bettadapura

At the ACM-sponsored 14th International Conference on Ubiquitous Computing (Ubicomp 2012), Pittsburgh, PA, September 5-7, 2012.

Here are the highlights of my group’s participation in Ubicomp 2012.

  • E. Thomaz, V. Bettadapura, G. Reyes, M. Sandesh, G. Schindler, T. Ploetz, G. D. Abowd, and I. Essa (2012), “Recognizing Water-Based Activities in the Home Through Infrastructure-Mediated Sensing,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2012. [PDF] [WEBSITE] (Oral Presentation at 2pm on Wednesday September 5, 2012).
  • J. Wang, G. Schindler, and I. Essa (2012), “Orientation Aware Scene Understanding for Mobile Camera,” in Proceedings of ACM International Conference on Ubiquitous Computing (UBICOMP), 2012. [PDF][WEBSITE] (Oral Presentation at 2pm on Thursday September 6, 2012).

In addition, my colleague Gregory Abowd has a position paper, “What next, Ubicomp? Celebrating an intellectual disappearing act,” in the Wednesday 11:15 am session, and my colleague and collaborator Thomas Ploetz has a paper, “Automatic Assessment of Problem Behavior in Individuals with Developmental Disabilities,” with co-authors Nils Hammerla, Agata Rozga, Andrea Reavis, Nathan Call, and Gregory Abowd, in the 9:15 am session on Friday, September 7.


Paper in ACM Multimedia (2006): “Interactive mosaic generation for video navigation”

October 22nd, 2006 Irfan Essa Posted in ACM MM, Computational Photography and Video, Gregory Abowd, Kihwan Kim, Multimedia, Papers

K. Kim, I. Essa, and G. Abowd (2006) “Interactive mosaic generation for video navigation.” in Proceedings of the 14th annual ACM international conference on Multimedia, pages 655-658, 2006. [Project Page | DOI | PDF]

Abstract

Navigation through large multimedia collections that include videos and images remains cumbersome. In this paper, we introduce a novel method to visualize and navigate such a collection by creating a mosaic image that visually represents the compilation. This image is generated by a labeling-based layout algorithm using variously sized sample tile images from the collection. Each tile represents the photographs and video files of scenes selected by matching algorithms. The generated mosaic image provides a new way to browse videos thematically and visually summarizes them. Users can generate these mosaics with predefined themes and layouts, or base them on the results of their queries. Our approach supports automatic generation of these layouts by using meta-information such as color, timeline, and the presence of faces, or manually generated annotations from existing systems (e.g., the Family Video Archive).
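The paper's labeling-based layout algorithm is not detailed in this abstract, but the core mosaic idea can be illustrated in a few lines: for each cell of a target grid, pick the collection tile whose average color best matches that cell. A toy sketch under that simplification (the tile names, colors, and both helpers are made up for illustration):

```python
# Hypothetical sketch: the simplest form of photo-mosaic tile assignment,
# matching tiles to grid cells by squared distance in RGB space. The
# actual system also uses themes, time-lines, faces, and annotations.

def closest_tile(cell_color, tiles):
    """tiles: dict name -> (r, g, b) average color; returns best-matching name."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(tiles, key=lambda name: dist(tiles[name], cell_color))

def build_mosaic(grid, tiles):
    """grid: 2D list of target (r, g, b) cell colors -> 2D list of tile names."""
    return [[closest_tile(cell, tiles) for cell in row] for row in grid]

tiles = {"beach.jpg": (220, 200, 150), "forest.jpg": (40, 120, 50)}
grid = [[(230, 210, 140), (35, 110, 60)]]
print(build_mosaic(grid, tiles))  # [['beach.jpg', 'forest.jpg']]
```

In the described system each chosen tile would additionally be clickable, linking back to the video or photo it represents, which is what makes the mosaic a navigation device rather than just a summary image.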

Interactive Video Mosaic


Funding: NSF (2001) ITR/SY “The Aware Home: Sustaining the Quality of Life for an Aging Population”

October 1st, 2001 Irfan Essa Posted in Aaron Bobick, Aware Home, Beth Mynatt, Funding, Gregory Abowd, Wendy Rogers

Award# 0121661 – ITR/SY: The Aware Home: Sustaining the Quality of Life for an Aging Population

Abstract

The focus of this project is on development of a domestic environment that is cognizant of the whereabouts and activities of its occupants and can support them in their everyday life. While the technology is applicable to a range of domestic situations, the emphasis in this work will be on support for aging in place; through collaboration with experts in assistive care and cognitive aging, the PI and his team will design, demonstrate, and evaluate a series of domestic services that aim to maintain the quality of life for an aging population, with the goal of increasing the likelihood of a “stay at home” alternative to assisted living that satisfies the needs of an aging individual and his/her distributed family. In particular, the PI will explore two areas that are key to sustaining quality of life for an independent senior adult: maintaining familial vigilance, and supporting daily routines. The intention is to serve as an active partner, aiding the senior occupant without taking control. This research will lead to advances in three research areas: human-computer interaction; computational perception; and software engineering. To achieve the desired goals, the PI will conduct the research and experimentation in an authentic domestic setting, a novel research facility called the Residential Laboratory recently completed next to the Georgia Tech campus. Together with experts in theoretical and practical aspects of aging, the PI will establish a pattern of research in which informed design of ubiquitous computing technology can be rapidly deployed, evaluated and evolved in an authentic setting. Special attention will be paid throughout to issues relating to privacy and trust implications. The PI will transition the products of this project to researchers and practitioners interested in performing more large-scale observations of the social and economic impact of Aware Home technologies.


Paper (1999) in CoBuild: “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”

October 28th, 1999 Irfan Essa Posted in Aware Home, Beth Mynatt, Collaborators, Gregory Abowd, Intelligent Environments, Thad Starner, Wendy Rogers

Cory D. Kidd, Robert Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter (1999) “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”, In Cooperative Buildings. Integrating Information, Organizations and Architecture , Volume 1670/1999, Springer Berlin / Heidelberg, Lecture Notes in Computer Science, ISBN: 978-3-540-66596-0. [PDF | DOI | Project Site]

Abstract

We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology- and human-centered research objectives in creating the Aware Home.
