Paper in ACM IUI 2015: “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study”

Paper

  • E. Thomaz, C. Zhang, I. Essa, and G. D. Abowd (2015), “Inferring Meal Eating Activities in Real World Settings from Ambient Sounds: A Feasibility Study,” in Proceedings of the ACM Conference on Intelligent User Interfaces (IUI), 2015. (Best Short Paper Award) [PDF] [BIBTEX]
    @InProceedings{    2015-Thomaz-IMEARWSFASFS,
      author  = {Edison Thomaz and Cheng Zhang and Irfan Essa and
          Gregory D. Abowd},
      awards  = {(Best Short Paper Award)},
      booktitle  = {Proceedings of ACM Conference on Intelligent User
          Interfaces (IUI)},
      month    = {May},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Thomaz-IMEARWSFASFS.pdf}
          ,
      title    = {Inferring Meal Eating Activities in Real World
          Settings from Ambient Sounds: A Feasibility Study},
      year    = {2015}
    }

Abstract

Dietary self-monitoring has been shown to be an effective method for weight loss, but it remains an onerous task despite recent advances in food journaling systems. Semi-automated food journaling can reduce the effort of logging, but often requires that eating activities be detected automatically. In this work we describe results from a feasibility study conducted in-the-wild where eating activities were inferred from ambient sounds captured with a wrist-mounted device; twenty participants wore the device for one day, for an average of 5 hours, while performing normal everyday activities. Our system was able to identify meal eating with an F-score of 79.8% in a person-dependent evaluation, and with 86.6% accuracy in a person-independent evaluation. Our approach is intended to be practical, leveraging off-the-shelf devices with audio sensing capabilities, in contrast to systems for automated dietary assessment based on specialized sensors.
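
The pipeline the abstract describes can be sketched at a high level: slice the ambient-audio stream into overlapping windows, compute acoustic features per window, and feed the resulting feature matrix to a classifier. The Python/NumPy sketch below uses two deliberately simple stand-in features, log energy and zero-crossing rate; the paper's actual features, window sizes, and classifier are not given here, so every constant in this example is a hypothetical.

```python
import numpy as np

def frame_features(audio, sr=16000, win_s=1.0, hop_s=0.5):
    """Slice an ambient-audio stream into overlapping windows and
    compute two simple per-window features (log energy and
    zero-crossing rate) as stand-ins for the richer acoustic
    features a real eating-detection pipeline would use."""
    win, hop = int(sr * win_s), int(sr * hop_s)
    feats = []
    for start in range(0, len(audio) - win + 1, hop):
        w = audio[start:start + win]
        log_energy = np.log(np.sum(w ** 2) + 1e-12)
        zcr = np.mean(np.abs(np.diff(np.sign(w))) > 0)
        feats.append((log_energy, zcr))
    return np.array(feats)

# Two seconds of synthetic "ambient" noise at 16 kHz.
rng = np.random.default_rng(0)
audio = rng.standard_normal(32000)
X = frame_features(audio)  # one row per 1 s window, hopped every 0.5 s
```

A real system would train a classifier on top of such a feature matrix, which is the stage the paper evaluates in its person-dependent and person-independent settings.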

Categories: ACM ICMI/IUI, Activity Recognition, Audio Analysis, Behavioral Imaging, Edison Thomaz, Gregory Abowd, Health Systems, Machine Learning, Multimedia | Date: April 1st, 2015 | By: Irfan Essa



Participated in the KAUST Conference on Computational Imaging and Vision 2015

I was invited to participate and present at the King Abdullah University of Science & Technology Conference on Computational Imaging and Vision (CIV).

March 1-4, 2015
Building 19 Level 3, Lecture Halls
Visual Computing Center (VCC)

Invited Speakers included

  • Shree Nayar – Columbia University
  • Daniel Cremers – Technical University of Munich
  • Rene Vidal – The Johns Hopkins University
  • Wolfgang Heidrich – VCC, KAUST
  • Jingyi Yu – University of Delaware
  • Irfan Essa – The Georgia Institute of Technology
  • Mubarak Shah – University of Central Florida
  • Larry Davis – University of Maryland
  • David Forsyth – University of Illinois
  • Gordon Wetzstein – Stanford University
  • Brian Barsky – University of California, Berkeley
  • Yi Ma – ShanghaiTech University
  • and others

This event was hosted by the Visual Computing Center (Wolfgang Heidrich, Bernard Ghanem, and Ganesh Sundaramoorthi).

Daniel Castro also attended and presented a poster at the meeting.


Categories: Computational Photography and Video, Computer Vision, Daniel Castro, Presentations | Date: March 1st, 2015 | By: Irfan Essa



My Infographic Resume

Check out my infographic resume created via Vizualize.me.


Categories: Interesting | Date: January 20th, 2015 | By: Irfan Essa



Paper in WACV (2015): “Semantic Instance Labeling Leveraging Hierarchical Segmentation”

Paper

  • S. Hickson, I. Essa, and H. Christensen (2015), “Semantic Instance Labeling Leveraging Hierarchical Segmentation,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{    2015-Hickson-SILLHS,
      author  = {Steven Hickson and Irfan Essa and Henrik
          Christensen},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      doi    = {10.1109/WACV.2015.147},
      month    = {January},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Hickson-SILLHS.pdf}
          ,
      publisher  = {IEEE Computer Society},
      title    = {Semantic Instance Labeling Leveraging Hierarchical
          Segmentation},
      year    = {2015}
    }

Abstract

Most of the approaches for indoor RGBD semantic labeling focus on using pixels or superpixels to train a classifier. In this paper, we implement a higher-level segmentation using a hierarchy of superpixels to obtain a better segmentation for training our classifier. By focusing on meaningful segments that conform more directly to objects, regardless of size, we train a random forest of decision trees as a classifier using simple features such as the 3D size, LAB color histogram, width, height, and shape as specified by a histogram of surface normals. We test our method on the NYU V2 depth dataset, a challenging dataset of cluttered indoor environments. Our experiments using the NYU V2 depth dataset show that our method achieves state-of-the-art results on both the general semantic labeling introduced by the dataset (floor, structure, furniture, and objects) and a more object-specific semantic labeling. We show that training a classifier on a segmentation from a hierarchy of superpixels yields better results than training directly on superpixels, patches, or pixels as in previous work.
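
As a rough illustration of the classification stage, the sketch below trains a random forest on synthetic per-segment descriptors (a size feature plus a small color histogram). The feature values, class labels, and forest size are invented for the example; only the overall recipe, segment-level features fed to a random forest of decision trees, follows the abstract.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 200
# Hypothetical per-segment descriptors: one "3D size" feature that
# separates the two classes, plus a 4-bin color histogram.
size = np.r_[rng.normal(5.0, 0.5, n), rng.normal(1.0, 0.2, n)]
hist = rng.random((2 * n, 4))
X = np.column_stack([size, hist])
y = np.r_[np.zeros(n, dtype=int), np.ones(n, dtype=int)]  # 0 = furniture, 1 = object

# Train a random forest on the segment descriptors and check
# training accuracy (the data are linearly separable by size).
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
acc = clf.score(X, y)
```

In the paper, the descriptors come from segments in a superpixel hierarchy rather than synthetic draws, which is exactly what lets segments of very different sizes share one classifier.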

Categories: Computer Vision, Henrik Christensen, PAMI/ICCV/CVPR/ECCV, Papers, Robotics, Steven Hickson | Date: January 6th, 2015 | By: Irfan Essa



Paper in IEEE WACV (2015): “Finding Temporally Consistent Occlusion Boundaries using Scene Layout”

Paper

  • S. H. Raza, A. Humayun, M. Grundmann, D. Anderson, and I. Essa (2015), “Finding Temporally Consistent Occlusion Boundaries using Scene Layout,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
    @InProceedings{    2015-Raza-FTCOBUSL,
      author  = {Syed Hussain Raza and Ahmad Humayun and Matthias
          Grundmann and David Anderson and Irfan Essa},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      doi    = {10.1109/WACV.2015.141},
      month    = {January},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Raza-FTCOBUSL.pdf}
          ,
      publisher  = {IEEE Computer Society},
      title    = {Finding Temporally Consistent Occlusion Boundaries
          using Scene Layout},
      year    = {2015}
    }

Abstract

We present an algorithm for finding temporally consistent occlusion boundaries in videos to support segmentation of dynamic scenes. We learn occlusion boundaries in a pairwise Markov random field (MRF) framework. We first estimate the probability of a spatiotemporal edge being an occlusion boundary by using appearance, flow, and geometric features. Next, we enforce occlusion boundary continuity in an MRF model by learning pairwise occlusion probabilities using a random forest. Then, we temporally smooth boundaries to remove temporal inconsistencies in occlusion boundary estimation. Our proposed framework provides an efficient approach for finding temporally consistent occlusion boundaries in video by utilizing causality, redundancy in videos, and the semantic layout of the scene. We have developed a dataset with fully annotated ground-truth occlusion boundaries for over 30 videos (∼5000 frames). This dataset is used to evaluate temporal occlusion boundaries and provides a much-needed baseline for future studies. We perform experiments to demonstrate the role of scene layout and temporal information for occlusion reasoning in videos of dynamic scenes.
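
The temporal-smoothing step can be illustrated in miniature: given per-frame probabilities that a tracked spatiotemporal edge is an occlusion boundary, suppress single-frame flicker before thresholding. The moving average below is a toy stand-in for the paper's learned MRF-based smoothing, and the probability values are made up.

```python
import numpy as np

def temporally_smooth(prob, radius=1):
    """Smooth per-frame occlusion-boundary probabilities for one
    spatiotemporal edge with a centered moving average, so that a
    boundary flickering off for a single frame is averaged away."""
    padded = np.pad(prob, radius, mode='edge')
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(padded, kernel, mode='valid')

# A boundary that flickers off for one frame in the middle.
p = np.array([0.9, 0.9, 0.1, 0.9, 0.9])
smoothed = temporally_smooth(p)  # the dropout frame is pulled back above 0.5
```

The real framework does this jointly with the pairwise MRF terms rather than as an independent filter, but the intent, exploiting redundancy across frames, is the same.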

Categories: Computational Photography and Video, Computer Vision, Matthias Grundmann, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza | Date: January 6th, 2015 | By: Irfan Essa



Paper in IEEE WACV (2015): “Leveraging Context to Support Automated Food Recognition in Restaurants”

Paper

  • V. Bettadapura, E. Thomaz, A. Parnami, G. Abowd, and I. Essa (2015), “Leveraging Context to Support Automated Food Recognition in Restaurants,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{    2015-Bettadapura-LCSAFRR,
      arxiv    = {http://arxiv.org/abs/1510.02078},
      author  = {Vinay Bettadapura and Edison Thomaz and Aman
          Parnami and Gregory Abowd and Irfan Essa},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      doi    = {10.1109/WACV.2015.83},
      month    = {January},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-LCSAFRR.pdf}
          ,
      publisher  = {IEEE Computer Society},
      title    = {Leveraging Context to Support Automated Food
          Recognition in Restaurants},
      url    = {http://www.vbettadapura.com/egocentric/food/},
      year    = {2015}
    }

 

Abstract

The pervasiveness of mobile cameras has resulted in a dramatic increase in food photos, which are pictures reflecting what people eat. In this paper, we study how taking pictures of what we eat in restaurants can be used for the purpose of automating food journaling. We propose to leverage the context of where the picture was taken, with additional information about the restaurant available online, coupled with state-of-the-art computer vision techniques to recognize the food being consumed. To this end, we demonstrate image-based recognition of foods eaten in restaurants by training a classifier with images from restaurants’ online menu databases. We evaluate the performance of our system in unconstrained, real-world settings with food images taken in 10 restaurants across 5 different types of food (American, Indian, Italian, Mexican and Thai).
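
The core idea of leveraging restaurant context can be written as a one-line Bayesian re-ranking: multiply the visual classifier's per-dish scores by a prior derived from the located restaurant's online menu. The dishes, scores, and prior values below are hypothetical, and the paper's actual fusion method may differ; this only illustrates why context helps.

```python
import numpy as np

def rerank_with_context(visual_scores, menu_prior):
    """Fuse per-dish visual classifier scores with a menu-derived
    prior (dishes not on the located restaurant's menu get prior 0)
    and renormalize into a posterior over dishes."""
    post = visual_scores * menu_prior
    return post / post.sum()

dishes = ["pad thai", "lasagna", "tacos"]
visual = np.array([0.30, 0.45, 0.25])  # classifier alone favors lasagna
prior = np.array([0.8, 0.0, 0.2])      # but GPS says we are in a Thai restaurant
post = rerank_with_context(visual, prior)
best = dishes[int(np.argmax(post))]
```

With the menu prior applied, the implausible dish is ruled out even though it had the highest visual score, which is the effect the abstract attributes to context.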

Categories: Activity Recognition, Computer Vision, Edison Thomaz, First Person Computing, Gregory Abowd, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Papers, Ubiquitous Computing, Vinay Bettadapura | Date: January 6th, 2015 | By: Irfan Essa



Paper in WACV (2015): “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices”

Paper

  • V. Bettadapura, I. Essa, and C. Pantofaru (2015), “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. (Best Paper Award) [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
    @InProceedings{    2015-Bettadapura-EFLUFPD,
      arxiv    = {http://arxiv.org/abs/1510.02073},
      author  = {Vinay Bettadapura and Irfan Essa and Caroline
          Pantofaru},
      awards  = {(Best Paper Award)},
      booktitle  = {Proceedings of IEEE Winter Conference on
          Applications of Computer Vision (WACV)},
      doi    = {10.1109/WACV.2015.89},
      month    = {January},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/2015-Bettadapura-EFLUFPD.pdf}
          ,
      publisher  = {IEEE Computer Society},
      title    = {Egocentric Field-of-View Localization Using
          First-Person Point-of-View Devices},
      url    = {http://www.vbettadapura.com/egocentric/localization/}
          ,
      year    = {2015}
    }

Abstract

We present a technique that uses images, videos and sensor data taken from first-person point-of-view devices to perform egocentric field-of-view (FOV) localization. We define egocentric FOV localization as capturing the visual information from a person’s field of view in a given environment and transferring this information onto a reference corpus of images and videos of the same space, hence determining what a person is attending to. Our method matches images and video taken from the first-person perspective with the reference corpus and refines the results using the person’s head-orientation information obtained from the device’s sensors. We demonstrate single and multi-user egocentric FOV localization in different indoor and outdoor environments with applications in augmented reality, event understanding and studying social interactions.
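
A toy version of the matching-plus-refinement recipe reads as follows: shortlist reference images by visual-descriptor distance, then disambiguate among the top matches using the head-orientation (yaw) reading from the device's sensors. The descriptors and angles below are invented for illustration; the paper's actual matching features are not reproduced here.

```python
import numpy as np

def localize_fov(query_desc, query_yaw, ref_descs, ref_yaws, top_k=2):
    """Shortlist reference views by visual-descriptor distance, then
    pick the shortlisted view whose recorded yaw is closest to the
    device's head-orientation reading (angles in degrees)."""
    dists = np.linalg.norm(ref_descs - query_desc, axis=1)
    shortlist = np.argsort(dists)[:top_k]
    # Wrap-aware angular error in [0, 180].
    yaw_err = np.abs((ref_yaws[shortlist] - query_yaw + 180) % 360 - 180)
    return int(shortlist[np.argmin(yaw_err)])

# Two visually near-identical reference views facing opposite ways,
# plus one unrelated view; the yaw sensor breaks the tie.
ref_descs = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
ref_yaws = np.array([90.0, 270.0, 0.0])
idx = localize_fov(np.array([0.05, 0.0]), 265.0, ref_descs, ref_yaws)
```

This mirrors the role the sensors play in the paper: vision alone cannot distinguish repetitive structure in a space, but orientation makes the match unambiguous.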

Categories: Activity Recognition, Caroline Pantofaru, Computer Vision, First Person Computing, Mobile Computing, PAMI/ICCV/CVPR/ECCV, Papers, Vinay Bettadapura | Date: January 6th, 2015 | By: Irfan Essa



Four Papers at IEEE Winter Conference on Applications of Computer Vision (WACV 2015)

Four papers accepted at the IEEE Winter Conference on Applications of Computer Vision (WACV) 2015. See you at Waikoloa Beach, Hawaii!

  • V. Bettadapura, E. Thomaz, A. Parnami, G. Abowd, and I. Essa (2015), “Leveraging Context to Support Automated Food Recognition in Restaurants,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]
  • S. Hickson, I. Essa, and H. Christensen (2015), “Semantic Instance Labeling Leveraging Hierarchical Segmentation,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
  • S. H. Raza, A. Humayun, M. Grundmann, D. Anderson, and I. Essa (2015), “Finding Temporally Consistent Occlusion Boundaries using Scene Layout,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. [PDF] [DOI] [BIBTEX]
  • V. Bettadapura, I. Essa, and C. Pantofaru (2015), “Egocentric Field-of-View Localization Using First-Person Point-of-View Devices,” in Proceedings of IEEE Winter Conference on Applications of Computer Vision (WACV), 2015. (Best Paper Award) [PDF] [WEBSITE] [DOI] [arXiv] [BIBTEX]

The last of these also won the Best Paper Award (see http://wacv2015.org/). More details coming soon.

 

Categories: Computational Photography and Video, Computer Vision, PAMI/ICCV/CVPR/ECCV, Papers, S. Hussain Raza, Steven Hickson, Vinay Bettadapura | Date: January 5th, 2015 | By: Irfan Essa



Computational Photography (CS 6475) for Georgia Tech’s Online MSCS Program (via Udacity)

Today, the inaugural offering of Computational Photography (CS 6475) was launched for Georgia Tech’s Online MSCS program on the Udacity platform.

Course Description

CS 6475* (3-0-3): Computational Photography – (Instructor: Irfan Essa) – This class explores how computation impacts the entire workflow of photography, which is traditionally aimed at capturing light from a (3D) scene to form a (2D) image. A detailed study of the perceptual, technical, and computational aspects of forming pictures, and more precisely the capture and depiction of reality on a (mostly 2D) medium of images, is undertaken over the entire term. The scientific, perceptual, and artistic principles behind image-making are emphasized, especially as impacted and changed by computation. Topics include the relationship between pictorial techniques and the human visual system; intrinsic limitations of 2D representations and their possible compensations; and technical issues involving capturing light to form images. Technical aspects of image capture and rendering, and exploration of how such a medium can be used to its maximum potential, are examined. New forms of cameras and imaging paradigms are introduced. Students take a hands-on approach over the entire term, using computational techniques merged with digital imaging processes to produce photographic artifacts.

Do note that there are programming assignments in this class, and working knowledge of linear algebra, calculus, probability, and programming in C++/Python/MATLAB/Java is required. OpenCV or MATLAB is used in this class as appropriate. More information is available on the Computational Photography class website.

Video Preview

Tags: , , | Categories: Computational Photography, Computational Photography and Video | Date: January 5th, 2015 | By: Irfan Essa |

No Comments »



William Mong Distinguished Lecture at the University of Hong Kong on “Video Cameras are Everywhere: Data-Driven Methods for Video Analysis and Enhancement”

Video Cameras are Everywhere: Data-Driven Methods for Video Analysis and Enhancement

Irfan Essa (prof.irfanessa.com)
Georgia Institute of Technology
School of Interactive Computing
GVU and RIM @ GT Centers 

Abstract 

In this talk, I will start by describing the pervasiveness of image and video content, and how such content is growing with the ubiquity of cameras. I will use this to motivate the need for better tools for analysis and enhancement of video content. I will start with some of our earlier work on temporal modeling of video, then lead up to some of our current work and describe two main projects: (1) our approach for a video stabilizer, currently implemented and running on YouTube, and its extensions; and (2) a robust and scalable method for video segmentation.

I will describe, in some detail, our video stabilization method, which generates stabilized videos and is in wide use. Our method allows for video stabilization beyond the conventional filtering that only suppresses high-frequency jitter. It also supports removal of rolling-shutter distortions common in modern CMOS cameras, which capture the frame one scan line at a time, resulting in non-rigid image distortions such as shear and wobble. Our method does not rely on a priori knowledge and works on video from any camera or on legacy footage. I will showcase examples of this approach and also discuss how this method is launched and running on YouTube, with millions of users.

Then I will describe an efficient and scalable technique for spatio-temporal segmentation of long video sequences using a hierarchical graph-based algorithm. This hierarchical approach generates high-quality segmentations, and we demonstrate the use of this segmentation as users interact with the video, enabling efficient annotation of objects within it. I will also show some recent work on how this segmentation and annotation can be used for dynamic scene understanding.
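
As a point of reference for the stabilization discussion, the textbook baseline is to low-pass filter an estimated per-frame camera trajectory and warp each frame by the difference between the smoothed and original paths. The sketch below does exactly that for a 1-D path; the YouTube stabilizer described above optimizes a much richer cinematographic objective (and handles rolling shutter), so treat this as a conceptual baseline only.

```python
import numpy as np

def smooth_camera_path(path, window=5):
    """Moving-average low-pass filter of a per-frame camera path.
    Returns the smoothed path and the per-frame correction to apply
    when warping each frame toward the smoothed trajectory."""
    kernel = np.ones(window) / window
    padded = np.pad(path, window // 2, mode='edge')
    smoothed = np.convolve(padded, kernel, mode='valid')
    return smoothed, smoothed - path

# A forward pan contaminated with per-frame jitter.
rng = np.random.default_rng(2)
path = np.linspace(0, 20, 60) + rng.normal(0, 0.5, 60)
smoothed, correction = smooth_camera_path(path)
```

The key limitation of this baseline, and the motivation for the richer method in the talk, is that a plain moving average removes jitter but cannot reproduce deliberate cinematographic motions such as constant-velocity pans with clean starts and stops.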

Bio: http://prof.irfanessa.com/bio 

Categories: Computational Photography and Video, Computer Vision, Presentations | Date: December 11th, 2014 | By: Irfan Essa