
Paper in Advanced Robotics (2009): “Human Action Recognition Using Global Point Feature Histograms and Action Shapes”

October 29th, 2009 Irfan Essa Posted in Activity Recognition, Franzi Meier, Intelligent Environments, Michael Beetz, Papers

Radu Bogdan Rusu, Jan Bandouch, Franziska Meier, Irfan Essa and Michael Beetz (2009) “Human Action Recognition Using Global Point Feature Histograms and Action Shapes”, Advanced Robotics, volume 23, pages 1873–1908, Koninklijke Brill NV, Leiden and The Robotics Society of Japan. [DOI | PDF]

Abstract

This paper investigates the recognition of human actions from three-dimensional (3-D) point clouds that encode the motions of people acting in sensor-distributed indoor environments. Data streams are time sequences of silhouettes extracted from cameras in the environment. From the 2-D silhouette contours we generate space–time streams by continuously aligning and stacking the contours along the time axis as third spatial dimension. The space–time stream of an observation sequence is segmented into parts corresponding to subactions using a pattern matching technique based on suffix trees and interval scheduling. Then, the segmented space–time shapes are processed by treating the shapes as 3-D point clouds and estimating global point feature histograms for them. The resultant models are clustered using statistical analysis and our experimental results indicate that the presented methods robustly derive different action classes. This holds despite large intra-class variance in the recorded datasets due to performances from different persons at different time intervals.
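To make the contour-stacking step concrete, here is a minimal Python sketch of turning a sequence of 2-D silhouette contours into a 3-D space–time point cloud. The function name, the centroid-based alignment, and the toy data are my own assumptions for illustration, not the authors' code:

```python
import numpy as np

def contours_to_space_time_cloud(contours, dt=1.0):
    """Stack 2-D silhouette contours along the time axis (as a third
    spatial dimension) to form a 3-D space-time point cloud.

    contours: list of (N_i, 2) arrays of (x, y) contour points,
              one array per video frame.
    dt: spacing between frames along the time axis (assumed unit).
    Returns an (M, 3) array of (x, y, t) points.
    """
    clouds = []
    for t, contour in enumerate(contours):
        contour = np.asarray(contour, dtype=float)
        # Align each contour on its centroid so the stream is not
        # dominated by gross translation of the person (a stand-in
        # for the paper's continuous alignment step).
        centered = contour - contour.mean(axis=0)
        time_col = np.full((len(centered), 1), t * dt)
        clouds.append(np.hstack([centered, time_col]))
    return np.vstack(clouds)

# Toy example: a square silhouette drifting right over three frames.
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
frames = [square + [shift, 0] for shift in (0.0, 0.5, 1.0)]
cloud = contours_to_space_time_cloud(frames)
print(cloud.shape)  # (12, 3)
```

The resulting (x, y, t) cloud is the kind of input on which the paper's global point feature histograms would then be estimated.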

© Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009

Overview of the approach.

Keywords: Action recognition, point cloud, global features, action segmentation


GT Research Horizons — Fall 2003

October 30th, 2003 Irfan Essa Posted in Aware Home, Health Systems, Human Factors, In The News, Intelligent Environments, Research

GT Research Horizons — Fall 2003


Paper AAAI (2002): “Recognizing Multitasked Activities from Video using Stochastic Context-Free Grammar”

September 29th, 2002 Irfan Essa Posted in AAAI/IJCAI/UAI, Activity Recognition, Darnell Moore, Intelligent Environments, Papers

D. Moore and I. Essa (2002). “Recognizing multitasked activities from video using stochastic context-free grammar”, in Proceedings of AAAI 2002. [PDF | Project Site]

Abstract

In this paper, we present techniques for recognizing complex, multitasked activities from video. Visual information like image features and motion appearances, combined with domain-specific information, like object context, is used initially to label events. Each action event is represented with a unique symbol, allowing for a sequence of interactions to be described as an ordered symbolic string. Then, a model of stochastic context-free grammar (SCFG), which is developed using underlying rules of an activity, is used to provide the structure for recognizing semantically meaningful behavior over extended periods. Symbolic strings are parsed using the Earley-Stolcke algorithm to determine the most likely semantic derivation for recognition. Parsing substrings allows us to recognize patterns that describe high-level, complex events taking place over segments of the video sequence. We introduce new parsing strategies to enable error detection and recovery in stochastic context-free grammar and methods of quantifying group and individual behavior in activities with separable roles. We show through experiments, with a popular card game, the recognition of high-level narratives of multi-player games and the identification of player strategies and behavior using computer vision.
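The symbolic-parsing idea can be illustrated with a toy stochastic grammar. The sketch below scores an action-symbol string with a probabilistic CYK parse rather than the Earley-Stolcke parser the paper uses, and the grammar symbols and probabilities are invented for illustration, not taken from the paper's card-game grammar:

```python
from collections import defaultdict

# Toy SCFG in Chomsky normal form. Each detected action event is a
# terminal symbol; rule probabilities are illustrative only.
lexical = {           # (nonterminal, terminal) -> probability
    ("DEAL", "deal"): 1.0,
    ("ACT", "bet"): 0.6,
    ("ACT", "hit"): 0.4,
}
binary = {            # (A, B, C) for rule A -> B C -> probability
    ("ROUND", "DEAL", "ACT"): 1.0,
}

def best_derivation_prob(tokens, start="ROUND"):
    """Probability of the most likely derivation of a symbol string
    under the toy grammar (probabilistic CYK, a simpler stand-in
    for Earley-Stolcke parsing)."""
    n = len(tokens)
    chart = defaultdict(float)  # (i, j, nonterminal) -> best prob
    for i, tok in enumerate(tokens):
        for (nt, term), p in lexical.items():
            if term == tok:
                chart[i, i + 1, nt] = max(chart[i, i + 1, nt], p)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for (a, b, c), p in binary.items():
                    prob = p * chart[i, k, b] * chart[k, j, c]
                    chart[i, j, a] = max(chart[i, j, a], prob)
    return chart[0, n, start]

print(best_derivation_prob(["deal", "bet"]))  # 0.6
```

The paper's contribution goes further, adding error detection/recovery to the parse, but the chart-filling structure above is the common core of probabilistic parsing.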

Recognizing Black Jack


NY Times Article (2001): “Smart Home, to Avoid the Nursing Home”

April 5th, 2001 Irfan Essa Posted in Aware Home, In The News, Intelligent Environments, Research

Anne Eisenberg (2001) “A ‘Smart’ Home, to Avoid the Nursing Home”, New York Times, Circuits Section, April 5, 2001.

Quote from the Article: “Cameras are going to rule one day at the Georgia Tech house, though, staff members there say. Dr. Irfan A. Essa, a computer science professor at Georgia Tech, is one of the people building a tracking system, based on video cameras, that will one day replace radio frequency tags. ‘We can locate where the person is,’ Dr. Essa said, ‘and make a first-level guess at where this person is heading using the optical sensors.’”


Paper: IEEE Personal Communications (2000) “Ubiquitous sensing for smart and aware environments”

October 14th, 2000 Irfan Essa Posted in Aware Home, Intelligent Environments, Papers, Research


Essa, I. A. (2000), “Ubiquitous sensing for smart and aware environments”, IEEE Personal Communications [see also IEEE Wireless Communications], volume 7, issue 5, pages 47–49, October 2000. ISSN: 1070-9916, CODEN: IPCME7, INSPEC Accession Number: 6756447, DOI: 10.1109/98.878538

Abstract

As computing technology continues to become increasingly pervasive and ubiquitous, we envision the development of environments that can sense what we are doing and support our daily activities. In this article, we outline our efforts toward building such environments and discuss the importance of a sensing and signal-understanding infrastructure that leads to awareness of what is happening in an environment and how it can best be supported. Such an infrastructure supports both high- and low-end data transmission and processing, while allowing for detailed interpretation, modeling and recognition from sensed information. We are currently prototyping several aware environments to aid in the development and study of such sensing and computation in real-world settings.


Paper (1999) in CoBuild: “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”

October 28th, 1999 Irfan Essa Posted in Aware Home, Beth Mynatt, Collaborators, Gregory Abowd, Intelligent Environments, Thad Starner, Wendy Rogers

Cory D. Kidd, Robert Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter (1999) “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”, In Cooperative Buildings. Integrating Information, Organizations and Architecture , Volume 1670/1999, Springer Berlin / Heidelberg, Lecture Notes in Computer Science, ISBN: 978-3-540-66596-0. [PDF | DOI | Project Site]

Abstract

We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology-and human-centered research objectives in creating the Aware Home.


Project: The Aware Home

October 1st, 1999 Irfan Essa Posted in A. Dan Fisk, Aware Home, Beth Mynatt, Gregory Abowd, Intelligent Environments, Projects, Research, Wendy Rogers

The Aware Home

Is it possible to create a home environment that is aware of its occupants’ whereabouts and activities?

If we build such a home, how can it provide services to its residents that enhance their quality of life or help them to maintain independence as they age? The Aware Home Research Initiative (AHRI) is an interdisciplinary research endeavor at Georgia Tech aimed at addressing the fundamental technical, design, and social challenges presented by such questions.

The Aware Home Research Initiative at Georgia Institute of Technology is devoted to the multidisciplinary exploration of emerging technologies and services based in the home. Starting in 1998, our collection of faculty and students has created a unique research facility that allows us to simulate and evaluate user experiences with off-the-shelf and state-of-the-art technologies. With specific expertise in health, education, entertainment and usable security, we are able to apply our research to problems of significant social and economic impact.

New technologies show great promise when applied to the home domain. The opportunities are vast, ranging from new modes of entertainment, to services that simplify the management of the home and its myriad activities, to much-needed assistance for individuals at risk and the busy family members who care for them.

Home entertainment is important to help us enjoy our leisure time. We are interested in developing new ways to simplify the control of a complex array of digital entertainment items and to create new ways to capture the meaningful moments of everyday life and share them with others now and well into the future. As we introduce more technologies into the home, we do not want to change an important characteristic of home life: the ability to relax and enjoy family events. Currently, the influx of technology into the home has produced an increased burden to manage that infrastructure and guard against new security threats. By considering the importance of the human experience in managing technology and maintaining control and privacy, we are showing how a state-of-the-art experience can also be an enjoyable one.

Many otherwise busy adults are sandwiched between generations of older and younger relations that rely on them for care. Many baby boomers take responsibility to help an aging parent retain an independent life in his or her own home, rather than moving to an institutional facility. Others are assisting a developmentally delayed child or grandchild grow into an independent and functional lifestyle. Still others may help a sibling cope with a chronic health condition. Whatever the situation, there are many opportunities for home technologies to support the important communication and coordination tasks of a network of formal and informal caregivers. The same technologies that revolutionized and “flattened” the workplace can now make life easier in the home.


Paper ICCV (1999): Exploiting Human Actions and Object Context for Recognition Tasks

September 20th, 1999 Irfan Essa Posted in Activity Recognition, Darnell Moore, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Papers

D. J. Moore, I. Essa, and M. Hayes (1999) “Exploiting Human Actions and Object Context for Recognition Tasks.” In Proceedings of Seventh International Conference on Computer Vision (ICCV’99), Volume 1, p. 80, Sept 20, 1999. ISBN: 0-7695-0164-8. [ DOI | PDF | Project Site]

Abstract

Overhead Image for Object/Action Recognition in the Office

Our goal is to exploit human motion and object context to perform action recognition and object classification. Towards this end, we introduce a framework for recognizing actions and objects by measuring image-, object- and action-based information from video. Hidden Markov models are combined with object context to classify hand actions, which are aggregated by a Bayesian classifier to summarize activities. We also use Bayesian methods to differentiate the class of unknown objects by evaluating detected actions along with low-level, extracted object features. Our approach is appropriate for locating and classifying objects under a variety of conditions including full occlusion. We show experiments where both familiar and previously unseen objects are recognized using action and context information.
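The Bayesian combination of action evidence with object context can be illustrated with a minimal sketch. The object classes, actions, and probability values below are invented for illustration, and the conditional-independence assumption across actions is a simplification of the paper's classifier:

```python
# Hypothetical prior over object classes and per-class likelihoods
# of observing each hand action near the object. In the paper these
# action likelihoods would come from HMM-based action recognizers.
priors = {"phone": 0.3, "cup": 0.7}
action_likelihood = {
    ("phone", "dial"): 0.8, ("phone", "drink"): 0.05,
    ("cup", "dial"): 0.1,   ("cup", "drink"): 0.7,
}

def object_posterior(actions):
    """Posterior over object classes given a sequence of detected
    actions, via Bayes' rule with actions assumed conditionally
    independent given the object class."""
    scores = {}
    for obj, prior in priors.items():
        p = prior
        for a in actions:
            p *= action_likelihood[(obj, a)]
        scores[obj] = p
    z = sum(scores.values())  # normalize to a proper distribution
    return {obj: p / z for obj, p in scores.items()}

post = object_posterior(["drink", "drink"])
print(post)
```

Repeated "drink" actions pull the posterior strongly toward the "cup" class, which mirrors how action evidence can disambiguate an object that low-level features alone (or full occlusion) leave uncertain.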


Paper in ICCV 1999: “Motion Based Decompositing of Video”

September 10th, 1999 Irfan Essa Posted in Aware Home, Computational Photography and Video, Gabriel Brostow, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Papers

  • G. J. Brostow and I. Essa (1999), “Motion Based Decompositing of Video,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), 1999, pp. 8-13. [PDF] [WEBSITE] [DOI] [BLOG] [BIBTEX]
    @inproceedings{1999-Brostow-MBDV,
      Author = {G. J. Brostow and I. Essa},
      Blog = {http://prof.irfanessa.com/1999/09/10/1999-brostow-mbdv/},
      Booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
      Date-Modified = {2012-05-10 11:03:52 +0000},
      Doi = {http://dx.doi.org/10.1109/ICCV.1999.791190},
      Pages = {8--13},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/1999-Brostow-MBDV.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Motion Based Decompositing of Video},
      Url = {http://www.cc.gatech.edu/cpl/projects/layering/},
      Volume = {1},
      Year = {1999},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/layering/},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICCV.1999.791190}}

Abstract

We present a method to decompose video sequences into layers that represent the relative depths of complex scenes. Our method combines spatial information with temporal occlusions to determine relative depths of these layers. Spatial information is obtained through edge detection and a customized contour completion algorithm. Activity in a scene is used to extract temporal occlusion events, which are in turn, used to classify objects as occluders or occludees. The path traversed by the moving objects determines the segmentation of the scene. Several examples of decompositing and compositing of video are shown. This approach can be applied in the pre-processing of sequences for compositing or tracking purposes and to determine the approximate 3D structure of a scene.
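The occluder/occludee classification implies a partial depth ordering of scene layers. Here is a minimal sketch of recovering a front-to-back order from observed occlusion events via a topological sort; the event format and layer names are assumptions for illustration, not the paper's representation:

```python
from collections import defaultdict

def depth_order(occlusion_events):
    """Order scene layers front-to-back given occlusion events of
    the form (occluder, occludee), using a depth-first topological
    sort. Assumes the observed events are mutually consistent."""
    edges = defaultdict(set)   # occluder -> set of occludees
    nodes = set()
    for front, back in occlusion_events:
        edges[front].add(back)
        nodes.update((front, back))

    order, visited = [], set()

    def visit(n):
        if n in visited:
            return
        visited.add(n)
        for m in edges[n]:     # everything n occludes lies behind it
            visit(m)
        order.append(n)

    for n in sorted(nodes):    # sorted for deterministic output
        visit(n)
    return order[::-1]         # front-most layer first

# "person" passed in front of "chair"; "chair" sits in front of "wall".
print(depth_order([("person", "chair"), ("chair", "wall")]))
# ['person', 'chair', 'wall']
```

In the paper the events themselves come from activity in the scene (moving objects revealing who occludes whom); the sort above only shows how such pairwise events compose into relative layer depths.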


Paper: AI Magazine (1999) “Computers Seeing People”

July 14th, 1999 Irfan Essa Posted in Aware Home, Face and Gesture, Intelligent Environments, Papers

Irfan A. Essa (1999), “Computers Seeing People”, AI Magazine, 20(2): 69–82, Summer 1999.

Abstract

AI researchers are interested in building intelligent machines that can interact with them as they interact with each other. Science fiction writers have given us these goals in the form of HAL in 2001: A Space Odyssey and Commander Data in Star Trek: The Next Generation. However, at present, our computers are deaf, dumb, and blind, almost unaware of the environment they are in and of the user who interacts with them. In this article, I present the current state of the art in machines that can see people, recognize them, determine their gaze, understand their facial expressions and hand gestures, and interpret their activities. I believe that by building machines with such abilities for perceiving people, we will take one step closer to building HAL and Commander Data.
