Talk: Invited Speaker at CMU’s Robotics Institute (2002): “Temporal Reasoning from Video to Temporal Synthesis of Video”

February 12th, 2002 Irfan Essa Posted in Activity Recognition, Aware Home, Computational Photography and Video, Presentations

Temporal Reasoning from Video to Temporal Synthesis of Video

Abstract

In this talk, I will present some ongoing work on extracting spatio-temporal cues from video for both synthesis of novel video sequences and recognition of complex activities. First, I will discuss (in brief) our work on Video Textures, where repeating information is extracted to generate extended sequences of video. I will then describe some of our extensions to this approach that allow for controlled generation of animations of video sprites. We have developed various learning and optimization techniques that allow for video-based animations of photo-realistic characters. Then I will describe our new approach for image and video synthesis that builds on optimal patch-based copying of samples. I will show how our method allows for iterative refinement and extends to synthesis of both images and video from very limited samples. In the next part of my talk, I will describe how a similar analysis of video can be used to recognize what a person is doing in a scene. Such an analysis of video, aimed at recognition, requires more contextual information about the environment. I will show how we leverage contextual information shared between actions and objects to recognize what is happening in complex environments. I will also show that by adding some form of grammar (we use a Stochastic Context-Free Grammar) we can recognize very complex, multi-tasked activities. Finally, I will describe (very briefly) the Aware Home project at Georgia Tech, which is a primary area of ongoing and future research for me and my group. Further information on my work with videos is available from my webpage at http://www.cc.gatech.edu/~irfan
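
To make the Video Textures idea concrete, here is a minimal sketch of the core loop, assuming grayscale frames stored in a single NumPy array; the function names, distance metric, and temperature heuristic are illustrative choices, not the published implementation. The idea is to jump from frame i to any frame j that resembles frame i+1, so the cut is hard to notice and playback can continue well past the original clip length.

    import numpy as np

    def frame_distances(frames):
        # Pairwise L2 distances between frames; frames is an (N, H, W) array.
        flat = frames.reshape(len(frames), -1).astype(np.float64)
        sq = (flat ** 2).sum(axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * flat @ flat.T
        return np.sqrt(np.maximum(d2, 0.0))

    def extend_sequence(frames, length, sigma=None, rng=None):
        # Build an index sequence of the requested length by stochastically
        # jumping from frame i to frames that look like frame i+1.
        if rng is None:
            rng = np.random.default_rng()
        D = frame_distances(frames)
        if sigma is None:
            sigma = 0.05 * D.mean() + 1e-8   # heuristic temperature
        n = len(frames)
        P = np.exp(-D[1:, :] / sigma)        # row i: how well each j can follow frame i
        P /= P.sum(axis=1, keepdims=True)
        seq = [0]
        for _ in range(length - 1):
            i = seq[-1]
            row = P[min(i, n - 2)]           # clamp at the final frame
            seq.append(int(rng.choice(n, p=row)))
        return seq

A call such as extend_sequence(frames, 500) returns a list of frame indices that, played back in order, form a clip much longer than the source while mostly reusing its smooth transitions.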


Funding: NSF (2001) ITR/SY “The Aware Home: Sustaining the Quality of Life for an Aging Population”

October 1st, 2001 Irfan Essa Posted in Aaron Bobick, Aware Home, Beth Mynatt, Funding, Gregory Abowd, Wendy Rogers

Award# 0121661 – ITR/SY: The Aware Home: Sustaining the Quality of Life for an Aging Population

ABSTRACT

The focus of this project is on development of a domestic environment that is cognizant of the whereabouts and activities of its occupants and can support them in their everyday life. While the technology is applicable to a range of domestic situations, the emphasis in this work will be on support for aging in place; through collaboration with experts in assistive care and cognitive aging, the PI and his team will design, demonstrate, and evaluate a series of domestic services that aim to maintain the quality of life for an aging population, with the goal of increasing the likelihood of a “stay at home” alternative to assisted living that satisfies the needs of an aging individual and his/her distributed family. In particular, the PI will explore two areas that are key to sustaining quality of life for an independent senior adult: maintaining familial vigilance, and supporting daily routines. The intention is to serve as an active partner, aiding the senior occupant without taking control. This research will lead to advances in three research areas: human-computer interaction; computational perception; and software engineering. To achieve the desired goals, the PI will conduct the research and experimentation in an authentic domestic setting, a novel research facility called the Residential Laboratory recently completed next to the Georgia Tech campus. Together with experts in theoretical and practical aspects of aging, the PI will establish a pattern of research in which informed design of ubiquitous computing technology can be rapidly deployed, evaluated and evolved in an authentic setting. Special attention will be paid throughout to issues relating to privacy and trust implications. The PI will transition the products of this project to researchers and practitioners interested in performing more large-scale observations of the social and economic impact of Aware Home technologies.


NY Times Article (2001): “A ‘Smart’ Home, to Avoid the Nursing Home”

April 5th, 2001 Irfan Essa Posted in Aware Home, In The News, Intelligent Environments, Research

Anne Eisenberg (2001), “A ‘Smart’ Home, to Avoid the Nursing Home,” New York Times, Circuits section, April 5, 2001.

Quote from the Article: "Cameras are going to rule one day at the Georgia Tech house, though, staff members there say. Dr. Irfan A. Essa, a computer science professor at Georgia Tech, is one of the people building a tracking system, based on video cameras, that will one day replace radio frequency tags. 'We can locate where the person is,' Dr. Essa said, 'and make a first-level guess at where this person is heading using the optical sensors.'"


Paper: IEEE Personal Communications (2000) “Ubiquitous sensing for smart and aware environments”

October 14th, 2000 Irfan Essa Posted in Aware Home, Intelligent Environments, Papers, Research


Essa, I. A. (2000), "Ubiquitous sensing for smart and aware environments," IEEE Personal Communications [see also IEEE Wireless Communications], vol. 7, no. 5, pp. 47-49, October 2000. ISSN: 1070-9916, DOI: 10.1109/98.878538

Abstract

As computing technology continues to become increasingly pervasive and ubiquitous, we envision the development of environments that can sense what we are doing and support our daily activities. In this article, we outline our efforts toward building such environments and discuss the importance of a sensing and signal-understanding infrastructure that leads to awareness of what is happening in an environment and how it can best be supported. Such an infrastructure supports both high- and low-end data transmission and processing, while allowing for detailed interpretation, modeling, and recognition from sensed information. We are currently prototyping several aware environments to aid in the development and study of such sensing and computation in real-world settings.


Paper (1999) in CoBuild: “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”

October 28th, 1999 Irfan Essa Posted in Aware Home, Beth Mynatt, Collaborators, Gregory Abowd, Intelligent Environments, Thad Starner, Wendy Rogers

Cory D. Kidd, Robert Orr, Gregory D. Abowd, Christopher G. Atkeson, Irfan A. Essa, Blair MacIntyre, Elizabeth Mynatt, Thad E. Starner and Wendy Newstetter (1999) “The Aware Home: A Living Laboratory for Ubiquitous Computing Research”, In Cooperative Buildings. Integrating Information, Organizations and Architecture , Volume 1670/1999, Springer Berlin / Heidelberg, Lecture Notes in Computer Science, ISBN: 978-3-540-66596-0. [PDF | DOI | Project Site]

Abstract

We are building a home, called the Aware Home, to create a living laboratory for research in ubiquitous computing for everyday activities. This paper introduces the Aware Home project and outlines some of our technology- and human-centered research objectives in creating the Aware Home.


Project: The Aware Home

October 1st, 1999 Irfan Essa Posted in A. Dan Fisk, Aware Home, Beth Mynatt, Gregory Abowd, Intelligent Environments, Projects, Research, Wendy Rogers

The Aware Home

Is it possible to create a home environment that is aware of its occupants' whereabouts and activities?

If we build such a home, how can it provide services to its residents that enhance their quality of life or help them to maintain independence as they age? The Aware Home Research Initiative (AHRI) is an interdisciplinary research endeavor at Georgia Tech aimed at addressing the fundamental technical, design, and social challenges presented by such questions.

The Aware Home Research Initiative at Georgia Institute of Technology is devoted to the multidisciplinary exploration of emerging technologies and services based in the home. Starting in 1998, our collection of faculty and students has created a unique research facility that allows us to simulate and evaluate user experiences with off-the-shelf and state-of-the-art technologies. With specific expertise in health, education, entertainment, and usable security, we are able to apply our research to problems of significant social and economic impact.

New technologies show great promise when applied to the home domain. The opportunities are vast, ranging from new modes of entertainment and services that simplify the management of the home and its myriad activities to much-needed assistance for individuals at risk and the busy family members who care for them.

Home entertainment is important to help us enjoy our leisure time. We are interested in developing new ways to simplify the control of a complex array of digital entertainment items and to create new ways to capture the meaningful moments of everyday life and share them with others now and well into the future. As we introduce more technologies into the home, we do not want to change the important characteristic of home life: relaxing and enjoying family events. Currently, the influx of technology into the home has produced an increased burden to manage that infrastructure and guard against new security threats. By considering the importance of the human experience in managing technology and maintaining control and privacy, we are showing how a state-of-the-art experience can also be an enjoyable one.

Many otherwise busy adults are sandwiched between generations of older and younger relations who rely on them for care. Many baby boomers take responsibility for helping an aging parent retain an independent life in his or her own home, rather than moving to an institutional facility. Others are assisting a developmentally delayed child or grandchild grow into an independent and functional lifestyle. Still others may help a sibling cope with a chronic health condition. Whatever the situation, there are many opportunities for home technologies to support the important communication and coordination tasks of a network of formal and informal caregivers. The same technologies that revolutionized and "flattened" the workplace can now make life easier in the home.


Paper in ICCV 1999: “Motion Based Decompositing of Video”

September 10th, 1999 Irfan Essa Posted in Aware Home, Computational Photography and Video, Gabriel Brostow, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Papers

Motion Based Decompositing of Video

  • G. J. Brostow and I. Essa (1999), “Motion Based Decompositing of Video,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), 1999, pp. 8-13. [PDF] [WEBSITE] [DOI] [BLOG] [BIBTEX]
    @InProceedings{    1999-Brostow-MBDV,
      author  = {G. J. Brostow and I. Essa},
      blog    = {http://prof.irfanessa.com/1999/09/10/1999-brostow-mbdv/},
      booktitle  = {Proceedings of IEEE International Conference on
          Computer Vision (ICCV)},
      doi    = {http://dx.doi.org/10.1109/ICCV.1999.791190},
      pages    = {8--13},
      pdf    = {http://www.cc.gatech.edu/~irfan/p/1999-Brostow-MBDV.pdf},
      publisher  = {IEEE Computer Society},
      title    = {Motion Based Decompositing of Video},
      url    = {http://www.cc.gatech.edu/cpl/projects/layering/},
      volume  = {1},
      year    = {1999}
    }

Abstract

We present a method to decompose video sequences into layers that represent the relative depths of complex scenes. Our method combines spatial information with temporal occlusions to determine the relative depths of these layers. Spatial information is obtained through edge detection and a customized contour completion algorithm. Activity in a scene is used to extract temporal occlusion events, which are, in turn, used to classify objects as occluders or occludees. The path traversed by the moving objects determines the segmentation of the scene. Several examples of decompositing and compositing of video are shown. This approach can be applied in the pre-processing of sequences for compositing or tracking purposes and to determine the approximate 3D structure of a scene.
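
As a rough illustration of the depth-ordering step only (a sketch under assumed inputs, not the algorithm from the paper): each detected occlusion event says that one object passed in front of another, and relative layers can then be read off the resulting "is in front of" graph.

    from collections import defaultdict, deque

    def layer_order(occlusion_events):
        # occlusion_events: (occluder, occludee) object-id pairs, one per place
        # where a tracked object was observed passing in front of another.
        # Returns {object id: layer index}, with layer 0 nearest the camera.
        behind = defaultdict(set)       # occluder -> objects seen behind it
        indegree = defaultdict(int)     # how many distinct objects occlude this one
        objects = set()
        for front, back in occlusion_events:
            objects.update((front, back))
            if back not in behind[front]:
                behind[front].add(back)
                indegree[back] += 1
        # Objects never seen occluded start in the front layer; a breadth-first
        # pass pushes each occludee at least one layer behind its occluders.
        layer = {o: 0 for o in objects if indegree[o] == 0}
        queue = deque(layer)
        while queue:
            o = queue.popleft()
            for b in behind[o]:
                layer[b] = max(layer.get(b, 0), layer[o] + 1)
                indegree[b] -= 1
                if indegree[b] == 0:
                    queue.append(b)
        return layer   # objects with contradictory (cyclic) evidence stay unassigned

For example, layer_order([("person", "sofa"), ("sofa", "wall")]) yields {"person": 0, "sofa": 1, "wall": 2}, i.e., three layers ordered back from the camera.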


Paper: AI Magazine (1999) “Computers Seeing People”

July 14th, 1999 Irfan Essa Posted in Aware Home, Face and Gesture, Intelligent Environments, Papers

Irfan A. Essa (1999), “Computers Seeing People,” AI Magazine, 20(2): 69-82, Summer 1999.

Abstract

AI researchers are interested in building intelligent machines that can interact with people as people interact with each other. Science fiction writers have given us these goals in the form of HAL in 2001: A Space Odyssey and Commander Data in Star Trek: The Next Generation. However, at present, our computers are deaf, dumb, and blind, almost unaware of the environment they are in and of the user who interacts with them. In this article, I present the current state of the art in machines that can see people, recognize them, determine their gaze, understand their facial expressions and hand gestures, and interpret their activities. I believe that building machines with such abilities for perceiving people will take us one step closer to building HAL and Commander Data.


Funding: NSF (1998) Experimental Software Systems “Automated Understanding of Captured Experience”

September 1st, 1998 Irfan Essa Posted in Activity Recognition, Audio Analysis, Aware Home, Funding, Gregory Abowd, Intelligent Environments

Award# 9806822 – Experimental Software Systems: Automated Understanding of Captured Experience

ABSTRACT

The objective of this research is to reduce substantially the human input necessary for creating and accessing large collections of multimedia, particularly multimedia created by capturing what is happening in an environment. The existing software system which is being used as the starting point for this investigation is Classroom 2000, a system designed to capture what happens in classrooms, meetings, and offices. Classroom 2000 integrates and synchronizes multiple streams of captured text, images, handwritten annotations, audio, and video. In a sense, it automates note-taking for a lecture or meeting. The research challenge is to make sense of this flood of captured data. The project explores how the output of Classroom 2000 can be automatically structured, segmented, indexed, and linked. Machine learning and statistical approaches to language are used to attempt to understand the captured data. Techniques from computational perception are used to try to find structure in the captured data. An important component of this research is the experimental analysis of the software system being built. The expectation is that this research will have a dramatic impact on how humans work and learn, as technology aids humans by capturing and making accessible what happens in an environment.
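
For illustration only, the kind of post-capture structuring described above can be sketched as follows; Event, merge_streams, and segment_by_gaps are hypothetical names chosen here, not part of Classroom 2000, and the two-minute gap heuristic is an arbitrary placeholder for the learned segmentation the project pursues.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Event:
        stream: str    # e.g. "slides", "ink", "audio", "video"
        t: float       # capture timestamp, seconds from the start of the session
        payload: str   # reference to the captured item

    def merge_streams(streams: List[List[Event]]) -> List[Event]:
        # Interleave independently captured streams into one timeline by timestamp.
        merged = [e for s in streams for e in s]
        merged.sort(key=lambda e: e.t)
        return merged

    def segment_by_gaps(timeline: List[Event], gap: float = 120.0) -> List[List[Event]]:
        # Split the merged timeline wherever activity pauses for longer than
        # `gap` seconds, giving coarse segments that can then be indexed and linked.
        segments, current = [], []
        for e in timeline:
            if current and e.t - current[-1].t > gap:
                segments.append(current)
                current = []
            current.append(e)
        if current:
            segments.append(current)
        return segments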
