Funding: NSF (1998) Experimental Software Systems “Automated Understanding of Captured Experience”

September 1st, 1998 Irfan Essa Posted in Activity Recognition, Audio Analysis, Aware Home, Funding, Gregory Abowd, Intelligent Environments

Award#9806822 – Experimental Software Systems: Automated Understanding of Captured Experience
ABSTRACT

The objective of this research is to substantially reduce the human input necessary for creating and accessing large collections of multimedia, particularly multimedia created by capturing what is happening in an environment. The existing software system being used as the starting point for this investigation is Classroom 2000, a system designed to capture what happens in classrooms, meetings, and offices. Classroom 2000 integrates and synchronizes multiple streams of captured text, images, handwritten annotations, audio, and video; in a sense, it automates note-taking for a lecture or meeting. The research challenge is to make sense of this flood of captured data. The project explores how the output of Classroom 2000 can be automatically structured, segmented, indexed, and linked. Machine learning and statistical approaches to language are used to attempt to understand the captured data, and techniques from computational perception are used to try to find structure in it. An important component of this research is the experimental analysis of the software system being built. The expectation is that this research will have a dramatic impact on how humans work and learn, as technology aids humans by capturing and making accessible what happens in an environment.
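The abstract does not name specific algorithms, so as one hedged illustration of the statistical-language segmentation it alludes to, here is a minimal TextTiling-style sketch: a lecture transcript is split wherever the lexical similarity between adjacent windows of sentences drops, suggesting a topic shift. The function name, window size, and threshold below are illustrative assumptions, not details of the Classroom 2000 system.

```python
# Illustrative sketch only: TextTiling-style topic segmentation of a
# captured lecture transcript. Not the Classroom 2000 implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def segment_transcript(sentences, window=5, threshold=0.12):
    """Return sentence indices where a new topical segment likely begins."""
    # Overlapping blocks of `window` consecutive sentences, stride 1,
    # vectorized together so they share one TF-IDF vocabulary.
    blocks = [" ".join(sentences[i:i + window])
              for i in range(0, len(sentences) - window + 1)]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(blocks)
    boundaries = []
    for i in range(len(blocks) - window):
        # Compare the window ending at sentence i+window with the
        # window starting there.
        sim = cosine_similarity(tfidf[i], tfidf[i + window])[0, 0]
        if sim < threshold:  # low lexical overlap suggests a topic shift
            boundaries.append(i + window)
    return boundaries
```

In practice one would smooth the similarity curve and keep only local minima rather than thresholding raw values, but the sketch shows the core idea of segmenting captured streams by lexical cohesion.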


Paper: ICPR (1996): “Motion regularization for model-based head tracking”

August 25th, 1996 Irfan Essa Posted in Face and Gesture, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Sumit Basu

S. Basu, I. Essa, and A. Pentland (1996), “Motion regularization for model-based head tracking.” In Proceedings of the 13th International Conference on Pattern Recognition (ICPR 1996), 25-29 August 1996, Volume 3, pp. 611-616. [DOI | PDF]

Abstract

This paper describes a method for the robust tracking of rigid head motion from video. The method uses a 3D ellipsoidal model of the head and interprets the optical flow in terms of the possible rigid motions of the model. It is robust to large angular and translational motions of the head and is not subject to the singularities of a 2D model. The method has been successfully applied to heads with a variety of shapes, hairstyles, etc., and has the advantage of accurately capturing the 3D motion parameters of the head. This accuracy is shown through comparison with a ground-truth synthetic sequence (a rendered 3D animation of a model head). In addition, the ellipsoidal model is robust to small variations in the initial fit, enabling the automation of model initialization. Lastly, because it considers the entire 3D aspect of the head, the tracking is very stable over a large number of frames. This robustness extends even to sequences with very low frame rates and noisy camera images.
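The paper's equations are not reproduced here; as a hedged sketch of the core idea, under simplifying assumptions I am adding (orthographic projection, precomputed optical flow, ellipsoid model points already registered to the image), the rigid motion that best explains the observed flow at points on the 3D model can be recovered by least squares:

```python
# Minimal sketch, not the paper's exact formulation: solve for the rigid
# head motion (angular velocity w, image-plane translation t) that best
# explains the optical flow measured at ellipsoid surface points.
import numpy as np

def rigid_motion_from_flow(pts3d, flow):
    """pts3d: (N, 3) model points (X, Y, Z) on the ellipsoid surface.
    flow:  (N, 2) optical flow (u, v) measured at their projections.
    Returns (w, t): angular velocity (3,) and translation (2,).
    """
    N = pts3d.shape[0]
    X, Y, Z = pts3d[:, 0], pts3d[:, 1], pts3d[:, 2]
    # Instantaneous velocity of a rotating point is w x p + t; under
    # orthographic projection only its x- and y-components are observed,
    # so t_z is unobservable and dropped from the parameterization.
    A = np.zeros((2 * N, 5))          # unknowns: [wx, wy, wz, tx, ty]
    A[0::2, 1] = Z                    # u = wy*Z - wz*Y + tx
    A[0::2, 2] = -Y
    A[0::2, 3] = 1.0
    A[1::2, 0] = -Z                   # v = -wx*Z + wz*X + ty
    A[1::2, 2] = X
    A[1::2, 4] = 1.0
    b = flow.reshape(-1)              # stack as u1, v1, u2, v2, ...
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params[:3], params[3:]
```

Because every surface point constrains the same six (here five) motion parameters, the 3D model regularizes the noisy per-pixel flow, which is one way to read the stability the abstract reports.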


Scientific American Article (1996): “Smart Rooms” by Alex Pentland

April 9th, 1996 Irfan Essa Posted in Affective Computing, Face and Gesture, In The News, Intelligent Environments, Research

Alex Pentland (1996), “Smart Rooms,” Scientific American, April 1996

Quote from the Article: “Facial expression is almost as important as identity. A teaching program, for example, should know if its students look bored. So once our smart room has found and identified someone’s face, it analyzes the expression. Yet another computer compares the facial motion the camera records with maps depicting the facial motions involved in making various expressions. Each expression, in fact, involves a unique collection of muscle movements. When you smile, you curl the corners of your mouth and lift certain parts of your forehead; when you fake a smile, though, you move only your mouth. In experiments conducted by scientist Irfan A. Essa and me, our system has correctly judged expressions (among a small group of subjects) 98 percent of the time.”
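The article describes comparing observed facial motion against stored “maps” of the motion for each expression. One simple reading of that is nearest-template matching on motion-energy images; the sketch below is an illustrative assumption of mine, not the actual Essa-Pentland system, and the template data and distance metric are hypothetical.

```python
# Illustrative sketch only: classify an expression by finding the stored
# motion map closest to the observed facial motion, as the article's
# "comparing with maps" description suggests.
import numpy as np

def classify_expression(motion_energy, templates):
    """motion_energy: (H, W) accumulated per-pixel facial motion magnitude.
    templates: dict mapping expression name -> (H, W) reference motion map.
    Returns the expression whose map is closest in Euclidean distance.
    """
    def normalize(m):
        return m / (np.linalg.norm(m) + 1e-9)  # compare shape, not strength
    obs = normalize(motion_energy)
    return min(templates,
               key=lambda name: np.linalg.norm(obs - normalize(templates[name])))
```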
