Event: International Conference on Face and Gesture Recognition (1996).

October 13th, 1996 | Irfan Essa | Posted in Events, Face and Gesture, Sandy Pentland

International Conference on Face and Gesture Recognition (FG) 1996, October 13-16, 1996, Killington, Vermont



Paper: ICPR (1996): “Motion regularization for model-based head tracking”

August 25th, 1996 | Irfan Essa | Posted in Face and Gesture, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Sumit Basu

S. Basu, I. Essa, and A. Pentland (1996), “Motion regularization for model-based head tracking.” In Proceedings of the 13th International Conference on Pattern Recognition (ICPR 1996), August 25-29, 1996, Volume 3, pages 611-616. [ DOI | PDF]

Abstract

This paper describes a method for the robust tracking of rigid head motion from video. This method uses a 3D ellipsoidal model of the head and interprets the optical flow in terms of the possible rigid motions of the model. This method is robust to large angular and translational motions of the head and is not subject to the singularities of a 2D model. The method has been successfully applied to heads with a variety of shapes, hair styles, etc. This method also has the advantage of accurately capturing the 3D motion parameters of the head. This accuracy is shown through comparison with a ground-truth synthetic sequence (a rendered 3D animation of a model head). In addition, the ellipsoidal model is robust to small variations in the initial fit, enabling the automation of the model initialization. Lastly, due to its consideration of the entire 3D aspect of the head, the tracking is very stable over a large number of frames. This robustness extends even to sequences with very low frame rates and noisy camera images.
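
To make the flow-to-rigid-motion step concrete, here is a minimal sketch, not the paper's implementation: assuming orthographic projection, the flow at each visible model point X constrains the head's instantaneous rotation ω and translation t through v = ω × X + t, and stacking these constraints gives a linear least-squares problem. The paper's motion regularization and ellipsoidal visibility handling are omitted.

```python
import numpy as np

def estimate_rigid_motion(points3d, flow):
    """Least-squares rigid motion from optical flow on a 3D head model.

    points3d: (N, 3) array of model points (X, Y, Z) on the ellipsoid.
    flow:     (N, 2) array of observed flow (u, v) at their projections.
    Returns (omega, trans): instantaneous rotation and translation.
    """
    n = points3d.shape[0]
    A = np.zeros((2 * n, 6))
    b = np.zeros(2 * n)
    for i, (x, y, z) in enumerate(points3d):
        # Orthographic flow of a rigidly moving point: v = omega x X + t,
        # so u = wy*z - wz*y + tx and v = wz*x - wx*z + ty.
        A[2 * i]     = [0.0,  z, -y, 1.0, 0.0, 0.0]
        A[2 * i + 1] = [-z, 0.0,  x, 0.0, 1.0, 0.0]
        b[2 * i], b[2 * i + 1] = flow[i]
    # tz is unobservable under orthography; lstsq returns the
    # minimum-norm solution, which leaves it at zero.
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3], sol[3:]
```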


Scientific American Article (1996): “Smart Rooms” by Alex Pentland

April 9th, 1996 | Irfan Essa | Posted in Affective Computing, Face and Gesture, In The News, Intelligent Environments, Research

Alex Pentland (1996), “Smart Rooms,” Scientific American, April 1996.

Quote from the Article: “Facial expression is almost as important as identity. A teaching program, for example, should know if its students look bored. So once our smart room has found and identified someone’s face, it analyzes the expression. Yet another computer compares the facial motion the camera records with maps depicting the facial motions involved in making various expressions. Each expression, in fact, involves a unique collection of muscle movements. When you smile, you curl the corners of your mouth and lift certain parts of your forehead; when you fake a smile, though, you move only your mouth. In experiments conducted by scientist Irfan A. Essa and me, our system has correctly judged expressions-among a small group of subjects-98 percent of the time.”


Discover Magazine Article (1995): “A Face of One’s Own | Memory, Emotions, & Decisions”

December 1st, 1995 | Irfan Essa | Posted in Affective Computing, Face and Gesture, In The News, Research

Evan I. Schwartz (1995), “A Face of One’s Own | Memory, Emotions, & Decisions,” DISCOVER Magazine, December 1, 1995.

Quote from the Article: “Chief among the members of his staff working on the problem is computer scientist Irfan Essa. To get computers to read facial expressions such as happiness or anger, Essa has designed three-dimensional animated models of common facial movements. His animated faces move according to biomedical data gathered from facial surgeons and anatomists. Essa uses this information to simulate exactly what happens when a person’s static, expressionless face, whose muscles are completely relaxed and free of stress, breaks out into a laugh or a frown or some other expression of emotion.”


Paper: IEEE ICCV (1995) “Facial expression recognition using a dynamic model and motion energy”

June 20th, 1995 | Irfan Essa | Posted in Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Sandy Pentland

Essa, I.A., and Pentland, A.P. (1995), “Facial expression recognition using a dynamic model and motion energy”, In Proceedings of the Fifth International Conference on Computer Vision (ICCV 1995), June 20-23, 1995, Cambridge, MA, pages 360-367. ISBN: 0-8186-7042-8, INSPEC Accession Number: 5028034.
Digital Object Identifier: [DOI:10.1109/ICCV.1995.466916][IEEEXplore#]

Abstract

Previous efforts at facial expression recognition have been based on the Facial Action Coding System (FACS), a representation developed in order to allow human psychologists to code expression from static facial “mugshots.” We develop new, more accurate representations for facial expression by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression using a detailed physical model of the skin and muscles. This produces a muscle-based representation of facial motion, which is then used to recognize facial expressions in two different ways. The first method uses the physics-based model directly, by recognizing expressions through comparison of estimated muscle activations. The second method uses the physics-based model to generate spatio-temporal motion energy templates of the whole face for each different expression. These simple, biologically plausible motion energy “templates” are then used for recognition. Both methods show substantially greater accuracy at expression recognition than has been previously achieved.
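
As a rough illustration of the second method, the sketch below computes a spatio-temporal motion energy image by simple frame differencing and classifies a clip by its nearest template. This is a stand-in, not the paper's procedure: there, the templates are generated from the physics-based skin-and-muscle model rather than from raw frame differences.

```python
import numpy as np

def motion_energy(frames):
    """Crude motion energy image: summed absolute temporal differences
    over a clip. frames: (T, H, W) grayscale array."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.sum(axis=0)

def classify_expression(clip, templates):
    """Nearest-template classification by Euclidean distance.
    templates: dict mapping expression name -> (H, W) energy template."""
    energy = motion_energy(clip)
    return min(templates,
               key=lambda name: np.linalg.norm(energy - templates[name]))
```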


Thesis: Irfan Essa’s PhD Thesis (1994): “Analysis, interpretation and synthesis of facial expressions”

August 30th, 1994 | Irfan Essa | Posted in Face and Gesture, Thesis

Irfan Essa (1994), “Analysis, interpretation and synthesis of facial expressions”, PhD Thesis, MIT, Cambridge, MA, USA. (Advisor: Alex (Sandy) Pentland) [PDF]

Abstract

This thesis describes a computer vision system for observing the “action units” of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups responsible for the observed facial motions. These muscle action patterns are then used for analysis, interpretation, recognition, and synthesis of facial expressions. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name “FACS+”) are both physics and geometry-based, and extend the well known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
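
The optimal-estimation coupling can be sketched as a Kalman-style measurement update in which the state is the vector of muscle activations and the face model supplies the map from activations to predicted image flow. In the sketch below, the observation matrix H and noise covariance R are assumed placeholders standing in for the thesis's detailed geometric and physical models:

```python
import numpy as np

def update_muscle_activations(m, P, flow, H, R):
    """One Kalman measurement update of facial muscle activations.

    m:    (K,) current muscle activation estimate.
    P:    (K, K) state covariance.
    flow: (M,) stacked observed optical flow measurements.
    H:    (M, K) assumed linearized map from activations to predicted
          flow (stands in for the geometric + physical face model).
    R:    (M, M) measurement noise covariance.
    """
    innovation = flow - H @ m          # observed minus predicted flow
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    m_new = m + K @ innovation         # corrected activations
    P_new = (np.eye(len(m)) - K @ H) @ P
    return m_new, P_new
```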



Paper: IEEE CVPR (1994) “A vision system for observing and extracting facial action parameters”

June 21st, 1994 | Irfan Essa | Posted in Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Sandy Pentland

Essa, I.A., and Pentland, A. (1994), “A vision system for observing and extracting facial action parameters”, In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR ’94), June 21-23, 1994, Seattle, WA, pages 76-83. ISBN: 0-8186-5825-8. [Digital Object Identifier: 10.1109/CVPR.1994.323813][IEEEXplore#]

Abstract

We describe a computer vision system for observing the “action units” of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups responsible for the observed facial motions. These muscle action patterns may then be used for analysis, interpretation, and synthesis. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name “FACS+”) are both physics- and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
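
One simple way the extracted activation patterns can then be used for recognition (the “comparison of estimated muscle activations” of the ICCV 1995 paper above) is nearest-centroid classification over activation vectors; the class means and the Euclidean metric here are illustrative assumptions, not the papers' actual distance measure:

```python
import numpy as np

def nearest_expression(activation, class_means):
    """Classify a muscle activation vector by its nearest class mean.

    activation:  (K,) estimated activations for one observed expression.
    class_means: dict mapping expression name -> (K,) mean activations,
                 e.g. learned from a labeled video database.
    """
    return min(class_means,
               key=lambda name: np.linalg.norm(activation - class_means[name]))
```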
