
NAE elects Prof. Alex (Sandy) Pentland as a Member

March 1st, 2014 Irfan Essa Posted in In The News, Sandy Pentland

Congratulations to my Ph.D. advisor, Sandy Pentland, for being elected to the National Academy of Engineering.

“For contributions to computer vision and technologies for measuring human social behavior.”

via NAE Website – Prof. Alex Pentland.


Paper: IEEE PAMI (1997) “Coding, analysis, interpretation, and recognition of facial expressions”

July 14th, 1997 Irfan Essa Posted in Affective Computing, Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Research, Sandy Pentland

Coding, analysis, interpretation, and recognition of facial expressions

Essa, I.A.; Pentland, A.P., "Coding, analysis, interpretation, and recognition of facial expressions," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 757–763, July 1997. ISSN: 0162-8828. INSPEC Accession Number: 5661539.
Digital Object Identifier: 10.1109/34.598232

Abstract

We describe a computer vision system for observing facial motion by using an optimal estimation optical flow method coupled with geometric, physical, and motion-based dynamic models describing the facial structure. Our method produces a reliable parametric representation of the face’s independent muscle action groups, as well as an accurate estimate of facial motion. Previous efforts at analysis of facial expression have been based on the facial action coding system (FACS), a representation developed in order to allow human psychologists to code expression from static pictures. To avoid use of this heuristic coding scheme, we have used our computer vision system to probabilistically characterize facial motion and muscle activation in an experimental population, thus deriving a new, more accurate representation of human facial expressions that we call FACS+. Finally, we show how this method can be used for coding, analysis, interpretation, and recognition of facial expressions.
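The core estimation step described above, recovering muscle activations from observed motion, can be sketched as a linear least-squares problem. Everything below is an illustrative assumption, not the paper's actual physics-based formulation: we pretend the observed flow is a linear combination of per-muscle basis flow fields and solve for the coefficients.

```python
import numpy as np

# Hypothetical sketch: recover muscle-group activations from an observed
# optical flow field, assuming a linear model flow ≈ B @ activations, where
# each column of B is the flow produced by unit activation of one muscle
# group. (The paper uses a far richer physics-based dynamic model.)

def estimate_activations(B, observed_flow):
    """Least-squares estimate of muscle activations from a flattened flow field."""
    activations, *_ = np.linalg.lstsq(B, observed_flow, rcond=None)
    return activations

# Toy example: 3 muscle groups, a 100-dimensional flattened flow vector.
rng = np.random.default_rng(0)
B = rng.normal(size=(100, 3))          # synthetic per-muscle flow basis
true_a = np.array([0.8, 0.0, 0.3])     # ground-truth activations
flow = B @ true_a                      # noiseless observed flow
est = estimate_activations(B, flow)
```

With noiseless synthetic flow the least-squares solve recovers the activations exactly; real flow estimates would add noise and require the regularization the paper's optimal estimation framework provides.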


Paper: IEEE PAMI (1996) “Task-specific gesture analysis in real-time using interpolated views”

December 14th, 1996 Irfan Essa Posted in Activity Recognition, Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Research, Sandy Pentland

Darrell, T.J.; Essa, I.A.; Pentland, A.P., "Task-specific gesture analysis in real-time using interpolated views," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, no. 12, pp. 1236–1242, Dec. 1996.
URL: [ieeexplore.ieee.org] [DOI]

Abstract

Hand and face gestures are modeled using an appearance-based approach in which patterns are represented as a vector of similarity scores to a set of view models defined in space and time. These view models are learned from examples using unsupervised clustering techniques. A supervised learning paradigm is then used to interpolate view scores into a task-dependent coordinate system appropriate for recognition and control tasks. We apply this analysis to the problem of context-specific gesture interpolation and recognition, and demonstrate real-time systems which perform these tasks.
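The appearance-based representation above can be sketched in a few lines: score an input image against each learned view model, then map the score vector into a task coordinate. The normalized-correlation similarity and the linear interpolation weights here are illustrative assumptions standing in for the paper's learned view models and supervised interpolation.

```python
import numpy as np

# Sketch: represent a pattern as a vector of similarity scores to view
# models, then regress the scores into a task-dependent value (e.g., a
# pointing angle). Similarity measure and weights are assumptions.

def similarity_scores(image, views):
    """Normalized correlation of an image against each view model."""
    img = (image - image.mean()) / (image.std() + 1e-8)
    scores = []
    for v in views:
        vv = (v - v.mean()) / (v.std() + 1e-8)
        scores.append(float((img * vv).mean()))
    return np.array(scores)

# Toy setup: two 8x8 synthetic view models and a linear score-to-task map.
rng = np.random.default_rng(1)
views = [rng.normal(size=(8, 8)) for _ in range(2)]
W = np.array([10.0, -5.0])              # hypothetical interpolation weights

s = similarity_scores(views[0], views)  # input identical to view model 0
task_value = float(W @ s)
```

An image identical to a view model scores near 1.0 against it; the supervised stage in the paper learns the score-to-task mapping from examples rather than fixing it by hand.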


Event: International Conference on Face and Gesture Recognition (1996).

October 13th, 1996 Irfan Essa Posted in Events, Face and Gesture, Sandy Pentland

International Conference on Face and Gesture Recognition (FG) 1996, October 13-16, 1996, Killington, Vermont



Paper: IEEE ICCV (1995) “Facial expression recognition using a dynamic model and motion energy”

June 20th, 1995 Irfan Essa Posted in Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Sandy Pentland

Essa, I.A.; Pentland, A.P. (1995), "Facial expression recognition using a dynamic model and motion energy," in Proceedings of the Fifth International Conference on Computer Vision (ICCV 1995), June 20–23, 1995, Cambridge, MA, pp. 360–367. ISBN: 0-8186-7042-8. INSPEC Accession Number: 5028034.
Digital Object Identifier: 10.1109/ICCV.1995.466916

Abstract

Previous efforts at facial expression recognition have been based on the Facial Action Coding System (FACS), a representation developed in order to allow human psychologists to code expression from static facial “mugshots.” We develop new, more accurate representations for facial expression by building a video database of facial expressions and then probabilistically characterizing the facial muscle activation associated with each expression using a detailed physical model of the skin and muscles. This produces a muscle-based representation of facial motion, which is then used to recognize facial expressions in two different ways. The first method uses the physics-based model directly, by recognizing expressions through comparison of estimated muscle activations. The second method uses the physics-based model to generate spatio-temporal motion energy templates of the whole face for each different expression. These simple, biologically plausible motion energy “templates” are then used for recognition. Both methods show substantially greater accuracy at expression recognition than has been previously achieved.
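The second method's template idea can be sketched very simply: accumulate per-pixel motion energy over a sequence into a template per expression, then classify a new sequence by its nearest template. The squared-frame-difference energy measure and nearest-neighbor classifier below are simplifying assumptions; the paper derives its templates from the physics-based model.

```python
import numpy as np

# Sketch of motion-energy templates: sum squared temporal differences over a
# video sequence into a spatial energy map, then classify a new sequence by
# Euclidean distance to stored per-expression templates. (Assumed simplified
# energy measure, not the paper's model-derived templates.)

def motion_energy(frames):
    """Per-pixel sum of squared temporal differences over a sequence."""
    frames = np.asarray(frames, dtype=float)
    return ((frames[1:] - frames[:-1]) ** 2).sum(axis=0)

def classify(sequence, templates):
    """Return the label of the nearest motion-energy template."""
    e = motion_energy(sequence)
    return min(templates, key=lambda k: np.linalg.norm(e - templates[k]))

# Toy example: two 4x4 "expressions" with motion in different corners.
seq_a = np.zeros((3, 4, 4)); seq_a[1, 0, 0] = 1.0   # motion at top-left
seq_b = np.zeros((3, 4, 4)); seq_b[1, 3, 3] = 1.0   # motion at bottom-right
templates = {"smile": motion_energy(seq_a), "surprise": motion_energy(seq_b)}
label = classify(seq_a, templates)
```

Because the energy map discards the direction of motion and keeps only its spatial distribution, templates of this kind are cheap to compare yet, as the abstract notes, biologically plausible.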


Paper: IEEE CVPR (1994) “A vision system for observing and extracting facial action parameters”

June 21st, 1994 Irfan Essa Posted in Face and Gesture, PAMI/ICCV/CVPR/ECCV, Papers, Sandy Pentland

Essa, I.A.; Pentland, A.P. (1994), "A vision system for observing and extracting facial action parameters," in Proceedings of IEEE Computer Vision and Pattern Recognition (CVPR '94), June 21–23, 1994, Seattle, WA, pp. 76–83. ISBN: 0-8186-5825-8. Digital Object Identifier: 10.1109/CVPR.1994.323813

Abstract

We describe a computer vision system for observing the “action units” of a face using video sequences as input. The visual observation (sensing) is achieved by using an optimal estimation optical flow method coupled with a geometric and a physical (muscle) model describing the facial structure. This modeling results in a time-varying spatial patterning of facial shape and a parametric representation of the independent muscle action groups responsible for the observed facial motions. These muscle action patterns may then be used for analysis, interpretation, and synthesis. Thus, by interpreting facial motions within a physics-based optimal estimation framework, a new control model of facial movement is developed. The newly extracted action units (which we name “FACS+”) are both physics- and geometry-based, and extend the well-known FACS parameters for facial expressions by adding temporal information and non-local spatial patterning of facial motion.
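One ingredient of the "optimal estimation" framework mentioned above is temporal filtering of noisy per-frame measurements. As a minimal, assumption-laden sketch (a random-walk state model and hand-picked noise levels, not the paper's full dynamic facial model), a scalar Kalman filter smoothing one action parameter looks like this:

```python
import numpy as np

# Minimal sketch of optimal estimation over time: a scalar Kalman filter
# smoothing noisy per-frame measurements of one muscle action parameter.
# The random-walk state model and the noise settings q, r are illustrative
# assumptions only.

def kalman_smooth(measurements, q=1e-3, r=0.1):
    """Filter a 1-D measurement sequence with a random-walk state model."""
    x, p = float(measurements[0]), 1.0   # initial estimate and variance
    out = [x]
    for z in measurements[1:]:
        p += q                           # predict: add process noise
        k = p / (p + r)                  # Kalman gain
        x += k * (z - x)                 # update with measurement residual
        p *= (1.0 - k)
        out.append(x)
    return np.array(out)

# Noisy observations of a constant activation level of 0.5.
rng = np.random.default_rng(2)
z = 0.5 + 0.05 * rng.normal(size=200)
x = kalman_smooth(z)
```

The filtered trajectory settles near the true activation level with far less frame-to-frame jitter than the raw measurements, which is what makes the extracted action parameters usable as temporal signatures.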


Paper I3DG (1990) “The ThingWorld modeling system: virtual sculpting by modal forces”

March 13th, 1990 Irfan Essa Posted in Bradley Horowitz, Sandy Pentland, Stan Sclaroff

The ThingWorld modeling system: virtual sculpting by modal forces

  • A. Pentland, I. Essa, M. Friedmann, B. Horowitz, and S. E. Sclaroff (1990), “The Thingworld Modeling System: Virtual Sculpting by Modal Forces,” ACM SIGGRAPH Proceedings of Symposium on Interactive 3D Graphics (I3DG), vol. 24, iss. 2, pp. 143-144, 1990. [PDF] [DOI] [BLOG] [BIBTEX]
    @article{1990-Pentland-TMSVSMF,
      Author = {A. Pentland and I. Essa and M. Friedmann and B. Horowitz and S. E. Sclaroff},
      Blog = {http://prof.irfanessa.com/1990/03/13/thingworld-i3gd1990/},
      Date-Modified = {2011-12-13 23:07:06 +0000},
      Doi = {10.1145/91394.91434},
      Journal = {ACM SIGGRAPH Proceedings of Symposium on Interactive 3D Graphics (I3DG)},
      Month = {March},
      Number = {2},
      Pages = {143--144},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/1990-Pentland-TMSVSMF.pdf},
      Title = {The Thingworld Modeling System: Virtual Sculpting by Modal Forces},
      Volume = {24},
      Year = {1990},
      Bdsk-Url-1 = {http://dx.doi.org/10.1145/91394.91434}}

Abstract

We describe a real-time solid modeling system that is based on the physical analogy of forming clay by applying forces. The system is implemented by simulating real materials as they react to user-supplied forces. Unlike other physically-based modeling approaches, the Thingworld system allows the user to restrict forming action to simple global deformations during the initial “roughing in” phase of modeling, and then later concern themselves with detailing. The Thingworld system also allows users to automatically model existing objects by using measurements taken from the object’s surface. These measurements are used to generate artificial forces that mold the computer model much as a human would mold a clay model. Timed examples for constructing solid models are shown.
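The "modal forces" idea in the title can be sketched as static modal superposition: the displacement response to an applied force is a sum over mode shapes, each weighted by the force's projection onto that mode divided by the mode's stiffness. The mode shapes and stiffnesses below are synthetic placeholders; ThingWorld's actual material simulation and real-time machinery are far richer.

```python
import numpy as np

# Sketch of static modal superposition for "virtual sculpting":
# u = sum_i (phi_i . f / k_i) * phi_i, for orthonormal mode shapes phi_i
# with modal stiffnesses k_i. Low-frequency (global) modes are softer, so a
# force mostly produces the simple global deformations used while
# "roughing in"; high-order detail modes respond less.

def modal_displacement(modes, stiffness, force):
    """Displacement of all DOFs under an applied force, via modal superposition."""
    u = np.zeros_like(force, dtype=float)
    for phi, k in zip(modes, stiffness):
        u += (phi @ force / k) * phi
    return u

# Toy object with 4 DOFs: two orthonormal modes, the global mode is softer.
modes = [np.array([0.5, 0.5, 0.5, 0.5]),     # global, translation-like mode
         np.array([0.5, 0.5, -0.5, -0.5])]   # bending-like detail mode
stiffness = [1.0, 4.0]                        # stiffer high-order mode
force = np.array([1.0, 0.0, 0.0, 0.0])        # "poke" one vertex
u = modal_displacement(modes, stiffness, force)
```

Poking a single vertex moves the whole toy object through the soft global mode while the stiffer bending mode contributes only a small asymmetry, mirroring how restricting early edits to global modes keeps rough shaping simple.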

via The ThingWorld modeling system: virtual sculpting by modal forces.
