
Paper AAAI (2002): “Recognizing Multitasked Activities from Video using Stochastic Context-Free Grammar”

September 29th, 2002 Irfan Essa Posted in AAAI/IJCAI/UAI, Activity Recognition, Darnell Moore, Intelligent Environments, Papers

D. Moore and I. Essa (2002). “Recognizing multitasked activities from video using stochastic context-free grammar”, in Proceedings of AAAI 2002. [PDF | Project Site]

Abstract

In this paper, we present techniques for recognizing complex, multitasked activities from video. Visual information like image features and motion appearances, combined with domain-specific information, like object context, is used initially to label events. Each action event is represented with a unique symbol, allowing for a sequence of interactions to be described as an ordered symbolic string. Then, a model of stochastic context-free grammar (SCFG), which is developed using underlying rules of an activity, is used to provide the structure for recognizing semantically meaningful behavior over extended periods. Symbolic strings are parsed using the Earley-Stolcke algorithm to determine the most likely semantic derivation for recognition. Parsing substrings allows us to recognize patterns that describe high-level, complex events taking place over segments of the video sequence. We introduce new parsing strategies to enable error detection and recovery in stochastic context-free grammar and methods of quantifying group and individual behavior in activities with separable roles. We show through experiments, with a popular card game, the recognition of high-level narratives of multi-player games and the identification of player strategies and behavior using computer vision.

Recognizing Black Jack
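To make the SCFG idea concrete, here is a minimal sketch of parsing an event-symbol string with a stochastic grammar. The toy Blackjack grammar, event symbols, and probabilities below are hypothetical illustrations, not the grammar from the paper, and NLTK's ViterbiParser is used only as a stand-in for the Earley-Stolcke parser the paper actually employs.

```python
# Sketch: SCFG-based activity recognition on an ordered symbol string.
# NOTE: the grammar, symbols, and probabilities are hypothetical; NLTK's
# ViterbiParser stands in for the Earley-Stolcke parser used in the paper.
import nltk
from nltk.parse import ViterbiParser

# Toy stochastic grammar for one simplified Blackjack round. Terminals are
# event symbols produced by the vision front end.
blackjack_scfg = nltk.PCFG.fromstring("""
ROUND   -> BETTING DEALING PLAY SETTLE                              [1.0]
BETTING -> 'bet'                                                    [0.7]
BETTING -> 'bet' BETTING                                            [0.3]
DEALING -> 'deal_player' 'deal_dealer' 'deal_player' 'deal_dealer'  [1.0]
PLAY    -> 'hit' PLAY                                               [0.4]
PLAY    -> 'stand'                                                  [0.6]
SETTLE  -> 'settle'                                                 [1.0]
""")

parser = ViterbiParser(blackjack_scfg)

# Ordered symbol string obtained by labelling detected low-level events.
events = ['bet', 'deal_player', 'deal_dealer', 'deal_player',
          'deal_dealer', 'hit', 'stand', 'settle']

# The most likely derivation explains the observed events as one round.
for tree in parser.parse(events):
    print(tree)          # semantic derivation of the activity
    print(tree.prob())   # likelihood of this interpretation
```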


Paper ICCV (1999): “Exploiting Human Actions and Object Context for Recognition Tasks”

September 20th, 1999 Irfan Essa Posted in Activity Recognition, Darnell Moore, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Papers

D. J. Moore, I. Essa, and M. Hayes (1999) “Exploiting Human Actions and Object Context for Recognition Tasks.” In Proceedings of Seventh International Conference on Computer Vision (ICCV’99), Volume 1, p. 80, Sept 20, 1999. ISBN: 0-7695-0164-8. [ DOI | PDF | Project Site]

Abstract

Overhead Image for Object/Action Recognition in the Office

Our goal is to exploit human motion and object context to perform action recognition and object classification. Towards this end, we introduce a framework for recognizing actions and objects by measuring image-, object- and action-based information from video. Hidden Markov models are combined with object context to classify hand actions, which are aggregated by a Bayesian classifier to summarize activities. We also use Bayesian methods to differentiate the class of unknown objects by evaluating detected actions along with low-level, extracted object features. Our approach is appropriate for locating and classifying objects under a variety of conditions including full occlusion. We show experiments where both familiar and previously unseen objects are recognized using action and context information.
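The core combination the abstract describes, per-action likelihoods fused with object context through Bayes' rule, can be illustrated with a short sketch. The action classes, object-context priors, discrete observation symbols, and hand-set HMM parameters below are hypothetical placeholders for illustration only; this is not the paper's implementation.

```python
# Sketch: score a hand-motion observation sequence under one discrete HMM
# per action class, then combine the likelihoods with an object-context
# prior via Bayes' rule.
# NOTE: all action names, priors, and HMM parameters are hypothetical.
import numpy as np

def forward_log_likelihood(obs, pi, A, B):
    """log P(obs | HMM) via the scaled forward algorithm (discrete symbols)."""
    alpha = pi * B[:, obs[0]]
    log_lik = np.log(alpha.sum())
    alpha /= alpha.sum()
    for t in range(1, len(obs)):
        alpha = (alpha @ A) * B[:, obs[t]]   # predict, then weight by emission
        log_lik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return log_lik

# Two hypothetical hand-action classes, each a 2-state HMM over 3 coarse
# hand-motion symbols (initial distribution, transitions, emissions).
actions = {
    "pour": (np.array([0.9, 0.1]),
             np.array([[0.8, 0.2], [0.3, 0.7]]),
             np.array([[0.7, 0.2, 0.1], [0.1, 0.3, 0.6]])),
    "stir": (np.array([0.5, 0.5]),
             np.array([[0.6, 0.4], [0.4, 0.6]]),
             np.array([[0.2, 0.6, 0.2], [0.3, 0.3, 0.4]])),
}

# Object context expressed as a prior over actions, e.g. P(action | object).
context_prior = {"pour": 0.7, "stir": 0.3}

obs = np.array([0, 0, 1, 2, 2])  # observed hand-motion symbol sequence

# Bayes' rule: P(action | obs, object) is proportional to
# P(obs | action) * P(action | object).
log_post = {a: forward_log_likelihood(obs, *params) + np.log(context_prior[a])
            for a, params in actions.items()}
print(max(log_post, key=log_post.get), log_post)
```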
