
Paper in Artificial Intelligence (2009): “A novel sequence representation for unsupervised analysis of human activities”

September 20th, 2009 Irfan Essa Posted in Aaron Bobick, Activity Recognition, Charles Isbell, Papers, Raffay Hamid, Siddhartha Maddi

A novel sequence representation for unsupervised analysis of human activities

  • R. Hamid, S. Maddi, A. Johnson, A. Bobick, I. Essa, and C. Isbell (2009), “A Novel Sequence Representation for Unsupervised Analysis of Human Activities,” Artificial Intelligence Journal, May 2009. [BIBTEX]
    @article{2009-Hamid-NSRUAHA,
      Author = {R. Hamid and S. Maddi and A. Johnson and A. Bobick and I. Essa and C. Isbell},
      Date-Modified = {2011-12-08 21:27:48 +0000},
      Journal = {Artificial Intelligence Journal},
      Month = {May},
      Title = {A Novel Sequence Representation for Unsupervised Analysis of Human Activities},
      Year = {2009}}

Abstract

Formalizing computational models for everyday human activities remains an open challenge. Many previous approaches towards this end assume prior knowledge about the structure of activities, which is used to learn explicitly defined models in a completely supervised manner. For a majority of everyday environments, however, the structure of the in situ activities is generally not known a priori. In this paper we investigate knowledge representations and manipulation techniques that facilitate learning of human activities in a minimally supervised manner. The key contribution of this work is the idea that global structural information of human activities can be encoded using a subset of their local event subsequences, and that this encoding is sufficient for activity-class discovery and classification.

In particular, we investigate modeling activity sequences in terms of their constituent subsequences, which we call event n-grams. Exploiting this representation, we propose a computational framework to automatically discover the various activity-classes taking place in an environment. We model these activity-classes as maximally similar activity-cliques in a completely connected graph of activities, and describe how to discover them efficiently. Moreover, we propose methods for finding characterizations of these discovered classes from a holistic as well as a by-parts perspective. Using such characterizations, we present a method to classify a new activity into one of the discovered activity-classes, and to automatically detect whether it is anomalous with respect to the general characteristics of its membership class. Our results show the efficacy of our approach in a variety of everyday environments.

Keywords: Temporal reasoning; Scene analysis; Computer vision
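
To make the event n-gram idea concrete, below is a minimal Python sketch (not the authors' code) that represents each activity as a histogram of its event n-grams, links activities whose histograms are sufficiently similar, and reads off maximal cliques of the resulting graph as candidate activity-classes. The cosine similarity, the fixed threshold, and all helper names are illustrative assumptions; the paper's actual similarity measure and clique-discovery procedure differ.

    # Minimal sketch, not the authors' implementation: histogram-of-n-grams
    # representation, thresholded similarity graph, maximal cliques as
    # candidate activity-classes. All names and parameters are illustrative.
    from collections import Counter
    from itertools import combinations
    import math
    import networkx as nx  # used only to enumerate maximal cliques

    def extract_ngrams(events, n=3):
        """Histogram of overlapping event n-grams for one activity sequence."""
        return Counter(tuple(events[i:i + n]) for i in range(len(events) - n + 1))

    def similarity(h1, h2):
        """Cosine similarity between two n-gram histograms."""
        keys = set(h1) | set(h2)
        dot = sum(h1[k] * h2[k] for k in keys)
        norm = math.sqrt(sum(v * v for v in h1.values())) * \
               math.sqrt(sum(v * v for v in h2.values()))
        return dot / norm if norm else 0.0

    def discover_classes(activities, n=3, threshold=0.5):
        """Return maximal cliques of mutually similar activities."""
        hists = [extract_ngrams(a, n) for a in activities]
        g = nx.Graph()
        g.add_nodes_from(range(len(activities)))
        for i, j in combinations(range(len(activities)), 2):
            if similarity(hists[i], hists[j]) >= threshold:
                g.add_edge(i, j)
        return list(nx.find_cliques(g))

    if __name__ == "__main__":
        toy = [
            ["enter", "pick", "scan", "pay", "exit"],
            ["enter", "pick", "scan", "scan", "pay", "exit"],
            ["enter", "browse", "exit"],
        ]
        print(discover_classes(toy, n=2, threshold=0.3))  # groups the first two activities

On the toy data the first two sequences share most of their bigrams and form one clique, while the third falls into its own class; in the paper the graph is built over real activity sequences and the discovered cliques are further characterized and used for classification and anomaly detection.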

Hamid et al. AIJ Paper


Paper: ACM IWVSSN (2006) “Unsupervised Analysis of Activity Sequences Using Event Motifs”

October 23rd, 2006 Irfan Essa Posted in AAAI/IJCAI/UAI, Aaron Bobick, Activity Recognition, Aware Home, Papers, Raffay Hamid, Siddhartha Maddi

  • R. Hamid, S. Maddi, A. Bobick, and I. Essa (2006), “Unsupervised Analysis of Activity Sequences Using Event Motifs,” in Proceedings of the 4th ACM International Workshop on Video Surveillance and Sensor Networks (in conjunction with ACM Multimedia 2006).

Abstract

We present an unsupervised framework to discover characterizations of everyday human activities, and demonstrate how such representations can be used to extract points of interest in event-streams. We begin by using Suffix Trees as an efficient activity representation that captures the global structural information of activities through their local event statistics over the entire continuum of temporal resolutions. Exploiting this representation, we discover characterizing event-subsequences and use them in an ensemble-based framework for activity classification. Finally, we propose a method to automatically detect subsequences of events that are locally atypical in a structural sense. Results over extensive data sets collected from multiple sensor-rich environments demonstrate the competence and scalability of the proposed framework.
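
As a rough illustration of the local event statistics that a Suffix Tree encodes compactly, the Python sketch below (again, not the authors' implementation) naively enumerates counts of every contiguous event subsequence at all lengths and flags subsequences that are rare in a background corpus, a simplified stand-in for the paper's notion of locally atypical event motifs. The function names, the toy corpus, and the thresholds are assumptions for illustration only.

    # Minimal sketch, not the authors' implementation: a naive O(n^2)
    # enumeration stands in for the subsequence statistics a Suffix Tree
    # stores efficiently. All names and thresholds are illustrative.
    from collections import Counter

    def subsequence_counts(events):
        """Counts of every contiguous event subsequence (all temporal resolutions)."""
        counts = Counter()
        for i in range(len(events)):
            for j in range(i + 1, len(events) + 1):
                counts[tuple(events[i:j])] += 1
        return counts

    def atypical_motifs(events, background, min_len=2, max_ratio=0.1):
        """Flag subsequences that are rare in the background corpus relative to
        their frequency in this activity (a simplified notion of local atypicality)."""
        local = subsequence_counts(events)
        flagged = []
        for motif, count in local.items():
            if len(motif) < min_len:
                continue
            bg = background.get(motif, 0)
            if bg / max(count, 1) <= max_ratio:
                flagged.append(motif)
        return flagged

    if __name__ == "__main__":
        # Background statistics from a corpus of "normal" activity sequences.
        corpus = [["enter", "pick", "scan", "pay", "exit"]] * 10
        background = Counter()
        for seq in corpus:
            background.update(subsequence_counts(seq))
        test = ["enter", "pick", "exit"]  # skips scanning and paying
        print(atypical_motifs(test, background))

Here the subsequences ("pick", "exit") and ("enter", "pick", "exit") are flagged because they never occur in the background corpus, mirroring how structurally atypical event motifs stand out against the regular structure of an environment's activities.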
