
Thesis: Gabriel Brostow’s PhD (2004): “Novel Skeletal Representation for Articulated Creatures”

April 9th, 2004 Irfan Essa Posted in Activity Recognition, Gabriel Brostow, Modeling and Animation, Research, Thesis

Gabriel Brostow (2004), “Novel Skeletal Representation for Articulated Creatures,” PhD Thesis, Georgia Institute of Technology, College of Computing. (Advisor: Irfan Essa) [PDF] [URI]

Abstract

This research examines an approach for capturing 3D surface and structural data of moving articulated creatures. Given the task of non-invasively and automatically capturing such data, a methodology and the associated experiments are presented that apply to multi-view videos of the subject’s motion. Our thesis states: A functional structure and the time-varying surface of an articulated creature subject are contained in a sequence of its 3D data. A functional structure is one example of the possible arrangements of internal mechanisms (kinematic joints, springs, etc.) that is capable of performing the motions observed in the input data. Volumetric structures are frequently used as shape descriptors for 3D data. The capture of such data is being facilitated by developments in multi-view video and range scanning, extending to subjects that are alive and moving. In this research, we examine vision-based modeling and the related representation of moving articulated creatures using Spines. We define a Spine as a branching axial structure representing the shape and topology of a 3D object’s limbs, and capturing the limbs’ correspondence and motion over time. The Spine concept builds on skeletal representations often used to describe the internal structure of an articulated object and its significant protrusions. Our representation of a Spine provides for enhancements over a 3D skeleton. These enhancements form temporally consistent limb hierarchies that contain correspondence information about real motion data. We present a practical implementation that approximates a Spine’s joint probability function to reconstruct Spines for synthetic and real subjects that move. In general, our approach combines the objectives of generalized cylinders, 3D scanning, and markerless motion capture to generate baseline models from real puppets, animals, and human subjects.
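
As a rough illustration of the representation described above, the following minimal Python sketch models a Spine as a hierarchy of limb nodes whose 3D axis samples accumulate frame by frame, so temporal correspondence falls out of node identity. All names and fields here are hypothetical, illustrative stand-ins, not the thesis implementation.

    from dataclasses import dataclass, field

    # Hypothetical sketch of a Spine: a branching axial structure whose
    # nodes persist across frames, so each limb's axis carries its own
    # motion history. Names and fields are illustrative, not thesis code.

    @dataclass
    class SpineNode:
        name: str                                       # e.g. "left-forelimb"
        positions: list = field(default_factory=list)   # 3D axis samples per frame: [(x, y, z), ...]
        children: list = field(default_factory=list)    # child limbs branching off this axis

        def add_frame(self, xyz):
            """Append this node's 3D location for the next time step."""
            self.positions.append(xyz)

    # A temporally consistent limb hierarchy: the same node objects are
    # reused every frame, so correspondence over time is implicit.
    root = SpineNode("torso")
    limb = SpineNode("left-forelimb")
    root.children.append(limb)

    for t in range(3):                                  # three synthetic frames
        root.add_frame((0.0, 0.0, 0.1 * t))
        limb.add_frame((0.5, 0.0, 0.1 * t))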


Paper: ACM SIGGRAPH (2001) “Image-based motion blur for stop motion animation”

August 1st, 2001 Irfan Essa Posted in Computational Photography and Video, Gabriel Brostow, Non-Photorealism, Papers, SIGGRAPH/SCA/NPAR/EG

Gabriel J. Brostow and Irfan Essa (2001), “Image-based motion blur for stop motion animation,” in Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques (ACM SIGGRAPH), pp. 561–566, August 2001, ISBN 1-58113-374-X, ACM, New York, NY, USA. (DOI|PDF|Video|Project Site)

Abstract

Stop motion animation is a well-established technique where still pictures of static scenes are taken and then played at film speeds to show motion. A major limitation of this method appears when fast motions are desired; most motion appears to have sharp edges and there is no visible motion blur. The appearance of motion blur is a strong perceptual cue, which is automatically present in live-action films and synthetically generated in animated sequences. In this paper, we present an approach for automatically simulating motion blur. Ours is wholly a post-process, and uses image sequences, either stop motion or raw video, as input. First we track the frame-to-frame motion of the objects within the image plane. We then integrate the scene’s appearance as it changed over a period of time. This period of time corresponds to shutter speed in live-action filming, and gives us interactive control over the extent of the induced blur. We demonstrate a simple implementation of our approach as it applies to footage of different motions and to scenes of varying complexity. Our photorealistic renderings of these input sequences approximate the effect of capturing moving objects on film that is exposed for finite periods of time.
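
To make the integration step above concrete, here is a minimal Python sketch, assuming a translation-only motion model (the paper itself tracks per-object motion in the image plane): a virtual shutter is simulated by averaging copies of a frame shifted along its tracked displacement. The function name and parameters are illustrative assumptions, not the paper’s code.

    import numpy as np

    # Minimal sketch of the core idea: approximate a finite shutter
    # interval by integrating the scene's appearance along the tracked
    # inter-frame motion. Here the "tracking" is a given per-frame 2D
    # displacement (translation only).

    def motion_blur(frame, displacement, shutter=0.5, samples=8):
        """Blend copies of `frame` shifted along `displacement`.

        shutter: fraction of the inter-frame interval the virtual
                 shutter stays open (interactive control over blur extent).
        """
        dy, dx = displacement
        acc = np.zeros_like(frame, dtype=np.float64)
        for i in range(samples):
            t = shutter * i / max(samples - 1, 1)       # sample times within the exposure
            shift = (int(round(dy * t)), int(round(dx * t)))
            acc += np.roll(frame, shift, axis=(0, 1))   # shifted copy of the object
        return (acc / samples).astype(frame.dtype)

    # Usage: a bright square moving 12 pixels to the right per frame.
    frame = np.zeros((64, 64))
    frame[28:36, 10:18] = 1.0
    blurred = motion_blur(frame, displacement=(0, 12), shutter=0.5)

Lengthening the shutter parameter smears the object further along its path, mimicking a longer film exposure, which is the interactive control the abstract describes.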


Project: DVFX@GeorgiaTech

September 17th, 1999 Irfan Essa Posted in Computational Photography and Video, DVFX, Frank Dellaert, Gabriel Brostow, Projects

DVFX@GeorgiaTech
The DVFX Group at Georgia Tech’s GVU Center and the School of Interactive Computing is aimed at exploring the technical aspects of digital video special effects production and computer animation. For more information, see the research at the Computational Perception Laboratory and the class offerings in this area since 1999.

Check out the new BS in Computational Media, and the BS in Computer Science and its related specialization tracks (threads!), as options for degrees that include material described here.


Paper in ICCV 1999: “Motion Based Decompositing of Video”

September 10th, 1999 Irfan Essa Posted in Aware Home, Computational Photography and Video, Gabriel Brostow, Intelligent Environments, PAMI/ICCV/CVPR/ECCV, Papers

Motion Based Decompositing of Video

  • G. J. Brostow and I. Essa (1999), “Motion Based Decompositing of Video,” in Proceedings of IEEE International Conference on Computer Vision (ICCV), 1999, pp. 8-13. [PDF] [WEBSITE] [DOI] [BLOG] [BIBTEX]
    @inproceedings{1999-Brostow-MBDV,
      Author = {G. J. Brostow and I. Essa},
      Blog = {http://prof.irfanessa.com/1999/09/10/1999-brostow-mbdv/},
      Booktitle = {Proceedings of IEEE International Conference on Computer Vision (ICCV)},
      Doi = {http://dx.doi.org/10.1109/ICCV.1999.791190},
      Pages = {8--13},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/1999-Brostow-MBDV.pdf},
      Publisher = {IEEE Computer Society},
      Title = {Motion Based Decompositing of Video},
      Url = {http://www.cc.gatech.edu/cpl/projects/layering/},
      Volume = {1},
      Year = {1999}}

Abstract

We present a method to decompose video sequences into layers that represent the relative depths of complex scenes. Our method combines spatial information with temporal occlusions to determine the relative depths of these layers. Spatial information is obtained through edge detection and a customized contour-completion algorithm. Activity in a scene is used to extract temporal occlusion events, which are, in turn, used to classify objects as occluders or occludees. The path traversed by the moving objects determines the segmentation of the scene. Several examples of decompositing and compositing of video are shown. This approach can be applied in the pre-processing of sequences for compositing or tracking purposes, and to determine the approximate 3D structure of a scene.
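
As an illustrative sketch of how occlusion events could yield a depth ordering, the following Python snippet topologically sorts layers so each observed occluder comes before everything it was seen to hide. The event format, function name, and layer labels are assumptions for illustration; the paper’s own edge detection, contour completion, and segmentation steps are not reproduced here.

    from collections import defaultdict, deque

    # Hedged sketch of one step of the method: once temporal occlusion
    # events have classified which segment hid which, order the layers
    # by relative depth (nearest first).

    def order_layers(occlusion_events):
        """Topologically sort layers so every occluder precedes
        (is nearer than) everything it was observed to occlude.

        occlusion_events: iterable of (occluder, occludee) pairs.
        """
        nearer = defaultdict(set)
        indegree = defaultdict(int)
        layers = set()
        for front, back in occlusion_events:
            layers.update((front, back))
            if back not in nearer[front]:       # ignore repeated events
                nearer[front].add(back)
                indegree[back] += 1
        queue = deque(l for l in layers if indegree[l] == 0)   # nearest layers
        depth_order = []
        while queue:
            layer = queue.popleft()
            depth_order.append(layer)
            for behind in nearer[layer]:
                indegree[behind] -= 1
                if indegree[behind] == 0:
                    queue.append(behind)
        return depth_order

    # Example: a person walks in front of a chair, and the chair sits
    # in front of the wall -> person nearest, wall farthest.
    print(order_layers([("person", "chair"), ("chair", "wall")]))
    # ['person', 'chair', 'wall']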
