
Paper (2011) in Virtual Reality: “Augmenting aerial earth maps with dynamic information from videos”

February 2nd, 2011 Irfan Essa Posted in Computational Photography and Video, Kihwan Kim, Papers, Sangmin Oh

Augmenting aerial earth maps with dynamic information from videos

  • Kim, Oh, Lee, and Essa (2011), “Augmenting aerial earth maps with dynamic information from videos,” Journal of Virtual Reality, Special Issue on Augmented Reality, vol. 15, iss. 2-3, pp. 1359-4338.  [PDF] [WEBSITE] [VIDEO] [DOI] [SpringerLink] [BIBTEX]
    
    @article{2011-Kim-AAEMWDIFV, 
     Author = {K. Kim and S. Oh and J. Lee and I. Essa}, 
     Doi = {10.1007/s10055-010-0186-2}, 
     Journal = {Journal of Virtual Reality, Special Issue on Augmented Reality}, 
     Number = {2-3}, 
     Pages = {1359-4338}, 
     Pdf = {http://www.cc.gatech.edu/~irfan/p/2011-Kim-AAEMWDIFV.pdf}, 
     Title = {Augmenting aerial earth maps with dynamic information from videos}, 
     Url = {http://www.cc.gatech.edu/cpl/projects/augearth}, 
     Video = {http://www.youtube.com/watch?v=TPk88soc2qw}, 
     Volume = {15}, 
     Year = {2011}}

Abstract

We introduce methods for augmenting aerial visualizations of Earth (from tools such as Google Earth or Microsoft Virtual Earth) with dynamic information obtained from videos. Our goal is to make Augmented Earth Maps that visualize plausible live views of dynamic scenes in a city. We propose different approaches to analyze videos of pedestrians and cars in real situations, under differing conditions, to extract dynamic information. Then, we augment Aerial Earth Maps (AEMs) with the extracted live and dynamic content. We also analyze natural phenomena (skies, clouds) and project information from these to the AEMs to add to the visual reality. Our primary contributions are: (1) Analyzing videos with different viewpoints, coverage, and overlaps to extract relevant information about view geometry and movements, with limited user input. (2) Projecting this information appropriately to the viewpoint of the AEMs and modeling the dynamics in the scene from observations to allow inference (in case of missing data) and synthesis. We demonstrate this over a variety of camera configurations and conditions. (3) Registering the modeled information from videos to the AEMs to render appropriate movements and related dynamics. We demonstrate this with traffic flow, people movements, and cloud motions. All of these approaches are brought together as a prototype system for a real-time visualization of a city that is alive and engaging.
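A core step the abstract describes is projecting positions observed in a ground-level video onto the viewpoint of the aerial map. For movement on a roughly planar surface such as a road or plaza, that mapping can be modeled as a planar homography. The sketch below is a minimal illustration of that idea, not the paper's actual pipeline: the landmark correspondences and coordinates are made up, and the homography is fit with a plain direct linear transform (DLT).

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 homography mapping src -> dst (each Nx2, N >= 4)
    via the direct linear transform (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the (scaled) null vector of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    """Apply homography H to Nx2 points, returning Nx2 projected points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Hypothetical correspondences: pixel positions of ground-plane landmarks
# in the video frame, paired with their coordinates on the aerial map.
frame_pts = np.array([[100, 400], [540, 390], [620, 120], [60, 130]], float)
map_pts   = np.array([[10, 10],   [60, 10],   [60, 50],   [10, 50]], float)

H = fit_homography(frame_pts, map_pts)
# A tracked car at frame pixel (320, 260) can now be placed on the map.
car_on_map = project(H, np.array([[320.0, 260.0]]))
```

On real footage one would normalize coordinates before the DLT and fit robustly (e.g. RANSAC over many correspondences) rather than from four hand-picked points, since tracked detections are noisy.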

Augmented Earth
