Paper ISMAR 2009: “Augmenting Aerial Earth Maps with Dynamic Information”

October 20th, 2009 Irfan Essa Posted in Computational Journalism, Computational Photography and Video, Kihwan Kim, Modeling and Animation, Papers

Augmenting Aerial Earth Maps with Dynamic Information

  • K. Kim, S. Oh, J. Lee, and I. Essa (2009), “Augmenting Aerial Earth Maps with Dynamic Information,” in Proceedings of IEEE International Symposium on Mixed and Augmented Reality (ISMAR), 2009. [PDF] [WEBSITE] [VIDEO] [DOI] [BLOG] [BIBTEX]
    @InProceedings{    2009-Kim-AAEMWDI,
      author  = {K. Kim and S. Oh and J. Lee and I. Essa},
      booktitle  = {Proceedings of IEEE International Symposium on
          Mixed and Augmented Reality (ISMAR)},
      doi    = {10.1109/ISMAR.2009.5336505},
      month    = {October},
      title    = {Augmenting Aerial Earth Maps with Dynamic
          Information},
      year    = {2009}
    }


We introduce methods for augmenting aerial visualizations of Earth (from tools such as Google Earth or Microsoft Virtual Earth) with dynamic information obtained from videos. Our goal is to make Augmented Earth Maps that visualize live broadcasts of dynamic scenes within a city. We propose different approaches to analyze videos of pedestrians and cars under differing conditions, and then augment Aerial Earth Maps (AEMs) with this live and dynamic information. We also analyze natural phenomena (clouds) and project information from these onto the AEMs to add to the visual realism.
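As a rough illustration of the image-to-map registration that this kind of augmentation relies on (a hedged sketch, not the paper's actual pipeline), a planar homography can carry positions tracked in a video frame into map coordinates, assuming a flat ground plane and four known point correspondences between the camera view and the aerial map:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    via the direct linear transform (DLT); needs 4+ correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The null space of A (last right-singular vector) holds H's entries.
    _, _, Vt = np.linalg.svd(np.array(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply H to an image point, returning map coordinates."""
    u, v, w = H @ np.array([pt[0], pt[1], 1.0])
    return (u / w, v / w)

# Hypothetical correspondences: video-frame corners -> aerial-map coords.
frame_pts = [(0.0, 0.0), (640.0, 0.0), (640.0, 480.0), (0.0, 480.0)]
map_pts = [(100.0, 200.0), (300.0, 210.0), (310.0, 400.0), (90.0, 390.0)]
H = homography_from_points(frame_pts, map_pts)
```

A tracked pedestrian's foot point in each frame could then be pushed through `project` to place an animated marker on the aerial map.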

For the journal version of this paper, please see


In the News (2009): “Augmenting Earth Maps”

October 13th, 2009 Irfan Essa Posted in In The News, Kihwan Kim


Check out the media coverage of our new paper, to appear at ISMAR 2009 in October.

Also see

  • “Latest videos makes Google Earth cities bustle” New Scientist (Sep 30, 2009 Issue)
  • “Video: Google Earth animated with real time human and vehicular traffic” Engadget (Sep 30, 2009)

Paper: ISWC (2008) “Localization and 3D Reconstruction of Urban Scenes Using GPS”

September 28th, 2008 Irfan Essa Posted in ISWC, Kihwan Kim, Mobile Computing, Papers, Thad Starner

Kihwan Kim, Jay Summet, Thad Starner, Daniel Ashbrook, Mrunal Kapade, and Irfan Essa (2008) “Localization and 3D Reconstruction of Urban Scenes Using GPS,” in Proceedings of the IEEE International Symposium on Wearable Computers (ISWC), 2008 (To Appear). [PDF]



Using off-the-shelf Global Positioning System (GPS) units, we reconstruct buildings in 3D by exploiting the reduction in signal-to-noise ratio (SNR) that occurs when buildings obstruct the line of sight between the moving units and the orbiting satellites. We measure the size and height of skyscrapers, and automatically construct a density map representing the locations of multiple buildings in an urban landscape. If deployed on a large scale, via a cellular service provider’s GPS-enabled mobile phones or GPS-tracked delivery vehicles, the system could provide an inexpensive means of continuously creating and updating 3D maps of urban environments.
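The core observation can be caricatured in a few lines. In this heavily simplified sketch (the baseline SNR, drop threshold, and flat-ground geometry are illustrative assumptions, not the paper's calibrated method), a large SNR drop relative to an open-sky baseline suggests an obstructed satellite, and that satellite's elevation angle gives a lower bound on the height of a building at a known horizontal distance:

```python
import math

# Assumed constants for illustration only.
OPEN_SKY_SNR = 45.0   # dB-Hz, typical unobstructed carrier-to-noise level
DROP_THRESHOLD = 10.0  # dB drop treated as evidence of obstruction

def obstructed(snr_samples):
    """Flag each SNR sample that falls well below the open-sky baseline,
    suggesting a building blocks the receiver-satellite line of sight."""
    return [s < OPEN_SKY_SNR - DROP_THRESHOLD for s in snr_samples]

def min_building_height(elevation_deg, distance_m):
    """If a satellite at this elevation angle is blocked, a building at this
    horizontal distance must be at least this tall (flat ground assumed)."""
    return distance_m * math.tan(math.radians(elevation_deg))
```

Aggregating many such obstruction observations from moving receivers, across satellites at different bearings and elevations, is what would let the occupied volumes (and hence a density map) be carved out.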


Paper in ACM Multimedia (2006): “Interactive mosaic generation for video navigation”

October 22nd, 2006 Irfan Essa Posted in ACM MM, Computational Photography and Video, Gregory Abowd, Kihwan Kim, Multimedia, Papers

K. Kim, I. Essa, and G. Abowd (2006) “Interactive mosaic generation for video navigation.” in Proceedings of the 14th annual ACM international conference on Multimedia, pages 655-658, 2006. [Project Page | DOI | PDF]


Navigation through large multimedia collections that include videos and images remains cumbersome. In this paper, we introduce a novel method to visualize and navigate through a collection by creating a mosaic image that visually represents the compilation. This image is generated by a labeling-based layout algorithm using variously sized sample tile images from the collection. Each tile represents photographs and video files of scenes selected by matching algorithms. The generated mosaic image provides a new way to present thematic video and visually summarizes the videos. Users can generate these mosaics with predefined themes and layouts, or base them on the results of their queries. Our approach supports automatic generation of these layouts using meta-information such as color, timeline, and the presence of faces, or manually annotated information from existing systems (e.g., the Family Video Archive).
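The labeling step can be caricatured as a greedy nearest-color assignment over a grid. This is only a toy sketch under assumed inputs (precomputed average colors for tiles and target cells); the paper's actual algorithm also handles varying tile sizes and richer matching criteria:

```python
def best_tile(cell_color, tiles):
    """Pick the tile whose average RGB color is closest (squared distance)
    to the target cell's average color."""
    return min(
        tiles,
        key=lambda name: sum((a - b) ** 2 for a, b in zip(tiles[name], cell_color)),
    )

def layout(target_grid, tiles):
    """Label every grid cell of the target image with its best-matching tile.

    target_grid: 2D list of per-cell average (r, g, b) colors.
    tiles: mapping of tile name -> average (r, g, b) color.
    """
    return [[best_tile(cell, tiles) for cell in row] for row in target_grid]
```

For example, with hypothetical tiles `{"sky": (40, 120, 220), "grass": (60, 180, 60)}`, a bluish cell would be labeled `"sky"` and a greenish one `"grass"`, and each label would then be rendered as a clickable thumbnail linking into the collection.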

Interactive Video Mosaic
