
Paper in ICCP 2013 “Post-processing approach for radiometric self-calibration of video”

April 19th, 2013 Irfan Essa Posted in Computational Photography and Video, ICCP, Matthias Grundmann, Papers, Sing Bing Kang

  • M. Grundmann, C. McClanahan, S. B. Kang, and I. Essa (2013), “Post-processing Approach for Radiometric Self-Calibration of Video,” in Proceedings of IEEE International Conference on Computational Photography, 2013. [PDF] [WEBSITE] [VIDEO] [DOI] [BIBTEX]
    @inproceedings{2013-Grundmann-PARSV,
      Author = {Matthias Grundmann and Chris McClanahan and Sing Bing Kang and Irfan Essa},
      Booktitle = {Proceedings of IEEE International Conference on Computational Photography},
      Date-Added = {2013-06-25 11:54:57 +0000},
      Date-Modified = {2013-10-22 18:41:09 +0000},
      Doi = {10.1109/ICCPhot.2013.6528307},
      Month = {April},
      Organization = {IEEE Computer Society},
      Pdf = {http://www.cc.gatech.edu/~irfan/p/2013-Grundmann-PARSV.pdf},
      Title = {Post-processing Approach for Radiometric Self-Calibration of Video},
      Url = {http://www.cc.gatech.edu/cpl/projects/radiometric},
      Video = {http://www.youtube.com/watch?v=sC942ZB4WuM},
      Year = {2013},
      Bdsk-Url-1 = {http://www.cc.gatech.edu/cpl/projects/radiometric},
      Bdsk-Url-2 = {http://dx.doi.org/10.1109/ICCPhot.2013.6528307}}

Abstract

We present a novel data-driven technique for radiometric self-calibration of video from an unknown camera. Our approach self-calibrates radiometric variations in video and is applied as a post-process; there is no need to access the camera, so it is applicable in particular to internet videos. This technique builds on empirical evidence that in video the camera response function (CRF) should be regarded as time-variant, as it changes with scene content and exposure, rather than being modeled by a single camera response function. We show that a time-varying mixture of responses produces better accuracy and consistently reduces the error in mapping intensity to irradiance when compared to a single response model. Furthermore, our mixture model counteracts the effects of possible nonlinear exposure-dependent intensity perturbations and white-balance changes caused by proprietary camera firmware. We further show how radiometrically calibrated video improves the performance of other video analysis algorithms, enabling a video segmentation algorithm to be invariant to exposure and gain variations over the sequence. We validate our data-driven technique on videos from a variety of cameras and demonstrate the generality of our approach by applying it to internet video.

via IEEE Xplore – Post-processing approach for radiometric self-calibration of video.
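The core idea of the abstract can be illustrated with a small sketch: instead of a single fixed camera response function, each frame's intensity is modeled as a convex combination of basis responses, so the effective CRF can drift over time. The gamma-curve basis and the weights below are illustrative assumptions, not the paper's self-calibrated model.

```python
import numpy as np

def basis_responses(irradiance, gammas=(1.8, 2.2, 2.6)):
    """Map irradiance in [0, 1] through a small family of basis CRFs
    (here, hypothetical gamma curves)."""
    return np.stack([irradiance ** (1.0 / g) for g in gammas])

def mixed_response(irradiance, weights):
    """Per-frame intensity as a convex combination of basis CRFs."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # enforce a convex mixture
    return weights @ basis_responses(irradiance)

# A frame-dependent weight vector lets the effective response change
# over the sequence, e.g. as exposure or scene content changes.
irr = np.linspace(0.0, 1.0, 5)
frame0 = mixed_response(irr, [1.0, 0.0, 0.0])  # pure gamma-1.8 response
frame1 = mixed_response(irr, [0.2, 0.5, 0.3])  # blended response
```

In the paper's setting the mixture would be estimated from the video itself; this sketch only shows why a mixture gives more modeling freedom than any single response curve.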


Paper (2009) In ACM Symposium on Interactive 3D Graphics “Human Video Textures”

March 1st, 2009 Irfan Essa Posted in ACM SIGGRAPH, Computational Photography and Video, James Rehg, Matt Flagg, Modeling and Animation, Papers, Sing Bing Kang

  • Matthew Flagg, Atsushi Nakazawa, Qiushuang Zhang, Sing Bing Kang, Young Kee Ryu, Irfan Essa, and James M. Rehg (2009), “Human Video Textures,” in Proceedings of the ACM Symposium on Interactive 3D Graphics and Games 2009 (I3D ’09), Boston, MA, February 27-March 1, 2009. [PDF (see Copyright) | Video in DivX | Website]

Abstract

This paper describes a data-driven approach for generating photorealistic animations of human motion. Each animation sequence follows a user-choreographed path and plays continuously by seamlessly transitioning between different segments of the captured data. To produce these animations, we capitalize on the complementary characteristics of motion capture data and video. We customize our capture system to record motion capture data that are synchronized with our video source. Candidate transition points in video clips are identified using a new similarity metric based on 3-D marker trajectories and their 2-D projections into video. Once the transitions have been identified, a video-based motion graph is constructed. We further exploit hybrid motion and video data to ensure that the transitions are seamless when generating animations. Motion capture marker projections serve as control points for segmentation of layers and nonrigid transformation of regions. This allows warping and blending to generate seamless in-between frames for animation. We show a series of choreographed animations of walks and martial arts scenes as validation of our approach.
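The transition-finding step described above can be sketched as a distance between candidate frame pairs that combines 3-D marker trajectories with their 2-D projections into video. The specific distances and the blending weight `alpha` below are assumptions for illustration, not the paper's exact metric.

```python
import numpy as np

def frame_distance(markers3d_a, markers3d_b, proj2d_a, proj2d_b, alpha=0.5):
    """Score a candidate transition between two frames: mean per-marker
    distance in 3-D, blended with mean distance of the 2-D projections.
    Lower scores mark better transition points."""
    d3 = np.linalg.norm(markers3d_a - markers3d_b, axis=-1).mean()
    d2 = np.linalg.norm(proj2d_a - proj2d_b, axis=-1).mean()
    return alpha * d3 + (1.0 - alpha) * d2

# Identical poses score zero; a video-based motion graph would link
# frame pairs whose distance falls below a chosen threshold.
pose = np.zeros((30, 3))   # 30 markers, hypothetical capture setup
proj = np.zeros((30, 2))   # their 2-D projections into the video frame
score = frame_distance(pose, pose, proj, proj)
```

This is only meant to show why the hybrid data helps: 3-D trajectories capture true pose similarity, while the 2-D projections keep the comparison faithful to what is actually visible in the video.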

Example Image from Project

Human Video Textures (Output Rendered as a Collage!)
