Thesis: Vivek Kwatra’s PhD Thesis (2005): “Example-based Rendering of Textural Phenomena”

July 19th, 2005 Irfan Essa Posted in Computational Photography and Video, PhD, Thesis, Vivek Kwatra

Vivek Kwatra (2005), “Example-based Rendering of Textural Phenomena,” PhD Thesis, Georgia Institute of Technology, College of Computing (Advisors: Aaron Bobick, Irfan Essa) [URI], 19-Jul-2005

Abstract

This thesis explores synthesis by example as a paradigm for rendering real-world phenomena. In particular, phenomena that can be visually described as texture are considered. We exploit, for synthesis, the self-repeating nature of the visual elements constituting these texture exemplars. Techniques for unconstrained as well as constrained/controllable synthesis of both image and video textures are presented. For unconstrained synthesis, we present two robust techniques that can perform spatio-temporal extension, editing, and merging of image as well as video textures. In one of these techniques, large patches of input texture are automatically aligned and seamlessly stitched with each other to generate realistic-looking images and videos. The second technique is based on iterative optimization of a global energy function that measures the quality of the synthesized texture with respect to the given input exemplar. We also present a technique for controllable texture synthesis. In particular, it allows for generation of motion-controlled texture animations that follow a specified flow field. Animations synthesized in this fashion maintain the structural properties of the input texture, such as local shape, size, and orientation, even as they move according to the specified flow. We cast this problem into an optimization framework that tries to simultaneously satisfy the two (potentially competing) objectives of similarity to the input texture and consistency with the flow field. This optimization is a simple extension of the approach used for unconstrained texture synthesis. A general framework for example-based synthesis and rendering is also presented. This framework provides a design space for constructing example-based rendering algorithms. The goal of such algorithms would be to use texture exemplars to render animations for which certain behavioral characteristics need to be controlled. Our motion-controlled texture synthesis technique is an instantiation of this framework where the characteristic being controlled is motion represented as a flow field.
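
To make the global energy concrete, here is a rough sketch of the kind of objective the abstract describes; the notation (x for the synthesized texture, x_p for its neighborhood at pixel p, z_p for the closest exemplar neighborhood, w for warping by the flow field, and the weight lambda) is assumed here for illustration and is not quoted from the thesis:

```latex
% Illustrative sketch only; the symbols below are assumptions, not thesis notation.
\begin{align*}
  E_t(x) &= \sum_{p} \lVert x_p - z_p \rVert^2
      && \text{each local neighborhood vs.\ its best-matching exemplar neighborhood} \\
  E(x)   &= E_t(x) + \lambda \, \lVert x - w(x^{t-1}) \rVert^2
      && \text{flow term: previous frame warped forward by the flow field}
\end{align*}
% Synthesis would alternate between (a) finding the best match z_p for each
% neighborhood of the current estimate and (b) minimizing E over the pixels of x.
```

The two terms mirror the two potentially competing objectives named above, similarity to the input exemplar and consistency with the flow field; dropping the flow term recovers the unconstrained case.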


Thesis: Drew Steedly’s PhD (2004): “Rigid Partitioning Techniques for Efficiently Generating 3D Reconstructions from Images”

December 9th, 2004 Irfan Essa Posted in Computational Photography and Video, Drew Steedly, PhD, Thesis

Drew Steedly (2004), “Rigid Partitioning Techniques for Efficiently Generating 3D Reconstructions from Images,” PhD Thesis, Georgia Institute of Technology, College of Computing (Advisor: Irfan Essa) [PDF] [URI]

Abstract

This thesis explores efficient techniques for generating 3D reconstructions from imagery. Non-linear optimization is one of the core techniques used when computing a reconstruction and is a computational bottleneck for large sets of images. Since non-linear optimization requires a good initialization to avoid getting stuck in local minima, robust systems for generating reconstructions from images build up the reconstruction incrementally. A hierarchical approach is to split up the images into small subsets, reconstruct each subset independently, and then hierarchically merge the subsets. Rigidly locking together portions of the reconstructions reduces the number of parameters needed to represent them when merging, thereby lowering the computational cost of the optimization. We present two techniques that involve optimizing with parts of the reconstruction rigidly locked together. In the first, we start by rigidly grouping the cameras and scene features from each of the reconstructions being merged into separate groups. Cameras and scene features are then incrementally unlocked and optimized until the reconstruction is close to the minimum energy. This technique is most effective when the influence of the new measurements is restricted to a small set of parameters. Measurements that stitch together weakly coupled portions of the reconstruction, though, tend to cause deformations in the low-error modes of the reconstruction and cannot be efficiently incorporated with the previous technique. To address this, we present a spectral technique for clustering the tightly coupled portions of a reconstruction into rigid groups. Reconstructions partitioned in this manner can closely mimic the poorly conditioned, low-error modes, and therefore efficiently incorporate measurements that stitch together weakly coupled portions of the reconstruction. We explain how this technique can be used to scalably and efficiently generate reconstructions from large sets of images.
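
To make the parameter-reduction idea concrete, the toy example below merges two overlapping 2D “reconstructions” by fitting a single rigid transform from their shared points: three parameters instead of one per point coordinate. It is a minimal sketch under obvious simplifications (2D points, a closed-form Procrustes fit, no cameras, no subsequent unlocking), not the thesis implementation:

```python
# Toy 2D illustration (not the thesis implementation) of why rigid grouping
# shrinks the merge problem: sub-reconstruction B is aligned to A by solving
# for one rotation + translation (3 parameters) from their shared features,
# instead of re-optimizing every point coordinate in B.
import numpy as np

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 10.0, size=(50, 2))            # ground-truth scene points

# Reconstruction A sees points 0..29 in the world frame (with a little noise).
A = scene[:30] + rng.normal(scale=0.01, size=(30, 2))

# Reconstruction B sees points 20..49, but in its own rotated/translated frame.
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
B = (scene[20:] - np.array([5.0, 5.0])) @ R_true.T + rng.normal(scale=0.01, size=(30, 2))

# Shared features: scene points 20..29 appear as A[20:30] and B[0:10].
pA, pB = A[20:30], B[:10]

# "Rigid group" merge: closed-form Procrustes/Kabsch fit of a single rigid
# transform taking B's frame into A's frame.
cA, cB = pA.mean(axis=0), pB.mean(axis=0)
U, _, Vt = np.linalg.svd((pB - cB).T @ (pA - cA))
R = (U @ Vt).T                                          # assumes the no-reflection case
t = cA - cB @ R.T
B_in_A = B @ R.T + t                                    # all of B, expressed in A's frame

rmse = np.sqrt(np.mean((pA - (pB @ R.T + t)) ** 2))
print(f"alignment RMSE on the shared points: {rmse:.4f}")
```

In the approach described above, the rigidly locked groups would then be incrementally unlocked and re-optimized where the new measurements warrant it; the toy stops at the rigid fit.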


Thesis: Antonio Haro’s PhD (2003): “Example based processing for image and video synthesis”

November 6th, 2003 Irfan Essa Posted in Antonio Haro, Computational Photography and Video, PhD, Research, Thesis

Antonio Haro (2003), “Example based processing for image and video synthesis,” PhD Thesis, Georgia Institute of Technology, College of Computing, Atlanta, GA (Advisor: Irfan Essa) [URI] [PDF]

Abstract

The example-based processing problem can be expressed as: “Given an example of an image or video before and after processing, apply a similar processing to a new image or video”. Our thesis is that there are some problems where a single general algorithm can be used to create a variety of outputs, solely by presenting examples of what is desired to the algorithm. This is valuable if the algorithm needed to produce the output is non-obvious, e.g., an algorithm to emulate an example painting’s style. We limit our investigations to example-based processing of images, video, and 3D models, as these data types are easy to acquire and experiment with.

We first represent this problem as a texture-synthesis-influenced sampling problem, where the idea is to form feature vectors representative of the data and then sample them coherently to synthesize a plausible output for the new image or video. Grounding the problem in this manner is useful, as both problems involve learning the structure of training data under some assumptions in order to sample it properly. We then reduce the problem to a labeling problem so as to perform example-based processing in a more generalized and principled manner than earlier techniques. This allows us to estimate what the output should be by approximating the optimal (and possibly unknown) solution through a different approach.
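
As a minimal sketch of the sampling formulation described above, the snippet below learns from an example pair (an image before and after processing) and transfers the processing to a new image by nearest-neighbor lookup of local neighborhoods. The grayscale NumPy setup, the function name, and the brute-force search are assumptions made for illustration; the thesis formulation additionally exploits coherence and a labeling framework that this sketch omits:

```python
# Minimal sketch (not the thesis implementation): transfer an example
# "before -> after" processing to a new image by matching local neighborhoods.
import numpy as np

def example_based_process(src, src_processed, target, patch=5):
    """For every pixel of `target`, find the most similar neighborhood in `src`
    and copy the corresponding pixel from `src_processed`."""
    r = patch // 2
    sp = np.pad(src, r, mode="reflect")
    tp = np.pad(target, r, mode="reflect")

    # Feature vectors: flattened patches of the example input, one per pixel.
    h, w = src.shape
    src_feats = np.stack([sp[i:i + patch, j:j + patch].ravel()
                          for i in range(h) for j in range(w)])

    out = np.empty_like(target, dtype=src_processed.dtype)
    H, W = target.shape
    for i in range(H):
        for j in range(W):
            f = tp[i:i + patch, j:j + patch].ravel()
            best = np.argmin(((src_feats - f) ** 2).sum(axis=1))  # brute-force match
            out[i, j] = src_processed.flat[best]
    return out

# Tiny usage example: the "processing" here is just inversion, so the learned
# mapping should roughly invert the new image as well.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    src = rng.random((32, 32))
    result = example_based_process(src, 1.0 - src, rng.random((16, 16)))
    print("mean of transferred output:", float(result.mean()))
```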
