T-Space at The University of Toronto Libraries > School of Graduate Studies - Theses > Doctoral

Please use this identifier to cite or link to this item: http://hdl.handle.net/1807/32044

Title: Machine Learning Algorithms for Geometry Processing by Example
Authors: Kalogerakis, Evangelos
Advisors: Singh, Karan; Hertzmann, Aaron
Department: Computer Science
Keywords: geometry processing; learning; segmentation; labeling; illustration; hatching; line drawing; curvature; machine learning; geometry processing by example; non-photorealistic rendering; artistic rendering
Issue Date: 18-Jan-2012
Abstract: This thesis proposes machine learning algorithms for processing geometry by example. Each algorithm takes as input a collection of shapes along with exemplar values of target properties related to shape processing tasks, and outputs a function that maps from the shape data to the target properties. The learned functions can be applied to novel input shapes to synthesize the target properties in a style similar to the training examples. Learning such functions is particularly useful for two types of geometry processing problems: learning functions that map to target properties required for shape interpretation and understanding, and learning functions that map to geometric attributes of animated shapes required for real-time rendering of dynamic scenes.

For the first type of problem, involving shape interpretation and understanding, I demonstrate learning for shape segmentation and line illustration. For shape segmentation, the algorithms learn functions of shape data that perform segmentation and recognition of parts in 3D meshes simultaneously, in contrast to existing mesh segmentation methods that attempt segmentation without recognition, based only on low-level geometric cues. The proposed method requires no manual parameter tuning and achieves significant improvements over the state of the art. For line illustration, the algorithms learn functions from shape and shading data to hatching properties, given a single exemplar line illustration of a shape. Learning models of such artistic properties is extremely challenging, since hatching exhibits significant complexity as a network of overlapping curves of varying orientation, thickness, and density, along with considerable stylistic variation. In contrast to existing algorithms that are hand-tuned or hand-designed from insight and intuition, the proposed technique offers a largely automated and more natural workflow for artists.

For the second type of problem, involving fast computation of geometric attributes in dynamic scenes, I demonstrate algorithms for learning functions of shape animation parameters that exploit the spatial and temporal coherence in the attribute data. The learned mappings can therefore be evaluated very efficiently at runtime, which is especially useful when traditional geometric computations are too expensive to re-estimate the shape attributes at each frame. I apply these algorithms to efficiently compute curvature and higher-order derivatives of animated surfaces. As a result, curvature-dependent tasks such as line drawing, which previously could be performed only offline for animated scenes, can now be executed in real time on modern CPU hardware.
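To make the first idea concrete, here is a minimal sketch of learning a per-face labeling function for mesh segmentation. The method developed in the thesis learns a JointBoost classifier over per-face geometric features and smooths its output with a conditional random field; the sketch below substitutes a generic gradient-boosted classifier, and the feature files, array shapes, and label encoding are illustrative assumptions rather than the thesis's pipeline.

```python
# Minimal sketch of learning a mapping from shape data to part labels.
# NOT the thesis's actual pipeline (JointBoost + CRF smoothing); all file
# names and array shapes here are hypothetical.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical training data: one row of geometric descriptors per mesh
# face (e.g., curvatures, shape diameter, geodesic features) and one
# part label per face taken from the exemplar segmentations.
train_features = np.load("train_face_features.npy")  # (n_faces, n_dims)
train_labels = np.load("train_face_labels.npy")      # (n_faces,) part ids

# Learn the function from shape data to the target property.
clf = GradientBoostingClassifier(n_estimators=200)
clf.fit(train_features, train_labels)

# Apply the learned function to a novel shape: predict a label per face.
# The full method additionally smooths these labels with a CRF so that
# segment boundaries follow natural seams of the surface.
novel_features = np.load("novel_face_features.npy")
predicted_labels = clf.predict(novel_features)
```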
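The second idea admits a similarly small sketch: sample frames offline, compute the expensive attribute exactly, and fit a regressor from animation parameters to the attribute, so that each new frame costs only one cheap prediction. The model choice, file names, and dimensions below are assumptions for illustration; the thesis learns more careful mappings, but the interface is the same: animation parameters in, attribute field out.

```python
# Minimal sketch of regressing an expensive geometric attribute (per-
# vertex curvature) against the animation parameters driving the shape.
# Illustrative only; file names and the linear model are assumptions.
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical training set sampled offline: for each sampled frame, the
# pose parameters and the exactly computed per-vertex curvatures.
poses = np.load("sampled_pose_params.npy")      # (n_frames, n_params)
curvatures = np.load("sampled_curvatures.npy")  # (n_frames, n_vertices)

# Fit one multi-output regressor; spatial and temporal coherence in the
# curvature data is what makes such a low-capacity model plausible.
model = Ridge(alpha=1.0).fit(poses, curvatures)

# At runtime, evaluating the learned map is a single matrix multiply per
# frame, fast enough for curvature-dependent effects like line drawing.
new_pose = poses[:1]                            # (1, n_params)
predicted_curvature = model.predict(new_pose)   # (1, n_vertices)
```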
URI: http://hdl.handle.net/1807/32044
Appears in Collections: Doctoral

Files in This Item:

File: Kalogerakis_E_201011_PhD_thesis.pdf (14.59 MB, Adobe PDF)

Items in T-Space are protected by copyright, with all rights reserved, unless otherwise indicated.
