
Profile



Prof. Dr. Leif Kobbelt
Room 117
Phone: +49 241 8021801
Fax: +49 241 8021801
Email: sekretariati8@informatik.rwth-aachen.de
Office hours: Friday 10:00 - 11:00

(Register via sekretariati8@informatik.rwth-aachen.de)



Publications


Neural Implicit Shape Editing Using Boundary Sensitivity


Arturs Berzins, Moritz Ibing, Leif Kobbelt
International Conference on Learning Representations 2023

Neural fields are receiving increased attention as a geometric representation due to their ability to compactly store detailed and smooth shapes and easily undergo topological changes. Compared to classic geometry representations, however, neural representations do not allow the user to exert intuitive control over the shape. Motivated by this, we leverage boundary sensitivity to express how perturbations in parameters move the shape boundary. This allows us to interpret the effect of each learnable parameter and study achievable deformations. With this, we perform geometric editing: finding a parameter update that best approximates a globally prescribed deformation. Prescribing the deformation only locally allows the rest of the shape to change according to some prior, such as semantics or deformation rigidity. Our method is agnostic to the model and its training and updates the NN in-place. Furthermore, we show how boundary sensitivity helps to optimize and constrain objectives (such as surface area and volume), which are difficult to compute without first converting to another representation, such as a mesh.
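
A minimal sketch of the boundary-sensitivity relation alluded to above (standard level-set reasoning; the notation is ours, not necessarily the paper's): for an implicit shape {x : f(x; θ) = 0}, a parameter perturbation δθ moves the zero level set with normal velocity

\[
  v_n \;=\; -\,\frac{\frac{\partial f}{\partial \theta}\,\delta\theta}{\left\lVert \nabla_x f \right\rVert},
\]

so the effect of each learnable parameter on the boundary can be read off from ∂f/∂θ and the spatial gradient, which is what allows solving for the parameter update that best matches a prescribed deformation.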

BibTeX:

@misc{berzins2023neural,
title={Neural Implicit Shape Editing using Boundary Sensitivity},
author={Arturs Berzins and Moritz Ibing and Leif Kobbelt},
year={2023},
eprint={2304.12951},
archivePrefix={arXiv},
primaryClass={cs.CV}
}





Surface Maps via Adaptive Triangulations


Patrick Schmidt, Dörte Pieper, Leif Kobbelt
Eurographics 2023

We present a new method to compute continuous and bijective maps (surface homeomorphisms) between two or more genus-0 triangle meshes. In contrast to previous approaches, we decouple the resolution at which a map is represented from the resolution of the input meshes. We discretize maps via common triangulations that approximate the input meshes while remaining in bijective correspondence to them. Both the geometry and the connectivity of these triangulations are optimized with respect to a single objective function that simultaneously controls mapping distortion, triangulation quality, and approximation error. A discrete-continuous optimization algorithm performs both energy-based remeshing as well as global second-order optimization of vertex positions, parametrized via the sphere. With this, we combine the disciplines of compatible remeshing and surface map optimization in a unified formulation and make a contribution in both fields. While existing compatible remeshing algorithms often operate on a fixed pre-computed surface map, we can now globally update this correspondence during remeshing. On the other hand, bijective surface-to-surface map optimization previously required computing costly overlay meshes that are inherently tied to the input mesh resolution. We achieve significant complexity reduction by instead assessing distortion between the approximating triangulations. This new map representation is inherently more robust than previous overlay-based approaches, is less intricate to implement, and naturally supports mapping between more than two surfaces. Moreover, it enables adaptive multi-resolution schemes that, e.g., first align corresponding surface regions at coarse resolutions before refining the map where needed. We demonstrate significant speedups and increased flexibility over state-of-the art mapping algorithms at similar map quality, and also provide a reference implementation of the method.



BibTeX:

@article{schmidt2023surface,
title={Surface Maps via Adaptive Triangulations},
author={Schmidt, Patrick and Pieper, D\"orte and Kobbelt, Leif},
year={2023},
journal={Computer Graphics Forum},
volume={42},
number={2},
}





Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences


Moritz Ibing, Gregor Kobsik, Leif Kobbelt
Structural and Compositional Learning on 3D Data (CVPR Workshop 2023)

Autoregressive models have proven to be very powerful in NLP text generation tasks and lately have gained popularity for image generation as well. However, they have seen limited use for the synthesis of 3D shapes so far. This is mainly due to the lack of a straightforward way to linearize 3D data as well as to scaling problems with the length of the resulting sequences when describing complex shapes. In this work we address both of these problems. We use octrees as a compact hierarchical shape representation that can be sequentialized by traversal ordering. Moreover, we introduce an adaptive compression scheme that significantly reduces sequence lengths and thus enables their effective generation with a transformer, while still allowing fully autoregressive sampling and parallel training. We demonstrate the performance of our model by performing super-resolution and comparing against the state of the art in shape generation.
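
As an illustration of the sequentialization step, the following toy sketch (hypothetical node layout and tokens, not the paper's scheme) linearizes an occupancy octree by breadth-first traversal:

from collections import deque

# A node is either a leaf ("full"/"empty") or an internal node with 8 children.
# The layout and tokens are hypothetical; they only illustrate traversal-based linearization.
def linearize_octree(root):
    """Breadth-first traversal: emit one token per node, children in a fixed order."""
    tokens = []
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if node["type"] == "leaf":
            tokens.append("full" if node["occupied"] else "empty")
        else:
            tokens.append("split")            # internal node: refined on a later level
            queue.extend(node["children"])    # fixed child order keeps the sequence unambiguous
    return tokens

# Example: a root that splits once, with a single occupied child.
leaf_empty = {"type": "leaf", "occupied": False}
leaf_full = {"type": "leaf", "occupied": True}
root = {"type": "internal", "children": [leaf_full] + [leaf_empty] * 7}
print(linearize_octree(root))  # ['split', 'full', 'empty', ..., 'empty']

The breadth-first order groups nodes of the same octree level, which is also what makes level-wise compression of the sequence possible.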

BibTeX:

@inproceedings{ibing_octree,
author = {Moritz Ibing and
Gregor Kobsik and
Leif Kobbelt},
title = {Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences},
booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition Workshops,
{CVPR} Workshops 2023},
publisher = {{IEEE}},
year = {2023},
}





Localized Latent Updates for Fine-Tuning Vision-Language Models


Moritz Ibing, Isaak Lim, Leif Kobbelt
Efficient Deep Learning for Computer Vision (CVPR Workshop 2023)

Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities for many tasks, it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we suggest a lightweight adapter that only updates the model's predictions close to seen datapoints. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results both on classes seen and unseen during training are comparable with or improve on the state of the art.
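
As a rough illustration of updating predictions only close to seen datapoints (a hedged sketch, not the authors' adapter), one can blend stored corrections into the base logits with a kernel in embedding space:

import numpy as np

# Hypothetical localized adapter: corrections stored at support embeddings are
# blended into the base logits with a Gaussian kernel, so queries far from all
# seen datapoints keep the original zero-shot prediction.
def adapted_logits(base_logits, x_embed, support_embeds, support_corrections, bandwidth=0.1):
    d2 = np.sum((support_embeds - x_embed) ** 2, axis=1)   # squared distances to seen datapoints
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))                # kernel weights, ~0 far away
    return base_logits + w @ support_corrections            # weighted sum of stored corrections

rng = np.random.default_rng(0)
base = rng.normal(size=5)                      # zero-shot logits over 5 classes
supports = rng.normal(size=(10, 16))           # 10 seen datapoints, 16-d embeddings
corrections = rng.normal(size=(10, 5)) * 0.1   # per-support logit corrections
query = supports[0] + 0.01                     # a query close to a seen datapoint
print(adapted_logits(base, query, supports, corrections))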

BibTeX:

@inproceedings{ibing_localized,
author = {Moritz Ibing and
Isaak Lim and
Leif Kobbelt},
title = {Localized Latent Updates for Fine-Tuning Vision-Language Models},
booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition Workshops,
{CVPR} Workshops 2023},
publisher = {{IEEE}},
year = {2023},
}





Greedy Image Approximation for Artwork Generation via Contiguous Bézier Segments


Julius Nehring-Wirxel, Isaak Lim, Leif Kobbelt
28th International Symposium on Vision, Modeling, and Visualization (VMV) 2023

The automatic creation of digital art has a long history in computer graphics. In this work, we focus on approximating input images to mimic artwork by the artist Kumi Yamashita, as well as the popular scribble art style. Both have in common that the artists create the works by using a single, contiguous thread (Yamashita) or stroke (scribble) that is placed seemingly at random when viewed at close range, but perceived as a tone-mapped picture when viewed from a distance. Our approach takes a rasterized image as input and creates a single, connected path by iteratively sampling a set of candidate segments that extend the current path and greedily selecting the best one. The candidates are sampled according to art style specific constraints, i.e. conforming to continuity constraints in the mathematical sense for the scribble art style. To model the perceptual discrepancy between close and far viewing distances, we minimize the difference between the input image and the image created by rasterizing our path after applying the contrast sensitivity function, which models how human vision blurs images when viewed from a distance. Our approach generalizes to colored images by using one path per color. We evaluate our approach on a wide range of input images and show that it is able to achieve good results for both art styles in grayscale and color.
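
A drastically simplified sketch of the greedy selection loop (straight segments instead of Bézier segments, a toy rasterizer, and a Gaussian blur standing in for the contrast sensitivity function; all parameters are made up for illustration):

import numpy as np
from scipy.ndimage import gaussian_filter

def rasterize_segment(canvas, p0, p1, samples=64):
    """Toy rasterizer: darken pixels along the straight segment p0 -> p1."""
    for t in np.linspace(0.0, 1.0, samples):
        y, x = np.round(p0 + t * (p1 - p0)).astype(int)
        if 0 <= y < canvas.shape[0] and 0 <= x < canvas.shape[1]:
            canvas[y, x] = 0.0
    return canvas

def greedy_path(target, n_steps=200, n_candidates=32, step=8.0, sigma=2.0, seed=0):
    """Greedily extend one connected path so the blurred drawing matches the blurred target."""
    rng = np.random.default_rng(seed)
    canvas = np.ones_like(target)                     # white canvas, dark strokes
    pos = np.array(target.shape, dtype=float) / 2.0   # start in the image center
    path = [pos.copy()]
    blurred_target = gaussian_filter(target, sigma)   # stand-in for the contrast sensitivity function
    for _ in range(n_steps):
        best = (np.inf, None, None)
        for _ in range(n_candidates):                 # sample candidate extensions of the path
            angle = rng.uniform(0.0, 2.0 * np.pi)
            end = pos + step * np.array([np.sin(angle), np.cos(angle)])
            trial = rasterize_segment(canvas.copy(), pos, end)
            err = np.mean((gaussian_filter(trial, sigma) - blurred_target) ** 2)
            if err < best[0]:
                best = (err, trial, end)
        _, canvas, pos = best                         # greedily keep the best candidate
        path.append(pos.copy())
    return canvas, path

target = np.ones((64, 64))
target[16:48, 16:48] = 0.2                            # toy grayscale target
canvas, path = greedy_path(target)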

BibTeX:

@inproceedings{nehringwirxel2023greedy,
title={Greedy Image Approximation for Artwork Generation via Contiguous B{\'{e}}zier Segments},
author={Nehring-Wirxel, Julius and Lim, Isaak and Kobbelt, Leif},
booktitle={28th International Symposium on Vision, Modeling, and Visualization, VMV 2023},
year={2023}
}





EMBER: Exact Mesh Booleans via Efficient & Robust Local Arrangements


Philip Trettner, Julius Nehring-Wirxel, Leif Kobbelt
SIGGRAPH 2022

Boolean operators are an essential tool in a wide range of geometry processing and CAD/CAM tasks. We present a novel method, EMBER, to compute Boolean operations on polygon meshes which is exact, reliable, and highly performant at the same time. Exactness is guaranteed by using a plane-based representation for the input meshes along with recently introduced homogeneous integer coordinates. Reliability and robustness emerge from a formulation of the algorithm via generalized winding numbers and mesh arrangements. High performance is achieved by avoiding the (pre-)construction of a global acceleration structure. Instead, our algorithm performs an adaptive recursive subdivision of the scene’s bounding box while generating and tracking all required data on the fly. By leveraging a number of early-out termination criteria, we can avoid the generation and inspection of regions that do not contribute to the output. With a careful implementation and a work-stealing multi-threading architecture, we are able to compute Boolean operations between meshes with millions of triangles at interactive rates. We run an extensive evaluation on the Thingi10K dataset to demonstrate that our method outperforms state-of-the-art algorithms, even inexact ones like QuickCSG, by orders of magnitude.



If you are interested in a binary implementation including various additional features, please contact the authors.

Contact: trettner@shapedcode.com



Scan2FEM: From Point Clouds to Structured 3D Models Suitable for Simulation


Zain Selman, Juan Musto, Leif Kobbelt
EUROGRAPHICS Workshop on Graphics and Cultural Heritage

Preservation of cultural heritage is important to prevent singular objects or sites of cultural importance from decaying. One aspect of preservation is the creation of a digital twin. In case of a catastrophic event, this twin can be used to support repairs or reconstruction, in order to stay faithful to the original object or site. Certain activities to prolong such an object's lifetime may involve adding or replacing structural support elements to prevent a collapse. We propose an automatic method that is capable of transforming a point cloud into a geometric representation that is suitable for structural analysis. We robustly find cuboids and their connections in a point cloud to approximate the wooden beam structure contained inside. We export the necessary information to perform structural analysis, using the example of the timber attic of the UNESCO World Heritage Aachen Cathedral. We provide an evaluation of the resulting cuboids' quality and show how a user can interactively refine the cuboids in order to improve the approximated model, and consequently the simulation results.

BibTeX:

@inproceedings {10.2312:gch.20221215,
booktitle = {Eurographics Workshop on Graphics and Cultural Heritage},
editor = {Ponchio, Federico and Pintus, Ruggero},
title = {{Scan2FEM: From Point Clouds to Structured 3D Models Suitable for Simulation}},
author = {Selman, Zain and Musto, Juan and Kobbelt, Leif},
year = {2022},
publisher = {The Eurographics Association},
ISSN = {2312-6124},
ISBN = {978-3-03868-178-6},
DOI = {10.2312/gch.20221215}
}





TinyAD: Automatic Differentiation in Geometry Processing Made Simple


Patrick Schmidt, Janis Born, David Bommes, Marcel Campen, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2022

Non-linear optimization is essential to many areas of geometry processing research. However, when experimenting with different problem formulations or when prototyping new algorithms, a major practical obstacle is the need to figure out derivatives of objective functions, especially when second-order derivatives are required. Deriving and manually implementing gradients and Hessians is both time-consuming and error-prone. Automatic differentiation techniques address this problem, but can introduce a diverse set of obstacles themselves, e.g. limiting the set of supported language features, imposing restrictions on a program's control flow, incurring a significant run time overhead, or making it hard to exploit sparsity patterns common in geometry processing. We show that for many geometric problems, in particular on meshes, the simplest form of forward-mode automatic differentiation is not only the most flexible, but also actually the most efficient choice. We introduce TinyAD: a lightweight C++ library that automatically computes gradients and Hessians, in particular of sparse problems, by differentiating small (tiny) sub-problems. Its simplicity enables easy integration; no restrictions on, e.g., looping and branching are imposed. TinyAD provides the basic ingredients to quickly implement first and second order Newton-style solvers, allowing for flexible adjustment of both problem formulations and solver details. By showcasing compact implementations of methods from parametrization, deformation, and direction field design, we demonstrate how TinyAD lowers the barrier to exploring non-linear optimization techniques. This enables not only fast prototyping of new research ideas, but also improves replicability of existing algorithms in geometry processing. TinyAD is available to the community as an open source library.
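
TinyAD itself is a C++ library; the following is only a language-neutral illustration of the principle it builds on (forward-mode differentiation of small per-element sub-problems), not TinyAD's API:

class Dual:
    """Minimal forward-mode scalar: a value and one derivative, propagated through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def _wrap(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._wrap(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    def __sub__(self, o):
        o = self._wrap(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __mul__(self, o):
        o = self._wrap(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)

def grad(f, x):
    """Gradient of a tiny sub-problem f: R^n -> R via n forward passes (one per variable)."""
    g = []
    for i in range(len(x)):
        duals = [Dual(x[j], 1.0 if i == j else 0.0) for j in range(len(x))]
        g.append(f(duals).dot)
    return g

# Example: a per-element energy of three coordinates, as it would appear in a mesh-based objective.
energy = lambda v: (v[0] - v[1]) * (v[0] - v[1]) + (v[1] - v[2]) * (v[1] - v[2])
print(grad(energy, [0.0, 1.0, 3.0]))   # [-2.0, -2.0, 4.0]

Differentiating each small element function separately and scattering the results into a global gradient (and Hessian) is what keeps the sparsity of mesh-based problems intact.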



BibTeX:

@article{schmidt2022tinyad,
title={{TinyAD}: Automatic Differentiation in Geometry Processing Made Simple},
author={Schmidt, Patrick and Born, Janis and Bommes, David and Campen, Marcel and Kobbelt, Leif},
year={2022},
journal={Computer Graphics Forum},
volume={41},
number={5},
}





Pseudodynamic analysis of heart tube formation in the mouse reveals strong regional variability and early left–right asymmetry


Isaac Esteban, Patrick Schmidt, Audrey Desgrange, Morena Raiola, Susana Temiño, Sigolène Meilhac, Leif Kobbelt, Miguel Torres
Nature Cardiovascular Research

Understanding organ morphogenesis requires a precise geometrical description of the tissues involved in the process. The high morphological variability in mammalian embryos hinders the quantitative analysis of organogenesis. In particular, the study of early heart development in mammals remains a challenging problem due to imaging limitations and complexity. Here, we provide a complete morphological description of mammalian heart tube formation based on detailed imaging of a temporally dense collection of mouse embryonic hearts. We develop strategies for morphometric staging and quantification of local morphological variations between specimens. We identify hot spots of regionalized variability and identify Nodal-controlled left–right asymmetry of the inflow tracts as the earliest signs of organ left–right asymmetry in the mammalian embryo. Finally, we generate a three-dimensional+t digital model that allows co-representation of data from different sources and provides a framework for the computer modeling of heart tube formation.

BibTeX:

@article{esteban2022pseudodynamic,
author = {Esteban, Isaac and Schmidt, Patrick and Desgrange, Audrey and Raiola, Morena and Temi{\~n}o, Susana and Meilhac, Sigol\`{e}ne M. and Kobbelt, Leif and Torres, Miguel},
title = {Pseudo-dynamic analysis of heart tube formation in the mouse reveals strong regional variability and early left-right asymmetry},
year = {2022},
journal = {Nature Cardiovascular Research},
volume = 1,
number = 5
}





Interactive Segmentation of Textured Point Clouds


Patric Schmitz, Sebastian Suder, Kersten Schuster, Leif Kobbelt
International Symposium on Vision, Modeling, and Visualization 2022

We present a method for the interactive segmentation of textured 3D point clouds. The problem is formulated as a minimum graph cut on a k-nearest neighbor graph and leverages the rich information contained in high-resolution photographs as the discriminative feature. We demonstrate that the achievable segmentation accuracy is significantly improved compared to using an average color per point as in prior work. The method is designed to work efficiently on large datasets and yields results at interactive rates. This way, an interactive workflow can be realized in an immersive virtual environment, which supports the segmentation task by improved depth perception and the use of tracked 3D input devices. Our method makes it possible to create high-quality segmentations of textured point clouds quickly and conveniently.

BibTeX:

@inproceedings {10.2312:vmv.20221200,
booktitle = {Vision, Modeling, and Visualization},
editor = {Bender, Jan and Botsch, Mario and Keim, Daniel A.},
title = {{Interactive Segmentation of Textured Point Clouds}},
author = {Schmitz, Patric and Suder, Sebastian and Schuster, Kersten and Kobbelt, Leif},
year = {2022},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-189-2},
DOI = {10.2312/vmv.20221200}
}





Automatic region-growing system for the segmentation of large point clouds


Florent Poux, Christian Mattes, Zain Selman, Leif Kobbelt
Automation in Construction

This article describes a complete unsupervised system for the segmentation of massive 3D point clouds. Our system bridges the missing components that permit going from 99% automation to 100% automation for the construction industry. It scales up to billions of 3D points and targets a generic low-level grouping of planar regions usable by a wide range of applications. Furthermore, we introduce a hierarchical multi-level segment definition to cope with potential variations in high-level object definitions. The approach first leverages planar predominance in scenes through a normal-based region growing. Then, for usability and simplicity, we designed an automatic heuristic to determine, without user supervision, three RANSAC-inspired parameters: the distance threshold for the region growing, the threshold for the minimum number of points needed to form a valid planar region, and the decision criterion for adding points to a region. Our experiments are conducted on 3D scans of complex buildings to test the robustness of the “one-click” method in varying scenarios. Labelled and instantiated point clouds from different sensors and platforms (depth sensor, terrestrial laser scanner, hand-held laser scanner, mobile mapping system), in different environments (indoor, outdoor, buildings) and with different objects of interest (AEC-related, BIM-related, navigation-related) are provided as a new extensive test-bench. The current implementation processes ten million points per minute on a single-thread CPU configuration. Moreover, the resulting segments are tested for the high-level task of semantic segmentation over 14 classes, achieving an F1-score of 90+ averaged over all datasets while reducing the training phase to a fraction of that of state-of-the-art point-based deep learning methods. We provide this baseline along with six new open-access datasets with 300+ million hand-labelled and instantiated 3D points at: https://www.graphics.rwth-aachen.de/project/45/.
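
A bare-bones sketch of the normal-based region growing stage (hand-picked, hypothetical parameters; the point of the paper is precisely to determine such parameters automatically):

import numpy as np
from scipy.spatial import cKDTree

def region_growing(points, normals, k=16, angle_thresh_deg=10.0, min_points=50):
    """Group points into near-planar segments by growing over k-nearest neighbors.
    Assumes unit-length normals; uses a kNN graph instead of a distance threshold."""
    tree = cKDTree(points)
    cos_thresh = np.cos(np.radians(angle_thresh_deg))
    labels = -np.ones(len(points), dtype=int)    # -1: unvisited, -2: noise, >=0: segment id
    next_label = 0
    for seed in range(len(points)):
        if labels[seed] != -1:
            continue
        stack, region = [seed], [seed]
        labels[seed] = next_label
        while stack:
            idx = stack.pop()
            _, nbrs = tree.query(points[idx], k=k)
            for j in nbrs:
                if labels[j] == -1 and abs(np.dot(normals[idx], normals[j])) > cos_thresh:
                    labels[j] = next_label
                    stack.append(j)
                    region.append(j)
        if len(region) < min_points:              # minimum-number-of-points criterion
            labels[np.array(region)] = -2
        else:
            next_label += 1
    return labels

pts = np.random.default_rng(0).uniform(size=(1000, 3))
nrm = np.tile([0.0, 0.0, 1.0], (1000, 1))         # all normals agree: one large segment
print(np.unique(region_growing(pts, nrm)))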

BibTeX:

@article{POUX2022104250,
title = {Automatic region-growing system for the segmentation of large point clouds},
journal = {Automation in Construction},
volume = {138},
pages = {104250},
year = {2022},
issn = {0926-5805},
doi = {https://doi.org/10.1016/j.autcon.2022.104250},
url = {https://www.sciencedirect.com/science/article/pii/S0926580522001236},
author = {F. Poux and C. Mattes and Z. Selman and L. Kobbelt},
keywords = {3D point cloud, Segmentation, Region-growing, RANSAC, Unsupervised clustering}
}





3D Shape Generation with Grid-based Implicit Functions


Moritz Ibing, Isaak Lim, Leif Kobbelt
IEEE Conference on Computer Vision and Pattern Recognition

Previous approaches to generate shapes in a 3D setting train a GAN on the latent space of an autoencoder (AE). Even though this produces convincing results, it has two major shortcomings. As the GAN is limited to reproduce the dataset the AE was trained on, we cannot reuse a trained AE for novel data. Furthermore, it is difficult to add spatial supervision into the generation process, as the AE only gives us a global representation. To remedy these issues, we propose to train the GAN on grids (i.e. each cell covers a part of a shape). In this representation each cell is equipped with a latent vector provided by an AE. This localized representation enables more expressiveness (since the cell-based latent vectors can be combined in novel ways) as well as spatial control of the generation process (e.g. via bounding boxes). Our method outperforms the current state of the art on all established evaluation measures, proposed for quantitatively evaluating the generative capabilities of GANs. We show limitations of these measures and propose the adaptation of a robust criterion from statistical analysis as an alternative.

BibTeX:

@inproceedings {ibing20213Dshape,
title = {3D Shape Generation with Grid-based Implicit Functions},
author = {Ibing, Moritz and Lim, Isaak and Kobbelt, Leif},
booktitle = {IEEE Computer Vision and Pattern Recognition (CVPR)},
pages = {},
year = {2021}
}





Learning Direction Fields for Quad Mesh Generation


Alexander Dielen, Isaak Lim, Max Lyon, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2021

State of the art quadrangulation methods are able to reliably and robustly convert triangle meshes into quad meshes. Most of these methods rely on a dense direction field that is used to align a parametrization from which a quad mesh can be extracted. In this context, the aforementioned direction field is of particular importance, as it plays a key role in determining the structure of the generated quad mesh. If there are no user-provided directions available, the direction field is usually interpolated from a subset of principal curvature directions. To this end, a number of heuristics that aim to identify significant surface regions have been proposed. Unfortunately, the resulting fields often fail to capture the structure found in meshes created by human experts. This is due to the fact that experienced designers can leverage their domain knowledge in order to optimize a mesh for a specific application. In the context of physics simulation, for example, a designer might prefer an alignment and local refinement that facilitates a more accurate numerical simulation. Similarly, a character artist may prefer an alignment that makes the resulting mesh easier to animate. Crucially, this higher level domain knowledge cannot be easily extracted from local curvature information alone. Motivated by this issue, we propose a data-driven approach to the computation of direction fields that allows us to mimic the structure found in existing meshes, which could originate from human experts or other sources. More specifically, we make use of a neural network that aggregates global and local shape information in order to compute a direction field that can be used to guide a parametrization-based quad meshing method. Our approach is a first step towards addressing this challenging problem with a fully automatic learning-based method. We show that compared to classical techniques our data-driven approach combined with a robust model-driven method, is able to produce results that more closely exhibit the ground truth structure of a synthetic dataset (i.e. a manually designed quad mesh template fitted to a variety of human body types in a set of different poses).

BibTeX:

@article{dielen2021learning_direction_fields,
title={Learning Direction Fields for Quad Mesh Generation},
author={Dielen, Alexander and Lim, Isaak and Lyon, Max and Kobbelt, Leif},
year={2021},
journal={Computer Graphics Forum},
volume={40},
number={5},
}





Simpler Quad Layouts using Relaxed Singularities


Max Lyon, Marcel Campen, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2021

A common approach to automatic quad layout generation on surfaces is to, in a first stage, decide on the positioning of irregular layout vertices, followed by finding sensible layout edges connecting these vertices and partitioning the surface into quadrilateral patches in a second stage. While this two-step approach reduces the problem's complexity, this separation also limits the result quality. In the worst case, the set of layout vertices fixed in the first stage without consideration of the second may not even permit a valid quad layout. We propose an algorithm for the creation of quad layouts in which the initial layout vertices can be adjusted in the second stage. Whenever beneficial for layout quality or even validity, these vertices may be moved within a prescribed radius or even be removed. Our algorithm is based on a robust quantization strategy, turning a continuous T-mesh structure into a discrete layout. We show the effectiveness of our algorithm on a variety of inputs.

BibTeX:

@article{lyon2021simplerlayouts,
title={Simpler Quad Layouts using Relaxed Singularities},
author={Lyon, Max and Campen, Marcel and Kobbelt, Leif},
year={2021},
journal={Computer Graphics Forum},
volume={40},
number={5},
}





Surface Map Homology Inference


Janis Born, Patrick Schmidt, Marcel Campen, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2021

A homeomorphism between two surfaces not only defines a (continuous and bijective) geometric correspondence of points but also (by implication) an identification of topological features, i.e. handles and tunnels, and how the map twists around them. However, in practice, surface maps are often encoded via sparse correspondences or fuzzy representations that merely approximate a homeomorphism and are therefore inherently ambiguous about map topology. In this work, we show a way to infer topological information from an imperfect input map between two shapes. In particular, we compute a homology map, a linear map that transports homology classes of cycles from one surface to the other, subject to a global consistency constraint. Our inference robustly handles imperfect (e.g., partial, sparse, fuzzy, noisy, outlier-ridden, non-injective) input maps and is guaranteed to produce homology maps that are compatible with true homeomorphisms between the input shapes. Homology maps inferred by our method can be directly used to transfer homological information between shapes, or serve as foundation for the construction of a proper homeomorphism guided by the input map, e.g., via compatible surface decomposition.



BibTeX:

@article{born2021surface,
title={Surface Map Homology Inference},
author={Born, Janis and Schmidt, Patrick and Campen, Marcel and Kobbelt, Leif},
year={2021},
journal={Computer Graphics Forum},
volume={40},
number={5},
}





Geodesic Distance Computation via Virtual Source Propagation


Philip Trettner, David Bommes, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2021

We present a highly practical, efficient, and versatile approach for computing approximate geodesic distances. The method is designed to operate on triangle meshes and a set of point sources on the surface. We also show extensions for all kinds of geometric input including inconsistent triangle soups and point clouds, as well as other source types, such as lines. The algorithm is based on the propagation of virtual sources and hence easy to implement. We extensively evaluate our method on about 10000 meshes taken from the Thingi10k and the Tet Meshing in the Wild data sets. Our approach clearly outperforms previous approximate methods in terms of runtime efficiency and accuracy. Through careful implementation and cache optimization, we achieve runtimes comparable to other elementary mesh operations (e.g. smoothing, curvature estimation) such that geodesic distances become a "first-class citizen" in the toolbox of geometric operations. Our method can be parallelized and we observe up to 6× speed-up on the CPU and 20× on the GPU. We present a number of mesh processing tasks easily implemented on the basis of fast geodesic distances. The source code of our method will be provided as a C++ library under the MIT license.

Note: we are currently in the process of cleaning up and documenting the source code. A basic implementation can already be found in the supplemental material.
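
For context, the most naive approximation restricts shortest paths to the mesh edge graph (Dijkstra), which systematically overestimates geodesic distances; the baseline sketch below is not the virtual-source propagation described above:

import heapq
import numpy as np

def edge_graph_dijkstra(vertices, faces, sources):
    """Approximate geodesic distances by shortest paths restricted to mesh edges."""
    n = len(vertices)
    adj = [[] for _ in range(n)]
    for f in faces:
        for a, b in ((f[0], f[1]), (f[1], f[2]), (f[2], f[0])):
            w = float(np.linalg.norm(vertices[a] - vertices[b]))
            adj[a].append((b, w))
            adj[b].append((a, w))
    dist = np.full(n, np.inf)
    heap = []
    for s in sources:
        dist[s] = 0.0
        heapq.heappush(heap, (0.0, s))
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

# Two triangles sharing an edge: the edge-graph distance from vertex 0 to vertex 3 is 2,
# while the true geodesic distance is sqrt(2).
V = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
F = [(0, 1, 2), (1, 3, 2)]
print(edge_graph_dijkstra(V, F, sources=[0]))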




Sampling from Quadric-Based CSG Surfaces


Philip Trettner, Leif Kobbelt
High-Performance Graphics 2021

We present an efficient method to create samples directly on surfaces defined by constructive solid geometry (CSG) trees or graphs. The generated samples can be used for visualization or as an approximation to the actual surface with strong guarantees. We chose to use quadric surfaces as CSG primitives as they can model classical primitives such as planes, cubes, spheres, cylinders, and ellipsoids, but also certain saddle surfaces. More importantly, they are closed under affine transformations, a desirable property for a modeling system. We also propose a rendering method that performs local quadric ray-tracing and clipping to achieve pixel-perfect accuracy and hole-free rendering.




Layout Embedding via Combinatorial Optimization


Janis Born, Patrick Schmidt, Leif Kobbelt
Eurographics 2021

We consider the problem of injectively embedding a given graph connectivity (a layout) into a target surface. Starting from prescribed positions of layout vertices, the task is to embed all layout edges as intersection-free paths on the surface. Besides merely geometric choices (the shape of paths) this problem is especially challenging due to its topological degrees of freedom (how to route paths around layout vertices). The problem is typically addressed through a sequence of shortest path insertions, ordered by a greedy heuristic. Such insertion sequences are not guaranteed to be optimal: Early path insertions can potentially force later paths into unexpected homotopy classes. We show how common greedy methods can easily produce embeddings of dramatically bad quality, rendering such methods unsuitable for automatic processing pipelines. Instead, we strive to find the optimal order of insertions, i.e. the one that minimizes the total path length of the embedding. We demonstrate that, despite the vast combinatorial solution space, this problem can be effectively solved on simply-connected domains via a custom-tailored branch-and-bound strategy. This enables directly using the resulting embeddings in downstream applications which cannot recover from initializations in a wrong homotopy class. We demonstrate the robustness of our method on a shape dataset by embedding a common template layout per category, and show applications in quad meshing and inter-surface mapping.



BibTeX:

@article{born2021layout,
title={Layout Embedding via Combinatorial Optimization},
author={Born, Janis and Schmidt, Patrick and Kobbelt, Leif},
year={2021},
journal={Computer Graphics Forum},
volume={40},
number={2},
}





Compression and Rendering of Textured Point Clouds via Sparse Coding


Kersten Schuster, Philip Trettner, Patric Schmitz, Julian Schakib, Leif Kobbelt
High-Performance Graphics 2021

Splat-based rendering techniques produce highly realistic renderings from 3D scan data without prior mesh generation. Mapping high-resolution photographs to the splat primitives enables detailed reproduction of surface appearance. However, in many cases these massive datasets do not fit into GPU memory. In this paper, we present a compression and rendering method that is designed for large textured point cloud datasets. Our goal is to achieve compression ratios that outperform generic texture compression algorithms, while still retaining the ability to efficiently render without prior decompression. To achieve this, we resample the input textures by projecting them onto the splats and create a fixed-size representation that can be approximated by a sparse dictionary coding scheme. Each splat has a variable number of codeword indices and associated weights, which define the final texture as a linear combination during rendering. For further reduction of the memory footprint, we compress geometric attributes by careful clustering and quantization of local neighborhoods. Our approach reduces the memory requirements of textured point clouds by one order of magnitude, while retaining the possibility to efficiently render the compressed data.
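
The per-splat texture representation (a handful of codeword indices and weights, combined linearly at render time) can be sketched with a tiny dictionary-coding example (random data and greedy matching pursuit, purely to show the mechanics; the paper's dictionary construction is more involved):

import numpy as np

rng = np.random.default_rng(0)
dictionary = rng.normal(size=(64, 16 * 16))    # 64 codewords, each a 16x16 texture tile

def encode(tile, k=4):
    """Greedy matching pursuit: pick k codeword indices and weights approximating the tile."""
    residual = tile.copy()
    indices, weights = [], []
    for _ in range(k):
        scores = dictionary @ residual                         # correlation with each codeword
        i = int(np.argmax(np.abs(scores)))
        w = scores[i] / np.dot(dictionary[i], dictionary[i])
        indices.append(i)
        weights.append(w)
        residual = residual - w * dictionary[i]
    return indices, weights

def decode(indices, weights):
    """Rendering-time reconstruction: a linear combination of the selected codewords."""
    return sum(w * dictionary[i] for i, w in zip(indices, weights))

tile = rng.normal(size=16 * 16)
idx, wts = encode(tile)
print(np.linalg.norm(tile - decode(idx, wts)) / np.linalg.norm(tile))   # relative residual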




Quad Layouts via Constrained T-Mesh Quantization


Max Lyon, Marcel Campen, Leif Kobbelt
Eurographics 2021

We present a robust and fast method for the creation of conforming quad layouts on surfaces. Our algorithm is based on the quantization of a T-mesh, i.e. an assignment of integer lengths to the sides of a non-conforming rectangular partition of the surface. This representation has the benefit of being able to encode an infinite number of layout connectivity options in a finite manner, which guarantees that a valid layout can always be found. We carefully construct the T-mesh from a given seamless parametrization such that the algorithm can provide guarantees on the results' quality. In particular, the user can specify a bound on the angular deviation of layout edges from prescribed directions. We solve an integer linear program (ILP) to find a coarse quad layout adhering to that maximal deviation. Our algorithm is guaranteed to yield a conforming quad layout free of T-junctions together with bounded angle distortion. Our results show that the presented method is fast, reliable, and achieves high quality layouts.

BibTeX:

@article{Lyon:2021:Quad,
title = {Quad Layouts via Constrained T-Mesh Quantization},
author = {Lyon, Max and Campen, Marcel and Kobbelt, Leif},
journal = {Computer Graphics Forum},
volume = {40},
number = {2},
year = {2021}
}





Intuitive Shape Editing in Latent Space


Tim Elsner, Moritz Ibing, Victor Czech, Julius Nehring-Wirxel, Leif Kobbelt
arXiv

The use of autoencoders for shape editing or generation through latent space manipulation suffers from unpredictable changes in the output shape. Our autoencoder-based method enables intuitive shape editing in latent space by disentangling latent sub-spaces into style variables and control points on the surface that can be manipulated independently. The key idea is adding a Lipschitz-type constraint to the loss function, i.e. bounding the change of the output shape proportionally to the change in latent space, leading to interpretable latent space representations. The control points on the surface that are part of the latent code of an object can then be freely moved, allowing for intuitive shape editing directly in latent space. We evaluate our method by comparing to state-of-the-art data-driven shape editing methods. We further demonstrate the expressiveness of our learned latent space by leveraging it for unsupervised part segmentation.
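
A schematic of a Lipschitz-type regularizer of the kind described above (a hedged PyTorch sketch; names, shapes, and the weighting are assumptions, not the authors' exact loss):

import torch

def lipschitz_penalty(decoder, z_a, z_b, lipschitz_bound=1.0):
    """Penalize output changes that exceed a bound proportional to the latent change."""
    out_change = torch.norm(decoder(z_a) - decoder(z_b), dim=-1)
    latent_change = torch.norm(z_a - z_b, dim=-1)
    excess = torch.relu(out_change - lipschitz_bound * latent_change)
    return excess.mean()

decoder = torch.nn.Linear(8, 3)                       # stand-in decoder
z = torch.randn(32, 8)                                # a batch of latent codes
penalty = lipschitz_penalty(decoder, z, z + 0.01 * torch.randn_like(z))
penalty.backward()                                    # added to the usual autoencoder loss in practice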




Highly accurate digital traffic recording as a basis for future mobility research: Methods and concepts of the research project HDV-Mess


Philip Trettner, Tim Elsner, Julius Nehring-Wirxel, Kersten Schuster, Leif Kobbelt
arXiv

The research project HDV-Mess aims at a currently missing, but very crucial component for addressing important challenges in the field of connected and automated driving on public roads. The goal is to record traffic events at various relevant locations with high accuracy and to collect real traffic data as a basis for the development and validation of current and future sensor technologies as well as automated driving functions. For this purpose, it is necessary to develop a concept for a mobile modular system of measuring stations for highly accurate traffic data acquisition, which enables a temporary installation of a sensor and communication infrastructure at different locations. Within this paper, we first discuss the project goals before we present our traffic detection concept using mobile modular intelligent transport systems stations (ITS-Ss). We then explain the approaches for data processing of sensor raw data to refined trajectories, data communication, and data validation.

BibTeX:

@article{DBLP:journals/corr/abs-2106-04175,
author = {Laurent Kloeker and
Fabian Thomsen and
Lutz Eckstein and
Philip Trettner and
Tim Elsner and
Julius Nehring{-}Wirxel and
Kersten Schuster and
Leif Kobbelt and
Michael Hoesch},
title = {Highly accurate digital traffic recording as a basis for future mobility
research: Methods and concepts of the research project HDV-Mess},
journal = {CoRR},
volume = {abs/2106.04175},
year = {2021},
url = {https://arxiv.org/abs/2106.04175},
eprinttype = {arXiv},
eprint = {2106.04175},
timestamp = {Fri, 11 Jun 2021 11:04:16 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2106-04175.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}





Fast Exact Booleans for Iterated CSG using Octree-Embedded BSPs


Julius Nehring-Wirxel, Philip Trettner, Leif Kobbelt
Computer-Aided Design

We present octree-embedded BSPs, a volumetric mesh data structure suited for performing a sequence of Boolean operations (iterated CSG) efficiently. At its core, our data structure leverages a plane-based geometry representation and integer arithmetics to guarantee unconditionally robust operations. These typically present considerable performance challenges which we overcome by using custom-tailored fixed-precision operations and an efficient algorithm for cutting a convex mesh against a plane. Consequently, BSP Booleans and mesh extraction are formulated in terms of mesh cutting. The octree is used as a global acceleration structure to keep modifications local and bound the BSP complexity. With our optimizations, we can perform up to 2.5 million mesh-plane cuts per second on a single core, which creates roughly 40-50 million output BSP nodes for CSG. We demonstrate our system in two iterated CSG settings: sweep volumes and a milling simulation.
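
The robustness claim hinges on exact predicates; as a minimal illustration (not the paper's data structures), classifying a point against a plane is exact when coordinates are integers, because the sign of an integer expression cannot be corrupted by rounding:

# Classify an integer point against the integer plane a*x + b*y + c*z + d = 0.
# Python integers are arbitrary precision, so the sign below is always exact;
# this is the kind of predicate a robust BSP mesh cut relies on.
def classify(plane, point):
    a, b, c, d = plane
    x, y, z = point
    s = a * x + b * y + c * z + d
    return "front" if s > 0 else "back" if s < 0 else "on"

plane = (1, 0, 0, -5)                 # the plane x = 5
print(classify(plane, (7, 2, 1)))     # front
print(classify(plane, (5, -3, 9)))    # on
print(classify(plane, (4, 0, 0)))     # back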

BibTeX:

@article{NEHRINGWIRXEL2021103015,
title = {Fast Exact Booleans for Iterated CSG using Octree-Embedded BSPs},
journal = {Computer-Aided Design},
volume = {135},
pages = {103015},
year = {2021},
issn = {0010-4485},
doi = {https://doi.org/10.1016/j.cad.2021.103015},
url = {https://www.sciencedirect.com/science/article/pii/S0010448521000269},
author = {Julius Nehring-Wirxel and Philip Trettner and Leif Kobbelt},
keywords = {Plane-based geometry, CSG, Mesh Booleans, BSP, Octree, Integer arithmetic},
}





Inter-Surface Maps via Constant-Curvature Metrics


Patrick Schmidt, Marcel Campen, Janis Born, Leif Kobbelt
SIGGRAPH 2020

We propose a novel approach to represent maps between two discrete surfaces of the same genus and to minimize intrinsic mapping distortion. Our maps are well-defined at every surface point and are guaranteed to be continuous bijections (surface homeomorphisms). As a key feature of our approach, only the images of vertices need to be represented explicitly, since the images of all other points (on edges or in faces) are properly defined implicitly. This definition is via unique geodesics in metrics of constant Gaussian curvature. Our method is built upon the fact that such metrics exist on surfaces of arbitrary topology, without the need for any cuts or cones (as asserted by the uniformization theorem). Depending on the surfaces' genus, these metrics exhibit one of the three classical geometries: Euclidean, spherical or hyperbolic. Our formulation handles constructions in all three geometries in a unified way. In addition, by considering not only the vertex images but also the discrete metric as degrees of freedom, our formulation enables us to simultaneously optimize the images of these vertices and images of all other points.



BibTeX:

@article{schmidt2020intersurface,
author = {Schmidt, Patrick and Campen, Marcel and Born, Janis and Kobbelt, Leif},
title = {Inter-Surface Maps via Constant-Curvature Metrics},
journal = {ACM Transactions on Graphics},
issue_date = {July 2020},
volume = {39},
number = {4},
month = jul,
year = {2020},
articleno = {119},
url = {https://doi.org/10.1145/3386569.3392399},
doi = {10.1145/3386569.3392399},
publisher = {ACM},
address = {New York, NY, USA},
}





Rilievo: Artistic Scene Authoring via Interactive Height Map Extrusion in VR


Sevinc Eroglu, Patric Schmitz, Carlos Aguilera Martinez, Jana Rusch, Leif Kobbelt, Torsten Wolfgang Kuhlen
ACM SIGGRAPH 2020 Art Papers. Published in Leonardo Journal.

The authors present a virtual authoring environment for artistic creation in VR. It enables the effortless conversion of 2D images into volumetric 3D objects. Artistic elements in the input material are extracted with a convenient VR-based segmentation tool. Relief sculpting is then performed by interactively mixing different height maps. These are automatically generated from the input image structure and appearance. A prototype of the tool is showcased in an analog-virtual artistic workflow in collaboration with a traditional painter. It combines the expressiveness of analog painting and sculpting with the creative freedom of spatial arrangement in VR.

BibTeX:

@article{eroglu2020rilievo,
title={Rilievo: Artistic Scene Authoring via Interactive Height Map Extrusion in VR},
author={Eroglu, Sevinc and Schmitz, Patric and Martinez, Carlos Aguilera and Rusch, Jana and Kobbelt, Leif and Kuhlen, Torsten W},
journal={Leonardo},
volume={53},
number={4},
pages={438--441},
year={2020},
publisher={MIT Press}
}





Fast and Robust QEF Minimization using Probabilistic Quadrics


Philip Trettner, Leif Kobbelt
Computer Graphics Forum (Proc. EUROGRAPHICS 2020)

Error quadrics are a fundamental and powerful building block in many geometry processing algorithms. However, finding the minimizer of a given quadric is in many cases not robust and requires a singular value decomposition or some ad-hoc regularization. While classical error quadrics measure the squared deviation from a set of ground truth planes or polygons, we treat the input data as genuinely uncertain information and embed error quadrics in a probabilistic setting ("probabilistic quadrics") where the optimal point minimizes the expected squared error. We derive closed form solutions for the popular plane and triangle quadrics subject to (spatially varying, anisotropic) Gaussian noise. Probabilistic quadrics can be minimized robustly by solving a simple linear system - 50x faster than SVD. We show that probabilistic quadrics have superior properties in tasks like decimation and isosurface extraction since they favor more uniform triangulations and are more tolerant to noise while still maintaining feature sensitivity. A broad spectrum of applications can directly benefit from our new quadrics as a drop-in replacement which we demonstrate with mesh smoothing via filtered quadrics and non-linear subdivision surfaces.
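
For background, the classical plane-based error quadric and its minimizer look as follows (an illustrative sketch; the probabilistic quadrics of the paper add closed-form noise-dependent terms and avoid the need for an SVD fallback):

import numpy as np

def plane_quadric(normals, offsets):
    """Classical quadric of planes n_i . x + d_i = 0: E(x) = x^T A x + 2 b^T x + c."""
    A = sum(np.outer(n, n) for n in normals)
    b = sum(d * n for n, d in zip(normals, offsets))
    c = sum(d * d for d in offsets)
    return A, b, c

def minimize_quadric(A, b):
    """The minimizer solves A x = -b; lstsq stands in for the usual regularization/SVD fallback."""
    x, *_ = np.linalg.lstsq(A, -b, rcond=None)
    return x

normals = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0])]
offsets = [-1.0, -2.0, -3.0]            # the planes x = 1, y = 2, z = 3
A, b, c = plane_quadric(normals, offsets)
print(minimize_quadric(A, b))           # ~[1. 2. 3.]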

BibTeX:

@article {10.1111:cgf.13933,
journal = {Computer Graphics Forum},
title = {{Fast and Robust QEF Minimization using Probabilistic Quadrics}},
author = {Trettner, Philip and Kobbelt, Leif},
year = {2020},
publisher = {The Eurographics Association and John Wiley & Sons Ltd.},
ISSN = {1467-8659},
DOI = {10.1111/cgf.13933}
}





High-Fidelity Point-Based Rendering of Large-Scale 3D Scan Datasets


Patric Schmitz, Timothy Blut, Christian Mattes, Leif Kobbelt
IEEE Computer Graphics and Applications

Digitalization of 3D objects and scenes using modern depth sensors and high-resolution RGB cameras enables the preservation of human cultural artifacts at an unprecedented level of detail. Interactive visualization of these large datasets, however, is challenging without degradation in visual fidelity. A common solution is to fit the dataset into available video memory by downsampling and compression. The achievable reproduction accuracy is thereby limited for interactive scenarios, such as immersive exploration in Virtual Reality (VR). This degradation in visual realism ultimately hinders the effective communication of human cultural knowledge. This article presents a method to render 3D scan datasets with minimal loss of visual fidelity. A point-based rendering approach visualizes scan data as a dense splat cloud. For improved surface approximation of thin and sparsely sampled objects, we propose oriented 3D ellipsoids as rendering primitives. To render massive texture datasets, we present a virtual texturing system that dynamically loads required image data. It is paired with a single-pass page prediction method that minimizes visible texturing artifacts. Our system renders a challenging dataset in the order of 70 million points and a texture size of 1.2 terabytes consistently at 90 frames per second in stereoscopic VR.




High-Performance Image Filters via Sparse Approximations


Kersten Schuster, Philip Trettner, Leif Kobbelt
Proceedings of the ACM on Computer Graphics and Interactive Techniques, Vol. 3, No. 2, 2020

We present a numerical optimization method to find highly efficient (sparse) approximations for convolutional image filters. Using a modified parallel tempering approach, we solve a constrained optimization that maximizes approximation quality while strictly staying within a user-prescribed performance budget. The results are multi-pass filters where each pass computes a weighted sum of bilinearly interpolated sparse image samples, exploiting hardware acceleration on the GPU. We systematically decompose the target filter into a series of sparse convolutions, trying to find good trade-offs between approximation quality and performance. Since our sparse filters are linear and translation-invariant, they do not exhibit the aliasing and temporal coherence issues that often appear in filters working on image pyramids. We show several applications, ranging from simple Gaussian or box blurs to the emulation of sophisticated Bokeh effects with user-provided masks. Our filters achieve high performance as well as high quality, often providing significant speed-up at acceptable quality even for separable filters. The optimized filters can be baked into shaders and used as a drop-in replacement for filtering tasks in image processing or rendering pipelines.
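
At run time, one pass of such a filter is just a weighted sum of a few offset samples; a simplified numpy sketch with nearest-pixel sampling and wrap-around borders (the hand-picked taps are illustrative, not an optimized result of the method):

import numpy as np

def sparse_filter_pass(image, taps):
    """One pass: output(p) = sum_i w_i * image(p + offset_i)."""
    out = np.zeros_like(image, dtype=float)
    for dy, dx, w in taps:
        out += w * np.roll(image, shift=(dy, dx), axis=(0, 1))
    return out

# A tiny 5-tap stand-in for a blur; the paper optimizes tap positions and
# weights (with bilinear sampling on the GPU) under a performance budget.
taps = [(0, 0, 0.4), (-1, 0, 0.15), (1, 0, 0.15), (0, -1, 0.15), (0, 1, 0.15)]
image = np.random.default_rng(0).random((64, 64))
blurred = sparse_filter_pass(image, taps)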




Cost Minimizing Local Anisotropic Quad Mesh Refinement


Max Lyon, David Bommes, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2020

Quad meshes as a surface representation have many conceptual advantages over triangle meshes. Their edges can naturally be aligned to principal curvatures of the underlying surface and they have the flexibility to create strongly anisotropic cells without causing excessively small inner angles. While in recent years a lot of progress has been made towards generating high quality uniform quad meshes for arbitrary shapes, their adaptive and anisotropic refinement remains difficult since a single edge split might propagate across the entire surface in order to maintain consistency. In this paper we present a novel refinement technique which finds the optimal trade-off between number of resulting elements and inserted singularities according to a user prescribed weighting. Our algorithm takes as input a quad mesh with those edges tagged that are prescribed to be refined. It then formulates a binary optimization problem that minimizes the number of additional edges which need to be split in order to maintain consistency. Valence 3 and 5 singularities have to be introduced in the transition region between refined and unrefined regions of the mesh. The optimization hence computes the optimal trade-off and places singularities strategically in order to minimize the number of consistency splits — or avoids singularities where this causes only a small number of additional splits. When applying the refinement scheme iteratively, we extend our binary optimization formulation such that previous splits can be undone if this prevents degenerate cells with small inner angles that otherwise might occur in anisotropic regions or in the vicinity of singularities. We demonstrate on a number of challenging examples that the algorithm performs well in practice.

BibTeX:

@article{Lyon:2020:Cost,
title = {Cost Minimizing Local Anisotropic Quad Mesh Refinement},
author = {Lyon, Max and Bommes, David and Kobbelt, Leif},
journal = {Computer Graphics Forum},
volume = {39},
number = {5},
year = {2020},
doi = {10.1111/cgf.14076}
}





A Three-Level Approach to Texture Mapping and Synthesis on 3D Surfaces


Kersten Schuster, Philip Trettner, Patric Schmitz, Leif Kobbelt
Proceedings of the ACM on Computer Graphics and Interactive Techniques, Vol. 3, No. 1, 2020

We present a method for example-based texturing of triangular 3D meshes. Our algorithm maps a small 2D texture sample onto objects of arbitrary size in a seamless fashion, with no visible repetitions and low overall distortion. It requires minimal user interaction and can be applied to complex, multi-layered input materials that are not required to be tileable. Our framework integrates a patch-based approach with per-pixel compositing. To minimize visual artifacts, we run a three-level optimization that starts with a rigid alignment of texture patches (macro scale), then continues with non-rigid adjustments (meso scale) and finally performs pixel-level texture blending (micro scale). We demonstrate that the relevance of the three levels depends on the texture content and type (stochastic, structured, or anisotropic textures).

BibTeX:

@article{schuster2020,
author = {Schuster, Kersten and Trettner, Philip and Schmitz, Patric and Kobbelt, Leif},
title = {A Three-Level Approach to Texture Mapping and Synthesis on 3D Surfaces},
year = {2020},
issue_date = {Apr 2020},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {3},
number = {1},
url = {https://doi.org/10.1145/3384542},
doi = {10.1145/3384542},
journal = {Proc. ACM Comput. Graph. Interact. Tech.},
month = apr,
articleno = {1},
numpages = {19},
keywords = {material blending, surface texture synthesis, texture mapping}
}





PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models


Lin Gao, Ling-Xiao Zhang, Hsien-Yu Meng, Yi-Hui Ren, Yu-Kun Lai, Leif Kobbelt
IEEE Transactions on Visualization and Computer Graphics

In geometry processing, symmetry is a universal type of high-level structural information of 3D models and benefits many geometry processing tasks including shape segmentation, alignment, matching, and completion. Thus it is an important problem to analyze various symmetry forms of 3D shapes. Planar reflective symmetry is the most fundamental one. Traditional methods based on spatial sampling can be time-consuming and may not be able to identify all the symmetry planes. In this paper, we present a novel learning framework to automatically discover global planar reflective symmetry of a 3D shape. Our framework trains an unsupervised 3D convolutional neural network to extract global model features and then outputs possible global symmetry parameters, where input shapes are represented using voxels. We introduce a dedicated symmetry distance loss along with a regularization loss to avoid generating duplicated symmetry planes. Our network can also identify generalized cylinders by predicting their rotation axes. We further provide a method to remove invalid and duplicated planes and axes. We demonstrate that our method is able to produce reliable and accurate results. Our neural network based method is hundreds of times faster than the state-of-the-art methods, which are based on sampling. Our method is also robust even with noisy or incomplete input surfaces.
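
The symmetry distance idea can be sketched as follows (an illustrative formulation, not the exact network loss): reflect sample points across a candidate plane and measure how far the reflected samples end up from the original ones.

import numpy as np
from scipy.spatial import cKDTree

def reflect(points, n, d):
    """Reflect points across the plane n . x + d = 0 (n is assumed to be unit length)."""
    signed = points @ n + d
    return points - 2.0 * signed[:, None] * n[None, :]

def symmetry_distance(points, n, d):
    """Mean distance from reflected samples to their nearest original sample."""
    tree = cKDTree(points)
    dists, _ = tree.query(reflect(points, n, d))
    return dists.mean()

half = np.random.default_rng(0).uniform(-1.0, 1.0, size=(1000, 3))
half[:, 0] = np.abs(half[:, 0])
pts = np.vstack([half, half * np.array([-1.0, 1.0, 1.0])])          # mirror copy across x = 0
print(symmetry_distance(pts, np.array([1.0, 0.0, 0.0]), 0.0))       # ~0 for the plane x = 0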

BibTeX:

@article{abs-1910-06511,
author = {Lin Gao and
Ling{-}Xiao Zhang and
Hsien{-}Yu Meng and
Yi{-}Hui Ren and
Yu{-}Kun Lai and
Leif Kobbelt},
title = {PRS-Net: Planar Reflective Symmetry Detection Net for 3D Models},
journal = {CoRR},
volume = {abs/1910.06511},
year = {2019},
url = {http://arxiv.org/abs/1910.06511},
archivePrefix = {arXiv},
eprint = {1910.06511},
}





Unsupervised Segmentation of Indoor 3D Point Cloud: Application to Object-based Classification


Florent Poux, Christian Mattes, Leif Kobbelt
3D GeoInfo Conference 2020

Point cloud data of indoor scenes is primarily composed of planar-dominant elements. Automatic shape segmentation is thus valuable to avoid labour-intensive labelling. This paper provides a fully unsupervised region growing segmentation approach for efficient clustering of massive 3D point clouds. Our contribution targets a low-level grouping beneficial to object-based classification. We argue that the use of relevant segments for object-based classification has the potential to perform better in terms of recognition accuracy and computing time, and lowers the manual labelling time needed. However, fully unsupervised approaches are rare due to a lack of proper generalisation of user-defined parameters. We propose a self-learning heuristic process to define optimal parameters, and we validate our method on a large and richly annotated dataset (S3DIS), yielding an 88.1% average F1-score for object-based classification. It permits automatic segmentation of indoor point clouds with no prior knowledge at commercially viable performance and is the foundation for efficient indoor 3D modelling in cluttered point clouds.

BibTeX:

@Article{poux2020b,
author = {Poux, F. and Mattes, C. and Kobbelt, L.},
title = {UNSUPERVISED SEGMENTATION OF INDOOR 3D POINT CLOUD: APPLICATION TO OBJECT-BASED CLASSIFICATION},
journal = {ISPRS - International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences},
volume = {XLIV-4/W1-2020},
year = {2020},
pages = {111--118},
url = {https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIV-4-W1-2020/111/2020/},
doi = {10.5194/isprs-archives-XLIV-4-W1-2020-111-2020}
}





Initial User-Centered Design of a Virtual Reality Heritage System: Applications for Digital Tourism


Florent Poux, Quentin Valembois, Christian Mattes, Leif Kobbelt, Roland Billen
Remote Sensing

Reality capture allows for the reconstruction, with a high accuracy, of the physical reality of cultural heritage sites. Obtained 3D models are often used for various applications such as promotional content creation, virtual tours, and immersive experiences. In this paper, we study new ways to interact with these high-quality 3D reconstructions in a real-world scenario. We propose a user-centric product design to create a virtual reality (VR) application specifically intended for multi-modal purposes. It is applied to the castle of Jehay (Belgium), which is under renovation, to permit multi-user digital immersive experiences. The article proposes a high-level view of multi-disciplinary processes, from a needs analysis to the 3D reality capture workflow and the creation of a VR environment incorporated into an immersive application. We provide several relevant VR parameters for the scene optimization, the locomotion system, and the multi-user environment definition that were tested in a heritage tourism context.

» Show BibTeX

@article{poux2020a,
title={Initial User-Centered Design of a Virtual Reality Heritage System: Applications for Digital Tourism},
volume={12},
ISSN={2072-4292},
url={http://dx.doi.org/10.3390/rs12162583},
DOI={10.3390/rs12162583},
number={16},
journal={Remote Sensing},
publisher={MDPI AG},
author={Poux, Florent and Valembois, Quentin and Mattes, Christian and Kobbelt, Leif and Billen, Roland},
year={2020},
month={Aug},
pages={2583}
}





SEG-MAT: 3D Shape Segmentation Using Medial Axis Transform


Cheng Lin, Lingjie Liu, Changjian Li, Leif Kobbelt, Bin Wang, Shiqing Xin, Wenping Wang
IEEE Transactions on Visualization and Computer Graphics
pubimg

Segmenting arbitrary 3D objects into constituent parts that are structurally meaningful is a fundamental problem encountered in a wide range of computer graphics applications. Existing methods for 3D shape segmentation suffer from complex geometry processing and heavy computation caused by the use of low-level features, as well as from fragmented segmentation results due to the lack of global consideration. We present an efficient method, called SEG-MAT, based on the medial axis transform (MAT) of the input shape. Specifically, with the rich geometrical and structural information encoded in the MAT, we are able to develop a simple and principled approach to effectively identify the various types of junctions between different parts of a 3D shape. Extensive evaluations and comparisons show that our method outperforms the state-of-the-art methods in terms of segmentation quality and is also one order of magnitude faster.

» Show BibTeX

@ARTICLE{9234096,
author={C. {Lin} and L. {Liu} and C. {Li} and L. {Kobbelt} and B. {Wang} and S. {Xin} and W. {Wang}},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={SEG-MAT: 3D Shape Segmentation Using Medial Axis Transform},
year={2020},
volume={},
number={},
pages={1-1},
doi={10.1109/TVCG.2020.3032566}}





Parametrization Quantization with Free Boundaries for Trimmed Quad Meshing


Max Lyon, Marcel Campen, David Bommes, Leif Kobbelt
SIGGRAPH 2019
pubimg

The generation of quad meshes based on surface parametrization techniques has proven to be a versatile approach. These techniques quantize an initial seamless parametrization so as to obtain an integer grid map implying a pure quad mesh. State-of-the-art methods following this approach have to assume that the surface to be meshed either has no boundary, or has a boundary which the resulting mesh is supposed to be aligned to. In a variety of applications this is not desirable and non-boundary-aligned meshes or grid-parametrizations are preferred. We thus present a technique to robustly generate integer grid maps which are either boundary-aligned, non-boundary-aligned, or partially boundary-aligned, just as required by different applications. We thereby generalize previous work to this broader setting. This enables the reliable generation of trimmed quad meshes with partial elements along the boundary, which are preferable in various scenarios, from tiled texturing through design and modeling to fabrication and architecture, due to fewer constraints and hence higher overall mesh quality and other benefits in terms of aesthetics and flexibility.

» Show BibTeX

@article{Lyon:2019:TrimmedQuadMeshing,
author = "Lyon, Max and Campen, Marcel and Bommes, David and Kobbelt, Leif",
title = "Parametrization Quantization with Free Boundaries for Trimmed Quad Meshing",
journal = "ACM Transactions on Graphics",
volume = 38,
number = 4,
year = 2019
}





Distortion-Minimizing Injective Maps Between Surfaces


Patrick Schmidt, Janis Born, Marcel Campen, Leif Kobbelt
SIGGRAPH Asia 2019
pubimg

The problem of discrete surface parametrization, i.e. mapping a mesh to a planar domain, has been investigated extensively. We address the more general problem of mapping between surfaces. In particular, we provide a formulation that yields a map between two disk-topology meshes, which is continuous and injective by construction and which locally minimizes intrinsic distortion. A common approach is to express such a map as the composition of two maps via a simple intermediate domain such as the plane, and to independently optimize the individual maps. However, even if both individual maps are of minimal distortion, there is potentially high distortion in the composed map. In contrast to many previous works, we minimize distortion in an end-to-end manner, directly optimizing the quality of the composed map. This setting poses additional challenges due to the discrete nature of both the source and the target domain. We propose a formulation that, despite the combinatorial aspects of the problem, allows for a purely continuous optimization. Further, our approach addresses the non-smooth nature of discrete distortion measures in this context which hinders straightforward application of off-the-shelf optimization techniques. We demonstrate that, despite the challenges inherent to the more involved setting, discrete surface-to-surface maps can be optimized effectively.
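To make the notion of intrinsic distortion concrete, a commonly used per-triangle measure is the symmetric Dirichlet energy of the linear map between a source and a target triangle (in local 2D coordinates). The sketch below is generic and hedged; it is not the specific distortion measure or optimization used in the paper.

import numpy as np

def symmetric_dirichlet(src_tri, dst_tri):
    # src_tri, dst_tri: (3, 2) arrays of triangle corners in local 2D coordinates.
    # Energy ||J||_F^2 + ||J^-1||_F^2 of the affine map taking src_tri onto dst_tri.
    S = np.column_stack((src_tri[1] - src_tri[0], src_tri[2] - src_tri[0]))
    D = np.column_stack((dst_tri[1] - dst_tri[0], dst_tri[2] - dst_tri[0]))
    J = D @ np.linalg.inv(S)
    J_inv = np.linalg.inv(J)
    return (J * J).sum() + (J_inv * J_inv).sum()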



» Show Videos
» Show BibTeX

@article{schmidt2019distortion,
author = {Schmidt, Patrick and Born, Janis and Campen, Marcel and Kobbelt, Leif},
title = {Distortion-Minimizing Injective Maps Between Surfaces},
journal = {ACM Transactions on Graphics},
issue_date = {November 2019},
volume = {38},
number = {6},
month = nov,
year = {2019},
articleno = {156},
url = {https://doi.org/10.1145/3355089.3356519},
doi = {10.1145/3355089.3356519},
publisher = {ACM},
address = {New York, NY, USA},
}





String-Based Synthesis of Structured Shapes


Javor Kalojanov, Isaak Lim, Niloy Mitra, Leif Kobbelt
Computer Graphics Forum (Proc. EUROGRAPHICS 2019)
pubimg

We propose a novel method to synthesize geometric models from a given class of context-aware structured shapes such as buildings and other man-made objects. Our central idea is to leverage powerful machine learning methods from the area of natural language processing for this task. To this end, we propose a technique that maps shapes to strings and vice versa, through an intermediate shape graph representation. We then convert procedurally generated shape repositories into text databases that in turn can be used to train a variational autoencoder which enables higher-level shape manipulation and synthesis, such as interpolation and sampling via its continuous latent space.

» Show BibTeX

@article{Kalojanov2019,
journal = {Computer Graphics Forum},
title = {{String-Based Synthesis of Structured Shapes}},
author = {Javor Kalojanov and Isaak Lim and Niloy Mitra and Leif Kobbelt},
pages = {027-036},
volume= {38},
number= {2},
year = {2019},
note = {\URL{https://diglib.eg.org/bitstream/handle/10.1111/cgf13616/v38i2pp027-036.pdf}},
DOI = {10.1111/cgf.13616},
}





A Convolutional Decoder for Point Clouds using Adaptive Instance Normalization


Isaak Lim, Moritz Ibing, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2019
pubimg

Automatic synthesis of high quality 3D shapes is an ongoing and challenging area of research. While several data-driven methods have been proposed that make use of neural networks to generate 3D shapes, none of them reach the level of quality that deep learning synthesis approaches for images provide. In this work we present a method for a convolutional point cloud decoder/generator that makes use of recent advances in the domain of image synthesis. Namely, we use Adaptive Instance Normalization and offer an intuition on why it can improve training. Furthermore, we propose extensions to the minimization of the commonly used Chamfer distance for auto-encoding point clouds. In addition, we show that careful sampling is important both for the input geometry and in our point cloud generation process to improve results. The results are evaluated in an auto-encoding setup to offer both qualitative and quantitative analysis. The proposed decoder is validated by an extensive ablation study and is able to outperform current state of the art results in a number of experiments. We show the applicability of our method in the fields of point cloud upsampling, single view reconstruction, and shape synthesis.
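For reference, the unmodified two-sided Chamfer distance that the paper extends can be written as follows. This is a hedged sketch; the function name and the k-d tree based nearest-neighbor search are illustrative choices, not the authors' code.

import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p, q):
    # p: (N, 3), q: (M, 3) point clouds.
    d_pq, _ = cKDTree(q).query(p)     # for each point of p, its nearest point in q
    d_qp, _ = cKDTree(p).query(q)     # for each point of q, its nearest point in p
    return (d_pq ** 2).mean() + (d_qp ** 2).mean()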

» Show BibTeX

@article{Lim:2019:ConvolutionalDecoder,
author = "Lim, Isaak and Ibing, Moritz and Kobbelt, Leif",
title = "A Convolutional Decoder for Point Clouds using Adaptive Instance Normalization",
journal = "Computer Graphics Forum",
volume = 38,
number = 5,
year = 2019
}





ACAP: Sparse Data Driven Mesh Deformation


Lin Gao, Yu-Kun Lai, Jie Yang, Ling-Xiao Zhang, Shihong Xia, Leif Kobbelt
IEEE Transactions on Visualization and Computer Graphics
pubimg

Example-based mesh deformation methods are powerful tools for realistic shape editing. However, existing techniques typically combine all the example deformation modes, which can lead to overfitting, i.e. using an overly complicated model to explain the user-specified deformation. This leads to implausible or unstable deformation results, including unexpected global changes outside the region of interest. To address this fundamental limitation, we propose a sparse blending method that automatically selects a smaller number of deformation modes to compactly describe the desired deformation. This, along with a suitably chosen deformation basis that includes spatially localized deformation modes, leads to significant advantages: deformations become more meaningful, reliable, and efficient because fewer, localized deformation modes are applied. To cope with large rotations, we develop a simple but effective representation based on polar decomposition of deformation gradients, which resolves the ambiguity of large global rotations using an as-consistent-as-possible global optimization. This simple representation has a closed form solution for derivatives, making it efficient for our sparse localized representation and thus ensuring interactive performance. Experimental results show that our method outperforms state-of-the-art data-driven mesh deformation methods in terms of both result quality and efficiency.
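The polar decomposition of a deformation gradient can be computed from an SVD; the following sketch shows the standard factorization into a rotation and a symmetric stretch. The paper's as-consistent-as-possible resolution of large global rotations is not reproduced here.

import numpy as np

def polar_decompose(F):
    # F: 3x3 deformation gradient. Returns R (rotation) and S (symmetric stretch) with F = R @ S.
    U, sigma, Vt = np.linalg.svd(F)
    if np.linalg.det(U @ Vt) < 0:     # avoid a reflection in the rotational part
        U[:, -1] *= -1
        sigma[-1] *= -1
    R = U @ Vt
    S = Vt.T @ np.diag(sigma) @ Vt
    return R, S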

» Show Videos
» Show BibTeX

@article{gao2019sparse,
title={Sparse data driven mesh deformation},
author={Gao, Lin and Lai, Yu-Kun and Yang, Jie and Ling-Xiao, Zhang and Xia, Shihong and Kobbelt, Leif},
journal={IEEE transactions on visualization and computer graphics},
year={2019},
publisher={IEEE}
}





Form Finding of Stress Adapted Folding as a Lightweight Structure Under Different Load Cases


Juan Musto, Max Lyon, Martin Trautz, Leif Kobbelt
Proceedings of IASS Annual Symposia
pubimg

In steel construction, the use of folds is limited to longitudinal folds (e.g. trapezoidal sheets). The efficiency of creases can be increased by aligning the folding pattern to the principal stresses or to their directions. This paper presents a form-finding approach to use the material as homogeneously as possible. In addition to the purely geometric alignment according to the stress directions, it also allows the stress intensity to be taken into account during form-finding. A trajectory mesh of the principal stresses is generated, on the basis of which the structure is derived. The relationships between the spacing and progression of the stress lines and the stress intensity are discussed and incorporated into the form-finding approaches. Building on this, this paper additionally deals with the question of which load case is the most effective basis for designing the crease pattern when several load cases can act simultaneously.
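The principal stress directions that drive the trajectory mesh are the eigenvectors of the (symmetric) stress tensor. A minimal plane-stress sketch, purely illustrative and not part of the paper's form-finding pipeline:

import numpy as np

def principal_stresses_2d(sigma_xx, sigma_yy, tau_xy):
    # Principal stress magnitudes (sorted descending) and unit directions
    # for a 2D plane-stress state.
    sigma = np.array([[sigma_xx, tau_xy],
                      [tau_xy, sigma_yy]])
    vals, vecs = np.linalg.eigh(sigma)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]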

» Show BibTeX

@article {Musto:2019:2518-6582:1,
title = "Form finding of stress adapted folding as a lightweight structure under different load cases",
journal = "Proceedings of IASS Annual Symposia",
parent_itemid = "infobike://iass/piass",
publishercode ="iass",
year = "2019",
volume = "2019",
number = "13",
publication date ="2019-10-07T00:00:00",
pages = "1-8",
itemtype = "ARTICLE",
issn = "2518-6582",
eissn = "2518-6582",
url = "https://www.ingentaconnect.com/content/iass/piass/2019/00002019/00000013/art00006",
keyword = "lightweight-construction, folding, Mixed-Integer Quadrangulation, principle stress lines",
author = "Musto, Juan and Lyon, Max and Trautz, Martin and Kobbelt, Leif",
abstract = "In steel construction, the use of folds is limited to longitudinal folds (e.g. trapezoidal sheets). The efficiency of creases can be increased by aligning the folding pattern to the principal stresses or to their directions. This paper presents a form-finding approach to use the material
as homogeneously as possible. In addition to the purely geometric alignment according to the stress directions, it also allows the stress intensity to be taken into account during form-finding. A trajectory mesh of the principle stresses is generated on the basis of which the structure is
derived. The relationships between the stress lines distance, progression and stress intensity are discussed and implemented in the approaches of form-finding. Building on this, this paper additionally deals with the question of which load case is the most effective basis for designing
the crease pattern when several load cases can act simultaneously.",
}





Beanspruchungsoptimierte Faltungen aus Stahl für selbsttragende Raumfaltwerke


Juan Musto, Max Lyon, Martin Trautz, Leif Kobbelt
Bautechnik
pubimg

In the building industry, the use of folds is limited to longitudinal folds (trapezoidal sheets) and regular folding patterns. Spatial folded plate structures and lightweight folded panels, i.e. spatially curved, three-dimensional surface structures, are desiderata of lightweight construction with steel sheets. Spatial folded plate structures consist predominantly of regular folding patterns based on tessellations with primitive faces (triangles and quadrilaterals). To improve the efficiency of these lightweight structures, it is natural to apply folding patterns derived from the stresses, or rather the stress distribution, instead of regular patterns based on geometric principles. For this purpose, a form-finding process is developed that is based on the generation of a trajectory mesh derived from the governing (form-giving) load case. Comparing the material usage and load-bearing capacity of folds generated on a purely geometric basis with a fold developed from the trajectory mesh shows the change in efficiency.

» Show BibTeX

@article{https://doi.org/10.1002/bate.201900024,
author = {Musto, Juan and Lyon, Max and Trautz, Martin and Kobbelt, Leif},
title = {Beanspruchungsoptimierte Faltungen aus Stahl für selbsttragende Raumfaltwerke},
journal = {Bautechnik},
volume = {96},
number = {12},
pages = {902-911},
keywords = {Leichtbau, Faltungen, Hauptspannungstrajektorien, Mixed-Integer Quadrangulation, lightweight-construction, folgings, principle stress trajectories, mixed-integer quadrangulation, Stahlbau, Leichtbau, Steel construction, lightweight construction},
doi = {https://doi.org/10.1002/bate.201900024},
url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/bate.201900024},
eprint = {https://onlinelibrary.wiley.com/doi/pdf/10.1002/bate.201900024},
abstract = {Abstract Der Einsatz von Faltungen beschränkt sich im Bauwesen auf Longitudinalfaltungen (Trapezbleche) und regelmäßige Faltungen. Raumfaltwerke und Faltleichtbauplatten, räumlich gekrümmte und dreidimensionale Flächentragwerke sind Desiderate eines Leichtbaus mit Stahlblechen. Raumfaltwerke bestehen vorwiegend aus regelmäßigen Faltmustern, die auf Tesselierung mit Primitivflächen (Drei- und Vierecke) basieren. Um die Effizienz dieser Leichtbaustrukturen zu verbessern, liegt es nahe, statt regelmäßige und auf geometrischen Prinzipien basierende Faltmuster umzusetzen, Faltmuster nach Maßgabe der Beanspruchungen bzw. der Beanspruchungsverteilung anzuwenden. Hierzu ist ein Formfindungsprozess zu entwickeln, der auf der Generierung eines Trajektoriennetzes basiert, das aus dem maßgeblichen Lastfall (formgebenden Lastfall) abgeleitet wird. Der Vergleich des Masseneinsatzes und der Traglast der Faltungen, die auf geometrischer Basis erzeugt wurden, mit einer auf Basis des Trajektoriennetzes entwickelten Faltung zeigt die Veränderung der Effizienz.},
year = {2019}
}





Structured Discrete Shape Approximation: Theoretical Complexity and Practical Algorithm


Andreas Tillmann, Leif Kobbelt
Computational Geometry (Volume 99, 2021)

We consider the problem of approximating a two-dimensional shape contour (or curve segment) using discrete assembly systems, which allow building geometric structures based on limited sets of node and edge types subject to edge length and orientation restrictions. We show that even deciding feasibility of such approximation problems is NP-hard, and remains intractable even for very simple setups. We then devise an algorithmic framework that combines shape sampling with exact cardinality minimization to obtain good approximations using few components. As a particular application and showcase example, we discuss approximating shape contours using the classical Zometool construction kit and provide promising computational results, demonstrating that our algorithm is capable of obtaining good shape representations within reasonable time, in spite of the problem's general intractability. We conclude the paper with an outlook on possible extensions of the developed methodology, in particular regarding 3D shape approximation tasks.
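The paper solves the selection problem with exact (mixed-integer) cardinality minimization; purely to illustrate what is being selected, the sketch below uses a greedy set-cover stand-in that picks few candidate edges so that every sampled contour point lies within a tolerance of a chosen edge. All data structures here are hypothetical and the greedy strategy is a simplification, not the authors' algorithm.

import numpy as np

def greedy_edge_cover(samples, candidate_edges, tol=0.05):
    # samples: (N, 2) points sampled on the contour.
    # candidate_edges: list of (p, q) endpoint pairs allowed by the assembly system.
    def dist_to_segment(pts, p, q):
        d = q - p
        t = np.clip(((pts - p) @ d) / (d @ d), 0.0, 1.0)
        return np.linalg.norm(pts - (p + t[:, None] * d), axis=1)

    uncovered = np.ones(len(samples), dtype=bool)
    chosen = []
    while uncovered.any():
        best, best_gain = None, 0
        for k, (p, q) in enumerate(candidate_edges):
            gain = (dist_to_segment(samples[uncovered], np.asarray(p), np.asarray(q)) <= tol).sum()
            if gain > best_gain:
                best, best_gain = k, gain
        if best is None:                  # remaining samples cannot be covered
            break
        p, q = candidate_edges[best]
        uncovered &= dist_to_segment(samples, np.asarray(p), np.asarray(q)) > tol
        chosen.append(best)
    return chosen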



Code available per request.
» Show BibTeX

@article{TILLMANN2021101795,
title = {Structured discrete shape approximation: Theoretical complexity and practical algorithm},
journal = {Computational Geometry},
volume = {99},
pages = {101795},
year = {2021},
issn = {0925-7721},
doi = {https://doi.org/10.1016/j.comgeo.2021.101795},
url = {https://www.sciencedirect.com/science/article/pii/S0925772121000511},
author = {Andreas M. Tillmann and Leif Kobbelt},
keywords = {Shape approximation, Discrete assembly systems, Computational complexity, Mixed-integer programming, Zometool},
abstract = {We consider the problem of approximating a two-dimensional shape contour (or curve segment) using discrete assembly systems, which allow to build geometric structures based on limited sets of node and edge types subject to edge length and orientation restrictions. We show that already deciding feasibility of such approximation problems is NP-hard, and remains intractable even for very simple setups. We then devise an algorithmic framework that combines shape sampling with exact cardinality minimization to obtain good approximations using few components. As a particular application and showcase example, we discuss approximating shape contours using the classical Zometool construction kit and provide promising computational results, demonstrating that our algorithm is capable of obtaining good shape representations within reasonable time, in spite of the problem's general intractability. We conclude the paper with an outlook on possible extensions of the developed methodology, in particular regarding 3D shape approximation tasks.}
}





Feature Curve Co-Completion in Noisy Data


Anne Gehre, Isaak Lim, Leif Kobbelt
Computer Graphics Forum (Proc. EUROGRAPHICS 2018)
pubimg

Feature curves on 3D shapes provide important hints about significant parts of the geometry and reveal their underlying structure. However, when we process real world data, automatically detected feature curves are affected by measurement uncertainty, missing data, and sampling resolution, leading to noisy, fragmented, and incomplete feature curve networks. These artifacts make further processing unreliable. In this paper we analyze the global co-occurrence information in noisy feature curve networks to fill in missing data and suppress weakly supported feature curves. For this we propose an unsupervised approach to find meaningful structure within the incomplete data by detecting multiple occurrences of feature curve configurations (co-occurrence analysis). We cluster and merge these into feature curve templates, which we leverage to identify strongly supported feature curve segments as well as to complete missing data in the feature curve network. In the presence of significant noise, previous approaches had to resort to user input, while our method performs fully automatic feature curve co-completion. Finding feature re-occurrences, however, is challenging since naive feature curve comparison fails in this setting due to fragmentation and partial overlaps of curve segments. To tackle this problem we propose a robust method for partial curve matching. This provides us with the means to apply symmetry detection methods to identify co-occurring configurations. Finally, Bayesian model selection enables us to detect and group re-occurrences that describe the data well and with low redundancy.

» Show BibTeX

@inproceedings{gehre2018feature,
title={Feature Curve Co-Completion in Noisy Data},
author={Gehre, Anne and Lim, Isaak and Kobbelt, Leif},
booktitle={Computer Graphics Forum},
volume={37},
number={2},
year={2018},
organization={Wiley Online Library}
}





Interactive Curve Constrained Functional Maps


Anne Gehre, Michael Bronstein, Leif Kobbelt, Justin Solomon
Eurographics Symposium on Geometry Processing 2018
pubimg

Functional maps have gained popularity as a versatile framework for representing intrinsic correspondence between 3D shapes using algebraic machinery. A key ingredient for this framework is the ability to find pairs of corresponding functions (typically, feature descriptors) across the shapes. This is a challenging problem on its own, and when the shapes are strongly non-isometric, nearly impossible to solve automatically. In this paper, we use feature curve correspondences to provide flexible abstractions of semantically similar parts of non-isometric shapes. We design a user interface implementing an interactive process for constructing shape correspondence, allowing the user to update the functional map at interactive rates by introducing feature curve correspondences. We add feature curve preservation constraints to the functional map framework and propose an efficient numerical method to optimize the map with immediate feedback. Experimental results show that our approach establishes correspondences between geometrically diverse shapes with just a few clicks.
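At the core of the functional map framework is a small matrix C that maps the basis coefficients of corresponding functions from one shape to the other. A bare-bones, unconstrained least-squares fit, without the curve-preservation constraints the paper adds, can be sketched as:

import numpy as np

def fit_functional_map(A, B):
    # A: (k, m) coefficients of m descriptor functions on the source shape,
    #    expressed in a k-dimensional (e.g. Laplace-Beltrami) basis.
    # B: (k, m) coefficients of the corresponding descriptors on the target shape.
    # Solve min_C ||C @ A - B||_F, i.e. A.T @ C.T = B.T in the least-squares sense.
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T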

» Show BibTeX

@article{Gehre:2018:InteractiveFunctionalMaps,
author = "Gehre, Anne and Bronstein, Michael and Kobbelt, Leif and Solomon, Justin",
title = "Interactive Curve Constrained Functional Maps",
journal = "Computer Graphics Forum",
volume = 37,
number = 5,
year = 2018
}





You Spin my Head Right Round: Threshold of Limited Immersion for Rotation Gains in Redirected Walking


Patric Schmitz, Julian Romeo Hildebrandt, André Calero Valdez, Leif Kobbelt, Martina Ziefle
IEEE Transactions on Visualization and Computer Graphics
pubimg

In virtual environments, the space that can be explored by real walking is limited by the size of the tracked area. To enable unimpeded walking through large virtual spaces in small real-world surroundings, redirection techniques are used. These unnoticeably manipulate the user’s virtual walking trajectory. It is important to know how strongly such techniques can be applied without the user noticing the manipulation or getting cybersick. Previously, this was estimated by measuring a detection threshold (DT) in highly controlled psychophysical studies, which experimentally isolate the effect but do not aim for perceived immersion in the context of VR applications. While these studies suggest that only relatively low degrees of manipulation are tolerable, we claim that, besides establishing detection thresholds, it is important to know when the user’s immersion breaks. We hypothesize that the degree of unnoticed manipulation is significantly different from the detection threshold when the user is immersed in a task. We conducted three studies: a) to devise an experimental paradigm to measure the threshold of limited immersion (TLI), b) to measure the TLI for slowly decreasing and increasing rotation gains, and c) to establish a baseline of cybersickness for our experimental setup. For rotation gains greater than 1.0, we found that immersion breaks quite late after the gain is detectable. However, for gains less than 1.0, some users reported a break of immersion even before established detection thresholds were reached. Apparently, the developed metric measures an additional quality of user experience. This article contributes to the development of effective spatial compression methods by utilizing the break of immersion as a benchmark for redirection techniques.
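For context, a rotation gain simply scales the head rotation measured by the tracker before it is applied to the virtual camera. A minimal, purely illustrative per-frame update, not tied to the study's implementation:

def apply_rotation_gain(virtual_yaw, real_yaw_delta, gain):
    # gain = 1.0 means no redirection; gain > 1.0 turns the virtual scene faster
    # than the user's head, gain < 1.0 turns it slower.
    return virtual_yaw + gain * real_yaw_delta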




A Simple Approach to Intrinsic Correspondence Learning on Unstructured 3D Meshes


Isaak Lim, Alexander Dielen, Marcel Campen, Leif Kobbelt
Geometry Meets Deep Learning ECCV 2018 Workshop
pubimg

The question of representation of 3D geometry is of vital importance when it comes to leveraging the recent advances in the field of machine learning for geometry processing tasks. For common unstructured surface meshes state-of-the-art methods rely on patch-based or mapping-based techniques that introduce resampling operations in order to encode neighborhood information in a structured and regular manner. We investigate whether such resampling can be avoided, and propose a simple and direct encoding approach. It not only increases processing efficiency due to its simplicity; its direct nature also avoids any loss of data fidelity. To evaluate the proposed method, we perform a number of experiments in the challenging domain of intrinsic, non-rigid shape correspondence estimation. In comparisons to current methods we observe that our approach is able to achieve highly competitive results.

» Show BibTeX

@InProceedings{lim2018_correspondence_learning,
author = {Lim, Isaak and Dielen, Alexander and Campen, Marcel and Kobbelt, Leif},
title = {A Simple Approach to Intrinsic Correspondence Learning on Unstructured 3D Meshes},
booktitle = {The European Conference on Computer Vision (ECCV) Workshops},
month = {September},
year = {2018}
}





Get Well Soon! Human Factors’ Influence on Cybersickness after Redirected Walking Exposure in Virtual Reality


Julian Romeo Hildebrandt, Patric Schmitz, André Calero Valdez, Leif Kobbelt, Martina Ziefle
Proceedings of HCI International 2018
pubimg

Cybersickness poses a crucial threat to applications in the domain of Virtual Reality. Yet, its predictors are insufficiently explored when redirection techniques are applied. Those techniques let users explore large virtual spaces by natural walking in a smaller tracked space. This is achieved by unnoticeably manipulating the user’s virtual walking trajectory. Unfortunately, this also makes the application more prone to cause Cybersickness. We conducted a user study with a semi-structured interview to get quantitative and qualitative insights into this domain. Results show that Cybersickness arises, but also eases ten minutes after the exposure. Quantitative results indicate that a tolerance towards Cybersickness might be related to self-efficacy constructs and therefore learnable or trainable, while qualitative results indicate that users’ endurance of Cybersickness is dependent on symptom factors such as intensity and duration, as well as factors of usage context and motivation. The role of Cybersickness in Virtual Reality environments is discussed in terms of the applicability of redirected walking techniques.



Real Walking in Virtual Spaces: Visiting the Aachen Cathedral


Patric Schmitz, Leif Kobbelt
Mensch und Computer 2018 - Workshopband
pubimg

Real walking is the most natural and intuitive way to navigate the world around us. In Virtual Reality, the limited tracking area of commercially available systems typically does not match the size of the virtual environment we wish to explore. Spatial compression methods enable the user to walk further in the virtual environment than the real tracking bounds permit. This demo gives a glimpse into our ongoing research on spatial compression in VR. Visitors can walk through a realistic model of the Aachen Cathedral within a room-sized tracking area.

» Show Videos
» Show BibTeX

@article{schmitz2018real,
title={Real Walking in Virtual Spaces: Visiting the Aachen Cathedral},
author={Schmitz, Patric and Kobbelt, Leif},
journal={Mensch und Computer 2018-Workshopband},
year={2018},
publisher={Gesellschaft f{\"u}r Informatik eV}
}





Near-Constant Density Wireframe Meshes for 3D Printing


Ole Untzelmann, Leif Kobbelt
Symposium on Computational Fabrication (2018)
pubimg

In fused deposition modeling (FDM) an object is usually constructed layer by layer. With FDM 3D printers, however, it is also possible to extrude filament directly in 3D space. Using this technique, a wireframe version of an object can be created by printing its edges directly into 3D space, which reduces print time and achieves significant material savings. This paper presents a technique for wireframe mesh generation with application in 3D printing. The proposed technique transforms triangle meshes into polygonal meshes, from whose edges the wireframe mesh is printed. Furthermore, the method is able to generate a near-constant density of lines, even in regions parallel to the build platform.




Variance-Minimizing Transport Plans for Inter-surface Mapping


Manish Mandad, David Cohen-Steiner, Leif Kobbelt, Pierre Alliez, Mathieu Desbrun
SIGGRAPH 2017
pubimg

We introduce an efficient computational method for generating dense and low distortion maps between two arbitrary surfaces of the same genus. Instead of relying on semantic correspondences or surface parameterization, we directly optimize a variance-minimizing transport plan between two input surfaces that defines an as-conformal-as-possible inter-surface map satisfying a user-prescribed bound on area distortion. The transport plan is computed via two alternating convex optimizations, and is shown to minimize a generalized Dirichlet energy of both the map and its inverse. Computational efficiency is achieved through a coarse-to-fine approach in diffusion geometry, with Sinkhorn iterations modified to enforce bounded area distortion. The resulting inter-surface mapping algorithm applies to arbitrary shapes robustly, with little to no user interaction.
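For orientation, a plain entropy-regularized transport plan can be computed with standard Sinkhorn iterations as sketched below; the paper modifies these iterations to enforce bounded area distortion, which the sketch does not include.

import numpy as np

def sinkhorn(cost, mu, nu, eps=0.01, iters=200):
    # cost: (n, m) pairwise costs; mu: (n,) and nu: (m,) marginal weights (each summing to 1).
    K = np.exp(-cost / eps)
    u = np.ones_like(mu)
    for _ in range(iters):
        v = nu / (K.T @ u)        # satisfy the column marginals
        u = mu / (K @ v)          # satisfy the row marginals
    return u[:, None] * K * v[None, :]   # the transport plan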

» Show BibTeX

@article{Mandad:2017:Mapping,
author = "Mandad, Manish and Cohen-Steiner, David and Kobbelt, Leif and Alliez, Pierre and Desbrun, Mathieu",
title = "Variance-Minimizing Transport Plans for Inter-surface Mapping",
journal = "ACM Transactions on Graphics",
volume = 36,
number = 4,
year = 2017,
articleno = {39},
}





Non-Linear Shape Optimization Using Local Subspace Projections


Przemyslaw Musialski, Christian Hafner, Florian Rist, Michael Birsak, Michael Wimmer, Leif Kobbelt
SIGGRAPH 2016
pubimg

In this paper we present a novel method for non-linear shape optimization of 3d objects given by their surface representation. Our method takes advantage of the fact that various shape properties of interest give rise to underdetermined design spaces implying the existence of many good solutions. Our algorithm exploits this by performing iterative projections of the problem to local subspaces where it can be solved much more efficiently using standard numerical routines. We demonstrate how this approach can be utilized for various shape optimization tasks using different shape parameterizations. In particular, we show how to efficiently optimize natural frequencies, mass properties, as well as the structural yield strength of a solid body. Our method is flexible, easy to implement, and very fast.

» Show BibTeX

@article{Musialski:2016:ShapeOpt,
author = "Musialski, Przemyslaw and Hafner, Christian and Rist, Florian and Birsak, Michael and Wimmer, Michael and Kobbelt, Leif",
title = "Non-Linear Shape Optimization Using Local Subspace Projections",
journal = "ACM Transactions on Graphics",
volume = 35,
number = 4,
year = 2016
}





HexEx: Robust Hexahedral Mesh Extraction


Max Lyon, David Bommes, Leif Kobbelt
SIGGRAPH 2016
pubimg

State-of-the-art hex meshing algorithms consist of three steps: Frame-field design, parametrization generation, and mesh extraction. However, while the first two steps are usually discussed in detail, the last step is often not well studied. In this paper, we fully concentrate on reliable mesh extraction.

Parametrization methods employ computationally expensive countermeasures to avoid mapping input tetrahedra to degenerate or flipped tetrahedra in the parameter domain because such a parametrization does not define a proper hexahedral mesh. Nevertheless, there is no known technique that can guarantee the complete absence of such artifacts.

We tackle this problem from the other side by developing a mesh extraction algorithm which is extremely robust against typical imperfections in the parametrization. First, a sanitization process cleans up numerical inconsistencies of the parameter values caused by limited precision solvers and floating-point number representation. On the sanitized parametrization, we extract vertices and so-called darts based on intersections of the integer grid with the parametric image of the tetrahedral mesh. The darts are reliably interconnected by tracing within the parametrization and thus define the topology of the hexahedral mesh. In a postprocessing step, we let certain pairs of darts cancel each other, counteracting the effect of flipped regions of the parametrization. With this strategy, our algorithm is able to robustly extract hexahedral meshes from imperfect parametrizations which previously would have been considered defective. The algorithm will be published as an open source library.

» Show BibTeX

@article{Lyon:2016:HexEx,
author = "Lyon, Max and Bommes, David and Kobbelt, Leif",
title = "HexEx: Robust Hexahedral Mesh Extraction",
journal = "ACM Transactions on Graphics",
volume = 35,
number = 4,
year = 2016
}





Interactively Controlled Quad Remeshing of High Resolution 3D Models


Hans-Christian Ebke, Patrick Schmidt, Marcel Campen, Leif Kobbelt
SIGGRAPH Asia 2016
pubimg

Parametrization based methods have recently become very popular for the generation of high quality quad meshes. In contrast to previous approaches, they allow for intuitive user control in order to accommodate all kinds of application driven constraints and design intentions. A major obstacle in practice, however, are the relatively long computations that lead to response times of several minutes already for input models of moderate complexity. In this paper we introduce a novel strategy to handle highly complex input meshes with up to several millions of triangles such that quad meshes can still be created and edited within an interactive workflow. Our method is based on representing the input model on different levels of resolution with a mechanism to propagate parametrizations from coarser to finer levels. The major challenge is to guarantee consistent parametrizations even in the presence of charts, transition functions, and singularities. Moreover, the remaining degrees of freedom on coarser levels of resolution have to be chosen carefully in order to still achieve low distortion parametrizations. We demonstrate a prototypic system where the user can interactively edit quad meshes with powerful high-level operations such as guiding constraints, singularity repositioning, and singularity connections.

» Show Videos
» Show BibTeX

@article{esck2016,
author = {Ebke, Hans-Christian and Schmidt, Patrick and Campen, Marcel and Kobbelt, Leif},
title = {Interactively Controlled Quad Remeshing of High Resolution 3D Models},
journal = {ACM Trans. Graph.},
issue_date = {November 2016},
volume = {35},
number = {6},
month = nov,
year = {2016},
issn = {0730-0301},
pages = {218:1--218:13},
articleno = {218},
url = {http://doi.acm.org/10.1145/2980179.2982413},
doi = {10.1145/2980179.2982413},
acmid = {2982413},
publisher = {ACM},
address = {New York, NY, USA},
}





Adapting Feature Curve Networks to a Prescribed Scale


Anne Gehre, Isaak Lim, Leif Kobbelt
Computer Graphics Forum (Proc. EUROGRAPHICS 2016)
pubimg

Feature curves on surface meshes are usually defined solely based on local shape properties such as dihedral angles and principal curvatures. From the application perspective, however, the meaningfulness of a network of feature curves also depends on a global scale parameter that takes the distance between feature curves into account, i.e., on a coarse scale, nearby feature curves should be merged or suppressed if the surface region between them is not representable at the given scale/resolution. In this paper, we propose a computational approach to the intuitive notion of scale conforming feature curve networks where the density of feature curves on the surface adapts to a global scale parameter. We present a constrained global optimization algorithm that computes scale conforming feature curve networks by eliminating curve segments that represent surface features, which are not compatible to the prescribed scale. To demonstrate the usefulness of our approach we apply isotropic and anisotropic remeshing schemes that take our feature curve networks as input. For a number of example meshes, we thus generate high quality shape approximations at various levels of detail.

» Show BibTeX

@inproceedings{gehre2016adapting,
title={Adapting Feature Curve Networks to a Prescribed Scale},
author={Gehre, Anne and Lim, Isaak and Kobbelt, Leif},
booktitle={Computer Graphics Forum},
volume={35},
number={2},
pages={319--330},
year={2016},
organization={Wiley Online Library}
}





Improved Surface Quality in 3D Printing by Optimizing the Printing Direction


Weiming Wang, Cédric Zanni, Leif Kobbelt
Computer Graphics Forum (Proc. EUROGRAPHICS 2016)
pubimg

We present a pipeline of algorithms that decomposes a given polygon model into parts such that each part can be 3D printed with high (outer) surface quality. For this we exploit the fact that most 3D printing technologies have an anisotropic resolution and hence the surface smoothness varies significantly with the orientation of the surface. Our pipeline starts by segmenting the input surface into patches such that their normals can be aligned perpendicularly to the printing direction. A 3D Voronoi diagram is computed such that the intersections of the Voronoi cells with the surface approximate these surface patches. The intersections of the Voronoi cells with the input model's volume then provide an initial decomposition. We further present an algorithm to compute an assembly order for the parts and generate connectors between them. A post-processing step further optimizes the seams between segments to improve the visual quality. We run our pipeline on a wide range of 3D models and experimentally evaluate the obtained improvements in terms of numerical, visual, and haptic quality.




MobileVideoTiles: Video Display on Multiple Mobile Devices


Ming Li, Kaspar Scharf, Leif Kobbelt
MobileHCI '16, 18th International Conference on Human-Computer Interaction with Mobile Devices and Services
pubimg

Modern mobile phones can capture and process high quality videos, which makes them a very popular tool to create and watch video content. However, when watching a video together with a group, it is not convenient to watch on one mobile display due to its small form factor. One idea is to combine multiple mobile displays together to create a larger interactive surface for sharing visual content. However, so far a practical framework supporting synchronous video playback on multiple mobile displays has been missing. We present the design of “MobileVideoTiles”, a mobile application that enables users to watch local or online videos on a big virtual screen composed of multiple mobile displays. We focus on improving video quality and usability of the tiled virtual screen. The major technical contributions include: mobile peer-to-peer video streaming, playback synchronization, and accessibility of video resources. The prototype application has been downloaded several thousand times since its release and has received very positive feedback from users.

» Show Videos



Scale-Invariant Directional Alignment of Surface Parametrizations


Marcel Campen, Moritz Ibing, Hans-Christian Ebke, Denis Zorin, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2016
pubimg

Various applications of global surface parametrization benefit from the alignment of parametrization isolines with principal curvature directions. This is particularly true for recent parametrization-based meshing approaches, where this directly translates into a shape-aware edge flow, better approximation quality, and reduced meshing artifacts. Existing methods to influence a parametrization based on principal curvature directions suffer from scale-dependence, which implies the necessity of parameter variation, or try to capture complex directional shape features using simple 1D curves. Especially for non-sharp features, such as chamfers, fillets, blends, and even more for organic variants thereof, these abstractions can be unfit. We present a novel approach which respects and exploits the 2D nature of such directional feature regions, detects them based on coherence and homogeneity properties, and controls the parametrization process accordingly. This approach enables us to provide an intuitive, scale-invariant control parameter to the user. It also allows us to consider non-local aspects like the topology of a feature, enabling further improvements. We demonstrate that, compared to previous approaches, global parametrizations of higher quality can be generated without user intervention.

» Show BibTeX

@article{Campen:2016:ScaleInvariant,
author = "Campen, Marcel and Ibing, Moritz and Ebke, Hans-Christian and Zorin, Denis and Kobbelt, Leif",
title = "Scale-Invariant Directional Alignment of Surface Parametrizations",
journal = "Computer Graphics Forum",
volume = 35,
number = 5,
year = 2016
}





Identifying Style of 3D Shapes using Deep Metric Learning


Isaak Lim, Anne Gehre, Leif Kobbelt
Eurographics Symposium on Geometry Processing 2016
pubimg

We present a method that expands on previous work in learning human perceived style similarity across objects with different structures and functionalities. Unlike previous approaches that tackle this problem with the help of hand-crafted geometric descriptors, we make use of recent advances in metric learning with neural networks (deep metric learning). This allows us to train the similarity metric on a shape collection directly, since any low- or high-level features needed to discriminate between different styles are identified by the neural network automatically. Furthermore, we avoid the issue of finding and comparing sub-elements of the shapes. We represent the shapes as rendered images and show how image tuples can be selected, generated and used efficiently for deep metric learning. We also tackle the problem of training our neural networks on relatively small datasets and show that we achieve style classification accuracy competitive with the state of the art. Finally, to reduce annotation effort we propose a method to incorporate heterogeneous data sources by adding annotated photos found online in order to expand or supplant parts of our training data.
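Metric learning of this kind is commonly trained with a triplet-style loss on embeddings of rendered views: an anchor is pulled towards an example of the same style and pushed away from a differently styled one. The generic sketch below is illustrative only and does not reproduce the paper's tuple selection or network.

import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # anchor, positive, negative: (B, d) batches of embedding vectors.
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(d_pos - d_neg + margin, 0.0).mean()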

» Show BibTeX

@article{Lim:2016:StyleLearning,
author = "Lim, Isaak and Gehre, Anne and Kobbelt, Leif",
title = "Identifying Style of 3D Shapes using Deep Metric Learning",
journal = "Computer Graphics Forum",
volume = 35,
number = 5,
year = 2016
}





City Reconstruction and Visualization from Public Data Sources


Jan Robert Menzel, Sven Middelberg, Philip Trettner, Bastian Jonas, Leif Kobbelt
Eurographics Workshop on Urban Data Modelling and Visualisation (UDMV 2016)
pubimg

We present a city reconstruction and visualization framework that integrates geometric models reconstructed with a range of different techniques. The framework generates the vast majority of buildings procedurally, which yields plausible visualizations for structurally simple buildings, e.g. residential buildings. For structurally complex landmarks, e.g. churches, a procedural approach does not achieve satisfactory visual fidelity. Thus, we also employ image-based techniques to reconstruct the latter in a more realistic, recognizable way. As the manual acquisition of data required for the procedural and image-based reconstructions is practically infeasible for whole cities, we rely on publicly available data as well as crowd sourcing projects. This enables our framework to render views from cities without any dedicated data acquisition as long as there are sufficient public data sources available. To obtain a more lively impression of a city, we also visualize dynamic features like weather conditions and traffic based on publicly available real-time data.




Geodesic Iso-Curve Signature


Anne Gehre, David Bommes, Leif Kobbelt
21st International Symposium on Vision, Modeling and Visualization (VMV 2016)
pubimg

During the last decade, a number of surface descriptors describing local surface features have been presented. Recent approaches have shown that augmenting local descriptors with topological information improves the correspondence and segmentation quality. In this paper we build upon the work of Tevs et al. and Sun and Abidi by presenting a surface descriptor which captures both local surface properties and topological features of 3D objects. We present experiments on shape repositories that are provided with ground-truth correspondences (FAUST, SCAPE, TOSCA) which show that this descriptor outperforms current local surface descriptors.

» Show BibTeX

@INPROCEEDINGS{gbk2016,
author = {Gehre, Anne and Bommes, David and Kobbelt, Leif},
title = {Geodesic Iso-Curve Signature},
booktitle = {Vision, Modeling {\&} Visualization},
year = {2016},
publisher = {The Eurographics Association}
}





Reduced-Order Shape Optimization Using Offset Surfaces


Przemyslaw Musialski, Thomas Auzinger, Michael Birsak, Michael Wimmer, Leif Kobbelt
ACM Transactions on Graphics (TOG), 34(4), 2015
Proceedings of the 2015 SIGGRAPH Conference
pubimg

Given the 2-manifold surface of a 3d object, we propose a novel method for the computation of an offset surface with varying thickness such that the solid volume between the surface and its offset satisfies a set of prescribed constraints and at the same time minimizes a given objective functional. Since the constraints as well as the objective functional can easily be adjusted to specific application requirements, our method provides a flexible and powerful tool for shape optimization. We use manifold harmonics to derive a reduced-order formulation of the optimization problem which guarantees a smooth offset surface and speeds up the computation independently of the input mesh resolution without affecting the quality of the result. The constrained optimization problem can be solved in a numerically robust manner with commodity solvers. Furthermore, the method allows us to simultaneously optimize an inner and an outer offset in order to increase the degrees of freedom. We demonstrate our method in a number of examples where we control the physical mass properties of rigid objects for the purpose of 3d printing.

» Show Videos
» Show BibTeX

@article{musialski-2015-souos,
title = "Reduced-Order Shape Optimization Using Offset Surfaces",
author = "Przemyslaw Musialski and Thomas Auzinger and Michael Birsak
and Michael Wimmer and Leif Kobbelt",
year = "2015",
abstract = "Given the 2-manifold surface of a 3d object, we propose a
novel method for the computation of an offset surface with
varying thickness such that the solid volume between the
surface an its offset satisfies a set of prescribed
constraints and at the same time minimizes a given objective
functional. Since the constraints as well as the objective
functional can easily be adjusted to specific application
requirements, our method provides a flexible and powerful
tool for shape optimization. We use manifold harmonics to
derive a reduced-order formulation of the optimization
problem which guarantees a smooth offset surface and speeds
up the computation independently from the input mesh
resolution without affecting the quality of the result. The
constrained optimization problem can be solved in a
numerically robust manner with commodity solvers.
Furthermore, the method allows to simultaneously optimize an
inner and an outer offset in order to increase the degrees
of freedom. We demonstrate our method in a number of
examples where we control the physical mass properties of
rigid objects for the purpose of 3d printing.",
pages = "to appear--9",
month = aug,
number = "4",
event = "ACM SIGGRAPH 2015",
journal = "ACM Transactions on Graphics (ACM SIGGRAPH 2015)",
volume = "34",
location = "Los Angeles, CA, USA",
keywords = "reduced-order models, shape optimization, computational
geometry, geometry processing, physical mass properties",
URL = "http://www.cg.tuwien.ac.at/research/publications/2015/musialski-2015-souos/",
}





Quantized Global Parametrization


Marcel Campen, David Bommes, Leif Kobbelt
SIGGRAPH Asia 2015
pubimg

Global surface parametrization often requires the use of cuts or charts due to non-trivial topology. In recent years a focus has been on so-called seamless parametrizations, where the transition functions across the cuts are rigid transformations with a rotation about some multiple of 90 degrees. Of particular interest, e.g. for quadrilateral meshing, paneling, or texturing, are those instances where in addition the translational part of these transitions is integral (or more generally: quantized). We show that finding even an arbitrary valid quantization (one that does not imply parametric degeneracies), let alone an optimal one, is a complex combinatorial problem. We present a novel method that allows us to solve it, i.e. to find valid as well as good quality quantizations. It is based on an original approach to quickly construct solutions to linear Diophantine equation systems, exploiting the specific geometric nature of the parametrization problem. We thereby largely outperform the state-of-the-art, sometimes by several orders of magnitude.




Data Driven 3D Face Tracking Based on a Facial Deformation Model


Dominik Sibbing, Leif Kobbelt
20th International Symposium on Vision, Modeling and Visualization (VMV 2015)
pubimg

We introduce a new markerless 3D face tracking approach for 2D video streams captured by a single consumer grade camera. Our approach is based on tracking 2D features in the video and matching them with the projection of the corresponding feature points of a deformable 3D model. By this we estimate the initial shape and pose of the face. To make the tracking and reconstruction more robust we add a smoothness prior for pose changes as well as for deformations of the faces. Our major contribution lies in the formulation of the smooth deformation prior which we derive from a large database of previously captured facial animations showing different (dynamic) facial expressions of a fairly large number of subjects. We split these animation sequences into snippets of fixed length which we use to predict the facial motion based on previous frames. In order to keep the deformation model compact and independent from the individual physiognomy, we represent it by deformation gradients (instead of vertex positions) and apply a principal component analysis in deformation gradient space to extract the major modes of facial deformation. Since the facial deformation is optimized during tracking, it is particularly easy to apply them to other physiognomies and thereby re-target the facial expressions. We demonstrate the effectiveness of our technique on a number of examples.



VMV 2015 Honorable Mention



ACTUI: Using Commodity Mobile Devices to Build Active Tangible User Interfaces


Ming Li, Leif Kobbelt
MobileHCI '15. 17th International Conference on Human-Computer Interaction with Mobile Devices and Services
pubimg

We present the prototype design for a novel user interface, which extends the concept of tangible user interfaces from mostly specialized hardware components and studio deployment to commodity mobile devices in daily life. Our prototype enables mobile devices to be components of a tangible interface where each device can serve as both, a touch sensing display and as a tangible item for interaction. The only necessary modification is the attachment of a conductive 2D touch pattern on each device. Compared to existing approaches, our Active Commodity Tangible User Interfaces (ACTUI) can display graphical output directly on their built-in display paving the way to a plethora of innovative applications where the diverse combination of local and global active display area can significantly enhance the flexibility and effectiveness of the interaction. We explore two exemplary application scenarios where we demonstrate the potential of ACTUI.

» Show Videos



Active Exploration of Large 3D Model Repositories


Lin Gao, Yan-Pei Cao, Yu-Kun Lai, Hao-Zhi Huang, Leif Kobbelt, Shi-Min Hu
IEEE Transactions on Visualization and Computer Graphics
pubimg

With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as “like” or “dislike” such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100K models.

» Show Videos
» Show BibTeX

@ARTICLE{6951464,
author={L. {Gao} and Y. {Cao} and Y. {Lai} and H. {Huang} and L. {Kobbelt} and S. {Hu}},
journal={IEEE Transactions on Visualization and Computer Graphics},
title={Active Exploration of Large 3D Model Repositories},
year={2015},
volume={21},
number={12},
pages={1390-1402},}





Nonparametric Facial Feature Localization Using Segment-Based Eigenfeatures


Hyun-Chul Choi, Dominik Sibbing, Leif Kobbelt
Computational Intelligence and Neuroscience
pubimg

We present a nonparametric facial feature localization method using relative directional information between regularly sampled image segments and facial feature points. Instead of using any iterative parameter optimization technique or search algorithm, our method finds the location of facial feature points by using a weighted concentration of the directional vectors originating from the image segments pointing to the expected facial feature positions. Each directional vector is calculated by linear combination of eigendirectional vectors which are obtained by a principal component analysis of training facial segments in the feature space of histograms of oriented gradients (HOG). Our method finds facial feature points quickly and accurately, since it utilizes statistical reasoning from all the training data without the need to extract local patterns at the estimated positions of facial features, and without any iterative parameter optimization or search algorithm. In addition, we can reduce the storage size for the trained model by controlling the energy preserving level of the HOG pattern space.




Influence of Temporal Delay and Display Update Rate in an Augmented Reality Application Scenario


Ming Li, Katrin Arning, Luisa Vervier, Martina Ziefle, Leif Kobbelt
The 14th International Conference on Mobile and Ubiquitous Multimedia (MUM 2015)
pubimg

In mobile augmented reality (AR) applications, highly complex computing tasks such as position tracking and 3D rendering compete for limited processing resources. This leads to unavoidable system latency in the form of temporal delay and reduced display update rates. In this paper we present a user study on the influence of these system parameters in an AR point'n'click scenario. Our experiment was conducted in a lab environment to collect quantitative data (user performance as well as user perceived ease of use). We can show that temporal delay and update rate both affect user performance and experience but that users are much more sensitive to longer temporal delay than to lower update rates. Moreover, we found that the effects of temporal delay and update rate are not independent as with longer temporal delay, changing update rates tend to have less impact on the ease of use. Furthermore, in some cases user performance can actually increase when reducing the update rate in order to make it compatible to the latency. Our findings indicate that in the development of mobile AR applications, more emphasis should be put on delay reduction than on update rate improvement and that increasing the update rate does not necessarily improve user performance and experience if the temporal delay is significantly higher than the update interval.

» Show BibTeX

@inproceedings{Li:2015:ITD:2836041.2836070,
author = {Li, Ming and Arning, Katrin and Vervier, Luisa and Ziefle, Martina and Kobbelt, Leif},
title = {Influence of Temporal Delay and Display Update Rate in an Augmented Reality Application Scenario},
booktitle = {Proceedings of the 14th International Conference on Mobile and Ubiquitous Multimedia},
series = {MUM '15},
year = {2015},
isbn = {978-1-4503-3605-5},
location = {Linz, Austria},
pages = {278--286},
numpages = {9},
url = {http://doi.acm.org/10.1145/2836041.2836070},
doi = {10.1145/2836041.2836070},
acmid = {2836070},
publisher = {ACM},
address = {New York, NY, USA},
keywords = {display update rate, ease of use, latency, mobile augmented reality, perception tolerance, point'n'click, temporal delay, user study},
}





Level-of-Detail Quad Meshing


Hans-Christian Ebke, Marcel Campen, David Bommes, Leif Kobbelt
SIGGRAPH Asia 2014
pubimg

The most effective and popular tools for obtaining feature aligned quad meshes from triangular input meshes are based on cross field guided parametrization. These methods are incarnations of a conceptual three-step pipeline: (1) cross field computation, (2) field-guided surface parametrization, (3) quad mesh extraction. While in most meshing scenarios the user prescribes a desired target quad size or edge length, this information is typically taken into account from step 2 onwards only, but not in the cross field computation step. This turns into a problem in the presence of small scale geometric or topological features or noise in the input mesh: closely placed singularities are induced in the cross field, which are not properly reproducible by vertices in a quad mesh with the prescribed edge length, causing severe distortions or even failure of the meshing algorithm. We reformulate the construction of cross fields as well as field-guided parametrizations in a scale-aware manner which effectively suppresses densely spaced features and noise of geometric as well as topological kind. Dominant large-scale features are adequately preserved in the output by relying on the unaltered input mesh as the computational domain.




Dual Strip Weaving: Interactive Design of Quad Layouts using Elastica Strips


Marcel Campen, Leif Kobbelt
SIGGRAPH Asia 2014
pubimg

We introduce Dual Strip Weaving, a novel concept for the interactive design of quad layouts, i.e. partitionings of freeform surfaces into quadrilateral patch networks. In contrast to established tools for the design of quad layouts or subdivision base meshes, which are often based on creating individual vertices, edges, and quads, our method takes a more global perspective, operating on a higher level of abstraction: the atomic operation of our method is the creation of an entire cyclic strip, delineating a large number of quad patches at once. The global consistency-preserving nature of this approach reduces demands on the user’s expertise by requiring less advance planning. Efficiency is achieved using a novel method at the heart of our system, which automatically proposes geometrically and topologically suitable strips to the user. Based on this we provide interaction tools to influence the design process to any desired degree and visual guides to support the user in this task.

» Show Videos



Scalable 6-DOF Localization on Mobile Devices


Sven Middelberg, Torsten Sattler, Ole Untzelmann, Leif Kobbelt
13th European Conference on Computer Vision (ECCV'14)
pubimg

Recent improvements in image-based localization have produced powerful methods that scale up to the massive 3D models emerging from modern Structure-from-Motion techniques. However, these approaches are too resource intensive to run in real-time, let alone to be implemented on mobile devices. In this paper, we propose to combine the scalability of such a global localization system running on a server with the speed and precision of a local pose tracker on a mobile device. Our approach is both scalable and drift-free by design and eliminates the need for loop closure. We propose two strategies to combine the information provided by local tracking and global localization. We evaluate our system on a large-scale dataset of the historic inner city of Aachen where it achieves interactive framerates at a localization error of less than 50cm while using less than 5MB of memory on the mobile device.



The final publication will be available at link.springer.com upon publication.
» Show BibTeX

@inproceedings{middelberg2014eccv,
author = "Middelberg, Sven and Sattler, Torsten and Untzelmann, Ole and Kobbelt, Leif",
title = "{Scalable 6-DOF Localization on Mobile Devices}",
booktitle = "{Proceedings of the 13th European Conference on Computer Vision (ECCV'14)}",
year = 2014
}





Efficient Enforcement of Hard Articulation Constraints in the Presence of Closed Loops and Contacts


Robin Tomcin, Dominik Sibbing, Leif Kobbelt
Eurographics 2014
pubimg

In rigid body simulation, one must distinguish between contacts (so-called unilateral constraints) and articulations (bilateral constraints). For contacts and friction, iterative solution methods have proven most useful for interactive applications, often in combination with shock propagation in cases with strong interactions between contacts (such as stacks), prioritizing performance and plausibility over accuracy. For articulation constraints, direct solution methods are preferred, because one can rely on a factorization with linear time complexity for tree-like systems, even in ill-conditioned cases caused by large mass ratios or high complexity. Despite recent advances, combining the performance advantages of direct and iterative solution methods has proven difficult, and the complexity of articulated mechanisms in interactive applications is often limited by the convergence speed of the iterative solver in the presence of closed kinematic loops (i.e. auxiliary constraints) and contacts. We identify common performance bottlenecks in the dynamic simulation of unilateral and bilateral constraints and present a simulation method that scales well in the number of constraints, even in ill-conditioned cases with frictional contacts, collisions, and closed loops in the kinematic graph. For cases where many joints are connected to a single body, we propose a technique to increase the sparsity of the positive definite linear system. Addressing these bottlenecks makes the real-time simulation of a wider range of mechanisms possible without extensive parameter tuning.




Quad Layout Embedding via Aligned Parameterization


Marcel Campen, Leif Kobbelt
Computer Graphics Forum 33 (8), pp. 69-81
pubimg

Quad layouting, i.e. the partitioning of a surface into a coarse network of quadrilateral patches, is a fundamental step in application scenarios ranging from animation and simulation to reverse engineering and meshing. This process involves determining the layout's combinatorial structure as well as its geometric embedding in the surface. We present a novel quad layout algorithm that focuses on the embedding optimization, thereby complementing recent methods focusing on the structure optimization aspect. It takes as input a description of the target layout structure and computes a complete embedding in form of a parameterization globally optimized for isometry and, in particular, principal direction alignment. Besides being suited for fully automatic workflows, our method can also incorporate user constraints and support the tedious but common procedure of manual layouting.

» Show Videos



Zometool Shape Approximation


Henrik Zimmer, Florent Lafarge, Pierre Alliez, Leif Kobbelt
Geometric Modeling and Processing 2014 (GMP) / Graphical Models (GMOD)
pubimg

We present an algorithm that approximates 2-manifold surfaces with Zometool models while preserving their topology. Zometool is a popular hands-on mathematical modeling system used in teaching, research and for recreational model assemblies at home. This construction system relies on a single node type with a small, fixed set of directions and only 9 different edge types in its basic form. While being naturally well suited for modeling symmetries, various polytopes or visualizing molecular structures, the inherent discreteness of the system poses difficult constraints on any algorithmic approach to support the modeling of freeform shapes. We contribute a set of local, topology preserving Zome mesh modification operators enabling the efficient exploration of the space of 2-manifold Zome models around a given input shape. Starting from a rough initial approximation, the operators are iteratively selected within a stochastic framework guided by an energy functional measuring the quality of the approximation. We demonstrate our approach on a number of designs and also describe parameters which are used to explore different complexities and enable coarse approximations.




Interactive Volume-Based Visualization and Exploration for Diffusion Fiber Tracking


Dominik Sibbing, Henrik Zimmer, Robin Tomcin, Leif Kobbelt
Bildverarbeitung für die Medizin (2014)
pubimg

We present a new method to interactively compute and visualize fiber bundles extracted from a diffusion magnetic resonance image. It uses Dijkstra's shortest path algorithm to find globally optimal pathways from a given seed to all other voxels. Our distance function enables Dijkstra to generalize to larger voxel neighborhoods, resulting in fewer quantization artifacts of the orientations, while the shortest paths are still efficiently computable. Our volumetric fiber representation enables the usage of volume rendering techniques. Therefore no complicated pruning or analysis of the resulting fiber tree is needed in order to visualize important fibers. In fact, this can efficiently be done by changing a transfer function. Our application is highly interactive, allowing the user to focus completely on the exploration of the data.
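
A minimal sketch of the graph-search core: Dijkstra on a voxel grid whose neighborhood radius can be enlarged to sample more step directions, as described above. The step-cost function here is an isotropic placeholder; the paper's actual distance function, which incorporates the diffusion data, is not reproduced.

import heapq
from itertools import product

def voxel_neighborhood(radius=2):
    # Offsets of a (2*radius+1)^3 neighborhood, excluding the center voxel.
    # A larger radius samples more directions per step, which reduces the
    # orientation quantization artifacts mentioned in the abstract.
    return [o for o in product(range(-radius, radius + 1), repeat=3) if any(o)]

def fiber_dijkstra(shape, seed, step_cost, radius=2):
    # Globally optimal path cost from `seed` to every voxel of a grid of the
    # given shape.  `step_cost(u, v)` is a placeholder; in the diffusion
    # setting it would penalize steps that disagree with the local principal
    # diffusion direction.
    offsets = voxel_neighborhood(radius)
    dist = {seed: 0.0}
    heap = [(0.0, seed)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # outdated heap entry
        for off in offsets:
            v = (u[0] + off[0], u[1] + off[1], u[2] + off[2])
            if not all(0 <= v[k] < shape[k] for k in range(3)):
                continue
            nd = d + step_cost(u, v)
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Isotropic toy cost: plain Euclidean step length.
euclid = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
d = fiber_dijkstra(shape=(16, 16, 16), seed=(0, 0, 0), step_cost=euclid)
print(d[(15, 15, 15)])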




Zometool Rationalization of Freeform Surfaces


Henrik Zimmer, Leif Kobbelt
IEEE Transactions on Visualization and Computer Graphics
pubimg

The ever broader availability of freeform designs together with an increasing demand for product customization has led to a rising interest in the efficient physical realization of such designs, i.e. the trend toward personal fabrication. Besides large-scale architectural applications, various consumer-level rapid-prototyping applications are becoming increasingly popular, including toy and 3D puzzle creation. In this work we present a method for do-it-yourself reproduction of freeform designs without the typical limitation of state-of-the-art approaches, which require manufacturing custom parts with semi-professional laser cutters or 3D printers. Our idea is based on a popular mathematical modeling system (Zometool) commonly used for modeling higher-dimensional polyhedra and symmetric structures such as molecules and crystal lattices. The proposed method extends the scope of Zometool modeling to freeform, disk-topology surfaces. While Zometool is an efficient construction system (consisting only of a single node type and 9 different edge types), its inherent discreteness gives rise to a hard approximation problem. We base our method on a marching front approach, where elements are not added in a greedy sense, but rather whole regions on the front are filled optimally, using a set of problem-specific heuristics to keep complexity under control.




Integer-Grid Maps for Reliable Quad Meshing


David Bommes, Marcel Campen, Hans-Christian Ebke, Pierre Alliez, Leif Kobbelt
SIGGRAPH 2013
pubimg

Quadrilateral remeshing approaches based on global parametrization enable many desirable mesh properties. Two of the most important ones are (1) high regularity due to explicit control over irregular vertices and (2) smooth distribution of distortion achieved by convex variational formulations. Apart from these strengths, state-of-the-art techniques suffer from limited reliability on real-world input data, i.e. the determined map might have degeneracies like (local) non-injectivities and consequently often cannot be used directly to generate a quadrilateral mesh. In this paper we propose a novel convex Mixed-Integer Quadratic Programming (MIQP) formulation which ensures by construction that the resulting map is within the class of so-called Integer-Grid Maps that are guaranteed to imply a quad mesh. In order to overcome the NP-hardness of MIQP and to be able to remesh typical input geometries in acceptable time we propose two additional problem-specific optimizations: a complexity reduction algorithm and singularity separating conditions. While the former decouples the dimension of the MIQP search space from the input complexity of the triangle mesh and thus is able to dramatically speed up the computation without inducing inaccuracies, the latter improves the continuous relaxation, which is crucial for the success of modern MIQP optimizers. Our experiments show that the reliability of the resulting algorithm not only eliminates the main drawback of parametrization-based quad remeshing but moreover enables the global search for high-quality coarse quad layouts – a difficult task previously tackled only by greedy methodologies.




QEx: Robust Quad Mesh Extraction


Hans-Christian Ebke, David Bommes, Marcel Campen, Leif Kobbelt
SIGGRAPH Asia 2013
pubimg

The most popular and actively researched class of quad remeshing techniques is the family of parametrization based quad meshing methods. They all strive to generate an integer-grid map, i.e. a parametrization of the input surface into R2 such that the canonical grid of integer iso-lines forms a quad mesh when mapped back onto the surface in R3. An essential, albeit broadly neglected aspect of these methods is the quad extraction step, i.e. the materialization of an actual quad mesh from the mere “quad texture”. Quad (mesh) extraction is often believed to be a trivial matter but quite the opposite is true: Numerous special cases, ambiguities induced by numerical inaccuracies and limited solver precision, as well as imperfections in the maps produced by most methods (unless costly countermeasures are taken) pose significant challenges to the quad extractor. We present a method to sanitize a provided parametrization such that it becomes numerically consistent even in a limited precision floating point representation. Based on this we are able to provide a comprehensive and sound description of how to perform quad extraction robustly and without the need for any complex tolerance thresholds or disambiguation rules. On top of that we develop a novel strategy to cope with common local fold-overs in the parametrization. This allows our method, dubbed QEx, to generate all-quadrilateral meshes where otherwise holes, non-quad polygons or no output at all would have been produced. We thus enable the practical use of an entire class of maps that was previously considered defective. Since state of the art quad meshing methods spend a significant share of their run time solely to prevent local fold-overs, using our method it is now possible to obtain quad meshes significantly quicker than before. We also provide libQEx, an open source C++ reference implementation of our method and thus significantly lower the bar to enter the field of quad meshing.




Efficient Computation of Shortest Path-Concavity for 3D Meshes


Henrik Zimmer, Marcel Campen, Leif Kobbelt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2013
pubimg

In the context of shape segmentation and retrieval, object-wide distributions of measures are needed to accurately evaluate and compare local regions of shapes. Lien et al. proposed two point-wise concavity measures in the context of Approximate Convex Decompositions of polygons measuring the distance from a point to the polygon’s convex hull: an accurate Shortest Path-Concavity (SPC) measure and a Straight Line-Concavity (SLC) approximation of the same. While both are practicable on 2D shapes, the exponential cost of SPC in 3D makes its generalization to meshes prohibitively expensive. In this paper we propose an efficient and straightforward approximation of the Shortest Path-Concavity measure for 3D meshes. Our approximation is based on discretizing the space between mesh and convex hull, thereby reducing the continuous Shortest Path search to an efficiently solvable graph problem. Our approach works out-of-the-box on complex mesh topologies and requires no complicated handling of genus. Besides presenting a rigorous evaluation of our method on a variety of input meshes, we also define an SPC-based Shape Descriptor and show its superior retrieval and runtime performance compared with the recently presented results on the Convexity Distribution by Lian et al.




Polygon Mesh Repairing: An Application Perspective


Marco Attene, Marcel Campen, Leif Kobbelt
ACM Computing Surveys, vol. 45, 2, February 2013
pubimg

Nowadays, digital 3D models are in widespread and ubiquitous use, and each specific application dealing with 3D geometry has its own quality requirements that restrict the class of acceptable and supported models. This article analyzes typical defects that make a 3D model unsuitable for key application contexts, and surveys existing algorithms that process, repair, and improve its structure, geometry, and topology to make it appropriate to case-by-case requirements. The analysis is focused on polygon meshes, which constitute by far the most common 3D object representation. In particular, this article provides a structured overview of mesh repairing techniques from the point of view of the application context. Different types of mesh defects are classified according to the upstream application that produced the mesh, whereas mesh quality requirements are grouped by representative sets of downstream applications where the mesh is to be used. The numerous mesh repair methods that have been proposed during the last two decades are analyzed and classified in terms of their capabilities, properties, and guarantees. Based on these classifications, guidelines can be derived to support the identification of repairing algorithms best-suited to bridge the compatibility gap between the quality provided by the upstream process and the quality required by the downstream applications in a given geometry processing scenario.




View-Dependent Realtime Rendering of Procedural Facades with High Geometric Detail


Lars Krecklau, Janis Born, Leif Kobbelt
Eurographics 2013
pubimg

We present an algorithm for realtime rendering of large-scale city models with procedurally generated facades. By using highly detailed assets like windows, doors, and decoration such city models can provide an extremely high geometric level of detail but on the downside they also consist of billions of polygons which makes it infeasible to even store them as explicit polygonal meshes. Moreover, when rendering urban scenes usually only a very small fraction of the city is actually visible which calls for effective culling mechanisms. For procedural textures there are efficient screen space techniques that evaluate, e.g., a split grammar on a per-pixel basis in the fragment shader and thus render a textured facade in a view dependent manner. We take this idea further by introducing 3D geometric detail in addition to flat textures. Our approach is a two-pass procedure that first renders a flat procedural facade. During rasterization the fragment shader triggers the instantiation of a detailed asset whenever a geometric facade element is potentially visible. The set of instantiated detail models are then rendered in a second pass. The major challenges arise from the fact that geometric details belonging to a facade can be visible even if the base polygon of the facade itself is not visible. Hence we propose measures to conservatively estimate visibility without introducing excessive redundancy. We further extend our technique by a simple level of detail mechanism that switches to baked textures (of the assets) depending on the distance to the camera. We demonstrate that our technique achieves realtime frame rates for large-scale city models with massive detail on current commodity graphics hardware.

» Show Videos



Practical Anisotropic Geodesy


Marcel Campen, Martin Heistermann, Leif Kobbelt
Eurographics Symposium on Geometry Processing (SGP 2013)
pubimg

The computation of intrinsic, geodesic distances and geodesic paths on surfaces is a fundamental low-level building block in countless Computer Graphics and Geometry Processing applications. This demand led to the development of numerous algorithms – some for the exact, others for the approximate computation, some focusing on speed, others providing strict guarantees. Most of these methods are designed for computing distances according to the standard Riemannian metric induced by the surface’s embedding in Euclidean space. Generalization to other, especially anisotropic, metrics – which more recently gained interest in several application areas – is often hampered by fundamental problems. We explore and discuss possibilities for the generalization and extension of well-known methods to the anisotropic case, evaluate their relative performance in terms of accuracy and speed, and propose a novel algorithm, the Short-Term Vector Dijkstra. This algorithm is strikingly simple to implement and provides practical accuracy at a higher speed than the generalized variants of previous methods.
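
For intuition, the basic ingredient that any graph-based generalization to the anisotropic case needs is an edge length measured under a position-dependent metric tensor. The following sketch shows only this edge weight, not the Short-Term Vector Dijkstra itself; the averaging of the endpoint tensors is a common first-order choice assumed here for illustration.

import numpy as np

def anisotropic_length(p, q, M_p, M_q):
    """Length of the straight segment p->q under an anisotropic metric.

    M_p, M_q : 3x3 symmetric positive-definite tensors at the endpoints,
               e.g. encoding directions that should be 'cheaper' to travel.
    """
    e = np.asarray(q, dtype=float) - np.asarray(p, dtype=float)
    M = 0.5 * (np.asarray(M_p, dtype=float) + np.asarray(M_q, dtype=float))
    return float(np.sqrt(e @ M @ e))

# The isotropic metric reduces to the Euclidean length:
I = np.eye(3)
print(anisotropic_length([0, 0, 0], [1, 0, 0], I, I))   # 1.0
# A metric that stretches the x-direction makes the same step three times as long:
M = np.diag([9.0, 1.0, 1.0])
print(anisotropic_length([0, 0, 0], [1, 0, 0], M, M))   # 3.0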




SIFT-Realistic Rendering


Dominik Sibbing, Torsten Sattler, Bastian Leibe, Leif Kobbelt
Proceedings of Three-dimensional Vision 2013 (3DV 2013), Conference Publishing Services (CPS), IEEE Computer Society Press, Los Alamitos, California.
pubimg

3D localization approaches establish correspondences between points in a query image and a 3D point cloud reconstruction of the environment. Traditionally, the database models are created from photographs using Structure-from-Motion (SfM) techniques, which require large collections of densely sampled images. In this paper, we address the question of how point cloud data from terrestrial laser scanners can be used instead to significantly reduce the data collection effort and enable more scalable localization.

The key change here is that, in contrast to SfM points, laser-scanned 3D points are not automatically associated with local image features that could be matched to query image features. In order to make this data usable for image-based localization, we explore how point cloud rendering techniques can be leveraged to create virtual views from which database features can be extracted that match real image-based features as closely as possible. We propose different rendering techniques for this task, experimentally quantify how they affect feature repeatability, and demonstrate their benefit for image-based localization.




A Scalable Collaborative Online System for City Reconstruction


Ole Untzelmann, Torsten Sattler, Sven Middelberg, Leif Kobbelt
Best Paper Award at the ICCV Workshop on Big Data in 3D Computer Vision, 2013
pubimg

Recent advances in Structure-from-Motion and Bundle Adjustment allow us to efficiently reconstruct large 3D scenes from millions of images. However, acquiring the imagery necessary to reconstruct a whole city and not only its landmark buildings still poses a tremendous problem. In this paper, we therefore present an online system for collaborative city reconstruction that is based on crowdsourcing the image acquisition. Employing publicly available building footprints to reconstruct individual blocks rather than the whole city at once enables our system to easily scale to large urban environments. In order to map all partial reconstructions into a single coordinate frame, we develop a robust alignment scheme that registers the individual point clouds to their corresponding footprints based on GPS coordinates. Our approach can handle noise and outliers in the GPS positions and allows us to detect wrong alignments caused by the typical issues in the context of crowdsourcing applications such as malicious or improper image uploads. Furthermore, we present an efficient rendering method to obtain dense and textured views of the resulting point clouds without requiring costly multi-view stereo methods.

» Show BibTeX

@inproceedings{untzelmann2013iccv,
author = "Untzelmann, Ole and Sattler, Torsten and Middelberg, Sven and Kobbelt, Leif",
title = "{A Scalable Collaborative Online System for City Reconstruction}",
booktitle = "{The IEEE International Conference on Computer Vision (ICCV) Workshops}",
year = {2013}
}





Evaluation of a Mobile Projector based Indoor Navigation Interface


Ming Li, Katrin Arning, Oliver Sack, Jiyoung Park, Myoung-hee Kim, Martina Ziefle, Leif Kobbelt
Interacting with Computers
pubimg

In recent years, the interest and potential applications of pedestrian indoor navigation solutions have significantly increased. Whereas the majority of mobile indoor navigation aid solutions visualize navigational information on mobile screens, the present study investigates the effectiveness of a mobile projector as navigation aid which directly projects navigational information into the environment. A benchmark evaluation of the mobile projector-based indoor navigation interface was carried out investigating a combination of different navigation devices (mobile projector vs. mobile screen) and navigation information (map vs. arrow) as well as the impact of users' spatial abilities. Results showed a superiority of the mobile screen as navigation aid and the map as navigation information type. Especially users with low spatial abilities benefited from this combination in their navigation performance and acceptance. Potential application scenarios and design implications for novel indoor navigation interfaces are derived from our findings.

» Show Videos



Advanced Automatic Hexahedral Mesh Generation from Surface Quad Meshes


Michael Kremer, David Bommes, Isaak Lim, Leif Kobbelt
22nd International Meshing Roundtable, Orlando, Florida, USA.
pubimg

A purely topological approach for the generation of hexahedral meshes from quadrilateral surface meshes of genus zero has been proposed by M. Müller-Hannemann: in a first stage, the input surface mesh is reduced to a single hexahedron by successively eliminating loops from the dual graph of the quad mesh; in the second stage, the hexahedral mesh is constructed by extruding a layer of hexahedra for each dual loop from the first stage in reverse elimination order. In this paper, we introduce several techniques to extend the scope of target shapes of the approach and significantly improve the quality of the generated hexahedral meshes. While the original method can only handle "almost convex" objects and requires mesh surgery and remeshing in case of concave geometry, we propose a method to overcome this issue by introducing the notion of concave dual loops. Furthermore, we analyze and improve the heuristic to determine the elimination order for the dual loops such that the inordinate introduction of interior singular edges, i.e. edges of degree other than four in the hexahedral mesh, can be avoided in many cases.




Geometry Seam Carving


Ellen Dekkers, Leif Kobbelt
SIAM Conference on Geometric and Physical Modeling (GD/SPM 2013)
pubimg

We present a novel approach to feature-aware mesh deformation. Previous mesh editing methods are based on an elastic deformation model and thus tend to uniformly distribute the distortion in a least squares sense over the entire deformation region. Recent results from image resizing, however, show that discrete local modifications like deleting or adding connected seams of image pixels in regions with low saliency lead to far superior preservation of local features compared to uniform scaling -- the image retargeting analogon to least squares mesh deformation. Hence, we propose a discrete mesh editing scheme that combines elastic as well as plastic deformation (in regions with little geometric detail) by transferring the concept of seam carving from image retargeting to the mesh deformation scenario. A geometry seam consists of a connected strip of triangles within the mesh's deformation region. By collapsing or splitting the interior edges of this strip we perform a deletion or insertion operation that is equivalent to image seam carving and can be interpreted as a local plastic deformation. We use a feature measure to rate the geometric saliency of each triangle in the mesh and a well-adjusted distortion measure to determine where the current mesh distortion asks for plastic deformations, i.e., for deletion or insertion of geometry seams. Precomputing a fixed set of low-saliency seams in the deformation region allows us to perform fast seam deletion and insertion operations in a predetermined order such that the local mesh modifications are properly restored when a mesh editing operation is (partially) undone. Geometry seam carving hence enables the deformation of a given mesh in a way that causes stronger distortion in homogeneous mesh regions while salient features are preserved much better.

» Show Videos



ProFi: Design and Evaluation of a Product Finder in a Supermarket Scenario


Ming Li, Katrin Arning, Luisa Bremen, Oliver Sack, Martina Ziefle, Leif Kobbelt
Workshop on Pervasive Technologies in Retail Environments (PeTRE13) in conjunction with UbiComp 2013
pubimg

This paper presents the design and evaluation of ProFi, a PROduct FInding assistant in a supermarket scenario. We explore the idea of micro-navigation in supermarkets and aim at enhancing visual search processes in front of a shelf. In order to assess the concept, a prototype is built combining visual recognition techniques with an Augmented Reality interface. Two AR patterns (circle and spotlight) are designed to highlight target products. The prototype is formally evaluated in a controlled environment. Quantitative and qualitative data is collected to evaluate the usability and user preference. The results show that ProFi significantly improves the users’ product finding performance, especially when using the circle, and that ProFi is well accepted by users.

» Show Videos



OpenFlipper - A Highly Modular Framework for Processing and Visualization of Complex Geometric Models


Jan Möbius, Michael Kremer, Leif Kobbelt
Sixth Workshop on Software Engineering and Architectures for Realtime Interactive Systems SEARIS 2013
pubimg

OpenFlipper is an open-source framework for processing and visualization of complex geometric models suitable for software development in both research and commercial applications. In this paper we describe in detail the software architecture, which is designed to provide a high degree of modularity and adaptability for various purposes. Although OpenFlipper originates in the field of geometry processing, many emerging applications in this domain increasingly rely on immersion technologies. Consequently, the presented software is, unlike most existing VR software frameworks, mainly intended to be used for the content creation and processing of virtual environments while directly providing a variety of immersion techniques. By keeping OpenFlipper’s core as simple as possible and implementing functional components as plugins, the framework’s structure allows for easy extensions, replacements, and bundling. We particularly focus on the description of the integrated rendering pipeline that addresses the requirements of flexible, modern high-end graphics applications. Furthermore, we describe how cross-platform unit and smoke testing as well as continuous integration is implemented in order to guarantee that new code revisions remain portable and regression is minimized. OpenFlipper is licensed under the GNU Lesser General Public License (LGPL) and is currently available for Linux, Windows, and Mac OS X.




Dual Loops Meshing: Quality Quad Layouts on Manifolds


Marcel Campen, David Bommes, Leif Kobbelt
SIGGRAPH 2012
pubimg

We present a theoretical framework and practical method for the automatic construction of simple, all-quadrilateral patch layouts on manifold surfaces. The resulting layouts are coarse, surface-embedded cell complexes well adapted to the geometric structure, hence they are ideally suited as domains and base complexes for surface parameterization, spline fitting, or subdivision surfaces and can be used to generate quad meshes with a high-level patch structure that are advantageous in many application scenarios. Our approach is based on the careful construction of the layout graph's combinatorial dual. In contrast to the primal this dual perspective provides direct control over the globally interdependent structural constraints inherent to quad layouts. The dual layout is built from curvature-guided, crossing loops on the surface. A novel method to construct these efficiently in a geometry- and structure-aware manner constitutes the core of our approach.




Improving Image-Based Localization by Active Correspondence Search


Torsten Sattler, Bastian Leibe, Leif Kobbelt
12th European Conference on Computer Vision (ECCV'12)
pubimg

We propose a powerful pipeline for determining the pose of a query image relative to a point cloud reconstruction of a large scene consisting of more than one million 3D points. The key component of our approach is an efficient and effective search method to establish matches between image features and scene points needed for pose estimation. Our main contribution is a framework for actively searching for additional matches, based on both 2D-to-3D and 3D-to-2D search. A unified formulation of search in both directions allows us to exploit the distinct advantages of both strategies, while avoiding their weaknesses. Due to active search, the resulting pipeline is able to close the gap in registration performance observed between efficient search methods and approaches that are allowed to run for multiple seconds, without sacrificing run-time efficiency. Our method achieves the best registration performance published so far on three standard benchmark datasets, with run-times comparable or superior to the fastest state-of-the-art methods.



The original publication will be available at www.springerlink.com upon publication.



Procedural Interpolation of Historical City Maps


Lars Krecklau, Christopher Manthei, Leif Kobbelt
Eurographics 2012
pubimg

We propose a novel approach for the temporal interpolation of city maps. The input to our algorithm is a sparse set of historical city maps plus optional additional knowledge about construction or destruction events. The output is a fast-forward animation of the city map development where roads and buildings are constructed and destroyed over time in order to match the sparse historical facts and to look plausible where no precise facts are available. Such a smooth transition between the available real-world data is interesting for educational purposes, because it conveys an intuition of the city's development. The insertion of data, like when and where a certain building or road existed, is efficiently performed by an intuitive graphical user interface. Our system collects all this information into a global dependency graph of events. By propagating time intervals through the dependency graph we can automatically derive the earliest and latest possible date for each event such that temporal as well as geographical consistency is guaranteed (e.g. buildings can only appear along roads that have been constructed before). During the simulation of the city development, events are scheduled according to a score function that rates the plausibility of the development (e.g. cities grow along major roads). Finally, the events are properly distributed over time to control the dynamics of the city development. Based on the city map animation we create a procedural city model in order to render a 3D animation of the city development over decades.

» Show Videos



Rationalization of Triangle-Based Point-Folding Structures


Henrik Zimmer, Marcel Campen, David Bommes, Leif Kobbelt
Eurographics 2012
pubimg

In mechanical engineering and architecture, structural elements with low material consumption and high load-bearing capabilities are essential for light-weight and even self-supporting constructions. This paper deals with so-called point-folding elements - non-planar, pyramidal panels, usually formed from thin metal sheets, which exploit the increased structural capabilities emerging from folds or creases. Given a triangulated free-form surface, a corresponding point-folding structure is a collection of pyramidal elements based on the triangles. User-specified or material-induced geometric constraints often imply that each individual folding element has a different shape, leading to immense fabrication costs. We present a rationalization method for such structures which respects the prescribed aesthetic and production constraints and finds a minimal set of molds for the production process, leading to drastically reduced costs. For each base triangle we compute and parametrize the range of feasible folding elements that satisfy the given constraints within the allowed tolerances. Then we pose the rationalization task as a geometric intersection problem, which we solve so as to maximize the re-use of mold dies. Major challenges arise from the high precision requirements and the non-trivial parametrization of the search space. We evaluate our method on a number of practical examples where we achieve rationalization gains of more than 90%.




Linear Analysis of Nonlinear Constraints for Interactive Geometric Modeling


Martin Habbecke, Leif Kobbelt
Eurographics 2012
pubimg

Thanks to its flexibility and power to handle even complex geometric relations, 3D geometric modeling with nonlinear constraints is an attractive extension of traditional shape editing approaches. However, existing approaches to analyze and solve constraint systems usually fail to meet the two main challenges of an interactive 3D modeling system: For each atomic editing operation, it is crucial to adjust as few auxiliary vertices as possible in order to not destroy the user's earlier editing effort. Furthermore, the whole constraint resolution pipeline is required to run in real-time to enable a fluent, interactive workflow. To address both issues, we propose a novel constraint analysis and solution scheme based on a key observation: While the computation of actual vertex positions requires nonlinear techniques, under few simplifying assumptions the determination of the minimal set of to-be-updated vertices can be performed on a linearization of the constraint functions. Posing the constraint analysis phase as the solution of an under-determined linear system with as few non-zero elements as possible enables us to exploit an efficient strategy for the Cardinality Minimization problem known from the field of Compressed Sensing, resulting in an algorithm capable of handling hundreds of vertices and constraints in real-time. We demonstrate at the example of an image-based modeling system for architectural models that this approach performs very well in practical applications.



» Show Videos



A Practical Guide to Polygon Mesh Repairing


Marcel Campen, Marco Attene, Leif Kobbelt
Eurographics 2012 Tutorial
pubimg

Digital 3D models are key components in many industrial and scientific sectors. In numerous domains polygon meshes have become a de facto standard for model representation. In practice meshes often have a number of defects and flaws that make them incompatible with quality requirements of specific applications. Hence, repairing such defects in order to achieve compatibility is a highly important task – in academic as well as industrial applications. In this tutorial we first systematically analyze typical application contexts together with their requirements and issues, as well as the various types of defects that typically play a role. Subsequently, we consider existing techniques to process, repair, and improve the structure, geometry, and topology of imperfect meshes, aiming at making them appropriate to case-by-case requirements. We present seminal works and key algorithms, discuss extensions and improvements, and analyze the respective advantages and disadvantages depending on the application context. Furthermore, we outline directions where further research is particularly important or promising.




Using Spherical Harmonics for Modeling Antenna Patterns


Arne Schmitz , Thomas Karolski, Leif Kobbelt
IEEE Radio and Wireless Symposium, 15th to 18th January 2012, Santa Clara, USA, to be published
pubimg

In radio wave propagation simulations there is a need for modeling antenna patterns. Both the transmitting and the receiving antenna influence the wireless link. We use spherical harmonics to compress the amount of measured data needed for complex antenna patterns. We present a method to efficiently incorporate these patterns into a ray tracing framework for radio wave propagation. We show how to efficiently generate rays according to the transmitting antenna pattern. The ray tracing simulation computes a compressed irradiance field for every point in the scene. The receiving antenna pattern can then be applied to this field for the final estimation of signal strength.
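
As an illustration of the compressed representation, a pattern stored as spherical harmonic coefficients can be evaluated in any direction by summing the basis functions. The coefficient layout below is a toy assumption, and scipy's sph_harm routine is used for the basis; the paper's projection of measured data onto this basis and its ray-generation scheme are not reproduced.

import numpy as np
from scipy.special import sph_harm

def evaluate_pattern(coeffs, theta, phi):
    """Evaluate an antenna gain pattern stored as spherical harmonic coefficients.

    coeffs : dict {(l, m): complex coefficient}, e.g. obtained by projecting
             measured gain values onto the SH basis (the compression step).
    theta  : azimuth angle in [0, 2*pi)
    phi    : polar angle in [0, pi]
    For a real-valued pattern the coefficients obey conjugate symmetry, so the
    imaginary part of the sum vanishes up to numerical noise.
    """
    value = 0.0 + 0.0j
    for (l, m), c in coeffs.items():
        # scipy convention: sph_harm(m, l, azimuth, polar)
        value += c * sph_harm(m, l, theta, phi)
    return value.real

# Toy pattern: a constant term plus a dipole-like l=1 lobe along the z-axis.
coeffs = {(0, 0): 1.0 + 0j, (1, 0): 0.5 + 0j}
print(evaluate_pattern(coeffs, theta=0.0, phi=0.0))    # towards +z (strong)
print(evaluate_pattern(coeffs, theta=0.0, phi=np.pi))  # towards -z (weak)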




Topology aware Quad Dominant Meshing for Vascular Structures


Dominik Sibbing, Hans-Christian Ebke, Kai Ingo Esser, Leif Kobbelt
Lecture Notes in Computer Science, Proc. of MeshMed 2012
pubimg

We present a pipeline to generate high-quality quad-dominant meshes for vascular structures from a given volumetric image. As is common for medical image segmentation, we use a Level Set approach to separate the region of interest from the background. However, in contrast to the standard method we control the topology of the deformable object – defined by the Level Set function – which allows us to extract a proper skeleton that represents the global topological information of the vascular structure. Instead of solving a complex global optimization problem to compute a quad mesh, we divide the problem and partition the complex model into junction and tube elements, employing the skeleton of the vascular structure. After computing quad meshes for the junctions using the Mixed Integer Quadrangulation approach, we re-mesh the tubes using an algorithm inspired by the well-known Bresenham algorithm for drawing lines, which distributes irregular elements evenly over the entire tube element.
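
The Bresenham-inspired distribution step can be sketched independently of the meshing context: walk along the rings of a tube and emit an irregular element whenever an accumulated error overflows, exactly as Bresenham decides when to step in y while drawing a line. The strip-based details of the actual remeshing algorithm are omitted here.

def distribute_irregular(n_rings, n_irregular):
    """Decide for each of n_rings cross-sections of a tube whether it receives
    an irregular element, spreading n_irregular of them as evenly as possible.
    Uses the same error-accumulation idea as Bresenham's line drawing."""
    assert 0 <= n_irregular <= n_rings
    marks = []
    error = 0
    for _ in range(n_rings):
        error += n_irregular
        if error >= n_rings:
            error -= n_rings
            marks.append(True)   # this ring gets an irregular element
        else:
            marks.append(False)
    return marks

print(distribute_irregular(10, 3))
# [False, False, False, True, False, False, True, False, False, True]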




Interactive Modeling by Procedural High-Level Primitives


Lars Krecklau, Leif Kobbelt
Shape Modeling International 2012
pubimg

Procedural modeling is a promising approach to create complex and detailed 3D objects and scenes. Based on the concept of split grammars, e.g., construction rules can be defined textually in order to describe a hierarchical build-up of a scene. Unfortunately, creating or even just reading such grammars can become very challenging for non-programmers. Recent approaches have demonstrated ideas to interactively control basic split operations for boxes, however, designers need to have a deep understanding of how to express a certain object by just using box splitting. Moreover, the degrees of freedom of a certain model are typically very high and thus the adjustment of parameters remains more or less a trial-and-error process. In our paper, we therefore present novel concepts for the intuitive and interactive handling of complex procedural grammars allowing even amateurs and non-programmers to easily modify and combine existing procedural models that are not limited to the subdivision of boxes. In our grammar 3D manipulators can be defined in order to spawn a visual representation of adjustable parameters directly in model space to reveal the influence of a parameter. Additionally, modules of the procedural grammar can be associated with a set of camera views which draw the user's attention to a specific subset of relevant parameters and manipulators. All these concepts are encapsulated into procedural high-level primitives that effectively support the efficient creation of complex procedural 3D scenes. Since our target group are mainly users without any experience in 3D modeling, we prove the usability of our system by letting some untrained students perform a modeling task from scratch.

» Show Videos



A Framework for Vision-based Mobile AR Applications


Jan Robert Menzel, Michael Königs, Leif Kobbelt
MobileHCI 2012
pubimg

This paper analyzes the requirements for a general purpose mobile Augmented Reality framework that supports expert as well as non-expert authors to create customized mobile AR applications. A key component is the use of image based localization performed on a central server. It further describes an implementation of such a framework as well as an example application created in this framework to demonstrate the practicability of the described design.




Dynamic Tiling Display: Building an Interactive Display Surface using Multiple Mobile Devices


Ming Li, Leif Kobbelt
11th International Conference on Mobile and Ubiquitous Multimedia (MUM 2012)
pubimg

Table display surfaces, like Microsoft PixelSense, can display multimedia content to a group of users simultaneously, but they are expensive and lack mobility. Mobile devices, on the other hand, are more easily available, but due to limited screen size and resolution, they are not suitable for sharing multimedia data interactively. In this paper we present a "Dynamic Tiling Display", an interactive display surface built from mobile devices. Our framework utilizes the integrated front-facing camera of mobile devices to estimate the relative pose of multiple mobile screens arbitrarily placed on a table. Using this framework, users can create a large virtual display where multiple users can explore multimedia data interactively through separate windows (mobile screens). The major technical challenge is the calibration of individual displays, which is solved by visual object recognition using front-facing camera inputs.



Best Paper Award at MUM 2012
» Show Videos



Image Retrieval for Image-Based Localization Revisited


Torsten Sattler, Tobias Weyand, Bastian Leibe, Leif Kobbelt
British Machine Vision Conference (BMVC'12), 2012
pubimg

To reliably determine the camera pose of an image relative to a 3D point cloud of a scene, correspondences between 2D features and 3D points are needed. Recent work has demonstrated that directly matching the features against the points outperforms methods that take an intermediate image retrieval step in terms of the number of images that can be localized successfully. Yet, direct matching is inherently less scalable than retrieval-based approaches. In this paper, we therefore analyze the algorithmic factors that cause the performance gap and identify false positive votes as the main source of the gap. Based on a detailed experimental evaluation, we show that retrieval methods using a selective voting scheme are able to outperform state-of-the-art direct matching methods. We explore how both selective voting and correspondence computation can be accelerated by using a Hamming embedding of feature descriptors. Furthermore, we introduce a new dataset with challenging query images for the evaluation of image-based localization.




Variational Tangent Plane Intersection for Planar Polygonal Meshing


Henrik Zimmer, Marcel Campen, Ralf Herkrath, Leif Kobbelt
Advances in Architectural Geometry 2012
pubimg

Several theoretical and practical geometry applications are based on polygon meshes with planar faces. The planar panelization of freeform surfaces is a prominent example from the field of architectural geometry. One approach to obtain a certain kind of such meshes is by intersection of suitably distributed tangent planes. Unfortunately, this simple tangent plane intersection (TPI) idea is limited to the generation of hex-dominant meshes: as vertices are in general defined by three intersecting planes, the resulting meshes are basically duals of triangle meshes.

The explicit computation of intersection points furthermore requires dedicated handling of special cases and degenerate constellations to achieve robustness on freeform surfaces. Another limitation is the small number of degrees of freedom for incorporating design parameters.

Using a variational re-formulation, we equip the concept of TPI with additional degrees of freedom and present a robust, unified approach for creating polygonal structures with planar faces that is readily able to integrate various objectives and constraints needed in different application scenarios. We exemplarily demonstrate the abilities of our approach on three common problems in geometry processing.
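
For reference, the plain (non-variational) TPI construction determines each vertex by intersecting three tangent planes, i.e. by solving a 3x3 linear system. The sketch below shows this baseline, which also makes clear why it degenerates for near-parallel normals and offers few design degrees of freedom.

import numpy as np

def tangent_plane_intersection(points, normals):
    """Intersect three tangent planes, each given by a surface point and the
    unit normal there.  Solves  n_i . x = n_i . p_i  for the vertex x."""
    N = np.asarray(normals, dtype=float)                       # 3x3, one normal per row
    d = np.einsum('ij,ij->i', N, np.asarray(points, dtype=float))
    return np.linalg.solve(N, d)                               # fails if the normals are (near-)coplanar

# Tangent planes of the unit sphere at three axis points meet at (1, 1, 1):
pts = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
nrm = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
print(tangent_plane_intersection(pts, nrm))  # [1. 1. 1.]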




Practical Mixed-Integer Optimization for Geometry Processing


David Bommes, Henrik Zimmer, Leif Kobbelt
Special Issue of Lecture Notes in Computer Science 2012
Proceedings of Curves and Surfaces 2010
pubimg

Solving mixed-integer problems, i.e., optimization problems where some of the unknowns are continuous while others are discrete, is NP-hard. Unfortunately, real-world problems such as quadrangular remeshing usually have a large number of unknowns such that exact methods become infeasible. In this article we present a greedy strategy to rapidly approximate the solution of large quadratic mixed-integer problems within a practically sufficient accuracy. The algorithm, which is freely available as an open source library implemented in C++, determines the values of the discrete variables by successively solving relaxed problems. Additionally, arbitrary linear equality constraints, which typically arise as side conditions of the optimization problem, can be specified. The performance of the base algorithm is strongly improved by two novel extensions: (1) simultaneously estimating sets of discrete variables that do not interfere and (2) a fill-in reducing reordering of the constraints. As an example, the solver is applied to the problem of quadrilateral surface remeshing, enabling a great flexibility by supporting different types of user guidance within a real-time modeling framework for input surfaces of moderate complexity.
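
A compact illustration of the greedy rounding idea on a mixed-integer least-squares toy problem: solve the relaxation, fix the integer variable with the smallest rounding error, eliminate it, and repeat. This is only a sketch of the strategy, not the released C++ solver itself, which works with linear equality constraints and more refined local updates.

import numpy as np

def greedy_mixed_integer_lsq(A, b, int_vars):
    """Approximately minimize ||A x - b||^2 where the variables listed in
    int_vars must end up integer."""
    A = np.asarray(A, dtype=float).copy()
    b = np.asarray(b, dtype=float).copy()
    fixed = {}
    free_int = set(int_vars)

    while free_int:
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        # Pick the unfixed integer variable with the smallest rounding error.
        i = min(free_int, key=lambda j: abs(x[j] - round(x[j])))
        xi = round(x[i])
        fixed[i] = float(xi)
        b -= A[:, i] * xi          # substitute the fixed value into the system
        A[:, i] = 0.0              # remove its influence from further solves
        free_int.remove(i)

    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    for i, v in fixed.items():
        x[i] = v
    return x

# Toy problem: x0 is continuous, x1 must be integer.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([2.3, 1.6])
print(greedy_mixed_integer_lsq(A, b, int_vars=[1]))  # approx [0.3, 2.0]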




Insights into user experiences and acceptance of mobile indoor navigation devices


Katrin Arning, Martina Ziefle, Ming Li, Leif Kobbelt
11th International Conference on Mobile and Ubiquitous Multimedia (MUM 2012)

Location-based services, which can be applied in navigation systems, are a key application in mobile and ubiquitous computing. Combined with indoor localization techniques, pico projectors can be used for navigation purposes to augment the environment with navigation information. In the present empirical study (n = 24) we explore users’ perceptions, workload and navigation performance when navigating with a mobile projector in comparison to a mobile screen as indoor navigation interface. To capture user perceptions and to predict acceptance by applying structural equation modeling, we assessed perceived disorientation, privacy concerns, trust, ease of use, usefulness, and sources of visibility problems. Moreover, the impact of user factors (spatial abilities, technical self-efficacy, familiarity) on acceptance was analyzed. The structural models exhibited adequate predictive and psychometric properties. Based on real user experience, they clearly pointed out a) similarities and device-specific differences in navigation device acceptance, b) the role of specific user experiences (visibility, trust, and disorientation) during navigation device usage, and c) the underlying relationships between determinants of user acceptance. Practical implications of the results and future research questions are provided.




OpenVolumeMesh - A Versatile Index-Based Data Structure for 3D Polytopal Complexes


Michael Kremer, David Bommes, Leif Kobbelt
International Meshing Roundtable 2012, San Jose, California, USA.
pubimg

OpenVolumeMesh is a data structure which is able to represent heterogeneous 3-dimensional polytopal cell complexes and is general enough to also represent non-manifolds without incurring undue overhead. Extending the idea of half-edge based data structures for two-manifold surface meshes, all faces, i.e. the two-dimensional entities of a mesh, are represented by a pair of oriented half-faces. The concept of using directed half-entities enables inducing an orientation to the meshes in an intuitive and easy to use manner. We pursue the idea of encoding connectivity by storing first-order top-down incidence relations per entity, i.e. for each entity of dimension d, a list of links to the respective incident entities of dimension d-1 is stored. For instance, each half-face as well as its orientation is uniquely determined by a tuple of links to its incident half-edges or each 3D cell by the set of incident half-faces. This representation allows for handling non-manifolds as well as mixed-dimensional mesh configurations. No entity is duplicated according to its valence, instead, it is shared by all incident entities in order to reduce memory consumption. Furthermore, an array-based storage layout is used in combination with direct index-based access. This guarantees constant access time to the entities of a mesh. Although bottom-up incidence relations are implied by the top-down incidences, our data structure provides the option to explicitly generate and cache them in a transparent manner. This allows for accelerated navigation in the local neighborhood of an entity. We provide an open-source and platform-independent implementation of the proposed data structure written in C++ using dynamic typing paradigms. The library is equipped with a set of STL compliant iterators, a generic property system to dynamically attach properties to all entities at run-time, and a serializer/deserializer supporting a simple file format. Due to its similarity to the OpenMesh data structure, it is easy to use, in particular for those familiar with OpenMesh. Since the presented data structure is compact, intuitive, and efficient, it is suitable for a variety of applications, such as meshing, visualization, and numerical analysis. OpenVolumeMesh is open-source software licensed under the terms of the LGPL.
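
A toy Python re-creation of the index-based, top-down incidence idea (the real library is a far more general C++ implementation): every entity is an integer handle, each entity of dimension d stores the handles of its incident entities of dimension d-1, and the two oriented half-entities of an edge or face are encoded by even/odd indices. The class and method names are illustrative, not the library's API.

class PolytopalMesh:
    """Minimal index-based mesh with top-down incidences, OpenVolumeMesh-style."""
    def __init__(self):
        self.vertices = []    # vertex -> position (or any payload)
        self.edges = []       # edge   -> (from_vertex, to_vertex)
        self.faces = []       # face   -> [half-edge handles, ordered]
        self.cells = []       # cell   -> [half-face handles]

    def add_vertex(self, pos):
        self.vertices.append(pos)
        return len(self.vertices) - 1

    def add_edge(self, v0, v1):
        self.edges.append((v0, v1))
        e = len(self.edges) - 1
        return 2 * e, 2 * e + 1          # the two oriented half-edges

    def add_face(self, halfedges):
        self.faces.append(list(halfedges))
        f = len(self.faces) - 1
        return 2 * f, 2 * f + 1          # the two oriented half-faces

    def add_cell(self, halffaces):
        self.cells.append(list(halffaces))
        return len(self.cells) - 1

    def halfedge_from_vertex(self, h):
        # Even handles keep the edge orientation, odd handles reverse it.
        v0, v1 = self.edges[h // 2]
        return v0 if h % 2 == 0 else v1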




Golden Ratio Sequences for Low-Discrepancy Sampling


Colas Schretter, Leif Kobbelt, Paul-Olivier Dehaye
ACM Journal of Graphics Tools 16(2), 2012, pp. 95-104
pubimg

Most classical constructions of low-discrepancy point sets are based on generalizations of the one-dimensional binary van der Corput sequence, whose implementation requires nontrivial bit-operations. As an alternative, we introduce the quasi-regular golden ratio sequences, which are based on the fractional part of successive integer multiples of the golden ratio. By leveraging results from number theory, we show that point sets, which evenly cover the unit square or disc, can be computed by a simple incremental permutation of a generator golden ratio sequence. We compare ambient occlusion images generated with a Monte Carlo ray tracer based on random, Hammersley, blue noise, and golden ratio point sets. The source code of the ray tracer used for our experiments is available online at the address provided at the end of this article.
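
The generator sequence itself is essentially one line of code: the fractional parts of successive integer multiples of the golden ratio. The 2D point sets on the square and disc described in the paper are built on top of permutations of this sequence and are not reproduced in this sketch.

import math

def golden_ratio_sequence(n):
    """First n points of the 1D golden ratio sequence: fractional parts of
    successive integer multiples of the golden ratio.  Every prefix is already
    well distributed in [0, 1), which makes it a convenient low-discrepancy
    generator."""
    phi = (1.0 + math.sqrt(5.0)) / 2.0
    return [(i * phi) % 1.0 for i in range(1, n + 1)]

print(["%.3f" % x for x in golden_ratio_sequence(8)])
# ['0.618', '0.236', '0.854', '0.472', '0.090', '0.708', '0.326', '0.944']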




The Design of a Segway AR-Tactile Navigation System


Ming Li, Lars Mahnkopf, Leif Kobbelt
Pervasive 2012
pubimg

A Segway is often used to transport a user across mid-range distances in urban environments. It has more degrees of freedom than a car or bike and is faster than a pedestrian. However, no navigation system has been designed specifically for it. Existing navigation systems are adapted to car drivers or pedestrians. Using such systems on a Segway can increase the driver’s cognitive workload and create safety risks. In this paper, we present a Segway AR-Tactile navigation system, in which we visualize the route through an Augmented Reality interface displayed by a mobile phone. The turning instructions are presented to the driver via vibro-tactile actuators attached to the handlebar. Multiple vibro-tactile patterns provide navigation instructions. We evaluate the system in real traffic and in an artificial environment. Our results show that the AR interface reduces users’ subjective workload significantly. The vibro-tactile patterns can be perceived correctly and greatly improve driving performance.

» Show Videos



Towards Fast Image-Based Localization on a City-Scale


Torsten Sattler, Bastian Leibe, Leif Kobbelt
Outdoor and Large-Scale Real-World Scene Analysis, LNCS 7474, pp. 191-211, Springer, 2012
pubimg

Recent developments in Structure-from-Motion approaches allow the reconstruction of large parts of urban scenes. The available models can in turn be used for accurate image-based localization via pose estimation from 2D-to-3D correspondences. In this paper, we analyze a recently proposed localization method that achieves state-of-the-art localization performance using a visual vocabulary quantization for efficient 2D-to-3D correspondence search. We show that using only a subset of the original models allows the method to achieve a similar localization performance. While this gain can come at additional computational cost depending on the dataset, the reduced model requires significantly less memory, allowing the method to handle even larger datasets. We study how the size of the subset, as well as the quantization, affects both the search for matches and the time needed by RANSAC for pose estimation.



The original publication will be available at www.springerlink.com upon publication.



Fast Image-Based Localization using Direct 2D-to-3D Matching


Torsten Sattler, Bastian Leibe, Leif Kobbelt
13th IEEE International Conference on Computer Vision (ICCV'11), 2011.
pubimg

Estimating the position and orientation of a camera given an image taken by it is an important step in many interesting applications such as tourist navigation, robotics, augmented reality and incremental Structure-from-Motion reconstruction. To do so, we have to find correspondences between structures seen in the image and a 3D representation of the scene. Due to the recent advances in the field of Structure-from-Motion it is now possible to reconstruct large scenes up to the level of an entire city in very little time. We can use these results to enable image-based localization of a camera (and its user) on a large scale. However, when processing such large data, the computation of correspondences between points in the image and points in the model quickly becomes the bottleneck of the localization pipeline. Therefore, it is extremely important to develop methods that are able to effectively and efficiently handle such large environments and that scale well to even larger scenes.
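
The following sketch shows the correspondence-search step in its naive, brute-force form, i.e. the part that vocabulary-based quantization is designed to accelerate. All names are illustrative assumptions; descriptors are assumed to be fixed-length float vectors (e.g. SIFT), and the subsequent RANSAC-based pose estimation is not shown.

// Sketch: brute-force 2D-to-3D descriptor matching with a ratio test.
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

using Descriptor = std::vector<float>;   // e.g. a 128-D SIFT descriptor

struct Match { int feature; int point; };

float dist2(const Descriptor& a, const Descriptor& b) {
    float d = 0.f;
    for (size_t i = 0; i < a.size(); ++i) { float t = a[i] - b[i]; d += t * t; }
    return d;
}

std::vector<Match> matchFeaturesToPoints(const std::vector<Descriptor>& imageFeatures,
                                         const std::vector<Descriptor>& pointDescriptors,
                                         float ratio = 0.7f) {
    std::vector<Match> matches;
    for (size_t f = 0; f < imageFeatures.size(); ++f) {
        float best = std::numeric_limits<float>::max(), second = best;
        int bestPoint = -1;
        for (size_t p = 0; p < pointDescriptors.size(); ++p) {
            float d = dist2(imageFeatures[f], pointDescriptors[p]);
            if (d < best) { second = best; best = d; bestPoint = int(p); }
            else if (d < second) { second = d; }
        }
        // Lowe-style ratio test on squared distances.
        if (bestPoint >= 0 && best < ratio * ratio * second)
            matches.push_back({int(f), bestPoint});
    }
    return matches;   // feed the 2D-to-3D matches into a RANSAC-based pose estimator
}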




Global Structure Optimization of Quadrilateral Meshes


David Bommes, Timm Lempfer, Leif Kobbelt
Eurographics 2011
pubimg

We introduce a fully automatic algorithm which optimizes the high-level structure of a given quadrilateral mesh to achieve a coarser quadrangular base complex. Such a topological optimization is highly desirable, since state-of-the-art quadrangulation techniques lead to meshes which have an appropriate singularity distribution and an anisotropic element alignment, but usually they are still far away from the high-level structure which is typical for carefully designed meshes manually created by specialists and used e.g. in animation or simulation. In this paper we show that the quality of the high-level structure is negatively affected by helical configurations within the quadrilateral mesh. Consequently we present an algorithm which detects helices and is able to remove most of them by applying a novel grid preserving simplification operator (GP-operator) which is guaranteed to maintain an all-quadrilateral mesh. Additionally it preserves the given singularity distribution and in particular does not introduce new singularities. For each helix we construct a directed graph in which cycles through the start vertex encode operations to remove the corresponding helix. Therefore a simple graph search algorithm can be performed iteratively to remove as many helices as possible and thus improve the high-level structure in a greedy fashion. We demonstrate the usefulness of our automatic structure optimization technique by showing several examples with varying complexity.




Procedural Modeling of Interconnected Structures


Lars Krecklau, Leif Kobbelt
Eurographics 2011
pubimg

The complexity and detail of geometric scenes that are used in today's computer animated films and interactive games have reached a level where the manual creation by traditional 3D modeling tools has become infeasible. This is why procedural modeling concepts have been developed which generate highly complex 3D models by automatically executing a set of formal construction rules. Well-known examples are variants of L-systems which describe the bottom-up growth process of plants and shape grammars which define architectural buildings by decomposing blocks in a top-down fashion. However, none of these approaches allows for the easy generation of interconnected structures such as bridges or roller coasters where a functional interaction between rigid and deformable parts of an object is needed. Our approach mainly relies on the top-down decomposition principle of shape grammars to create an arbitrarily complex but well structured layout. During this process, potential attaching points are collected in containers which represent the set of candidates to establish interconnections. Our grammar then uses either abstract connection patterns or geometric queries to determine elements in those containers that are to be connected. The two different types of connections that our system supports are rigid object chains and deformable beams. The former type is constructed by inverse kinematics, the latter by spline interpolation. We demonstrate the descriptive power of our grammar by example models of bridges, roller coasters, and wall-mounted catenaries.




Walking On Broken Mesh: Defect-Tolerant Geodesic Distances and Parameterizations


Marcel Campen, Leif Kobbelt
Eurographics 2011
pubimg

Efficient methods to compute intrinsic distances and geodesic paths have been presented for various types of surface representations, most importantly polygon meshes. These meshes are usually assumed to be well-structured and manifold. In practice, however, they often contain defects like holes, gaps, degeneracies, non-manifold configurations – or they might even be just a soup of polygons. The task of repairing these defects is computationally complex and in many cases exhibits various ambiguities demanding tedious manual efforts. We present a computational framework that enables the computation of meaningful approximate intrinsic distances and geodesic paths on raw meshes in a way which is tolerant to such defects. Holes and gaps are bridged up to a user-specified tolerance threshold such that distances can be computed plausibly even across multiple connected components of inconsistent meshes. Further, we show ways to locally parameterize a surface based on geodesic distance fields, easily facilitating the application of textures and decals on raw meshes. We do all this without explicitly repairing the input, thereby avoiding the costly additional efforts. In order to enable broad applicability we provide details on two implementation variants, one optimized for performance, the other optimized for memory efficiency. Using the presented framework many applications can readily be extended to deal with imperfect meshes. Since we abstract from the input applicability is not even limited to meshes, other representations can be handled as well.




Efficient and Accurate Urban Outdoor Radio Wave Propagation


Arne Schmitz, Leif Kobbelt
IEEE ICEAA 2011, September 12-16 2011, Torino, Italy
pubimg

Simulating radio wave propagation using geometrical optics is a well-known method. We introduce and compare a simplified 2D beam tracing and a very general 3D ray tracing approach, called photon path tracing. Both methods are designed for outdoor, urban scenarios. The 2D approach is computationally less expensive and can still model an important part of the propagation effects. The 3D approach is more general, is not limited to outdoor scenarios, and does not impose constraints or assumptions on the scene geometry. We develop methods to adapt the simulation parameters to real measurements and compare the accuracy of both presented algorithms.




Online Estimation of B-Spline Mixture Models From TOF-PET List-Mode Data


Colas Schretter, Jianyong Sun, Leif Kobbelt
Proc. of the 11th International Meeting on Fully Three-Dimensional Image Reconstruction in Radiology and Nuclear Medicine, Potsdam, Germany, July 11-15, 2011.
pubimg

In emission tomography, images are usually represented by regular grids of voxels or overlapping smooth image elements (blobs). Few other image models have been proposed like tetrahedral meshes or point clouds that are adapted to an anatomical image. This work proposes a practical sparse and continuous image model inspired from the field of parametric density estimation for Gaussian mixture models. The position, size, aspect ratio and orientation of each image element is optimized as well as its weight with a very fast online estimation method. Furthermore, the number of mixture components, hence the image resolution, is locally adapted according to the available data. The system model is represented in the same basis as image elements and captures time of flight and positron range effects in an exact way. Computations use apodized B-spline approximations of Gaussians and simple closed-form analytical expressions without any sampling or interpolation. In consequence, the reconstructed image never suffers from spurious aliasing artifacts. Noiseless images of the XCAT brain phantom were reconstructed from simulated data.




A Sketching Interface for Feature Curve Recovery of Free-Form Surfaces


Ellen Dekkers, Leif Kobbelt, Richard Pawlicki, Randall C. Smith
Computer-Aided Design, Volume 43, Issue 7, July 2011
Special issue on The 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling
pubimg

In this paper, we present a semi-automatic approach to efficiently and robustly recover the characteristic feature curves of a given free-form surface where we do not have to assume that the input is a proper manifold. The technique supports a sketch-based interface where the user just has to roughly sketch the location of a feature by drawing a stroke directly on the input mesh. The system then snaps this initial curve to the correct position based on a graph-cut optimization scheme that takes various surface properties into account. Additional position constraints can be placed and modified manually which allows for an interactive feature curve editing functionality. We demonstrate the usefulness of our technique by applying it to two practical scenarios. At first, feature curves can be used as handles for surface deformation, since they describe the main characteristics of an object. Our system allows the user to manipulate a curve while the underlying non-manifold surface adapts itself to the deformed feature. Secondly, we apply our technique to a practical problem scenario in reverse engineering. Here, we consider the problem of generating a statistical (PCA) shape model for car bodies. The crucial step is to establish proper feature correspondences between a large number of input models. Due to the significant shape variation, fully automatic techniques are doomed to failure. With our simple and effective feature curve recovery tool, we can quickly sketch a set of characteristic features on each input model which establishes the correspondence to a pre-defined template mesh and thus allows us to generate the shape model. Finally, we can use the feature curves and the shape model to implement an intuitive modeling metaphor to explore the shape space spanned by the input models.



The paper is an extended version of the paper "A sketching interface for feature curve recovery of free-form surfaces" published at the 2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling. In this extended version, we present a second application where we use the recovered feature curves as modeling handles for surface deformation.



Realtime Compositing of Procedural Facade Textures on the GPU


Lars Krecklau, Leif Kobbelt
Invited Paper at 3D-Arch 2011 (ISPRS - International Society for Photogrammetry and Remote Sensing)
pubimg

The real-time rendering of complex virtual city models has become more important in the last few years for many practical applications like realistic navigation or urban planning. For maximum rendering performance, the complexity of the geometry or textures can be reduced by decreasing the resolution until the data set can fully reside in the memory of the graphics card. This typically results in a low quality of the virtual city model. Alternatively, a streaming algorithm can load the high-quality data set from the hard drive. However, this approach requires a large amount of persistent storage providing several gigabytes of static data. We present a system that uses a texture atlas containing atomic tiles like windows, doors or wall patterns, and that combines those elements on-the-fly directly on the graphics card. The presented approach benefits from a sophisticated randomization approach that produces a large variety of facades while the grammar description itself remains small. By using a ray casting approach, we are able to trace through transparent windows revealing procedurally generated rooms, which further contributes to the realism of the rendering. The presented method enables real-time rendering of city models with a high level of detail for facades while still relying on a small memory footprint.




Markerless Reconstruction and Synthesis of Dynamic Facial Expressions


Dominik Sibbing, Martin Habbecke, Leif Kobbelt
Computer Vision and Image Understanding, Volume 115, Issue 5, Special issue on 3D Imaging and Modelling, May 2011
pubimg

In this paper we combine methods from the field of computer vision with surface editing techniques to generate animated faces, which are all in full correspondence to each other. The inputs for our system are synchronized video streams from multiple cameras. The system produces a sequence of triangle meshes with fixed connectivity, representing the dynamics of the captured face. By carefully taking all requirements and characteristics into account we decided for the proposed system design: We deform an initial face template using movements estimated from the video streams. To increase the robustness of the reconstruction, we use a morphable model as a shape prior to initialize a surfel fitting technique which is able to precisely capture face shapes not included in the morphable model. In the deformation stage, we use a 2D mesh-based tracking approach to establish correspondences over time. We then reconstruct positions in 3D using the same surfel fitting technique, and finally use the reconstructed points to robustly deform the initially reconstructed face.



This paper is an extended version of our paper "Markerless Reconstruction of Dynamic Facial Expressions" which was published 2009 at 3-D Digital Imaging and Modeling. Besides describing the reconstruction of human faces in more detail we demonstrate the applicability of the tracked face template for automatic modeling and show how to use deformation transfer to attenuate expressions, blend expressions or how to build a statistical model, similar to a morphable model, on the dynamic movements.



Pseudo-Immersive Real-Time Display of 3D Scenes on Mobile Devices


Ming Li, Arne Schmitz, Leif Kobbelt
3D Imaging, Modeling, Processing, Visualization and Transmission (3DIMPVT), May 2011
pubimg

The display of complex 3D scenes in real-time on mobile devices is difficult due to the insufficient data throughput and a relatively weak graphics performance. Hence, we propose a client-server system, where the processing of the complex scene is performed on a server and the resulting data is streamed to the mobile device. In order to cope with low transmission bitrates, the server sends new data only with a framerate of about 2 Hz. However, instead of sending plain framebuffers, the server decomposes the geometry represented by the current view's depth profile into a small set of textured polygons. This processing does not require the knowledge of geometries in the scene, i.e. the output of a time-of-flight camera can be handled as well. The 2.5D representation of the current frame allows the mobile device to render plausibly distorted views of the scene at high frame rates as long as the viewing direction does not change too much before the next frame arrives from the server. In order to further augment the visual experience, we use the mobile device's built-in camera or gyroscope to detect the spatial relation between the user's face and the device, so that the camera view can be changed accordingly. This produces a pseudo-immersive visual effect. Besides designing the overall system with a render-server, 3D display client, and real-time face/pose detection, our main technical contribution is a highly efficient algorithm that decomposes a frame buffer with per-pixel depth and normal information into a small set of planar regions which can be textured with the current frame. This representation is simple enough for realtime display on today's mobile devices.




OpenFlipper: An Open Source Geometry Processing and Rendering Framework


Jan Möbius, Leif Kobbelt
Proceedings of Curves and Surfaces 2010
pubimg

In this paper we present OpenFlipper, an extensible open source geometry processing and rendering framework. OpenFlipper is a free software toolkit and software development platform for geometry processing algorithms. It is mainly developed in the context of various academic research projects. Nevertheless some companies are already using it as a toolkit for commercial applications. This article presents the design goals for OpenFlipper, the central usability considerations and the important steps that were taken to achieve them. We give some examples of commercial applications which illustrate the flexibility of OpenFlipper. Besides software developers, end users also benefit from this common framework since all applications built on top of it share the same basic functionality and interaction metaphors.




Polygonal Boundary Evaluation of Minkowski Sums and Swept Volumes


Marcel Campen, Leif Kobbelt
Eurographics Symposium on Geometry Processing (SGP 2010)
pubimg

We present a novel technique for the efficient boundary evaluation of sweep operations applied to objects in polygonal boundary representation. These sweep operations include Minkowski addition, offsetting, and sweeping along a discrete rigid motion trajectory. Many previous methods focus on the construction of a polygonal superset (containing self-intersections and spurious internal geometry) of the boundary of the volumes which are swept. Only few are able to determine a clean representation of the actual boundary, most of them in a discrete volumetric setting. We unify such superset constructions into a succinct common formulation and present a technique for the robust extraction of a polygonal mesh representing the outer boundary, i.e. it makes no general position assumptions and always yields a manifold, watertight mesh. It is exact for Minkowski sums and approximates swept volumes polygonally. By using plane-based geometry in conjunction with hierarchical arrangement computations we avoid the necessity of arbitrary precision arithmetics and extensive special case handling. By restricting operations to regions containing pieces of the boundary, we significantly enhance the performance of the algorithm.



A WebService employing this method is available.



Two-Colored Pixels


Darko Pavic, Leif Kobbelt
Eurographics 2010
pubimg

In this paper we show how to use two-colored pixels as a generic tool for image processing. We apply two-colored pixels as a basic operator as well as a supporting data structure for several image processing applications. Traditionally, images are represented by a regular grid of square pixels with one constant color each. In the two-colored pixel representation, we reduce the image resolution and replace blocks of NxN pixels by one square that is split by a (feature) line into two regions with constant colors. We show how the conversion of standard mono-colored pixel images into two-colored pixel images can be computed efficiently by applying a hierarchical algorithm along with a CUDA-based implementation. Two-colored pixels overcome some of the limitations that classical pixel representations have, and their feature lines provide minimal geometric information about the underlying image region that can be effectively exploited for a number of applications. We show how to use two-colored pixels as an interactive brush tool, achieving realtime performance for image abstraction and non-photorealistic filtering. Additionally, we propose a realtime solution for image retargeting, defined as a linear minimization problem on a regular or even adaptive two-colored pixel image. The concept of two-colored pixels can be easily extended to a video volume, and we demonstrate this for the example of video retargeting.
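
As an illustration of the basic conversion (a brute-force sketch under assumed parameters, not the hierarchical CUDA implementation from the paper), one two-colored pixel can be fitted to an NxN block by testing a discrete set of candidate split lines and keeping the one that minimizes the per-side color error:

// Brute-force fit of one two-colored pixel to an N x N block of colors (illustrative sketch).
#include <cmath>
#include <limits>
#include <vector>

struct Color { float r, g, b; };

struct TwoColoredPixel {
    float nx, ny, d;      // split line nx*x + ny*y = d in block-local coordinates
    Color front, back;    // constant colors on the two sides of the line
};

TwoColoredPixel fitBlock(const std::vector<Color>& block, int N) {
    const float kPi = 3.14159265f;
    TwoColoredPixel best{1, 0, 0, {0, 0, 0}, {0, 0, 0}};
    float bestError = std::numeric_limits<float>::max();
    for (int a = 0; a < 16; ++a) {                               // candidate line directions
        float nx = std::cos(a * kPi / 16), ny = std::sin(a * kPi / 16);
        for (int o = 0; o <= 2 * N; ++o) {                       // candidate line offsets
            float d = 0.5f * o;
            auto side = [&](int x, int y) { return nx * (x + 0.5f) + ny * (y + 0.5f) < d ? 0 : 1; };
            Color mean[2] = {{0, 0, 0}, {0, 0, 0}};
            int count[2] = {0, 0};
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    const Color& c = block[y * N + x];
                    int s = side(x, y);
                    mean[s].r += c.r; mean[s].g += c.g; mean[s].b += c.b; ++count[s];
                }
            for (int s = 0; s < 2; ++s)
                if (count[s]) { mean[s].r /= count[s]; mean[s].g /= count[s]; mean[s].b /= count[s]; }
            float error = 0;
            for (int y = 0; y < N; ++y)
                for (int x = 0; x < N; ++x) {
                    const Color& c = block[y * N + x];
                    const Color& m = mean[side(x, y)];
                    error += (c.r - m.r) * (c.r - m.r) + (c.g - m.g) * (c.g - m.g) + (c.b - m.b) * (c.b - m.b);
                }
            if (error < bestError) { bestError = error; best = {nx, ny, d, mean[0], mean[1]}; }
        }
    }
    return best;
}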




Ad-Hoc Multi-Displays for Mobile Interactive Applications


Arne Schmitz, Ming Li, Volker Schönefeld, Leif Kobbelt
Eurographics 2010 Area Paper
pubimg

We present a framework which enables the combination of different mobile devices into one multi-display such that visual content can be shown on a larger area consisting, e.g., of several mobile phones placed arbitrarily on the table. Our system allows the user to perform multi-touch interaction metaphors, even across different devices, and it guarantees the proper synchronization of the individual displays with low latency. Hence from the user’s perspective the heterogeneous collection of mobile devices acts like one single display and input device. From the system perspective the major technical and algorithmic challenges lie in the co-calibration of the individual displays and in the low latency synchronization and communication of user events. For the calibration we estimate the relative positioning of the displays by visual object recognition and an optional manual calibration step.




Exact and Robust (Self-)Intersections for Polygonal Meshes


Marcel Campen, Leif Kobbelt
Eurographics 2010
pubimg

We present a new technique to implement operators that modify the topology of polygonal meshes at intersections and self-intersections. Depending on the modification strategy, this effectively results in operators for Boolean combinations or for the construction of outer hulls that are suited for mesh repair tasks and accurate mesh-based front tracking of deformable materials that split and merge. By combining an adaptive octree with nested binary space partitions (BSP), we can guarantee exactness (= correctness) and robustness (= completeness) of the algorithm while still achieving higher performance and less memory consumption than previous approaches. The efficiency and scalability in terms of runtime and memory is obtained by an operation localization scheme. We restrict the essential computations to those cells in the adaptive octree where intersections actually occur. Within those critical cells, we convert the input geometry into a plane-based BSP-representation which allows us to perform all computations exactly even with fixed precision arithmetics. We carefully analyze the precision requirements of the involved geometric data and predicates in order to guarantee correctness and show how minimal input mesh quantization can be used to safely rely on computations with standard floating point numbers. We properly evaluate our method with respect to precision, robustness, and efficiency.



A WebService employing this method is available.



Efficient Rasterization for Outdoor Radio Wave Propagation


Arne Schmitz, Tobias Rick, Thomas Karolski, Torsten Wolfgang Kuhlen, Leif Kobbelt
IEEE Transactions on Visualization and Computer Graphics, Feb. 2011, Vol. 17, Issue 2, pp. 159 - 170
pubimg

Conventional beam tracing can be used for solving global illumination problems. It is an efficient algorithm, and performs very well when implemented on the GPU. This allows us to apply the algorithm in a novel way to the problem of radio wave propagation. The simulation of radio waves is conceptually analogous to the problem of light transport. We use a custom, parallel rasterization pipeline for creation and evaluation of the beams. We implement a subset of a standard 3D rasterization pipeline entirely on the GPU, supporting 2D and 3D framebuffers for output. Our algorithm can provide a detailed description of complex radio channel characteristics like propagation losses and the spread of arriving signals over time (delay spread). Those are essential for the planning of communication systems required by mobile network operators. For validation, we compare our simulation results with measurements from a real world network. Furthermore, we account for characteristics of different propagation environments and estimate the influence of unknown components like traffic or vegetation by adapting model parameters to measurements.




Image Synthesis for Branching Structures


Dominik Sibbing, Darko Pavic, Leif Kobbelt
Computer Graphics Forum, Special Issue of Pacific Graphics 2010
pubimg

We present a set of techniques for the synthesis of artificial images that depict branching structures like rivers, cracks, lightning, mountain ranges, or blood vessels. The central idea is to build a statistical model that captures the characteristic bending and branching structure from example images. Then a new skeleton structure is synthesized and the final output image is composed from image fragments of the original input images. The synthesis part of our algorithm runs mostly automatic but it optionally allows the user to control the process in order to achieve a specific result. The combination of the statistical bending and branching model with sophisticated fragment-based image synthesis corresponds to a multi-resolution decomposition of the underlying branching structure into the low frequency behavior (captured by the statistical model) and the high frequency detail (captured by the image detail in the fragments). This approach allows for the synthesis of realistic branching structures, while at the same time preserving important textural details from the original image.




Automatic Registration of Oblique Aerial Images with Cadastral Maps


Martin Habbecke, Leif Kobbelt
ECCV Workshop on Reconstruction and Modeling of Large Scale 3D Virtual Environments, 2010
pubimg

In recent years, oblique aerial images of urban regions have become increasingly popular for 3D city modeling, texturing, and various cadastral applications. In contrast to images taken vertically to the ground, they provide information on building heights, appearance of facades, and terrain elevation. Despite their widespread availability for many cities, the processing pipeline for oblique images is not fully automatic yet. Especially the process of precisely registering oblique images with map vector data can be a tedious manual process. We address this problem with a registration approach for oblique aerial images that is fully automatic and robust against discrepancies between map and image data. As input, it merely requires a cadastral map and an arbitrary number of oblique images. Besides rough initial registrations usually available from GPS/INS measurements, no further information is required, in particular no information about the terrain elevation.




Generalized Use of Non-Terminal Symbols for Procedural Modeling


Lars Krecklau, Darko Pavic, Leif Kobbelt
Computer Graphics Forum (CGF), Volume 29, Issue 8 Talk at Eurographics 2011
pubimg

We present the new procedural modeling language G² (Generalized Grammar) which adapts various concepts from general purpose programming languages in order to provide high descriptive power with well-defined semantics and a simple syntax which is easily readable even by non-programmers. We extend the scope of previous architectural modeling languages by allowing for multiple types of non-terminal objects with domain-specific operators and attributes. The language accepts non-terminal symbols as parameters in modeling rules and thus enables the definition of abstract structure templates for flexible re-use within the grammar. To identify specific scene parts or objects, we introduce flags which are Boolean values whose scope covers an entire subtree in the scenegraph. The rigorous handling of typed parameters which are locally declared within the rules prevents inconsistent states emerging from not or wrongly declared variables. By deriving G² from the well-established programming language Python, we can make sure that our modeling language has a well-defined semantics. For illustration, we apply G² to architectural as well as plant modeling in order to demonstrate its descriptive power with some complex examples.



We also provide a Python prototype related to this paper for an easy integration of our system into the Houdini modeling framework from SideFX software. It is available on the project page.



Hybrid Booleans


Darko Pavic, Marcel Campen, Leif Kobbelt
Computer Graphics Forum, vol. 29, p. 75-87, 2010
Talk at Eurographics 2011
pubimg

In this paper we present a novel method to compute Boolean operations on polygonal meshes. Given a Boolean expression over an arbitrary number of input meshes we reliably and efficiently compute an output mesh which faithfully preserves the existing sharp features and precisely reconstructs the new features appearing along the intersections of the input meshes. The term "hybrid" applies to our method in two ways: First, our algorithm operates on a hybrid data structure which stores the original input polygons (surface data) in an adaptively refined octree (volume data). By this we combine the robustness of volumetric techniques with the accuracy of surface-oriented techniques. Second, we generate a new triangulation only in a close vicinity around the intersections of the input meshes and thus preserve as much of the original mesh structure as possible (hybrid mesh). Since the actual processing of the Boolean operation is confined to a very small region around the intersections of the input meshes, we can achieve very high adaptive refinement resolutions and hence very high precision. We demonstrate our method on a number of challenging examples.




Character Reconstruction and Animation from Uncalibrated Video


Alexander Hornung, Ellen Dekkers, Martin Habbecke, Markus Gross, Leif Kobbelt
Technical Report
pubimg

We present a novel method to reconstruct 3D character models from video. The main conceptual contribution is that the reconstruction can be performed from a single uncalibrated video sequence which shows the character in articulated motion. We reduce this generalized problem setting to the easier case of multi-view reconstruction of a rigid scene by applying pose synchronization of the character between frames. This is enabled by two central technical contributions. First, based on a generic character shape template, a new mesh-based technique for accurate shape tracking is proposed. This method successfully handles the complex occlusion issues which occur when tracking the motion of an articulated character. Secondly, we show that image-based 3D reconstruction becomes possible by deforming the tracked character shapes as-rigid-as-possible into a common pose using motion capture data. After pose synchronization, several partial reconstructions can be merged in order to create a single, consistent 3D character model. We integrated these components into a simple interactive framework, which allows for straightforward generation and animation of 3D models for a variety of character shapes from uncalibrated monocular video.




Mixed-Integer Quadrangulation


David Bommes, Henrik Zimmer, Leif Kobbelt
ACM Transactions on Graphics (TOG), 28(3), Article No. 77, 2009
Proceedings of the 2009 SIGGRAPH Conference
pubimg

We present a novel method for quadrangulating a given triangle mesh. After constructing an as smooth as possible symmetric cross field satisfying a sparse set of directional constraints (to capture the geometric structure of the surface), the mesh is cut open in order to enable a low distortion unfolding. Then a seamless globally smooth parametrization is computed whose iso-parameter lines follow the cross field directions. In contrast to previous methods, sparsely distributed directional constraints are sufficient to automatically determine the appropriate number, type and position of singularities in the quadrangulation. Both steps of the algorithm (cross field and parametrization) can be formulated as a mixed-integer problem which we solve very efficiently by an adaptive greedy solver. We show several complex examples where high quality quad meshes are generated in a fully automatic manner.
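
A rough sketch of the greedy rounding strategy behind such a mixed-integer solver is given below. It is illustrative only, based on the general idea of repeatedly rounding the integer variable closest to an integer, fixing it, and re-solving the relaxation; it is not the released solver, and a small dense Gaussian elimination stands in for the sparse solvers used in practice.

// Greedy rounding sketch for a mixed-integer linear system A x = b.
#include <cmath>
#include <vector>

using Matrix = std::vector<std::vector<double>>;
using Vector = std::vector<double>;

// Solve A x = b by Gaussian elimination with partial pivoting (assumes a well-posed system).
Vector solve(Matrix A, Vector b) {
    const int n = int(b.size());
    for (int k = 0; k < n; ++k) {
        int piv = k;
        for (int i = k + 1; i < n; ++i)
            if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
        std::swap(A[k], A[piv]); std::swap(b[k], b[piv]);
        for (int i = k + 1; i < n; ++i) {
            double f = A[i][k] / A[k][k];
            for (int j = k; j < n; ++j) A[i][j] -= f * A[k][j];
            b[i] -= f * b[k];
        }
    }
    Vector x(n);
    for (int i = n - 1; i >= 0; --i) {
        double s = b[i];
        for (int j = i + 1; j < n; ++j) s -= A[i][j] * x[j];
        x[i] = s / A[i][i];
    }
    return x;
}

// integerVars lists the indices of x that must end up integral.
Vector greedyMixedInteger(Matrix A, Vector b, std::vector<int> integerVars) {
    Vector x = solve(A, b);
    while (!integerVars.empty()) {
        // pick the unfixed integer variable whose relaxed value is closest to an integer
        int bestPos = 0;
        double bestGap = 1e300;
        for (size_t k = 0; k < integerVars.size(); ++k) {
            double gap = std::fabs(x[integerVars[k]] - std::round(x[integerVars[k]]));
            if (gap < bestGap) { bestGap = gap; bestPos = int(k); }
        }
        int i = integerVars[bestPos];
        double r = std::round(x[i]);
        // fix x_i = r: move its column to the right-hand side and pin its row
        for (size_t row = 0; row < b.size(); ++row) {
            b[row] -= A[row][i] * r;
            A[row][i] = 0.0;
            A[i][row] = 0.0;
        }
        A[i][i] = 1.0; b[i] = r;
        integerVars.erase(integerVars.begin() + bestPos);
        x = solve(A, b);          // re-solve the reduced relaxation
    }
    return x;
}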



The Constrained Mixed-Integer Solver used in this project has been released under GPL and can be found on its projects page.



SCRAMSAC: Improving RANSAC's Efficiency with a Spatial Consistency Filter


Torsten Sattler, Bastian Leibe, Leif Kobbelt
IEEE International Conference on Computer Vision (ICCV) 2009
pubimg

Geometric verification with RANSAC has become a crucial step for many local feature based matching applications. Therefore, the details of its implementation are directly relevant for an application's run-time and the quality of the estimated results. In this paper, we propose a RANSAC extension that is several orders of magnitude faster than standard RANSAC and as fast as and more robust to degenerate configurations than PROSAC, the currently fastest RANSAC extension from the literature. In addition, our proposed method is simple to implement and does not require parameter tuning. Its main component is a spatial consistency check that results in a reduced correspondence set with a significantly increased inlier ratio, leading to faster convergence of the remaining estimation steps. In addition, we experimentally demonstrate that RANSAC can operate entirely on the reduced set not only for sampling, but also for its consensus step, leading to additional speed-ups. The resulting approach is widely applicable and can be readily combined with other extensions from the literature. We quantitatively evaluate our approach's robustness on a variety of challenging datasets and compare its performance to the state-of-the-art.
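
A simplified version of such a spatial consistency check is sketched below; the radius and threshold are illustrative assumptions, not the parameters used in the paper. A correspondence survives only if a sufficient fraction of its spatial neighbors in the first image are matched to points that are also nearby in the second image.

// Sketch of a spatial consistency filter applied before RANSAC.
#include <cmath>
#include <cstddef>
#include <vector>

struct Point2 { float x, y; };
struct Correspondence { Point2 a, b; };   // matched feature locations in images A and B

static float dist(Point2 p, Point2 q) { return std::hypot(p.x - q.x, p.y - q.y); }

std::vector<Correspondence> spatialConsistencyFilter(const std::vector<Correspondence>& all,
                                                     float radius = 50.f,
                                                     float minConsistentFraction = 0.5f) {
    std::vector<Correspondence> kept;
    for (size_t i = 0; i < all.size(); ++i) {
        int neighbors = 0, consistent = 0;
        for (size_t j = 0; j < all.size(); ++j) {
            if (i == j) continue;
            if (dist(all[i].a, all[j].a) < radius) {          // j is a spatial neighbor in image A
                ++neighbors;
                if (dist(all[i].b, all[j].b) < radius)         // ... and its match is also nearby in image B
                    ++consistent;
            }
        }
        if (neighbors > 0 && consistent >= minConsistentFraction * neighbors)
            kept.push_back(all[i]);
    }
    return kept;   // run RANSAC on this reduced, higher-inlier-ratio set
}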




Simulation of Radio Wave Propagation by Beam Tracing


Arne Schmitz, Tobias Rick, Thomas Karolski, Leif Kobbelt, Torsten Wolfgang Kuhlen
Eurographics Symposium on Parallel Graphics and Visualization
pubimg

Beam tracing can be used for solving global illumination problems. It is an efficient algorithm, and performs very well when implemented on the GPU. This allows us to apply the algorithm in a novel way to the problem of radio wave propagation. The simulation of radio waves is conceptually analogous to the problem of light transport. However, their wavelengths are of proportions similar to that of the environment. At such frequencies, waves that bend around corners due to diffraction are becoming an important propagation effect. In this paper we present a method which integrates diffraction, on top of the usual effects related to global illumination like reflection, into our beam tracing algorithm. We use a custom, parallel rasterization pipeline for creation and evaluation of the beams. Our algorithm can provide a detailed description of complex radio channel characteristics like propagation losses and the spread of arriving signals over time (delay spread). Those are essential for the planning of communication systems required by mobile network operators. For validation, we compare our simulation results with measurements from a real world network.




Markerless Reconstruction of Dynamic Facial Expressions


Dominik Sibbing, Martin Habbecke, Leif Kobbelt
2009 IEEE 12th International Conference on Computer Vision Workshops, ICCV Workshops
pubimg

In this paper we combine methods from the field of computer vision with surface editing techniques to generate animated faces, which are all in full correspondence to each other. The inputs for our system are synchronized video streams from multiple cameras. The system produces a sequence of triangle meshes with fixed connectivity, representing the dynamics of the captured face. By carefully taking all requirements and characteristics into account we decided for the proposed system design: We deform an initial face template using movements estimated from the video streams. To increase the robustness of the initial reconstruction, we use a morphable model as a shape prior. However, using an efficient surfel fitting technique, we are still able to precisely capture face shapes not part of the PCA model. In the deformation stage, we use a 2D mesh-based tracking approach to establish correspondences over time. We then reconstruct image samples in 3D using the same surfel fitting technique, and finally use the reconstructed points to robustly deform the initially reconstructed face.




An Intuitive Interface for Interactive High Quality Image-Based Modeling


Martin Habbecke, Leif Kobbelt
Computer Graphics Forum, Volume 28, Number 7, 2009
(Proc. of Pacific Graphics 2009)
pubimg

We present the design of an interactive image-based modeling tool that enables a user to quickly generate detailed 3D models with texture from a set of calibrated input images. Our main contribution is an intuitive user interface that is entirely based on simple 2D painting operations and does not require any technical expertise by the user or difficult pre-processing of the input images. One central component of our tool is a GPU-based multi-view stereo reconstruction scheme, which is implemented by an incremental algorithm, that runs in the background during user interaction so that the user does not notice any significant response delay.




GIzMOs: Genuine Image Mosaics with Adaptive Tiling


Darko Pavic, Ulf Ceumern, Leif Kobbelt
Computer Graphics Forum, Volume 28, Issue 8, pages 2244–2254, December 2009
pubimg

We present a method which splits an input image into a set of tiles. Each tile is then replaced by another image from a large database such that, when viewed from a distance, the original image is reproduced as well as possible. While the general concept of image mosaics is not new, we consider our results as "genuine image mosaics" (or short GIzMOs) in the sense that the images from the database are not modified in any way. This is different from previous work, where the image tiles are usually color shifted or overlaid with the high-frequency content of the input image. Besides the regular alignment of the tiles we propose a greedy approach for adaptive tiling where larger tiles are placed in homogenous image regions. By this we avoid the visual periodicity, which is induced by the equal spacing of the image tiles in the completely regular setting. Our overall system addresses also the cleaning of the image database by removing all unwanted images with no meaningful content. We apply differently sophisticated image descriptors to find the best matching image for each tile. For esthetic and artistic reasons we classify each tile as "feature" or "non-feature" and then apply a suitable image descriptor. In a user study we have verified that our descriptors lead to mosaics that are significantly better recognizable than just taking, e.g., average color values.
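
For illustration, the baseline descriptor mentioned at the end of the abstract, matching by average color, can be sketched as follows (an assumed simplification; the feature/non-feature classification and the more sophisticated descriptors are not reproduced):

// Sketch: match each tile to the database image with the closest average color.
#include <cstddef>
#include <limits>
#include <vector>

struct Color { float r, g, b; };

Color averageColor(const std::vector<Color>& pixels) {
    Color s{0, 0, 0};
    for (const Color& c : pixels) { s.r += c.r; s.g += c.g; s.b += c.b; }
    float inv = pixels.empty() ? 0.f : 1.f / pixels.size();
    return {s.r * inv, s.g * inv, s.b * inv};
}

// Returns, for each tile, the index of the best-matching database image.
std::vector<size_t> matchTiles(const std::vector<Color>& tileMeans,
                               const std::vector<Color>& databaseMeans) {
    std::vector<size_t> result(tileMeans.size(), 0);
    for (size_t t = 0; t < tileMeans.size(); ++t) {
        float best = std::numeric_limits<float>::max();
        for (size_t i = 0; i < databaseMeans.size(); ++i) {
            float dr = tileMeans[t].r - databaseMeans[i].r;
            float dg = tileMeans[t].g - databaseMeans[i].g;
            float db = tileMeans[t].b - databaseMeans[i].b;
            float d = dr * dr + dg * dg + db * db;
            if (d < best) { best = d; result[t] = i; }
        }
    }
    return result;
}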



A WebService employing this method is available.



A Sketching Interface for Feature Curve Recovery of Free-Form Surfaces


Ellen Dekkers, Leif Kobbelt, Richard Pawlicki, Randall C. Smith
2009 SIAM/ACM Joint Conference on Geometric and Physical Modeling
pubimg

In this paper, we present a semi-automatic approach to efficiently and robustly recover the characteristic feature curves of a given free-form surface. The technique supports a sketch-based interface where the user just has to roughly sketch the location of a feature by drawing a stroke directly on the input mesh. The system then snaps this initial curve to the correct position based on a graph-cut optimization scheme that takes various surface properties into account. Additional position constraints can be placed and modified manually which allows for an interactive feature curve editing functionality. We demonstrate the usefulness of our technique by applying it to a practical problem scenario in reverse engineering. Here, we consider the problem of generating a statistical (PCA) shape model for car bodies. The crucial step is to establish proper feature correspondences between a large number of input models. Due to the significant shape variation, fully automatic techniques are doomed to failure. With our simple and effective feature curve recovery tool, we can quickly sketch a set of characteristic features on each input model which establishes the correspondence to a pre-defined template mesh and thus allows us to generate the shape model. Finally, we can use the feature curves and the shape model to implement an intuitive modeling metaphor to explore the shape space spanned by the input models.




Spectral Quadrangulation with Orientation and Alignment Control


Jin Huang, Muyang Zhang, Jin Ma, Xinguo Liu, Leif Kobbelt, Hujun Bao
SIGGRAPH Asia 2008
pubimg

This paper presents a new quadrangulation algorithm, extending the spectral surface quadrangulation approach where the coarse quadrangular structure is derived from the Morse-Smale complex of an eigenfunction of the Laplacian operator on the input mesh. In contrast to the original scheme, we provide flexible explicit controls of the shape, size, orientation and feature alignment of the quadrangular faces. We achieve this by proper selection of the optimal eigenvalue (shape), by adaptation of the area term in the Laplacian operator (size), and by adding special constraints to the Laplace eigenproblem (orientation and alignment). By solving a generalized eigenproblem we can generate a scalar field on the mesh whose Morse-Smale complex is of high quality and satisfies all the user requirements. The final quadrilateral mesh is generated from the Morse-Smale complex by computing a globally smooth parametrization. Here we additionally introduce edge constraints to preserve user-specified feature lines accurately.




Geometric Modeling Based on Polygonal Meshes


Mario Botsch, Mark Pauly, Leif Kobbelt, Pierre Alliez, Bruno Lévy, Stephan Bischoff, Christian Rössl
Eurographics 2008 Tutorial
pubimg

In the last years triangle meshes have become increasingly popular and are nowadays intensively used in many different areas of computer graphics and geometry processing. In classical CAGD irregular triangle meshes developed into a valuable alternative to traditional spline surfaces, since their conceptual simplicity allows for more flexible and highly efficient processing. Moreover, the consequent use of triangle meshes as surface representation avoids error-prone conversions, e.g., from CAD surfaces to mesh-based input data of numerical simulations. Besides classical geometric modeling, other major areas frequently employing triangle meshes are computer games and movie production. In this context geometric models are often acquired by 3D scanning techniques and have to undergo post-processing and shape optimization techniques before being actually used in production.




High-Resolution Volumetric Computation of Offset Surfaces with Feature Preservation


Darko Pavic, Leif Kobbelt
Eurographics 2008
pubimg

We present a new algorithm for the efficient and reliable generation of offset surfaces for polygonal meshes. The algorithm is robust with respect to degenerate configurations and computes (self-)intersection free offsets that do not miss small and thin components. The results are correct within a prescribed ε-tolerance. This is achieved by using a volumetric approach where the offset surface is defined as the union of a set of spheres, cylinders, and prisms instead of surface-based approaches that generally construct an offset surface by shifting the input mesh in normal direction. Since we are using the unsigned distance field, we can handle any type of topological inconsistencies including non-manifold configurations and degenerate triangles. A simple but effective mesh operation allows us to detect and include sharp features (shocks) into the output mesh and to preserve them during post-processing (decimation and smoothing). We discretize the distance function by an efficient multi-level scheme on an adaptive octree data structure. The problem of limited voxel resolutions inherent to every volumetric approach is avoided by breaking the bounding volume into smaller tiles and processing them independently. This allows for almost arbitrarily high voxel resolutions on a commodity PC while keeping the output mesh complexity low. The quality and performance of our algorithm is demonstrated for a number of challenging examples.




Image Selection For Improved Multi-View Stereo


Alexander Hornung, Boyi Zeng, Leif Kobbelt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008)
pubimg

The Middlebury Multi-View Stereo evaluation clearly shows that the quality and speed of most multi-view stereo algorithms depends significantly on the number and selection of input images. In general, not all input images contribute equally to the quality of the output model, since several images may often contain similar and hence overly redundant visual information. This leads to unnecessarily increased processing times. On the other hand, a certain degree of redundancy can help to improve the reconstruction in more "difficult" regions of a model. In this paper we propose an image selection scheme for multi-view stereo which results in improved reconstruction quality compared to uniformly distributed views. Our method is tuned towards the typical requirements of current multi-view stereo algorithms, and is based on the idea of incrementally selecting images so that the overall coverage of a simultaneously generated proxy is guaranteed without adding too much redundant information. Critical regions such as cavities are detected by an estimate of the local photo-consistency and are improved by adding additional views. Our method is highly efficient, since most computations can be out-sourced to the GPU. We evaluate our method with four different methods participating in the Middlebury benchmark and show that in each case reconstructions based on our selected images yield an improved output quality while at the same time reducing the processing time considerably.
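
The incremental selection idea can be sketched as a greedy coverage loop (an assumed simplification; the photo-consistency estimate used to detect critical regions such as cavities is omitted):

// Sketch of greedy incremental view selection driven by proxy coverage.
#include <cstddef>
#include <vector>

// visible[i][s] == true iff proxy surface sample s is seen by candidate image i.
std::vector<int> selectViews(const std::vector<std::vector<bool>>& visible,
                             int maxViews) {
    const size_t numSamples = visible.empty() ? 0 : visible[0].size();
    std::vector<bool> covered(numSamples, false);
    std::vector<bool> used(visible.size(), false);
    std::vector<int> selected;
    for (int k = 0; k < maxViews; ++k) {
        int bestImage = -1, bestGain = 0;
        for (size_t i = 0; i < visible.size(); ++i) {
            if (used[i]) continue;
            int gain = 0;
            for (size_t s = 0; s < numSamples; ++s)
                if (visible[i][s] && !covered[s]) ++gain;    // samples newly covered by image i
            if (gain > bestGain) { bestGain = gain; bestImage = int(i); }
        }
        if (bestImage < 0) break;                            // no remaining image adds coverage
        used[bestImage] = true;
        selected.push_back(bestImage);
        for (size_t s = 0; s < numSamples; ++s)
            if (visible[bestImage][s]) covered[s] = true;
    }
    return selected;
}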




Beam Tracing for Multipath Propagation in Urban Environments


Arne Schmitz, Tobias Rick, Thomas Karolski, Leif Kobbelt, Torsten Wolfgang Kuhlen
3rd European Conference on Antennas and Propagation, to appear
pubimg

We present a novel method for efficient computation of complex channel characteristics due to multipath effects in urban microcell environments. Significant speedups are obtained compared to state-of-the-art ray-tracing algorithms by tracing continuous beams and by using parallelization techniques. We optimize simulation parameters using on-site measurements from real world networks. We formulate the adaptation of model parameters as a constrained least-squares problem where each row of the matrix corresponds to one measurement location, and where the columns are formed by the beams that reach the respective location.
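
To make the least-squares formulation concrete, here is a small sketch with two hypothetical model parameters fitted to measurements via the unconstrained normal equations; the constraints and the per-beam matrix assembly used in the paper are not reproduced.

// Sketch: fit two propagation-model parameters by least squares. Each measurement
// corresponds to one row of the system; the 2x2 normal equations are solved in
// closed form (the system is assumed to be well-conditioned).
#include <array>
#include <vector>

struct Measurement {
    double a0, a1;       // contributions of the two model terms at this location
    double observed;     // measured signal level at this location
};

std::array<double, 2> fitParameters(const std::vector<Measurement>& m) {
    // accumulate A^T A (2x2, symmetric) and A^T b (2x1)
    double s00 = 0, s01 = 0, s11 = 0, r0 = 0, r1 = 0;
    for (const Measurement& q : m) {
        s00 += q.a0 * q.a0;  s01 += q.a0 * q.a1;  s11 += q.a1 * q.a1;
        r0  += q.a0 * q.observed;  r1 += q.a1 * q.observed;
    }
    double det = s00 * s11 - s01 * s01;
    return { (r0 * s11 - r1 * s01) / det,      // Cramer's rule for the 2x2 system
             (s00 * r1 - s01 * r0) / det };
}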




2D Video Editing for 3D Effects


Darko Pavic, Volker Schönefeld, Lars Krecklau, Martin Habbecke, Leif Kobbelt
VMV 2008
pubimg

We present a semi-interactive system for advanced video processing and editing. The basic idea is to partially recover planar regions in object space and to exploit this minimal pseudo-3D information in order to make perspectively correct modifications. Typical operations are to increase the quality of a low-resolution video by overlaying high-resolution photos of the same approximately planar object or to add or remove objects by copying them from other video streams and distorting them perspectively according to some planar reference geometry. The necessary user interaction is entirely in 2D and easy to perform even for untrained users. The key to our video processing functionality is a very robust and mostly automatic algorithm for the perspective registration of video frames and photos, which can be used as a very effective video stabilization tool even in the presence of fast and blurred motion. Explicit 3D reconstruction is thus avoided and replaced by image and video rectification. The technique is based on state-of-the-art feature tracking and homography matching. In complicated and ambiguous scenes, user interaction as simple as 2D brush strokes can be used to support the registration. In the stabilized video, the reference plane appears frozen, which simplifies segmentation and matte extraction. We demonstrate our system for a number of quite challenging application scenarios such as video enhancement, background replacement, foreground removal and perspectively correct video cut and paste.




Interactive Global Illumination for Deformable Geometry in CUDA


Arne Schmitz, Markus Tavenrath, Leif Kobbelt
Pacific Graphics 2008
pubimg

Interactive global illumination for fully deformable scenes with dynamic relighting is currently a very elusive goal in the area of realistic rendering. In this work we propose a highly efficient and scalable system that is based on explicit visibility calculations. The rendering equation defines the light exchange between surfaces, which we approximate by subsampling. By utilizing the power of modern parallel GPUs using the CUDA framework we achieve interactive frame rates. Since we update the global illumination continuously in an asynchronous fashion, we maintain interactivity at all times for moderately complex scenes. We show that we can achieve higher frame rates for scenes with moving light sources, diffuse indirect illumination and dynamic geometry than other current methods, while maintaining a high image quality.



Updated paper: Small technical fix.



LaserBrush: A Flexible Device for 3D Reconstruction of Indoor Scenes


Martin Habbecke, Leif Kobbelt
ACM Symposium on Solid and Physical Modeling 2008
pubimg

While many techniques for the 3D reconstruction of small to medium sized objects have been proposed in recent years, the reconstruction of entire scenes is still a challenging task. This is especially true for indoor environments where existing active reconstruction techniques are usually quite expensive and passive, image-based techniques tend to fail due to high scene complexities, difficult lighting situations, or shiny surface materials. To fill this gap we present a novel low-cost method for the reconstruction of depth maps using a video camera and an array of laser pointers mounted on a hand-held rig. Similar to existing laser-based active reconstruction techniques, our method is based on a fixed camera, moving laser rays and depth computation by triangulation. However, unlike traditional methods, the position and orientation of the laser rig does not need to be calibrated a-priori and no precise control is necessary during image capture. The user rather moves the laser rig freely through the scene in a brush-like manner, letting the laser points sweep over the scene's surface. We do not impose any constraints on the distribution of the laser rays, the motion of the laser rig, or the scene geometry except that in each frame at least six laser points have to be visible. Our main contributions are two-fold. The first is the depth map reconstruction technique based on irregularly oriented laser rays that, by exploiting robust sampling techniques, is able to cope with missing and even wrongly detected laser points. The second is a smoothing operator for the reconstructed geometry specifically tailored to our setting that removes most of the inevitable noise introduced by calibration and detection errors without damaging important surface features like sharp edges.




An Incremental Approach to Feature Aligned Quad Dominant Remeshing


Yu-Kun Lai, Leif Kobbelt, Shi-Min Hu
ACM Symposium on Solid and Physical Modeling 2008
pubimg

In this paper we present a new algorithm which turns an unstructured triangle mesh into a quad-dominant mesh with edges aligned to the principal directions of the underlying geometry. Instead of computing a globally smooth parameterization or integrating curvature lines along a tangent vector field, we simply apply an iterative relaxation scheme which incrementally aligns the mesh edges to the principal directions. The quad-dominant mesh is eventually obtained by dropping the not-aligned diagonals from the triangle mesh. A post-processing stage is introduced to further improve the results. The major advantage of our algorithm is its conceptual simplicity since it is merely based on elementary mesh operations such as edge collapse, flip, and split. The resulting meshes exhibit a very good alignment to surface features and rather uniform distribution of mesh vertices. This makes them very well-suited, e.g., as Catmull-Clark Subdivision control meshes.




City Virtualization


Gregor Fabritius, Jan Kraßnigg, Lars Krecklau, Christopher Manthei, Alexander Hornung, Martin Habbecke, Leif Kobbelt
5. Workshop Virtuelle und Erweiterte Realität der GI-Fachgruppe VR/AR
pubimg

Virtual city models become more and more important in applications like virtual city guides, geographic information systems or large scale visualizations, and also play an important role during the design of wireless networks and the simulation of noise distribution or environmental phenomena. However, generating city models of sufficient quality with respect to different target applications is still an extremely challenging, time consuming and costly process. To improve this situation, we present a novel system for the rapid and easy creation of 3D city models from 2D map data and terrain information, which is available for many cities in digital form. Our system allows the resulting level of correctness to be varied continuously, ranging from models with high-quality geometry and plausible appearance, which are generated almost completely automatically, to models with correctly textured facades and highly detailed representations of important, well-known buildings, which can be generated with reasonable additional effort. While our main target application is the high-quality, real-time visualization of complex, detailed city models, the models generated with our approach have successfully been used for radio wave simulations as well. To demonstrate the validity of our approach, we show an exemplary reconstruction of the city of Aachen.




Quadrangular Parameterization for Reverse Engineering


David Bommes, Tobias Vossemer, Leif Kobbelt
Special Issue of Lecture Notes in Computer Science 2009 Proceedings of Curves and Surfaces 2008
pubimg

The aim of Reverse Engineering is to convert an unstructured representation of a geometric object, emerging e.g. from laser scanners, into a natural, structured representation in the spirit of CAD models, which is suitable for numerical computations. Therefore we present a user-controlled, as isometric as possible parameterization technique which is able to prescribe geometric features of the input and produces high-quality quadmeshes with low distortion. Starting with a coarse, user-prescribed layout this is achieved by using affine functions for the transition between non-orthogonal quadrangular charts of a global parameterization. The shape of each chart is optimized non-linearly for isometry of the underlying parameterization to produce meshes with low edge-length distortion. To provide full control over the meshing alignment the user can additionally tag an arbitrary subset of the layout edges which are guaranteed to be represented by enforcing them to lie on iso-lines of the parameterization but still allowing the global parameterization to relax in the direction of the iso-lines.




On-the-fly Curve-skeleton Computation for 3D Shapes


Andrei Sharf, Thomas Lewiner, Ariel Shamir, Leif Kobbelt
Eurographics 2007
pubimg

The curve-skeleton of a 3D object is an abstract geometrical and topological representation of its 3D shape. It maps the spatial relation of geometrically meaningful parts to a graph structure. Each arc of this graph represents a part of the object with roughly constant diameter or thickness, and approximates its centerline. This makes the curve-skeleton suitable to describe and handle articulated objects such as characters for animation. We present an algorithm to extract such a skeleton on-the-fly, both from point clouds and polygonal meshes. The algorithm is based on a deformable model evolution that captures the object’s volumetric shape. The deformable model involves multiple competing fronts which evolve inside the object in a coarse-to-fine manner. We first track these fronts’ centers, and then merge and filter the resulting arcs to obtain a curve-skeleton of the object. The process inherits the robustness of the reconstruction technique, being able to cope with noisy input, intricate geometry and complex topology. It creates a natural segmentation of the object and computes a center curve for each segment while maintaining a full correspondence between the skeleton and the boundary of the object.




GPU-Based Multiresolution Deformation Using Approximate Normal Field Reconstruction


Martin Marinov, Mario Botsch, Leif Kobbelt
ACM Journal of Graphics Tools 12(1), 2007, pp. 27-46
pubimg

Multiresolution shape editing performs global deformations while preserving fine surface details by modifying a smooth base surface and reconstructing the modified detailed surface as a normal displacement from it. Since two non-trivial operators (deformation and reconstruction) are involved, the computational complexity can become too high for real-time deformations of complex models. We present an efficient technique for evaluating multiresolution deformations of high-resolution triangle meshes directly on the GPU. By precomputing the deformation functions as well as their gradient information we can map both the deformation and the reconstruction operator to the GPU, which enables us to reconstruct the deformed positions and sufficiently close approximations of the normal vectors in the vertex shader in a single rendering pass. This allows us to render dynamically deforming 3D models several times faster than on the CPU. We demonstrate the application of our technique to two modern multiresolution approaches: one based on (irregular) displaced subdivision surfaces and the other one on volumetric space deformation using radial basis functions.




Accurate Computation of Geodesic Distance Fields for Polygonal Curves on Triangle Meshes


David Bommes, Leif Kobbelt
VMV 2007, pp. 151-160
pubimg

We present an algorithm for the efficient and accurate computation of geodesic distance fields on triangle meshes. We generalize the algorithm originally proposed by Surazhsky et al. While the original algorithm is able to compute geodesic distances to isolated points on the mesh only, our generalization can handle arbitrary, possibly open, polygons on the mesh to define the zero set of the distance field. Our extensions integrate naturally into the base algorithm and consequently maintain all its nice properties. For most geometry processing algorithms, the exact geodesic distance information is sampled at the mesh vertices and the resulting piecewise linear interpolant is used as an approximation to the true distance field. The quality of this approximation strongly depends on the structure of the mesh and the location of the medial axis of the distance field. Hence our second contribution is a simple adaptive refinement scheme, which inserts new vertices at critical locations on the mesh such that the final piecewise linear interpolant is guaranteed to be a faithful approximation to the true geodesic distance field.
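
As a rough illustration of what a geodesic distance field to a polygonal source is (this is not the window-propagation algorithm of the paper, only an edge-based baseline), the following Python sketch runs a multi-source Dijkstra over mesh edges. The mesh format, the function names, and the toy square are assumptions made for this example.

import heapq
import math

def edge_graph(vertices, triangles):
    """Adjacency lists with Euclidean edge lengths."""
    adj = {i: [] for i in range(len(vertices))}
    for (i, j, k) in triangles:
        for a, b in ((i, j), (j, k), (k, i)):
            l = math.dist(vertices[a], vertices[b])
            adj[a].append((b, l))
            adj[b].append((a, l))
    return adj

def approximate_distance_field(vertices, triangles, source_vertices):
    """Multi-source Dijkstra: edge-path distance of every vertex to the source set."""
    adj = edge_graph(vertices, triangles)
    dist = {i: float("inf") for i in range(len(vertices))}
    heap = []
    for s in source_vertices:
        dist[s] = 0.0
        heap.append((0.0, s))
    heapq.heapify(heap)
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for w, l in adj[v]:
            if d + l < dist[w]:
                dist[w] = d + l
                heapq.heappush(heap, (d + l, w))
    return dist

# Example: a unit square split into two triangles, source polygon = edge (0, 1)
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
tris = [(0, 1, 2), (0, 2, 3)]
print(approximate_distance_field(verts, tris, source_vertices=[0, 1]))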




A Surface-Growing Approach to Multi-View Stereo Reconstruction


Martin Habbecke, Leif Kobbelt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2007
pubimg

We present a new approach to reconstruct the shape of a 3D object or scene from a set of calibrated images. The central idea of our method is to combine the topological flexibility of a point-based geometry representation with the robust reconstruction properties of scene-aligned planar primitives. This can be achieved by approximating the shape with a set of surface elements (surfels) in the form of planar disks which are independently fitted such that their footprint in the input images matches. Instead of using an artificial energy functional to promote the smoothness of the recovered surface during fitting, we use the smoothness assumption only to initialize planar primitives and to check the feasibility of the fitting result. After an initial disk has been found, the recovered region is iteratively expanded by growing further disks in tangent direction. The expansion stops when a disk rotates by more than a given threshold during the fitting step. A global sampling strategy guarantees that eventually the whole surface is covered. Our technique does not depend on a shape prior or silhouette information for the initialization and it can automatically and simultaneously recover the geometry, topology, and visibility information which makes it superior to other state-of-the-art techniques. We demonstrate with several high-quality reconstruction examples that our algorithm performs highly robustly and is tolerant to a wide range of image capture modalities.




Fast Interactive Region of Interest Selection for Volume Visualization


Dominik Sibbing, Leif Kobbelt
Bildverarbeitung für die Medizin (2007)
pubimg

We describe a new method to support the segmentation of a volumetric MRI- or CT-dataset such that only the components selected by the user are displayed by a volume renderer for visual inspection. The goal is to combine the advantages of direct volume rendering (high efficiency and semi-transparent display of internal structures) and indirect volume rendering (well defined surface geometry and topology). Our approach is based on a re-labeling of the input volume's set of isosurfaces which allows the user to peel off the outer layers and to distinguish unconnected voxel components which happen to have the same voxel values. For memory and time efficiency, isosurfaces are never generated explicitly. Instead a second voxel grid is computed which stores a discretization of the new isosurface labels. Hence the masking of unwanted regions as well as the direct volume rendering of the desired regions of interest (ROI) can be implemented on the GPU which enables interactive frame rates even while the user changes the selection of the ROI.




Character Animation from 2D Pictures and 3D Motion Data


Alexander Hornung, Ellen Dekkers, Leif Kobbelt
ACM Transactions on Graphics (TOG), vol. 26(1), 2007
pubimg

This paper presents a new method to animate photos of 2D characters using 3D motion capture data. Given a single image of a person or essentially human-like subject, our method transfers the motion of a 3D skeleton onto the subject's 2D shape in image space, generating the impression of a realistic movement. We present robust solutions to reconstruct a projective camera model and a 3D model pose which best matches the given 2D image. Depending on the reconstructed view, a 2D shape template is selected which enables the proper handling of occlusions. After fitting the template to the character in the input image, it is deformed as-rigid-as-possible by taking the projected 3D motion data into account. Unlike previous work our method thereby correctly handles projective shape distortion. It works for images from arbitrary views and requires only a small amount of user interaction. We present animations of a diverse set of human (and non-human) characters with different types of motions such as walking, jumping, or dancing.




Competing Fronts for Coarse-to-Fine Surface Reconstruction


Andrei Sharf, Thomas Lewiner, Ariel Shamir, Leif Kobbelt, Daniel Cohen-Or
Eurographics 2006
pubimg

We present a deformable model to reconstruct a surface from a point cloud. The model is based on an explicit mesh representation composed of multiple competing evolving fronts. These fronts adapt to the local feature size of the target shape in a coarse-to-fine manner. Hence, they approach towards the finer (local) features of the target shape only after the reconstruction of the coarse (global) features has been completed. This conservative approach leads to a better control and interpretation of the reconstructed topology. The use of an explicit representation for the deformable model guarantees water-tightness and simple tracking of topological events. Furthermore, the coarse-to-fine nature of reconstruction enables adaptive handling of non-homogeneous sample density, including robustness to missing data in defective regions.




Robust Reconstruction of Watertight 3D Models from Non-uniformly Sampled Point Clouds Without Normal Information


Alexander Hornung, Leif Kobbelt
Eurographics Symposium on Geometry Processing (SGP 2006), 41-50
pubimg

We present a new volumetric method for reconstructing watertight triangle meshes from arbitrary, unoriented point clouds. While previous techniques usually reconstruct surfaces as the zero level-set of a signed distance function, our method uses an unsigned distance function and hence does not require any information about the local surface orientation. Our algorithm estimates local surface confidence values within a dilated crust around the input samples. The surface which maximizes the global confidence is then extracted by computing the minimum cut of a weighted spatial graph structure. We present an algorithm, which efficiently converts this cut into a closed, manifold triangle mesh with a minimal number of vertices. The use of an unsigned distance function avoids the topological noise artifacts caused by misalignment of 3D scans, which are common to most volumetric reconstruction techniques. Due to a hierarchical approach our method efficiently produces solid models of low genus even for noisy and highly irregular data containing large holes, without losing fine details in densely sampled regions. We show several examples for different application settings such as model generation from raw laser-scanned data, image-based 3D reconstruction, and mesh repair.
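
The graph-cut step can be illustrated in isolation with a hedged Python sketch: per-cell surface confidence values are turned into edge capacities of a spatial graph, and a minimum s-t cut separates an interior seed from the exterior so that the cut prefers edges of high surface confidence. The 2D grid, the capacity formula, and networkx as the solver are toy assumptions, not the paper's hierarchical volumetric setup.

import networkx as nx

def surface_from_cut(confidence, inside, outside):
    """confidence[y][x] in [0, 1]: high values mark likely surface locations."""
    h, w = len(confidence), len(confidence[0])
    G = nx.DiGraph()
    for y in range(h):
        for x in range(w):
            for dy, dx in ((0, 1), (1, 0)):
                ny, nx_ = y + dy, x + dx
                if ny < h and nx_ < w:
                    # edges are cheap to cut where surface confidence is high
                    cap = 1.0 - 0.5 * (confidence[y][x] + confidence[ny][nx_])
                    G.add_edge((y, x), (ny, nx_), capacity=cap)
                    G.add_edge((ny, nx_), (y, x), capacity=cap)
    cut_value, (inside_cells, _) = nx.minimum_cut(G, inside, outside)
    return cut_value, inside_cells

# Toy 4x4 confidence grid with an interior seed at (1, 1) and an exterior seed at (0, 0)
conf = [[0.1, 0.9, 0.9, 0.1],
        [0.9, 0.1, 0.1, 0.9],
        [0.9, 0.1, 0.1, 0.9],
        [0.1, 0.9, 0.9, 0.1]]
value, interior = surface_from_cut(conf, inside=(1, 1), outside=(0, 0))
print(value, sorted(interior))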




PriMo: Coupled Prisms for Intuitive Surface Modeling


Mario Botsch, Mark Pauly, Markus Gross, Leif Kobbelt
Eurographics Symposium on Geometry Processing (SGP 2006), 11-20
pubimg

We present a new method for 3D shape modeling that achieves intuitive and robust deformations by emulating physically plausible surface behavior inspired by thin shells and plates. The surface mesh is embedded in a layer of volumetric prisms, which are coupled through non-linear, elastic forces. To deform the mesh, prisms are rigidly transformed to satisfy user constraints while minimizing the elastic energy. The rigidity of the prisms prevents degenerations even under extreme deformations, making the method numerically stable. For the underlying geometric optimization we employ both local and global shape matching techniques. Our modeling framework allows for the specification of various geometrically intuitive parameters that provide control over the physical surface behavior. While computationally more involved than previous methods, our approach significantly improves robustness and simplifies user interaction for large, complex deformations.

» Show Videos



A Robust Two-Step Procedure for Quad-Dominant Remeshing


Martin Marinov, Leif Kobbelt
Computer Graphics Forum, to appear (Eurographics 2006 proceedings).
pubimg

We propose a new technique for quad-dominant remeshing which separates the local regularity requirements from the global alignment requirements by working in two steps. In the first step, we apply a slight variant of variational shape approximation in order to segment the input mesh into patches which capture the global structure of the processed object. Then we compute an optimized quad-mesh for every patch by generating a finite set of candidate curves and applying a combinatorial optimization procedure. Since the optimization is performed independently for each patch, we can afford more complex operations while keeping the overall computation times at a reasonable level. Our quad-meshing technique is robust even for noisy meshes and meshes with isotropic or flat regions since it does not rely on the generation of curves by integration along estimated principal curvature directions. Instead we compute a conformal parametrization for each patch and generate the quad-mesh from curves with minimum bending energy in the 2D parameter domain. Mesh consistency between patches is guaranteed by simply using the same set of sample points along the common boundary curve. The resulting quad-meshes are of high quality locally (shape of the quads) as well as globally (global alignment), which allows us to even generate fairly coarse quad-meshes that can be used as Catmull-Clark control meshes.




Point-Based Multiscale Surface Representation


Mark Pauly, Leif Kobbelt, Markus Gross
ACM Transactions on Graphics, Vol. 25, No. 2, April 2006, Pages 177–193.
pubimg

In this article we present a new multiscale surface representation based on point samples. Given an unstructured point cloud as input, our method first computes a series of point-based surface approximations at successively higher levels of smoothness, that is, coarser scales of detail, using geometric low-pass filtering. These point clouds are then encoded relative to each other by expressing each level as a scalar displacement of its predecessor. Low-pass filtering and encoding are combined in an efficient multilevel projection operator using local weighted least squares fitting.

Our representation is motivated by the need for higher-level editing semantics which allow surface modifications at different scales. The user can edit the surface at different approximation levels to perform coarse-scale edits on the whole model as well as very localized modifications on the surface detail. Additionally, the multiscale representation provides a separation in geometric scale which can be understood as a spectral decomposition of the surface geometry. Based on this observation, advanced geometric filtering methods can be implemented that mimic the effects of Fourier filters to achieve effects such as smoothing, enhancement, or band-pass filtering.
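
A hedged sketch of just the encoding step: once a smoother level and per-point normals are available, each finer-level point is stored as a single signed scalar offset along its base normal. The per-point correspondence between the levels and the toy plane data are assumptions of this example; the geometric low-pass filtering and the multilevel projection operator described above are omitted.

import numpy as np

def encode(fine, base, normals):
    """Scalar detail: signed offset of each fine point along its base normal."""
    return np.einsum("ij,ij->i", fine - base, normals)

def decode(base, normals, detail):
    """Reconstruct the fine level as a normal displacement of the base level."""
    return base + detail[:, None] * normals

# Toy data: the "smooth" level lies in the plane z = 0 with upward normals
base = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]])
fine = np.array([[0.0, 0.0, 0.15], [1.0, 0.0, -0.07]])
d = encode(fine, base, normals)
print(d, decode(base, normals, d))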




Wave Propagation Using the Photon Path Map


Arne Schmitz, Leif Kobbelt
ACM PE-WASUN 2006, pp. 158 ff.
pubimg

In wireless network planning, much effort is spent on the improvement of the network and transport layer -- especially for Mobile Ad Hoc Networks. Although in principle real-world measurements are necessary for this, their setup is often too complex and costly. Hence good and reliable simulation tools are needed. In this work we present a new physical layer simulation algorithm based on the extension and adaptation of recent techniques for global illumination simulation. By combining and improving these highly efficient algorithms from the field of Computer Graphics, it is possible to build a fast and flexible utility to be used for wireless network simulation. We compute a discrete sampling of the volumetric electromagnetic field by tracing stochastically generated photon paths through the scene. This so-called Photon Path Map is then used to estimate the field density at any point in space and also provides local information about the delay spread. The algorithm can be applied to three dimensional indoor as well as outdoor scenarios without any changes and the path-tracing costs scale only logarithmically with the growing complexity of the underlying scene geometry.




Real-time Visualization of Wave Propagation


Arne Schmitz, Leif Kobbelt
Proceedings of MSPE'06, pp. 71-80, Gesellschaft für Informatik (GI)
pubimg

In this work we present a method to visualize the wave propagation mechanisms of wireless networks at interactive rates. The user can move around transmitting nodes and immediately sees the resulting field strength for the complete scenario. This can be used for rapid optimization of antenna placement, or for visualizing the coverage of mobile stations as they move through a simulation. In a preprocessing step we compute the wave propagation for a distinct set of transmitter positions. In the visualization phase we then use these precomputed maps to perform a fast interpolation on current graphics hardware.
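
A minimal CPU sketch of the interpolation phase, under the assumption that field-strength maps were precomputed for transmitter positions along a single axis and are blended linearly between the two nearest positions; the actual system interpolates on the graphics card, and the data layout here is a toy assumption.

import numpy as np

def interpolate_field(maps, positions, x):
    """Blend the two precomputed maps whose transmitter positions bracket x."""
    positions = np.asarray(positions, dtype=float)
    maps = np.asarray(maps, dtype=float)          # shape: (n_positions, H, W)
    x = float(np.clip(x, positions[0], positions[-1]))
    i = np.searchsorted(positions, x)
    if i == 0:
        return maps[0]
    t = (x - positions[i - 1]) / (positions[i] - positions[i - 1])
    return (1.0 - t) * maps[i - 1] + t * maps[i]

# Two toy 2x2 field-strength maps for transmitters at x = 0 and x = 1
maps = [[[0.0, 1.0], [2.0, 3.0]],
        [[4.0, 5.0], [6.0, 7.0]]]
print(interpolate_field(maps, positions=[0.0, 1.0], x=0.25))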




Iterative Multi-View Plane Fitting


Martin Habbecke, Leif Kobbelt
VMV 2006, pp. 73-80
pubimg

We present a method for the reconstruction of 3D planes from calibrated 2D images. Given a set of pixels Ω in a reference image, our method computes a plane which best approximates that part of the scene which has been projected to Ω by exploiting additional views. Based on classical image alignment techniques we derive linear matching equations minimally parameterized by the three parameters of an object-space plane. The resulting iterative algorithm is highly robust because it is able to integrate over large image regions due to the correct object-space approximation and hence is not limited to comparing small image patches. Our method can be applied to a pair of stereo images but is also able to take advantage of the additional information provided by an arbitrary number of input images. A thorough experimental validation shows that these properties enable robust convergence especially under the influence of image sensor noise and camera calibration errors.




Interactive Image Completion with Perspective Correction


Darko Pavic, Volker Schönefeld, Leif Kobbelt
Pacific Graphics 2006, [Special issue of The Visual Computer, Volume 22, Number 9, Pages 671-681]
pubimg

We present an interactive system for fragment-based image completion which exploits information about the approximate 3D structure in a scene in order to estimate and apply perspective corrections when copying a source fragment to a target position. Even though implicit 3D information is used, the interaction is strictly 2D which makes the user interface very simple and intuitive. We propose different interaction metaphors in our system for providing 3D information interactively. Our search and matching procedure is done in the Fourier domain and hence it is very fast and it allows us to use large fragments and multiple source images with high resolution while still obtaining interactive response times. Our image completion technique also takes user-specified structure information into account where we generalize the concept of feature curves to arbitrary sets of feature pixels. We demonstrate our technique on a number of difficult completion tasks.




Hierarchical Volumetric Multi-view Stereo Reconstruction of Manifold Surfaces based on Dual Graph Embedding


Alexander Hornung, Leif Kobbelt
IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2006), vol. 1, 503-510
pubimg

This paper presents a new volumetric stereo algorithm to reconstruct the 3D shape of an arbitrary object. Our method is based on finding the minimum cut in an octahedral graph structure embedded into the volumetric grid, which establishes a well defined relationship between the integrated photo-consistency function of a region in space and the corresponding edge weights of the embedded graph. This new graph structure allows for a highly efficient hierarchical implementation supporting high volumetric resolutions and large numbers of input images. Furthermore we will show how the resulting cut surface can be directly converted into a consistent, closed and manifold mesh. Hence this work provides a complete multi-view stereo reconstruction pipeline. We demonstrate the robustness and efficiency of our technique by a number of high quality reconstructions of real objects.




Robust and Efficient Photo-Consistency Estimation for Volumetric 3D Reconstruction


Alexander Hornung, Leif Kobbelt
European Conference on Computer Vision (ECCV 2006), LNCS, vol. 3952, Springer, 179-190
pubimg

Estimating photo-consistency is one of the most important ingredients for any 3D stereo reconstruction technique that is based on a volumetric scene representation. This paper presents a new, illumination invariant photo-consistency measure for high quality, volumetric 3D reconstruction from calibrated images. In contrast to current standard methods such as normalized cross-correlation it supports unconstrained camera setups and non-planar surface approximations. We show how this measure can be embedded into a highly efficient, completely hardware accelerated volumetric reconstruction pipeline by exploiting current graphics processors. We provide examples of high quality reconstructions with computation times of only a few seconds to minutes, even for large numbers of cameras and high volumetric resolutions.




Extracting consistent and manifold interfaces from multi-valued volume data sets


Stephan Bischoff, Leif Kobbelt
Bildverarbeitung für die Medizin (2006)
pubimg

We propose an algorithm to construct a set of interfaces that separate the connected components of a multi-valued volume dataset. While each single interface is a manifold triangle mesh, two or more interfaces may join consistently along their common boundaries, i.e. there are no T-junctions or gaps. In contrast to previous work, our algorithm classifies and removes the topological ambiguities from the volume before extracting the interfaces. This not only allows for a simple and stable extraction algorithm, but also makes it possible to include user constraints.




Structure Recovery via Hybrid Variational Surface Approximation


Jianhua Wu, Leif Kobbelt
Computer Graphics Forum, Volume 24, Number 3, 2005, pp. 277 - 284 (Eurographics 2005 proceedings).
pubimg

Aiming at robust surface structure recovery, we extend the powerful optimization technique of variational shape approximation by allowing for several different primitives to represent the geometric proxy of a surface region. While the original paper only considered planes, we also include spheres, cylinders, and more complex rolling-ball blend patches. The motivation for this choice is the fact that most technical CAD objects consist of patches from these four categories. The robust segmentation and global optimization properties which have been observed for the variational shape approximation carry over to our hybrid extension. Hence, we can use our algorithm to segment a given mesh model into characteristic patches and provide a corresponding geometric proxy for each patch. The expected result, namely that we recover surface structures more robustly and thus obtain better approximations with a smaller number of primitives, is validated and demonstrated on a number of examples.




Automatic Generation of Structure Preserving Multiresolution Models


Martin Marinov, Leif Kobbelt
Computer Graphics Forum, Volume 24, Number 3, 2005, pp. 479 -- 486 (Eurographics 2005 proceedings).
pubimg

We propose a multiresolution representation which uses a subdivision surface as a smooth base surface with respect to which a high resolution mesh is defined by normal displacement. While this basic representation is quite straightforward, our actual contribution lies in the automatic generation of such a representation. Given a high resolution mesh, our algorithm is designed to derive a subdivision control mesh whose structure is properly adjusted and aligned to the major geometric features. This implies that the control vertices of the subdivision surface not only control globally smooth deformations but in addition that these deformations are meaningful in the sense that their support and shape correspond to the characteristic structure of the input mesh. This is achieved by using a new decimation scheme for general polygonal meshes (not just triangles) that is based on face merging instead of edge collapsing. A face-based integral metric makes the decimation scheme very robust such that we can obtain extremely coarse control meshes which in turn allow for deformations with large support.




Structure Preserving CAD Model Repair


Stephan Bischoff, Leif Kobbelt
Computer Graphics Forum, Volume 24, Number 3, 2005, pp. 527 -- 536 (Eurographics 2005 proceedings).
pubimg

There are two major approaches for converting a tessellated CAD model that contains inconsistencies like cracks or intersections into a manifold and closed triangle mesh. Surface oriented algorithms try to fix the inconsistencies by perturbing the input only slightly, but they often cannot handle special cases. Volumetric algorithms on the other hand produce guaranteed manifold meshes but mostly destroy the structure of the input tessellation due to global resampling. In this paper we combine the advantages of both approaches: We exploit the topological simplicity of a voxel grid to reconstruct a cleaned up surface in the vicinity of intersections and cracks, but keep the input tessellation in regions that are away from these inconsistencies. We are thus able to preserve any characteristic structure (i.e. iso-parameter or curvature lines) that might be present in the input tessellation. Our algorithm closes gaps up to a user-defined maximum diameter, resolves intersections, handles incompatible patch orientations and produces a feature-sensitive, manifold output that stays within a prescribed error-tolerance to the input model.




Real-Time Shape Editing using Radial Basis Functions


Mario Botsch, Leif Kobbelt
Computer Graphics Forum, Volume 24, Number 3, 2005, pp. 611 -- 621 (Eurographics 2005 proceedings).
pubimg

Current surface-based methods for interactive freeform editing of high resolution 3D models are very powerful, but at the same time require a certain minimum tessellation or sampling quality in order to guarantee sufficient robustness. In contrast to this, space deformation techniques do not depend on the underlying surface representation and hence are affected neither by its complexity nor by its quality aspects. However, while analogously to surface-based methods high quality deformations can be derived from variational optimization, the major drawback lies in the computation and evaluation, which is considerably more expensive for volumetric space deformations. In this paper we present techniques which allow us to use triharmonic radial basis functions for real-time freeform shape editing. An incremental least-squares method enables us to approximately solve the involved linear systems in a robust and efficient manner and by precomputing a special set of deformation basis functions we are able to significantly reduce the per-frame costs. Moreover, evaluating these linear basis functions on the GPU finally allows us to deform highly complex polygon meshes or point-based models at a rate of 30M vertices or 13M splats per second, respectively.

» Show Videos
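
The core of such a space deformation can be sketched with a small dense solver in Python, assuming the triharmonic kernel φ(r) = r³ and a handful of constraint points; the incremental least-squares solver, the precomputed deformation basis functions, and the GPU evaluation described above are omitted here, and a real implementation would add a linear polynomial term for guaranteed solvability.

import numpy as np

def fit_rbf(centers, displacements):
    """Solve A w = d with A_ij = |c_i - c_j|^3 (the triharmonic RBF in 3D).
    A linear polynomial part, needed for solvability in general, is omitted."""
    C = np.asarray(centers, dtype=float)
    D = np.asarray(displacements, dtype=float)
    r = np.linalg.norm(C[:, None, :] - C[None, :, :], axis=-1)
    return np.linalg.solve(r ** 3, D)            # one weight column per coordinate

def deform(points, centers, weights):
    """Evaluate the RBF displacement field at arbitrary points and apply it."""
    P = np.asarray(points, dtype=float)
    C = np.asarray(centers, dtype=float)
    r = np.linalg.norm(P[:, None, :] - C[None, :, :], axis=-1)
    return P + (r ** 3) @ weights

# Toy constraints: two points held in place, one handle moved up by 0.5
centers = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.5, 1.0, 0.0)]
displ   = [(0.0, 0.0, 0.0), (0.0, 0.0, 0.0), (0.0, 0.0, 0.5)]
w = fit_rbf(centers, displ)
print(deform([(0.5, 0.5, 0.0)], centers, w))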



High-Quality Surface Splatting on Today's GPUs


Mario Botsch, Alexander Hornung, Matthias Zwicker, Leif Kobbelt
Eurographics Symposium on Point-Based Graphics 2005, 17-24.
pubimg

Because of their conceptual simplicity and superior flexibility, point-based geometries evolved into a valuable alternative to surface representations based on polygonal meshes. Elliptical surface splats were shown to allow for high-quality anti-aliased rendering by sophisticated EWA filtering. Since the publication of the original software-based EWA splatting, several authors tried to map this technique to the GPU in order to exploit hardware acceleration. Due to the lacking support for splat primitives, these methods always have to find a trade-off between rendering quality and rendering performance. In this paper, we discuss the capabilities of today's GPUs for hardware-accelerated surface splatting. We present an approach that achieves a quality comparable to the original EWA splatting at a rate of more than 20M elliptical splats per second. In contrast to previous GPU renderers, our method provides per-pixel Phong shading even for dynamically changing geometries and high-quality anti-aliasing by employing a screen-space pre-filter in addition to the object-space reconstruction filter. The use of deferred shading techniques effectively avoids unnecessary shader computations and additionally provides a clear separation between the rasterization and the shading of elliptical splats, which considerably simplifies the development of custom shaders. We demonstrate quality, efficiency, and flexibility of our approach by showing several shaders on a range of models.




Progressive Splatting


Jianhua Wu, Zhuo Zhang, Leif Kobbelt
Eurographics Symposium on Point-Based Graphics 2005, 25 - 32.
pubimg

Surface splatting enables high-quality and efficient rendering algorithms for dense point-sampled datasets. However, with increasing data complexity, the need for multiresolution models becomes evident. For triangle meshes, progressive or continuous level of detail hierarchies have proven to be very effective when it comes to (locally) adapting the resolution level of the 3D model to the application-dependent quality requirements. In this paper we transfer this concept to splat-based geometry representations. Our progressive splat decimation procedure uses the standard greedy approach but unlike previous work, it uses the full splat geometry in the decimation criteria and error estimates, not just the splat centers. With two improved error metrics, this new greedy framework offers better approximation quality than other progressive splat decimators. It even comes close to the recently proposed globally optimized single-resolution sub-sampling techniques while being faster by a factor of 3.




Automatic Restoration of Polygon Models


Stephan Bischoff, Darko Pavic, Leif Kobbelt
ACM Transactions on Graphics (TOG), 24(4), 1332-1352
pubimg

We present a fully automatic technique which converts an inconsistent input mesh into an output mesh that is guaranteed to be a clean and consistent mesh representing the closed manifold surface of a solid object. The algorithm removes all typical mesh artifacts such as degenerate triangles, incompatible face orientation, non-manifold vertices and edges, overlapping and penetrating polygons, internal redundant geometry as well as gaps and holes up to a user-defined maximum size. Moreover, the output mesh always stays within a prescribed tolerance to the input mesh. Due to the effective use of a hierarchical octree data structure, the algorithm achieves high voxel resolution (up to 4096^3 on a 2GB PC) and processing times of just a few minutes for moderately complex objects. We demonstrate our technique on various architectural CAD models to show its robustness and reliability.




Optimization Methods for Scattered Data Approximation with Subdivision Surfaces


Martin Marinov, Leif Kobbelt
Graphical Models, Volume 67, issue 5, 2005, pp. 452--473 (Special issue on SM'04).
pubimg

We present a method for scattered data approximation with subdivision surfaces which actually uses the true representation of the limit surface as a linear combination of smooth basis functions associated with the control vertices. A robust and fast algorithm for exact closest point search on Loop surfaces which combines Newton iteration and non-linear minimization is used for parameterizing the samples. Based on this we perform unconditionally convergent parameter correction to optimize the approximation with respect to the L2 metric and thus we make a well-established scattered data fitting technique which has been available before only for B-spline surfaces, applicable to subdivision surfaces. We also adapt the recently discovered local second order squared distance function approximant to the parameter correction setup. Further we exploit the fact that the control mesh of a subdivision surface can have arbitrary connectivity to reduce the L∞ error up to a certain user-defined tolerance by adaptively restructuring the control mesh. Combining the presented algorithms we describe a complete procedure which is able to produce high-quality approximations of complex, detailed models.
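
The closest-point search can be illustrated on a toy parametric curve: Newton iteration on the stationarity condition of the squared distance. The circle, the function names, and the fixed iteration count are assumptions of this sketch; the paper combines such Newton steps with non-linear minimization and works on Loop subdivision surfaces rather than curves.

import numpy as np

def closest_point_newton(p, c, dc, ddc, t0, steps=20):
    """Newton iteration on g(t) = (c(t) - p) · c'(t) = 0 (stationary squared
    distance); for surfaces the same idea runs on two parameters."""
    p = np.asarray(p, dtype=float)
    t = float(t0)
    for _ in range(steps):
        r = c(t) - p
        g = r @ dc(t)
        dg = dc(t) @ dc(t) + r @ ddc(t)
        t -= g / dg
    return t, c(t)

# Toy parametric curve: a circle of radius 2 in the plane
c   = lambda t: np.array([2 * np.cos(t), 2 * np.sin(t)])
dc  = lambda t: np.array([-2 * np.sin(t), 2 * np.cos(t)])
ddc = lambda t: np.array([-2 * np.cos(t), -2 * np.sin(t)])
t, q = closest_point_newton([1.0, 1.0], c, dc, ddc, t0=0.3)
print(t, q)   # expect t close to pi/4 and q close to (sqrt(2), sqrt(2))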




Efficient Spectral Watermarking of Large Meshes with Orthogonal Basis Functions


Jianhua Wu, Leif Kobbelt
The Visual Computer (Pacific Graphics 2005 Proceedings), 21(8-10): 848-857, 2005.
pubimg

Allowing for copyright protection and ownership assertion, digital watermarking techniques, which have been successfully applied for classical media types like audio, images and videos, have recently been adapted for the newly emerged multimedia data type of 3D geometry models. In particular, the widely used spread-spectrum methods can be generalized for 3D datasets by transforming the original model to a frequency domain and perturbing the coefficients of the most dominant basis functions. Previous approaches employing this kind of spectral watermarking are mainly based on multiresolution mesh analysis, wavelet domain transformation or spectral mesh analysis. Though they already exhibit good resistance to many types of real-world attacks, they are often far too slow to cope with very large meshes due to their complicated numerical computations. In this paper, we present a novel spectral watermarking scheme using new orthogonal basis functions based on radial basis functions. With our proposed fast basis function orthogonalization, our scheme exhibits a resilience to various attacks that is similar to other related approaches, while running faster by two orders of magnitude, and can thus efficiently watermark very large models.




Self-Calibrating Optical Motion Tracking for Articulated Bodies


Alexander Hornung, Sandip Sar-Dessai, Leif Kobbelt
IEEE Virtual Reality 2005, pp. 75-82
pubimg

Building intuitive user-interfaces for Virtual Reality applications is a difficult task, as one of the main purposes is to provide a "natural", yet efficient input device to interact with the virtual environment. One particularly interesting approach is to track and retarget the complete motion of a subject. Established techniques for full body motion capture like optical motion tracking exist. However, due to their computational complexity and their reliance on pre-specified models, they fail to meet the demanding requirements of Virtual Reality environments such as real-time response, immersion, and ad hoc configurability. Our goal is to support the use of motion capture as a general input device for Virtual Reality applications. In this paper we present a self-calibrating framework for optical motion capture, enabling the reconstruction and tracking of arbitrary articulated objects in real-time. Our method automatically estimates all relevant model parameters on-the-fly without any information on the initial tracking setup or the marker distribution, and computes the geometry and topology of multiple tracked skeletons. Moreover, we show how the model can make the motion capture phase robust against marker occlusions by exploiting the redundancy in the skeleton model and by reconstructing missing inner limbs and joints of the subject from partial information. Meeting the above requirements our system is well applicable to a wide range of Virtual Reality based applications, where unconstrained tracking and flexible retargeting of motion data is desirable.




Snakes on triangle meshes


Stephan Bischoff, Tobias Weyand, Leif Kobbelt
Bildverarbeitung für die Medizin (2005), 208-212
pubimg

In this work we introduce a new method for representing and evolving snakes that are constrained to lie on a prescribed surface (triangle mesh). The new representation automatically adapts the snake resolution to the surface tessellation and does not need any (unstable) back-projection operations. Furthermore, it enables efficient and robust collision detection and gives us complete control over the topological behaviour of the snakes, i.e. snakes may split or merge depending on the intended task. Possible applications include enhanced mesh scissoring operations and the detection of constrictions of a surface.




Efficient Linear System Solvers for Mesh Processing


Mario Botsch, David Bommes, Leif Kobbelt
Invited paper at XIth IMA Conference on the Mathematics of Surfaces
pubimg

The use of polygonal mesh representations for freeform geometry enables the formulation of many important geometry processing tasks as the solution of one or several linear systems. As a consequence, the key ingredient for efficient algorithms is a fast procedure to solve linear systems. A large class of standard problems can further be shown to lead more specifically to sparse, symmetric, and positive definite systems, that allow for a numerically robust and efficient solution. In this paper we discuss and evaluate the use of sparse direct solvers for this kind of system in geometry processing applications, since in our experiments they turned out to be superior even to highly optimized multigrid methods, but at the same time were considerably easier to use and implement. Although the methods we present are well known in the field of high performance computing, we observed that they are in practice surprisingly rarely applied to geometry processing problems.
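
A small scipy sketch of the main practical point, under the assumption of a toy SPD system: the sparse matrix is factored once and the factorization is reused for many right-hand sides, which is where direct solvers pay off in geometry processing. splu stands in here for a sparse Cholesky factorization; the 1D Laplacian is a toy matrix, not mesh data.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 1000
# toy sparse SPD system: a 1D Laplacian with Dirichlet boundary conditions
A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csc")
lu = spla.splu(A)                 # factor once; this is the expensive step

rhs_x = np.random.rand(n)         # e.g. right-hand sides for x and y coordinates
rhs_y = np.random.rand(n)
x = lu.solve(rhs_x)               # every additional solve is a cheap back-substitution
y = lu.solve(rhs_y)
print(np.linalg.norm(A @ x - rhs_x), np.linalg.norm(A @ y - rhs_y))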




Point-Sampled Shape Representations


Leif Kobbelt
Keynote talk at the 1st ACM SIGGRAPH and Eurographics Symposium on Point-Based Graphics, 2004
pubimg



An Intuitive Framework for Real-Time Freeform Modeling


Mario Botsch, Leif Kobbelt
ACM Transactions on Graphics (TOG), 23(3), 630-634, 2004 Proceedings of the 2004 SIGGRAPH Conference
pubimg

We present a freeform modeling framework for unstructured triangle meshes which is based on constrained shape optimization. The goal is to simplify the user interaction even for quite complex freeform or multiresolution modifications. The user first sets various boundary constraints to define a custom tailored (abstract) basis function which is adjusted to a given design task. The actual modification is then controlled by moving one single 9-dof manipulator object. The technique can handle arbitrary support regions and piecewise boundary conditions with smoothness ranging continuously from C0 to C2. To more naturally adapt the modification to the shape of the support region, the deformed surface can be tuned to bend with anisotropic stiffness. We are able to achieve real-time response in an interactive design session even for complex meshes by precomputing a set of scalar-valued basis functions that correspond to the degrees of freedom of the manipulator by which the user controls the modification.

» Show Videos



Optimized Sub-Sampling of Point Sets for Surface Splatting


Jianhua Wu, Leif Kobbelt
Computer Graphics Forum, 23(3), 643-652, 2004 Eurographics 2004 proceedings
pubimg

Using surface splats as a rendering primitive has gained increasing attention recently due to its potential for high-performance and high-quality rendering of complex geometric models. However, as with any other rendering primitive, the processing costs are still proportional to the number of primitives that we use to represent a given object. This is why complexity reduction for point-sampled geometry is as important as it is, e.g., for triangle meshes. In this paper we present a new sub-sampling technique for dense point clouds which is specifically adjusted to the particular geometric properties of circular or elliptical surface splats. A global optimization scheme computes an approximately minimal set of splats that covers the entire surface while staying below a globally prescribed maximum error tolerance e. Since our algorithm converts pure point sample data into surface splats with normal vectors and spatial extent, it can also be considered as a surface reconstruction technique which generates a hole-free piecewise linear C^(-1) continuous approximation of the input data. Here we can exploit the higher flexibility of surface splats compared to triangle meshes. Compared to previous work in this area we are able to obtain significantly lower splat numbers for a given error tolerance.



Best student paper award!



Optimization Techniques for Approximation with Subdivision Surfaces


Martin Marinov, Leif Kobbelt
ACM Symposium on Solid Modeling and Applications 2004, 113 - 122
pubimg

We present a method for scattered data approximation with subdivision surfaces which actually uses the true representation of the limit surface as a linear combination of smooth basis functions associated with the control vertices. This is unlike previous techniques which used only piecewise linear approximations of the limit surface. By this we can assign arbitrary parameterizations to the given sample points, including those generated by parameter correction. We present a robust and fast algorithm for exact closest point search on Loop surfaces by combining Newton iteration and non-linear minimization. Based on this we perform unconditionally convergent parameter correction to optimize the approximation with respect to the L2 metric and thus we make a well-established scattered data fitting technique which has been available before only for B-spline surfaces, applicable to subdivision surfaces. Further we exploit the fact that the control mesh of a subdivision surface can have arbitrary connectivity to reduce the L∞ error up to a certain user-defined tolerance by adaptively restructuring the control mesh. By employing iterative least squares solvers, we achieve acceptable running times even for large amounts of data and we obtain high quality approximations by surfaces with relatively low control mesh complexity compared to the number of sample points. Since we are using plain subdivision surfaces, there is no need for multiresolution detail coefficients and we do not have to deal with the additional overhead in data and computational complexity associated with them.




Teaching meshes, subdivision and multiresolution techniques


Stephan Bischoff, Leif Kobbelt
Computer-Aided Design (2004), 36 (14), 1483-1500
pubimg

In recent years, geometry processing algorithms that directly operate on polygonal meshes have become an indispensable tool in computer graphics, CAD/CAM applications, numerical simulations, and medical imaging. Because the demand for people who are specialized in these techniques increases steadily, the topic is finding its way into the standard curricula of related lectures on computer graphics and geometric modeling and is often the subject of seminars and presentations. In this article we suggest a toolbox to educators who are planning to set up a lecture or talk about geometry processing for a specific audience. For this we propose a set of teaching blocks, each of which covers a specific subtopic. These teaching blocks can be assembled so as to fit different occasions like lectures, courses, seminars and talks and different audiences like students and industrial practitioners. We also provide examples that can be used to deepen the subject matter and give references to the most relevant work.




Direct Anisotropic Quad-Dominant Remeshing


Martin Marinov, Leif Kobbelt
Proc. Pacific Graphics, 207-216, 2004
pubimg

We present an extension of the anisotropic polygonal remeshing technique developed by Alliez et al. Our algorithm does not rely on a global parameterization of the mesh and therefore is applicable to arbitrary genus surfaces. We show how to exploit the structure of the original mesh in order to perform efficiently the proximity queries required in the line integration phase, thus improving dramatically the scalability and the performance of the original algorithm. Finally, we propose a novel technique for producing conforming quad-dominant meshes in isotropic regions as well by propagating directional information from the anisotropic regions.




A Remeshing Approach to Multiresolution Modeling


Mario Botsch, Leif Kobbelt
Symposium on Geometry Processing 2004, 189-196
pubimg

Providing a thorough mathematical foundation, multiresolution modeling is the standard approach for global surface deformations that preserve fine surface details in an intuitive and plausible manner. A given shape is decomposed into a smooth low-frequency base surface and high-frequency detail information. Adding these details back onto a deformed version of the base surface results in the desired modification. Using a suitable detail encoding, the connectivity of the base surface is not restricted to be the same as that of the original surface. We propose to exploit this degree of freedom to improve both robustness and efficiency of multiresolution shape editing. In several approaches the modified base surface is computed by solving a linear system of discretized Laplacians. By remeshing the base surface such that the Voronoi areas of its vertices are equalized, we turn the unsymmetric surface-related linear system into a symmetric one, such that simpler, more robust, and more efficient solvers can be applied. The high regularity of the remeshed base surface further removes numerical problems caused by mesh degeneracies and results in a better discretization of the Laplacian operator. The remeshing is performed on the low-frequency base surface only, while the connectivity of the original surface is kept fixed. Hence, this functionality can be encapsulated inside a multiresolution kernel and is thus completely hidden from the user.
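
A toy numerical sketch of the symmetry argument, under the assumption that the discrete Laplacian is written as M⁻¹L with a symmetric stiffness matrix L and a diagonal mass matrix M of vertex areas: with strongly varying areas the system matrix is unsymmetric, whereas with (nearly) equal areas it is just a scaled symmetric matrix and CG-type solvers apply directly. The matrices below are random stand-ins, not mesh data.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg

n = 200
# symmetric "stiffness" matrix (toy stand-in: a 1D Laplacian, which is SPD)
L = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

areas = np.random.uniform(0.2, 5.0, n)      # strongly varying Voronoi areas
A_unsym = sp.diags(1.0 / areas) @ L         # M^-1 L: in general not symmetric
A_sym = (1.0 / areas.mean()) * L            # equalized areas: a scaled symmetric matrix

b = np.random.rand(n)
x, info = cg(A_sym, b)                      # symmetric positive definite: CG applies
print(np.allclose((A_unsym - A_unsym.T).toarray(), 0.0),   # expected False
      np.allclose((A_sym - A_sym.T).toarray(), 0.0),       # expected True
      info)                                                 # 0 on convergence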




Phong Splatting


Mario Botsch, Michael Spernat, Leif Kobbelt
Symposium on Point-Based Graphics 2004, 25-32
pubimg

Surface splatting has developed into a valuable alternative to triangle meshes when it comes to rendering of highly detailed massive datasets. However, even highly accurate splat approximations of the given geometry may sometimes not provide a sufficient rendering quality since surface lighting mostly depends on normal vectors whose deviation is not bounded by the Hausdorff approximation error. Moreover, current point-based rendering systems usually associate a constant normal vector with each splat, leading to rendering results which are comparable to flat or Gouraud shading for polygon meshes. In contrast, we propose to base the lighting of a splat on a linearly varying normal field associated with it, and we show that the resulting Phong Splats provide a visual quality which is far superior to existing approaches. We present a simple and effective way to construct a Phong splat representation for a given set of input samples. Our surface splatting system is implemented completely based on vertex and pixel shaders of current GPUs and achieves a splat rate of up to 4M Phong shaded, filtered, and blended splats per second. In contrast to previous work, our scan conversion is projectively correct per pixel, leading to more accurate visualization and clipping at sharp features.




Survey of Point-Based Techniques in Computer Graphics


Leif Kobbelt, Mario Botsch
Computers & Graphics 2004
pubimg

In recent years point-based geometry has gained increasing attention as an alternative surface representation, both for efficient rendering and for flexible geometry processing of highly complex 3D-models. Point-sampled objects neither have to store nor to maintain globally consistent topological information. Therefore they are more flexible compared to triangle meshes when it comes to handling highly complex or dynamically changing shapes. In this paper, we make an attempt to give an overview of the various point-based methods that have been proposed over the last years. In particular we review and evaluate different shape representations, geometric algorithms, and rendering methods which use points as a universal graphics primitive.




Parameterization-free active contour models


Stephan Bischoff, Leif Kobbelt
The Visual Computer (2004), 20:217-228
pubimg

We present a novel approach for representing and evolving deformable active contours by restricting the movement of the contour vertices to the grid-lines of a uniform lattice. This restriction implicitly controls the (re-) parameterization of the contour and hence makes it possible to employ parameterization independent evolution rules. Moreover, the underlying uniform grid makes self-collision detection very efficient. Our contour model is also able to perform topology changes but - more importantly - it can detect and handle self-collisions at sub-pixel precision. In applications where topology changes are not appropriate we generate contours that touch themselves without any gaps or self-intersections.




Subdivision Scheme Tuning Around Extraordinary Vertices


Loïc Barthe, Leif Kobbelt
Computer Aided Geometric Design, 21(6), 561-583, 2004
pubimg

In this paper we extend the standard method to derive and optimize subdivision rules in the vicinity of extraordinary vertices (EV). Starting from a given set of rules for regular control meshes, we tune the extraordinary rules (ER) such that the necessary conditions for C1 continuity are satisfied along with as many necessary C2 conditions as possible. As usually done, our approach sets up the general configuration around an EV by exploiting rotational symmetry and reformulating the subdivision rules in terms of the eigencomponents of the subdivision matrix. The degrees of freedom are then successively eliminated by imposing new constraints which allows us, e.g., to improve the curvature behavior around EVs. The method is flexible enough to simultaneously optimize several subdivision rules, i.e. not only the one for the EV itself but also the rules for its direct neighbors. Moreover it allows us to prescribe the stencils for the ERs and naturally blends them with the regular rules that are applied away from the EV. All the constraints are combined in an optimization scheme that searches in the space of feasible subdivision schemes for a candidate which satisfies some necessary conditions exactly and other conditions approximately. The relative weighting of the constraints allows us to tune the properties of the subdivision scheme according to application specific requirements. We demonstrate our method by tuning the ERs for the well-known Loop scheme and by deriving ERs for a √3-type scheme based on a 6-direction Box-spline.
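
A small numpy sketch of the eigenanalysis that such tuning is built on, assuming a local subdivision matrix S is given: its eigenvalues are sorted by modulus and a few of the necessary C1 conditions (leading eigenvalue 1 and a real, equal subdominant pair dominating the rest of the spectrum) are checked. The toy diagonal matrix stands in for an actual Loop or √3 stencil, and the characteristic-map test is omitted.

import numpy as np

def spectrum(S):
    """Eigenvalues of a local subdivision matrix, sorted by decreasing modulus."""
    vals = np.linalg.eigvals(np.asarray(S, dtype=float))
    return vals[np.argsort(-np.abs(vals))]

def necessary_c1(S, tol=1e-9):
    """A few necessary C1 conditions: lambda_0 = 1 and a real subdominant pair
    with 1 > lambda_1 = lambda_2 > |lambda_3|; the characteristic map is not checked."""
    lam = spectrum(S)
    l0, l1, l2, l3 = lam[0], lam[1], lam[2], lam[3]
    return (abs(l0 - 1.0) < tol and abs(np.imag(l1)) < tol and abs(np.imag(l2)) < tol
            and abs(l1 - l2) < tol and np.real(l1) < 1.0 and abs(l3) < np.real(l1))

# Toy matrix with spectrum {1, 0.5, 0.5, 0.25} standing in for a real stencil
S = np.diag([1.0, 0.5, 0.5, 0.25])
print(spectrum(S), necessary_c1(S))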




View-Dependent Streaming of Progressive Meshes


Junho Kim, Seungyong Lee, Leif Kobbelt
Shape Modeling Applications 2004, 209-391
pubimg

Multiresolution geometry streaming has been well studied in recent years. The client can progressively visualize a triangle mesh from the coarsest resolution to the finest one while a server successively transmits detail information. However, the streaming order of the detail data usually depends only on the geometric importance, since basically a mesh simplification process is performed backwards in the streaming. Consequently, the resolution of the model changes globally during streaming even if the client does not want to download detail information for the invisible parts from a given view point. In this paper, we introduce a novel framework for view-dependent streaming of multiresolution meshes. The transmission order of the detail data can be adjusted dynamically according to the visual importance with respect to the client's current view point. By adapting the truly selective refinement scheme for progressive meshes, our framework provides efficient view-dependent streaming that minimizes memory cost and network communication overhead. Furthermore, we reduce the per-client session data on the server side by using a special data structure for encoding which vertices have already been transmitted to each client. Experimental results indicate that our framework is efficient enough for a broadcast scenario where one server streams geometry data to multiple clients with different view points.




GPU-based Tolerance Volumes for Mesh Processing


Mario Botsch, David Bommes, Christoph Vogel, Leif Kobbelt
Proc. Pacific Graphics, 237-243, 2004
pubimg

In an increasing number of applications triangle meshes represent a flexible and efficient alternative to traditional NURBS-based surface representations. Especially in engineering applications it is crucial to guarantee that a prescribed approximation tolerance to a given reference geometry is respected for any combination of geometric algorithms that are applied when processing a triangle mesh. We propose a simple and generic method for computing the distance of a given polygonal mesh to the reference surface, based on a linear approximation of its signed distance field. Exploiting the hardware acceleration of modern GPUs allows us to perform up to 3M triangle checks per second, enabling real-time distance evaluations even for complex geometries. An additional feature of our approach is the accurate high-quality distance visualization of dynamically changing meshes at a rate of 15M triangles per second. Due to its generality, the presented approach can be used to enhance any mesh processing method by global error control, guaranteeing the resulting mesh to stay within a prescribed error tolerance. The application examples that we present include mesh decimation, mesh smoothing and freeform mesh deformation.
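
A CPU stand-in for the distance evaluation, under the assumption that the reference geometry is given as a regularly sampled signed distance grid: mesh vertices are checked against a tolerance by trilinear interpolation of the grid. The plane z = 0 and the scipy interpolator are toy assumptions; the paper evaluates a linear approximation of the distance field on the GPU.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

def max_deviation(sdf, axes, points):
    """Trilinear interpolation of a signed distance grid at the mesh vertices,
    returning the largest absolute deviation from the reference surface."""
    interp = RegularGridInterpolator(axes, sdf)
    return float(np.max(np.abs(interp(points))))

# Toy reference surface: the plane z = 0, sampled as a signed distance grid
ax = np.linspace(-1.0, 1.0, 9)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sdf = Z                                          # signed distance to the plane z = 0
verts = np.array([[0.1, 0.2, 0.03], [0.5, -0.4, -0.08]])
tolerance = 0.05
dev = max_deviation(sdf, (ax, ax, ax), verts)
print(dev, dev <= tolerance)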




Topologically Correct Extraction of the Cortical Surface of a Brain Using Level-Set Methods


Stephan Bischoff, Leif Kobbelt
Bildverarbeitung für die Medizin (2004), 50-54
pubimg

In this paper we present a level-set framework for accurate and efficient extraction of the surface of a brain from MRI data. To prevent the so-called partial volume effect we use a topology preserving model that ensures the correct topology of the surface at all times during the reconstruction process. We also describe improvements that enhance its stability, accuracy and efficiency. The resulting reconstruction can then be used in downstream applications where we in particular focus on the problem of accurately measuring geodesic distances on the surface.




Shape Modeling with Point-Sampled Geometry


Mark Pauly, Richard Keiser, Leif Kobbelt, Markus Gross
SIGGRAPH 2003 Proceedings, 641 - 650
pubimg

We present a versatile and complete free-form shape modeling framework for point-sampled geometry. By combining unstructured point clouds with the implicit surface definition of the moving least squares approximation, we obtain a hybrid geometry representation that allows us to exploit the advantages of implicit and parametric surface models. Based on this representation we introduce a shape modeling system that enables the designer to perform large constrained deformations as well as boolean operations on arbitrarily shaped objects. Due to minimum consistency requirements, point-sampled surfaces can easily be re-structured on the fly to support extreme geometric deformations during interactive editing. In addition, we show that strict topology control is possible and sharp features can be generated and preserved on point-sampled objects. We demonstrate the effectiveness of our system on a large set of input models, including noisy range scans, irregular point clouds, and sparsely as well as densely sampled models.
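
A simplified Python sketch of the projection operator behind such a hybrid representation: a point is repeatedly projected onto a weighted least-squares plane fitted to nearby samples. The Gaussian weight, the fixed kernel width, and the purely planar fit are assumptions of this sketch; the full moving least squares definition used in the paper adds a local polynomial fit.

import numpy as np

def mls_project(x, points, h=0.3, iterations=5):
    """Iteratively project x onto a weighted least-squares plane fitted to
    nearby samples (a simplified, planar MLS-style projection)."""
    x = np.asarray(x, dtype=float)
    P = np.asarray(points, dtype=float)
    for _ in range(iterations):
        w = np.exp(-np.sum((P - x) ** 2, axis=1) / h ** 2)
        c = (w[:, None] * P).sum(0) / w.sum()            # weighted centroid
        M = ((P - c) * w[:, None]).T @ (P - c)           # weighted covariance
        n = np.linalg.eigh(M)[1][:, 0]                   # smallest-eigenvalue direction
        x = x - np.dot(x - c, n) * n                     # project onto the plane
    return x

# Toy point set sampling the plane z = 0 with a little noise
rng = np.random.default_rng(0)
pts = np.c_[rng.uniform(-1, 1, (200, 2)), rng.normal(0, 0.01, 200)]
print(mls_project([0.1, 0.1, 0.4], pts))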




Freeform Shape Representations for Efficient Geometry Processing


Leif Kobbelt
Keynote Talk at Eurographics 2003
pubimg

The most important data structures for handling and storage of free form shapes in geometry processing applications and computer graphics can be classified according to the dimensionality of their basic structural elements: point-sets, polygon meshes, as well as volumetric representations are well established concepts to describe the shape of arbitrarily complex objects. On the other hand, algorithms to process a given geometric object can be classified according to the dominant access operation which can be of the type evaluation (sampling), query (e.g. inside/outside tests), or modification (of the geometry or the topology). On a more abstract level, we can in principle distinguish between unstructured geometry representations which do not imply any global regularity and uniformly structured representations or even hierarchically structured representations.
All these different types of shape representations will be presented in this talk and the respective strengths and disadvantages will be discussed in detail. For a given application we can analyse which types of access operations are the most critical ones and then design a custom data structure according to the usage profile. In some cases it turns out that a proper combination of two different representations provides the optimal performance or robustness. We will present a couple of important geometry processing problems such as topology control for deforming implicit surfaces, mesh decimation and smoothing with global error control, and mesh restoration where this combination of geometry data representations leads to superior algorithms compared to the standard approaches.




Sub-Voxel Topology Control for Level Set Surfaces


Stephan Bischoff, Leif Kobbelt
Computer Graphics Forum 22(3) (Eurographics 2003 Proceedings), 273-280
pubimg

Active contour models are an efficient, accurate, and robust tool for the segmentation of 2D and 3D image data. In particular, geometric deformable models (GDM) that represent an active contour as the level set of an implicit function have proven to be very effective. GDMs, however, do not provide any topology control, i.e. contours may merge or split arbitrarily and hence change the genus of the reconstructed surface. This behavior is inadequate in settings like the segmentation of organic tissue or other objects whose genus is known beforehand. In this paper we describe a novel method to overcome this limitation while still preserving the favorable properties of the GDM setup. We achieve this by adding (sparse) topological information to the volume representation at locations where it is necessary to locally resolve topological ambiguities. Since the sparse topology information is attached to the edges of the voxel grid, we can reconstruct the interfaces where the deformable surface touches itself at sub-voxel accuracy. We also demonstrate the efficiency and robustness of our method on synthetic as well as on real (MRI) scan data.




Multiresolution Surface Representation Based on Displacement Volumes


Mario Botsch, Leif Kobbelt
Computer Graphics Forum 22(3) (Eurographics 2003 Proceedings), 483-491
pubimg

We propose a new representation for multiresolution models which uses volume elements enclosed between the different resolution levels to encode the detail information. Keeping these displacement volumes locally constant during a deformation of the base surface leads to a natural behaviour of the detail features. The corresponding reconstruction operator can be implemented efficiently by a hierarchical iterative relaxation scheme, providing close to interactive response times for moderately complex models. Based on this representation we implement a multiresolution editing tool for irregular polygon meshes that allows the designer to freely edit the base surface of a multiresolution model without having to care about self-intersections in the respective detailed surface. We demonstrate the effectiveness and robustness of the reconstruction by several examples with real-world data.




Piecewise Linear Approximation of Signed Distance Fields


Jianhua Wu, Leif Kobbelt
Vision, Modeling and Visualization 2003 Proceedings, 513-520
pubimg

The signed distance field of a surface can effectively support many geometry processing tasks such as decimation, smoothing, and Boolean operations since it provides efficient access to distance (error) estimates. In this paper we present an algorithm to compute a piecewise linear, not necessarily continuous approximation of the signed distance field for a given object. Our approach is based on an adaptive hierarchical space partition that stores a linear distance function in every leaf node. We provide positive and negative criteria for selecting the splitting planes. Consequently the algorithm adapts the leaf cells of the space partition to the geometric shape of the underlying model better than previous methods. This results in a hierarchical representation with comparably low memory consumption and which allows for fast evaluation of the distance field function.
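As a sketch of how such a hierarchy can be queried, the fragment below stores one linear distance function n·p + d per leaf of an octree-like space partition; the octant descent and the class layout are illustrative assumptions, and the splitting criteria of the paper are not reproduced.

import numpy as np

class Leaf:
    # A leaf cell stores a linear distance function f(p) = n . p + d.
    def __init__(self, normal, offset):
        self.n, self.d = np.asarray(normal, float), float(offset)
    def distance(self, p):
        return float(np.dot(self.n, p) + self.d)

class Node:
    # An inner cell only routes the query to the child octant containing p.
    def __init__(self, center, children):
        self.center, self.children = np.asarray(center, float), children
    def distance(self, p):
        octant = (int(p[0] > self.center[0])
                  | int(p[1] > self.center[1]) << 1
                  | int(p[2] > self.center[2]) << 2)
        return self.children[octant].distance(p)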




High-Quality Point-Based Rendering on Modern GPUs


Mario Botsch, Leif Kobbelt
Pacific Graphics 2003 Proceedings, 335-343
pubimg

In recent years point-based rendering has been shown to offer the potential to outperform traditional triangle based rendering both in speed and visual quality when it comes to processing highly complex models. Existing surface splatting techniques achieve superior visual quality by proper filtering but they are still limited in rendering speed. On the other hand, the increasing availability and programmability of graphics hardware have led to the development of very efficient hardware-accelerated rendering methods. However, since no filtered splats are used, these approaches trade visual quality for rendering speed. In this paper we propose a rendering framework for point-based geometry providing high visual quality as well as efficient rendering. Our approach is based on a two-pass splatting technique with Gaussian filtering, resulting in a visual quality comparable to existing software rendering systems. Using programmable graphics hardware we delegate all expensive rendering tasks to the GPU, thereby minimizing data transfer and saving CPU resources. The proposed system renders up to 28M mid-quality or up to 10M high-quality surface splats per second on the latest graphics hardware.




Direct computation of a control vertex position on any subdivision level


Loïc Barthe, Leif Kobbelt
IMA Conference on the Mathematics of Surfaces 2003: 40-47
pubimg

In this paper, we present general closed form equations for directly computing the position of a vertex at different subdivision levels for both triangular and quadrilateral meshes. These results are obtained using simple computations and they lead to very useful applications, especially for adaptive subdivision. We illustrate our method on Loop's and Catmull-Clark's subdivision schemes.




A Stream Algorithm for the Decimation of Massive Meshes


Jianhua Wu, Leif Kobbelt
Graphics Interface 2003 Proceedings, 185-192
pubimg

We present an out-of-core mesh decimation algorithm that is able to handle input and output meshes of arbitrary size. The algorithm reads the input from a data stream in a single pass and writes the output to another stream while using only a fixed-sized in-core buffer. By applying randomized multiple choice optimization, we are able to use incremental mesh decimation based on edge collapses and the quadric error metric. The quality of our results is comparable to state-of-the-art high-quality mesh decimation schemes (which are slower than our algorithm) and the decimation performance matches the performance of the most efficient out-of-core techniques (which generate meshes of inferior quality).




Freeform Shape Representations for Efficient Geometry Processing


Leif Kobbelt, Mario Botsch
Invited Paper at Shape Modeling International 2003, pp. 111-118

The most important concepts for the handling and storage of freeform shapes in geometry processing applications are parametric representations and volumetric representations. Both have their specific advantages and drawbacks. While the algebraic complexity of volumetric representations S = {(x, y, z) | f(x, y, z) = 0} is independent from the shape complexity, the domain Ω of a parametric representation f : Ω → S usually has to have the same structure as the surface S itself (which sometimes makes it necessary to update the domain when the surface is modified). On the other hand, the topology of a parametrically defined surface can be controlled explicitly while in a volumetric representation, the surface topology can change accidentally during deformation. A volumetric representation reduces distance queries or inside/outside tests to mere function evaluations but the geodesic neighborhood relation between surface points is difficult to resolve. As a consequence, it seems promising to combine parametric and volumetric representations to effectively exploit both advantages.
In this talk, a number of applications are presented and discussed where such a combination leads to efficient and numerically stable algorithms for the solution of various geometry processing tasks. These applications include surface remeshing, mesh fairing, global error control for mesh decimation and smoothing, and topology control for level set surfaces.




Parameter Reduction and Automatic Generation of Active Shape Models


David Liersch, Abhijit Sovakar, Leif Kobbelt
Workshop Bildverarbeitung für die Medizin, 2003
pubimg

In this paper we propose an alternative method to build Active Shape Models. It avoids the use of explicit landmarks since it represents shapes by normal displacements relative to an average (domain) contour. By this we reduce the redundancy of the model and consequently the number of parameters in our representation. The resulting models have a significantly lower algebraic complexity compared to those based on landmarks. Additionally we show how to automate the generation of ASMs from sets of unprocessed training contours in arbitrary representation.




Efficient high quality rendering of point sampled geometry


Mario Botsch, Andreas Wiratanaya, Leif Kobbelt
Eurographics Workshop on Rendering
pubimg

We propose a highly efficient hierarchical representation for point sampled geometry that automatically balances sampling density and point coordinate quantization. The representation is very compact with a memory consumption of far less than 2 bits per point position which does not depend on the quantization precision. We present an efficient rendering algorithm that exploits the hierarchical structure of the representation to perform fast 3D transformations and shading. The algorithm is extended to surface splatting which yields high quality anti-aliased and watertight surface renderings. Our pure software implementation renders up to 14 million Phong shaded and textured samples per second and about 4 million anti-aliased surface splats on a commodity PC. This is more than a factor of 10 faster than previous algorithms.




Fast Mesh Decimation by Multiple-Choice Techniques


Jianhua Wu, Leif Kobbelt
Vision, Modeling, Visualization 2002 Proceedings, 241-248
pubimg

We present a new mesh decimation framework which is based on the probabilistic optimization technique of Multiple-Choice algorithms. While producing the same expected quality of the output meshes, the Multiple-Choice approach leads to a significant speed-up compared to the well-established standard framework for mesh decimation as a greedy optimization scheme. Moreover, Multiple-Choice decimation does not require a global priority queue data structure which reduces the memory overhead and simplifies the algorithmic structure. We explain why and how the Multiple-Choice optimization works well for the mesh decimation problem and give a detailed CPU profile analysis to explain where the speed-up comes from.
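The core of the multiple-choice strategy fits into a few lines. The sketch below assumes a hypothetical mesh object offering face_count(), random_edge(), is_collapse_legal(), collapse_cost() (e.g. a quadric error) and collapse(); it is meant only to contrast the k-candidate selection with a global priority queue, not to reproduce the paper's implementation.

def multiple_choice_decimate(mesh, target_faces, k=8):
    # Instead of keeping all possible collapses in a priority queue, draw k
    # random candidates per step and apply the cheapest legal one.
    while mesh.face_count() > target_faces:
        candidates = [mesh.random_edge() for _ in range(k)]
        candidates = [e for e in candidates if mesh.is_collapse_legal(e)]
        if not candidates:
            continue
        mesh.collapse(min(candidates, key=mesh.collapse_cost))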




Efficient Simplification of Point-Sampled Surfaces


Mark Pauly, Markus Gross, Leif Kobbelt
IEEE Visualization 2002
pubimg

In this paper we introduce, analyze and quantitatively compare a number of surface simplification methods for point-sampled geometry. We have implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. All these methods work directly on the point cloud, requiring no intermediate tessellation. We show how local variation estimation and quadric error metrics can be employed to diminish the approximation error and concentrate more samples in regions of high curvature. To compare the quality of the simplified surfaces, we have designed a new method for computing numerical and visual error estimates for point-sampled surfaces. Our algorithms are fast, easy to implement, and create high-quality surface approximations, clearly demonstrating the effectiveness of point-based surface simplification.
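A typical ingredient of such methods is a local variation estimate derived from the covariance of a point neighborhood. The sketch below follows this general idea (smallest eigenvalue over total variance); the neighborhood indices are assumed to be precomputed, e.g. by a k-nearest-neighbor query, and the exact estimator of the paper may differ.

import numpy as np

def surface_variation(points, neighborhood):
    # Ratio of the smallest covariance eigenvalue to the total variance of the
    # local neighborhood: close to 0 on flat regions, larger in highly curved
    # regions, so simplification can keep more samples where the value is large.
    nbh = np.asarray(points, float)[neighborhood]
    d = nbh - nbh.mean(axis=0)
    lam = np.linalg.eigvalsh(d.T @ d)      # eigenvalues in ascending order
    return lam[0] / lam.sum()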




Isosurface Reconstruction with Topology Control


Stephan Bischoff, Leif Kobbelt
Pacific Graphics 2002 Proceedings, 246-255
pubimg

Extracting isosurfaces from volumetric datasets is an essential step for indirect volume rendering algorithms. For physically measured data, as used e.g. in medical imaging applications, one often encounters topological errors such as small handles that stem from measurement inaccuracy and cavities that are generated by tight folds of an organ. During isosurface extraction these measurement errors result in a surface whose genus is much higher than that of the actual surface. In many cases however, the topological type of the object under consideration is known beforehand, e.g., the cortex of a human brain is always homeomorphic to a sphere. By using topology preserving morphological operators we can exploit this knowledge to gradually dilate an initial set of voxels with correct topology until it fits the target isosurface. This approach avoids the formation of handles and cavities and guarantees a topologically correct reconstruction of the object's surface.




Streaming 3D Geometry Data Over Lossy Communication Channels


Stephan Bischoff, Leif Kobbelt
IEEE International Conference on Multimedia and Expo Proceedings, 2002
pubimg

In this paper we propose a progressive 3D geometry transmission technique that is robust with respect to data loss. In a preprocessing step we decompose a given polygon mesh model into a set of overlapping ellipsoids, representing the coarse shape of the model, and a stream of sample points, representing its fine detail. On the client-side, we derive a coarse approximation of the model from the ellipsoid decomposition and then re-insert the sample points to reconstruct the fine detail. The overlapping ellipsoids as well as the sample points represent independent pieces of geometric information, hence partial data loss can be tolerated by our reconstruction algorithm and will only lead to a gradual degradation of the reconstruction quality. We present a transmission scheme that is especially well-suited for geometry broadcasting where we exploit that the order of the sample points can be arbitrarily permuted.




Multiresolution techniques


Leif Kobbelt
Chapter for the Handbook of Computer Aided Geometric Design, G. Farin, J. Hoschek, M-S. Kim (eds.), Elsevier, 2002
pubimg

The term multiresolution techniques refers to a class of algorithms that decompose a given geometry into its global shape and detail information on different levels of resolution. The representation of an object on several levels of detail which are defined relative to each other gives rise to a number of applications that exploit the hierarchical nature of the representation. In this Chapter we explain the theoretical background of the multiresolution transform and show how the basic concepts can be generalized to arbitrary freeform surfaces.




Simplification and compression of 3D-meshes


Craig Gotsman, Stefan Gumhold, Leif Kobbelt
Tutorials on multiresolution in geometric modeling, A. Iske, E. Quak, M. Floater (eds.), Springer, 2002
pubimg

We survey recent developments in compact representations of 3D mesh data. This includes: Methods to reduce the complexity of meshes by simplification, thereby reducing the number of vertices and faces in the mesh; Methods to resample the geometry in order to optimize the vertex distribution; Methods to compactly represent the connectivity data (the graph structure defined by the edges) of the mesh; Methods to compactly represent the geometry data (the vertex coordinates) of a mesh.




Ellipsoid decomposition of 3D-models


Stephan Bischoff, Leif Kobbelt
3DPVT proceedings, 480-488, 2002
pubimg

In this paper we present a simple technique to approximate the volume enclosed by a given triangle mesh with a set of overlapping ellipsoids. This type of geometry representation allows us to approximately reconstruct 3D-shapes from a very small amount of information being transmitted. The two central questions that we address are: how can we compute optimal fitting ellipsoids that lie in the interior of a given triangle mesh and how do we select the most significant (least redundant) subset from a huge number of candidate ellipsoids. Our major motivation for computing ellipsoid decompositions is the robust transmission of geometric objects where the receiver can reconstruct the 3D-shape even if part of the data gets lost during transmission.




Towards Robust Broadcasting of Geometry Data


Stephan Bischoff, Leif Kobbelt
Computers & Graphics, 26(5), 665-675, 2002
pubimg

We present new algorithms for the robust transmission of geometric data sets, i.e. transmission which allows the receiver to recover (an approximation of) the original geometric object even if parts of the data get lost on the way. These algorithms can be considered as hinted point cloud triangulation schemes since the general manifold reconstruction problem is simplified by adding tags to the vertices and by providing a coarse base-mesh which determines the global surface topology. Robust transmission techniques exploit the geometric coherence of the data and do not require redundant transmission protocols on lower software layers. As an example application scenario we describe the teletext-like broadcasting of 3D models.




OpenMesh -- a generic and efficient polygon mesh data structure


Mario Botsch, Stephan Steinberg, Stephan Bischoff, Leif Kobbelt
OpenSG Symposium 2002
pubimg

We describe the implementation of a half-edge data structure for the static representation and dynamic handling of arbitrary polygonal meshes. The particular design of the data structures and classes aims at maximum flexibility and high performance. We achieve this by using generative programming concepts which allow the compiler to resolve most of the special case handling decisions at compile time. We evaluate our data structure based on prototypic implementations of mesh processing applications such as decimation and smoothing.
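The following minimal Python sketch illustrates the half-edge connectivity the paper builds on: one outgoing half-edge per vertex, next/opposite links, and one-ring circulation. It is not the actual OpenMesh C++ implementation; the field names are illustrative and the circulator assumes a closed mesh without boundaries.

from dataclasses import dataclass

@dataclass
class Halfedge:
    to_vertex: int    # vertex this half-edge points to
    face: int         # incident face (-1 on the boundary)
    next: int         # next half-edge within the same face
    opposite: int     # oppositely oriented twin half-edge

@dataclass
class Vertex:
    point: tuple      # 3D position
    halfedge: int     # one outgoing half-edge

def one_ring(vertices, halfedges, v):
    # Circulate around vertex v: opposite(h) points back to v, and its 'next'
    # link is the next outgoing half-edge of v.
    ring, start = [], vertices[v].halfedge
    h = start
    while True:
        ring.append(halfedges[h].to_vertex)
        h = halfedges[halfedges[h].opposite].next
        if h == start:
            break
    return ring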



An implementation is available in the Software section.



Feature Sensitive Surface Extraction from Volume Data


Leif Kobbelt, Mario Botsch, Ulrich Schwanecke, Hans-Peter Seidel
SIGGRAPH 2001 proceedings
pubimg

The representation of geometric objects based on volumetric data structures has advantages in many geometry processing applications that require, e.g., fast surface interrogation or boolean operations such as intersection and union. However, surface based algorithms like shape optimization (fairing) or freeform modeling often need a topological manifold representation where neighborhood information within the surface is explicitly available. Consequently, it is necessary to find effective conversion algorithms to generate explicit surface descriptions for the geometry which is implicitly defined by a volumetric data set. Since volume data is usually sampled on a regular grid with a given step width, we often observe severe alias artifacts at sharp features on the extracted surfaces. In this paper we present a new technique for surface extraction that performs feature sensitive sampling and thus reduces these alias effects while keeping the simple algorithmic structure of the standard Marching Cubes algorithm. We demonstrate the effectiveness of the new technique with a number of application examples ranging from CSG modeling and simulation to surface reconstruction and remeshing of polygonal models.
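The feature-sensitive sampling can be summarized by one least-squares problem per feature cell: an extra vertex is placed where the tangent planes sampled on the cell edges intersect best. The sketch below solves min_x Σ (n_i·(x − p_i))² with an SVD-based solver; it omits cell classification, fallbacks for degenerate configurations, and all Marching Cubes bookkeeping, so it should be read only as an illustration of the least-squares step.

import numpy as np

def feature_vertex(edge_points, edge_normals):
    # Least-squares intersection of the tangent planes (p_i, n_i): each row of
    # A is a normal, each entry of b is n_i . p_i.  The SVD-based solver keeps
    # the solution stable when the normals do not span all three directions.
    A = np.asarray(edge_normals, float)
    b = np.einsum('ij,ij->i', A, np.asarray(edge_points, float))
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x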



An implementation is available in the Software section.



Feature Sensitive Remeshing


Jens Vorsatz, Christian Rössl, Leif Kobbelt, Hans-Peter Seidel
Eurographics 2001 proceedings
pubimg

Remeshing artifacts are a fundamental problem when converting a given geometry into a triangle mesh. We propose a new remeshing technique that is sensitive to features. First, the resolution of the mesh is iteratively adapted by a global restructuring process which optimizes the connectivity. Then a particle system approach evenly distributes the vertices across the original geometry. To exactly find the features we extend the relaxation procedure by an effective mechanism to attract the vertices to feature edges. The attracting force is imposed by means of a hierarchical curvature field and does not require any thresholding parameters to classify the features.




Resampling Feature and Blend Regions in Polygonal Meshes for Surface Anti-Aliasing


Mario Botsch, Leif Kobbelt
Eurographics 2001 proceedings
pubimg

Efficient surface reconstruction and reverse engineering techniques are usually based on a polygonal mesh representation of the geometry: the resulting models emerge from piecewise linear interpolation of a set of sample points. The quality of the reconstruction not only depends on the number and density of the sample points but also on their alignment to sharp and rounded features of the original geometry. Bad alignment can lead to severe alias artifacts. In this paper we present a sampling pattern for feature and blend regions which minimizes these alias errors. We show how to improve the quality of a given polygonal mesh model by resampling its feature and blend regions within an interactive framework. We further demonstrate sophisticated modeling operations that can be implemented based on this resampling technique.




A Robust Procedure to Eliminate Degenerate Faces from Triangle Meshes


Mario Botsch, Leif Kobbelt
Vision, Modeling, Visualization 2001 proceedings
pubimg

When using triangle meshes in numerical simulations or other sophisticated downstream applications, we have to guarantee that no degenerate faces are present since they have, e.g., no well defined normal vectors. In this paper we present a simple but effective algorithm to remove such artifacts from a given triangle mesh. The central problem is to make this algorithm numerically robust because degenerate triangles are usually the source for all kinds of numerical instabilities. Our algorithm is based on a slicing technique that cuts a set of planes through the given polygonal model. The mesh slicing operator only uses numerically stable predicates and therefore is able to split faces in a controlled manner. In combination with a custom tailored mesh decimation scheme we are able to remove the degenerate faces from meshes like those typically generated by tessellation units in CAD systems.




Sqrt(3) subdivision


Leif Kobbelt
SIGGRAPH 2000 proceedings
pubimg

A new stationary subdivision scheme is presented which performs slower topological refinement than the usual dyadic split operation. The number of triangles increases in every step by a factor of 3 instead of 4. Applying the subdivision operator twice causes a uniform refinement with tri-section of every original edge (hence the name sqrt(3)-subdivision) while two dyadic splits would quad-sect every original edge. Besides the finer gradation of the hierarchy levels, the new scheme has several important properties: The stencils for the subdivision rules have minimum size and maximum symmetry. The smoothness of the limit surface is C2 everywhere except for the extraordinary points where it is C1. The convergence analysis of the scheme is presented based on a new general technique which also applies to the analysis of other subdivision schemes. The new splitting operation enables locally adaptive refinement under built-in preservation of the mesh consistency without temporary crack-fixing between neighboring faces from different refinement levels. The size of the surrounding mesh area which is affected by selective refinement is smaller than for the dyadic split operation. We further present a simple extension of the new subdivision scheme which makes it applicable to meshes with boundary and allows us to generate sharp feature lines.
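The two geometric rules of the scheme are easy to state; the sketch below inserts new vertices at triangle centroids and relaxes the old vertices with the valence-dependent weight α_n = (4 − 2 cos(2π/n))/9 given in the paper. The connectivity part of the operator (the subsequent flipping of all original edges) and the boundary rules are omitted here.

import math

def new_vertex(p0, p1, p2):
    # A new vertex is inserted at the centroid of every original triangle.
    return tuple((a + b + c) / 3.0 for a, b, c in zip(p0, p1, p2))

def relax_old_vertex(p, ring):
    # Old vertices are moved towards the average of their n one-ring neighbors.
    n = len(ring)
    alpha = (4.0 - 2.0 * math.cos(2.0 * math.pi / n)) / 9.0
    avg = [sum(q[i] for q in ring) / n for i in range(3)]
    return tuple((1.0 - alpha) * p[i] + alpha * avg[i] for i in range(3))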




Towards Hardware Implementation Of Loop Subdivision


Stephan Bischoff, Leif Kobbelt, Hans-Peter Seidel
Eurographics/SIGGRAPH Graphics Hardware Workshop 2000 proceedings
pubimg

We present a novel algorithm to evaluate and render Loop subdivision surfaces. The algorithm exploits the fact that Loop subdivision surfaces are piecewise polynomial and uses the forward difference technique for efficiently computing uniform samples on the limit surface. The main advantage of our algorithm is that it only requires a small and constant amount of memory that does not depend on the subdivision depth. The simple structure of the algorithm enables a scalable degree of hardware implementation. By low-level parallelization of the computations, we can reduce the critical computation costs to a theoretical minimum of about one float[3]-operation per triangle.




Geometric Modeling Based on Polygonal Meshes


Leif Kobbelt, Stephan Bischoff, Mario Botsch, Kolja Kähler, Christian Rössl, Robert Schneider, Jens Vorsatz
Eurographics 2000 tutorial
pubimg

While traditional computer aided design (CAD) is mainly based on piecewise polynomial surface representations, the recent advances in the efficient handling of polygonal meshes have made available a set of powerful techniques which enable sophisticated modeling operations on freeform shapes. In this tutorial we are going to give a detailed introduction into the various techniques that have been proposed over the last years. Those techniques address important issues such as surface generation from discrete samples (e.g. laser scans) or from control meshes (ab initio design); complexity control by adjusting the level of detail of a given 3D-model to the current application or to the available hardware resources; advanced mesh optimization techniques that are based on the numerical simulation of physical material (e.g. membranes or thin plates) and finally the generation and modification of hierarchical representations which enable sophisticated multiresolution modeling functionality.




An interactive approach to point cloud triangulation


Leif Kobbelt, Mario Botsch
Eurographics 2000 proceedings
pubimg

We present an interactive system for the generation of high quality triangle meshes that allows us to handle hybrid geometry (point clouds, polygons,...) as input data. In order to be able to robustly process huge data sets, we exploit graphics hardware features like the raster manager and the z-buffer for specific sub-tasks in the overall procedure. By this we significantly accelerate the stitching of mesh patches and obtain an algorithm for sub-sampling the data points in linear time. The target resolution and the triangle alignment in sub-regions of the resulting mesh can be controlled by adjusting the screen resolution and viewing transformation. An intuitive user interface provides a flexible tool for application dependent optimization of the mesh.




Multiresolution shape deformations for meshes with dynamic vertex connectivity


Leif Kobbelt, Thilo Bareuther, Hans-Peter Seidel
Computer Graphics Forum 19 (2000), Eurographics '00 issue, pp. C249-C260
pubimg

Multiresolution shape representation is a very effective way to decompose surface geometry into several levels of detail. Geometric modeling with such representations enables flexible modifications of the global shape while preserving the detail information. Many schemes for modeling with multiresolution decompositions based on splines, polygonal meshes and subdivision surfaces have been proposed recently. In this paper we modify the classical concept of multiresolution representation by no longer requiring a global hierarchical structure that links the different levels of detail. Instead we represent the detail information implicitly by the geometric difference between independent meshes. The detail function is evaluated by shooting rays in normal direction from one surface to the other without assuming a consistent tessellation. In the context of multiresolution shape deformation, we propose a dynamic mesh representation which adapts the connectivity during the modification in order to maintain a prescribed mesh quality. Combining the two techniques leads to an efficient mechanism which enables extreme deformations of the global shape while preventing the mesh from degenerating. During the deformation, the detail is reconstructed in a natural and robust way. The key to the intuitive detail preservation is a transformation map which associates points on the original and the modified geometry with minimum distortion. We show several examples which demonstrate the effectiveness and robustness of our approach including the editing of multiresolution models and models with texture.




Geometric Fairing of Irregular Meshes for Free-Form Surface Design


Robert Schneider, Leif Kobbelt
Computer Aided Geometric Design
pubimg

In this paper we present a new algorithm for smoothing arbitrary triangle meshes while satisfying G^1 boundary conditions. The algorithm is based on solving a non-linear fourth order partial differential equation (PDE) that only depends on intrinsic surface properties instead of being derived from a particular surface parameterization. This continuous PDE has a (representation-independent) well-defined solution which we approximate by our triangle mesh. Hence, changing the mesh complexity (refinement) or the mesh connectivity (remeshing) leads to just another discretization of the same smooth surface and doesn't affect the resulting geometric shape beyond this. This is typically not true for filter-based mesh smoothing algorithms. To simplify the computation we factorize the fourth order PDE into a set of two nested second order problems thus avoiding the estimation of higher order derivatives. Further acceleration is achieved by applying multigrid techniques on a fine-to-coarse hierarchical mesh representation.




Line-art rendering of 3D-models


Christian Rössl, Leif Kobbelt
Pacific Graphics 2000 proceedings
pubimg

We present an interactive system for computer aided generation of line art drawings to illustrate 3D models that are given as triangulated surfaces. In a preprocessing step an enhanced 2D view of the scene is computed by sampling for every pixel the shading, the normal vectors and the principal directions obtained from discrete curvature analysis. Then streamlines are traced in the 2D direction fields and are used to define line strokes. In order to reduce noise artifacts the user may interactively select sparse reference lines and the system will automatically fill in additional strokes. By exploiting the special structure of the streamlines an intuitive and simple tone mapping algorithm can be derived to generate the final rendering.




Feature Sensitive Sampling for Interactive Remeshing


Mario Botsch, Christian Rössl, Leif Kobbelt
Vision, Modelling & Visualization 2000 proceedings, pp. 129-136
pubimg

We present a technique for remeshing irregular triangle meshes where the distribution and alignment can be adapted to the underlying geometry. Following the interactive virtual range scanner approach we overcome aliasing problems by introducing a special sampling technique. A sampling grid that can be aligned to the local features of the mesh is constructed interactively in an intuitive way and without adding significant overhead to the virtual scanning process.




Generating fair meshes with G¹ boundary conditions


Robert Schneider, Leif Kobbelt
Geometric Modeling and Processing Conference Proceedings, 2000
pubimg

In this paper we present a new algorithm to create fair discrete surfaces satisfying prescribed G1 boundary constraints. All surfaces are built by discretizing a partial differential equation based on pure geometric intrinsics. The construction scheme is designed to produce meshes that are partitioned into regular domains. Using this knowledge in advance we can develop a fast iterative algorithm resulting in surfaces of high aesthetic quality that have no local mean curvature extrema in the interior.




Extraction of feature lines on triangulated surfaces using morphological operators


Christian Rössl, Leif Kobbelt, Hans-Peter Seidel
Smart Graphics 2000, AAAI Spring Symposium, Stanford University
pubimg

Triangle meshes are a popular representation of surfaces in computer graphics. Our aim is to detect features on such surfaces. Feature regions distinguish themselves by high curvature. We use discrete curvature analysis on triangle meshes to obtain curvature values at every vertex of a mesh. These values are then thresholded, resulting in a so-called binary feature vector. By adapting morphological operators to triangle meshes, noise and artifacts can be removed from the feature regions. We introduce an operator that determines the skeleton of the feature region. This skeleton can then be converted into a graph representing the desired feature. Thereby a description of the surface's geometrical characteristics is constructed.
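On a binary feature vector the basic morphological operators reduce to simple one-ring tests, as in the sketch below (flags is a list of booleans per vertex, neighbors a precomputed one-ring adjacency list, both assumed to exist); the skeletonization operator of the paper is not reproduced here.

def dilate(flags, neighbors):
    # A vertex becomes a feature vertex if it or any one-ring neighbor is flagged.
    return [flags[v] or any(flags[u] for u in neighbors[v])
            for v in range(len(flags))]

def erode(flags, neighbors):
    # A vertex stays flagged only if its whole one-ring is flagged as well.
    return [flags[v] and all(flags[u] for u in neighbors[v])
            for v in range(len(flags))]

def opening(flags, neighbors):
    # Erosion followed by dilation removes isolated noise vertices.
    return dilate(erode(flags, neighbors), neighbors)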




Discrete Fairing and Variational Subdivision for Freeform Surface Design


Leif Kobbelt
The Visual Computer Journal, 2000
pubimg

The representation of free-form surfaces by sufficiently refined polygonal meshes has become common in many geometric modeling applications where complicated objects have to be handled. While working with triangle meshes is flexible and efficient, there are difficulties arising prominently from the lack of infinitesimal smoothness and the prohibitive complexity of highly detailed 3D-models. In this paper we discuss the generation of fair triangle meshes which are optimal with respect to some discretized curvature energy functional. The key issues are the proper definition of discrete curvature and the smoothing of high resolution meshes by solving a sparse linear system that characterizes the global minimum of an energy functional. Results and techniques from differential geometry, variational surface design (fairing), and numerical analysis are combined to find efficient and robust algorithms that generate smooth meshes of arbitrary topology which interpolate or approximate a given set of data points.
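A minimal sketch of the idea, assuming the uniform umbrella operator as the discrete Laplacian: driving its second application (a discrete bi-Laplacian) to zero at all free vertices approximates the minimum of a discretized thin-plate energy. The paper characterizes this minimum by a sparse linear system; here a simple damped iteration stands in for such a solver, and the step size 0.25 and the uniform weights are assumptions.

import numpy as np

def umbrella(points, neighbors, v):
    # Uniform discrete Laplacian: average of the one-ring minus the vertex itself.
    return points[neighbors[v]].mean(axis=0) - points[v]

def fair(points, neighbors, fixed, iterations=500, step=0.25):
    points = np.array(points, float)
    fixed = set(fixed)
    for _ in range(iterations):
        lap = np.array([umbrella(points, neighbors, v) for v in range(len(points))])
        for v in range(len(points)):
            if v in fixed:
                continue                     # interpolation constraints stay put
            bi = lap[neighbors[v]].mean(axis=0) - lap[v]   # umbrella of the umbrella
            points[v] -= step * bi
    return points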




Hierarchical solutions for the deformable surface problem in visualization


Christoph Lürig, Leif Kobbelt, Thomas Ertl
Graphical Models, Vol. 62, No. 1, January 2000, pp. 2-18
pubimg

In this paper we present a hierarchical approach for the deformable surface technique. This technique is a three dimensional extension of the snake segmentation method. We use it in the context of visualizing three dimensional scalar data sets. In contrast to classical indirect volume visualization methods, this reconstruction is not based on iso-values but on boundary information derived from discontinuities in the data. We propose a multilevel adaptive finite difference solver, which generates a target surface minimizing an energy functional based on an internal energy of the surface and an outer energy induced by the gradient of the volume. The method is attractive for preprocessing in numerical simulation or texture mapping. Red-green triangulation allows adaptive refinement of the mesh. Special considerations help to prevent self-interpenetration of the surfaces. We will also show some techniques that introduce the hierarchical aspect into the inhomogeneity of the partial differential equation. The approach proves to be appropriate for data sets that contain a collection of objects separated by distinct boundaries. These kinds of data sets often occur in medical and technical tomography, as we will demonstrate in a few examples.




A Shrink Wrapping Approach to Remeshing Polygonal Surfaces


Leif Kobbelt, Jens Vorsatz, Ulf Labsik, Hans-Peter Seidel
Computer Graphics Forum 18 (1999), Eurographics '99 issue, pp. C119 - C130
pubimg

Due to their simplicity and flexibility, polygonal meshes are about to become the standard representation for surface geometry in computer graphics applications. Some algorithms in the context of multiresolution representation and modeling can be performed much more efficiently and robustly if the underlying surface tessellations have the special subdivision connectivity. In this paper, we propose a new algorithm for converting a given unstructured triangle mesh into one having subdivision connectivity. The basic idea is to simulate the shrink wrapping process by adapting the deformable surface technique known from image processing. The resulting algorithm generates subdivision connectivity meshes whose base meshes only have a very small number of triangles. The iterative optimization process that distributes the mesh vertices over the given surface geometry guarantees low local distortion of the triangular faces. We show several examples and applications including the progressive transmission of subdivision surfaces.




Discrete fairing of curves and surfaces based on linear curvature distribution


Robert Schneider, Leif Kobbelt
Curve and Surface Design: Saint-Malo 1999, Laurent, Sablonniere, Schumaker (eds.), pp. 371-380
pubimg

In the planar case, one possibility to create a high quality curve that interpolates a given set of points is to use a clothoid spline, which is a curvature continuous curve with linear curvature segments. In the first part of the paper we develop an efficient fairing algorithm that calculates the discrete analogon of a closed clothoid spline. In the second part we show how this discrete linear curvature concept can be extended to create a fairing scheme for the construction of a triangle mesh that interpolates the vertices of a given closed polyhedron of arbitrary topology.




Multiresolution Hierarchies on Unstructured Triangle Meshes


Leif Kobbelt, Jens Vorsatz, Hans-Peter Seidel
Computational Geometry Journal: Theory and Applications 14, 1999, pp. 5-24
pubimg

The use of polygonal meshes for the representation of highly complex geometric objects has become the de facto standard in most computer graphics applications. Especially triangle meshes are preferred due to their algorithmic simplicity, numerical robustness, and efficient display. The possibility to decompose a given triangle mesh into a hierarchy of differently detailed approximations enables sophisticated modeling operations like the modification of the global shape under preservation of the detail features. So far, multiresolution hierarchies have been proposed mainly for meshes with subdivision connectivity. This type of connectivity results from iteratively applying a uniform split operator to an initially given coarse base mesh. In this paper we demonstrate how a similar hierarchical structure can be derived for arbitrary meshes with no restrictions on the connectivity. Since smooth (subdivision) basis functions are no longer available in this generalized context, we use constrained energy minimization to associate smooth geometry with coarse levels of detail. As the energy minimization requires one to solve a global sparse system, we investigate the effect of various parameters and boundary conditions in order to optimize the performance of iterative solving algorithms. Another crucial ingredient for an effective multiresolution decomposition of unstructured meshes is the flexible representation of detail information. We discuss several approaches.




Robust Multi-Band Detail Encoding for Triangular Meshes of Arbitrary Connectivity


Jens Vorsatz, Leif Kobbelt
Vision, Modeling, and Visualization (VMV) '99 proceedings, pp. 245 -252
pubimg

The flexibility coming along with the simplicity of their base primitive and the support by today's graphics hardware have made triangular meshes more and more popular for representing complex 3D objects. Due to the complexity of realistic datasets, a considerable amount of work has been spent during the last years to provide means for the modification of a given mesh by intuitive metaphors, i.e. large scale edits under preservation of the detail features. In this paper we demonstrate how a hierarchical structure of a mesh can be derived for arbitrary meshes to enable intuitive modifications without restrictions on the underlying connectivity, known from existing subdivision approaches. We combine mesh reduction algorithms and constrained energy minimization to decompose the given mesh into several frequency bands. Therefore, a new stabilizing technique to encode the geometric difference between the levels will be presented.




Approximation and Visualization of Discrete Curvature on Triangulated Surfaces


Christian Rössl, Leif Kobbelt
Vision, Modeling, and Visualization (VMV) '99 proceedings, pp. 339 - 346
pubimg

Triangle meshes are a facile and effective representation for many kinds of surfaces. In order to rate the quality of a surface, the calculation of geometric curvatures as they are defined for smooth surfaces is useful and necessary for a variety of applications. We investigate an approach to locally approximate the first and second fundamental forms at every (inner) vertex of a triangle mesh. We use locally isometric divided difference operators, where we compare two variants of parameterizations (tangent plane and exponential map) by testing on elementary analytic surfaces. We further describe a technique for visualizing the resulting curvature data. A simple median filter is used to effectively filter noise from the input data. According to application dependent requirements a global or a per-vertex local color coding can be provided. The user may interactively modify the color transfer function, enabling him or her to visually evaluate the quality of triangulated surfaces.




Real-time Exploration of Regular Volume Data by Adaptive Reconstruction of Iso-Surfaces


Rüdiger Westermann, Leif Kobbelt, Thomas Ertl
The Visual Computer 15 (1999) 2, pp. 100 - 111
pubimg

We propose an adaptive approach for the fast reconstruction of isosurfaces from regular volume data at arbitrary levels of detail. The algorithm has been designed to enable real-time navigation through complex structures while providing user-adjustable resolution levels. Since adaptive on-the-fly reconstruction and rendering is performed from a hierarchical octree representation of the volume data, the method does not depend on preprocessing with respect to a specific isovalue, thus the user can browse interactively through the set of all possible isosurfaces. Special attention is paid to the fixing of cracks in the surface where the adaptive reconstruction level changes and to the efficient estimation of the isosurface's curvature.




Interactive Multi-Resolution Modeling on Arbitrary Meshes


Leif Kobbelt, Swen Campagna, Jens Vorsatz, Hans-Peter Seidel
ACM SIGGRAPH '98 proceedings, 1998, pp. 105-114
pubimg

During the last years the concept of multi-resolution modeling has gained special attention in many fields of computer graphics and geometric modeling. In this paper we generalize powerful multiresolution techniques to arbitrary triangle meshes without requiring subdivision connectivity. Our major observation is that the hierarchy of nested spaces which is the structural core element of most multi-resolution algorithms can be replaced by the sequence of intermediate meshes emerging from the application of incremental mesh decimation. Performing such schemes with local frame coding of the detail coefficients already provides effective and efficient algorithms to extract multi-resolution information from unstructured meshes. In combination with discrete fairing techniques, i.e., the constrained minimization of discrete energy functionals, we obtain very fast mesh smoothing algorithms which are able to reduce noise from a geometrically specified frequency band in a multiresolution decomposition. Putting mesh hierarchies, local frame coding and multi-level smoothing together allows us to propose a flexible and intuitive paradigm for interactive detail-preserving mesh modification. We show examples generated by our mesh modeling tool implementation to demonstrate its functionality.




Ray Tracing of subdivision surfaces


Leif Kobbelt, K. Daubert, Hans-Peter Seidel
9th Eurographics Workshop on Rendering proceedings, 1998, pp. 69 - 80
pubimg

We present the necessary theory for the integration of subdivision surfaces into general purpose rendering systems. The most important functionality that has to be provided via an abstract geometry interface are the computation of surface points and normals as well as the ray intersection test. We demonstrate how to derive the corresponding formulas and how to construct tight bounding volumes for subdivision surfaces. We introduce envelope meshes which have the same topology as the control meshes but tightly circumscribe the limit surface. An efficient and simple algorithm is presented to trace a ray recursively through the forest of triangles emerging from adaptive refinement of an envelope mesh.




A Multiresolution Framework for Variational Subdivision


Leif Kobbelt, Peter Schröder
ACM Trans. on Graph. 17 (4), 1998, pp. 209-237
pubimg

Subdivision is a powerful paradigm for the generation of curves and surfaces. It is easy to implement, computationally efficient, and useful in a variety of applications because of its intimate connection with multiresolution analysis. An important task in computer graphics and geometric modeling is the construction of curves that interpolate a given set of points and minimize a fairness functional (variational design). In the context of subdivision, fairing leads to special schemes requiring the solution of a banded linear system at every subdivision step. We present several examples of such schemes including one that reproduces non-uniform interpolating cubic splines. Expressing the construction in terms of certain elementary operations, we are able to embed variational subdivision in the lifting framework, a powerful technique to construct wavelet filter banks given a subdivision scheme. This allows us to extend the traditional lifting scheme for FIR filters to a certain class of IIR filters. Consequently we show how to build variationally optimal curves and associated, stable wavelets in a straightforward fashion. The algorithms to perform the corresponding decomposition and reconstruction transformations are easy to implement and efficient enough for interactive applications.




Directed Edges - A Scalable Representation For Triangle Meshes


Swen Campagna, Leif Kobbelt, Hans-Peter Seidel
ACM Journal of Graphics Tools 3 (4), 1998, pp. 1-12
pubimg

In a broad range of computer graphics applications the representation of geometric shape is based on triangle meshes. General purpose data structures for polygonal meshes typically provide fast access to geometric objects (e.g. points) and topologic entities (e.g. neighborhood relation) but the memory requirements are rather high due to the many special configurations. In this paper we present a new data structure which is specifically designed for triangle meshes. The data structure makes it possible to trade memory for access time by either storing internal references explicitly or by locally reconstructing them on demand. The trade-off can be hidden from the programmer by an object-oriented API and automatically adapts to the available hardware resources or the complexity of the mesh (scalability).
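The key observation behind a directed-edge layout can be shown in a few lines: storing the three half-edges of triangle f at the consecutive indices 3f, 3f+1, 3f+2 turns the face and next/prev relations into pure index arithmetic. The sketch below is an illustrative reimplementation, not the original code.

def face(h):
    # Triangle that contains half-edge h.
    return h // 3

def next_halfedge(h):
    # Next half-edge within the same triangle.
    return 3 * (h // 3) + (h + 1) % 3

def prev_halfedge(h):
    # Previous half-edge within the same triangle.
    return 3 * (h // 3) + (h + 2) % 3

# Per half-edge only the target vertex and the opposite half-edge need explicit
# storage (e.g. two flat arrays to_vertex[h] and opposite[h]); dropping such
# arrays and reconstructing them on demand is what trades memory for access time.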




Enhancing Digital Documents by Including 3D-Models


Swen Campagna, Leif Kobbelt, Hans-Peter Seidel
Computer and Graphics Journal, Vol 22 (6), 1998, pp. 655 - 666
pubimg

Due to their simplicity, triangle meshes are used to represent geometric objects in many applications. Since the number of triangles often goes beyond the capabilities of computer graphics hardware and the transmission time of such data is often inappropriately high, a large variety of mesh simplification algorithms has been proposed in the last years. In this paper we identify major requirements for the practical usability of general purpose mesh reduction algorithms to enable the integration of triangle meshes into digital documents. The driving idea is to understand mesh reduction algorithms as a software extension to make more complex meshes accessible with limited hardware resources (regarding both transmission and display). We show how these requirements can be efficiently satisfied and discuss implementation aspects in detail. We present a mesh decimation scheme that fulfills these design goals and which has already been evaluated by several users from different application areas. We apply this algorithm to typical mesh data sets to demonstrate its performance.




Efficient Generation of Hierarchical Triangle Meshes


Swen Campagna, Leif Kobbelt, Hans-Peter Seidel
Proceedings of the Eighth IMA Conference on the Mathematics of Surfaces, Information Geometers, 1998, pp. 105-124


Density Estimation on Delaunay Triangulations


Leif Kobbelt, Marc Stamminger, Hans-Peter Seidel
IMDSP '98, IEEE conference proceedings, 1998, pp. 307-310
pubimg

Density Estimation is a very popular method to compute global illumination solutions in a complex virtual scene. After simulating the reflection paths of a large number of photons in the scene, the illumination at a surface point is approximated by estimating the density of photon hits in the point's surrounding. In this paper we describe a novel approach to compute such a density approximation based on Delaunay triangulation and mesh modification techniques.




Using the Discrete Fourier-Transform to Analyze the Convergence of Subdivision Schemes


Leif Kobbelt
Applied and Computational Harmonic Analysis 5 (1998), Academic Press, pp. 68 - 91
pubimg

While the continuous Fourier transform is a well-established standard tool for the analysis of subdivision schemes, we present a new technique based on the discrete Fourier transform instead. We first prove a very general convergence criterion for arbitrary interpolatory schemes, i.e., for non-stationary, globally supported or even non-linear schemes. Then we use the discrete Fourier transform as an algebraic tool to transform subdivision schemes into a form suitable for the analysis. This allows us to formulate simple and numerically stable sufficient criteria for the convergence of subdivision schemes of very general type. We analyze some example schemes to illustrate the resulting easy-to-apply criteria which merely require to numerically estimate the maximum of a smooth function on a compact interval.




Dreiecksbeziehungen


Swen Campagna, Leif Kobbelt
c't Zeitschrift für Computertechnik, 16 (1998) , pp. 174-179



A general framework for mesh decimation


Leif Kobbelt, Swen Campagna, Hans-Peter Seidel
Graphics Interface '98 Proceedings, 1998, pp. 43 - 50
pubimg

The decimation of highly detailed meshes has emerged as an important issue in many computer graphics related fields. A whole library of different algorithms has been proposed in the literature. By carefully investigating such algorithms, we can derive a generic structure for mesh reduction schemes which is analogous to a class of greedy-algorithms for heuristic optimization. Particular instances of this algorithmic template allow adaptation to specific target applications. We present a new mesh reduction algorithm which clearly reflects this meta scheme and efficiently generates decimated high quality meshes while observing global error bounds.




Fairing by Finite Difference Methods


Leif Kobbelt
Math. Meth in CAGD IV, M. Daehlen, T. Lyche, L. Schumaker (eds.), Vanderbilt University Press, 1998, pp. 279 - 286
pubimg

We propose an efficient and flexible scheme to fairly interpolate or approximate the vertices of a given triangular mesh. Instead of generating a piecewise polynomial representation, our output will be a refined mesh with vertices lying densely on a surface with minimum bending energy. To obtain those, we generalize the finite differences technique to parametric meshes. The use of local parameterizations (charts) makes it possible to cast the minimization of non-linear geometric functionals into solving a sparse linear system. Efficient multi-grid solvers can be applied which leads to fast algorithms that generate surfaces of high quality.




Variational Design with Parametric Meshes of Arbitrary Topology


Leif Kobbelt
Creating fair and shape preserving curves and surfaces, Teubner, 1998, pp. 189 - 198
pubimg

Many mathematical problems in geometric modeling are merely due to the difficulties of handling piecewise polynomial parameterizations of surfaces (e.g., smooth connection of patches, evaluation of geometric fairness measures). Dealing with polygonal meshes is mathematically much easier although infinitesimal smoothness can no longer be achieved. However, transferring the notion of fairness to the discrete setting of triangle meshes allows to develop very efficient algorithms for many specific tasks within the design process of high quality surfaces. The use of discrete meshes instead of continuous spline surfaces is tolerable in all applications where (on an intermediate stage) explicit parameterizations are not necessary. We explain the basic technique of discrete fairing and give a survey of possible applications of this approach.




Deformable Surfaces for Feature Based Indirect Volume Rendering


Christoph Lürig, Leif Kobbelt, Thomas Ertl
Computer Graphics International '98, IEEE Proceedings, 1998, pp. 752-760
pubimg

In this paper we present an indirect volume visualization method, based on the deformable surface model, which is a three dimensional extension of the snake segmentation method. In contrast to classical indirect volume visualization methods, this model is not based on iso-values but on boundary information. Physically speaking it simulates a combination of a thin plate and a rubber skin, that is influenced by forces implied by feature information extracted from the given data set. The approach proves to be appropriate for data sets that represent a collection of objects separated by distinct boundaries. These kinds of data sets often occur in medical and technical tomography, as we will demonstrate by a few examples. We propose a multilevel adaptive finite difference solver, which generates a target surface minimizing an energy functional based on an internal energy of the surface and an outer energy induced by the gradient of the volume. This functional tends to produce very regular triangular meshes compared to results of the marching cubes algorithm. It makes this method attractive for meshing in numerical simulation or texture mapping. Red-green triangulation allows an adaptive refinement of the mesh. Special considerations have been made to prevent self inter-penetration of the surfaces.




Using Subdivision on Hierarchical Data to Reconstruct Radiosity Distribution


Leif Kobbelt, Marc Stamminger, Hans-Peter Seidel
Computer Graphics Forum 16 (1997), Eurographics '97 issue, pp. 347-356
pubimg

Computing global illumination by finite element techniques usually generates a piecewise constant approximation of the radiosity distribution on surfaces. Directly displaying such scenes generates artefacts due to discretization errors. We propose to remedy this drawback by considering the piecewise constant output to be samples of a (piecewise) smooth function in object space and reconstruct this function by applying a binary subdivision scheme. We design custom-tailored subdivision schemes with quadratic precision for the efficient refinement of cell- or pixel-type data. The technique naturally allows to reconstruct functions from non-uniform samples which result from adaptive binary splitting of the original domain (quadtree). This type of output is produced, e.g., by hierarchical radiosity algorithms. The result of the subdivision process can be mapped as a texture on the respective surface patch which allows to exploit graphics hardware for considerably accelerating the display.




Iterative Mesh Generation for FE-Computations on Free Form Surfaces


Leif Kobbelt, Torsten Hesse, Hartmut Prautzsch, Karl Schweizerhof
Engineering Computations 14 (1997), MCB University Press, pp. 806-820
pubimg

We present an interpolatory subdivision scheme to generate adaptively refined quadrilateral meshes which approximate a smooth surface of arbitrary topology. The described method significantly differs from classical mesh generation techniques based on spline surfaces or implicit representations since no explicit description of the limit surface is used. Instead, simple affine combinations are applied to compute new vertices if a face of the net is split. These rules are designed to guarantee asymptotic smoothness, i.e., the sequence of refined nets converges to a smooth limit surface. Subdivision techniques are useful mainly in applications where a given quadrilateral net is a coarse approximation of a surface and points on a refined grid have to be estimated. To evaluate our approach, we show examples for FE-computations on surfaces generated by this algorithm.




Stable Evaluation of Box-Splines


Leif Kobbelt
Numerical Algorithms 14 (1997) 4, J.C. Baltzer, pp. 377-382
pubimg

The most elegant way to evaluate box-splines is by using their recursive definition. However, a straightforward implementation reveals numerical difficulties. A careful analysis of the algorithm allows a reformulation which overcomes these problems without losing efficiency. A concise vectorized MATLAB-implementation is given.




Discrete Fairing


Leif Kobbelt
Proceedings of the Seventh IMA Conference on the Mathematics of Surfaces, 1997, pp. 101-131
pubimg

We address the general problem of, given a triangular net of arbitrary topology in IR³, finding a refined net which contains the original vertices and yields an improved approximation of a smooth and fair interpolating surface. The (topological) mesh refinement is performed by uniform subdivision of the original triangles while the (geometric) position of the newly inserted vertices is determined by variational methods, i.e., by the minimization of a functional measuring a discrete approximation of bending energy. The major problem in this approach is to find an appropriate parameterization for the refined net's vertices such that second divided differences (derivatives) tightly approximate intrinsic curvatures. We prove the existence of a unique optimal solution for the minimization of discrete functionals that involve squared second order derivatives. Finally, we address the efficient computation of fair nets.




Robust and Efficient Evaluation of Functionals on Parametric Surfaces


Leif Kobbelt
Proceedings to the 13th ACM Symposium on Computational Geometry, ACM Press 1997, pp. 376-378



Interpolatory Subdivision on Open Quadrilateral Nets with Arbitrary Topology


Leif Kobbelt
Computer Graphics Forum 15 (1996), Eurographics '96 issue, pp. 409 - 420
pubimg

A simple interpolatory subdivision scheme for quadrilateral nets with arbitrary topology is presented which generates C1 surfaces in the limit. The scheme satisfies important requirements for practical applications in computer graphics and engineering. These requirements include the necessity to generate smooth surfaces with local creases and cusps. The scheme can be applied to open nets in which case it generates boundary curves that allow a C0-join of several subdivision patches. Due to the local support of the scheme, adaptive refinement strategies can be applied. We present a simple device to preserve the consistency of such adaptively refined nets.
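
In regular regions the scheme reduces to a tensor-product four-point rule. As a hedged sketch (the special rules for extraordinary vertices, boundaries, creases, and cusps are what the paper is actually about), the new point inserted on an edge of a regular row of control points can be computed like this:

def edge_point(p_prev, p0, p1, p_next):
    # Interpolatory four-point mask used along regular rows/columns:
    # the new vertex lies on the cubic interpolating the four neighbors.
    return tuple(9.0 / 16.0 * (a + b) - 1.0 / 16.0 * (c + d)
                 for a, b, c, d in zip(p0, p1, p_prev, p_next))

row = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0), (3.0, 1.0)]
print(edge_point(*row))   # new vertex between (1, 1) and (2, 0): (1.5, 0.5)

In regular regions the new face points follow by applying the same mask once more in the perpendicular direction; the old vertices are kept, which makes the scheme interpolatory and locally supported.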




A Variational Approach to Subdivision


Leif Kobbelt
Computer Aided Geometric Design 13 (1996), Elsevier, North-Holland, pp. 743-761
pubimg

In this paper a new class of interpolatory refinement schemes is presented which in every refinement step determine the new points by solving an optimization problem. In general, these schemes are global, i.e., every new point depends on all points of the polygon to be refined. By choosing appropriate quadratic functionals to be minimized iteratively during refinement, very efficient schemes producing limiting curves of high smoothness can be defined. The well known class of stationary interpolatory refinement schemes turns out to be a special case of these variational schemes.
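
A minimal numerical sketch of such a scheme, assuming the simplest functional (sum of squared second differences) on a closed polygon: the old points are kept, one new point per edge is obtained by solving the resulting linear least-squares problem, and refining the unit vectors exposes how every new point depends on all old points (the global mask).

import numpy as np

def variational_refine(p):
    # Interpolatory refinement of a closed polygon: old points are kept,
    # new points (one per edge) minimize the sum of squared second
    # differences of the refined polygon.
    p = np.asarray(p, dtype=float)
    m = 2 * len(p)
    D = np.zeros((m, m))
    for j in range(m):                      # periodic second differences
        D[j, (j - 1) % m] += 1.0
        D[j, j] -= 2.0
        D[j, (j + 1) % m] += 1.0
    fixed, free = np.arange(0, m, 2), np.arange(1, m, 2)
    A, B = D[:, free], D[:, fixed]
    q = np.empty((m,) + p.shape[1:])
    q[fixed] = p
    q[free] = np.linalg.lstsq(A, -B @ p, rcond=None)[0]
    return q

# Influence of each old point on the first new point (its global mask):
n = 8
print(np.array([variational_refine(np.eye(n)[k])[1] for k in range(n)]))

Higher-order difference functionals lead to limit curves of higher smoothness, as discussed in the paper.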




Interpolatory Refinement is Low Pass Filtering


Leif Kobbelt
Math. Meth in CAGD III, M. Daehlen, T. Lyche, L. Schumaker (eds.), Vanderbilt University Press, 1995, pp. 281 - 290

A new technique to analyse the convergence behavior of interpolatory refinement schemes is presented. The refinement schemes are considered as discrete low-pass filters, and the convergence analysis is done in the frequency domain. The class of subdivision schemes covered by this approach includes the stationary subdivision schemes but also a very general class of refinement schemes with global dependence of the new points on the old ones. The filter formalism can be used for both the analysis and the construction of refinement schemes.
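
As a small illustration of the filter view, assuming the stationary four-point scheme as the example (the paper covers far more general, possibly global operators): refinement acts like zero-insertion followed by filtering with the scheme's mask, and much of the convergence analysis can be read off the symbol a(omega) = sum_k a_k e^{-i k omega}.

import numpy as np

# Mask of the interpolatory four-point scheme, seen as a discrete filter
# applied after inserting zeros between the old points (upsampling by 2).
mask = np.array([-1.0, 0.0, 9.0, 16.0, 9.0, 0.0, -1.0]) / 16.0

def symbol(mask, omega):
    # Frequency response a(omega) = sum_k mask[k] * exp(-1j * k * omega),
    # with the mask centered at k = 0.
    k = np.arange(len(mask)) - (len(mask) - 1) // 2
    return np.sum(mask * np.exp(-1j * np.outer(omega, k)), axis=1)

omega = np.linspace(0.0, np.pi, 5)
print(np.abs(symbol(mask, omega)))   # ~2 at omega = 0, exactly 0 at omega = pi: a low-pass filter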



Approximating the length of a spline by its control polygon


Leif Kobbelt, Hartmut Prautzsch
Math. Meth in CAGD III, M. Daehlen, T. Lyche, L. Schumaker (eds.), Vanderbilt University Press, 1995, pp. 291 - 292


Interpolatory refinement by variational methods


Leif Kobbelt
Approximation Theory VIII, Vol. 2: Wavelets and Multilevel Approximation, C. Chui, L. Schumaker (eds.), World Scientific Publishing Co., 1995, pp. 217 - 224


Using Simulated Annealing to Obtain Good Nodal Approximations of Deformable Bodies


O. Deussen, Leif Kobbelt, P. Tücke
Sixth Eurographics Workshop on Simulation and Animation: "Computer Animation and Simulation", Springer, 1995, pp. 30-43
pubimg

In this paper we present a method to obtain good approximations of deformable bodies with spring/mass systems. An iterative algorithm based on Voronoi diagrams is used to get a good mass distribution. The elastic properties of the system are optimized by simulated annealing. Results are shown, and some applications are discussed.
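
A generic sketch of the annealing component, assuming a user-supplied energy that measures how well the spring/mass system reproduces a reference deformation; the energy, the perturbation of spring stiffnesses, and the cooling schedule used in the paper are not reproduced here.

import math, random

def simulated_annealing(x0, energy, perturb, t0=1.0, cooling=0.995, steps=5000):
    # Generic simulated annealing: worse states are accepted with
    # probability exp(-dE / T), so the search can leave local minima
    # while the temperature T slowly decreases.
    x, e, t = x0, energy(x0), t0
    best, best_e = x, e
    for _ in range(steps):
        y = perturb(x)
        de = energy(y) - e
        if de < 0.0 or random.random() < math.exp(-de / t):
            x, e = y, e + de
        if e < best_e:
            best, best_e = x, e
        t *= cooling
    return best, best_e

# Hypothetical usage: 'stiffnesses' is a list of spring constants and
# 'deformation_error' compares the simulated response with the reference body.
# best_k, err = simulated_annealing(stiffnesses, deformation_error,
#                                   lambda k: [ki * random.uniform(0.9, 1.1) for ki in k])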




Convergence of subdivision and degree elevation


Hartmut Prautzsch, Leif Kobbelt
Adv. Comp. Math. 2 (1994), J.C. Baltzer, pp. 143-154

This paper presents a short, simple, and general proof showing that the control polygons generated by subdivision and degree elevation converge to the underlying splines, box-splines, or multivariate Bézier polynomials, respectively. The proof is based only on a Taylor expansion. The results are then carried over to rational curves and surfaces. Finally, an even shorter but equally simple proof is presented for the fact that subdivided Bézier polygons converge to the corresponding curve.
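
A small sketch of the degree-elevation half of the statement, assuming the standard elevation formula for Bézier control points: the represented curve is unchanged, and repeating the step drives the control polygon towards the curve (at the slow O(1/degree) rate the Taylor-expansion argument suggests).

import numpy as np

def degree_elevate(b):
    # Degree elevation of a Bezier control polygon: the represented curve
    # stays the same while the polygon gains one control point.
    b = np.asarray(b, dtype=float)
    n = len(b) - 1
    c = np.empty((n + 2,) + b.shape[1:])
    c[0], c[-1] = b[0], b[-1]
    for i in range(1, n + 1):
        w = i / (n + 1.0)
        c[i] = w * b[i - 1] + (1.0 - w) * b[i]
    return c

# The elevated polygons creep towards the curve; the middle control point
# approaches the curve point B(1/2) = (2.0, 0.375) for this example.
poly = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, -1.0], [4.0, 0.0]])
for _ in range(50):
    poly = degree_elevate(poly)
print(poly[len(poly) // 2])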




A Fast Dot-Product Algorithm with Minimal Rounding Errors


Leif Kobbelt
Computing 52 (1994), Springer-Verlag, pp. 355-369

We present a new algorithm which computes dot-products of arbitrary length with minimal rounding errors, independent of the number of addends. The algorithm has O(n) time and O(1) memory complexity and does not require extensions of the arithmetic kernel, i.e., it uses only standard floating-point operations. A slight modification yields an algorithm which computes the dot-product to machine precision. Due to its simplicity, the algorithm can easily be implemented in hardware.
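
Not the algorithm of the paper, but a minimal sketch of the general idea of compensated accumulation (here plain Kahan compensation of the running sum, which likewise needs only standard floating-point operations, O(n) time, and O(1) extra memory):

import math

def compensated_dot(x, y):
    # Dot product with Kahan-compensated accumulation: the rounding error
    # of each addition is captured in c and re-injected into the next term,
    # so the accumulated summation error stays small regardless of length.
    s, c = 0.0, 0.0
    for a, b in zip(x, y):
        term = a * b - c
        t = s + term
        c = (t - s) - term
        s = t
    return s

n = 1_000_000
x, y = [0.1] * n, [1.0] * n
exact = math.fsum(a * b for a, b in zip(x, y))          # exactly rounded reference
print(compensated_dot(x, y) - exact)                    # essentially zero
print(sum(a * b for a, b in zip(x, y)) - exact)         # noticeably larger drift

The algorithm in the paper is a different construction with stronger guarantees; this sketch only illustrates the compensation idea.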




Using Three Dimensional Hand-Gesture Recognition as a New 3D Input Technique


U. Bröckl-Fox, L. Kettner, A. Klingert, Leif Kobbelt
Artificial Life and Virtual Reality, N. and D. Thalmann (eds.), Wiley & Sons, 1994, pp. 173 - 187

