Welcome
The research and teaching activities at our institute focus on geometry acquisition and processing, on interactive visualization, and on related areas such as computer vision, photo-realistic image synthesis, and ultra-high-speed multimedia data transmission.
In our projects, we cooperate with various industrial partners as well as with academic research groups around the world. Results are published and presented at high-profile conferences and symposia. Additional funding sources include, among others, the Deutsche Forschungsgemeinschaft and the European Union.
News
• Aug. 5, 2024: We have a paper on retargeting visual data at ECCV 2024.
• May 23, 2023: We have a paper on greedily generating artwork using Bézier segments at VMV 2023.
• May 12, 2023: Our paper on surface maps received the Günter Enderle Best Paper Award at Eurographics 2023.
• Oct. 4, 2022: We have a paper on the interactive segmentation of textured point clouds at VMV 2022.
• July 6, 2022: Our paper on automatic differentiation received the Best Paper Award (1st place) at the Symposium on Geometry Processing 2022.
• June 13, 2022: We have a paper on fast and exact mesh Booleans at SIGGRAPH 2022.
Recent Publications
Choose Your Reference Frame Right: An Immersive Authoring Technique for Creating Reactive Behavior
Proceedings of the 30th ACM Symposium on Virtual Reality Software and Technology
Immersive authoring enables content creation for virtual environments without a break of immersion. To enable immersive authoring of reactive behavior for a broad audience, we present modulation mapping, a simplified visual programming technique. To evaluate the applicability of our technique, we investigate the role of reference frames in which the programming elements are positioned, as this can affect the user experience. Thus, we developed two interface layouts: "surround-referenced" and "object-referenced". The former positions the programming elements relative to the physical tracking space, and the latter relative to the virtual scene objects. We compared the layouts in an empirical user study (n = 34) and found the surround-referenced layout faster, lower in task load, less cluttered, easier to learn and use, and preferred by users. Qualitative feedback, however, revealed the object-referenced layout as more intuitive, engaging, and valuable for visual debugging. Based on the results, we propose initial design implications for immersive authoring of reactive behavior by visual programming. Overall, participants found modulation mapping to be an effective means for creating reactive behavior.
Honorable Mention for Best Paper!
Generalizing feature preservation in iso-surface extraction from triple dexel models
Computer-Aided Design
We present a method to resolve visual artifacts of a state-of-the-art iso-surface extraction algorithm by generating feature-preserving surface patches for isolated, arbitrarily complex single voxels without the need for further adaptive subdivision. In the literature, iso-surface extraction from a 3D voxel grid is limited to a single sharp feature per minimal unit, even for algorithms such as Cubical Marching Squares that produce feature-preserving surface reconstructions. In practice, though, multiple sharp features can meet in a single voxel. This is reflected in the triple dexel model, which is used in the simulation of CNC manufacturing processes. Our approach generalizes the use of normal information to perfectly preserve multiple sharp features for a single voxel, thus avoiding the visual artifacts caused by state-of-the-art procedures.
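As a rough illustration of how normal information constrains sharp features (a minimal sketch of the classic QEF-style vertex placement, not the algorithm from the paper above): a feature vertex inside a voxel can be placed by minimizing its squared distances to the tangent planes defined by the edge-intersection points and their normals. The function and toy data below are hypothetical; handling multiple sharp features within one voxel, as the paper does, would additionally require partitioning these constraints.

```python
# Toy sketch: place a feature vertex inside a voxel from point/normal
# constraints, in the spirit of QEF-based feature-preserving iso-surface
# extraction. Names and data are illustrative, not taken from the paper.
import numpy as np

def feature_vertex(points, normals):
    """Return the position x minimizing sum_i (n_i . (x - p_i))^2.

    points  -- (k, 3) iso-surface intersection points on the voxel edges
    normals -- (k, 3) unit surface normals at those points
    """
    A = np.asarray(normals, dtype=float)                            # row i is n_i
    b = np.einsum("ij,ij->i", A, np.asarray(points, dtype=float))   # n_i . p_i
    # Least-squares solve; under-determined directions (e.g. along a straight
    # edge) fall back to the minimum-norm solution.
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x

# Two planes meeting in a sharp edge parallel to the z-axis:
pts = [(0.5, 0.0, 0.2), (0.0, 0.5, 0.8)]
nrm = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
print(feature_vertex(pts, nrm))   # x and y snap to the edge at (0.5, 0.5, .)
```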
Retargeting Visual Data with Deformation Fields
18th European Conference on Computer Vision (ECCV 2024)
Seam carving is an image editing method that enables content-aware resizing, including operations like removing objects. However, the seam-finding strategy based on dynamic programming or graph-cut limits its applications to broader visual data formats and degrees of freedom for editing. Our observation is that describing the editing and retargeting of images more generally by a deformation field yields a generalisation of content-aware deformations. We propose to learn a deformation with a neural network that keeps the output plausible while trying to deform it only in places with low information content. This technique applies to different kinds of visual data, including images, 3D scenes given as neural radiance fields, or even polygon meshes. Experiments conducted on different visual data show that our method achieves better content-aware retargeting compared to previous methods.
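The core idea of the paper above (deform more where the information content is low) can be illustrated with a deliberately simplified, non-neural toy in 1D; the actual method optimizes a neural deformation field over images, radiance fields, and meshes. The function below is a hypothetical sketch under that simplification: it shrinks a 1D signal by allocating less width to flat, low-gradient regions.

```python
# Toy 1D illustration of content-aware retargeting via a deformation field
# (a simplified, non-neural sketch; not the method from the paper).
import numpy as np

def retarget_1d(signal, new_length, eps=0.05):
    """Shrink `signal` to `new_length` samples, squeezing mostly where the
    local information content (gradient magnitude) is low."""
    signal = np.asarray(signal, dtype=float)
    # Importance: high where the signal changes, low in flat regions.
    importance = np.abs(np.gradient(signal)) + eps
    # Per-sample width in the target domain: important samples keep more width.
    widths = importance / importance.sum() * new_length
    # Deformation field: cumulative widths map old positions to new ones.
    new_pos = np.concatenate([[0.0], np.cumsum(widths)])
    centers = 0.5 * (new_pos[:-1] + new_pos[1:])
    # Resample the warped signal on a regular grid of the target length.
    target_grid = np.arange(new_length) + 0.5
    return np.interp(target_grid, centers, signal)

# A mostly flat signal (low information) with a narrow bump (high information):
x = np.linspace(0.0, 1.0, 100)
sig = np.exp(-((x - 0.7) ** 2) / 0.002)
small = retarget_1d(sig, 60)
print(len(small), small.max())    # the bump survives nearly unchanged
```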