Welcome to the Computer Graphics Group at RWTH Aachen University!

The research and teaching activities at our institute focus on geometry acquisition and processing, on interactive visualization, and on related areas such as computer vision, photo-realistic image synthesis, and ultra-high-speed multimedia data transmission.

In our projects we cooperate with industry partners as well as with academic research groups around the world. Results are published and presented at high-profile conferences and symposia. Funding sources include, among others, the Deutsche Forschungsgemeinschaft and the European Union.


We have a paper on greedily generating artwork using Bézier segments at VMV 2023.

May 23, 2023

Our paper on surface maps received the Günter Enderle Best Paper Award at Eurographics 2023.

May 12, 2023

We have a paper on the interactive segmentation of textured point clouds at VMV 2022.

Oct. 4, 2022

Our paper on automatic differentiation received the best paper award (1st place) at the Symposium on Geometry Processing 2022.

July 6, 2022

We have a paper on fast and exact mesh Booleans at SIGGRAPH 2022.

June 13, 2022

In collaboration with CNIC Madrid we have a paper on the geometry of heart development in Nature Cardiovascular Research.

May 18, 2022

Recent Publications

Who Did What When? Discovering Complex Historical Interrelations in Immersive Virtual Reality

Conference: 2023 IEEE International Symposium on Mixed and Augmented Reality

Traditional digital tools for exploring historical data mostly rely on conventional 2D visualizations, which often cannot reveal all relevant interrelationships between historical fragments (e.g., persons or events). In this paper, we present a novel interactive exploration tool for historical data in VR, which represents fragments as spheres in a 3D environment and arranges them around the user based on their temporal, geographic, categorical, and semantic similarity. Quantitative and qualitative results from a user study with 29 participants revealed that most participants considered the virtual space and the abstract fragment representation well-suited to explore historical data and to discover complex interrelationships. These results were particularly underlined by high usability scores in terms of attractiveness, stimulation, and novelty, while researching historical facts with our system did not impose unexpectedly high task loads. Additionally, the insights from our post-study interviews provided valuable suggestions for future developments to further expand the possibilities of our system.
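The similarity-based arrangement described above can be illustrated with a minimal sketch. Note that the field names, kernel choices, and weights below are illustrative assumptions, not the paper's actual formulation:

```python
from math import exp, sqrt

def fragment_similarity(a, b, weights=(0.4, 0.3, 0.3)):
    """Combine temporal, geographic, and categorical similarity into one
    score in [0, 1]. Purely illustrative; the paper's actual measure
    (which also includes semantic similarity) is not reproduced here."""
    w_t, w_g, w_c = weights
    # Temporal: exponential falloff with distance in years.
    sim_t = exp(-abs(a["year"] - b["year"]) / 50.0)
    # Geographic: exponential falloff with Euclidean distance in degrees.
    dx, dy = a["lat"] - b["lat"], a["lon"] - b["lon"]
    sim_g = exp(-sqrt(dx * dx + dy * dy) / 10.0)
    # Categorical: Jaccard overlap of tag sets.
    tags_a, tags_b = set(a["tags"]), set(b["tags"])
    sim_c = len(tags_a & tags_b) / max(1, len(tags_a | tags_b))
    return w_t * sim_t + w_g * sim_g + w_c * sim_c

# Two hypothetical fragments: a person and an event at the same time and place.
napoleon = {"year": 1804, "lat": 48.9, "lon": 2.3, "tags": {"person", "france"}}
coronation = {"year": 1804, "lat": 48.9, "lon": 2.3, "tags": {"event", "france"}}
print(fragment_similarity(napoleon, coronation))
```

A score like this could then drive placement, with more similar fragments positioned closer to each other around the user.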

Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences

Structural and Compositional Learning on 3D Data (CVPR Workshop 2023)

Autoregressive models have proven to be very powerful in NLP text generation tasks and lately have gained popularity for image generation as well. However, they have seen limited use for the synthesis of 3D shapes so far. This is mainly due to the lack of a straightforward way to linearize 3D data as well as to scaling problems with the length of the resulting sequences when describing complex shapes. In this work we address both of these problems. We use octrees as a compact hierarchical shape representation that can be sequentialized by traversal ordering. Moreover, we introduce an adaptive compression scheme that significantly reduces sequence lengths and thus enables their effective generation with a transformer, while still allowing fully autoregressive sampling and parallel training. We demonstrate the performance of our model by performing super-resolution and comparing against the state-of-the-art in shape generation.
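The core idea of sequentializing an octree by traversal ordering can be sketched in a few lines. The token vocabulary and node representation below are simplifying assumptions; the paper's actual encoding and its adaptive compression scheme are not reproduced here:

```python
from collections import deque

class OctreeNode:
    """Minimal octree node: a leaf stores occupancy (0 or 1); an inner
    node holds a list of 8 children. Illustrative only."""
    def __init__(self, occupied=0, children=None):
        self.occupied = occupied
        self.children = children  # list of 8 OctreeNode, or None for a leaf

def linearize(root):
    """Breadth-first traversal with a fixed child ordering turns the tree
    into a flat token sequence that a transformer can consume."""
    tokens, queue = [], deque([root])
    while queue:
        node = queue.popleft()
        if node.children is None:
            tokens.append(node.occupied)  # leaf token: 0 (empty) or 1 (occupied)
        else:
            tokens.append(2)              # token 2 marks "subdivide"
            queue.extend(node.children)
    return tokens

# A root subdivided once: one occupied leaf, seven empty leaves.
leaves = [OctreeNode(occupied=1)] + [OctreeNode(occupied=0) for _ in range(7)]
root = OctreeNode(children=leaves)
print(linearize(root))  # → [2, 1, 0, 0, 0, 0, 0, 0, 0]
```

Because the traversal order is deterministic, the sequence can be decoded back into a tree, which is what makes autoregressive token-by-token generation of shapes possible.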

Localized Latent Updates for Fine-Tuning Vision-Language Models

Efficient Deep Learning for Computer Vision (CVPR Workshop 2023)

Although massive pre-trained vision-language models like CLIP show impressive generalization capabilities for many tasks, it often remains necessary to fine-tune them for improved performance on specific datasets. When doing so, it is desirable that updating the model is fast and that the model does not lose its capabilities on data outside of the dataset, as is often the case with classical fine-tuning approaches. In this work we suggest a lightweight adapter that only updates the model's predictions close to seen datapoints. We demonstrate the effectiveness and speed of this relatively simple approach in the context of few-shot learning, where our results both on classes seen and unseen during training are comparable with or improve on the state of the art.
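The idea of updating predictions only near seen datapoints can be sketched with a similarity-gated correction. The kernel, gating, and function names here are assumptions for illustration, not the paper's exact adapter:

```python
import numpy as np

def localized_update(query, seen_feats, seen_labels, base_logits, tau=5.0):
    """Blend the frozen model's logits toward the labels of nearby seen
    datapoints; far from all of them, the output stays close to the base
    prediction. Illustrative sketch, not the paper's actual method."""
    # Cosine similarity between the query and each cached feature.
    q = query / np.linalg.norm(query)
    f = seen_feats / np.linalg.norm(seen_feats, axis=1, keepdims=True)
    sims = f @ q                              # (n_seen,)
    weights = np.exp(tau * (sims - 1.0))      # ~1 when identical, ~0 when far
    # Soft votes from the seen labels.
    n_cls = base_logits.shape[0]
    onehot = np.eye(n_cls)[seen_labels]       # (n_seen, n_cls)
    correction = weights @ onehot
    # Gate by proximity to the nearest seen datapoint.
    gate = weights.max()
    return (1 - gate) * base_logits + gate * correction

# Two seen datapoints with labels 0 and 1; three classes overall.
feats = np.array([[1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 1])
base = np.array([0.2, 0.1, 0.7])
print(localized_update(np.array([1.0, 0.0]), feats, labels, base))
```

A query identical to a seen datapoint is pulled strongly toward that datapoint's label, while a query far from every seen datapoint keeps essentially the base model's prediction, which is the locality property the abstract describes.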

Disclaimer Home Visual Computing Institute RWTH Aachen University