
Profile



Gregor Kobsik, M.Sc.
Email: kobsik@cs.rwth-aachen.de

I work on various topics at the intersection of Geometric Modeling, Shape Analysis, and Machine Learning, with a focus on Deep Shape Representations for Shape Analysis, Modeling, and Reconstruction. My goal is to utilize unsupervised or self-supervised deep learning methods to gain more insight into the composition and nature of geometric objects.

Currently, I am researching deep neural networks for Geometry Abstraction. Furthermore, I am supervising multiple theses on methods for 3D Shape Representation and Generation, Partial Symmetry Detection, and 3D Structure and Relationship Detection.

Thesis Supervision:

  • Geometry Abstraction using Pre-Trained Image Segmentation Models (Bachelor ongoing)
  • Evaluation of Partial and Approximate Symmetry Detection for 3D Geometry (Bachelor 2024)
  • Regressing the Attention Matrix to Learn Similarity Relationships in 3D Models (Bachelor 2024)
  • Efficient Octree Shape Generation with RWKV: A Linear Scaling Transformer Alternative (Bachelor 2023)
  • An Unsupervised Deep Neural Network for 3D Shape Cuboid Abstraction, Segmentation and Partial Symmetry Detection (Bachelor 2023)
  • Learning 3D Shape Generation with Octree Value Quantized Deep Implicit Functions Transformer (Bachelor 2023)
  • Clustering 3D Models with DeepDPM (Bachelor 2022)

Teaching:

  • Data Analysis and Visualization (WS 2024)
  • Seminar: Various Topics in 3D Deep Learning (SS 2024)
  • Data Analysis and Visualization (WS 2023)
  • Shape Analysis and 3D Deep Learning (SS 2023)
  • Seminar: Various Topics in 3D Deep Learning (WS 2022)
  • Datenstrukturen und Algorithmen (SS 2022)


Publications


Multidimensional Byte Pair Encoding: Shortened Sequences for Improved Visual Data Generation


Tim Elsner, Paula Usinger, Julius Nehring-Wirxel, Gregor Kobsik, Victor Czech, Yanjiang He, Isaak Lim, Leif Kobbelt
International Conference on Computer Vision, ICCV 2025

In language processing, transformers benefit greatly from text being condensed. This is achieved through a larger vocabulary that captures word fragments instead of plain characters. This is often done with Byte Pair Encoding. In the context of images, tokenisation of visual data is usually limited to regular grids obtained from quantisation methods, without global content awareness. Our work improves tokenisation of visual data by bringing Byte Pair Encoding from 1D to multiple dimensions, as a complementary add-on to existing compression. We achieve this through counting constellations of token pairs and replacing the most frequent token pair with a newly introduced token. The multidimensionality only increases the computation time by a factor of 2 for images, making it applicable even to large datasets like ImageNet within minutes on consumer hardware. This is a lossless preprocessing step. Our evaluation shows improved training and inference performance of transformers on visual data achieved by compressing frequent constellations of tokens: The resulting sequences are shorter, with more uniformly distributed information content, e.g. condensing empty regions in an image into single tokens. As our experiments show, these condensed sequences are easier to process. We additionally introduce a strategy to amplify this compression further by clustering the vocabulary.
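
To make the merge step concrete, here is a minimal illustrative sketch of one round of 2D Byte Pair Encoding in Python. It is not the paper's implementation: the list-of-lists grid, the EMPTY sentinel, and the greedy non-overlapping scan in count_pairs/merge_once are simplifying assumptions, and the paper's counting and compression are considerably more efficient.

    from collections import Counter

    EMPTY = -1  # sentinel for a grid cell absorbed by an earlier merge

    def count_pairs(grid):
        """Count horizontal and vertical constellations of adjacent token pairs."""
        h, w = len(grid), len(grid[0])
        counts = Counter()
        for y in range(h):
            for x in range(w):
                if grid[y][x] == EMPTY:
                    continue
                if x + 1 < w and grid[y][x + 1] != EMPTY:
                    counts[(grid[y][x], grid[y][x + 1], 'h')] += 1
                if y + 1 < h and grid[y + 1][x] != EMPTY:
                    counts[(grid[y][x], grid[y + 1][x], 'v')] += 1
        return counts

    def merge_once(grid, new_token):
        """Replace the most frequent adjacent pair with new_token (greedy, non-overlapping)."""
        counts = count_pairs(grid)
        if not counts:
            return None
        (a, b, d), _ = counts.most_common(1)[0]
        dy, dx = (0, 1) if d == 'h' else (1, 0)
        h, w = len(grid), len(grid[0])
        for y in range(h):
            for x in range(w):
                ny, nx = y + dy, x + dx
                if ny < h and nx < w and grid[y][x] == a and grid[ny][nx] == b:
                    grid[y][x] = new_token  # merged token keeps the first cell
                    grid[ny][nx] = EMPTY    # second cell is absorbed
        return (a, b, d)

Iterating merge_once with fresh token ids grows the vocabulary while shortening the effective sequence, e.g. collapsing a uniform background region into few tokens.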




Quantised Global Autoencoder: A Holistic Approach to Representing Visual Data


Tim Elsner, Paula Usinger, Victor Czech, Gregor Kobsik, Yanjiang He, Isaak Lim, Leif Kobbelt
30th International Symposium on Vision, Modeling, and Visualization (VMV 2025)

In quantised autoencoders, images are usually split into local patches, each encoded by one token. This representation is redundant in the sense that the same number of tokens is spent per region, regardless of the visual information content in that region. Adaptive discretisation schemes like quadtrees are applied to allocate tokens for patches with varying sizes, but this just varies the region of influence for a token which nevertheless remains a local descriptor. Modern architectures add an attention mechanism to the autoencoder which infuses some degree of global information into the local tokens. Despite the global context, tokens are still associated with a local image region. In contrast, our method is inspired by spectral decompositions which transform an input signal into a superposition of global frequencies. Taking the data-driven perspective, we learn custom basis functions corresponding to the codebook entries in our VQ-VAE setup. Furthermore, a decoder combines these basis functions in a non-linear fashion, going beyond the simple linear superposition of spectral decompositions. We can achieve this global description with an efficient transpose operation between features and channels and demonstrate our performance on compression.
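
As an illustration of the transpose idea, the following PyTorch sketch quantises one global descriptor per channel instead of one local descriptor per spatial position. It is an assumption-laden sketch, not the paper's architecture: the module name GlobalVQ, the codebook size, the fixed grid dimensions, and the plain straight-through estimator are all illustrative choices.

    import torch
    import torch.nn as nn

    class GlobalVQ(nn.Module):
        def __init__(self, num_codes=512, h=16, w=16):
            super().__init__()
            # each codebook entry is a learned global "basis function" over the full grid
            self.codebook = nn.Embedding(num_codes, h * w)

        def forward(self, feats):                    # feats: (B, C, H, W) from some encoder
            b, c, h, w = feats.shape
            z = feats.flatten(2)                     # transpose view: one global vector per channel, (B, C, H*W)
            codes = self.codebook.weight[None].expand(b, -1, -1)
            idx = torch.cdist(z, codes).argmin(-1)   # (B, C): one global token per channel
            zq = self.codebook(idx)                  # quantised global descriptors, (B, C, H*W)
            zq = z + (zq - z).detach()               # straight-through gradients for the encoder
            return zq.view(b, c, h, w), idx

A non-linear decoder then maps the quantised channels back to pixels, going beyond the linear superposition of a classical spectral decomposition.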



BibTeX:

@inproceedings{10.2312:vmv.20251231,
booktitle = {Vision, Modeling, and Visualization},
editor = {Egger, Bernhard and Günther, Tobias},
title = {{Quantised Global Autoencoder: A Holistic Approach to Representing Visual Data}},
author = {Elsner, Tim and Usinger, Paula and Czech, Victor and Kobsik, Gregor and He, Yanjiang and Lim, Isaak and Kobbelt, Leif},
year = {2025},
publisher = {The Eurographics Association},
ISBN = {978-3-03868-294-3},
DOI = {10.2312/vmv.20251231}
}





Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences


Moritz Ibing, Gregor Kobsik, Leif Kobbelt
Structural and Compositional Learning on 3D Data (CVPR Workshop 2023)

Autoregressive models have proven to be very powerful in NLP text generation tasks and lately have gained popularity for image generation as well. However, they have seen limited use for the synthesis of 3D shapes so far. This is mainly due to the lack of a straightforward way to linearize 3D data as well as to scaling problems with the length of the resulting sequences when describing complex shapes. In this work we address both of these problems. We use octrees as a compact hierarchical shape representation that can be sequentialized by traversal ordering. Moreover, we introduce an adaptive compression scheme that significantly reduces sequence lengths and thus enables their effective generation with a transformer, while still allowing fully autoregressive sampling and parallel training. We demonstrate the performance of our model by performing super-resolution and comparing against the state of the art in shape generation.
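
The sequentialization step can be sketched in a few lines of Python. The function below (octree_sequence, an illustrative name) linearises a boolean voxel grid into a breadth-first octree token sequence, where each token encodes the occupancy of a node's eight children as one byte; this is a simplification that omits the paper's adaptive compression scheme and exact tokenisation.

    import numpy as np

    def octree_sequence(vox):
        """Linearize a cubic boolean occupancy grid (power-of-two side) breadth-first."""
        n = vox.shape[0]
        seq, queue = [], [(0, 0, 0, n)]              # nodes as (x, y, z, side length)
        while queue:
            next_queue = []
            for x, y, z, s in queue:
                token, hs = 0, s // 2
                for i, (dx, dy, dz) in enumerate(np.ndindex(2, 2, 2)):
                    child = vox[x + dx*hs : x + (dx+1)*hs,
                                y + dy*hs : y + (dy+1)*hs,
                                z + dz*hs : z + (dz+1)*hs]
                    if child.any():
                        token |= 1 << i              # bit i: child i is (partially) occupied
                        if hs > 1 and not child.all():
                            next_queue.append((x + dx*hs, y + dy*hs, z + dz*hs, hs))
                seq.append(token)                    # one byte (0..255) per octree node
            queue = next_queue
        return seq

Fully empty or fully full children are not refined further, which is what keeps the representation compact for large uniform regions.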

BibTeX:

@inproceedings{ibing_octree,
author = {Moritz Ibing and Gregor Kobsik and Leif Kobbelt},
title = {{Octree Transformer: Autoregressive 3D Shape Generation on Hierarchically Structured Sequences}},
booktitle = {{IEEE/CVF} Conference on Computer Vision and Pattern Recognition Workshops, {CVPR} Workshops 2023},
publisher = {{IEEE}},
year = {2023},
}




