Current Topics

Below you will find current topics that can be worked on in the context of a Seminar Software Engineering, a Bachelor’s Thesis, or a Master’s Thesis. The context indicates the scope of the work, and the keywords give you further information about the topic and its domain.

Note that there are multiple pages; you can navigate between them using the buttons at the bottom.

User Interaction Layer for Mixed Reality Robot Programming

Context

This project addresses the fundamental challenge of programming and controlling robot swarms through intuitive interfaces. It lies in the domain of Mixed Reality robotics interfaces, spatial computing, and multi-robot coordination systems. Traditional robot programming requires complex coding or abstract 2D interfaces that disconnect users from the inherently spatial nature of robotic coordination. This work contributes to the broader SwarmOps research initiative by developing the human-machine interface components needed for effective human oversight of collaborative cyber-physical systems.

Creating a Core Rule Set for Android Taint Analysis Tools

Context

Android applications often process sensitive data such as location, contacts, and authentication tokens. Ensuring that this information is not leaked or misused is a central challenge in mobile app security.

Taint analysis is a static or dynamic program analysis technique that tracks the flow of sensitive data from the API calls where it enters the program (“sources”) to the points where it may leave it (“sinks”), such as network requests or log output, in order to determine whether confidential information can leak. Several tools exist to perform taint analysis on Android applications, including FlowDroid, Mariana Trench, and Joern; each has different capabilities, rule definitions, and performance characteristics.
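To make the source/sink idea concrete, here is a minimal, tool-agnostic sketch in Python (the method signatures, the trace format, and the find_leaks helper are illustrative assumptions, not the rule format of FlowDroid, Mariana Trench, or Joern): data returned by a source taints the variables derived from it, and a leak is reported when a tainted variable reaches a sink.

```python
# Conceptual sketch of taint propagation over a straight-line trace of
# assignments and calls. The Android method names are illustrative examples.

SOURCES = {
    "android.location.LocationManager.getLastKnownLocation",
    "android.telephony.TelephonyManager.getDeviceId",
    "android.accounts.AccountManager.getPassword",
}

SINKS = {
    "android.util.Log.d",
    "java.net.HttpURLConnection.getOutputStream",
    "android.telephony.SmsManager.sendTextMessage",
}


def find_leaks(trace):
    """Flag sink calls whose arguments carry data originating from a source.

    `trace` is a list of events, each either
      ("assign", target_var, called_method, arg_vars) or
      ("call",   called_method, arg_vars).
    """
    tainted = set()
    leaks = []
    for event in trace:
        if event[0] == "assign":
            _, target, method, args = event
            # A variable becomes tainted if it is returned by a source
            # or computed from an already-tainted variable.
            if method in SOURCES or any(a in tainted for a in args):
                tainted.add(target)
        elif event[0] == "call":
            _, method, args = event
            if method in SINKS and any(a in tainted for a in args):
                leaks.append((method, [a for a in args if a in tainted]))
    return leaks


if __name__ == "__main__":
    example_trace = [
        ("assign", "loc", "android.location.LocationManager.getLastKnownLocation", []),
        ("assign", "msg", "java.lang.String.valueOf", ["loc"]),
        ("call", "android.util.Log.d", ["tag", "msg"]),
    ]
    print(find_leaks(example_trace))  # -> [('android.util.Log.d', ['msg'])]
```

In essence, a shared core rule set would pin down such lists of source and sink signatures in a form all three tools can consume, so that their results become comparable.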

GPU Performance and Energy Trade-offs in Simulation-based Testing of Autonomous Vehicles

Context

Autonomous vehicles (AVs) are complex cyber-physical systems that require extensive testing to ensure safety. Since field testing is costly and unsafe, simulation-based testing using platforms like CARLA and BeamNG.tech has become a cornerstone of AV software validation. These simulators rely heavily on GPU performance for rendering, physics, and sensor emulation, and are therefore both resource-intensive and energy-demanding. As the scale of simulation campaigns grows (thousands of tests per day in CI pipelines), understanding and optimizing GPU cost becomes critical for cost-effective and sustainable testing.
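As one possible starting point, GPU power can be sampled while a test scenario runs and integrated into an energy estimate. The sketch below assumes an NVIDIA GPU and the `pynvml` bindings; `run_scenario` is a hypothetical placeholder, not CARLA's or BeamNG.tech's actual API.

```python
# Sketch: estimate the GPU energy of a single simulation-based test by
# polling NVML power readings in a background thread while the test runs.

import threading
import time

import pynvml


def measure_gpu_energy(run_test, interval_s=0.1, device_index=0):
    """Run `run_test()` and return (result, energy_joules, duration_s)."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(device_index)

    samples = []          # (timestamp, watts)
    stop = threading.Event()

    def sampler():
        while not stop.is_set():
            watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
            samples.append((time.time(), watts))
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler, daemon=True)
    start = time.time()
    thread.start()
    try:
        result = run_test()
    finally:
        stop.set()
        thread.join()
        pynvml.nvmlShutdown()
    duration = time.time() - start

    # Trapezoidal integration of power over time gives an energy estimate.
    energy = sum(
        (t2 - t1) * (p1 + p2) / 2.0
        for (t1, p1), (t2, p2) in zip(samples, samples[1:])
    )
    return result, energy, duration


if __name__ == "__main__":
    def run_scenario():          # hypothetical stand-in for a real test case
        time.sleep(5)            # e.g., drive a scenario in the simulator
        return "passed"

    result, joules, secs = measure_gpu_energy(run_scenario)
    print(f"{result}: {joules:.1f} J over {secs:.1f} s")
```

Comparing such per-test energy estimates across simulators, scenario complexity, and graphics settings is one way to study the trade-offs this topic targets.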

RL-based Training for Code in LLMs

Context

Large Language Models (LLMs) have shown strong performance in code generation, completion, and repair tasks. However, supervised pretraining on massive code corpora is limited by data quality, lack of explicit feedback, and the inability to capture correctness beyond next-token prediction. Recent research has therefore explored Reinforcement Learning (RL)-based training approaches to refine LLMs for code. By leveraging feedback signals such as compilation success, test case execution, or static analysis warnings, models can be trained to better align with correctness and developer intent.
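To illustrate the kind of feedback signal involved, here is a simplified sketch of an execution-based reward (the reward shaping, the test harness, and the execution_reward helper are assumptions for illustration, not an actual training pipeline; real setups typically sandbox execution and combine several signals):

```python
# Sketch: turn compilation and test-execution feedback into a scalar reward
# that could be used when fine-tuning a code LLM with RL.

import os
import subprocess
import sys
import tempfile


def execution_reward(generated_code: str, test_code: str, timeout_s: int = 10) -> float:
    """Reward: -1.0 if the code does not compile, 1.0 if all tests pass,
    0.0 otherwise (runtime errors, failing assertions, or timeouts)."""
    # 1) Compilation feedback: reject code with syntax errors outright.
    try:
        compile(generated_code, "<generated>", "exec")
    except SyntaxError:
        return -1.0

    # 2) Execution feedback: run the candidate together with its tests in a
    #    fresh interpreter process.
    program = generated_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True, timeout=timeout_s
        )
        return 1.0 if proc.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        return 0.0
    finally:
        os.unlink(path)


if __name__ == "__main__":
    candidate = "def add(a, b):\n    return a + b\n"
    tests = "assert add(1, 2) == 3\nassert add(-1, 1) == 0\n"
    print(execution_reward(candidate, tests))  # -> 1.0
```

In an RL loop, a scalar like this would serve as the return signal for policy-optimization updates (e.g., PPO) over sampled completions.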