B.Sc. Thesis
- Automated Test Selection for Simulation-based Testing of UAVs
Context
Unmanned aerial vehicles (UAVs), also known as drones, are becoming increasingly autonomous. With their commercial adoption, testing their safety requirements has become a critical concern. Simulation-based testing is a fundamental practice for cost-effective validation of UAV software.
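Test selection aims to run only a small, informative subset of a large scenario pool. As a minimal sketch of one common strategy, the greedy max-min selection below repeatedly picks the scenario farthest from those already chosen; the feature encoding, Euclidean distance, and budget are illustrative assumptions, not part of the topic description.

```python
import numpy as np

def select_tests(features: np.ndarray, budget: int) -> list[int]:
    """Greedy max-min selection: repeatedly pick the test farthest
    from every test already selected, covering diverse scenarios."""
    selected = [0]  # seed with an arbitrary first test
    # distance of every test to its closest already-selected test
    dists = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < min(budget, len(features)):
        nxt = int(np.argmax(dists))  # most "novel" remaining test
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Hypothetical encoding: each row describes one UAV test scenario
# (e.g., wind speed, obstacle density, mission length), normalized.
scenarios = np.random.rand(200, 3)
print(select_tests(scenarios, budget=10))
```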
- Creating a Core Rule Set for Android Taint Analysis Tools
Context
Android applications often process sensitive data such as location, contacts, and authentication tokens. Ensuring that this information is not leaked or misused is a central challenge in mobile app security.
Taint analysis is a static or dynamic program analysis technique that tracks the flow of sensitive data from designated “sources” through a program to determine whether it reaches security-relevant operations (“sinks”), such as network or logging calls, where it could leak. Several tools perform taint analysis on Android applications, including FlowDroid, Mariana Trench, and Joern; each has different capabilities, rule definitions, and performance characteristics.
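To make the source/sink terminology concrete, here is a toy forward taint propagation over a straight-line program. The statement format, the example API names, and the set-based propagation are simplifying assumptions; real tools such as FlowDroid analyze Android bytecode interprocedurally.

```python
# Illustrative source/sink sets, loosely inspired by Android APIs.
SOURCES = {"getLastKnownLocation", "getDeviceId"}
SINKS = {"sendTextMessage", "httpPost"}

def find_leaks(program):
    """program: list of (target_var, called_fn, arg_vars) tuples."""
    tainted, leaks = set(), []
    for target, fn, args in program:
        if fn in SOURCES:
            tainted.add(target)               # taint originates here
        elif any(a in tainted for a in args):
            if fn in SINKS:
                leaks.append((fn, args))      # tainted data reaches a sink
            elif target:
                tainted.add(target)           # taint propagates through calls
    return leaks

toy_app = [
    ("loc", "getLastKnownLocation", []),
    ("msg", "format", ["loc"]),
    (None,  "sendTextMessage", ["msg"]),
]
print(find_leaks(toy_app))  # -> [('sendTextMessage', ['msg'])]
```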
- Empirical Study on Merge Conflict Dynamics: The Role of Personas in Merge Resolutions
Context
Merge conflicts are a common challenge in collaborative software development, requiring developers to manually resolve inconsistencies between different code versions. Prior research has explored automated approaches to merge conflict resolution, but the impact of developer behavior and personas on the merge process remains largely unexplored [1].
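For intuition on where manual resolution, and thus developer judgment, enters the picture, the toy three-way merge below flags a conflict exactly when both branches change the same base line in different ways. It assumes equal-length files with no insertions or deletions, which a real merge algorithm must also handle.

```python
def merge3(base, ours, theirs):
    """Toy line-level three-way merge over equal-length line lists."""
    merged, conflicts = [], []
    for i, b in enumerate(base):
        o, t = ours[i], theirs[i]
        if o == t:               # identical result (or both unchanged)
            merged.append(o)
        elif o == b:             # only "theirs" changed this line
            merged.append(t)
        elif t == b:             # only "ours" changed this line
            merged.append(o)
        else:                    # both changed it differently: conflict
            conflicts.append(i)
            merged.append(f"<<<<<<< ours\n{o}\n=======\n{t}\n>>>>>>> theirs")
    return merged, conflicts

base   = ["timeout = 30", "retries = 3"]
ours   = ["timeout = 60", "retries = 3"]
theirs = ["timeout = 10", "retries = 5"]
merged, conflicts = merge3(base, ours, theirs)
print(conflicts)  # -> [0]: both branches changed the timeout
```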
- GPU Performance and Energy Trade-offs in Simulation-based Testing of Autonomous Vehicles
Context
Autonomous vehicles (AVs) are complex cyber-physical systems that require extensive testing to ensure safety. Since field testing is costly and unsafe, simulation-based testing using platforms like CARLA and BeamNG.tech has become a cornerstone of AV software validation. These simulators rely heavily on GPU performance for rendering, physics, and sensor emulation, and are therefore both resource-intensive and energy-demanding. As the scale of simulation campaigns grows (thousands of tests per day in CI pipelines), understanding and optimizing GPU cost becomes critical for cost-effective and sustainable testing.
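As a starting point for such measurements, the sketch below samples GPU power draw via NVIDIA's NVML bindings while a scenario runs and integrates it into an energy estimate. Here run_scenario is a hypothetical stand-in for launching a simulator test, and the 0.5 s sampling interval is an arbitrary choice.

```python
import threading
import time
import pynvml  # NVML bindings (pip install nvidia-ml-py)

def measure_energy(run_scenario, interval_s=0.5):
    """Sample GPU power while the workload runs; return energy in joules."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    samples, done = [], False

    def sampler():
        while not done:
            # nvmlDeviceGetPowerUsage reports milliwatts
            samples.append(pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0)
            time.sleep(interval_s)

    thread = threading.Thread(target=sampler)
    thread.start()
    start = time.time()
    run_scenario()                      # e.g., execute one CARLA scenario
    elapsed = time.time() - start
    done = True
    thread.join()
    pynvml.nvmlShutdown()
    # energy (J) ~ mean power (W) * duration (s)
    return sum(samples) / max(len(samples), 1) * elapsed

print(f"{measure_energy(lambda: time.sleep(3)):.1f} J")  # placeholder workload
```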
- Persistent Risks in GitHub Actions: How Developers Address, Prioritize, or Neglect Security Vulnerabilities in CI/CD Pipelines
Context
The increasing adoption of continuous integration and continuous deployment (CI/CD) practices has transformed software development, with GitHub Actions playing a key role in automating workflows. Many projects rely on third-party GitHub Actions, which streamline deployment but also introduce security vulnerabilities due to outdated dependencies, excessive permissions, or lack of maintenance.
Despite the availability of security mechanisms such as Dependabot alerts and the GitHub Advisory Database, vulnerabilities often remain unpatched for long periods, leaving repositories exposed to supply chain attacks. Understanding how developers address, prioritize, or neglect these vulnerabilities is key to improving security practices in CI/CD environments.
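One way to study patching latency empirically is through GitHub's REST API, which exposes a repository's Dependabot alerts. A minimal sketch, assuming a hypothetical example-org/example-repo and a token with permission to read security alerts:

```python
import os
from datetime import datetime, timezone
import requests

OWNER, REPO = "example-org", "example-repo"  # hypothetical repository

resp = requests.get(
    f"https://api.github.com/repos/{OWNER}/{REPO}/dependabot/alerts",
    headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    params={"state": "open", "per_page": 100},
)
resp.raise_for_status()

# Report how long each open alert has remained unpatched.
now = datetime.now(timezone.utc)
for alert in resp.json():
    opened = datetime.fromisoformat(alert["created_at"].replace("Z", "+00:00"))
    age_days = (now - opened).days
    severity = alert["security_advisory"]["severity"]
    print(f"{severity:>8}  open for {age_days:>4} days  {alert['html_url']}")
```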
- Reducing Simulation Overhead in UAV/Drone Test Generation Using Surrogate Models
Context
Unmanned aerial vehicles (UAVs), also known as drones, are becoming increasingly autonomous. With their commercial adoption, testing their safety requirements has become a critical concern. Simulation-based testing is a fundamental practice, but the scenarios considered in software-in-the-loop testing may differ from the actual scenarios experienced in the field.
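A common way to cut simulation cost is to train a cheap surrogate on past outcomes and reserve the simulator for cases the surrogate cannot decide. The sketch below is illustrative only: the scenario features, the random-forest surrogate, the toy ground truth, and the 0.35/0.65 confidence band are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Past campaign: scenario features (e.g., wind, payload, waypoint spread)
# and observed outcome (1 = safety violation in simulation).
X_hist = rng.random((500, 3))
y_hist = (X_hist[:, 0] + X_hist[:, 1] > 1.2).astype(int)  # toy ground truth

surrogate = RandomForestClassifier(n_estimators=100).fit(X_hist, y_hist)

candidates = rng.random((1000, 3))
p_fail = surrogate.predict_proba(candidates)[:, 1]

confident = (p_fail < 0.35) | (p_fail > 0.65)  # surrogate decides these
to_simulate = np.where(~confident)[0]          # simulator decides the rest
print(f"full simulations needed: {len(to_simulate)} / {len(candidates)}")
```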
- RL-based Training for Code in LLMs
Context
Large Language Models (LLMs) have shown strong performance in code generation, completion, and repair tasks. However, supervised pretraining on massive code corpora is limited by data quality, the lack of explicit feedback, and the inability to capture correctness beyond next-token prediction. Recent research has explored Reinforcement Learning (RL) based training approaches to refine LLMs for code. By leveraging feedback signals such as compilation success, test case execution, or static analysis warnings, models can be trained to better align with correctness and developer intent.
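To illustrate what such a feedback signal can look like, here is a minimal execution-based reward function: a candidate completion earns a small bonus for compiling and partial credit per passing test. The expected function name solution, the reward weights, and the absence of sandboxing and timeouts are simplifying assumptions; a real training setup would isolate execution.

```python
def code_reward(candidate_src: str, tests: list) -> float:
    """Reward in [-1, 1] based on syntax validity and test outcomes."""
    try:
        compile(candidate_src, "<candidate>", "exec")   # syntax check
    except SyntaxError:
        return -1.0                                     # penalize non-code
    namespace = {}
    try:
        exec(candidate_src, namespace)                  # define the function
    except Exception:
        return -0.5                                     # crashes on load
    passed = 0
    for args, expected in tests:
        try:
            if namespace["solution"](*args) == expected:
                passed += 1
        except Exception:
            pass                                        # failing test: no credit
    return 0.1 + 0.9 * passed / len(tests)              # small compile bonus

tests = [((2, 3), 5), ((0, 0), 0)]
print(code_reward("def solution(a, b):\n    return a + b", tests))  # -> 1.0
```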