M.Sc. Thesis
Automated Test Selection for Simulation-based Testing of UAVs
Context
Unmanned aerial vehicles (UAVs), also known as drones, are becoming increasingly autonomous. With their growing commercial adoption, testing their safety requirements has become a critical concern. Simulation-based testing is a fundamental practice for cost-effective UAV testing.
Creating a Core Rule Set for Android Taint Analysis Tools
Context
Android applications often process sensitive data such as location, contacts, and authentication tokens. Ensuring that this information is not leaked or misused is a central challenge in mobile app security.
Taint analysis is a static or dynamic program analysis technique that tracks the flow of sensitive data from designated entry points (“sources”) through a program to determine whether it reaches security-critical operations (“sinks”), such as network or logging APIs. Several tools perform taint analysis on Android applications, including FlowDroid, Mariana Trench, and Joern; each has different capabilities, rule definitions, and performance characteristics.
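To make the idea concrete, below is a minimal sketch of dynamic taint tracking in Python. The `Tainted` wrapper, the source and sink functions, and the leaked value are all hypothetical illustrations; tools like FlowDroid analyze Android bytecode statically rather than wrapping values at runtime.

```python
# Minimal dynamic taint-tracking sketch. Everything here (the Tainted
# wrapper, source, sink, and the device ID) is a hypothetical illustration.

class Tainted(str):
    """A string value flagged as originating from a sensitive source."""

def source_get_device_id() -> Tainted:
    # Source: returns sensitive data, marked as tainted.
    return Tainted("357240051111110")

def transform(value: str) -> str:
    # Propagation: operations on tainted data keep the taint flag.
    result = value.upper()
    return Tainted(result) if isinstance(value, Tainted) else result

def sink_send_http(payload: str) -> None:
    # Sink: a leak is reported if tainted data reaches it unchecked.
    if isinstance(payload, Tainted):
        print(f"LEAK: tainted data reaches network sink: {payload!r}")
    else:
        print("ok: untainted payload")

sink_send_http(transform(source_get_device_id()))  # -> LEAK: ...
```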
Context
Merge conflicts are a common challenge in collaborative software development, requiring developers to manually resolve inconsistencies between different versions of the code. Prior research has explored automated approaches to merge conflict resolution, but the impact of developer behavior and personas on the merge process has not been fully investigated [1].
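For illustration, the sketch below shows the kind of conflicted hunk Git leaves in a file when two branches change the same line, together with one possible manual resolution; the function and branch names are hypothetical.

```python
# Conflicted hunk as Git writes it into the file (shown in a comment,
# since the markers make the file syntactically invalid until resolved):
#
#   def retry_delay(attempt):
#   <<<<<<< HEAD
#       return 2 ** attempt        # main branch: exponential backoff
#   =======
#       return 5 * attempt         # feature branch: linear backoff
#   >>>>>>> feature/linear-backoff

# One possible manual resolution: the developer keeps the HEAD version.
def retry_delay(attempt: int) -> int:
    return 2 ** attempt

print(retry_delay(3))  # -> 8
```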
Context
Modern public transportation vehicles, such as trams, buses, trolleybuses, and trains, increasingly rely on on-board computing units to process and securely transfer large volumes of data generated by sensors and surveillance cameras. These systems often operate on limited battery power during night-time parking, when vehicles are disconnected from external energy sources. During this time window, the on-board computer must complete several computationally intensive tasks, such as software updates, video decoding, compression, encryption, and data upload, before service resumes.
In collaboration with Supercomputing Systems AG (SCS) and a public transportation company in Romandie, this project addresses the challenge of executing these tasks reliably under strict energy and time constraints. Understanding how to configure the embedded system and which communication protocols to select for data transfer, so that operation stays both energy-efficient and predictable, is essential for dependable fleet operations.
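As a first intuition for the trade-off, the sketch below checks a sequential task schedule against an energy and time budget. All task names, power draws, and durations are illustrative assumptions, not measurements from the SCS deployment.

```python
# Back-of-the-envelope feasibility check for the overnight window. All
# numbers are illustrative assumptions, not measurements from the fleet.

BATTERY_WH = 800.0   # usable overnight battery energy (assumed)
WINDOW_H = 6.0       # parking window before service resumes (assumed)

# (task name, average power draw in W, duration in h) - assumed values
TASKS = [
    ("software update",   25.0, 0.5),
    ("video compression", 40.0, 2.0),
    ("encryption",        30.0, 1.0),
    ("upload via LTE",    20.0, 2.5),
]

def feasible(tasks, battery_wh, window_h):
    """Check a sequential schedule against the energy and time budgets."""
    energy_wh = sum(power * dur for _, power, dur in tasks)
    time_h = sum(dur for _, _, dur in tasks)
    return energy_wh <= battery_wh and time_h <= window_h, energy_wh, time_h

ok, e, t = feasible(TASKS, BATTERY_WH, WINDOW_H)
print(f"energy {e:.0f}/{BATTERY_WH:.0f} Wh, time {t:.1f}/{WINDOW_H:.1f} h "
      f"-> {'fits' if ok else 'does not fit'}")
```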
Context
Autonomous vehicles (AVs) are complex cyber-physical systems that require extensive testing to ensure safety. Since field testing is costly and potentially unsafe, simulation-based testing with platforms such as CARLA and BeamNG.tech has become a cornerstone of AV software validation. These simulators rely heavily on GPU performance for rendering, physics, and sensor emulation, and are therefore both resource-intensive and energy-demanding. As simulation campaigns scale to thousands of tests per day in CI pipelines, understanding and optimizing GPU cost becomes critical for cost-effective and sustainable testing.
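As a starting point, GPU power draw can be sampled while a simulation runs and integrated into an energy estimate, for example with NVIDIA's NVML bindings. The sketch below assumes a single NVIDIA GPU at index 0 and the `nvidia-ml-py` package; the fixed sampling rate and the attribution of energy to individual test cases are simplifications.

```python
# Sample GPU power draw during a simulation run and integrate it into an
# energy estimate, using NVIDIA's NVML bindings (pip install nvidia-ml-py).
# Assumes a single NVIDIA GPU at index 0; run alongside a CARLA or
# BeamNG.tech test to attribute energy to that test case.
import time
import pynvml

def measure_gpu_energy(duration_s: float, interval_s: float = 1.0) -> float:
    """Estimate GPU energy in joules by sampling power draw at a fixed rate."""
    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)
    energy_j = 0.0
    t_end = time.time() + duration_s
    while time.time() < t_end:
        power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0  # mW -> W
        energy_j += power_w * interval_s
        time.sleep(interval_s)
    pynvml.nvmlShutdown()
    return energy_j

if __name__ == "__main__":
    print(f"estimated GPU energy: {measure_gpu_energy(10.0):.0f} J")
```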
Context
The increasing adoption of continuous integration and continuous deployment (CI/CD) practices has transformed software development, with GitHub Actions playing a key role in automating workflows. Many projects rely on third-party GitHub Actions, which streamline deployment but can also introduce security vulnerabilities through outdated dependencies, excessive permissions, or lack of maintenance.
Despite the availability of security mechanisms such as Dependabot alerts and the GitHub Advisory Database, vulnerabilities often remain unpatched for long periods, leaving repositories exposed to supply chain attacks. Understanding how developers address, prioritize, or neglect these vulnerabilities is key to improving security practices in CI/CD environments.
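One simple measurement such a study could build on is detecting third-party actions that are not pinned to a full commit SHA, a common hardening recommendation against supply chain attacks. The sketch below is a heuristic scan; the regular expressions and workflow location follow GitHub's usual conventions but are simplifications (reusable workflows, for instance, are ignored).

```python
# Heuristic scan for actions not pinned to a 40-character commit SHA.
import re
from pathlib import Path

USES_RE = re.compile(r"uses:\s*([\w.-]+/[\w.-]+)@([\w./-]+)")
SHA_RE = re.compile(r"^[0-9a-f]{40}$")

def unpinned_actions(repo_root: str):
    findings = []
    for wf in Path(repo_root, ".github", "workflows").glob("*.y*ml"):
        for lineno, line in enumerate(wf.read_text().splitlines(), start=1):
            match = USES_RE.search(line)
            if match and not SHA_RE.match(match.group(2)):
                findings.append((wf.name, lineno, match.group(0).strip()))
    return findings

for finding in unpinned_actions("."):
    print(finding)  # e.g. ('ci.yml', 12, 'uses: actions/checkout@v4')
```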
Reducing Simulation Overhead in UAV/Drone Test Generation Using Surrogate Models
Context
Unmanned aerial vehicles (UAVs), also known as drones, are becoming increasingly autonomous. With their growing commercial adoption, testing their safety requirements has become a critical concern. Simulation-based testing is a fundamental practice, but the scenarios exercised in software-in-the-loop testing may differ from those actually experienced in the field.
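A surrogate model can cut simulation overhead by predicting a scenario's outcome from its parameters and reserving full simulation for the most critical candidates. The sketch below uses a random forest as the surrogate; the scenario features, the safety metric, and the synthetic training data are all illustrative assumptions.

```python
# Train a cheap surrogate on past runs, then simulate only the candidates
# it predicts to be closest to failure.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Past scenarios: parameters (e.g., wind speed, obstacle density, mission
# length, normalized) and an observed safety metric (e.g., minimum distance
# to obstacles); here generated synthetically.
X_past = rng.uniform(0.0, 1.0, size=(200, 3))
y_past = 1.5 - X_past[:, 0] - 0.5 * X_past[:, 1] + rng.normal(0.0, 0.1, 200)

surrogate = RandomForestRegressor(n_estimators=100, random_state=0)
surrogate.fit(X_past, y_past)

# Rank new candidates by predicted safety margin; send only the 20 most
# critical (lowest predicted margin) to the expensive simulator.
candidates = rng.uniform(0.0, 1.0, size=(1000, 3))
predicted_margin = surrogate.predict(candidates)
to_simulate = candidates[np.argsort(predicted_margin)[:20]]
print(f"selected {len(to_simulate)} of {len(candidates)} candidates")
```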
RL-based Training for Code in LLMs
Context
Large Language Models (LLMs) have shown strong performance in code generation, completion, and repair tasks. However, supervised pretraining on massive code corpora is limited by data quality, the lack of explicit feedback, and the inability to capture correctness beyond next-token prediction. Recent research has therefore explored Reinforcement Learning (RL) based training approaches to refine LLMs for code. By leveraging feedback signals such as compilation success, test case execution, or static analysis warnings, models can be trained to better align with correctness and developer intent.
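As an illustration of such a feedback signal, the sketch below derives a scalar reward from executing generated code against its tests; the partial-credit scheme and the example task are assumptions, not a specific paper's method. In an RL loop, this reward would drive a policy-gradient update (e.g., PPO) on the generating model.

```python
# Derive a scalar reward from executing generated code against its tests.
import subprocess
import tempfile

def execution_reward(generated_code: str, test_code: str) -> float:
    """1.0 if all tests pass, 0.2 if the code runs but fails an assertion,
    0.0 if it crashes or times out."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run(["python", path],
                                capture_output=True, timeout=10)
    except subprocess.TimeoutExpired:
        return 0.0
    if result.returncode == 0:
        return 1.0
    return 0.2 if b"AssertionError" in result.stderr else 0.0

sample = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
print(execution_reward(sample, tests))  # -> 1.0
```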