User Interaction Layer for Mixed Reality Robot Programming
Context
This project addresses the fundamental challenge of programming and controlling robot swarms through intuitive interfaces. It sits at the intersection of Mixed Reality robotics interfaces, spatial computing, and multi-robot coordination systems. Traditional robot programming requires complex coding or abstract 2D interfaces that disconnect users from the inherently spatial nature of robot coordination. The work contributes to the broader SwarmOps research initiative by developing the human-machine interface components needed for effective human oversight of collaborative cyber-physical systems.
Motivation
Programming robot swarms currently demands expertise in robotics-specific programming languages and the ability to translate spatial coordination into abstract code. This creates a barrier that prevents domain experts (who understand the tasks robots should perform) from directly programming robot behaviors. Existing graphical interfaces remain tied to 2D paradigms that poorly represent the 3D spatial relationships crucial for multi-robot coordination. Mixed Reality technology offers the potential for direct spatial manipulation of robot behaviors, but the corresponding interaction methods remain underdeveloped. Without intuitive programming interfaces, the deployment of robot swarms stays limited to specialists rather than reaching the domain experts who could benefit most from robotic automation.
Goal
The student will create a user interaction layer for natural programming of robot systems in Mixed Reality environments. The system will comprise:
- a gesture recognition component supporting intuitive hand-based robot control commands;
- voice command processing for natural language robot instructions;
- spatial manipulation interfaces for direct 3D positioning and path planning;
- a visual programming language adapted for 3D spatial interaction, replacing traditional 2D paradigms;
- multi-modal feedback combining visual, auditory, and haptic responses;
- formation design tools using drag-and-drop interaction in 3D space;
- user experience optimization addressing ergonomic factors and learning curves;
- user studies comparing MR programming efficiency with traditional interfaces.
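As a concrete illustration of the kind of gesture-to-waypoint interaction this layer could expose, the following is a minimal sketch in Unity C#. The `IHandTracker` and `IRobotCommander` interfaces are placeholders standing in for whatever tracking backend (e.g., MRTK or Ultraleap) and robot communication layer the project ultimately adopts; they are not existing APIs.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Placeholder interfaces: stand-ins for the hand-tracking backend (e.g. MRTK,
// Ultraleap) and the robot communication layer chosen later in the project.
public interface IHandTracker
{
    bool IsPinching { get; }        // true while thumb and index fingertip touch
    Vector3 PinchPosition { get; }  // pinch location in Unity world space
}

public interface IRobotCommander
{
    void SendWaypoint(string robotId, Vector3 worldPosition);
}

// Places a waypoint for the selected robot each time a pinch gesture starts.
public class PinchWaypointPlacer : MonoBehaviour
{
    public string selectedRobotId = "robot_0";   // hypothetical robot identifier
    public IHandTracker Tracker { get; set; }
    public IRobotCommander Commander { get; set; }

    private readonly List<Vector3> waypoints = new List<Vector3>();
    private bool wasPinching;

    void Update()
    {
        if (Tracker == null || Commander == null) return;

        // React only to the rising edge of the pinch to avoid duplicate waypoints.
        if (Tracker.IsPinching && !wasPinching)
        {
            Vector3 target = Tracker.PinchPosition;
            waypoints.Add(target);
            Commander.SendWaypoint(selectedRobotId, target);
        }
        wasPinching = Tracker.IsPinching;
    }
}
```

The same edge-triggered pattern generalizes to other gestures (grab, swipe) once a concrete tracking backend is wired in.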
Requirements
- The student needs solid programming experience in C# and Unity 3D development, along with a basic understanding of 3D mathematics and spatial transformations (a small coordinate-frame example follows this list).
- Familiarity with user interface design principles and human-computer interaction concepts is essential.
- Experience with gesture-based interaction or voice recognition systems is helpful but not required.
- The student should have interest in user experience design and willingness to conduct user studies and gather feedback.
- Problem-solving skills and creativity in interface design are important, as is comfort with iterative development based on user testing.
- Access to Mixed Reality hardware will be provided.
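The coordinate-frame example mentioned above: a small, illustrative helper that maps a point selected in the headset's world space into a planar robot-map frame. The `mapOrigin` transform is assumed to come from a calibration step that is not shown here.

```csharp
using UnityEngine;

// Illustrative helper: converts a world-space point (e.g. a pinch location) into
// 2D coordinates of a ground-plane robot map. `mapOrigin` is a Transform placed
// at the robot map's origin by a (hypothetical) calibration procedure.
public static class FrameConversion
{
    public static Vector2 WorldToRobotMap(Transform mapOrigin, Vector3 worldPoint)
    {
        // Express the point in the map's local frame, then drop the vertical axis.
        Vector3 local = mapOrigin.InverseTransformPoint(worldPoint);
        return new Vector2(local.x, local.z);  // Unity is Y-up; ground robots move in x/z
    }
}
```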
Pointers
- Unity 3D documentation and Mixed Reality Toolkit (MRTK) resources provide development foundations.
- Ultraleap hand tracking documentation and gesture recognition techniques support interaction development.
- Voice recognition and natural language processing libraries for Unity handle speech input (see the voice command sketch after this list).
- Human-computer interaction literature on spatial interfaces and 3D manipulation offers design guidance.
- Ergonomic design principles for extended VR/MR usage address user comfort.
- User study methodologies for interface evaluation provide evaluation frameworks.
- ROS2 integration tutorials support robot communication (see the publisher sketch after this list).
- Research papers on spatial programming languages and visual programming paradigms inform design decisions.
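For the voice-input pointer above, a minimal sketch using Unity's built-in KeywordRecognizer (Windows-only; a cross-platform speech service could replace it in practice). The command phrases are illustrative, not a fixed vocabulary, and the handler is only a logging stub.

```csharp
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal keyword-based voice command sketch. UnityEngine.Windows.Speech is
// Windows-only; a cross-platform speech service could replace it later.
public class VoiceCommandListener : MonoBehaviour
{
    private KeywordRecognizer recognizer;
    private readonly string[] phrases = { "move here", "form a line", "stop all robots" };

    void Start()
    {
        recognizer = new KeywordRecognizer(phrases);
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Hand the recognized phrase to the robot command layer; logged here as a stub.
        Debug.Log($"Voice command recognized: {args.text}");
    }

    void OnDestroy()
    {
        if (recognizer != null && recognizer.IsRunning) recognizer.Stop();
        recognizer?.Dispose();
    }
}
```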
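For the ROS2 pointer, a sketch of how velocity commands might be published from Unity via the ROS TCP Connector package from the Unity Robotics Hub. The topic name and per-robot layout are assumptions, and API details can vary between package versions.

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Geometry;

// Sketch: publish a velocity command to a ROS 2 topic using the Unity Robotics
// ROS TCP Connector. Topic naming is an assumption; check the installed package
// version, as the connector API has changed across releases.
public class CmdVelPublisher : MonoBehaviour
{
    public string topicName = "/robot_0/cmd_vel";  // assumed per-robot topic layout
    private ROSConnection ros;

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<TwistMsg>(topicName);
    }

    // Forward speed in m/s, turn rate in rad/s.
    public void PublishVelocity(float forward, float turn)
    {
        var msg = new TwistMsg
        {
            linear = new Vector3Msg(forward, 0, 0),
            angular = new Vector3Msg(0, 0, turn)
        };
        ros.Publish(topicName, msg);
    }
}
```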