Towards Mental Imagery-Aware Systems using Physiological Sensors and Machine Learning
Context-aware systems exploit information about their operating environment to provide relevant information or functions to their users.
Currently available context-aware systems, for example advanced driver assistance systems or mobile learning applications, rely primarily on physical activity or direct user input. This thesis takes the next step and introduces the concept of mental-imagery-aware systems, which could enable a more sophisticated perception of users. Mental imagery comprises multiple dimensions; this thesis focuses on the two forms most common in daily life: mind-wandering and spatial imagery.
Mind-wandering-aware systems are especially relevant in learning settings, where mind-wandering is associated with reduced learning performance. This work proposes a novel approach for detecting episodes of mind-wandering using physiological sensors and machine learning methods. For the first time, it is demonstrated that electrodermal activity alone is sufficient for classifying episodes of mind-wandering with outstanding classification accuracy. Fusing eye-tracking and electrodermal activity data further improves the classification performance of the machine learning algorithms.
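The sensor-fusion idea can be sketched as follows. This is a minimal illustration with synthetic data, not the thesis pipeline: the feature names (skin-conductance level, fixation duration, etc.), the window-level representation, and the random-forest classifier are all assumptions chosen for brevity; it only shows the pattern of comparing an EDA-only model against early fusion (feature concatenation) of EDA and eye-tracking features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # number of labeled time windows (synthetic)

# Hypothetical per-window features (synthetic stand-ins, not the thesis features):
eda = rng.normal(size=(n, 2))   # e.g. mean skin-conductance level, SCR rate
eye = rng.normal(size=(n, 2))   # e.g. mean fixation duration, blink rate
y = rng.integers(0, 2, size=n)  # 1 = mind-wandering episode, 0 = on-task

# EDA-only model vs. early fusion (simple feature concatenation) of EDA + eye data
clf = RandomForestClassifier(n_estimators=100, random_state=0)
acc_eda = cross_val_score(clf, eda, y, cv=5).mean()
acc_fused = cross_val_score(clf, np.hstack([eda, eye]), y, cv=5).mean()
print(f"EDA only: {acc_eda:.2f}, EDA + eye fusion: {acc_fused:.2f}")
```

With real, informative features the fused model would be expected to match or exceed the EDA-only baseline; here the labels are random, so the scores merely demonstrate the evaluation mechanics.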
Next, this thesis presents a prospect towards spatial-imagery- and engagement-aware systems. With the rapid increase of automation levels in series-production vehicles, there is a need to better understand their impact on the spatial imagery required for successful navigation. For this purpose, a highly immersive driving simulator with an integrated eye-tracking system is deployed. With a real-time application in mind, ``engagement with the driving task'' is used as a proxy for spatial imagery to infer the driver's presence in the driving loop. The work demonstrates that eye-tracking features combined with a gradient boosting algorithm recognize the driver's disengagement from the driving loop, outperforming the state of the art. Finally, this work paves the way for recognizing driver engagement using a UWB radar and deep learning algorithms: six driving activities are recorded with a UWB radar and classified using state-of-the-art deep learning models, showing promising results.
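The engagement-recognition step can be illustrated with a short sketch. The four eye-tracking features, the synthetic label rule, and the hyperparameters below are placeholders, not the thesis configuration; the sketch only shows how gradient boosting is fitted on window-level gaze features to predict disengagement from the driving loop.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 300  # number of labeled time windows (synthetic)

# Hypothetical eye-tracking features per window (synthetic stand-ins):
# gaze dispersion, mean fixation duration, saccade rate, pupil diameter
X = rng.normal(size=(n, 4))
# Synthetic label rule for illustration: 1 = disengaged from the driving loop
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=1)
gb = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05, max_depth=2)
gb.fit(X_tr, y_tr)
acc = gb.score(X_te, y_te)  # held-out accuracy on the synthetic task
print(f"held-out accuracy: {acc:.2f}")
```

Because the synthetic labels carry a learnable signal, the classifier should score well above chance on the held-out split; on the real simulator data the thesis reports performance above the state of the art, which this toy setup does not attempt to reproduce.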