Managing Affective-learning THrough Intelligent atoms and Smart InteractionS (MaTHiSiS)
During my Ph.D., I worked on the Horizon 2020-funded project MaTHiSiS, an educational platform that provides a personalized learning experience based on multimodal emotion recognition from diverse cues. My research in MaTHiSiS exploited a wide range of sensors to capture learners’ affective states. Building on these affective cues, the project aimed to foster a personalized student experience by increasing engagement and preventing boredom and anxiety. My research also benefited from collaboration with pedagogical experts to develop dynamic multimodal fusion tailored to different learner use cases, such as learners with autism spectrum disorder or severe disabilities. Please see the MaTHiSiS website for more information.
In MaTHiSiS, our group (RAI) led the AI work package. In this role, we researched frameworks for affective learning that combine automatic human emotion recognition with personalization of the learning process, using state-of-the-art machine learning. In this project, I worked on the following tasks:
Developing a multimodal fusion framework for affective learning (a brief sketch of this fusion idea follows the list).
Developing a framework to capture the correlation between students’ affective states and their interactions with the learning materials. This was one of the input modalities of the multimodal fusion framework in MaTHiSiS.
Developing a vision framework for recognizing basic emotions from facial expressions.
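A minimal sketch of the decision-level (late) fusion idea behind the first task is shown below; the modality names, affective states, and weights are illustrative assumptions, not the actual MaTHiSiS implementation:

```python
import numpy as np

# Illustrative affective states; MaTHiSiS targeted engagement and the
# prevention of boredom and anxiety.
STATES = ["boredom", "flow", "anxiety"]

def fuse_predictions(modality_probs, weights=None):
    """Weighted decision-level (late) fusion of per-modality posteriors.

    modality_probs: dict mapping modality name -> probability vector
                    over STATES (each summing to 1).
    weights:        optional dict of per-modality reliabilities;
                    uniform weights are used when omitted.
    """
    names = list(modality_probs)
    if weights is None:
        weights = {name: 1.0 for name in names}
    total = sum(weights[name] for name in names)
    fused = sum(weights[name] * np.asarray(modality_probs[name])
                for name in names) / total
    return STATES[int(np.argmax(fused))], fused

# Hypothetical example: a face channel weighted higher than an
# interaction channel.
label, probs = fuse_predictions(
    {"face": [0.1, 0.7, 0.2], "interaction": [0.3, 0.4, 0.3]},
    weights={"face": 0.7, "interaction": 0.3},
)
print(label, probs)
```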
This video shows a presentation of the MaTHiSiS project, given at the Bachelor Open Days (October 2017). It includes my work as well as that of my colleagues in the RAI group.
Related Publications
Towards Affect Recognition through Interactions with Learning Materials
Esam Ghaleb, Mirela Popa, Enrique Hortal, and 2 more authors
In 2018 17th International Conference on Machine Learning and Applications (ICMLA)
Affective state recognition has recently attracted a notable amount of attention in the research community, as it can be directly linked to a student’s performance during learning. Consequently, being able to retrieve the affect of a student can lead to more personalized education, targeting higher degrees of engagement and, thus, optimizing the learning experience and its outcomes. In this paper, we apply Machine Learning (ML) and present a novel approach for affect recognition in Technology-Enhanced Learning (TEL) by understanding learners’ experience through tracking their interactions with a serious game as a learning platform. We utilize a variety of interaction parameters to examine their potential to be used as an indicator of the learner’s affective state. Driven by the Theory of Flow model, we investigate the correspondence between the prediction of users’ self-reported affective states and the interaction features. Cross-subject evaluation using Support Vector Machines (SVMs) on a dataset of 32 participants interacting with the platform demonstrated that the proposed framework could achieve a significant precision in affect recognition. The subject-based evaluation highlighted the benefits of an adaptive personalized learning experience, contributing to achieving optimized levels of engagement.
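As a rough illustration of the cross-subject protocol described in this abstract, the sketch below runs leave-one-subject-out cross-validation with an SVM in scikit-learn; the interaction features, labels, and hyperparameters are synthetic placeholders, not the paper’s data:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-ins: interaction features (e.g. response time, error
# rate, clicks per minute) and self-reported affect labels per sample.
n_samples, n_features = 320, 6
X = rng.normal(size=(n_samples, n_features))          # interaction features
y = rng.integers(0, 3, size=n_samples)                # 0=boredom, 1=flow, 2=anxiety
subjects = np.repeat(np.arange(32), n_samples // 32)  # 32 participants

# Leave-one-subject-out: every fold tests on a participant unseen during
# training, which is what a cross-subject evaluation requires.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(model, X, y, groups=subjects, cv=LeaveOneGroupOut())
print(f"mean cross-subject accuracy: {scores.mean():.3f}")
```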
High-performance and lightweight real-time deep face emotion recognition
Justus Schwan, Esam Ghaleb, Enrique Hortal, and 1 more author
In 2017 12th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP)
Deep learning is used for all kinds of tasks which require human-like performance, such as voice and image recognition in smartphones, smart home technology, and self-driving cars. While great advances have been made in the field, results are often not satisfactory when compared to human performance. In the field of facial emotion recognition, especially in the wild, Convolutional Neural Networks (CNN) are employed because of their excellent generalization properties. However, while CNNs can learn a representation for certain object classes, an amount of (annotated) training data roughly proportional to the class’s complexity is needed and seldom available. This work describes an advanced pre-processing algorithm for facial images and a transfer learning mechanism, two potential candidates for relaxing this requirement. Using these algorithms, a lightweight face emotion recognition application for Human-Computer Interaction with TurtleBot units was developed.
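The sketch below illustrates the transfer-learning mechanism the abstract refers to: training only a new classification head on top of a frozen pretrained backbone. The choice of ResNet-18, the seven-class label set, and the dummy batch are assumptions for illustration, not the paper’s actual network or preprocessing:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7  # e.g. six basic emotions plus neutral (assumed here)

# Transfer learning: start from ImageNet weights, freeze the convolutional
# backbone, and train only a small emotion-classification head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_EMOTIONS)  # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of preprocessed face crops.
faces = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_EMOTIONS, (8,))
optimizer.zero_grad()
loss = criterion(model(faces), labels)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.3f}")
```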
Exploiting sensing devices availability in AR/VR deployments to foster engagement
Nicholas Vretos, Petros Daras, Stylianos Asteriadis, and 7 more authors
Currently, in all augmented reality (AR) or virtual reality (VR) educational experiences, the evolution of the experience (game, exercise or other) and the assessment of the user’s performance are based on her/his (re)actions, which are continuously traced/sensed. In this paper, we propose the exploitation of the sensors available in AR/VR systems to enhance current AR/VR experiences, taking into account the users’ affect state that changes in real time. Adapting the difficulty level of the experience to the users’ affect state fosters their engagement, which is a crucial issue in educational environments, and prevents boredom and anxiety. The users’ cues are processed, enabling dynamic user profiling. The detection of the affect state based on different sensing inputs, since diverse sensing devices exist in different AR/VR systems, is investigated, and techniques that have undergone validation using state-of-the-art sensors are presented.
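As a toy illustration of the affect-driven difficulty adaptation this abstract describes, the rule below raises the difficulty when boredom is detected and lowers it when anxiety is detected; the function and its bounds are hypothetical, not the project’s actual controller:

```python
def adapt_difficulty(level: int, affect: str,
                     min_level: int = 1, max_level: int = 10) -> int:
    """Flow-inspired adaptation rule (illustrative only): boredom suggests
    the task is too easy, anxiety that it is too hard; in flow, the
    current level is kept."""
    if affect == "boredom":
        return min(level + 1, max_level)
    if affect == "anxiety":
        return max(level - 1, min_level)
    return level  # "flow": engagement is already optimal

print(adapt_difficulty(5, "boredom"))  # -> 6
```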