During my Ph.D., I worked on the Horizon 2020-funded project MaTHiSiS, an educational platform that provides a personalized learning experience based on multimodal emotion recognition from diverse cues. In MaTHiSiS, my research exploited a wide range of sensors to capture learners’ affective states; the project aimed to foster a personalized student experience by increasing engagement and preventing boredom and anxiety. My research in MaTHiSiS benefited from input from pedagogical experts, which helped us develop dynamic multimodal fusion tailored to different learner use cases, such as learners with autism spectrum disorder or severe disabilities. Please check the MaTHiSiS website for more information.
In MaTHiSiS, our group (RAI) led the AI work package. We researched frameworks for affective learning based on automatic human emotion recognition, and for personalizing the learning procedure using state-of-the-art machine learning. In this project, I worked on the following tasks:
- Developing a multimodal fusion framework for affective learning.
- Developing a framework to capture the correlation between students’ affective states and their interactions with the learning materials; this served as one of the input modalities of the multimodal fusion framework in MaTHiSiS.
- Developing a vision framework for recognizing basic emotions from facial expressions.
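To give a flavor of the fusion task above, the sketch below shows a minimal weighted late-fusion scheme: each modality (e.g. facial expressions, learner–material interactions) produces a probability distribution over affective states, and the distributions are combined by a reliability-weighted average. The emotion labels, modality names, and weights here are illustrative assumptions, not the actual MaTHiSiS implementation.

```python
import numpy as np

# Hypothetical affective-state labels; the real MaTHiSiS label set may differ.
EMOTIONS = ["engagement", "boredom", "anxiety"]

def late_fusion(modality_probs, weights=None):
    """Fuse per-modality probability vectors by a weighted average.

    modality_probs: dict mapping modality name -> probability vector
                    over EMOTIONS (each sums to 1).
    weights: optional dict mapping modality name -> reliability weight.
    Returns the fused probability vector over EMOTIONS.
    """
    names = list(modality_probs)
    probs = np.array([modality_probs[m] for m in names], dtype=float)
    if weights is None:
        w = np.ones(len(names))
    else:
        w = np.array([weights[m] for m in names], dtype=float)
    w = w / w.sum()              # normalize reliability weights
    fused = w @ probs            # weighted average of the distributions
    return fused / fused.sum()   # renormalize against rounding drift

# Example: the face modality is confident, the interaction data less so.
fused = late_fusion(
    {"face": [0.7, 0.2, 0.1], "interaction": [0.4, 0.5, 0.1]},
    weights={"face": 0.6, "interaction": 0.4},
)
print(EMOTIONS[int(np.argmax(fused))])  # dominant fused affective state
```

A dynamic variant could adapt the weights per learner or per session, which is in the spirit of the use-case-dependent fusion described above.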
This video shows a presentation of the MaTHiSiS project, given at the Bachelor Open Days (October 2017). It includes my work as well as that of my colleagues in the RAI group.