Currently, I am a postdoctoral researcher in the Dialogue Modelling Group of the ILLC at UvA, where I work under the supervision of Prof. Raquel Fernández Rovira and collaborate with leading linguists and cognitive scientists within the Language in Interaction consortium.
Research Interest. I am fascinated by how people employ verbal and non-verbal cues in their interactions. This fascination has driven my research on human-centered AI, which focuses on recognizing and modeling human behaviors such as emotions, activities, and dialogue coordination. I pay particular attention to non-verbal cues—speech prosody, facial expressions, and gestures—which are critical to face-to-face communication and to communication technologies. My research aims to gain a deeper understanding of how humans employ these communication cues and to build communication technologies that embody such processes, which can significantly benefit assistive technologies and thereby improve people’s experiences and lives.
Background. Born in Yemen, I graduated in the top 0.2% of high school graduates nationwide, which earned me a full scholarship for my bachelor’s and master’s degrees in computer engineering at Istanbul Technical University (ITU), where I graduated with honors from both programs. During my master’s at ITU, I interned at the Karlsruhe Institute of Technology (KIT), where I conducted the research that formed the basis of my master’s thesis. I completed my doctoral research in the Data Science and Advanced Computing department at Maastricht University (UM), focusing on bimodal emotion recognition through audio-visual cues. My postdoctoral work at UM extended explainable AI for emotion recognition beyond face and voice as expressive modalities, shifting the focus to gestural expressions.
Throughout my career, I have consistently collaborated internationally, gaining experience at institutions in Turkey (ITU), Germany (KIT), and the Netherlands (UM and UvA). I have built a track record of interdisciplinary research, bridging AI and its application in domains such as healthcare and e-learning through my participation in two EU projects at UM, serving as work package leader in one of them. In terms of mentorship, I acted as daily supervisor of a Ph.D. candidate and two master’s students during my postdoc at UM. I have taught AI courses in computer vision and natural language processing at UM and UvA, respectively. My scholarly contributions have been cited more than 315 times as of 2023, and I have served as a reviewer for leading AI journals and conferences.
- Nov 11, 2023: Lecture on the Dialogue Modeling module in the NLP1 course, Master of AI, UvA
- Oct 24, 2023: Our paper on multi-phase co-speech gesture detection accepted at WACV!
- Sep 7, 2023: Guest lecture on machine learning for multimodal behavior at an upcoming summer school program
- Jul 28, 2023: Our research on linguistic alignment to be presented at a multidisciplinary workshop in France
- Mar 28, 2023: Teaching Natural Language Models and Interfaces at UvA
- Co-Speech Gesture Detection through Multi-phase Sequence Labeling. To appear in the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024.
- Joint Modelling of Audio-visual Cues Using Attention Mechanism for Emotion Recognition. Multimedia Tools and Applications, 2023.
- Skeleton-Based Explainable Bodily Expressed Emotion Recognition Through Graph Convolutional Networks. In the 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), 2021.