Esam Ghaleb

Affiliation - Institute for Logic, Language and Computation (ILLC) at the University of Amsterdam (UvA). Address - LAB42 (room L6.39), Science Park 900, 1012 WX, Amsterdam, The Netherlands.


Currently, I am a postdoctoral researcher in the Dialogue Modelling Group of the ILLC at UvA, where I work under the supervision of Prof. Raquel Fernández Rovira and collaborate with leading linguists and cognitive scientists within the Language in Interaction consortium.

Research Interests. I am fascinated by how people employ verbal and non-verbal cues in their interactions. This fascination has driven my research on human-centered AI, focusing on recognizing and modeling human behaviors such as emotions, activities, and dialogue coordination. I pay particular attention to non-verbal cues (speech prosody, facial expressions, and gestures) that are critical for human face-to-face communication and for communication technologies. My research aims to gain a deeper understanding of how humans employ these communication cues and to enable communication technologies to model such processes, which can significantly benefit assistive technologies and thereby improve people's experiences and lives.

Background. Born in Yemen, I graduated in the top 0.2% of high school graduates nationwide, which earned me a full scholarship for my bachelor's and master's degrees in computer engineering at Istanbul Technical University (ITU), where I graduated with honors from both programs. During my master's at ITU, I interned at the Karlsruhe Institute of Technology (KIT), where I conducted the research that formed the basis of my master's thesis. I completed my doctoral research in the Data Science and Advanced Computing department at Maastricht University (UM), focusing on bimodal emotion recognition through audio-visual cues. My postdoctoral work at UM extended explainable AI for emotion recognition beyond the face and voice as expressive modalities, shifting the focus to gestural expressions.

Throughout my career, I have consistently collaborated internationally, gaining experience at institutions in Turkey (ITU), Germany (KIT), and the Netherlands (UM and UvA). I have built a track record of interdisciplinary research, bridging the gap between AI and its applications in domains such as healthcare and e-learning through my participation in two EU projects at UM, in one of which I served as work package leader. In terms of mentorship, I was the daily supervisor of a Ph.D. candidate and two master's students during my postdoc at UM. I have taught AI courses in Computer Vision and Natural Language Processing at UM and UvA, respectively. My scholarly contributions have been cited over 315 times as of 2023, and I have served as a reviewer for leading AI journals and conferences.

News

May 27, 2024 Guest Lecture on Body Language Modeling in the Master of AI, NLP2 Course at the UvA
May 24, 2024 Seminar Presentation: Co-Speech Gesture Modeling at Tilburg University
May 1, 2024 Two papers accepted at CogSci 2024
Apr 25, 2024 Pre-print on leveraging speech to detect co-speech gestures in multimodal communication
Nov 11, 2023 Lecture on Dialogue Modeling Module in the Master of AI, NLP1 Course at the UvA

Selected Publications

  1. Leveraging Speech for Gesture Detection in Multimodal Communication
    Esam Ghaleb, Ilya Burenko, Marlou Rasenberg, and 7 more authors
    arXiv preprint arXiv:2404.14952, 2024
  2. Speakers align both their gestures and words not only to establish but also to maintain reference to create shared labels for novel objects in interaction
    Sho Akamine, Esam Ghaleb, Marlou Rasenberg, and 3 more authors
    In Proceedings of the Annual Meeting of the Cognitive Science Society 2024
  3. Analysing Cross-Speaker Convergence in Face-to-Face Dialogue through the Lens of Automatically Detected Shared Linguistic Constructions
    Esam Ghaleb, Marlou Rasenberg, Wim Pouw, and 4 more authors
    In Proceedings of the Annual Meeting of the Cognitive Science Society 2024
  4. Co-Speech Gesture Detection through Multi-phase Sequence Labeling (to appear)
    Esam Ghaleb, Ilya Burenko, Marlou Rasenberg, and 6 more authors
    In IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2024
  5. Joint Modelling of Audio-visual Cues Using Attention Mechanism for Emotion Recognition
    Esam Ghaleb, Jan Niehues, and Stylianos Asteriadis
    Multimedia Tools and Applications 2023
  6. Skeleton-Based Explainable Bodily Expressed Emotion Recognition Through Graph Convolutional Networks
    Esam Ghaleb, André Mertens, Stylianos Asteriadis, and 1 more author
    In 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021) 2021