Esam Ghaleb
In brief: I study and model how verbal and non-verbal cues work together in a variety of human behaviors. Since September 2024 I have been a researcher in the Multimodal Language Department at the Max Planck Institute for Psycholinguistics, where I model multimodal communication for both human insight and machine applications.
Trained in computer science and engineering, I work across AI, cognitive science, psycholinguistics, psychology, and healthcare to computationally model human behavior for both fundamental and applied research. My work focuses on multimodal interaction, particularly in the context of dialogue. My research spans gesture generation, multimodal dialogue systems, and, earlier, affective computing, with applications in healthcare, human-computer interaction, and social robotics.
During my PhD and post-doc at Maastricht University, I developed explainable multimodal emotion-recognition techniques; at the Institute for Logic, Language & Computation (University of Amsterdam) I investigated linguistic–gestural alignment and automatic gesture segmentation in dialogues. My applied projects include two EU-funded studies (200+ participants) and a work package that combined clinicians’ expertise with machine intelligence for socio-economic contexts.
News
Jun 27, 2025 | Paper accepted at ICCV on Semantics-Aware Co-Speech Gesture Generation!
Jun 23, 2025 | Plenary talk and workshop on multimodal interaction at a summer school, together with Raquel Fernández.
May 16, 2025 | Two papers accepted at the Annual Meeting of the Association for Computational Linguistics (ACL 2025).
Dec 12, 2024 | I gave a talk on “Understanding and Modelling Multimodal Dialogue Coordination” at the Max Planck Institute for Psycholinguistics.
Oct 21, 2024 | I gave a talk on “Learning Representations in Dialogue through Contrastive Learning: An Intrinsic Evaluation” at the UvA SignLab.