Multimodality as a Design Feature of Language: Opportunities and Challenges for the Future of Multimodal Linguistics

Speaker: Dr Asli Özyürek (Max Planck Institute for Psycholinguistics)

About Dr Asli Özyürek: In August 2022, Dr Asli Özyürek was appointed Director of the Multimodal Language Department at the Max Planck Institute for Psycholinguistics. Dr Özyürek is also a Professor at the Donders Institute for Brain, Cognition and Behaviour (Faculty of Social Sciences) and an affiliated researcher at the Centre for Language Studies at Radboud University.

Abstract: One of the unique aspects of human language is that in face-to-face communication it is universally multimodal (e.g., Holler and Levinson, 2019; Perniss, 2018). All hearing and deaf communities around the world use vocal and/or visual modalities (e.g., hands, body, face) with different affordances for semiotic and linguistic expression (e.g., Goldin-Meadow and Brentari, 2015; Vigliocco et al., 2014; Özyürek and Woll, 2019). Hearing communities use both vocal and visual modalities, combining speech and gesture. Deaf communities can use the visual modality for all aspects of linguistic expression in sign language. Visual articulators in both co-speech gesture and sign, unlike speech, have unique affordances for visible iconic, indexical (e.g., pointing) and simultaneous representations due to the use of multiple articulators. Such expressions have been considered in traditional linguistics as being “external” to the language system. I will, however, argue and present evidence that both spoken and sign languages combine such modality-specific expressions with arbitrary, categorical and sequential expressions in their language structures, in cross-linguistically different ways across diverse spoken and sign languages (e.g., Slonimska, Özyürek and Capirci, 2021; Özyürek, 2018, 2021). Furthermore, these expressions modulate language processing and language acquisition (e.g., Furman, Küntay and Özyürek, 2014; Karadoller et al., 2022) and enable languages to emerge anew in deaf communities with no prior language model, suggesting that they are an integral property of a unified multimodal language system. I will end my talk with a discussion of how such a multimodal (but not unimodal) view can explain the dynamic, adaptive and flexible aspects of our language system, enabling it to optimally bridge human biological, cognitive and learning constraints with the interactive, culturally varying communicative requirements of face-to-face contexts. I will also discuss the opportunities and challenges of using machine learning tools on the one hand and VR on the other for enhancing our understanding of the multimodal nature of the human language faculty.