Program
LingCologne2025 takes place exclusively on-site at the University of Cologne, IBW building, Herbert-Lewin-Straße 2, 50931 Köln. All talks and poster sessions will be held there.
This is the preliminary program; minor adjustments may be made in the coming days.
Wednesday, 21st May 2025
10.00 a.m. – 12.30 p.m. | Roman Poryadin
International Sign Workshop
2.00 p.m. – 6.00 p.m. | Patrick Rohrer (Donders Centre for Cognition | Radboud University)
Satellite Workshop “M3D labelling system”
Thursday, 22nd May 2025
8.30 a.m. – 9.30 a.m. | Registration
9.30 a.m. – 9.45 a.m. | Opening address & Poster flash talks
9.45 a.m. – 10.45 a.m. | Rod Gardner
Recipient Actions during Extended Multi-unit Turns
In longer ‘project’ multi-unit turns (MUTs), such as stories, complaints or descriptions, recipients are constrained in the kinds of actions they engage in. Their work is to align or affiliate with the stance that the speaker of the MUT is taking in their project. Some of their vocal actions have been well documented in the literature, such as continuers, acknowledgement tokens, assessments, newsmarkers and laughter. More recently, increasing attention has been paid to embodied actions such as gaze, gesture, posture and facial expressions, including micro-gestures such as eyebrow flashes and blinks.
The data for this talk are from the Australian Research Council funded project Conversational Interaction in Aboriginal and Remote Australia. We use an hour from each of four multiparty conversations among English speakers. The extracts are from a chapter, ‘Recipiency Actions during Extended MUTs’, in a book in preparation on MUTs.
The presentation will not deal extensively with response tokens and assessments, but will focus instead on three facets of recipients’ behaviour. First, I will present examples of recipient vocal actions that disrupt the progressivity of an MUT to a greater or lesser degree, such as some questions, word searches, jokes or heckles. Second, some recipient embodied actions will be shown. In our data, recipients rarely gesture (apart from self-attending grooming gestures), gaze is typically directed towards the speaker, and posture generally does not shift. Small facial gestures, on the other hand, are used more extensively: facial expressions (smiles, ‘serious’ face, ‘concerned’ face, ‘stupefied amazement’ face), head nods and head shakes, and eyebrow raises and flashes. All of these, along with other resources, notably prosody and lexis, are important for showing epistemic alignment and affective affiliation. Third, recipients sometimes engage in parallel activities, including manipulating objects, eating and drinking, or patting a dog. These rarely disrupt the progressivity of an extended MUT (though they can).
Recipients during MUTs perform a wide range of actions, and how these actions are delivered is highly consequential to the trajectory of an extended MUT.
10.45 a.m. – 11.30 a.m. | Coffee break
11.30 a.m. – 12.30 p.m. | Nivja de Jong
Fluency and interaction in second language (L2) teaching and assessment
For second language (L2) learners, interactional phenomena like backchannels, filled pauses, and turn-taking are not self-evident. Even when such phenomena are similar across learners’ L1 and L2, explicit attention to them benefits L2 interactional communication, just as explicit attention to other aspects of form (morphosyntax) is beneficial. However, in many L2 classrooms, speaking and interaction are neglected as pedagogical targets. In this presentation, I will discuss issues and challenges in teaching and assessing fluency and interactional phenomena in speaking. I will conclude with suggestions for better integrating fluency and interactional phenomena into teaching and assessment practices.
12.30 p.m. – 2.00 p.m. | Lunch break
2.00 p.m. – 3.00 p.m. | Kristian Skedsmo
Pragmatic universals and differing affordances – Interactive repair in Norwegian spoken and signed conversation
Other-initiations of self-repair are recurring practices which provide a front-row view of the interactional establishment and maintenance of mutual understanding. Although languages differ in how they achieve these practices, there seem to be common patterns in the distribution of formats for other-initiating self-repair. The current study compares interactive repair in Norwegian Sign Language and spoken Norwegian. The spoken language data come from the Norwegian BigBrother Corpus, while the signed language data come from my own studies of Norwegian Sign Language. The findings indicate that although there are many similarities, practices diverge due to the respective affordances of the visual and auditive modalities. For instance, open, interjection-based repair initiations leading to repetition are notably more frequent in spoken Norwegian than in Norwegian Sign Language. The presentation will conclude with future directions and how these findings can be applied to repair initiation among L2 speakers and interpreters.
3.00 p.m. – 5.00 p.m. | Poster session I
5.00 p.m. – 6.00 p.m. | Martin Pickering & Simon Garrod
Alignment under different modes of communication
In Pickering and Garrod (2021), we proposed that interlocutors in synchronous dialogue align their linguistic representations and situation models. In addition, they use feedback to determine whether they believe they are aligned or not (meta-representations of alignment). In this talk, we extend the account of alignment and its meta-representation to other modes of communication. The other dominant mode is asynchronous monologue, in which authors compose a written or spoken text in isolation from their audience and then make a final version available (by publication). Alignment between author and audience is mediated by the form and meaning of the text and takes place when it is interpreted. The audience comes to believe that they are aligned with the author as a consequence of “close reading” of the text, which they use in lieu of feedback, and the author uses audience design to meta-represent potential alignment. We then consider hybrid modes, in particular asynchronous dialogue (letters, emails, SMS, half-duplex “walkie-talkies”), in which the respondent cannot provide simultaneous feedback. Thus, meta-representation of alignment occurs intermittently, only after the respondent takes over. We argue that synchronous dialogue allows interlocutors to directly perceive alignment, whereas asynchronous dialogue and monologue require inferences by both the author and the audience, and we interpret psycholinguistic findings (such as those from referential communication games) in terms of this contrast.
Friday, 23rd May 2025
9.30 a.m. – 10.30 a.m. | Eve Clark
Conversational feedback and language acquisition
Children acquire language in conversation: they learn to take turns and make contributions, and they attend to what adults say. Adults, in turn, often check on what children mean, and this results in extensive feedback on the different kinds of errors children make in the course of acquisition. The feedback takes the form of repairs offered as adults check on what their children meant. In this talk, I explore the kinds of repairs offered, along with evidence of children’s attention and uptake. Adult repairs both target children’s errors (negative feedback) and offer conventional forms in place of those errors (positive feedback).
10.30 a.m. – 11.15 a.m. | Coffee break
11.15 a.m. – 12.15 p.m. | Johanna Mesch
Receiver engagement in signed and tactile conversations
Backchannel responses convey to the active signer that the perceiver is actively engaged and attentive to the conversation, and they allow the active signer to feel acknowledged. They are therefore fundamental to conversation in any language. This presentation addresses backchannel responses in two different dyadic conversation settings: visual sign language conversations (n=16) and tactile sign language conversations (n=9), both in Swedish Sign Language. Turn-taking patterns differ between the two types of conversation because of the visual versus tactile modality of the languages.
Visual signed conversations make use of both manual backchannel responses, such as lexical signs, palm-up and pointing, and non-manual backchannel responses, such as nodding, head-shaking, smiling, changes in posture, and facial expressions. Tactile signed conversations employ only manual strategies, namely repetition, holds, palm-up, pointing, and haptic sensations. This presentation highlights both pragmatic differences and similarities in backchannel responses across these modalities.
12.15 p.m. – 1.30 p.m. | Lunch break
1.30 p.m. – 2.30 p.m. | JP de Ruiter
Are LLMs Good Models for Verbal Dialogue, and Why Not?
Large Language Models (“AI”) are everywhere now. From couples therapy to curing cancer, from theoretical physics to Turing tests, LLMs seem to be able to do it all. It therefore sounds reasonable to assume that they can hold a decent conversation as well. We have therefore looked at a few basic skills that come naturally to human interactants but have not yet been tested in LLMs. We found that while LLMs can be very convincing when it comes to content, they still have severe limitations in matters related to turn-taking. These limitations are not only challenges to be solved; they can also improve our understanding of the nature of conversation.
2.30 p.m. – 4.30 p.m. | Poster session II
5.30 p.m. | Best Poster Awards & Closing
English – International Sign interpreting
Oliver Pouliot, Leyre Suijan Casado, Tina Vrbanic