Muhammad Umair, Julia Beret Mertens, Saul Albert, J. P. de Ruiter
Researchers studying human interaction, such as conversation analysts, psychologists, and linguists, all rely on detailed transcriptions of language use. Ideally, these should include so-called paralinguistic features of talk, such as overlaps, prosody, and intonation, as they convey important information. However, creating conversational transcripts that include these features by hand requires substantial amounts of time from trained transcribers. There are currently no Speech-to-Text (STT) systems that are able to integrate these features into the generated transcript. To reduce the resources needed to create detailed conversation transcripts that include representations of paralinguistic features, we developed a program called GailBot. GailBot combines STT services with plugins to automatically generate first drafts of transcripts that largely follow the transcription standards common in the field of Conversation Analysis. It also enables researchers to add new plugins to transcribe additional features, or to improve the plugins it currently uses. We describe GailBot’s architecture and its use of computational heuristics and machine learning. We also evaluate its output against transcripts produced by both human transcribers and comparable automated transcription systems. We argue that, despite its limitations, GailBot represents a substantial improvement over existing dialogue transcription software.
GailBot: An automatic transcription system for Conversation Analysis. Dialogue and Discourse, pp. 63–95. doi:10.5210/dad.2022.103
With the aim of designing a spoken dialogue system that can adapt to the user's communication idiosyncrasies, we investigate whether insights about the use of communication styles in human-human interaction carry over to human-computer interaction. An extensive literature review demonstrates that communication styles play an important role in human communication. Using a multi-lingual data set, we show that there is a significant correlation between the communication style of the system and the preceding communication style of the user. We therefore present two components that extend the standard architecture of spoken dialogue systems: 1) a communication style classifier that automatically identifies the user's communication style, and 2) a communication style selection module that selects an appropriate system communication style. We consider the communication styles elaborateness and indirectness, as they have been shown to influence the user's satisfaction and the user's perception of a dialogue. We present a neural classification approach based on supervised learning for each task. The neural networks are trained and evaluated with features that can be automatically derived during an ongoing interaction in any spoken dialogue system. Both components yield solid results and outperform a majority-class baseline.
When to Say What and How: Adapting the Elaborateness and Indirectness of Spoken Dialogue Systems. Juliana Miehle, W. Minker, Stefan Ultes. Dialogue and Discourse, pp. 1–40. doi:10.5210/dad.2022.101
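The majority-class baseline mentioned above is straightforward to reproduce. A minimal sketch follows; the labels and feature dictionary are hypothetical illustrations, not the paper's data (its classes are communication styles such as degrees of elaborateness or indirectness):

```python
from collections import Counter

def majority_class_baseline(train_labels):
    """Build a classifier that always predicts the most frequent training label."""
    majority = Counter(train_labels).most_common(1)[0][0]
    return lambda _features: majority

# Hypothetical training labels for illustration only.
train_labels = ["elaborate", "concise", "elaborate", "elaborate"]
predict = majority_class_baseline(train_labels)
print(predict({"turn_length": 12}))  # always "elaborate", whatever the input
```

Such a baseline sets the floor any learned classifier must beat: its accuracy equals the relative frequency of the largest class.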
Yone, a Japanese sentence-final particle (SFP), is frequently used in conversation, and some of its functions overlap with those of ne, another SFP. However, their differences have received little discussion. This study argues that the two particles express a distinction in the speaker's state of mind: yone indicates that an idea has been on the speaker's mind, while ne suggests a thought that has just emerged into the speaker's awareness. Naturally occurring conversation data provide evidence for this claim. The results show that the particles reflect the speaker's choice in presenting his or her state of awareness.
An Analysis of Japanese Sentence-final Particle Yone: Compare Yone and Ne in Response. Jun Xu. Dialogue and Discourse, pp. 174–191. doi:10.5210/dad.2021.206
Current work on automatic coreference resolution has focused on the OntoNotes benchmark dataset, due to both its size and its consistency. However, many aspects of the OntoNotes annotation scheme are not well understood by NLP practitioners, including the treatment of generic NPs, noun modifiers, indefinite anaphora, predication, and more. These often lead to counterintuitive claims, results, and system behaviors. This opinion piece aims to highlight some of the problems with the OntoNotes rendition of coreference, and to propose a way forward relying on three principles: 1. a focus on semantics, not morphosyntax; 2. cross-linguistic generalizability; and 3. a separation of identity and scope, which can resolve old problems involving temporal and modal domain consistency.
Can we Fix the Scope for Coreference? Problems and Solutions for Benchmarks beyond OntoNotes. Amir Zeldes. Dialogue and Discourse, pp. 41–62. doi:10.5210/dad.2022.102
I. Ivanova, H. Branigan, Janet McLean, Albert Costa, M. Pickering
Two picture-matching-game experiments investigated whether lexical-referential alignment to non-native speakers is enhanced by a desire to aid communicative success (by saying something the conversation partner can certainly understand), a form of audience design. In Experiment 1, a group of native speakers of British English that was not given evidence of their conversation partners’ picture-matching performance showed more alignment to non-native than to native speakers, while another group that was given such evidence aligned equivalently to the two types of speaker. Experiment 2, conducted with speakers of Castilian Spanish, replicated the greater alignment to non-native than to native speakers without feedback. However, Experiment 2 also showed that the confederate's production of grammatical errors produced no additional increase in alignment, even though making errors suggests lower communicative competence. We suggest that this pattern is consistent with another collaborative strategy, the desire to model correct usage. Together, these results support a role for audience design in alignment to non-native speakers in structured task-based dialogue, but one that is strategically deployed only when deemed necessary.
Lexical Alignment to Non-native Speakers. Dialogue and Discourse, pp. 145–173. doi:10.5210/dad.2021.205
N. Sangers, J. Evers-Vermeul, T. Sanders, H. Hoeken
While the use of narrative elements in educational texts seems to be an effective means of enhancing students’ engagement and comprehension, we know little about how and to what extent these elements are used in present-day educational practice. In this quantitative corpus-based analysis, we chart how and when narrative elements are used in current Dutch educational texts (N=999). While educational texts have traditionally been considered prime exemplars of expository texts, we show that the distinction between the expository and narrative genres is not that strict in the educational domain: prototypical narrative elements – particularized events, experiencing characters, and landscapes of consciousness – occur in 45% of the corpus’ texts. Their distribution varies between school subjects: while specific events, specific people, and their experiences are often at the heart of the to-be-learned information in history texts, narrativity is less present in the educational content of biology and geography texts. Instead, publishers employ narrative-like strategies, such as the addition of fictitious characters and representative entities, to make these texts more concrete and imaginable.
Narrative Elements in Expository Texts: A Corpus Study of Educational Textbooks. Dialogue and Discourse, pp. 115–144. doi:10.5210/dad.2021.204
Learning suitable and well-performing dialogue behaviour in statistical spoken dialogue systems has been a focus of research for many years. While most reinforcement-learning-based work employs an objective measure like task success to model the reward signal, we propose to use a reward signal based on user satisfaction. We propose a novel estimator and show that it outperforms all previous estimators while learning temporal dependencies implicitly. We show in simulated experiments that a live user satisfaction estimation model may be applied, resulting in higher estimated satisfaction whilst achieving similar success rates. Moreover, we show that a satisfaction estimation model trained on one domain may be applied in many other domains that cover a similar task. We verify our findings by employing the model in one of the domains for learning a policy from real users, and compare its performance to policies using user satisfaction and task success acquired directly from the users as reward.
User Satisfaction Reward Estimation Across Domains: Domain-independent Dialogue Policy Learning. Stefan Ultes, Wolfgang Maier. Dialogue and Discourse, pp. 81–114. doi:10.5210/dad.2021.203
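The substitution described above, replacing a binary task-success bonus with an estimated satisfaction score in the reward signal, can be sketched as follows. The function names, penalty, and scaling constants are illustrative assumptions, not the paper's implementation:

```python
def task_success_reward(n_turns, success, turn_penalty=-1, success_bonus=20):
    """Classic objective reward: a per-turn penalty plus a bonus on task success."""
    return turn_penalty * n_turns + (success_bonus if success else 0)

def satisfaction_reward(n_turns, estimated_satisfaction, turn_penalty=-1, scale=4):
    """Alternative reward: the binary success bonus is replaced by a term
    derived from an (estimated) user satisfaction score, e.g. on a 1-5 scale."""
    return turn_penalty * n_turns + scale * estimated_satisfaction

# A 10-turn dialogue rated 5/5 by a satisfaction estimator:
print(satisfaction_reward(10, 5))  # -> 10, matching a successful task-based reward
```

The point of the graded signal is that the policy learner receives useful feedback even for dialogues that technically succeed but frustrate the user, or vice versa.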
Yaman Kumar Singla, Swapnil Parekh, Somesh Singh, J. Li, R. Shah, Changyou Chen
Deep-learning-based Automatic Essay Scoring (AES) systems are being actively used in various high-stakes applications in education and testing. However, little research has been devoted to understanding and interpreting the black-box nature of deep-learning-based scoring algorithms. While previous studies indicate that scoring models can be easily fooled, in this paper we explore the reason behind their surprising adversarial brittleness. We utilize recent advances in interpretability to find the extent to which features such as coherence, content, vocabulary, and relevance are important for automated scoring mechanisms. We use this to investigate the oversensitivity (i.e., a large change in output score with a small change in input essay content) and overstability (i.e., little change in output scores with large changes in input essay content) of AES systems. Our results indicate that autoscoring models, despite being trained as “end-to-end” models with rich contextual embeddings such as BERT, behave like bag-of-words models. A few words determine the essay score without the requirement of any context, making the models largely overstable. This is in stark contrast to recent probing studies on pre-trained representation learning models, which show that rich linguistic features such as parts of speech and morphology are encoded by them. Further, we find that the models have learnt dataset biases, making them oversensitive: the presence of a few words that co-occur frequently with a certain score class makes the model associate the essay sample with that score. This causes score changes in ∼95% of samples with the addition of only a few words. To deal with these issues, we propose detection-based protection models that can detect oversensitivity and samples causing overstability with high accuracy. We find that our proposed models are able to detect unusual attribution patterns and flag adversarial samples successfully.
Automatic Essay Scoring Systems Are Both Overstable And Oversensitive: Explaining Why And Proposing Defenses. Dialogue and Discourse, pp. 1–33. doi:10.5210/dad.2023.101
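An overstability check of the kind described can be probed generically against any scoring function: delete most of an essay's words and compare the two scores. The scorer below is a toy vocabulary-size stand-in for illustration only, not a trained AES model:

```python
import random

def overstability_probe(score_fn, essay, keep_ratio=0.2, seed=0):
    """Return (original score, score after deleting most of the words).

    A large input change paired with little score change signals overstability."""
    words = essay.split()
    rng = random.Random(seed)
    n_keep = max(1, int(len(words) * keep_ratio))
    kept = sorted(rng.sample(range(len(words)), n_keep))
    reduced = " ".join(words[i] for i in kept)
    return score_fn(essay), score_fn(reduced)

# Hypothetical scorer: score grows with vocabulary size, capped at 6.
def toy_scorer(text):
    return min(6, len(set(text.lower().split())) // 4)

essay = ("good essays develop a clear argument with varied vocabulary "
         "precise evidence and a coherent structure throughout")
full_score, reduced_score = overstability_probe(toy_scorer, essay)
```

A genuinely context-sensitive scorer should drop sharply under this probe; a bag-of-keywords model, as the paper argues BERT-based AES models effectively are, often barely moves when its trigger words happen to survive the deletion.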
Utku Norman, Tanvi Dinkar, Barbara Bruno, C. Clavel
A dialogue is successful when there is alignment between the speakers at different linguistic levels. In this work, we consider the dialogue occurring between interlocutors engaged in a collaborative learning task, where they are evaluated on how well they performed and how much they learnt. Our main contribution is to propose new automatic measures to study alignment, focusing on lexical alignment and on a new alignment context that we introduce, termed behavioural alignment (when an instruction given by one interlocutor is followed with concrete actions in a physical environment by another). We thus propose methodologies to create a link between what was said and what was done as a consequence. To do so, we focus on expressions related to the task in the situated activity; these expressions are minimally required by the interlocutors to make progress in the task. We then observe how these local alignment contexts build up to a dialogue-level phenomenon: success in the task. What distinguishes our approach from other work is the treatment of alignment as a procedure that occurs in stages. Since we utilise a dataset of spontaneous speech dialogues elicited from children, a second contribution of our work is to study how spontaneous speech phenomena (such as when interlocutors say "uh" or "oh") are used in the process of alignment. Lastly, we make the dataset public to support the study of alignment in educational dialogues. Our results show that all teams lexically and behaviourally align to some degree regardless of their performance and learning, and our measures capture that teams that did not succeed in the task were simply slower to collaborate: teams that performed better were faster to align. Furthermore, our methodology captures a productive, collaborative period that includes the time when the interlocutors came up with their best solutions. We also find that well-performing teams verbalise the marker "oh" more when they are behaviourally aligned, compared to other times in the dialogue, showing that this marker is an important cue in alignment. To the best of our knowledge, we are the first to study the role of "oh" as an information management marker in a behavioural context (i.e., in connection to actions taken in a physical environment) rather than only a verbal one. Our measures contribute to research in the field of educational dialogue and to the intersection of dialogue and collaborative learning research.
Studying Alignment in a Collaborative Learning Activity via Automatic Methods: The Link Between What We Say and Do. Dialogue and Discourse, pp. 1–48. doi:10.5210/dad.2022.201
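A minimal version of a lexical alignment measure over task-related expressions might look like the sketch below. The expression list and turns are invented examples; the paper derives its expression inventory from the situated activity itself:

```python
def lexical_alignment(turns_a, turns_b, task_expressions):
    """Fraction of task-related expressions that both interlocutors have used."""
    def used(turns):
        # Expressions that appear in at least one of this speaker's turns.
        return {e for e in task_expressions if any(e in t.lower() for t in turns)}
    if not task_expressions:
        return 0.0
    return len(used(turns_a) & used(turns_b)) / len(task_expressions)

turns_a = ["Put the red block on the left first", "oh, now move the wire"]
turns_b = ["The red block? okay", "uh, the wire goes over here"]
score = lexical_alignment(turns_a, turns_b, ["red block", "wire", "battery"])
print(round(score, 2))  # both speakers used 2 of the 3 expressions -> 0.67
```

Behavioural alignment, as the paper defines it, would extend this by matching each expression not against the partner's words but against logged actions in the physical environment.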
Mental models or situation models include representations of people, but much of the literature about such models focuses on the representation of eventualities (events, states, and processes) or (small-scale) situations. In the well-known event-indexing model of Zwaan, Langston, and Graesser (1995), for example, protagonists are just one of five dimensions on which situation models are indexed; they are not given any additional special status. Consideration of longer narratives, and of the ways in which readers or listeners relate to them, suggests that people have a more central status in the way we think about texts, and hence in discourse representations. Indeed, such considerations suggest that discourse representations are organised around (the representations of) central characters. The paper develops the idea of the centrality of main characters in representations of longer texts by considering, among other things, the way information is presented in novels, with L’Éducation Sentimentale by Gustave Flaubert as a case study. Conclusions are also drawn about the role of representations of people in the representation of other types of text.
Opinion Piece: How People Structure Representations of Discourse. Alan Garnham. Dialogue and Discourse, pp. 1–20. doi:10.5210/dad.2021.101