{"title":"Video, talk and text: How do parties communicate coherently across modalities in live videostreams?","authors":"Scott Dutt, Sage Graham","doi":"10.1016/j.dcm.2023.100726","DOIUrl":null,"url":null,"abstract":"<div><p>This study explores how participants co-accomplish coherence in multi-modal conversation. We observed online watch parties, with an interest in the intertwining of text, talk and video. The data comes from two different communities on the live-streaming platform Twitch.tv. Each community consists of one live-streamer and their viewership. The live-streamers broadcast an audio-visual artifact, while simultaneously communicating with their viewers in real-time. The two were highly interactive, despite one (the streamer) speaking and reading, and the other (the chatters) typing and listening. The crisscross of modalities introduces challenges for the management and intelligibility of conversation. One coherence strategy involved a 4-stage process whereby streamers redirected viewers’ attention, and then initiated a collaborative activity. This 4–stage sequence illustrates a predictable structure that is potentially applicable to other digital & multimodal environments. Methodological challenges of digitally-mediated interaction are addressed, such as transcription of voice-and-text cross-modal conversation.</p></div>","PeriodicalId":46649,"journal":{"name":"Discourse Context & Media","volume":null,"pages":null},"PeriodicalIF":2.3000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Discourse Context & Media","FirstCategoryId":"98","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2211695823000594","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMMUNICATION","Score":null,"Total":0}
Abstract
This study explores how participants co-accomplish coherence in multimodal conversation. We observed online watch parties, with an interest in the intertwining of text, talk and video. The data come from two communities on the live-streaming platform Twitch.tv, each consisting of one live-streamer and their viewership. The live-streamers broadcast an audio-visual artifact while simultaneously communicating with their viewers in real time. The two parties were highly interactive, even though one (the streamer) spoke and read while the other (the chatters) typed and listened. This crisscross of modalities introduces challenges for the management and intelligibility of conversation. One coherence strategy involved a 4-stage process whereby streamers redirected viewers’ attention and then initiated a collaborative activity. This 4-stage sequence illustrates a predictable structure that is potentially applicable to other digital and multimodal environments. Methodological challenges of digitally mediated interaction are also addressed, such as the transcription of voice-and-text cross-modal conversation.