{"title":"Eventi dal vivo e accessibilità: uno studio di caso sul respeaking interlinguistico","authors":"Annalisa Sandrelli","doi":"10.1285/I22390359V43P145","DOIUrl":null,"url":null,"abstract":"Respeaking is commonly used to subtitle live TV programmes for the benefit of deaf and hard-of-hearing viewers (SDH subtitles): it is based on the interaction between the respeaker's competences and a speech recognition software trained to recognise his/her voice. In some countries this technique is also used to subtitle live events, such as conferences, debates, and so on. In recent years there have been attempts to use the interlingual variant of the technique, thus making it possible to translate a speech from one language to another (as you do in simultaneous interpreting) and obtain real-time subtitles. The respeaker listens to the incoming message and interprets it simultaneously to the software, adding punctuation verbally by means of voice commands; then, the respeaker's audio input is processed by the software, which produces the transcript; the respeakerchecks the output and corrects any errors, and the subtitles are finally made available to the audience. After a short overview of the experimental research carried out in this field and the available theoretical models, the present paper assesses the accuracy of the interlingual subtitles (English into Italian) produced via IRSP in a live event (a symposium). The analysis outlines the specific difficulties of this technique when applied to a live event, with the aim of producing useful data and suggestions for the development of practice guidelines.","PeriodicalId":30935,"journal":{"name":"Lingue e Linguaggi","volume":"65 4 1","pages":"145-168"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Lingue e Linguaggi","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1285/I22390359V43P145","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Respeaking is commonly used to subtitle live TV programmes for the benefit of deaf and hard-of-hearing viewers (SDH subtitles): it is based on the interaction between the respeaker's competences and speech recognition software trained to recognise his/her voice. In some countries this technique is also used to subtitle live events, such as conferences and debates. In recent years there have been attempts to use the interlingual variant of the technique, known as interlingual respeaking (IRSP), which makes it possible to translate a speech from one language into another (as in simultaneous interpreting) and obtain real-time subtitles. The respeaker listens to the incoming message and interprets it simultaneously, dictating to the software and adding punctuation verbally by means of voice commands; the respeaker's audio input is then processed by the software, which produces the transcript; the respeaker checks the output and corrects any errors, and the subtitles are finally made available to the audience. After a short overview of the experimental research carried out in this field and the available theoretical models, the present paper assesses the accuracy of the interlingual subtitles (English into Italian) produced via IRSP at a live event (a symposium). The analysis outlines the specific difficulties of this technique when applied to a live event, with the aim of producing useful data and suggestions for the development of practice guidelines.
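The workflow described above (dictation to a speaker-trained ASR engine, spoken punctuation commands, a correction pass, then release of the subtitle) can be illustrated with a minimal sketch. This is not the software used in the study: the names recognise_chunk, apply_voice_commands and Subtitle are hypothetical placeholders, and the recognition step is stubbed out, since the paper does not specify any particular engine.

```python
# Minimal illustrative sketch of the interlingual respeaking (IRSP) cycle
# described in the abstract. All names are hypothetical; the ASR step is a stub.
import re
from dataclasses import dataclass

# Spoken punctuation commands dictated by the respeaker alongside the translation
# (Italian examples, since the target language in the case study is Italian).
VOICE_COMMANDS = {"punto interrogativo": "?", "virgola": ",", "punto": "."}


def recognise_chunk(audio_chunk: bytes) -> str:
    """Stand-in for the speaker-dependent speech recognition engine: in a real
    setup this would return the transcript of the respeaker's dictation."""
    return "questa è una dimostrazione virgola non un sistema reale punto"


def apply_voice_commands(raw: str) -> str:
    """Replace spoken punctuation commands with the corresponding symbols."""
    text = raw
    # Longer commands first, so "punto interrogativo" is not split by "punto".
    for command in sorted(VOICE_COMMANDS, key=len, reverse=True):
        text = re.sub(rf"\s*\b{command}\b", VOICE_COMMANDS[command], text)
    return text.strip()


@dataclass
class Subtitle:
    text: str
    corrected: bool = False


def respeak(audio_chunk: bytes, live_correction: bool = True) -> Subtitle:
    """One cycle: recognise the dictation, apply punctuation commands,
    optionally correct the output, then release the subtitle to the audience."""
    transcript = apply_voice_commands(recognise_chunk(audio_chunk))
    if live_correction:
        # In practice the respeaker (or a second operator) edits recognition
        # errors here before the subtitle is pushed to the screens.
        transcript = transcript.capitalize()
        return Subtitle(transcript, corrected=True)
    return Subtitle(transcript)


if __name__ == "__main__":
    print(respeak(b"").text)  # -> "Questa è una dimostrazione, non un sistema reale."
```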