Using computerised comparative judgement to assess translation

Across Languages and Cultures · Language & Linguistics · Impact Factor: 1.0 · CAS Tier 3 (Literature) · Pub Date: 2022-05-09 · DOI: 10.1556/084.2022.00001
Chao Han, Bei Hu, Qin Fan, Jing Duan, Xi Li
{"title":"Using computerised comparative judgement to assess translation","authors":"Chao Han, Bei Hu, Qin Fan, Jing Duan, Xi Li","doi":"10.1556/084.2022.00001","DOIUrl":null,"url":null,"abstract":"\n Translation assessment represents a productive line of research in Translation Studies. An array of methods has been trialled to assess translation quality, ranging from intuitive assessment to error analysis and from rubric scoring to item-based assessment. In this article, we introduce a lesser-known approach to translation assessment called comparative judgement. Rooted in psychophysical analysis, comparative judgement grounds itself on the assumption that humans tend to be more accurate in making relative judgements than in making absolute judgements. We conducted an experiment, as both a methodological exploration and a feasibility investigation, in which novice and experienced judges were recruited to assess English-Chinese translation, using a computerised comparative judgement platform. The collected data were analysed to shed light on the validity and reliability of assessment results and the judges’ perceptions. Our analysis shows that (1) overall, comparative judgement produced valid measures and facilitated judgement reliability, although such results seemed to be affected by translation directionality and judges’ experience, and (2) the judges were generally confident about their decisions, despite some emergent factors undermining the validity of their decision making. Finally, we discuss the use of comparative judgement as a possible method in translation assessment and its implications for future practice and research.","PeriodicalId":44202,"journal":{"name":"Across Languages and Cultures","volume":" ","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2022-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Across Languages and Cultures","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1556/084.2022.00001","RegionNum":3,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 0

Abstract

Translation assessment represents a productive line of research in Translation Studies. An array of methods has been trialled to assess translation quality, ranging from intuitive assessment to error analysis and from rubric scoring to item-based assessment. In this article, we introduce a lesser-known approach to translation assessment called comparative judgement. Rooted in psychophysical analysis, comparative judgement grounds itself on the assumption that humans tend to be more accurate in making relative judgements than in making absolute judgements. We conducted an experiment, as both a methodological exploration and a feasibility investigation, in which novice and experienced judges were recruited to assess English-Chinese translation, using a computerised comparative judgement platform. The collected data were analysed to shed light on the validity and reliability of assessment results and the judges’ perceptions. Our analysis shows that (1) overall, comparative judgement produced valid measures and facilitated judgement reliability, although such results seemed to be affected by translation directionality and judges’ experience, and (2) the judges were generally confident about their decisions, despite some emergent factors undermining the validity of their decision making. Finally, we discuss the use of comparative judgement as a possible method in translation assessment and its implications for future practice and research.
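For readers unfamiliar with how comparative judgement turns pairwise decisions into a quality scale, the sketch below is a minimal illustration, not the authors' platform: it assumes a Bradley-Terry-style model, which computerised comparative judgement tools commonly use to derive scores from relative judgements, and fits it to a handful of hypothetical judgements (the translation labels T1–T3 and the data are invented) with the classic minorise-maximise updates.

```python
import math
from collections import defaultdict

def bradley_terry(judgements, n_iter=200):
    """Fit Bradley-Terry strengths from (winner, loser) pairs using the
    classic minorise-maximise (Zermelo) updates."""
    items = {x for pair in judgements for x in pair}
    wins = defaultdict(int)              # number of wins per translation
    pair_counts = defaultdict(int)       # comparisons per unordered pair
    for winner, loser in judgements:
        wins[winner] += 1
        pair_counts[frozenset((winner, loser))] += 1

    strength = {i: 1.0 for i in items}   # start from equal strengths
    for _ in range(n_iter):
        updated = {}
        for i in items:
            denom = sum(pair_counts[frozenset((i, j))] / (strength[i] + strength[j])
                        for j in items if j != i)
            updated[i] = wins[i] / denom if denom else strength[i]
        total = sum(updated.values())
        strength = {i: v / total for i, v in updated.items()}

    # Report on a log scale (logits), as comparative-judgement tools usually do.
    return {i: math.log(v) for i, v in strength.items()}

# Hypothetical data: each tuple records which of two translations was judged better.
judgements = [("T1", "T2"), ("T1", "T3"), ("T2", "T3"),
              ("T1", "T2"), ("T3", "T2"), ("T1", "T3")]
for label, score in sorted(bradley_terry(judgements).items(), key=lambda kv: -kv[1]):
    print(f"{label}: {score:+.2f}")
```

Real comparative-judgement platforms typically layer adaptive pair selection and reliability statistics on top of this core fitting step; the sketch only shows how relative decisions become a single ordered scale.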
Source journal: Across Languages and Cultures
CiteScore: 1.70 · Self-citation rate: 14.30% · Articles published: 21

About the journal: Across Languages and Cultures publishes original articles and reviews on all sub-disciplines of Translation and Interpreting (T/I) Studies: general T/I theory, descriptive T/I studies and applied T/I studies. Special emphasis is laid on questions of multilingualism, language policy and translation policy. Publications on new research methods and models are encouraged. The journal also publishes book reviews, news, announcements and advertisements.
Latest articles in this journal
Agenda-setting and journalistic translation: The New York Times in English, Spanish and Chinese
Exploring the relevance and relative importance of interpreting aptitude constructs and their underlying factors: A data-driven tripartite investigation
‘Islamic State’ in Translation
A literary translation in the making: A process-oriented perspective
Beyond singability: A descriptive-explanatory analysis of Polish translations of Frank Sinatra's My Way