Sociolinguistic auto-coding has fairness problems too: measuring and mitigating bias

Linguistics Vanguard · IF 1.1 · Q2 (Literature) · LANGUAGE & LINGUISTICS · Pub Date: 2024-03-11 · DOI: 10.1515/lingvan-2022-0114
Dan Villarreal
{"title":"社会语言自动编码也有公平性问题:衡量和减少偏见","authors":"Dan Villarreal","doi":"10.1515/lingvan-2022-0114","DOIUrl":null,"url":null,"abstract":"Sociolinguistics researchers can use sociolinguistic auto-coding (SLAC) to predict humans’ hand-codes of sociolinguistic data. While auto-coding promises opportunities for greater efficiency, like other computational methods there are inherent concerns about this method’s <jats:italic>fairness</jats:italic> – whether it generates equally valid predictions for different speaker groups. Unfairness would be problematic for sociolinguistic work given the central importance of correlating speaker groups to differences in variable usage. The current study examines SLAC fairness through the lens of gender fairness in auto-coding Southland New Zealand English non-prevocalic /r/. First, given that there are multiple, mutually incompatible definitions of machine learning fairness, I argue that fairness for SLAC is best captured by two definitions (overall accuracy equality and class accuracy equality) corresponding to three fairness metrics. Second, I empirically assess the extent to which SLAC is prone to unfairness; I find that a specific auto-coder described in previous literature performed poorly on all three fairness metrics. Third, to remedy these imbalances, I tested unfairness mitigation strategies on the same data; I find several strategies that reduced unfairness to virtually zero. I close by discussing what SLAC fairness means not just for auto-coding, but more broadly for how we conceptualize variation as an object of study.","PeriodicalId":55960,"journal":{"name":"Linguistics Vanguard","volume":"2016 1","pages":""},"PeriodicalIF":1.1000,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Sociolinguistic auto-coding has fairness problems too: measuring and mitigating bias\",\"authors\":\"Dan Villarreal\",\"doi\":\"10.1515/lingvan-2022-0114\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Sociolinguistics researchers can use sociolinguistic auto-coding (SLAC) to predict humans’ hand-codes of sociolinguistic data. While auto-coding promises opportunities for greater efficiency, like other computational methods there are inherent concerns about this method’s <jats:italic>fairness</jats:italic> – whether it generates equally valid predictions for different speaker groups. Unfairness would be problematic for sociolinguistic work given the central importance of correlating speaker groups to differences in variable usage. The current study examines SLAC fairness through the lens of gender fairness in auto-coding Southland New Zealand English non-prevocalic /r/. First, given that there are multiple, mutually incompatible definitions of machine learning fairness, I argue that fairness for SLAC is best captured by two definitions (overall accuracy equality and class accuracy equality) corresponding to three fairness metrics. Second, I empirically assess the extent to which SLAC is prone to unfairness; I find that a specific auto-coder described in previous literature performed poorly on all three fairness metrics. Third, to remedy these imbalances, I tested unfairness mitigation strategies on the same data; I find several strategies that reduced unfairness to virtually zero. 
I close by discussing what SLAC fairness means not just for auto-coding, but more broadly for how we conceptualize variation as an object of study.\",\"PeriodicalId\":55960,\"journal\":{\"name\":\"Linguistics Vanguard\",\"volume\":\"2016 1\",\"pages\":\"\"},\"PeriodicalIF\":1.1000,\"publicationDate\":\"2024-03-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Linguistics Vanguard\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1515/lingvan-2022-0114\",\"RegionNum\":2,\"RegionCategory\":\"文学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"0\",\"JCRName\":\"LANGUAGE & LINGUISTICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Linguistics Vanguard","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1515/lingvan-2022-0114","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"0","JCRName":"LANGUAGE & LINGUISTICS","Score":null,"Total":0}
Citations: 0

Abstract

Sociolinguistics researchers can use sociolinguistic auto-coding (SLAC) to predict humans’ hand-codes of sociolinguistic data. While auto-coding promises opportunities for greater efficiency, like other computational methods there are inherent concerns about this method’s fairness – whether it generates equally valid predictions for different speaker groups. Unfairness would be problematic for sociolinguistic work given the central importance of correlating speaker groups to differences in variable usage. The current study examines SLAC fairness through the lens of gender fairness in auto-coding Southland New Zealand English non-prevocalic /r/. First, given that there are multiple, mutually incompatible definitions of machine learning fairness, I argue that fairness for SLAC is best captured by two definitions (overall accuracy equality and class accuracy equality) corresponding to three fairness metrics. Second, I empirically assess the extent to which SLAC is prone to unfairness; I find that a specific auto-coder described in previous literature performed poorly on all three fairness metrics. Third, to remedy these imbalances, I tested unfairness mitigation strategies on the same data; I find several strategies that reduced unfairness to virtually zero. I close by discussing what SLAC fairness means not just for auto-coding, but more broadly for how we conceptualize variation as an object of study.
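
The two fairness definitions named in the abstract can be made concrete with a small amount of code. Below is a minimal sketch in Python (pandas) using invented token-level data and illustrative column names; it is not the paper's implementation, which is described in the full text. Overall accuracy equality compares the groups' overall auto-coding accuracy, while class accuracy equality compares accuracy within each hand-coded class (Present vs. Absent non-prevocalic /r/).

```python
# Minimal sketch of the abstract's two fairness definitions, assuming
# invented token-level data; column names ("gender", "truth", "pred")
# and values are illustrative, not the paper's actual dataset or code.
import pandas as pd

# Hypothetical hand codes ("truth") and auto-codes ("pred") for
# non-prevocalic /r/ (Present/Absent), per speaker gender group.
df = pd.DataFrame({
    "gender": ["F"] * 4 + ["M"] * 4,
    "truth":  ["Present", "Absent", "Present", "Absent"] * 2,
    "pred":   ["Present", "Absent", "Absent", "Absent",
               "Present", "Present", "Present", "Absent"],
})
df["correct"] = df["truth"] == df["pred"]

# Overall accuracy equality: groups should have equal overall accuracy.
overall = df.groupby("gender")["correct"].mean()
print("Accuracy by group:", overall.to_dict())
print(f"Overall accuracy gap: {abs(overall['F'] - overall['M']):.2f}")

# Class accuracy equality: groups should have equal accuracy within
# each hand-coded class, i.e. equal recall for Present and for Absent.
by_class = df.groupby(["gender", "truth"])["correct"].mean().unstack("gender")
print("Per-class accuracy by group:")
print(by_class)
print("Per-class gaps:", (by_class["F"] - by_class["M"]).abs().to_dict())
```

In this toy data the overall accuracy gap is zero while both per-class gaps are 0.5: the auto-coder errs toward Absent for one group and toward Present for the other, so the two definitions genuinely come apart. Under one natural reading, the overall gap plus the two per-class gaps are the abstract's three fairness metrics, though the full text should be consulted for the paper's exact operationalization.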
Source journal
CiteScore: 2.00
Self-citation rate: 18.20%
Articles published: 105
Journal introduction: Linguistics Vanguard is a new channel for high quality articles and innovative approaches in all major fields of linguistics. This multimodal journal is published solely online and provides an accessible platform supporting both traditional and new kinds of publications. Linguistics Vanguard seeks to publish concise and up-to-date reports on the state of the art in linguistics as well as cutting-edge research papers. With its topical breadth of coverage and anticipated quick rate of production, it is one of the leading platforms for scientific exchange in linguistics. Its broad theoretical range, international scope, and diversity of article formats engage students and scholars alike. All topics within linguistics are welcome. The journal especially encourages submissions taking advantage of its new multimodal platform designed to integrate interactive content, including audio and video, images, maps, software code, raw data, and any other media that enhances the traditional written word. The novel platform and concise article format allows for rapid turnaround of submissions. Full peer review assures quality and enables authors to receive appropriate credit for their work. The journal publishes general submissions as well as special collections. Ideas for special collections may be submitted to the editors for consideration.
Latest articles in this journal
From sociolinguistic perception to strategic action in the study of social meaning.
Sign recognition: the effect of parameters and features in sign mispronunciations.
The use of the narrative final vowel -á by the Lingala-speaking youth of Kinshasa: from anterior to near/recent past
Re-taking the field: resuming in-person fieldwork amid the COVID-19 pandemic
Bibliographic bias and information-density sampling