Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation

MIS Quarterly. Publication date: 2021-09-01. DOI: 10.25300/misq/2021/16535
Mike H. M. Teodorescu, Lily Morse, Yazeed Awwad, Gerald C. Kane
{"title":"Failures of Fairness in Automation Require a Deeper Understanding of Human-ML Augmentation","authors":"Mike H. M. Teodorescu, Lily Morse, Yazeed Awwad, Gerald C. Kane","doi":"10.25300/misq/2021/16535","DOIUrl":null,"url":null,"abstract":"Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fair- ness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of and research into human–ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. We identify significant intersections with previous IS research and distinct managerial approaches to fairness for each quadrant. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that the differences of ML tools undermine some of the fundamental assumptions upon which classic IS theories and concepts rest. ML may require massive rethinking of significant portions of the corpus of IS research in light of these differences, representing an exciting frontier for research into human–ML augmentation in the years ahead that IS researchers should embrace.","PeriodicalId":18743,"journal":{"name":"MIS Q.","volume":"4 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"49","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"MIS Q.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.25300/misq/2021/16535","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 49

Abstract

Machine learning (ML) tools reduce the costs of performing repetitive, time-consuming tasks yet run the risk of introducing systematic unfairness into organizational processes. Automated approaches to achieving fairness often fail in complex situations, leading some researchers to suggest that human augmentation of ML tools is necessary. However, our current understanding of human–ML augmentation remains limited. In this paper, we argue that the Information Systems (IS) discipline needs a more sophisticated view of, and research into, human–ML augmentation. We introduce a typology of augmentation for fairness consisting of four quadrants: reactive oversight, proactive oversight, informed reliance, and supervised reliance. We identify significant intersections with previous IS research and distinct managerial approaches to fairness for each quadrant. Several potential research questions emerge from fundamental differences between ML tools trained on data and traditional IS built with code. IS researchers may discover that these differences undermine some of the fundamental assumptions upon which classic IS theories and concepts rest. ML may require massive rethinking of significant portions of the corpus of IS research in light of these differences, representing an exciting frontier for research into human–ML augmentation in the years ahead that IS researchers should embrace.
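The abstract's claim that automated approaches to fairness "often fail in complex situations" has a well-known technical basis: standard group-fairness metrics can disagree on the same predictions, and known impossibility results show they cannot all be satisfied at once except in degenerate cases. The sketch below is a minimal, hypothetical illustration of that point, not code from the paper; the data, function names, and the 0.1 tolerance are all illustrative assumptions.

```python
# A minimal, hypothetical sketch (not from the paper): two standard
# group-fairness metrics computed over the same predictions. The data,
# names, and 0.1 threshold are illustrative assumptions.

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between groups A and B."""
    rate_a = sum(p for p, g in zip(y_pred, group) if g == "A") / group.count("A")
    rate_b = sum(p for p, g in zip(y_pred, group) if g == "B") / group.count("B")
    return abs(rate_a - rate_b)

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(label):
        positives = [p for t, p, g in zip(y_true, y_pred, group) if g == label and t == 1]
        return sum(positives) / len(positives)
    return abs(tpr("A") - tpr("B"))

# Synthetic labels and predictions for two groups of four individuals each.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

THRESHOLD = 0.1
dp_gap = demographic_parity_gap(y_pred, group)         # 0.00 -> within tolerance
eo_gap = equal_opportunity_gap(y_true, y_pred, group)  # 0.33 -> violation
print(f"demographic parity gap: {dp_gap:.2f} ({'ok' if dp_gap <= THRESHOLD else 'violation'})")
print(f"equal opportunity gap:  {eo_gap:.2f} ({'ok' if eo_gap <= THRESHOLD else 'violation'})")
```

An automated gatekeeper tuned only to demographic parity would wave these predictions through, while an equal-opportunity check flags a disparity. Because such conflicts cannot in general be resolved by the tool alone, a human must decide which notion of fairness applies in context, which is one reason the paper argues for human augmentation of ML tools.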