Making Mistakes

IF 0.9 · CAS Tier 3 (Philosophy) · Q2 HISTORY & PHILOSOPHY OF SCIENCE · Osiris · Pub Date: 2023-01-01 · DOI: 10.1086/725146
Mike Ananny
{"title":"犯错误","authors":"Mike Ananny","doi":"10.1086/725146","DOIUrl":null,"url":null,"abstract":"From law and politics to commerce and art, algorithms are powerful sociotechnical forces. But what does it mean when algorithms “fail”? What do we learn about sociotechnical dynamics when algorithms are seen to have erred or made a mistake? Seeing algorithms as culture, I argue that algorithmic errors are constructs of intertwined computational, psychological, organizational, infrastructural, discursive, and normative forces. Through three stories of error, I show algorithmic failures as illustrations not only of algorithmic power but also of normative forces that define success, rationalize iteration, and distribute harm. Instead of seeing algorithmic errors as unavoidable parts of technological innovation or self-evident transgressions, I instead see them as evidence of how people think systems should work, and the power to declare failures, trigger fixes, and envision futures by discovering and repairing mistakes. This power to “make mistakes” is a crucial and largely understudied form of sociotechnical control.","PeriodicalId":54659,"journal":{"name":"Osiris","volume":"38 1","pages":"223 - 241"},"PeriodicalIF":0.9000,"publicationDate":"2023-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Making Mistakes\",\"authors\":\"Mike Ananny\",\"doi\":\"10.1086/725146\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"From law and politics to commerce and art, algorithms are powerful sociotechnical forces. But what does it mean when algorithms “fail”? What do we learn about sociotechnical dynamics when algorithms are seen to have erred or made a mistake? Seeing algorithms as culture, I argue that algorithmic errors are constructs of intertwined computational, psychological, organizational, infrastructural, discursive, and normative forces. Through three stories of error, I show algorithmic failures as illustrations not only of algorithmic power but also of normative forces that define success, rationalize iteration, and distribute harm. Instead of seeing algorithmic errors as unavoidable parts of technological innovation or self-evident transgressions, I instead see them as evidence of how people think systems should work, and the power to declare failures, trigger fixes, and envision futures by discovering and repairing mistakes. 
This power to “make mistakes” is a crucial and largely understudied form of sociotechnical control.\",\"PeriodicalId\":54659,\"journal\":{\"name\":\"Osiris\",\"volume\":\"38 1\",\"pages\":\"223 - 241\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2023-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Osiris\",\"FirstCategoryId\":\"98\",\"ListUrlMain\":\"https://doi.org/10.1086/725146\",\"RegionNum\":3,\"RegionCategory\":\"哲学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"HISTORY & PHILOSOPHY OF SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Osiris","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1086/725146","RegionNum":3,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"HISTORY & PHILOSOPHY OF SCIENCE","Score":null,"Total":0}
Citations: 0

Abstract

From law and politics to commerce and art, algorithms are powerful sociotechnical forces. But what does it mean when algorithms “fail”? What do we learn about sociotechnical dynamics when algorithms are seen to have erred or made a mistake? Seeing algorithms as culture, I argue that algorithmic errors are constructs of intertwined computational, psychological, organizational, infrastructural, discursive, and normative forces. Through three stories of error, I show algorithmic failures as illustrations not only of algorithmic power but also of normative forces that define success, rationalize iteration, and distribute harm. Instead of seeing algorithmic errors as unavoidable parts of technological innovation or self-evident transgressions, I instead see them as evidence of how people think systems should work, and the power to declare failures, trigger fixes, and envision futures by discovering and repairing mistakes. This power to “make mistakes” is a crucial and largely understudied form of sociotechnical control.
Source Journal
Osiris
Management Science — History & Philosophy of Science
CiteScore: 1.10
Self-citation rate: 0.00%
Articles per year: 18
Review time: >12 weeks
Journal description: Founded in 1936 by George Sarton, and relaunched by the History of Science Society in 1985, Osiris is an annual thematic journal that highlights research on significant themes in the history of science. Recent volumes have included Scientific Masculinities, History of Science and the Emotions, and Data Histories.
Latest articles in this journal: Front and Back Matter · Notes on the Contributors · Acknowledgments · Statecraft by Algorithms · Introduction