Computerize the Race Problem?: Why We Must Plan for a Just AI Future

Charlton D. McIlwain
Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society. Published February 7, 2020. DOI: 10.1145/3375627.3377140
Citations: 2

Abstract

1960s civil rights and racial justice activists tried to warn us about our technological ways, but we didn't hear them talk. The so-called wizards who stayed up late ignored or dismissed black voices, calling out from street corners to pulpits, union halls to the corridors of Congress. Instead, the men who took the first giant leaps towards conceiving and building our earliest "thinking" and "learning" machines aligned themselves with industry, government and their elite science and engineering institutions. Together, they conspired to make those fighting for racial justice the problem that their new computing machines would be designed to solve. And solve that problem they did, through color-coded, automated, and algorithmically-driven indignities and inhumanities that thrive to this day. But what if yesterday's technological elite had listened to those Other voices? What if they had let them into their conversations, their classrooms, their labs, boardrooms and government task forces to help determine what new tools to build, how to build them and - most importantly - how to deploy them? What might our world look like today if the advocates for racial justice had been given the chance to frame the day's most preeminent technological question for the world and ask, "Computerize the Race Problem?" Better yet, what might our AI-driven future look like if we ask ourselves this question today?