Emerging trends: Unfair, biased, addictive, dangerous, deadly, and insanely profitable

Kenneth Ward Church, Annika Marie Schoene, John E. Ortega, Raman Chandrasekar, Valia Kordoni

Natural Language Engineering, pages 483-508. Published 2022-12-19. DOI: 10.1017/s1351324922000481
Citations: 3
Abstract
There has been considerable work recently in the natural language community and elsewhere on Responsible AI. Much of this work focuses on fairness and biases (henceforth Risks 1.0), following the 2016 best seller: Weapons of Math Destruction. Two books published in 2022, The Chaos Machine and Like, Comment, Subscribe, raise additional risks to public health/safety/security such as genocide, insurrection, polarized politics, and vaccinations (henceforth, Risks 2.0). These books suggest that the use of machine learning to maximize engagement in social media has created a Frankenstein Monster that is exploiting human weaknesses with persuasive technology, the illusory truth effect, Pavlovian conditioning, and Skinner’s intermittent variable reinforcement. Just as we cannot expect tobacco companies to sell fewer cigarettes and prioritize public health ahead of profits, so too, it may be asking too much of companies (and countries) to stop trafficking in misinformation given that it is so effective and so insanely profitable (at least in the short term). Eventually, we believe the current chaos will end, like the lawlessness in the Wild West, because chaos is bad for business. As computer scientists, we will summarize criticisms from other fields and focus on implications for computer science; we will not attempt to contribute to those other fields. There is quite a bit of work in computer science on these risks, especially on Risks 1.0 (bias and fairness), but more work is needed, especially on Risks 2.0 (addictive, dangerous, and deadly).
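To make the engagement-maximization mechanism mentioned above concrete, here is a minimal, self-contained sketch (our illustration, not taken from the paper): a recommender modeled as an epsilon-greedy bandit that learns to show whatever gets clicked most, with intermittent, probabilistic rewards standing in for the variable-reinforcement schedule the authors attribute to Skinner. All item names and click probabilities below are invented for illustration.

```python
import random

# Hypothetical items and their true (hidden) click probabilities.
CONTENT = {
    "calm_news": 0.02,
    "outrage_post": 0.12,
    "conspiracy_clip": 0.09,
}

counts = {item: 0 for item in CONTENT}   # impressions per item
clicks = {item: 0 for item in CONTENT}   # observed clicks per item

def observed_rate(item):
    """Empirical click-through rate seen so far."""
    return clicks[item] / counts[item] if counts[item] else 0.0

def choose(epsilon=0.1):
    """Epsilon-greedy: mostly exploit whatever has clicked best so far."""
    if random.random() < epsilon:
        return random.choice(list(CONTENT))
    return max(CONTENT, key=observed_rate)

# Simulate impressions; rewards arrive intermittently and probabilistically.
for _ in range(10_000):
    item = choose()
    counts[item] += 1
    if random.random() < CONTENT[item]:
        clicks[item] += 1

# The learned policy converges on whichever content clicks best; nothing in
# the objective accounts for accuracy, health, or safety.
for item in CONTENT:
    print(f"{item:16s} shown {counts[item]:6d} times, "
          f"observed click rate {observed_rate(item):.3f}")
```

Running the sketch typically shows the loop concentrating impressions on the highest-clicking item, which is the point the abstract makes: engagement is the only term in the objective.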
Journal description:
Natural Language Engineering meets the needs of professionals and researchers working in all areas of computerised language processing, whether from the perspective of theoretical or descriptive linguistics, lexicology, computer science or engineering. Its aim is to bridge the gap between traditional computational linguistics research and the implementation of practical applications with potential real-world use. As well as publishing research articles on a broad range of topics - from text analysis, machine translation, information retrieval and speech analysis and generation to integrated systems and multimodal interfaces - it also publishes special issues on specific areas and technologies within these topics, an industry watch column and book reviews.