AI alignment: Assessing the global impact of recommender systems

Futures · IF 3.0 · CAS Zone 3 (Management) · JCR Q1 (Economics) · Pub Date: 2024-04-17 · DOI: 10.1016/j.futures.2024.103383
Ljubisa Bojic
{"title":"人工智能调整:评估推荐系统的全球影响","authors":"Ljubisa Bojic","doi":"10.1016/j.futures.2024.103383","DOIUrl":null,"url":null,"abstract":"<div><p>The recent growing concerns surrounding the pervasive adoption of generative AI can be traced back to the long-standing influence of AI algorithms that have predominantly served as content curators on large online platforms. These algorithms are used by online services and platforms to decide what content to show and in what order, and they can have a negative impact, including the spread of misinformation, social polarization, and echo chambers around important topics. Frances Haugen, a former Facebook employee turned whistleblower, has drawn significant public attention to this issue by revealing the company's alleged knowledge about the negative impacts of their own algorithms. Additionally, a recent initiative to ban TikTok as a threat to US national security indicates the influence of recommender systems. The objective of this study is threefold. The first goal is to provide an exhaustive evaluation of the profound worldwide influence exerted by algorithm-based recommendations. The second goal is to determine the degree of priority accorded by the scientific community to pivotal subjects in recommender systems discussions, such as misinformation, polarization, addiction, emotional contagion, privacy, and bias. Finally, the third goal is to assess whether the level of scientific research and discourse is commensurate with the significant impact these recommendation systems have globally. The research concludes the impact of recommender systems on society has been largely neglected by the scientific community, despite the fact that more than half of the world's population interacts with them on a daily basis. This becomes especially apparent when considering that algorithms exert influence not just on major societal issues but on every aspect of a user's online experience. The potential consequences for humanity are discussed, such as addiction to technology, weakening relations between humans, and the homogenizing effects on human minds. One possible direction to address the challenges posed by these algorithms is the application of algorithmic regulation to promote content diversity and facilitate democratic engagement, such as the tripartite solution which is elaborated upon in the conclusion. Therefore, future research should not only be centered around further evaluating influence of this technology, but also the analysis of how such systems can be regulated. 
A broader conversation among all stakeholders should be evoked on these potential approaches, aiming to align AI with societal values and enhance human well-being.</p></div>","PeriodicalId":48239,"journal":{"name":"Futures","volume":"160 ","pages":"Article 103383"},"PeriodicalIF":3.0000,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S0016328724000661/pdfft?md5=3bd1b3019e1b306fc3d470c4b1032202&pid=1-s2.0-S0016328724000661-main.pdf","citationCount":"0","resultStr":"{\"title\":\"AI alignment: Assessing the global impact of recommender systems\",\"authors\":\"Ljubisa Bojic\",\"doi\":\"10.1016/j.futures.2024.103383\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>The recent growing concerns surrounding the pervasive adoption of generative AI can be traced back to the long-standing influence of AI algorithms that have predominantly served as content curators on large online platforms. These algorithms are used by online services and platforms to decide what content to show and in what order, and they can have a negative impact, including the spread of misinformation, social polarization, and echo chambers around important topics. Frances Haugen, a former Facebook employee turned whistleblower, has drawn significant public attention to this issue by revealing the company's alleged knowledge about the negative impacts of their own algorithms. Additionally, a recent initiative to ban TikTok as a threat to US national security indicates the influence of recommender systems. The objective of this study is threefold. The first goal is to provide an exhaustive evaluation of the profound worldwide influence exerted by algorithm-based recommendations. The second goal is to determine the degree of priority accorded by the scientific community to pivotal subjects in recommender systems discussions, such as misinformation, polarization, addiction, emotional contagion, privacy, and bias. Finally, the third goal is to assess whether the level of scientific research and discourse is commensurate with the significant impact these recommendation systems have globally. The research concludes the impact of recommender systems on society has been largely neglected by the scientific community, despite the fact that more than half of the world's population interacts with them on a daily basis. This becomes especially apparent when considering that algorithms exert influence not just on major societal issues but on every aspect of a user's online experience. The potential consequences for humanity are discussed, such as addiction to technology, weakening relations between humans, and the homogenizing effects on human minds. One possible direction to address the challenges posed by these algorithms is the application of algorithmic regulation to promote content diversity and facilitate democratic engagement, such as the tripartite solution which is elaborated upon in the conclusion. Therefore, future research should not only be centered around further evaluating influence of this technology, but also the analysis of how such systems can be regulated. 
A broader conversation among all stakeholders should be evoked on these potential approaches, aiming to align AI with societal values and enhance human well-being.</p></div>\",\"PeriodicalId\":48239,\"journal\":{\"name\":\"Futures\",\"volume\":\"160 \",\"pages\":\"Article 103383\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2024-04-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://www.sciencedirect.com/science/article/pii/S0016328724000661/pdfft?md5=3bd1b3019e1b306fc3d470c4b1032202&pid=1-s2.0-S0016328724000661-main.pdf\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Futures\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0016328724000661\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"ECONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Futures","FirstCategoryId":"91","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0016328724000661","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 0

Abstract


The recent growing concerns surrounding the pervasive adoption of generative AI can be traced back to the long-standing influence of AI algorithms that have predominantly served as content curators on large online platforms. These algorithms are used by online services and platforms to decide what content to show and in what order, and they can have negative impacts, including the spread of misinformation, social polarization, and echo chambers around important topics. Frances Haugen, a former Facebook employee turned whistleblower, drew significant public attention to this issue by revealing the company's alleged knowledge of the negative impacts of its own algorithms. Additionally, a recent initiative to ban TikTok as a threat to US national security underscores the influence of recommender systems. The objective of this study is threefold. The first goal is to provide an exhaustive evaluation of the profound worldwide influence exerted by algorithm-based recommendations. The second goal is to determine the degree of priority accorded by the scientific community to pivotal subjects in recommender systems discussions, such as misinformation, polarization, addiction, emotional contagion, privacy, and bias. Finally, the third goal is to assess whether the level of scientific research and discourse is commensurate with the significant global impact of these recommender systems. The research concludes that the impact of recommender systems on society has been largely neglected by the scientific community, despite the fact that more than half of the world's population interacts with them daily. This becomes especially apparent when considering that algorithms exert influence not just on major societal issues but on every aspect of a user's online experience. The potential consequences for humanity are discussed, such as addiction to technology, weakening relations between humans, and homogenizing effects on human minds. One possible direction for addressing the challenges posed by these algorithms is the application of algorithmic regulation to promote content diversity and facilitate democratic engagement, such as the tripartite solution elaborated upon in the conclusion. Future research should therefore focus not only on further evaluating the influence of this technology but also on how such systems can be regulated. A broader conversation among all stakeholders should be initiated around these potential approaches, aiming to align AI with societal values and enhance human well-being.
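To make the study's second goal concrete, one way to approximate how much scientific attention each of the topics named above receives is a simple publication-count query. The sketch below is illustrative only and is not the paper's methodology: it assumes the public Crossref REST API (api.crossref.org) as the data source, and the query strings pairing each topic with "recommender systems" are example choices, not the author's.

```python
"""Illustrative sketch (not the paper's method): compare how many scholarly
records mention each recommender-system topic from the abstract, using the
public Crossref REST API."""
import requests

TOPICS = ["misinformation", "polarization", "addiction",
          "emotional contagion", "privacy", "bias"]

def crossref_count(query: str) -> int:
    """Return the number of Crossref records matching a free-text query."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query": query, "rows": 0},  # rows=0: fetch only the count
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["message"]["total-results"]

if __name__ == "__main__":
    for topic in TOPICS:
        count = crossref_count(f"recommender systems {topic}")
        print(f"{topic:>20}: {count} matching records")
```

A real bibliometric comparison would need far more care: controlled vocabularies, deduplication, and normalization against each topic's overall publication volume, since raw keyword counts conflate relevance with field size.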

Source journal: Futures
CiteScore: 6.00
Self-citation rate: 10.00%
Articles published: 124
Journal description
Futures is an international, refereed, multidisciplinary journal concerned with the medium- and long-term futures of cultures and societies, science and technology, economics and politics, the environment and the planet, and individuals and humanity. Covering the methods and practices of futures studies, the journal examines possible and alternative futures of all human endeavours, and seeks to promote divergent and pluralistic visions, ideas and opinions about the future. The editors do not necessarily agree with the views expressed in the pages of Futures.