
AI & Society: Latest Publications

AI ethics discourse: a call to embrace complexity, interdisciplinarity, and epistemic humility
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-14 DOI: 10.1007/s00146-023-01708-y
Joshua C. Gellers
Citations: 0
Friendly AI will still be our master. Or, why we should not want to be the pets of super-intelligent computers
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-13 DOI: 10.1007/s00146-023-01698-x
Robert Sparrow

When asked about humanity’s future relationship with computers, Marvin Minsky famously replied “If we’re lucky, they might decide to keep us as pets”. A number of eminent authorities continue to argue that there is a real danger that “super-intelligent” machines will enslave—perhaps even destroy—humanity. One might think that it would swiftly follow that we should abandon the pursuit of AI. Instead, most of those who purport to be concerned about the existential threat posed by AI default to worrying about what they call the “Friendly AI problem”. Roughly speaking, this is the question of how we might ensure that the AI that will develop from the first AI we create will remain sympathetic to humanity and continue to serve, or at least take account of, our interests. In this paper I draw on the “neo-republican” philosophy of Philip Pettit to argue that solving the Friendly AI problem would not change the fact that the advent of super-intelligent AI would be disastrous for humanity by virtue of rendering us the slaves of machines. A key insight of the republican tradition is that freedom requires equality of a certain sort, which is clearly lacking between pets and their owners. Benevolence is not enough. As long as AI has the power to interfere in humanity’s choices, and the capacity to do so without reference to our interests, it will dominate us and thereby render us unfree. The pets of kind owners are still pets, which is not a status that humanity should embrace. If we really think that there is a risk that research on AI will lead to the emergence of a superintelligence, then we need to think again about the wisdom of researching AI at all.

Citations: 0
Pauses, parrots, and poor arguments: real-world constraints undermine recent calls for AI regulation
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-12 DOI: 10.1007/s00146-023-01703-3
Bartlomiej Chomanski
Citations: 0
The galloping editor
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-10 DOI: 10.1007/s00146-023-01707-z
Gabriel Lanyi

Classical natural language processing endeavored to understand the language of native speakers. When this proved to lie beyond the horizon, a scaled-down version settled for text analysis and processing but retained the old name and acronym. But text ≠ language. Any combination of signs and symbols qualifies as text. Language presupposes meaning, which is what connects it to real life. Failing to distinguish between the two results in confusing humanoids (machines thinking like humans) with machinoids (humans thinking like machines). As scientific English (SciEng) became the lingua franca of science, it has acquired all the traits of a machine language: a reduced vocabulary, in which fewer and fewer words have taken on more and more meanings; prescribed use of pronouns; and depersonalized, rigid syntactic forms and rules of composition. Compliance with SciEng standards can be automatically verified, which means that SciEng can be automatically imitated, a practice referred to as AI writing (ChatGPT). The article discusses an attempt to automatically correct deviations from the rules by what is touted as AI editing.
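The abstract's point that mechanically verifiable style rules are also mechanically imitable can be made concrete with a toy compliance checker. This sketch is ours, not the article's tool; the two rules and their names are illustrative assumptions.

```python
# A toy "SciEng compliance" check (our illustration, not the article's system):
# if style rules are mechanical enough to verify, they are mechanical enough
# to imitate. Each rule is just a pattern; real checkers would have many more.
import re

RULES = {
    "first-person pronoun": re.compile(r"\b(I|we|our)\b", re.IGNORECASE),
    "contraction": re.compile(r"\b\w+'(t|re|ve|ll)\b"),
}

def violations(sentence: str) -> list[str]:
    """Return the names of the mechanical style rules the sentence breaks."""
    return [name for name, pattern in RULES.items() if pattern.search(sentence)]

print(violations("We can't measure it."))      # both rules fire
print(violations("The effect was measured."))  # passes the checks
```

Anything expressible as such a rule table can be optimized against, which is the article's worry about automated writing and editing.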

Citations: 0
Moral disagreement and artificial intelligence
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-03 DOI: 10.1007/s00146-023-01697-y
Pamela Robinson

Artificially intelligent systems will be used to make increasingly important decisions about us. Many of these decisions will have to be made without universal agreement about the relevant moral facts. For other kinds of disagreement, it is at least usually obvious what kind of solution is called for. What makes moral disagreement especially challenging is that there are three different ways of handling it. Moral solutions apply a moral theory or related principles and largely ignore the details of the disagreement. Compromise solutions apply a method of finding a compromise, taking information about the disagreement as input. Epistemic solutions apply an evidential rule that treats the details of the disagreement as evidence of moral truth. Proposals for all three kinds of solutions can be found in the AI ethics and value alignment literature, but little has been said to justify choosing one over the others. I argue that the choice is best framed in terms of moral risk.

Citations: 0
Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-02 DOI: 10.1007/s00146-023-01699-w
Uwe Klein, Jana Depping, Laura Wohlfahrt, Pantaleon Fassbender

Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one: new scenarios are constructed as realistically as possible and focus on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) and, compared with the original studies, are extended by individual descriptive elements of AI systems. The first online experiment examines decisions made by artificial intelligence with varying degrees of impact. In the high-impact scenario, applicants are automatically selected for a job and immediately receive an employment contract. In the low-impact scenario, three applicants are automatically invited for another interview. In addition, the relationship between age and risk perception is investigated. The second online experiment tests subjects’ perceived trust in decisions made by artificial intelligence, made either semi-automatically with the assistance of human experts or fully automatically. Two task types are distinguished: the task type that requires “human skills”, represented as a performance evaluation situation, and the task type that requires “mechanical skills”, represented as a work distribution situation. In addition, the extent of negative emotions in automated decisions is investigated. The results are related to the findings of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018). Implications for further research activities and practical relevance are discussed.

Citations: 0
The approach to AI emergence from the standpoint of future contingents
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-02 DOI: 10.1007/s00146-023-01691-4
Ignacy Sitnicki
Citations: 0
Intelligence at any price? A criterion for defining AI
IF 3.0 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-06-01 DOI: 10.1007/s00146-023-01695-0
Mihai Nadin

According to how AI has defined itself from its beginning, thinking in non-living matter, i.e., without life, is possible. The premise of symbolic AI is that, by operating on representations of reality, machines can understand it. When this assumption did not work as expected, the mathematical model of the neuron became the engine of artificial “brains,” and connectionism followed. Currently, in the context of Machine Learning success, attempts are being made to integrate the symbolic and connectionist paths, in the hope that Artificial General Intelligence (AGI) performance can be achieved. As encouraging as neuro-symbolic AI seems to be, it remains unclear whether AGI is actually a moving target as long as AI itself remains ambiguously defined. This paper argues that the intelligence of machines, expressed in their performance, reflects how adequate the means used for achieving it are. Therefore, energy use and the amount of data necessary qualify as a good metric for comparing natural and artificial performance.
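The proposed criterion, performance normalized by the resources spent to achieve it, can be sketched as a simple ratio. This is our illustrative sketch only: the function name, the exact form of the formula, and every number below are hypothetical, not taken from the paper.

```python
# Illustrative sketch of a resource-normalized intelligence metric: benchmark
# score divided by the energy and data consumed to reach it.
# All figures below are hypothetical placeholders.

def efficiency(score: float, energy_kwh: float, training_examples: float) -> float:
    """Performance per unit of energy and data; higher means 'cheaper' intelligence."""
    return score / (energy_kwh * training_examples)

# Hypothetical comparison: a human learner vs. a large model on the same task.
human = efficiency(score=0.9, energy_kwh=500.0, training_examples=1e4)
machine = efficiency(score=0.9, energy_kwh=1e6, training_examples=1e9)
print(human > machine)  # equal raw performance, vastly different resource cost
```

On such a metric, matching scores alone says nothing: two systems that perform equally can differ by many orders of magnitude in what that performance costs.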

Citations: 0
Abstraction, mimesis and the evolution of deep learning
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-05-31 DOI: 10.1007/s00146-023-01688-z
Jon Eklöf, Thomas Hamelryck, Cadell Last, Alexander Grima, Ulrika Lundh Snis

Deep learning developers typically rely on deep learning software frameworks (DLSFs), simply described as pre-packaged libraries of programming components that provide high-level access to deep learning functionality. New DLSFs progressively encapsulate mathematical, statistical and computational complexity. Such higher levels of abstraction subsequently make it easier for deep learning methodology to spread through mimesis (i.e., imitation of models perceived as successful). In this study, we quantify this increase in abstraction and discuss its implications. Analyzing publicly available code from GitHub, we found that the introduction of DLSFs correlates with both significant increases in the number of deep learning projects and substantial reductions in the number of lines of code used. We subsequently discuss and argue for the importance of abstraction in deep learning with respect to ephemeralization, technological advancement, democratization, adopting timely levels of abstraction, the emergence of mimetic deadlocks, issues related to the use of black-box methods including privacy and fairness, and the concentration of technological power. Finally, we also discuss abstraction as a symptom of an ongoing technological metatransition.
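The shrinking line counts the authors measure can be illustrated with a toy example in pure Python. This is our sketch, not the study's code: the same one-parameter linear fit, first written out by hand, then hidden behind a framework-style call.

```python
# Toy illustration (ours, not the study's code) of DLSF-style abstraction:
# a framework encapsulates the mathematics, so user-facing code shrinks,
# even though the underlying complexity is still there.

# Low level: gradient descent for y = w * x, with the update rule spelled out.
def fit_manual(xs, ys, lr=0.01, steps=200):
    w = 0.0
    for _ in range(steps):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# High level: a framework-style wrapper hides the update rule behind one call.
class TinyFramework:
    def fit(self, xs, ys):
        return fit_manual(xs, ys)  # complexity encapsulated, not removed

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]
w = TinyFramework().fit(xs, ys)
print(round(w, 2))  # converges to the true slope 2.0
```

One framework call replaces the hand-written loop; scaled up across real DLSFs, this is the abstraction gain the GitHub analysis quantifies, and also why the method spreads by imitation without users touching the mathematics.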

Citations: 0
A human-centred systems manifesto for smart digital immersion in Industry 5.0: a case study of cultural heritage
IF 2.9 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2023-05-30 DOI: 10.1007/s00146-023-01693-2
Cian Murphy, Peter J. Carew, Larry Stapleton

Emergent digital technologies provide cultural heritage spaces with the opportunity to reassess their current user journey. An immersive user experience can be developed that is innovative, dynamic, and customised for each attendee. Museums have already begun to move towards interactive exhibitions utilising Artificial Intelligence (AI) and the Internet of Things (IoT), and more recently, the use of Virtual Reality (VR) and Augmented Reality (AR) has become more common in cultural heritage spaces to present items of historical significance. VR concentrates on the provision of full immersion within a digitised environment utilising a headset, whilst AR focuses on the inclusion of digitised content within the existing physical environment that can be accessed through a medium such as a mobile phone application. Machine learning techniques such as a recommender system can support an immersive user journey by issuing personalised recommendations regarding a user’s preferred future content based on their previous activity. An ethical approach is necessary to take the precautions required to protect the welfare of human participants and eliminate any aspect of stereotyping or biased behaviour. This paper sets out a human-centred manifesto intended to provide guidance when inducing smart digital immersion in cultural heritage spaces. A review of existing digital cultural heritage projects was conducted to determine their adherence to the manifesto. The findings indicate that Education was a primary focus across all projects and that Personalisation, Respect and Empathy, and Support were also highly valued. Additionally, the findings indicate areas with room for improvement, such as Fairness, to ensure that a well-balanced human-centred system is implemented.
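The personalisation step the abstract mentions can be sketched as a minimal content-based recommender. The tag-overlap scoring and all exhibit names below are our own illustrative assumptions, not taken from the paper.

```python
# Minimal content-based recommender sketch (our illustration, assuming simple
# tag overlap): recommend the exhibit whose tags best match the tags of
# exhibits the visitor has already engaged with.

def recommend(history: list[set[str]], catalogue: dict[str, set[str]]) -> str:
    """Pick the catalogue exhibit whose tags best overlap the visitor profile."""
    profile = set().union(*history)  # tags accumulated from visited exhibits
    return max(catalogue, key=lambda item: len(catalogue[item] & profile))

# Hypothetical visitor history and exhibit catalogue.
visited = [{"bronze", "roman"}, {"roman", "coins"}]
exhibits = {
    "Viking ship": {"wood", "norse"},
    "Roman villa AR tour": {"roman", "architecture"},
}
print(recommend(visited, exhibits))  # the Roman exhibit matches the profile
```

Even this toy shows where the manifesto's Fairness concern bites: the scoring silently favours whatever the visitor has already seen, so an unchecked profile can narrow rather than broaden the experience.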

Citations: 0