
Latest publications in AI & Society

Non-augmented reality: why we shouldn’t look through technology
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-15 | DOI: 10.1007/s00146-023-01717-x
Kyle van Oosterum
{"title":"Non-augmented reality: why we shouldn’t look through technology","authors":"Kyle van Oosterum","doi":"10.1007/s00146-023-01717-x","DOIUrl":"10.1007/s00146-023-01717-x","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2599 - 2600"},"PeriodicalIF":2.9,"publicationDate":"2023-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131529960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
“Personhood and AI: Why large language models don’t understand us”
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01724-y
Jacob Browning

Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a different account of personhood, one where an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue current LLMs, given the way they are designed and trained, cannot be persons—either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.

{"title":"“Personhood and AI: Why large language models don’t understand us”","authors":"Jacob Browning","doi":"10.1007/s00146-023-01724-y","DOIUrl":"10.1007/s00146-023-01724-y","url":null,"abstract":"<div><p>Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a different account of personhood, one where an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue current LLMs, given the way they are designed and trained, cannot be persons—either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2499 - 2506"},"PeriodicalIF":2.9,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131337586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Generative AI, generating precariousness for workers?
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01719-9
Aida Ponce Del Castillo
{"title":"Generative AI, generating precariousness for workers?","authors":"Aida Ponce Del Castillo","doi":"10.1007/s00146-023-01719-9","DOIUrl":"10.1007/s00146-023-01719-9","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2601 - 2602"},"PeriodicalIF":2.9,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129492021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Identifying arbitrage opportunities in retail markets with artificial intelligence
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01718-w
Jitsama Tanlamai, Warut Khern-am-nuai, Yossiri Adulyasak

This study uses an artificial intelligence (AI) model to identify arbitrage opportunities in the retail marketplace. Specifically, we develop an AI model to predict the optimal purchasing point based on the price movement of products in the market. Our model is trained on a large dataset collected from an online marketplace in the United States. Our model is enhanced by incorporating user-generated content (UGC), which is empirically proven to be significantly informative. Overall, the AI model attains more than 90% precision rate, while the recall rate is higher than 80% in an out-of-sample test. In addition, we conduct a field experiment to verify the external validity of the AI model in a real-life setting. Our model identifies 293 arbitrage opportunities during a one-year field experiment and generates a profit of $7.06 per arbitrage opportunity. The result demonstrates that AI performs exceptionally well in identifying arbitrage opportunities in retail markets with tangible economic values. Our results also yield important implications regarding the role of AI in the society, both from the consumer and firm perspectives.
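The abstract reports headline evaluation figures (precision above 90%, recall above 80% in an out-of-sample test) but does not publish the model itself. Purely as a reading aid, the sketch below shows how such an arbitrage-signal classifier could be evaluated; the use of scikit-learn, the feature names (price_drop_pct, ugc_sentiment), and the synthetic data are assumptions for illustration, not the authors' method.

```python
# Hypothetical sketch: evaluating an arbitrage-signal classifier on an
# out-of-sample split, in the spirit of the abstract's precision/recall claims.
# Feature names and the synthetic data are invented for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
# Toy features: recent price drop (%) and a UGC sentiment score in [-1, 1].
price_drop_pct = rng.normal(loc=5.0, scale=3.0, size=n)
ugc_sentiment = rng.uniform(-1.0, 1.0, size=n)
X = np.column_stack([price_drop_pct, ugc_sentiment])

# Toy label: an "arbitrage opportunity" is more likely after a large price
# drop accompanied by positive user-generated content.
logits = 0.4 * (price_drop_pct - 5.0) + 1.5 * ugc_sentiment - 1.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
```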

{"title":"Identifying arbitrage opportunities in retail markets with artificial intelligence","authors":"Jitsama Tanlamai,&nbsp;Warut Khern-am-nuai,&nbsp;Yossiri Adulyasak","doi":"10.1007/s00146-023-01718-w","DOIUrl":"10.1007/s00146-023-01718-w","url":null,"abstract":"<div><p>This study uses an artificial intelligence (AI) model to identify arbitrage opportunities in the retail marketplace. Specifically, we develop an AI model to predict the optimal purchasing point based on the price movement of products in the market. Our model is trained on a large dataset collected from an online marketplace in the United States. Our model is enhanced by incorporating user-generated content (UGC), which is empirically proven to be significantly informative. Overall, the AI model attains more than 90% precision rate, while the recall rate is higher than 80% in an out-of-sample test. In addition, we conduct a field experiment to verify the external validity of the AI model in a real-life setting. Our model identifies 293 arbitrage opportunities during a one-year field experiment and generates a profit of $7.06 per arbitrage opportunity. The result demonstrates that AI performs exceptionally well in identifying arbitrage opportunities in retail markets with tangible economic values. Our results also yield important implications regarding the role of AI in the society, both from the consumer and firm perspectives.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2615 - 2630"},"PeriodicalIF":2.9,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01718-w.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126322438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Taking AI risks seriously: a new assessment model for the AI Act
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01723-z
Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, Luciano Floridi

The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude  by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.
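The abstract describes the model at the level of risk determinants, their drivers, and multiple risk types rather than as a concrete procedure. Purely as an illustration of that structure, the sketch below scores a hypothetical AI scenario per risk type; the determinant names (an IPCC-style hazard/exposure/vulnerability triad), the driver scores, and the mean/product aggregation rules are assumptions, not the authors' specification.

```python
# Hypothetical sketch of scenario-level risk scoring in the spirit of the
# determinants / drivers / risk-types structure described in the abstract.
# All names, scores, and aggregation rules below are illustrative assumptions.
from dataclasses import dataclass
from statistics import mean


@dataclass
class Determinant:
    name: str
    drivers: dict[str, float]  # driver name -> score in [0, 1]

    def score(self) -> float:
        # Assumption: a determinant's score is the mean of its driver scores.
        return mean(self.drivers.values())


@dataclass
class RiskType:
    name: str                        # e.g. "consumer harm", "data protection"
    determinants: list[Determinant]

    def magnitude(self) -> float:
        # Assumption: determinants interact multiplicatively, so a negligible
        # hazard, exposure, or vulnerability pulls the whole estimate down.
        product = 1.0
        for d in self.determinants:
            product *= d.score()
        return product


@dataclass
class Scenario:
    description: str
    risk_types: list[RiskType]

    def report(self) -> dict[str, float]:
        # One magnitude per risk type for this specific scenario, rather than
        # a single static label for the scenario's field of application.
        return {rt.name: rt.magnitude() for rt in self.risk_types}


# Invented example scenario: an LLM used to triage customer complaints.
scenario = Scenario(
    description="LLM-based triage of customer complaints",
    risk_types=[
        RiskType("consumer harm", [
            Determinant("hazard", {"hallucinated advice": 0.6, "biased ranking": 0.4}),
            Determinant("exposure", {"users affected": 0.7, "frequency of use": 0.8}),
            Determinant("vulnerability", {"ability to contest decisions": 0.5}),
        ]),
        RiskType("data protection", [
            Determinant("hazard", {"leakage of personal data": 0.3}),
            Determinant("exposure", {"volume of personal data processed": 0.6}),
            Determinant("vulnerability", {"sensitivity of the data": 0.4}),
        ]),
    ],
)

for risk_type, magnitude in scenario.report().items():
    print(f"{scenario.description} | {risk_type}: {magnitude:.2f}")
```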

{"title":"Taking AI risks seriously: a new assessment model for the AI Act","authors":"Claudio Novelli,&nbsp;Federico Casolari,&nbsp;Antonino Rotolo,&nbsp;Mariarosaria Taddeo,&nbsp;Luciano Floridi","doi":"10.1007/s00146-023-01723-z","DOIUrl":"10.1007/s00146-023-01723-z","url":null,"abstract":"<div><p>The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude  by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2493 - 2497"},"PeriodicalIF":2.9,"publicationDate":"2023-07-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01723-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130835071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Theory languages in designing artificial intelligence
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-10 | DOI: 10.1007/s00146-023-01716-y
Pertti Saariluoma, Antero Karvonen

The foundations of AI design discourse are worth analyzing. Here, attention is paid to the nature of theory languages used in designing new AI technologies because the limits of these languages can clarify some fundamental questions in the development of AI. We discuss three types of theory language used in designing AI products: formal, computational, and natural. Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics. They are context- and practically content-free. Computational languages use terms referring to the actual world, i.e., to entities, events, and thoughts. Thus, computational languages have actual-world references and semantics. They are thus no longer context- or content-free. However, computational languages always have fixed meanings and, for this reason, limited domains of reference. Finally, unlike formal and computational languages, natural languages are creative, dynamic, and productive. Consequently, they can refer to an unlimited number of objects and their attributes in an unlimited number of domains. The differences between the three theory languages enable us to reflect on the traditional problems of strong and weak AI.
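To make the three-way contrast concrete, here is a small illustrative sketch (not from the paper): a formal expression whose truth requires no reference to the world, a "computational-language" data structure whose identifiers are fixed references to actual-world entities, and a comment on why natural language escapes both. The thermostat domain is invented for illustration.

```python
# Illustrative contrast between the three theory languages discussed in the
# abstract; the example domain (a household thermostat) is invented.
from dataclasses import dataclass

# 1. Formal language: meaning is fixed by rules alone, with no actual-world
#    semantics -- the claim below is true regardless of what x "is about".
x = 3
formal_claim = (x + 2) * 2 == 2 * x + 4   # always True, content-free


# 2. Computational language: identifiers refer to actual-world entities and
#    events, but their meanings are fixed at design time, so the domain of
#    reference is limited to what the designer anticipated.
@dataclass
class ThermostatReading:
    room: str                   # refers to a real room
    temperature_celsius: float  # refers to a measured state of that room
    heating_on: bool            # refers to a real event in the heating system


reading = ThermostatReading(room="kitchen", temperature_celsius=17.5, heating_on=True)

# 3. Natural language is productive: the same situation can be redescribed in
#    unboundedly many ways ("the kitchen is still chilly, so the boiler is
#    running"), none of which the fixed schema above can anticipate.
print(formal_claim, reading)
```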

{"title":"Theory languages in designing artificial intelligence","authors":"Pertti Saariluoma,&nbsp;Antero Karvonen","doi":"10.1007/s00146-023-01716-y","DOIUrl":"10.1007/s00146-023-01716-y","url":null,"abstract":"<div><p>The foundations of AI design discourse are worth analyzing. Here, attention is paid to the nature of theory languages used in designing new AI technologies because the limits of these languages can clarify some fundamental questions in the development of AI. We discuss three types of theory language used in designing AI products: formal, computational, and natural. Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics. They are context- and practically content-free. Computational languages use terms referring to the actual world, i.e., to entities, events, and thoughts. Thus, computational languages have actual-world references and semantics. They are thus no longer context- or content-free. However, computational languages always have fixed meanings and, for this reason, limited domains of reference. Finally, unlike formal and computational languages, natural languages are creative, dynamic, and productive. Consequently, they can refer to an unlimited number of objects and their attributes in an unlimited number of domains. The differences between the three theory languages enable us to reflect on the traditional problems of strong and weak AI.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2249 - 2258"},"PeriodicalIF":2.9,"publicationDate":"2023-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s00146-023-01716-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114525708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Correction: Urban AI: understanding the emerging role of artificial intelligence in smart cities
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-04 | DOI: 10.1007/s00146-023-01702-4
Aale Luusua, Johanna Ylipulli, Marcus Foth, Alessandro Aurigi
{"title":"Correction: Urban AI: understanding the emerging role of artificial intelligence in smart cities","authors":"Aale Luusua,&nbsp;Johanna Ylipulli,&nbsp;Marcus Foth,&nbsp;Alessandro Aurigi","doi":"10.1007/s00146-023-01702-4","DOIUrl":"10.1007/s00146-023-01702-4","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2633 - 2633"},"PeriodicalIF":2.9,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142409681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Disengage to survive the AI-powered sensory overload world
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-07-04 | DOI: 10.1007/s00146-023-01714-0
Manh-Tung Ho, Quan-Hoang Vuong
{"title":"Disengage to survive the AI-powered sensory overload world","authors":"Manh-Tung Ho,&nbsp;Quan-Hoang Vuong","doi":"10.1007/s00146-023-01714-0","DOIUrl":"10.1007/s00146-023-01714-0","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2597 - 2598"},"PeriodicalIF":2.9,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116068347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Considerations for collecting data in Māori population for automatic detection of schizophrenia using natural language processing: a New Zealand experience
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-06-29 | DOI: 10.1007/s00146-023-01700-6
Randall Ratana, Hamid Sharifzadeh, Jamuna Krishnan

In this paper, we describe the challenges of collecting data in the Māori population for automatic detection of schizophrenia using natural language processing (NLP). Existing psychometric tools for detection are wide-ranging and do not meet the health needs of indigenous persons considered at risk of developing psychosis and/or schizophrenia. Automated methods using NLP have been developed to detect psychosis and schizophrenia but lack cultural nuance in their designs. Research incorporating the cultural aspects relevant to indigenous communities is lacking in the design of existing automatic prediction tools, and one of the main reasons is the scarcity of data from indigenous populations. This paper explores the current design of the New Zealand health care system and its potential impacts on access and inequities in the Māori population and details the methodology used to collect speech samples of Māori at risk of developing psychosis and schizophrenia. The paper also describes the major obstacles faced during speech data collection, key findings, and probable solutions.

{"title":"Considerations for collecting data in Māori population for automatic detection of schizophrenia using natural language processing: a New Zealand experience","authors":"Randall Ratana,&nbsp;Hamid Sharifzadeh,&nbsp;Jamuna Krishnan","doi":"10.1007/s00146-023-01700-6","DOIUrl":"10.1007/s00146-023-01700-6","url":null,"abstract":"<div><p>In this paper, we describe the challenges of collecting data in the Māori population for automatic detection of schizophrenia using natural language processing (NLP). Existing psychometric tools for detecting are wide ranging and do not meet the health needs of indigenous persons considered at risk of developing psychosis and/or schizophrenia. Automated methods using NLP have been developed to detect psychosis and schizophrenia but lack cultural nuance in their designs. Research incorporating the cultural aspects relevant to indigenous communities is lacking in the design of existing automatic prediction tools and one of the main reasons is the scarcity of data from indigenous populations. This paper explores the current design of the New Zealand health care system and its potential impacts on access and inequities in the Māori population and details the methodology used to collect speech samples of Māori at risk of developing psychosis and schizophrenia. The paper also describes the major obstacles faced during speech data collection, key findings, and probable solutions.</p></div>","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2201 - 2212"},"PeriodicalIF":2.9,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123963949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ChatGPT and societal dynamics: navigating the crossroads of AI and human interaction
IF 2.9 | Q2 Computer Science, Artificial Intelligence | Pub Date: 2023-06-28 | DOI: 10.1007/s00146-023-01713-1
Partha Pratim Ray, Pradip Kumar Das
{"title":"ChatGPT and societal dynamics: navigating the crossroads of AI and human interaction","authors":"Partha Pratim Ray,&nbsp;Pradip Kumar Das","doi":"10.1007/s00146-023-01713-1","DOIUrl":"10.1007/s00146-023-01713-1","url":null,"abstract":"","PeriodicalId":47165,"journal":{"name":"AI & Society","volume":"39 5","pages":"2595 - 2596"},"PeriodicalIF":2.9,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129928104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0