Personhood and AI: Why large language models don’t understand us
Jacob Browning
Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01724-y | AI & Society 39(5): 2499–2506

Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast this with a different account of personhood, one where an agent is a person if they are autonomous, responsive to norms, and culpable for their actions. On this latter account, I show that LLMs are not person-like, as evidenced by their propensity for dishonesty, inconsistency, and offensiveness. Moreover, I argue that current LLMs, given the way they are designed and trained, cannot be persons—either social or Cartesian. The upshot is that contemporary LLMs are not, and never will be, persons.
Identifying arbitrage opportunities in retail markets with artificial intelligence
Jitsama Tanlamai, Warut Khern-am-nuai, Yossiri Adulyasak
Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01718-w | AI & Society 39(5): 2615–2630

This study uses an artificial intelligence (AI) model to identify arbitrage opportunities in the retail marketplace. Specifically, we develop an AI model to predict the optimal purchasing point based on the price movement of products in the market. Our model is trained on a large dataset collected from an online marketplace in the United States and is enhanced by incorporating user-generated content (UGC), which we show empirically to be significantly informative. Overall, the AI model attains a precision above 90% and a recall above 80% in an out-of-sample test. In addition, we conduct a field experiment to verify the external validity of the AI model in a real-life setting. The model identifies 293 arbitrage opportunities during a one-year field experiment and generates a profit of $7.06 per arbitrage opportunity. These results demonstrate that AI performs exceptionally well at identifying arbitrage opportunities with tangible economic value in retail markets. Our results also yield important implications regarding the role of AI in society, from both the consumer and firm perspectives.
Taking AI risks seriously: a new assessment model for the AI Act
Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo, Luciano Floridi
Pub Date: 2023-07-12 | DOI: 10.1007/s00146-023-01723-z | AI & Society 39(5): 2493–2497

The EU Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, we propose applying the risk categories to specific AI scenarios, rather than solely to fields of application, using a risk assessment model that integrates the AIA with the risk approach arising from the Intergovernmental Panel on Climate Change (IPCC) and related literature. This integrated model enables the estimation of AI risk magnitude by considering the interaction between (a) risk determinants, (b) individual drivers of determinants, and (c) multiple risk types. We illustrate this model using large language models (LLMs) as an example.
Theory languages in designing artificial intelligence
Pertti Saariluoma, Antero Karvonen
Pub Date: 2023-07-10 | DOI: 10.1007/s00146-023-01716-y | AI & Society 39(5): 2249–2258

The foundations of AI design discourse are worth analyzing. Here, attention is paid to the nature of theory languages used in designing new AI technologies, because the limits of these languages can clarify some fundamental questions in the development of AI. We discuss three types of theory language used in designing AI products: formal, computational, and natural. Formal languages, such as mathematics, logic, and programming languages, have fixed meanings and no actual-world semantics; they are context-free and practically content-free. Computational languages use terms referring to the actual world, i.e., to entities, events, and thoughts. They therefore have actual-world references and semantics, and are no longer context- or content-free. However, computational languages still have fixed meanings and, for this reason, limited domains of reference. Finally, unlike formal and computational languages, natural languages are creative, dynamic, and productive. Consequently, they can refer to an unlimited number of objects and their attributes in an unlimited number of domains. The differences between the three theory languages enable us to reflect on the traditional problems of strong and weak AI.
Considerations for collecting data in Māori population for automatic detection of schizophrenia using natural language processing: a New Zealand experience
Randall Ratana, Hamid Sharifzadeh, Jamuna Krishnan
Pub Date: 2023-06-29 | DOI: 10.1007/s00146-023-01700-6 | AI & Society 39(5): 2201–2212

In this paper, we describe the challenges of collecting data in the Māori population for automatic detection of schizophrenia using natural language processing (NLP). Existing psychometric tools for detecting these conditions are wide-ranging but do not meet the health needs of indigenous persons considered at risk of developing psychosis and/or schizophrenia. Automated methods using NLP have been developed to detect psychosis and schizophrenia but lack cultural nuance in their designs. Research incorporating the cultural aspects relevant to indigenous communities is lacking in the design of existing automatic prediction tools, and one of the main reasons is the scarcity of data from indigenous populations. This paper explores the current design of the New Zealand health care system and its potential impacts on access and inequities in the Māori population, and details the methodology used to collect speech samples from Māori at risk of developing psychosis and schizophrenia. The paper also describes the major obstacles faced during speech data collection, key findings, and probable solutions.
ChatGPT and societal dynamics: navigating the crossroads of AI and human interaction
Partha Pratim Ray, Pradip Kumar Das
Pub Date: 2023-06-28 | DOI: 10.1007/s00146-023-01713-1 | AI & Society 39(5): 2595–2596