When combinations of humans and AI are useful: A systematic review and meta-analysis
Michelle Vaccaro, Abdullah Almaatouq, Thomas Malone
Pub Date: 2024-10-28 | DOI: 10.1038/s41562-024-02024-1
Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ g = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems.
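To make the headline effect size concrete: Hedges' g is a standardized mean difference with a small-sample correction, so g = −0.23 indicates that human–AI combinations scored roughly a quarter of a standard deviation below the better of the two baselines. The sketch below shows the standard computation with made-up numbers; the means, standard deviations and sample sizes are illustrative, not values from the paper.

```python
import math

def hedges_g(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference (Cohen's d) with Hedges' small-sample correction."""
    df = n_a + n_b - 2
    pooled_sd = math.sqrt(((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / df)
    d = (mean_a - mean_b) / pooled_sd
    j = 1 - 3 / (4 * df - 1)  # Hedges' small-sample correction factor
    return d * j

# Hypothetical accuracy scores: human-AI combination vs the best of human or AI alone.
# A negative g means the combination performed worse than the stronger baseline.
print(hedges_g(mean_a=0.72, mean_b=0.77, sd_a=0.20, sd_b=0.21, n_a=60, n_b=60))
```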
{"title":"When combinations of humans and AI are useful: A systematic review and meta-analysis","authors":"Michelle Vaccaro, Abdullah Almaatouq, Thomas Malone","doi":"10.1038/s41562-024-02024-1","DOIUrl":"https://doi.org/10.1038/s41562-024-02024-1","url":null,"abstract":"<p>Inspired by the increasing use of artificial intelligence (AI) to augment humans, researchers have studied human–AI systems involving different tasks, systems and populations. Despite such a large body of work, we lack a broad conceptual understanding of when combinations of humans and AI are better than either alone. Here we addressed this question by conducting a preregistered systematic review and meta-analysis of 106 experimental studies reporting 370 effect sizes. We searched an interdisciplinary set of databases (the Association for Computing Machinery Digital Library, the Web of Science and the Association for Information Systems eLibrary) for studies published between 1 January 2020 and 30 June 2023. Each study was required to include an original human-participants experiment that evaluated the performance of humans alone, AI alone and human–AI combinations. First, we found that, on average, human–AI combinations performed significantly worse than the best of humans or AI alone (Hedges’ <i>g</i> = −0.23; 95% confidence interval, −0.39 to −0.07). Second, we found performance losses in tasks that involved making decisions and significantly greater gains in tasks that involved creating content. Finally, when humans outperformed AI alone, we found performance gains in the combination, but when AI outperformed humans alone, we found losses. Limitations of the evidence assessed here include possible publication bias and variations in the study designs analysed. Overall, these findings highlight the heterogeneity of the effects of human–AI collaboration and point to promising avenues for improving human–AI systems.</p>","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":29.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142519255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The case for human–AI interaction as system 0 thinking
Massimo Chiriatti, Marianna Ganapini, Enrico Panai, Mario Ubiali, Giuseppe Riva
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-01995-5
{"title":"The case for human–AI interaction as system 0 thinking","authors":"Massimo Chiriatti, Marianna Ganapini, Enrico Panai, Mario Ubiali, Giuseppe Riva","doi":"10.1038/s41562-024-01995-5","DOIUrl":"10.1038/s41562-024-01995-5","url":null,"abstract":"","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486673","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new sociology of humans and machines
Milena Tsvetkova, Taha Yasseri, Niccolo Pescetelli, Tobias Werner
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-02001-8
From fake social media accounts and generative artificial intelligence chatbots to trading algorithms and self-driving vehicles, robots, bots and algorithms are proliferating and permeating our communication channels, social interactions, economic transactions and transportation arteries. Networks of multiple interdependent and interacting humans and intelligent machines constitute complex social systems for which the collective outcomes cannot be deduced from either human or machine behaviour alone. Under this paradigm, we review recent research and identify general dynamics and patterns in situations of competition, coordination, cooperation, contagion and collective decision-making, with context-rich examples from high-frequency trading markets, a social media platform, an open collaboration community and a discussion forum. To ensure more robust and resilient human–machine communities, we require a new sociology of humans and machines. Researchers should study these communities using complex system methods; engineers should explicitly design artificial intelligence for human–machine and machine–machine interactions; and regulators should govern the ecological diversity and social co-development of humans and machines. This Perspective calls for a new sociology of humans and machines to study groups and networks comprising multiple interacting humans and algorithms, bots or robots. A deeper understanding of human–machine social systems can contribute new and valued insights for AI research, design and policy.
{"title":"A new sociology of humans and machines","authors":"Milena Tsvetkova, Taha Yasseri, Niccolo Pescetelli, Tobias Werner","doi":"10.1038/s41562-024-02001-8","DOIUrl":"10.1038/s41562-024-02001-8","url":null,"abstract":"From fake social media accounts and generative artificial intelligence chatbots to trading algorithms and self-driving vehicles, robots, bots and algorithms are proliferating and permeating our communication channels, social interactions, economic transactions and transportation arteries. Networks of multiple interdependent and interacting humans and intelligent machines constitute complex social systems for which the collective outcomes cannot be deduced from either human or machine behaviour alone. Under this paradigm, we review recent research and identify general dynamics and patterns in situations of competition, coordination, cooperation, contagion and collective decision-making, with context-rich examples from high-frequency trading markets, a social media platform, an open collaboration community and a discussion forum. To ensure more robust and resilient human–machine communities, we require a new sociology of humans and machines. Researchers should study these communities using complex system methods; engineers should explicitly design artificial intelligence for human–machine and machine–machine interactions; and regulators should govern the ecological diversity and social co-development of humans and machines. This Perspective calls for a new sociology of humans and machines to study groups and networks comprising multiple interacting humans and algorithms, bots or robots. A deeper understanding of human–machine social systems can contribute new and valued insights for AI research, design and policy.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142487019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Risks and protective measures for synthetic relationships
Christopher Starke, Alfio Ventura, Clara Bersch, Meeyoung Cha, Claes de Vreese, Philipp Doebler, Mengchen Dong, Nicole Krämer, Margarita Leib, Jochen Peter, Lea Schäfer, Ivan Soraperra, Jessica Szczuka, Erik Tuchtfeld, Rebecca Wald, Nils Köbis
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-02005-4
As artificial intelligence tools become more sophisticated, humans build synthetic relationships with them. Synthetic relationships differ fundamentally from traditional human–machine interactions and present new risks, such as privacy breaches, psychological manipulation and the erosion of human autonomy. This necessitates proactive, human-centred policies.
{"title":"Risks and protective measures for synthetic relationships","authors":"Christopher Starke, Alfio Ventura, Clara Bersch, Meeyoung Cha, Claes de Vreese, Philipp Doebler, Mengchen Dong, Nicole Krämer, Margarita Leib, Jochen Peter, Lea Schäfer, Ivan Soraperra, Jessica Szczuka, Erik Tuchtfeld, Rebecca Wald, Nils Köbis","doi":"10.1038/s41562-024-02005-4","DOIUrl":"10.1038/s41562-024-02005-4","url":null,"abstract":"As artificial intelligence tools become more sophisticated, humans build synthetic relationships with them. Synthetic relationships differ fundamentally from traditional human–machine interactions and present new risks, such as privacy breaches, psychological manipulation and the erosion of human autonomy. This necessitates proactive, human-centred policies.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building machines that learn and think with people
Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, Mina Lee, Cedegao E. Zhang, Tan Zhi-Xuan, Mark Ho, Vikash Mansinghka, Adrian Weller, Joshua B. Tenenbaum, Thomas L. Griffiths
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-01991-9
What do we want from machine intelligence? We envision machines that are not just tools for thought but partners in thought: reasonable, insightful, knowledgeable, reliable and trustworthy systems that think with us. Current artificial intelligence systems satisfy some of these criteria, some of the time. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called ‘thought partners’, systems built to meet our expectations and complement our limitations. We lay out several modes of collaborative thought in which humans and artificial intelligence thought partners can engage, and we propose desiderata for human-compatible thought partnerships. Drawing on motifs from computational cognitive science, we motivate an alternative scaling path for the design of thought partners and ecosystems around their use through a Bayesian lens, whereby the partners we construct actively build and reason over models of the human and world. In this Perspective, the authors advance a view for the science of collaborative cognition to engineer systems that can be considered thought partners, systems built to meet our expectations and complement our limitations.
{"title":"Building machines that learn and think with people","authors":"Katherine M. Collins, Ilia Sucholutsky, Umang Bhatt, Kartik Chandra, Lionel Wong, Mina Lee, Cedegao E. Zhang, Tan Zhi-Xuan, Mark Ho, Vikash Mansinghka, Adrian Weller, Joshua B. Tenenbaum, Thomas L. Griffiths","doi":"10.1038/s41562-024-01991-9","DOIUrl":"10.1038/s41562-024-01991-9","url":null,"abstract":"What do we want from machine intelligence? We envision machines that are not just tools for thought but partners in thought: reasonable, insightful, knowledgeable, reliable and trustworthy systems that think with us. Current artificial intelligence systems satisfy some of these criteria, some of the time. In this Perspective, we show how the science of collaborative cognition can be put to work to engineer systems that really can be called ‘thought partners’, systems built to meet our expectations and complement our limitations. We lay out several modes of collaborative thought in which humans and artificial intelligence thought partners can engage, and we propose desiderata for human-compatible thought partnerships. Drawing on motifs from computational cognitive science, we motivate an alternative scaling path for the design of thought partners and ecosystems around their use through a Bayesian lens, whereby the partners we construct actively build and reason over models of the human and world. In this Perspective, the authors advance a view for the science of collaborative cognition to engineer systems that can be considered thought partners, systems built to meet our expectations and complement our limitations.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Metaverse technologies can foster an inclusive society
Daisuke Sakamoto, Tetsuo Ono
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-01987-5
{"title":"Metaverse technologies can foster an inclusive society","authors":"Daisuke Sakamoto, Tetsuo Ono","doi":"10.1038/s41562-024-01987-5","DOIUrl":"10.1038/s41562-024-01987-5","url":null,"abstract":"","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142487006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Promises and challenges of generative artificial intelligence for human learning
Lixiang Yan, Samuel Greiff, Ziwen Teuber, Dragan Gašević
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-02004-5
Generative artificial intelligence (GenAI) holds the potential to transform the delivery, cultivation and evaluation of human learning. Here the authors examine the integration of GenAI as a tool for human learning, addressing its promises and challenges from a holistic viewpoint that integrates insights from learning sciences, educational technology and human–computer interaction. GenAI promises to enhance learning experiences by scaling personalized support, diversifying learning materials, enabling timely feedback and innovating assessment methods. However, it also presents critical issues such as model imperfections, ethical dilemmas and the disruption of traditional assessments. Thus, cultivating AI literacy and adaptive skills is imperative for facilitating informed engagement with GenAI technologies. Rigorous research across learning contexts is essential to evaluate GenAI’s effect on human cognition, metacognition and creativity. Humanity must learn with and about GenAI, ensuring that it becomes a powerful ally in the pursuit of knowledge and innovation, rather than a crutch that undermines our intellectual abilities. This Perspective describes the roles of generative AI in providing personalized support, diversity and innovative assessment in learning. However, it also raises ethical concerns and highlights issues such as model imperfection, underscoring the need for AI literacy and adaptability.
{"title":"Promises and challenges of generative artificial intelligence for human learning","authors":"Lixiang Yan, Samuel Greiff, Ziwen Teuber, Dragan Gašević","doi":"10.1038/s41562-024-02004-5","DOIUrl":"10.1038/s41562-024-02004-5","url":null,"abstract":"Generative artificial intelligence (GenAI) holds the potential to transform the delivery, cultivation and evaluation of human learning. Here the authors examine the integration of GenAI as a tool for human learning, addressing its promises and challenges from a holistic viewpoint that integrates insights from learning sciences, educational technology and human–computer interaction. GenAI promises to enhance learning experiences by scaling personalized support, diversifying learning materials, enabling timely feedback and innovating assessment methods. However, it also presents critical issues such as model imperfections, ethical dilemmas and the disruption of traditional assessments. Thus, cultivating AI literacy and adaptive skills is imperative for facilitating informed engagement with GenAI technologies. Rigorous research across learning contexts is essential to evaluate GenAI’s effect on human cognition, metacognition and creativity. Humanity must learn with and about GenAI, ensuring that it becomes a powerful ally in the pursuit of knowledge and innovation, rather than a crutch that undermines our intellectual abilities. This Perspective describes the roles of generative AI in providing personalized support, diversity and innovative assessment in learning. However, it also raises ethical concerns and highlights issues such as model imperfection, underscoring the need for AI literacy and adaptability.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142487043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How developments in natural language processing help us in understanding human behaviour
Rada Mihalcea, Laura Biester, Ryan L. Boyd, Zhijing Jin, Veronica Perez-Rosas, Steven Wilson, James W. Pennebaker
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-01938-0
The ways people use language can reveal clues to their emotions, social behaviours, thinking styles, cultures and the worlds around them. In the past two decades, research at the intersection of social psychology and computer science has been developing tools to analyse natural language from written or spoken text to better understand social processes and behaviour. The goal of this Review is to provide a brief overview of the methods and data currently being used and to discuss the underlying meaning of what language analyses can reveal in comparison with more traditional methodologies such as surveys or hand-scored language samples. Language reveals clues to human emotions, social behaviours, thinking styles and cultures. This Review provides a brief overview of computational methods to analyse natural language from written or spoken text as a new tool to investigate social processes and understand human behaviour.
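One of the simplest families of methods in this space is dictionary-based word counting, in which the share of words falling into psychologically meaningful categories serves as a rough behavioural signal. The sketch below uses tiny hypothetical word lists (not the Review's categories or tools) to show the idea in a few lines of Python.

```python
import re
from collections import Counter

# Tiny, hypothetical category dictionaries for illustration only
CATEGORIES = {
    "positive_emotion": {"happy", "good", "great", "love"},
    "social": {"we", "friend", "talk", "together"},
}

def category_rates(text):
    """Return the fraction of words in the text that fall into each category."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values()) or 1
    return {cat: sum(counts[w] for w in vocab) / total for cat, vocab in CATEGORIES.items()}

print(category_rates("We love to talk together about the great things that make us happy."))
```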
{"title":"How developments in natural language processing help us in understanding human behaviour","authors":"Rada Mihalcea, Laura Biester, Ryan L. Boyd, Zhijing Jin, Veronica Perez-Rosas, Steven Wilson, James W. Pennebaker","doi":"10.1038/s41562-024-01938-0","DOIUrl":"10.1038/s41562-024-01938-0","url":null,"abstract":"The ways people use language can reveal clues to their emotions, social behaviours, thinking styles, cultures and the worlds around them. In the past two decades, research at the intersection of social psychology and computer science has been developing tools to analyse natural language from written or spoken text to better understand social processes and behaviour. The goal of this Review is to provide a brief overview of the methods and data currently being used and to discuss the underlying meaning of what language analyses can reveal in comparison with more traditional methodologies such as surveys or hand-scored language samples. Language reveals clues to human emotions, social behaviours, thinking styles and cultures. This Review provides a brief overview of computational methods to analyse natural language from written or spoken text as a new tool to investigate social processes and understand human behaviour.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embracing the ubiquity of machines
Pub Date: 2024-10-22 | DOI: 10.1038/s41562-024-02049-6
As digital technologies become ever more pervasive and sophisticated, understanding the nuances of the relationship between humans and machines becomes increasingly important. Spanning a range of disciplines, from computer science and psychology to medicine and education, this issue’s Focus includes a diverse array of voices and perspectives on the many ways in which humans and digital machines interact and communicate with each other, as well as the societal implications and ethical considerations of emerging technologies.
{"title":"Embracing the ubiquity of machines","authors":"","doi":"10.1038/s41562-024-02049-6","DOIUrl":"10.1038/s41562-024-02049-6","url":null,"abstract":"As digital technologies become ever more pervasive and sophisticated, understanding the nuances of the relationship between humans and machines becomes increasingly important. Spanning a range of disciplines, from computer science and psychology to medicine and education, this issue’s Focus includes a diverse array of voices and perspectives on the many ways in which humans and digital machines interact and communicate with each other, as well as the societal implications and ethical considerations of emerging technologies.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":21.4,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.nature.com/articles/s41562-024-02049-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142486672","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Honesty oaths for rule-following
Shaul Shalvi
Pub Date: 2024-10-21 | DOI: 10.1038/s41562-024-02018-z
Honesty oaths are commonly used to promote ethical behaviour, but their effectiveness is not well understood. A mega-study involving thousands of people shows that taking an oath to be honest can reduce tax evasion in an online economic game.
{"title":"Honesty oaths for rule-following","authors":"Shaul Shalvi","doi":"10.1038/s41562-024-02018-z","DOIUrl":"https://doi.org/10.1038/s41562-024-02018-z","url":null,"abstract":"Honesty oaths are commonly used to promote ethical behaviour, but their effectiveness is not well understood. A mega-study involving thousands of people shows that taking an oath to be honest can reduce tax evasion in an online economic game.","PeriodicalId":19074,"journal":{"name":"Nature Human Behaviour","volume":null,"pages":null},"PeriodicalIF":29.9,"publicationDate":"2024-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142451796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}