
Latest publications in Computers in Human Behavior: Artificial Humans

Am I still human? Wearing an exoskeleton impacts self-perceptions of warmth, competence, attractiveness, and machine-likeness
Pub Date: 2024-05-31 | DOI: 10.1016/j.chbah.2024.100073
Sandra Maria Siedl, Martina Mara

Occupational exoskeletons are body-worn technologies capable of enhancing a wearer's naturally given strength at work. Despite increasing interest in their physical effects, their implications for user self-perception have been largely overlooked. Addressing common concerns about body-enhancing technologies, our study explored how real-world use of a robotic exoskeleton affects a wearer's mechanistic dehumanization and perceived attractiveness of the self. In a within-subjects laboratory experiment, n = 119 participants performed various practical work tasks (carrying, screwing, riveting) with and without the Ironhand active hand exoskeleton. After each condition, they completed a questionnaire. We expected that in the exoskeleton condition self-perceptions of warmth and attractiveness would be less pronounced and self-perceptions of being competent and machine-like would be more pronounced. Study data supported these hypotheses and showed perceived competence, machine-likeness, and attractiveness to be relevant to technology acceptance. Our findings provide the first evidence that body-enhancement technologies may be associated with tendencies towards self-dehumanization, and underline the multifaceted role of exoskeleton-induced competence gain. By examining user self-perceptions that relate to mechanistic dehumanization and aesthetic appeal, our research highlights the need to better understand psychological impacts of exoskeletons on human wearers.
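The abstract does not state which statistical tests were used; as a rough illustration of how such within-subjects condition comparisons are commonly analyzed, the sketch below runs a paired comparison on simulated ratings. The variable names, rating scale, and effect size are hypothetical assumptions for illustration, not the study's data or analysis.

```python
# Hypothetical sketch of a within-subjects comparison (simulated data, not the study's analysis).
# Assumes each of the n = 119 participants rated self-perceived competence on a 1-7 scale,
# once without and once with the exoskeleton.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 119  # sample size reported in the abstract

without_exo = np.clip(rng.normal(4.5, 1.0, n), 1, 7)             # baseline ratings
with_exo = np.clip(without_exo + rng.normal(0.4, 0.8, n), 1, 7)  # assumed small increase

t, p = stats.ttest_rel(with_exo, without_exo)                    # paired t-test across conditions
diff = with_exo - without_exo
d_z = diff.mean() / diff.std(ddof=1)                             # within-subjects Cohen's d
print(f"t({n - 1}) = {t:.2f}, p = {p:.4f}, d_z = {d_z:.2f}")
```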

Citations: 0
On trust in humans and trust in artificial intelligence: A study with samples from Singapore and Germany extending recent research
Pub Date: 2024-05-10 | DOI: 10.1016/j.chbah.2024.100070
Christian Montag , Benjamin Becker , Benjamin J. Li

The AI revolution is shaping societies around the world. People interact daily with a growing number of products and services that feature AI integration. Rapid developments in AI will no doubt bring positive outcomes, but also challenges. In this context it is important to understand whether people trust this omni-use technology, because trust is an essential prerequisite for being willing to use AI products, which in turn likely affects how much AI will be embraced by national economies, with consequences for local workforces. To shed more light on trust in AI, the present work aims to understand how much the variables trust in AI and trust in humans overlap. This matters because much is already known about trust in humans; if the two concepts overlap, much of that understanding might transfer to trust in AI. In samples from Singapore (n = 535) and Germany (n = 954), we observed varying degrees of positive association between the trust-in-AI and trust-in-humans variables: the association was small and positive in Germany and moderate and positive in Singapore. Further, this paper revisits associations between individual differences in the Big Five personality traits and general attitudes towards AI, including trust.

The present work shows that trust in humans and trust in AI share only a small amount of variance, although this depends on culture (here, roughly 4 to 11 percent of shared variance). Future research should further investigate such associations, but should also consider assessments of trust in specific AI-empowered products and services, where the picture might be different.
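To put the reported overlap into familiar effect-size terms: shared variance between two variables is the square of their Pearson correlation, so the 4 to 11 percent range quoted above corresponds roughly to the correlations worked out below (an interpretation of the reported figures, not additional results from the paper).

```latex
% Shared variance is the squared correlation: R^2_{shared} = r^2, hence r = sqrt(R^2_{shared}).
\[
r_{\min} = \sqrt{0.04} = 0.20, \qquad r_{\max} = \sqrt{0.11} \approx 0.33,
\]
% i.e., a small association (presumably the German sample) up to a moderate one
% (presumably the Singaporean sample).
```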

Citations: 0
AI literacy for users – A comprehensive review and future research directions of learning methods, components, and effects
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100062
Marc Pinski, Alexander Benlian

The rapid advancement of artificial intelligence (AI) has brought transformative changes to various aspects of human life, leading to an exponential increase in the number of AI users. Broad access to and use of AI enables immense benefits but also gives rise to significant challenges. One way for AI users to address these challenges is to develop AI literacy, referring to human proficiency in the different subject areas of AI that enable purposeful, efficient, and ethical use of AI technologies. This study aims to comprehensively understand and structure the research on AI literacy for AI users through a systematic, scoping literature review. We therefore synthesize the literature, provide a conceptual framework, and develop a research agenda. Our review holistically assesses the fragmented AI literacy research landscape (68 papers) while critically examining its specificity to different user groups and its distinction from other technology literacies, revealing that research efforts are only partly well integrated. We organize our findings in an overarching conceptual framework structured along the learning methods leading to, the components constituting, and the effects stemming from AI literacy. Our research agenda, oriented along the developed conceptual framework, sheds light on the most promising research opportunities to prepare AI users for an AI-powered future of work and society.

Citations: 0
Modeling morality and spirituality in artificial chaplains
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100051
Mark Graves
{"title":"Modeling morality and spirituality in artificial chaplains","authors":"Mark Graves","doi":"10.1016/j.chbah.2024.100051","DOIUrl":"https://doi.org/10.1016/j.chbah.2024.100051","url":null,"abstract":"","PeriodicalId":100324,"journal":{"name":"Computers in Human Behavior: Artificial Humans","volume":"2 1","pages":"Article 100051"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2949882124000112/pdfft?md5=c4380ab3c86812f04171e97918fb3c5d&pid=1-s2.0-S2949882124000112-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139744221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Virtual vs. Human influencers: The battle for consumer hearts and minds
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100059
Abhishek Dondapati, Ranjit Kumar Dehury

Virtual influencers, or fictional CGI-generated social media personas, are gaining popularity. However, research lacks information on how they compare to human influencers in shaping consumer attitudes and purchase intent. This study examines whether perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent and the moderating effect of perceived authenticity. A 2 × 2 between-subjects experiment manipulated influencer type (virtual vs. human) and product type (hedonic vs. utilitarian). Young adult participants viewed an Instagram profile of a lifestyle influencer. Authenticity, perceived homophily, para-social relationship, and purchase intent were measured using established scales. Perceived homophily and para-social relationships mediate the effect of influencer type on purchase intent. A significant interaction showed that perceived authenticity moderated the mediated pathway, such that the indirect effect via para-social relationship and perceived homophily was stronger for human influencers. Maintaining an authentic persona is critical for virtual influencers to sway consumer behaviours, especially for audiences less familiar with social media.
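As an illustration of the mediation logic described above (an indirect effect of influencer type on purchase intent through perceived homophily), the sketch below estimates a single mediation path on simulated data. The variable names, coefficients, and simple two-regression approach are assumptions for illustration; the study's actual model additionally includes para-social relationship as a mediator and perceived authenticity as a moderator.

```python
# Minimal mediation sketch on simulated data (illustrative only, not the study's model).
# Indirect effect = a * b, where
#   a: effect of influencer type (0 = virtual, 1 = human) on perceived homophily
#   b: effect of perceived homophily on purchase intent, controlling for influencer type
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
influencer_human = rng.integers(0, 2, n).astype(float)                 # 0 = virtual, 1 = human
homophily = 3.0 + 0.6 * influencer_human + rng.normal(0, 1, n)         # assumed path a
intent = 2.0 + 0.5 * homophily + 0.2 * influencer_human + rng.normal(0, 1, n)

# Path a: mediator regressed on the predictor.
model_a = sm.OLS(homophily, sm.add_constant(influencer_human)).fit()
# Path b (and direct effect c'): outcome regressed on mediator and predictor.
X = sm.add_constant(np.column_stack([homophily, influencer_human]))
model_b = sm.OLS(intent, X).fit()

a = model_a.params[1]   # effect of influencer type on homophily
b = model_b.params[1]   # effect of homophily on intent
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect a*b = {a * b:.2f}")
```

In practice, indirect effects such as a*b are usually tested with bootstrapped confidence intervals rather than read off point estimates.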

Citations: 0
Trust in artificial intelligence: Literature review and main path analysis
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100043
Bruno Miranda Henrique , Eugene Santos Jr.

Artificial Intelligence (AI) is present in various modern systems, but its acceptance is still uneven across many fields. Medical diagnosis, self-driving cars, recommender systems, and robotics are examples of areas in which some humans distrust AI technology, which ultimately leads to low acceptance rates. Conversely, those same applications can attract users who over-rely on AI, acting as recommended by the systems without questioning the risks of a wrong decision. There is therefore an optimal balance with respect to trust in AI, achieved by calibrating expectations and capabilities. In this context, the literature on factors influencing trust in AI and its calibration is scattered across research fields, with no objective summaries of the overall evolution of the theme. To close this gap, this paper contributes a literature review of the most influential papers on trust in AI, selected by quantitative methods. It also proposes a Main Path Analysis of the literature, highlighting how the theme has evolved over the years. As a result, researchers will find an overview of trust in AI based on the most important papers, objectively selected, as well as tendencies and opportunities for future research.

Citations: 0
A review of assessment for learning with artificial intelligence
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2023.100040
Bahar Memarian, Tenzin Doleck

Reformed Assessment For Learning (AFL) practice centers on designing activities and on evaluation and feedback processes that improve student learning. While Artificial Intelligence (AI) has blossomed as a field in education, less work has examined the studies at the intersection of AFL and AI and the challenges they report. We review the literature to examine the state of work on AFL and AI in education. A search of Web of Science, SCOPUS, and Google Scholar yielded 35 studies for review. We share the trends in research design, AFL conceptions, and AI challenges found in the reviewed studies, and we discuss the implications of AFL and AI together with considerations for future research.

Citations: 0
Co-creating art with generative artificial intelligence: Implications for artworks and artists
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100056
Uwe Messer

Synthetic visual art is becoming a commodity due to generative artificial intelligence (AI). The trend of using AI for co-creation will not spare artists’ creative processes, and it is important to understand how the use of generative AI at different stages of the creative process affects both the evaluation of the artist and the result of the human-machine collaboration (i.e., the visual artifact). In three experiments (N = 560), this research explores how the evaluation of artworks is transformed by the revelation that the artist collaborated with AI at different stages of the creative process. The results show that co-created art is less liked and recognized, especially when AI was used in the implementation stage. While co-created art is perceived as more novel, it lacks creative authenticity, which exerts a dominant influence. The results also show that artists’ perceptions suffer from the co-creation process, and that artists who co-create are less admired because they are perceived as less authentic. Two boundary conditions are identified. The negative effect can be mitigated by disclosing the level of artist involvement in co-creation with AI (e.g., by training the algorithm on a curated set of images vs. simply prompting an off-the-shelf AI image generator). In the context of art that is perceived as commercially motivated (e.g., stock images), the effect is also diminished. This research has important implications for the literature on human-AI-collaboration, research on authenticity, and the ongoing policy debate regarding the transparency of algorithmic presence.

Citations: 0
The effect of source disclosure on evaluation of AI-generated messages
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100058
Sue Lim, Ralf Schmälzle

Advancements in artificial intelligence (AI) over the last decade demonstrate that machines can exhibit communicative behavior and influence how humans think, feel, and behave. In fact, the recent development of ChatGPT has shown that large language models (LLMs) can be leveraged to generate high-quality communication content at scale and across domains, suggesting that they will be increasingly used in practice. However, many questions remain about how knowing the source of the messages influences recipients' evaluation of and preference for AI-generated messages compared to human-generated messages. This paper investigated this topic in the context of vaping prevention messaging. In Study 1, which was pre-registered, we examined the influence of source disclosure on young adults' evaluation of AI-generated health prevention messages compared to human-generated messages. We found that source disclosure (i.e., labeling the source of a message as AI vs. human) significantly impacted the evaluation of the messages but did not significantly alter message rankings. In a follow-up study (Study 2), we examined how the influence of source disclosure may vary by the adults’ negative attitudes towards AI. We found a significant moderating effect of negative attitudes towards AI on message evaluation, but not for message selection. However, source disclosure decreased the preference for AI-generated messages for those with moderate levels (statistically significant) and high levels (directional) of negative attitudes towards AI. Overall, the results of this series of studies showed a slight bias against AI-generated messages once the source was disclosed, adding to the emerging area of study that lies at the intersection of AI and communication.
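The moderation pattern reported above (the effect of source disclosure depending on negative attitudes towards AI) is the kind of result typically probed with an interaction term in a regression model. The snippet below is a hypothetical sketch on simulated data with assumed variable names and coefficients, not the authors' analysis.

```python
# Hypothetical moderation sketch (simulated data, not the study's analysis):
# does the effect of labeling a message as AI-generated on message evaluation
# depend on negative attitudes towards AI?
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 300
df = pd.DataFrame({
    "ai_label": rng.integers(0, 2, n),    # 1 = message disclosed as AI-generated
    "neg_attitude": rng.normal(0, 1, n),  # standardized negative attitudes towards AI
})
# Assumed data-generating process: disclosure lowers evaluations more for higher neg_attitude.
df["evaluation"] = (5.0
                    - 0.3 * df["ai_label"]
                    - 0.4 * df["ai_label"] * df["neg_attitude"]
                    + rng.normal(0, 1, n))

model = smf.ols("evaluation ~ ai_label * neg_attitude", data=df).fit()
print(model.params[["ai_label", "ai_label:neg_attitude"]])  # main effect and interaction
```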

Citations: 0
Virtual voices for real change: The efficacy of virtual humans in pro-environmental social marketing for mitigating misinformation about climate change
Pub Date: 2024-01-01 | DOI: 10.1016/j.chbah.2024.100047
Won-Ki Moon , Y. Greg Song , Lucy Atkinson

Academics have focused their research on the rise of non-human entities, particularly virtual humans. To assess the effectiveness of virtual humans in influencing individual behavior through campaigns, we conducted two separate online experiments involving different participant groups: university students (N = 167) and U.S. adults (N = 320). We compared individuals' responses to video-based pro-environmental campaigns featuring either a virtual or an actual human scientist as the central figure, who provides testimonials about individual efforts to counter misinformation about climate change. The results indicate that an actual human protagonist evoked a stronger sense of identification than a virtual human counterpart. Nevertheless, we also observed that virtual humans can evoke empathy for the characters, leading individuals to perceive them as living entities who can have emotions. The insights gleaned from this study have the potential to shape the creation of virtual human content in various domains, including pro-social campaigns and marketing communications.

Citations: 0