Perceptions, attitudes and trust toward artificial intelligence — An assessment of the public opinion

Gian Luca Liehner, Alexander Hick, Hannah Biermann, P. Brauner, M. Ziefle
Artificial Intelligence and Social Computing. DOI: 10.54941/ahfe1003271

Abstract

Over the last couple of years, artificial intelligence (AI), in particular machine learning algorithms, has rapidly entered our daily lives. Applications can be found in medicine, law, finance, production, education, mobility, and entertainment. To achieve this, a large amount of research has been undertaken to optimize algorithms that, by learning from data, are able to process natural language, recognize objects through computer vision, interact with their environment with the help of robotics, or take autonomous decisions without human input. With that, AI is acquiring core human capabilities, raising the question of how the use of AI affects our society and its individuals. To form a basis for addressing those questions, it is crucial to investigate the public perception of artificial intelligence. This area of research is, however, often overlooked: amid the fast development of AI technologies, the demands and wishes of individuals are frequently neglected. To counteract this, our study focuses on the public's perceptions of, attitudes toward, and trust in artificial intelligence. To that end, we followed a two-step research approach. We first conducted semi-structured interviews, which laid the foundation for an online questionnaire. Building upon the interviews, we designed an online questionnaire (N=124) in which, in addition to user diversity factors such as belief in a dangerous world, sensitivity to threat, and technology optimism, we asked respondents to rate prejudices, myths, risks, and chances associated with AI. Our results show that, in general, respondents view AI as a tool that can act independently, adapt, and help them in their daily lives. That being said, respondents also indicate that they are unable to understand the underlying mechanisms of AI and therefore doubt the maturity of the technology, leading to privacy concerns, fear of misuse, and security issues. While respondents are nevertheless willing to use AI, they are less willing to place their trust in the technology. From a user diversity point of view, we found that both trust and use intention are correlated with belief in a dangerous world and technology optimism. In summary, our research shows that while respondents are willing to use AI in their everyday lives, some concerns remain that can impact their trust in the technology. Further research should explore how these concerns can be mediated and included in a responsible development process that ensures a positive impact of AI on individuals' lives and our society.
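The reported link between user diversity factors (e.g. technology optimism) and trust in AI is a bivariate correlation. As a minimal sketch of how such a relationship could be computed, the following Pearson correlation uses fabricated Likert-scale responses; the variable names and data are illustrative assumptions, not the study's actual data or analysis code:

```python
from math import sqrt


def pearson(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


# Fabricated 6-point Likert responses for illustration only:
tech_optimism = [5, 4, 6, 3, 5, 2, 4, 5]
trust_in_ai = [4, 4, 5, 2, 5, 2, 3, 4]

print(f"r = {pearson(tech_optimism, trust_in_ai):.2f}")
```

In practice such an analysis would also report significance and control for the other user diversity factors, e.g. via `scipy.stats.pearsonr` or a regression model.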