Spot the bot: Investigating user's detection cues for social bots and their willingness to verify Twitter profiles

IF 9.0 · CAS Region 1 (Psychology) · JCR Q1 (Psychology, Experimental) · Computers in Human Behavior · Pub Date: 2023-09-01 · DOI: 10.1016/j.chb.2023.107819
Thao Ngo, Magdalena Wischnewski, Rebecca Bernemann, Martin Jansen, Nicole Krämer
{"title":"发现机器人:调查用户对社交机器人的检测线索以及他们验证推特个人资料的意愿","authors":"Thao Ngo ,&nbsp;Magdalena Wischnewski ,&nbsp;Rebecca Bernemann ,&nbsp;Martin Jansen ,&nbsp;Nicole Krämer","doi":"10.1016/j.chb.2023.107819","DOIUrl":null,"url":null,"abstract":"<div><p>Detecting social bots is important for users to assess the credibility and trustworthiness of information on social media. In this work, we therefore investigate how users become suspicious of social bots and users' willingness to verify Twitter profiles. Focusing on political social bots, we first explored which cues users apply to detect social bots in a qualitative online study (<em>N</em> = 30). Content analysis revealed three cue categories: content and form, behavior, profile characteristics. In a subsequent online experiment (<em>N</em> = 221), we examined which cues evoke users’ willingness to verify profiles. Extending prior literature on partisan-motivated reasoning, we further investigated the effects of <em>type of profile</em> (bot, ambiguous, human) and <em>opinion-congruency</em>, i.e., whether a profile shares the same opinion or not, on the willingness to verify a Twitter profile. Our analysis showed that homogeneity in behavior and content and form was most important to users. Confirming our hypothesis, participants were more willing to verify opinion-incongruent profiles than congruent ones. Bot profiles were most likely to be verified. Our main conclusion is that users apply profile verification tools to confirm their perception of a social media profile instead of alleviating their uncertainties about it. Partisan-motivated reasoning drives profile verification for bot and human profiles.</p></div>","PeriodicalId":48471,"journal":{"name":"Computers in Human Behavior","volume":"146 ","pages":"Article 107819"},"PeriodicalIF":9.0000,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Spot the bot: Investigating user's detection cues for social bots and their willingness to verify Twitter profiles\",\"authors\":\"Thao Ngo ,&nbsp;Magdalena Wischnewski ,&nbsp;Rebecca Bernemann ,&nbsp;Martin Jansen ,&nbsp;Nicole Krämer\",\"doi\":\"10.1016/j.chb.2023.107819\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Detecting social bots is important for users to assess the credibility and trustworthiness of information on social media. In this work, we therefore investigate how users become suspicious of social bots and users' willingness to verify Twitter profiles. Focusing on political social bots, we first explored which cues users apply to detect social bots in a qualitative online study (<em>N</em> = 30). Content analysis revealed three cue categories: content and form, behavior, profile characteristics. In a subsequent online experiment (<em>N</em> = 221), we examined which cues evoke users’ willingness to verify profiles. Extending prior literature on partisan-motivated reasoning, we further investigated the effects of <em>type of profile</em> (bot, ambiguous, human) and <em>opinion-congruency</em>, i.e., whether a profile shares the same opinion or not, on the willingness to verify a Twitter profile. Our analysis showed that homogeneity in behavior and content and form was most important to users. Confirming our hypothesis, participants were more willing to verify opinion-incongruent profiles than congruent ones. Bot profiles were most likely to be verified. 
Our main conclusion is that users apply profile verification tools to confirm their perception of a social media profile instead of alleviating their uncertainties about it. Partisan-motivated reasoning drives profile verification for bot and human profiles.</p></div>\",\"PeriodicalId\":48471,\"journal\":{\"name\":\"Computers in Human Behavior\",\"volume\":\"146 \",\"pages\":\"Article 107819\"},\"PeriodicalIF\":9.0000,\"publicationDate\":\"2023-09-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Computers in Human Behavior\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S074756322300170X\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers in Human Behavior","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S074756322300170X","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Citations: 0

Abstract


Detecting social bots is important for users to assess the credibility and trustworthiness of information on social media. In this work, we investigate how users become suspicious of social bots and how willing they are to verify Twitter profiles. Focusing on political social bots, we first explored which cues users apply to detect social bots in a qualitative online study (N = 30). Content analysis revealed three cue categories: content and form, behavior, and profile characteristics. In a subsequent online experiment (N = 221), we examined which cues evoke users' willingness to verify profiles. Extending prior literature on partisan-motivated reasoning, we further investigated the effects of type of profile (bot, ambiguous, human) and opinion-congruency, i.e., whether a profile shares the same opinion as the user or not, on the willingness to verify a Twitter profile. Our analysis showed that homogeneity in behavior and in content and form was most important to users. Confirming our hypothesis, participants were more willing to verify opinion-incongruent profiles than congruent ones. Bot profiles were the most likely to be verified. Our main conclusion is that users apply profile verification tools to confirm their perception of a social media profile rather than to alleviate their uncertainties about it. Partisan-motivated reasoning drives profile verification for both bot and human profiles.
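As a hypothetical illustration only (the study examined human judgments, not automated detection), the sketch below turns the three cue categories from the content analysis into a toy heuristic bot scorer. Every feature, threshold, and weight here is an assumption; the behavior and content-and-form weights are set highest to mirror the finding that homogeneity in those categories mattered most to users.

```python
# A minimal, hypothetical sketch of a cue-based bot scorer. The three cue
# categories (content and form, behavior, profile characteristics) come from
# the study's content analysis; the specific features, thresholds, and
# weights are illustrative assumptions, not the authors' method.
from dataclasses import dataclass


@dataclass
class Profile:
    tweets: list[str]       # recent tweet texts
    tweets_per_day: float   # posting frequency
    followers: int
    following: int
    has_photo: bool
    bio: str


def content_and_form_score(p: Profile) -> float:
    """Homogeneous, repetitive content is treated as bot-like."""
    if not p.tweets:
        return 0.0
    unique_ratio = len(set(p.tweets)) / len(p.tweets)
    return 1.0 - unique_ratio  # 1.0 = every tweet identical


def behavior_score(p: Profile) -> float:
    """Implausibly high posting frequency is treated as bot-like."""
    return min(p.tweets_per_day / 50.0, 1.0)  # 50+/day caps the score


def profile_characteristics_score(p: Profile) -> float:
    """Sparse profiles (no photo, empty bio, skewed follow ratio) look bot-like."""
    score = 0.0
    if not p.has_photo:
        score += 0.4
    if not p.bio.strip():
        score += 0.3
    if p.following > 10 * max(p.followers, 1):
        score += 0.3
    return score


def bot_suspicion(p: Profile) -> float:
    """Weighted sum over the three cue categories, bounded in [0, 1]."""
    return (0.4 * content_and_form_score(p)
            + 0.4 * behavior_score(p)
            + 0.2 * profile_characteristics_score(p))


if __name__ == "__main__":
    suspicious = Profile(tweets=["Vote now!"] * 20, tweets_per_day=120.0,
                         followers=3, following=800, has_photo=False, bio="")
    print(f"{bot_suspicion(suspicious):.2f}")  # ~0.98 for this profile
```

A real detector would learn such weights from labeled data; the fixed values here simply encode the relative importance users attached to each cue category in the experiment.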

Source journal: Computers in Human Behavior
CiteScore: 19.10
Self-citation rate: 4.00%
Annual articles: 381
Review time: 40 days
About the journal: Computers in Human Behavior is a scholarly journal that explores the psychological aspects of computer use. It covers original theoretical works, research reports, literature reviews, and software and book reviews. The journal examines both the use of computers in psychology, psychiatry, and related fields, and the psychological impact of computer use on individuals, groups, and society. Articles discuss topics such as professional practice, training, research, human development, learning, cognition, personality, and social interactions. It focuses on human interactions with computers, considering the computer as a medium through which human behaviors are shaped and expressed. Professionals interested in the psychological aspects of computer use will find this journal valuable, even with limited knowledge of computers.