Adrian Lüders, Stefan Reiss, Alejandro Dinkelberg, Pádraig MacCarron, Michael Quayle
{"title":"不是我们这类人!党派偏见如何扭曲人们对 Twitter 上政治机器人的看法(现在 X)","authors":"Adrian Lüders, Stefan Reiss, Alejandro Dinkelberg, Pádraig MacCarron, Michael Quayle","doi":"10.1111/bjso.12794","DOIUrl":null,"url":null,"abstract":"Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting psychological factors shaping human–bot interactions. This research addresses this gap within the context of the US‐American electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions on through survey data from <jats:italic>N</jats:italic> = 452 Twitter (now X) users. Results suggest that users tend to attribute bot‐related dangers to political adversaries, rather than recognizing bots as a shared threat to political discourse. Study 2 (<jats:italic>N</jats:italic> = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots. However, biased judgements may undermine this motivation in praxis. In sum, the presented findings underscore the importance of interdisciplinary strategies that consider technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.","PeriodicalId":48304,"journal":{"name":"British Journal of Social Psychology","volume":"7 1","pages":""},"PeriodicalIF":3.2000,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Not our kind of crowd! How partisan bias distorts perceptions of political bots on Twitter (now X)\",\"authors\":\"Adrian Lüders, Stefan Reiss, Alejandro Dinkelberg, Pádraig MacCarron, Michael Quayle\",\"doi\":\"10.1111/bjso.12794\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting psychological factors shaping human–bot interactions. This research addresses this gap within the context of the US‐American electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions on through survey data from <jats:italic>N</jats:italic> = 452 Twitter (now X) users. Results suggest that users tend to attribute bot‐related dangers to political adversaries, rather than recognizing bots as a shared threat to political discourse. Study 2 (<jats:italic>N</jats:italic> = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots. However, biased judgements may undermine this motivation in praxis. 
In sum, the presented findings underscore the importance of interdisciplinary strategies that consider technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.\",\"PeriodicalId\":48304,\"journal\":{\"name\":\"British Journal of Social Psychology\",\"volume\":\"7 1\",\"pages\":\"\"},\"PeriodicalIF\":3.2000,\"publicationDate\":\"2024-08-29\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"British Journal of Social Psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1111/bjso.12794\",\"RegionNum\":2,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, SOCIAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"British Journal of Social Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1111/bjso.12794","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, SOCIAL","Score":null,"Total":0}
Not our kind of crowd! How partisan bias distorts perceptions of political bots on Twitter (now X)
Social bots, employed to manipulate public opinion, pose a novel threat to digital societies. Existing bot research has emphasized technological aspects while neglecting psychological factors shaping human–bot interactions. This research addresses this gap within the context of the US-American electorate. Two datasets provide evidence that partisanship distorts (a) online users' representation of bots, (b) their ability to identify them, and (c) their intentions to interact with them. Study 1 explores global bot perceptions through survey data from N = 452 Twitter (now X) users. Results suggest that users tend to attribute bot-related dangers to political adversaries, rather than recognizing bots as a shared threat to political discourse. Study 2 (N = 619) evaluates the consequences of such misrepresentations for the quality of online interactions. In an online experiment, participants were asked to differentiate between human and bot profiles. Results indicate that partisan leanings explained systematic judgement errors. The same data suggest that participants aim to avoid interacting with bots; however, biased judgements may undermine this motivation in practice. In sum, the presented findings underscore the importance of interdisciplinary strategies that consider technological and human factors to address the threats posed by bots in a rapidly evolving digital landscape.
Journal introduction:
The British Journal of Social Psychology publishes work from scholars based in all parts of the world, and manuscripts that present data on a wide range of populations inside and outside the UK. It publishes original papers in all areas of social psychology, including:
• social cognition
• attitudes
• group processes
• social influence
• intergroup relations
• self and identity
• nonverbal communication
• social psychological aspects of personality, affect and emotion
• language and discourse
Submissions addressing these topics from a variety of approaches and methods, both quantitative and qualitative, are welcomed. We publish papers of the following kinds:
• empirical papers that address theoretical issues;
• theoretical papers, including analyses of existing social psychological theories and presentations of theoretical innovations, extensions, or integrations;
• review papers that provide an evaluation of work within a given area of social psychology and that present proposals for further research in that area;
• methodological papers concerning issues that are particularly relevant to a wide range of social psychologists;
• an invited agenda article as the first article in the first part of every volume.
The editorial team aims to handle papers as efficiently as possible. In 2016, papers were triaged within less than a week, and the average turnaround time from receipt of the manuscript to first decision sent back to the authors was 47 days.