Individual Deep Fake Recognition Skills are Affected by Viewer's Political Orientation, Agreement with Content and Device Used

Stefan Sütterlin, Torvald F. Ask, Sophia Mägerle, Sandra Glöckler, Leandra Wolf, Julian Schray, Alaya Chandi, Teodora Bursac, Ali Khodabakhsh, Benjamin J. Knox, Matthew Canham, R. Lugo

Journal: Interacción
Publication date: 2021-12-30
DOI: https://doi.org/10.31234/osf.io/hwujb
Citations: 1
Abstract
AI-generated “deep fakes” are becoming increasingly professional and can be expected to become an essential tool for cybercriminals conducting targeted and tailored social engineering attacks, as well as for others aiming to influence public opinion more generally. While the technological arms race is producing increasingly efficient forensic detection tools, these are unlikely to be in place and applied by ordinary users on an everyday basis any time soon, especially if social engineering attacks are camouflaged as unsuspicious conversations. To date, most cybercriminals do not yet have the necessary resources, competencies, or the required raw material featuring the target to produce perfect impersonations. To raise awareness and efficiently train individuals to recognize the most widespread deep fakes, understanding what may cause individual differences in the ability to recognize them is central. Previous research has suggested a close relationship between political attitudes and top-down perceptual and subsequent cognitive processing styles. In this study, we aimed to investigate the impact of political attitudes and of agreement with the political message content on individuals’ deep fake recognition skills.

In this study, 163 adults (72 females = 44.2%) judged a series of video clips of politicians’ statements from across the political spectrum regarding their authenticity and their agreement with the message conveyed. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each particular statement, more global political attitudes towards social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS).

Data analysis revealed robust negative associations between participants’ general, and in particular social, conservatism and their ability to recognize fabricated videos. This effect was pronounced where there was specific agreement with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized as such than those watched on stationary computers.

To the best of our knowledge, this study is the first to investigate and establish the association between political attitudes and interindividual differences in deep fake recognition. The study further supports recently published research suggesting relationships between conservatism and the perceived credibility of conspiracy theories and fake news in general. Implications for further research on the psychological mechanisms underlying this effect are discussed.
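The negative association reported between conservatism scores and recognition accuracy is the kind of relationship typically quantified with a correlation coefficient. The paper does not publish its analysis code, so the following is only a minimal sketch of how such an association could be computed; the SECS scores and accuracy values below are hypothetical illustration data, not the study's data.

```python
import statistics


def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


# Hypothetical data: SECS conservatism scores (0-100) and each
# participant's proportion of fabricated videos correctly identified.
secs = [20, 35, 50, 65, 80, 90]
accuracy = [0.85, 0.80, 0.70, 0.65, 0.55, 0.50]

# A negative r corresponds to the inverse association the abstract reports.
print(pearson_r(secs, accuracy))
```

With real data, such an estimate would normally be accompanied by a significance test and covariate controls (e.g. for viewing device), which a plain correlation does not provide.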