Rosalind Franklin Society Proudly Announces the 2022 Award Recipient for Cyberpsychology, Behavior, and Social Networking.
Susan J Persky
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2023.29289.rfs2022. Cyberpsychology, Behavior, and Social Networking 26(7), 457.
COVID-19 and Sinophobia: Detecting Warning Signs of Radicalization on Twitter and Reddit.
Matthew Costello, Nishant Vishwamitra, Song Liao, Long Cheng, Feng Luo, Hongxin Hu
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0200. Cyberpsychology, Behavior, and Social Networking 26(7), 546-553.
Hate crimes and hateful rhetoric targeting individuals of Asian descent have increased since the outbreak of COVID-19. These troubling trends have heightened concerns about the role of the Internet in facilitating radicalization. This article explores the existence of three warning signs of radicalization (fixation, group identification, and energy bursts) using data from Twitter and Reddit. Data were collected before and after the outbreak of COVID-19 to assess the role of the pandemic in affecting social media behavior. Using computational social science and Natural Language Processing techniques, we looked for signs of radicalization targeting China or Chinese individuals. Results show that fixation on the terms China and Chinese increased on Twitter and Reddit after the pandemic began. Moreover, tweets and posts containing either of these terms became more hateful, offensive, and negative after the outbreak. We also found evidence of individuals identifying more closely with a particular group, or adopting an "us vs. them" mentality, after the outbreak of COVID-19. These findings were especially prominent in subreddits catering to self-identified Republicans and Conservatives. Finally, we detected bursts of activity on Twitter and Reddit following the start of the pandemic. These warning signs suggest COVID-19 may have had a radicalizing effect on some social media users. This work is important because it not only shows the potential radicalizing effect of the pandemic, but also demonstrates the ability to detect warning signs of radicalization on social media. This is critical, as detecting warning signs of radicalization can potentially help curb hate-fueled violence.
{"title":"COVID-19 and Sinophobia: Detecting Warning Signs of Radicalization on Twitter and Reddit.","authors":"Matthew Costello, Nishant Vishwamitra, Song Liao, Long Cheng, Feng Luo, Hongxin Hu","doi":"10.1089/cyber.2022.0200","DOIUrl":"https://doi.org/10.1089/cyber.2022.0200","url":null,"abstract":"<p><p>Hate crimes and hateful rhetoric targeting individuals of Asian descent have increased since the outbreak of COVID-19. These troubling trends have heightened concerns about the role of the Internet in facilitating radicalization. This article explores the existence of three warning signs of radicalization-fixation, group identification, and energy bursts-using data from Twitter and Reddit. Data were collected before and after the outbreak of COVID-19 to assess the role of the pandemic in affecting social media behavior. Using computational social science and Natural Language Processing techniques, we looked for signs of radicalization targeting China or Chinese individuals. Results show that fixation on the terms China and Chinese increased on Twitter and Reddit after the pandemic began. Moreover, tweets and posts containing either of these terms became more hateful, offensive, and negative after the outbreak. We also found evidence of individuals identifying more closely with a particular group, or adopting an \"us vs. them\" mentality, after the outbreak of COVID-19. These findings were especially prominent in subreddits catering to self-identified Republicans and Conservatives. Finally, we detected bursts of activity on Twitter and Reddit following the start of the pandemic. These warning signs suggest COVID-19 may have had a radicalizing effect on some social media users. This work is important because it not only shows the potential radicalizing effect of the pandemic, but also demonstrates the ability to detect warning signs of radicalization on social media. This is critical, as detecting warning signs of radicalization can potentially help curb hate-fueled violence.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"546-553"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9841364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Features for Hate? Using the Delphi Method to Explore Digital Determinants for Online Hate Perpetration and Possibilities for Intervention.
Ina Weber, Heidi Vandebosch, Karolien Poels, Sara Pabian
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0195. Cyberpsychology, Behavior, and Social Networking 26(7), 479-488.
Online hate speech on social media platforms causes harm to those who are victimized as well as to society at large. The prevalence of hateful content has thus prompted numerous calls for improved countermeasures and prevention. For such interventions to be effective, it is necessary to gain a nuanced understanding of the influences that facilitate the spread of hate speech. This study does so by investigating which digital determinants are relevant to online hate perpetration. Moreover, the study explores possibilities for different technology-driven preventive interventions. In doing so, it focuses on the digital environments in which online hate speech is most often produced and disseminated, namely social media platforms. We apply frameworks related to the concept of digital affordances to examine the role that technological features of these platforms play in the context of online hate speech. Data were collected using the Delphi method, in which a selected sample of experts from both research and practice answered multiple rounds of surveys with the goal of reaching group consensus. The study encompassed an open-ended collection of initial ideas, followed by a multiple-choice questionnaire to identify and rate the most relevant determinants. The usefulness of the suggested intervention ideas was assessed through the three lenses of human-centered design. The results of both thematic analysis and non-parametric statistics yield insights into how features of social media platforms can be both determinants that facilitate online hate perpetration and crucial mechanisms of preventive interventions. Implications of these findings for future intervention development are discussed.
{"title":"Features for Hate? Using the Delphi Method to Explore Digital Determinants for Online Hate Perpetration and Possibilities for Intervention.","authors":"Ina Weber, Heidi Vandebosch, Karolien Poels, Sara Pabian","doi":"10.1089/cyber.2022.0195","DOIUrl":"https://doi.org/10.1089/cyber.2022.0195","url":null,"abstract":"<p><p>Online hate speech on social media platforms causes harm to those who are victimized as well as society at large. The prevalence of hateful content has, thus, prompted numerous calls for improved countermeasures and prevention. For such interventions to be effective, it is necessary to gain a nuanced understanding of influences that facilitate the spread of hate speech. This study does so by investigating what are relevant digital determinants for online hate perpetration. Moreover, the study explores possibilities of different technology-driven interventions for prevention. Thereby, the study specifically considers the digital environments in which online hate speech is most often produced and disseminated, namely social media platforms. We apply frameworks related to the concept of digital affordances to focus on the role that technological features of these platforms play in the context of online hate speech. Data were collected using the Delphi method in which a selected sample of experts from both research and practice answered multiple rounds of surveys with the goal of reaching a group consensus. The study encompassed an open-ended collection of initial ideas, followed by a multiple-choice questionnaire to identify, and rate the most relevant determinants. Usefulness of the suggested intervention ideas was assessed through the three lenses of human-centered design. The results of both thematic analysis and non-parametric statistics yield insights on how features of social media platforms can be both determinants that facilitate online hate perpetration as well as crucial mechanisms of preventive interventions. Implications of these findings for future intervention development are discussed.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"479-488"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9844688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SyncWork: Comparison of Brain Synchrony between Agile and Face-to-Face Work Using an EEG Hyperscanning Paradigm.
Vincenzo Cialdini, Daniele Di Lernia, Giuseppe Riva
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2023.29285.ceu. Cyberpsychology, Behavior, and Social Networking 26(7), 572-574.
Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech?
Sai Wang, Ki Joon Kim
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0158. Cyberpsychology, Behavior, and Social Networking 26(7), 527-534.
Artificial intelligence (AI) has been increasingly integrated into content moderation to detect and remove hate speech on social media. An online experiment (N = 478) was conducted to examine how moderation agents (AI vs. human vs. human-AI collaboration) and removal explanations (with vs. without) affect users' perceptions and acceptance of removal decisions for hate speech targeting social groups with certain characteristics, such as religion or sexual orientation. The results showed that individuals exhibit consistent levels of perceived trustworthiness and acceptance of removal decisions regardless of the type of moderation agent. When explanations for the content takedown were provided, removal decisions made jointly by humans and AI were perceived as more trustworthy than the same decisions made by humans alone, which increased users' willingness to accept the verdict. However, this moderated mediation effect was only significant when Muslims, not homosexuals, were the target of hate speech.
{"title":"Content Moderation on Social Media: Does It Matter Who and Why Moderates Hate Speech?","authors":"Sai Wang, Ki Joon Kim","doi":"10.1089/cyber.2022.0158","DOIUrl":"https://doi.org/10.1089/cyber.2022.0158","url":null,"abstract":"<p><p>Artificial intelligence (AI) has been increasingly integrated into content moderation to detect and remove hate speech on social media. An online experiment (<i>N</i> = 478) was conducted to examine how moderation agents (AI vs. human vs. human-AI collaboration) and removal explanations (with vs. without) affect users' perceptions and acceptance of removal decisions for hate speech targeting social groups with certain characteristics, such as religion or sexual orientation. The results showed that individuals exhibit consistent levels of perceived trustworthiness and acceptance of removal decisions regardless of the type of moderation agent. When explanations for the content takedown were provided, removal decisions made jointly by humans and AI were perceived as more trustworthy than the same decisions made by humans alone, which increased users' willingness to accept the verdict. However, this moderated mediation effect was only significant when Muslims, not homosexuals, were the target of hate speech.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"527-534"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9832515","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Analysis of Temporal Trends in Anti-Asian Hate and Counter-Hate on Twitter During the COVID-19 Pandemic.
Brittany Wheeler, Seong Jung, Deborah L Hall, Monika Purohit, Yasin Silva
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0206. Cyberpsychology, Behavior, and Social Networking 26(7), 535-545.
Recent studies have documented increases in anti-Asian hate throughout the COVID-19 pandemic. Yet relatively little is known about how anti-Asian content on social media, as well as positive messages to combat the hate, have varied over time. In this study, we investigated temporal changes in the frequency of anti-Asian and counter-hate messages on Twitter during the first 16 months of the COVID-19 pandemic. Using the Twitter Data Collection Application Programming Interface, we queried all tweets from January 30, 2020 to April 30, 2021 that contained specific anti-Asian (e.g., #chinavirus, #kungflu) and counter-hate (e.g., #hateisavirus) keywords. From this initial data set, we extracted a random subset of 1,000 Twitter users who had used one or more anti-Asian or counter-hate keywords. For each of these users, we calculated the total number of anti-Asian and counter-hate keywords posted each month. Latent growth curve analysis revealed that the frequency of anti-Asian keywords fluctuated over time in a curvilinear pattern, increasing steadily in the early months and then decreasing in the later months of our data collection. In contrast, the frequency of counter-hate keywords remained low for several months and then increased in a linear manner. Significant between-user variability in both anti-Asian and counter-hate content was observed, highlighting individual differences in the generation of hate and counter-hate messages within our sample. Together, these findings begin to shed light on longitudinal patterns of hate and counter-hate on social media during the COVID-19 pandemic.
{"title":"An Analysis of Temporal Trends in Anti-Asian Hate and Counter-Hate on Twitter During the COVID-19 Pandemic.","authors":"Brittany Wheeler, Seong Jung, Deborah L Hall, Monika Purohit, Yasin Silva","doi":"10.1089/cyber.2022.0206","DOIUrl":"https://doi.org/10.1089/cyber.2022.0206","url":null,"abstract":"<p><p>Recent studies have documented increases in anti-Asian hate throughout the COVID-19 pandemic. Yet relatively little is known about how anti-Asian content on social media, as well as positive messages to combat the hate, have varied over time. In this study, we investigated temporal changes in the frequency of anti-Asian and counter-hate messages on Twitter during the first 16 months of the COVID-19 pandemic. Using the Twitter Data Collection Application Programming Interface, we queried all tweets from January 30, 2020 to April 30, 2021 that contained specific anti-Asian (e.g., <i>#chinavirus, #kungflu)</i> and counter-hate (e.g., <i>#hateisavirus)</i> keywords. From this initial data set, we extracted a random subset of 1,000 Twitter users who had used one or more anti-Asian or counter-hate keywords. For each of these users, we calculated the total number of anti-Asian and counter-hate keywords posted each month. Latent growth curve analysis revealed that the frequency of anti-Asian keywords fluctuated over time in a curvilinear pattern, increasing steadily in the early months and then decreasing in the later months of our data collection. In contrast, the frequency of counter-hate keywords remained low for several months and then increased in a linear manner. Significant between-user variability in both anti-Asian and counter-hate content was observed, highlighting individual differences in the generation of hate and counter-hate messages within our sample. Together, these findings begin to shed light on longitudinal patterns of hate and counter-hate on social media during the COVID-19 pandemic.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"535-545"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9841362","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Psychological, Communicative, and Relationship Characteristics That Relate to Social Media Users' Willingness to Denounce Fake News.
Teash Johnson, Stephen M Kromka
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0204. Cyberpsychology, Behavior, and Social Networking 26(7), 563-571.
Fake news is on the rise on many social media platforms. The proliferation of fake news is concerning, yet little is known about the characteristics that may motivate social media users to denounce (or ignore) fake news when they see it posted by strangers, close friends, and family members. Active social media users (N = 218) completed an online survey examining psychological characteristics (i.e., misinformation correction importance, self-esteem) and communicative characteristics (i.e., argumentativeness, conflict style) that may relate to an individual's willingness to denounce fake news posted by either strangers or close friends/family members. Participants examined several manipulated fake news scenarios differing in political alignment and relevant topic content within a Facebook news article format. Results indicated that misinformation correction importance was positively related to willingness to denounce in the context of close friends and family, but not with strangers. Moreover, participants with higher self-esteem were less likely to denounce fake news posted by strangers (but not posted by close friends and family), which suggests that confident individuals prefer to avoid challenging people outside of their close ties. Argumentativeness was positively related to willingness to denounce fake news in all scenarios, regardless of the user's relationship to the fake news poster. Results for conflict styles were mixed. These findings provide preliminary evidence for how psychological, communicative, and relationship characteristics relate to social media users' decision to denounce (or ignore) fake news posted on a social media platform.
{"title":"Psychological, Communicative, and Relationship Characteristics That Relate to Social Media Users' Willingness to Denounce Fake News.","authors":"Teash Johnson, Stephen M Kromka","doi":"10.1089/cyber.2022.0204","DOIUrl":"https://doi.org/10.1089/cyber.2022.0204","url":null,"abstract":"<p><p>Fake news is on the rise on many social media platforms. The proliferation of fake news is concerning, yet little is known about the characteristics that may motivate social media users to denounce (or ignore) fake news when they see it posted by strangers, close friends, and family members. Active social media users (<i>N</i> = 218) completed an online survey examining psychological characteristics (i.e., misinformation correction importance, self-esteem) and communicative characteristics (i.e., argumentativeness, conflict style) that may relate to an individual's willingness to denounce fake news posted by either strangers or close friends/family members. Participants examined several manipulated fake news scenarios differing in political alignment and relevant topic content within a Facebook news article format. Results indicated that misinformation correction importance was positively related to willingness to denounce in the context of close friends and family, but not with strangers. Moreover, participants with higher self-esteem were less likely to denounce fake news posted by strangers (but not posted by close friends and family), which suggests that confident individuals prefer to avoid challenging people outside of their close ties. Argumentativeness was positively related to willingness to denounce fake news in all scenarios no matter the user's relationship to the fake news poster. Results for conflict styles were mixed. These findings provide preliminary evidence for how psychological, communicative, and relationship characteristics relate to social media users' decision to denounce (or ignore) fake news posted on a social media platform.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"563-571"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9842301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital Hate Speech Experiences Across Age Groups and Their Impact on Well-Being: A Nationally Representative Survey in Switzerland.
Lea Stahel, Dirk Baier
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0185. Cyberpsychology, Behavior, and Social Networking 26(7), 519-526.
The growing challenge of digital hate speech requires an understanding of its complexity, scale, and impact. Research on experiencing digital hate speech has so far been limited to the roles of personal victim, observer, and perpetrator, with a focus on young people. However, research on hate crimes suggests that vicarious victimization may also be relevant due to its negative impacts. In addition, the lack of knowledge about the older generation neglects the fact that older people are increasingly seen as vulnerable to digital risks. Therefore, this study introduces vicarious victimization as an additional role in research on digital hate speech. Prevalence rates for the four roles are examined across the life span, using a nationally representative sample of adult Internet users in Switzerland. Additionally, all roles are correlated with life satisfaction and loneliness, two stable indicators of subjective well-being. The results show that in this national population, personal victimization and perpetration are less common (<7 percent), whereas observation and vicarious victimization are more common (>40 percent). Prevalence decreases with age in all roles. As expected, multivariate analyses show that both forms of victimization are negatively related to life satisfaction and positively related to loneliness, with these effects being stronger for personal victimization. Similarly, being an observer and being a perpetrator correlate negatively, but not significantly, with well-being. This study contributes to a theoretical and empirical distinction between personal and vicarious victims and provides insight into their effects on well-being in a population largely unexplored in terms of age and national representativeness.
{"title":"Digital Hate Speech Experiences Across Age Groups and Their Impact on Well-Being: A Nationally Representative Survey in Switzerland.","authors":"Lea Stahel, Dirk Baier","doi":"10.1089/cyber.2022.0185","DOIUrl":"https://doi.org/10.1089/cyber.2022.0185","url":null,"abstract":"<p><p>The growing challenge of digital hate speech requires an understanding of its complexity, scale, and impact. Research on experiencing digital hate speech has so far been limited to the roles of personal victim, observer, and perpetrator, with a focus on young people. However, research on hate crimes suggests that vicarious victimization may also be relevant due to its negative impacts. In addition, the lack of knowledge about the older generation neglects the fact that older people are increasingly seen as vulnerable to digital risks. Therefore, this study introduces vicarious victimization as an additional role in research on digital hate speech. Prevalence rates for the four roles are examined across the life span, using a nationally representative sample of adult Internet users in Switzerland. Additionally, all roles are correlated with life satisfaction and loneliness, two stable indicators of subjective well-being. The results show that in this national population, personal victimization and perpetration are less common (<7 percent), whereas observation and vicarious victimization are more common (>40 percent). Prevalence decreases with age in all roles. As expected, multivariate analyses show that both forms of victimization are negatively related to life satisfaction and positively related to loneliness, with these effects being stronger for personal victimization. Similarly, being an observer and being a perpetrator correlate negatively, but not significantly, with well-being. This study contributes to a theoretical and empirical distinction between personal and vicarious victims and provides insight into their effects on well-being in a population largely unexplored in terms of age and national representativeness.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"519-526"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9842826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Who Is Searching for Cyberhate? Adolescents' Characteristics Associated with Intentional or Unintentional Exposure to Cyberhate.
Marie Bedrosova, Vojtech Mylek, Lenka Dedkova, Anca Velicu
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0201. Cyberpsychology, Behavior, and Social Networking 26(7), 462-471.
Cyberhate is one of the risks that adolescents can encounter online. It is considered a content risk when it is encountered unintentionally and a conduct risk when the user actively searches for it. Previous research has not differentiated between these experiences, although they can concern different groups of adolescents and be connected to distinct risk factors. To address this, our study first focuses on both unintentional and intentional exposure and investigates the individual-level risk factors that differentiate them. Second, we compare each exposed group of adolescents with those who were not exposed to cyberhate. We used survey data from a representative sample of adolescents (N = 6,033, aged 12-16 years, 50.3 percent girls) from eight European countries (Czechia, Finland, Flanders, France, Italy, Poland, Romania, and Slovakia) and conducted multinomial logistic regression. Our findings show that adolescents with higher sensation seeking, stronger proactive normative beliefs about aggression (NBA), and self-reported cyberhate perpetration are at higher risk of intentionally searching for cyberhate content than those who are unintentionally exposed. In comparison with unexposed adolescents, reporting other risky experiences was a risk factor for both types of exposure. Furthermore, NBA worked differently across comparisons: reactive NBA was a risk factor for intentional exposure, but proactive NBA did not play a role and even decreased the chance of unintentional exposure. Digital skills increased both types of exposure. Our findings stress the need to differentiate between intentional and unintentional cyberhate exposure and to examine proactive and reactive NBA separately.
{"title":"Who Is Searching for Cyberhate? Adolescents' Characteristics Associated with Intentional or Unintentional Exposure to Cyberhate.","authors":"Marie Bedrosova, Vojtech Mylek, Lenka Dedkova, Anca Velicu","doi":"10.1089/cyber.2022.0201","DOIUrl":"https://doi.org/10.1089/cyber.2022.0201","url":null,"abstract":"<p><p>Cyberhate is one of the online risks that adolescents can experience online. It is considered a content risk when it is unintentionally encountered and a conduct risk when the user actively searches for it. Previous research has not differentiated between these experiences, although they can concern different groups of adolescents and be connected to distinctive risk factors. To address this, our study first focuses on both unintentional and intentional exposure and investigates the individual-level risk factors that differentiate them. Second, we compare each exposed group of adolescents with those who were not exposed to cyberhate. We used survey data from a representative sample of adolescents (<i>N</i> = 6,033, aged 12-16 years, 50.3 percent girls) from eight European countries-Czechia, Finland, Flanders, France, Italy, Poland, Romania, and Slovakia-and conducted multinomial logistic regression. Our findings show that adolescents with higher sensation seeking, proactive normative beliefs about aggression (NBA), and who report cyberhate perpetration, are at higher risk of intentionally searching for cyberhate contents compared with those who are unintentionally exposed. In comparison with unexposed adolescents, reporting other risky experiences was a risk factor for both types of exposure. Furthermore, NBA worked differently-reactive NBA was a risk factor for intentional exposure, but proactive NBA did not play a role and even decreased the chance of unintentional exposure. Digital skills increased both types of exposure. Our findings stress the need to differentiate between intentional and unintentional cyberhate exposure and to examine proactive and reactive NBA separately.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"462-471"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9844684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motives of Online Hate Speech: Results from a Quota Sample Online Survey.
M Rohangis Mohseni
Pub Date: 2023-07-01. DOI: 10.1089/cyber.2022.0188. Cyberpsychology, Behavior, and Social Networking 26(7), 499-506.
Online hate speech (OHS) is a prevalent societal problem, but most studies investigating its reasons and causes focus on the perpetrators' side while ignoring the bystanders' and the victims' sides. This is also true for the underlying theories. Therefore, we proposed a new Action-Theoretical Model of Online Hate Speech (ATMOHS), which assumes that OHS is a product of environmental, situational, and personal variables involving three groups (perpetrators, bystanders, and victims), each with its own set of motives, attitudes, traits, and norm beliefs that shape behavior. The model was put to a first test with an online survey using a quota sample of the German online population (N = 1,791). The study at hand is a first analysis of these data, focusing on the motives for OHS. Results show that wanting to be a role model for others is an important motive on the active bystanders' side. However, it could not be confirmed that any aggression motive is important on the perpetrators' side or that undeservingness is an important motive on the victims' side. Future studies could investigate whether there are other motives on the victims' side that are in line with the underlying theory of learned helplessness, or whether there is a better theory for modeling the victims' side. Future studies could also develop a better scale for aggression motives. In practice, prevention programs could focus on being a role model for others as a relevant motive for becoming an active bystander.
{"title":"Motives of Online Hate Speech: Results from a Quota Sample Online Survey.","authors":"M Rohangis Mohseni","doi":"10.1089/cyber.2022.0188","DOIUrl":"https://doi.org/10.1089/cyber.2022.0188","url":null,"abstract":"<p><p>Online hate speech (OHS) is a prevalent societal problem, but most studies investigating the reasons and causes of OHS focus on the perpetrators' side while ignoring the bystanders' and the victims' side. This is also true for the underlying theories. Therefore, we proposed a new Action-Theoretical Model of Online Hate Speech (ATMOHS), which assumes that OHS is a product of environmental, situational, and personal variables with three groups involved (perpetrators, bystanders, and victims) that each have their own set of motives, attitudes, traits, and norm beliefs that are impacting their behavior. The model was put to a first test with an online survey using a quota sample of the German online population (<i>N</i> = 1,791). The study at hand is a first analysis of these data that focus on the motives of OHS. Results show that wanting to be a role model for others is an important motive on the active bystanders' side. However, it could not be confirmed that any aggression motive is important on the perpetrators' side or that undeservingness is an important motive on the victims' side. Future studies could investigate if there are other motives for the victims' side that are in-line with the underlying theory of learned helplessness, or if there is a better theory for modeling the victims' side. Future studies could also develop a better scale for aggression motives. In practice, prevention programs could focus on being a role model for others as a relevant motive for becoming an active bystander.</p>","PeriodicalId":10872,"journal":{"name":"Cyberpsychology, behavior and social networking","volume":"26 7","pages":"499-506"},"PeriodicalIF":6.6,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9896284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}