{"title":"Social media-induced polarisation","authors":"Israr Qureshi, Babita Bhatt","doi":"10.1111/isj.12525","DOIUrl":null,"url":null,"abstract":"<p>In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das & Schroeder, <span>2021</span>; Ghasiya & Sasahara, <span>2022</span>; Hameleers, <span>2022</span>; Risius et al., <span>2024</span>). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. Over the past two decades, the evolution of social media platforms has significantly contributed to this trend by employing algorithmic curation and engendering personalised information bubbles that foster heightened polarisation and the segregation of content consumption. While these platforms offer societal benefits such as timely access to news, they concurrently erode trust and facilitate the dissemination of extreme viewpoints and conspiracy theories (Abdalla Mikhaeil & Baskerville, <span>2024</span>). Consequently, they have led to cyberbalkanisation, amplifying societal divides along the faultlines of ethnicity, religion, ideologies and sexual orientation. Compounded by a decline in trust in both institutions and fellow citizens, this expansion of communication avenues has provided fertile ground for the proliferation of extreme opinions, accompanied by challenges such as the dissemination of misinformation and the propagation of toxic headlines. 
Thus, an urgent imperative exists for scholarly inquiry aimed at comprehending the theoretical foundations of social media-induced polarisation and devising effective interventions to mitigate its deleterious societal impacts.</p><p>In the context of contemporary democracies, public deliberation, which is fundamental for societal progress, faces formidable barriers such as escalating incivility, the propagation of misinformation and polarisation across political, environmental and social spectra (French et al., <span>2024</span>; Miller et al., <span>2024</span>; Weismueller et al., <span>2024</span>). Despite serving as hubs for diverse interactions, social media platforms concurrently foster echo chambers, potentially obstructing efforts to bridge divides. The complex interplay between social media and polarisation remains a contentious subject, with divergent perspectives on its role in shaping online discourse (Qureshi et al., <span>2020</span>). However, the ramifications of social media extend far beyond political domains, influencing environmental activism, public health responses and business marketing strategies. Moreover, the algorithmic curation utilised by these platforms poses formidable challenges, as it may exacerbate echo chambers and impede the exchange of diverse viewpoints (cf. Miller et al., <span>2024</span>). These platforms play a pivotal role in shaping societal dynamics, impacting attitudes, behaviours and the trajectory of critical issues. 
Thus, a nuanced understanding and concerted efforts to address the multifaceted impact of social media on public discourse are imperative for fostering inclusive and well-informed deliberation in contemporary democracies (Figure 1).</p><p>Social media-induced polarisation (SMIP) manifests the interplay of social, communicative, individual and selection processes with technological mechanisms such as platform features, algorithmic curation, likes and other signalling tools (Qureshi et al., <span>2020</span>). Social processes, encompassing interactions and influences among individuals, play a substantial role in shaping polarisation dynamics. Factors such as herd behaviour, social dominance and social identity formation delineate group dynamics and ideological cleavages, mirroring societal hierarchies and sectarian discord. Additionally, right-wing authoritarianism exacerbates polarisation by fostering the dissemination of misinformation across various domains (Hameleers, <span>2022</span>). Individual processes, propelled by confirmation bias and motivated reasoning, culminate in selective exposure and biased assimilation of information, thereby reinforcing ideological schisms (Qureshi et al., <span>2020</span>). Theories such as moral panic and moral contagion elucidate the influence of online interactions on attitude polarisation and emotive discourse. Some users exploit social media features to remain anonymous while simultaneously signalling identity. Thus, <i>idenonymity</i> (identity + anonymity) refers to the dual strategy adopted by social media users, mainly trolls and those with extreme viewpoints, who maintain anonymity through pseudonyms while simultaneously signalling aspects of their identity, such as ideological affiliations or political leanings, through political avatars or hashtags (cf. Jaidka et al., <span>2022</span>). 
Communication processes facilitated by social media platforms afford avenues for marginalised voices but also engender flaming and agenda-setting, shaping public opinion and exacerbating polarisation (Qureshi et al., <span>2020</span>). Selection processes, such as homophily, selective exposure, biased assimilation and the false consensus effect, further perpetuate polarisation as users gravitate towards and disseminate attitude-consistent information, reinforcing ideological echo chambers (Qureshi et al., <span>2020</span>). <i>Herded anarchy</i> refers to an appearance of disorder and chaos on the surface, beneath which algorithmic curation exerts control and guidance, amplifying extreme views and shaping discourse within digital spaces.</p><p>Polarisation arising from the interplay of these processes underscores the proliferation of fake news and disinformation, further entrenching polarisation dynamics. The result is <i>regression to meanness</i>, a phenomenon where the dominance of extreme views on social media, fuelled by fake news and algorithmic curation, leads to the marginalisation of moderate voices. Over time, toxic or aggressive voices become more prevalent, overshadowing more balanced perspectives. Ultimately, the dominance of extreme voices perpetuates a cycle of fear, anxiety and isolation among moderate individuals, underscoring the multifaceted nature of SMIP and the intricate interplay of social, psychological and technological factors. A nuanced understanding of these dynamics is pivotal for mitigating the deleterious effects of polarisation and fostering inclusive discourse in digital spaces.</p><p>The ubiquity of social media in everyday life has entrenched it as a pivotal platform for communication, information dissemination and community formation. 
Initially lauded with utopian aspirations, social media has, over the course of its evolution, unveiled a dystopian reality typified by the proliferation of disinformation, misinformation and partisan narratives, culminating in heightened polarisation within digital spheres. To confront this issue and prefigure a more equitable future (Bhatt et al., <span>2024</span>), a comprehensive approach involving stakeholders such as users, platform developers, policymakers and advertisers is imperative. Strategies aimed at mitigating polarisation should encompass the promotion of media literacy, the advocacy for algorithmic transparency, the diversification of content recommendations, community moderation, the cultivation of civil discourse, cross-sector collaboration and a re-evaluation of the affordances of social media platforms. The papers included in this special issue shed light on some of these mechanisms.</p><p>The seven papers comprising this special issue provide valuable insights into the expansive domain of SMIP. While not exhaustive, these articles offer a glimpse into the potential avenues of inquiry within this field and establish foundational benchmarks for future exploration. Demonstrating excellence in Information Systems (IS) research, these papers explore specific facets of SMIP, showcasing a diversity of theoretical perspectives, epistemological approaches and methodological frameworks applicable to SMIP studies. These papers provide a foundation for new research explorations to advance the SMIP research agenda.</p><p>Miller et al. (<span>2024</span>) studied the complex aspects of social media engagement, especially how user biases and the likelihood of paying attention affect responses to disinformation. Their research illuminates the intricate relationships between political alignment, truth bias, communicative suspicion and media literacy within social media contexts, contributing significantly to disinformation studies. 
Notably, the study adopts an experimental approach, considering concepts such as amplification cycles, persuasion, polarisation and aversion, thus enriching Elaboration Likelihood Model (ELM) literature. Their findings demonstrate that political alignment moderates the impact of suspicion on truth bias and engagement. As alignment increases, suspicion's effect on truth bias shifts positively, while truth bias's effect on engagement becomes pronounced. Integrating new media literacy theory into ELM underscores users' ability to discern disinformation from factual content. Surprisingly, critical consuming media literacy, when viewed as a stable trait rather than an intervention, correlates positively with disinformation engagement, challenging assumptions about media literacy's efficacy in mitigating truth bias. Moreover, the study suggests that critical consuming media literacy might erroneously empower users, potentially leading to increased disinformation engagement, particularly among politically biased individuals. These findings highlight the need for educational efforts promoting scepticism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments.</p><p>Risius et al. (<span>2024</span>) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they aim to map the current research landscape, identify gaps and propose future research trajectories. Their research identifies two primary research streams. The first stream focuses on understanding online extremism, particularly examining how digital technologies have transformed it compared with traditional forms. 
The authors highlight a gap in comprehending the amplifying effect of internet technologies on extremism and advocate for inquiries into how online extremism differs from conventional manifestations, including its impact on extremist groups' strategies and structures. The second stream concentrates on countering online extremism, stressing the need for a nuanced understanding to develop effective counterstrategies. They caution against simply replicating traditional measures and emphasise the unique challenges of online extremism, such as its broad reach and potential for radicalising a wider audience. Utilising a sociotechnical lens, the authors advocate for analysing the interaction between social and technical elements to grasp online extremism fully. They underscore the importance of addressing both individual and societal impacts of digital technologies, including considerations of user privacy and platform characteristics. Thus, they make a strong case for continued research to better understand online extremism, stressing the importance of maintaining diverse publication outlets and of weighing real-world risks when formulating rules and policies to govern it.
These emotional reactions significantly contribute to polarisation among social media users. Additionally, the research demonstrates that individuals with stronger political ideologies tend to experience heightened negative emotions in response to extreme partisan content, highlighting the influence of personal beliefs on user interaction with social media content. Theoretically, the study advances the understanding of how different types of information influence user behaviour, emphasising falsehood, partisanship, negative emotions and political polarisation. However, the study presents mixed findings regarding the role of political ideology in moderating emotional responses to misinformation, suggesting a complex interplay between personal beliefs and content reactions. The findings underscore the importance of social media platforms carefully vetting information from political elites and extreme partisan sources. Moreover, educating users on critically engaging with political content on social media is crucial. Future research should investigate specific content characteristics that exacerbate the sharing and polarisation effects of misinformation and extreme partisan content.</p><p>Abdalla Mikhaeil and Baskerville (<span>2024</span>) explore how online conspiracy theories have become more extreme, leading to radical beliefs, especially with the help of social media features. They examine how these theories escalate online, introducing a theoretical model that considers social identity, digital platform features and online community dynamics. Central to their framework is the concept of affordances. The authors emphasise the substantial influence of digital environments in shaping and amplifying these theories, highlighting the significance of understanding online radicalisation and its societal implications. 
Their theoretical framework, rooted in affordances, delineates the progression from first-order affordances to second-order affordances for escalation, emphasising shared social identity and ongoing resource commitment. The study underscores the role of social media platforms like 4chan and TikTok in fostering the growth of conspiracy theories, contrasting them with older platforms such as Twitter, YouTube and Facebook. Moreover, the authors advocate for interdisciplinary approaches to develop de-escalation strategies and enhance social media governance to mitigate the spread and impact of conspiracy theories. Additionally, they stress the applicability of their findings to various contexts, including lone-wolf terrorism and events like the U.S. Capitol riot. The research highlights social identity as a crucial factor in conspiracy theory radicalisation, suggesting avenues for future research to explore similar identity-driven phenomena and develop de-escalation strategies.</p><p>Wang et al. (<span>2024</span>) explore how social media interactions impact people's behaviour and opinions. They emphasise how the design of social media platforms plays a crucial role in shaping social norms and behaviours. The study specifically looks at how interactions on social media, such as using features like the ‘friend function,’ can affect the division of opinions in user reviews. To study this, they use a quasi-experimental design, propensity score matching (PSM) and difference-in-differences (DID). The findings reveal that the utilisation of the friend function is linked to less polarised reviews, with a more pronounced effect observed in positive reviews than negative ones. Moreover, the analysis suggests that highly engaged users are less affected by the friend function, indicating a nuanced relationship between engagement level and social influence. 
Theoretically, the research challenges the notion that social influence exacerbates opinion polarisation, demonstrating its potential as a mitigating factor. It distinguishes between the normative influence of online friends and informational influence, shedding light on the mechanisms underlying polarisation reduction. Furthermore, the study highlights the practical implications for social media platform designers, advocating for deliberate design strategies to cultivate a more socially oriented normative environment and reduce polarisation. Future research avenues include investigating the effects of social influence on review polarity through field experiments, analysing review content and exploring the impact of different types of online friendships. Overall, Wang et al.'s study enriches our understanding of SMIP, offering valuable insights for both theoretical development and practical application in platform design and social interaction dynamics.</p><p>Zhu et al. (<span>2024</span>) discuss the role of accountability mobilisation in combating misinformation and mitigating SMIP. They investigate the intricate interplay between cultural dynamics, notably guanxi, and the effectiveness of such interventions, advocating for culturally sensitive strategies across diverse social media landscapes. The study scrutinises the efficacy of accountability mobilisation to foster the prosocial punishment of misinformation disseminators on social media, particularly within China's guanxi culture. The authors elucidate how societal divisions increasingly align along an ‘Us versus Them’ axis, posing governance challenges, eroding institutional trust and jeopardising democratic systems. In response, the Chinese government enacted regulations in 2017 to empower social media users as misinformation monitors, a move especially relevant within the context of guanxi culture, where traditional prosocial punishment mechanisms are less prevalent due to associated personal costs. 
Employing a Vignette Survey Experiment (VSE) on WeChat users and analysing data through a random regression model, the study unveils that accountability mobilisation significantly amplifies prosocial punishment among bystanders, potentially disrupting the SMIP pathway of misinformation. However, the moderating influence of guanxi culture dampens this effect, as individuals are less inclined to apply prosocial punishment, mainly when misinformation spreaders belong to their guanxi network. The research underscores the practical utility of regulations assigning specific accountability to individuals, such as chat group administrators, to enhance their willingness to engage in prosocial punishment. Yet, the impact of guanxi necessitates nuanced approaches across varied cultural milieus. It highlights the imperative for further inquiry into the relationship between misinformation and polarisation, examining diverse misinformation types and cultural contexts.</p><p>French et al. (<span>2024</span>) present a comprehensive examination of ‘disinformation’, emphasising its pivotal role in societal polarisation and proposing strategies for its mitigation. Introducing the innovative Typology of Disinformation Intentionality and Impact (DII), this paper offers a framework to classify disinformation threats based on their intentionality and impact, specifically emphasising virality and polarisation. Grounded in Information Manipulation Theory (IMT), the research elucidates how disinformation is crafted and disseminated, making substantial contributions to understanding social media-induced polarisation. Employing decision theory and risk management principles, the study advocates for proactive approaches to evaluate and categorise disinformation risks, departing from traditional reactive tactics. The DII typology classifies disinformation based on spreaders' belief in its truthfulness (intentionality) and its potential to go viral and induce polarisation (impact). 
This results in a matrix with four quadrants identifying different types of disinformation spreaders. A case study of disinformation campaigns during the US presidential elections illustrates the practical application of the DII typology, providing insights into the nature and scale of disinformation issues. The study proposes specific mitigation strategies for each category within the DII typology, including monitoring and responding to low-impact disinformation, raising awareness, managing high-impact campaigns and discouraging dissemination of high-impact, intentional disinformation. Furthermore, the authors advocate for establishing a Disinformation Management Officer (DMO) role within organisations. This role entails ongoing monitoring, assessment and response to disinformation threats, aligning strategies with the DII typology to combat disinformation effectively.</p><p>In conclusion, the articles featured in this special issue significantly advance our understanding of SMIP. Through various theoretical frameworks, empirical evidence and practical implications, they offer valuable insights into the intricate challenges posed by polarisation within digital contexts. Despite the perceived advantages of social media platforms, this collection of research brings to light the urgent concerns arising from social media-driven misinformation and polarisation, important issues of our time that are often sidelined in public policy discourse. By exploring various facets of SMIP, these articles deepen our understanding of the phenomenon and its profound consequences, fostering a collective awareness of social media's role in perpetuating socio-cultural polarisation. The relevance of this special issue is evident as it addresses both theoretical complexities and methodological obstacles, while also suggesting potential solutions that could stimulate broader discussions on socio-cultural polarisation induced by social media. 
This body of work is poised to benefit scholars across diverse disciplines, policymakers and organisational leaders by equipping them with valuable insights and tools to navigate the complexities of polarisation in digital spaces. Ultimately, we anticipate that the insights gleaned from this editorial and the articles within this special issue will empower stakeholders with the conceptual and empirical resources necessary to foster inclusive and cohesive digital environments, thus mitigating the adverse impacts of polarisation induced by misinformation, disinformation, fake news, deepfakes, conspiracy theories and herded anarchy. Such polarisation often manifests itself in echo chambers, filter bubbles, cyberbalkanisation, splinternet, ghettoisation and <i>regression to meanness</i>, emphasising the critical importance of addressing these issues to promote informed discourse and societal cohesion.</p>","PeriodicalId":48049,"journal":{"name":"Information Systems Journal","volume":"34 4","pages":"1425-1431"},"PeriodicalIF":6.5000,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12525","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems Journal","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/isj.12525","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 0
Abstract
In contemporary discourse, a discernible surge in socio-cultural fragmentation, political schism and right-wing hate speech has emerged, exacerbated by the proliferation of extremist ideologies and discriminatory rhetoric (Das & Schroeder, 2021; Ghasiya & Sasahara, 2022; Hameleers, 2022; Risius et al., 2024). This phenomenon is starkly evident in online harassment, the dissemination of misinformation and the normalisation of confrontational dialogue, indicating a pressing demand for the cultivation of inclusive digital environments. Over the past two decades, the evolution of social media platforms has significantly contributed to this trend by employing algorithmic curation and engendering personalised information bubbles that foster heightened polarisation and the segregation of content consumption. While these platforms offer societal benefits such as timely access to news, they concurrently erode trust and facilitate the dissemination of extreme viewpoints and conspiracy theories (Abdalla Mikhaeil & Baskerville, 2024). Consequently, they have led to cyberbalkanisation, amplifying societal divides along the faultlines of ethnicity, religion, ideologies and sexual orientation. Compounded by a decline in trust in both institutions and fellow citizens, this expansion of communication avenues has provided fertile ground for the proliferation of extreme opinions, accompanied by challenges such as the dissemination of misinformation and the propagation of toxic headlines. Thus, an imminent imperative exists for scholarly inquiry aimed at comprehending the theoretical foundations of social media-induced polarisation and devising effective interventions to mitigate its deleterious societal impacts.
In the context of contemporary democracies, public deliberation, which is fundamental for societal progress, faces formidable barriers such as escalating incivility, the propagation of misinformation and polarisation across political, environmental and social spectra (French et al., 2024; Miller et al., 2024; Weismueller et al., 2024). Despite serving as hubs for diverse interactions, social media platforms concurrently foster echo chambers, potentially obstructing the possibility of bridging divides. The complex interplay between social media and polarisation remains a contentious subject, with divergent perspectives on its role in shaping online discourse (Qureshi et al., 2020). However, the ramifications of social media extend far beyond political domains, influencing environmental activism, public health responses and business marketing strategies. Moreover, the algorithmic curation utilised by these platforms poses formidable challenges, as it may exacerbate echo chambers and impede the exchange of diverse viewpoints (cf. Miller et al., 2024). These platforms play a pivotal role in shaping societal dynamics, impacting attitudes, behaviours and the trajectory of critical issues. Thus, a nuanced understanding and concerted efforts to address the multifaceted impact of social media on public discourse are imperative for fostering inclusive and well-informed deliberation in contemporary democracies (Figure 1).
Social media-induced polarisation (SMIP) manifests the interplay of social, communicative, individual and selection processes with technological mechanisms such as platform features, algorithmic curation, likes and other signalling tools (Qureshi et al., 2020). Social processes, encompassing interactions and influences among individuals, play a substantial role in shaping polarisation dynamics. Factors such as herd behaviour, social dominance and social identity formation delineate group dynamics and ideological cleavages, mirroring societal hierarchies and sectarian discord. Additionally, right-wing authoritarianism exacerbates polarisation by fostering the dissemination of misinformation across various domains (Hameleers, 2022). Individual processes, propelled by confirmation bias and motivated reasoning, culminate in selective exposure and biased assimilation of information, thereby reinforcing ideological schisms (Qureshi et al., 2020). Theories such as moral panic and moral contagion elucidate the influence of online interactions on attitude polarisation and emotive discourse. Some users use social media features to remain anonymous but also signal identity. Thus, idenonymity (identity + anonymity) refers to the dual strategy adopted by social media users, mainly trolls and those with extreme viewpoints, who maintain anonymity through pseudonyms while simultaneously signalling aspects of their identity, such as ideological affiliations or political leanings, through political avatar or hashtags (cf. Jaidka et al., 2022). Communication processes facilitated by social media platforms afford avenues for marginalised voices but also engender flaming and agenda-setting, shaping public opinion and exacerbating polarisation (Qureshi et al., 2020). 
Selection processes, such as homophily, selective exposure, biased assimilation and false consensus effect, further perpetuate polarisation as users gravitate towards and disseminate attitude-consistent information, reinforcing ideological echo chambers (Qureshi et al., 2020). Herded anarchy refers to the semblance of disorder and chaos on the surface but with underlying control and guidance by algorithmic curation that amplifies extreme views and shapes discourse within digital spaces.
Polarisation arising from the interplay of these processes underscores the proliferation of fake news and disinformation, further entrenching polarisation dynamics. The result is regression to meanness, a phenomenon where the dominance of extreme views on social media, fuelled by fake news and algorithmic curation, leads to the marginalisation of moderate voices. This results in an environment where toxic or aggressive voices become more prevalent over time, overshadowing more balanced perspectives. Ultimately, the dominance of extreme voices perpetuates a cycle of fear, anxiety and isolation among moderate individuals, underscoring the multifaceted nature of SMIP and the intricate interplay of social, psychological and technological factors. A nuanced understanding of these dynamics is pivotal for mitigating the deleterious effects of polarisation and fostering inclusive discourse in digital spaces.
The pervasive ubiquity of social media in everyday life has entrenched it as a pivotal platform for communication, information dissemination and community formation. Initially lauded with utopian aspirations, the evolution of social media has unveiled a dystopian reality typified by the proliferation of disinformation, misinformation and partisan narratives, culminating in heightened polarisation within digital spheres. To confront this issue and prefigure a more equitable future (Bhatt et al., 2024), a comprehensive approach involving stakeholders such as users, platform developers, policymakers and advertisers is imperative. Strategies aimed at mitigating polarisation should encompass the promotion of media literacy, the advocacy for algorithmic transparency, the diversification of content recommendations, community moderation, the cultivation of civil discourse, cross-sector collaboration and a re-evaluation of the affordances of social media platforms. The papers included in this special issue shed light on some of these mechanisms.
The seven papers comprising this special issue provide valuable insights into the expansive domain of SMIP. While not exhaustive, these articles offer a glimpse into the potential avenues of inquiry within this field and establish foundational benchmarks for future exploration. Demonstrating excellence in Information Systems (IS) research, these papers explore specific facets of SMIP, showcasing a diversity of theoretical perspectives, epistemological approaches and methodological frameworks applicable to SMIP studies. These papers provide a foundation for new research explorations to advance the SMIP research agenda.
Miller et al. (2024) examine the complexities of social media engagement, especially how user biases and the likelihood of paying attention shape responses to disinformation. Their research illuminates the intricate relationships between political alignment, truth bias, communicative suspicion and media literacy within social media contexts, contributing significantly to disinformation studies. Notably, the study adopts an experimental approach, considering concepts such as amplification cycles, persuasion, polarisation and aversion, thus enriching the Elaboration Likelihood Model (ELM) literature. Their findings demonstrate that political alignment moderates the impact of suspicion on truth bias and engagement: as alignment increases, suspicion's effect on truth bias shifts positively, while truth bias's effect on engagement becomes more pronounced. Integrating new media literacy theory into ELM underscores users' ability to discern disinformation from factual content. Surprisingly, critical consuming media literacy, when viewed as a stable trait rather than an intervention, correlates positively with disinformation engagement, challenging assumptions about media literacy's efficacy in mitigating truth bias. Moreover, the study suggests that critical consuming media literacy might erroneously embolden users, potentially leading to increased disinformation engagement, particularly among politically biased individuals. These findings highlight the need for educational efforts promoting scepticism across political affiliations to combat disinformation effectively. Future research could investigate social media metrics and user perceptions to enhance the understanding of engagement dynamics and realism in online environments.
Risius et al. (2024) conducted a sociotechnical investigation into online extremism, arguing for the essential integration of societal and technological perspectives in crafting more effective regulatory policies. Through a systematic review of 222 articles, they map the current research landscape, identify gaps and propose future research trajectories. Their research identifies two primary research streams. The first stream focuses on understanding online extremism, particularly examining how digital technologies have transformed it compared with traditional forms. The authors highlight a gap in comprehending the amplifying effect of internet technologies on extremism and advocate for inquiries into how online extremism differs from conventional manifestations, including its impact on extremist groups' strategies and structures. The second stream concentrates on countering online extremism, stressing the need for a nuanced understanding to develop effective counterstrategies. They caution against simply replicating traditional measures and emphasise the unique challenges of online extremism, such as its broad reach and potential for radicalising a wider audience. Utilising a sociotechnical lens, the authors advocate for analysing the interaction between social and technical elements to grasp online extremism fully. They underscore the importance of addressing both individual and societal impacts of digital technologies, including considerations of user privacy and platform characteristics. Thus, they make a strong case for continued research to better understand online extremism, stressing the importance of diversity in research outlets and of weighing real-world risks when crafting rules and policies to govern online extremism.
Weismueller et al. (2024) offer insights into how misinformation and extreme political content affect social media use, emotional reactions and political polarisation. They show that different types of content elicit different emotional responses depending on a person's political beliefs. They find that misinformation and extreme political content are shared more often on social media than accurate or less extreme political information, corroborating earlier findings. Moreover, the study reveals that exposure to misinformation and extreme partisan content often evokes more intense negative emotional responses than encounters with accurate or less extreme content. These emotional reactions significantly contribute to polarisation among social media users. Additionally, the research demonstrates that individuals with stronger political ideologies tend to experience heightened negative emotions in response to extreme partisan content, highlighting the influence of personal beliefs on user interaction with social media content. Theoretically, the study advances the understanding of how different types of information influence user behaviour, emphasising falsehood, partisanship, negative emotions and political polarisation. However, the study presents mixed findings regarding the role of political ideology in moderating emotional responses to misinformation, suggesting a complex interplay between personal beliefs and content reactions. The findings underscore the importance of social media platforms carefully vetting information from political elites and extreme partisan sources. Moreover, educating users to engage critically with political content on social media is crucial. Future research should investigate the specific content characteristics that exacerbate the sharing and polarisation effects of misinformation and extreme partisan content.
Abdalla Mikhaeil and Baskerville (2024) examine how online conspiracy theories escalate into radical beliefs, especially as enabled by social media features. They introduce a theoretical model of this escalation that considers social identity, digital platform features and online community dynamics, with the concept of affordances at its centre. The authors emphasise the substantial influence of digital environments in shaping and amplifying these theories, highlighting the significance of understanding online radicalisation and its societal implications. Their theoretical framework delineates the progression from first-order affordances to second-order affordances for escalation, emphasising shared social identity and ongoing resource commitment. The study underscores the role of social media platforms like 4chan and TikTok in fostering the growth of conspiracy theories, contrasting them with older platforms such as Twitter, YouTube and Facebook. Moreover, the authors advocate for interdisciplinary approaches to develop de-escalation strategies and enhance social media governance to mitigate the spread and impact of conspiracy theories. Additionally, they stress the applicability of their findings to various contexts, including lone-wolf terrorism and events like the U.S. Capitol riot. The research highlights social identity as a crucial factor in conspiracy theory radicalisation, suggesting avenues for future research to explore similar identity-driven phenomena and develop de-escalation strategies.
Wang et al. (2024) explore how social media interactions impact people's behaviour and opinions, emphasising the crucial role that platform design plays in shaping social norms and behaviours. The study specifically examines how social media interactions, such as use of the ‘friend function’, affect opinion polarisation in user reviews. To do so, they employ a quasi-experimental design combining propensity score matching (PSM) with difference-in-differences (DID) estimation. The findings reveal that the utilisation of the friend function is linked to less polarised reviews, with a more pronounced effect observed in positive reviews than negative ones. Moreover, the analysis suggests that highly engaged users are less affected by the friend function, indicating a nuanced relationship between engagement level and social influence. Theoretically, the research challenges the notion that social influence exacerbates opinion polarisation, demonstrating its potential as a mitigating factor. It distinguishes between the normative influence of online friends and informational influence, shedding light on the mechanisms underlying polarisation reduction. Furthermore, the study highlights the practical implications for social media platform designers, advocating for deliberate design strategies to cultivate a more socially oriented normative environment and reduce polarisation. Future research avenues include investigating the effects of social influence on review polarity through field experiments, analysing review content and exploring the impact of different types of online friendships. Overall, Wang et al.'s study enriches our understanding of SMIP, offering valuable insights for both theoretical development and practical application in platform design and social interaction dynamics.
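For readers unfamiliar with the PSM-plus-DID design, the logic can be illustrated with a minimal sketch on synthetic data. Everything below is an illustrative assumption, not Wang et al.'s actual specification: the single `engagement` covariate, the effect sizes and the variable names are invented, and real analyses would use richer covariates and standard errors. The sketch shows why matching matters: engagement confounds both friend-function adoption and the trend in review polarity, so a naive before/after comparison would be biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical confounder: user engagement drives both friend-function
# adoption and the trend in review polarity.
engagement = rng.normal(0, 1, n)

# Treatment: adopting the 'friend function' (more likely for engaged users).
treated = rng.random(n) < 1 / (1 + np.exp(-(engagement - 0.2)))

# Review polarity before and after; the assumed true effect of the friend
# function is a 0.5-point reduction. Engagement also shifts the trend,
# which is what matching must correct for.
pre = 2.0 + 0.3 * engagement + rng.normal(0, 0.5, n)
post = pre + 0.1 + 0.2 * engagement - 0.5 * treated + rng.normal(0, 0.5, n)

# Step 1: estimate propensity scores with a tiny logistic regression
# (gradient ascent on the log-likelihood).
X = np.column_stack([np.ones(n), engagement])
w = np.zeros(2)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w += 0.5 * X.T @ (treated - p) / n
scores = X @ w  # matching on the linear score is equivalent

# Step 2: match each treated user to the nearest control by score
# (nearest-neighbour matching with replacement).
t_idx, c_idx = np.where(treated)[0], np.where(~treated)[0]
matches = c_idx[np.abs(scores[c_idx][None, :] - scores[t_idx][:, None]).argmin(axis=1)]

# Step 3: difference-in-differences on the matched sample.
did = (post[t_idx] - pre[t_idx]).mean() - (post[matches] - pre[matches]).mean()
print(f"Estimated friend-function effect on review polarity: {did:.2f}")
```

On this synthetic data the DID estimate on the matched sample recovers a value close to the assumed −0.5 effect, whereas the unmatched treated-versus-control comparison would absorb part of the engagement-driven trend.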
Zhu et al. (2024) discuss the role of accountability mobilisation in combating misinformation and mitigating SMIP. They investigate the intricate interplay between cultural dynamics, notably guanxi, and the effectiveness of such interventions, advocating for culturally sensitive strategies across diverse social media landscapes. The study scrutinises the efficacy of accountability mobilisation in fostering the prosocial punishment of misinformation disseminators on social media, particularly within China's guanxi culture. The authors elucidate how societal divisions increasingly align along an ‘Us versus Them’ axis, posing governance challenges, eroding institutional trust and jeopardising democratic systems. In response, the Chinese government enacted regulations in 2017 to empower social media users as misinformation monitors, a move especially relevant within the context of guanxi culture, where traditional prosocial punishment mechanisms are less prevalent due to associated personal costs. Employing a Vignette Survey Experiment (VSE) on WeChat users and analysing the data through a random regression model, the study reveals that accountability mobilisation significantly amplifies prosocial punishment among bystanders, potentially disrupting the SMIP pathway of misinformation. However, guanxi culture dampens this effect, as individuals are less inclined to apply prosocial punishment, particularly when misinformation spreaders belong to their guanxi network. The research underscores the practical utility of regulations assigning specific accountability to individuals, such as chat group administrators, to enhance their willingness to engage in prosocial punishment. Yet, the impact of guanxi necessitates nuanced approaches across varied cultural milieus. It highlights the imperative for further inquiry into the relationship between misinformation and polarisation, examining diverse misinformation types and cultural contexts.
French et al. (2024) present a comprehensive examination of ‘disinformation’, emphasising its pivotal role in societal polarisation and proposing strategies for its mitigation. Introducing the innovative Typology of Disinformation Intentionality and Impact (DII), this paper offers a framework to classify disinformation threats based on their intentionality and impact, specifically emphasising virality and polarisation. Grounded in Information Manipulation Theory (IMT), the research elucidates how disinformation is crafted and disseminated, making substantial contributions to understanding social media-induced polarisation. Employing decision theory and risk management principles, the study advocates for proactive approaches to evaluate and categorise disinformation risks, departing from traditional reactive tactics. The DII typology classifies disinformation based on spreaders' belief in its truthfulness (intentionality) and its potential to go viral and induce polarisation (impact). This results in a matrix with four quadrants identifying different types of disinformation spreaders. A case study of disinformation campaigns during the US presidential elections illustrates the practical application of the DII typology, providing insights into the nature and scale of disinformation issues. The study proposes specific mitigation strategies for each category within the DII typology, including monitoring and responding to low-impact disinformation, raising awareness, managing high-impact campaigns and discouraging dissemination of high-impact, intentional disinformation. Furthermore, the authors advocate for establishing a Disinformation Management Officer (DMO) role within organisations. This role entails ongoing monitoring, assessment and response to disinformation threats, aligning strategies with the DII typology to combat disinformation effectively.
In conclusion, the articles featured in this special issue significantly advance our understanding of SMIP. Through various theoretical frameworks, empirical evidence and practical implications, they offer valuable insights into the intricate challenges posed by polarisation within digital contexts. Despite the perceived advantages of social media platforms, this collection of research brings to light the urgent concerns arising from social media-driven misinformation and polarisation, important issues of our time that are often sidelined in public policy discourse. By exploring various facets of SMIP, these articles deepen our understanding of the phenomenon and its profound consequences, fostering a collective awareness of social media's role in perpetuating socio-cultural polarisation. The relevance of this special issue is evident as it addresses both theoretical complexities and methodological obstacles, while also suggesting potential solutions that could stimulate broader discussions on socio-cultural polarisation induced by social media. This body of work is poised to benefit scholars across diverse disciplines, policymakers and organisational leaders by equipping them with valuable insights and tools to navigate the complexities of polarisation in digital spaces. Ultimately, we anticipate that the insights gleaned from this editorial and the articles within this special issue will empower stakeholders with the conceptual and empirical resources necessary to foster inclusive and cohesive digital environments, thus mitigating the adverse impacts of polarisation induced by misinformation, disinformation, fake news, deepfakes, conspiracy theories and herded anarchy. Such polarisation often manifests itself in echo chambers, filter bubbles, cyberbalkanisation, splinternet, ghettoisation and regression to meanness, emphasising the critical importance of addressing these issues to promote informed discourse and societal cohesion.
Journal introduction:
The Information Systems Journal (ISJ) is an international journal promoting the study of, and interest in, information systems. Articles are welcome on research, practice, experience, current issues and debates. The ISJ encourages submissions that reflect the wide and interdisciplinary nature of the subject and articles that integrate technological disciplines with social, contextual and management issues, based on research using appropriate research methods. The ISJ has particularly built its reputation by publishing qualitative research and it continues to welcome such papers. Quantitative research papers are also welcome but they need to emphasise the context of the research and the theoretical and practical implications of their findings. The ISJ does not publish purely technical papers.