
Latest articles in Online Social Networks and Media

IMMENSE: Inductive Multi-perspective User Classification in Social Networks
IF 2.9 Q1 Social Sciences Pub Date: 2025-09-12 DOI: 10.1016/j.osnem.2025.100335
Francesco Benedetti , Antonio Pellicani , Gianvito Pio , Michelangelo Ceci
Online social networks increasingly expose people to users who propagate discriminatory, hateful, and violent content. Young users, in particular, are vulnerable to exposure to such content, which can have harmful psychological and social repercussions. Given the massive scale of today’s social networks, in terms of both published content and number of users, there is an urgent need for effective systems to aid Law Enforcement Agencies (LEAs) in identifying and addressing users who disseminate malicious content. In this work, we introduce IMMENSE, a machine learning-based method for detecting malicious social network users. Our approach adopts a hybrid classification strategy that integrates three perspectives: the semantics of the users’ published content, their social relationships, and their spatial information. Such contextual perspectives potentially enhance classification performance beyond text-only analysis. Importantly, IMMENSE employs an inductive learning approach, enabling it to classify previously unseen users or entire new networks without the need for costly and time-consuming model retraining procedures. Experiments carried out on a real-world Twitter/X dataset showed the superiority of IMMENSE against five state-of-the-art competitors, confirming the benefits of its hybrid approach for effective deployment in social network monitoring systems.
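A minimal sketch of the hybrid, multi-perspective idea described above (not the authors' implementation): build one feature vector per perspective, fuse them, and score the result with a classifier. All names, dimensions, and weights below are illustrative assumptions.

```python
import math

def fuse_perspectives(text_vec, social_vec, spatial_vec):
    """Concatenate the three perspective vectors into one user representation."""
    return list(text_vec) + list(social_vec) + list(spatial_vec)

def malicious_probability(features, weights, bias=0.0):
    """Toy linear scorer standing in for a trained inductive classifier."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Because the scorer only needs the fused feature vector, a previously unseen
# user can be scored without retraining -- the inductive property the paper
# emphasizes. Vectors and weights here are made up for illustration.
unseen_user = fuse_perspectives([0.8, 0.1], [0.5], [0.2, 0.9])
p = malicious_probability(unseen_user, weights=[1.2, -0.4, 0.7, 0.3, 0.5])
```

The key design point is that nothing in the scoring step depends on the training graph itself, which is what allows classification of entire new networks.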
Citations: 0
Intelligent nudging for truth: Mitigating rumor and misinformation in social networks with behavioral strategies
IF 2.9 Q1 Social Sciences Pub Date: 2025-08-22 DOI: 10.1016/j.osnem.2025.100333
Indu V. , Sabu M. Thampi
Social networks play a crucial role in disseminating information during emergencies and natural disasters, but they also facilitate the spread of rumors and misinformation, which can have adverse effects on society. Numerous false messages related to the COVID-19 pandemic circulated on social networks, causing unnecessary fear and anxiety, and leading to various mental health issues. Despite strict measures by social network providers and government authorities to curb fake news, many users continue to fall victim to misinformation. This highlights the need for novel approaches that incorporate user participation in mitigating rumors on social networks. Since users are the primary consumers and spreaders of information, their involvement is essential in maintaining information hygiene. We propose a novel approach based on nudging theory to motivate users to post or share only verified information on their social network profiles, thereby positively influencing their information-sharing behavior. Our approach utilizes three nudging strategies: Confront nudge, Reinforcement nudge, and Social Influence nudge. We have developed a Chrome browser plug-in for Twitter that prompts users to verify the authenticity of tweets and rate them before sharing. Additionally, user profiles receive a rating based on the average ratings of their posted tweets. The effectiveness of this mechanism was tested in a field study involving 125 Twitter users over one month. The results suggest that the proposed approach is a promising solution for limiting the propagation of rumors on social networks.
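The profile-rating rule described above can be pictured with a short sketch. Assumed details (not stated in the abstract): ratings are numeric and the profile score is the plain mean of the per-tweet average ratings.

```python
def tweet_average(ratings):
    """Mean rating of a single tweet, given all ratings it received."""
    return sum(ratings) / len(ratings)

def profile_rating(tweets):
    """Profile score = average of the per-tweet average ratings."""
    if not tweets:
        return None  # no rated tweets yet
    return sum(tweet_average(r) for r in tweets) / len(tweets)

# Hypothetical user with three rated tweets (per-tweet means: 4.5, 3.0, 4.0).
score = profile_rating([[5, 4], [3], [4, 4, 4]])
```

A visible profile score of this kind is what gives the Social Influence nudge its bite: sharing unverified content lowers a rating other users can see.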
Citations: 0
WhatsApp tiplines and multilingual claims in the 2021 Indian assembly elections
IF 2.9 Q1 Social Sciences Pub Date: 2025-08-16 DOI: 10.1016/j.osnem.2025.100323
Gautam Kishore Shahi , Scott A. Hale
WhatsApp tiplines, first launched in 2019 to combat misinformation, enable users to interact with fact-checkers to verify misleading content. This study analyzes 580 unique claims (tips) from 451 users, covering both high-resource languages (English, Hindi) and a low-resource language (Telugu) during the 2021 Indian assembly elections using a mixed-method approach. We categorize the claims into three groups (election, COVID-19, and other) and observe variations across languages. We compare content similarity through frequent word analysis and clustering of neural sentence embeddings. We also investigate user overlap across languages and fact-checking organizations. We measure the average time required to debunk claims and inform tipline users. Results reveal similarities in claims across languages, with some users submitting tips in multiple languages to the same fact-checkers. Fact-checkers generally require a couple of days to debunk a new claim and share the results with users. Notably, no user submits claims to multiple fact-checking organizations, indicating that each organization maintains a unique audience. We provide practical recommendations for using tiplines during elections, with ethical consideration given to user information.
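A hedged illustration of the "clustering of neural sentence embeddings" step: a greedy single-pass grouping of claims whose embedding cosine similarity exceeds a threshold. The toy vectors and the 0.9 threshold are assumptions; the study's actual embedding model and clustering algorithm may differ.

```python
import math

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def cluster_claims(embeddings, threshold=0.9):
    """Greedy clustering: join the first cluster whose representative
    (its first member) is similar enough, else start a new cluster."""
    clusters = []  # each cluster is a list of claim indices
    for idx, emb in enumerate(embeddings):
        for cluster in clusters:
            if cosine(embeddings[cluster[0]], emb) >= threshold:
                cluster.append(idx)
                break
        else:
            clusters.append([idx])
    return clusters

# Toy embeddings: the first two "claims" are near-duplicates.
claims = [[1.0, 0.0], [0.99, 0.05], [0.0, 1.0]]
groups = cluster_claims(claims)
```

In practice the embeddings would come from a multilingual sentence encoder, which is what lets near-identical claims in English, Hindi, and Telugu land in the same cluster.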
Citations: 0
SPREADSHOT: Analysis of fake news spreading through topic modeling and bipartite weighted graphs
IF 2.9 Q1 Social Sciences Pub Date: 2025-08-01 DOI: 10.1016/j.osnem.2025.100324
Carmela Bernardo, Marta Catillo, Antonio Pecchia, Francesco Vasca, Umberto Villano
The spreading of fake news is one of the primary drivers of misinformation in social networks. Graph-based approaches that analyze fake news dissemination are mostly dedicated to fake news detection and consider homogeneous tree-based networks obtained by following the diffusion of single messages through users, thus lacking the ability to identify implicit patterns among spreaders and topics. Alternatively, heterogeneous graphs have been proposed, although detection remains their main goal and the use of graph centralities is rather limited. In this paper, bipartite weighted graphs are used to analyze fake news and spreaders by utilizing topic modeling and a combination of network centrality measures. The proposed architecture, called SPREADSHOT, leverages a topic modeling technique to identify key topics or subjects within a collection of fake news articles published by spreaders, thus generating a bipartite weighted graph. By projecting the graph model onto the space of spreaders, one can identify the strengths of links between them in terms of fakeness correlation on common topics. Moreover, the closeness and betweenness centralities highlight spreaders who represent key enablers in the dissemination of fakeness on different topics. The projection of the bipartite graph onto the space of topics allows one to identify topics that are more prone to misinformation. By collecting specific network measures, a synthetic fakeness networking index is defined that characterizes the behaviors and roles of spreaders and topics in fakeness dissemination. The effectiveness of the proposed technique is demonstrated through tests on the LIAR dataset.
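A minimal sketch of the bipartite projection step described above. Assumed input (an illustration, not the paper's exact formulation): a dict mapping (spreader, topic) pairs to fakeness weights; projecting onto the spreader side links two spreaders by the sum, over shared topics, of the products of their weights.

```python
from itertools import combinations

def project_to_spreaders(bipartite_weights):
    """Project a spreader-topic weighted bipartite graph onto spreaders."""
    # Group (spreader, weight) pairs by the topic they share.
    by_topic = {}
    for (spreader, topic), w in bipartite_weights.items():
        by_topic.setdefault(topic, []).append((spreader, w))
    # Link every pair of spreaders active on the same topic.
    links = {}
    for topic, entries in by_topic.items():
        for (s1, w1), (s2, w2) in combinations(sorted(entries), 2):
            key = (s1, s2)
            links[key] = links.get(key, 0.0) + w1 * w2
    return links

# Hypothetical data: alice and bob co-spread on "health"; bob and carol
# co-spread on "politics".
graph = {("alice", "health"): 2.0, ("bob", "health"): 3.0,
         ("bob", "politics"): 1.0, ("carol", "politics"): 4.0}
projected = project_to_spreaders(graph)
```

Centralities (closeness, betweenness) would then be computed on this projected weighted graph to surface the key enabling spreaders.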
Citations: 0
Assessing the potential of generative agents in crowdsourced fact-checking
Q1 Social Sciences Pub Date: 2025-07-28 DOI: 10.1016/j.osnem.2025.100326
Luigia Costabile , Gian Marco Orlando , Valerio La Gatta , Vincenzo Moscato
The growing spread of online misinformation has created an urgent need for scalable, reliable fact-checking solutions. Crowdsourced fact-checking—where non-experts evaluate claim veracity—offers a cost-effective alternative to expert verification, despite concerns about variability in quality and bias. Encouraged by promising results in certain contexts, major platforms such as X (formerly Twitter), Facebook, and Instagram have begun shifting from centralized moderation to decentralized, crowd-based approaches.
In parallel, advances in Large Language Models (LLMs) have shown strong performance across core fact-checking tasks, including claim detection and evidence evaluation. However, their potential role in crowdsourced workflows remains unexplored. This paper investigates whether LLM-powered generative agents—autonomous entities that emulate human behavior and decision-making—can meaningfully contribute to fact-checking tasks traditionally reserved for human crowds.
Using the protocol of La Barbera et al. (2024), we simulate crowds of generative agents with diverse demographic and ideological profiles. Agents retrieve evidence, assess claims along multiple quality dimensions, and issue final veracity judgments. Our results show that agent crowds outperform human crowds in truthfulness classification, exhibit higher internal consistency, and show reduced susceptibility to social and cognitive biases. Compared to humans, agents rely more systematically on informative criteria such as Accuracy, Precision, and Informativeness, suggesting a more structured decision-making process. Overall, our findings highlight the potential of generative agents as scalable, consistent, and less biased contributors to crowd-based fact-checking systems.
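A hedged sketch of the crowd-simulation idea: each generative agent carries a demographic/ideological profile, returns a veracity verdict for a claim, and the crowd's answer is the majority vote. The verdict function here is a deterministic stub standing in for an actual LLM persona call; the rule it uses is purely illustrative.

```python
from collections import Counter

def agent_verdict(profile, claim):
    """Stand-in for prompting an LLM persona; returns 'true' or 'false'.
    Illustrative stub: agents flag claims containing the word 'miracle'."""
    return "false" if "miracle" in claim.lower() else "true"

def crowd_judgment(profiles, claim):
    """Aggregate individual agent verdicts by majority vote."""
    votes = Counter(agent_verdict(p, claim) for p in profiles)
    return votes.most_common(1)[0][0]

# Hypothetical agent crowd with varied demographic/ideological profiles.
crowd = [{"age": 25, "ideology": "left"},
         {"age": 52, "ideology": "right"},
         {"age": 34, "ideology": "center"}]
verdict = crowd_judgment(crowd, "Miracle cure eliminates virus overnight")
```

In the study's actual setup, the per-agent step would be an LLM prompt conditioned on the persona and retrieved evidence, with judgments along multiple quality dimensions rather than a single label.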
Citations: 0
Mitigating radicalization in recommender systems by rewiring graph with deep reinforcement learning
Q1 Social Sciences Pub Date: 2025-07-26 DOI: 10.1016/j.osnem.2025.100325
Omran Berjawi , Giuseppe Fenza , Rida Khatoun , Vincenzo Loia
Recommender systems play a crucial role in enhancing user experiences by suggesting content based on users’ consumption histories. However, a significant challenge they encounter is managing the spread of radicalized content and preventing users from becoming trapped in radicalized pathways. This paper addresses the radicalization problem in recommendation systems (RS) by proposing a graph-based approach called Deep Reinforcement Learning Graph Rewiring (DRLGR). First, we measure the radicalization score (Rad(G)) of the recommendation graph by assessing the extent of users’ exposure to radical content. Second, we develop a Reinforcement Learning (RL) method, which learns over time which edges among many possible ones should be rewired to reduce Rad(G). The experimental results on video and news recommendation datasets show that DRLGR consistently reduces the radicalization score and demonstrates more sustained improvements over time, particularly in more complex graphs, compared to baseline methods and heuristic approaches such as HEU, which may reduce radicalization more rapidly in the early stages with fewer interventions but plateau over time.
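A minimal sketch of a graph-level radicalization score in the spirit of Rad(G): per user, the fraction of recommended items that are labeled radical, with Rad(G) as the mean over users. The exact definition in the paper may differ; the recommendation edges and labels below are illustrative.

```python
def radicalization_score(recommendations, radical_items):
    """recommendations: dict mapping user -> list of recommended item ids."""
    per_user = []
    for user, items in recommendations.items():
        if items:
            exposed = sum(1 for it in items if it in radical_items)
            per_user.append(exposed / len(items))
    return sum(per_user) / len(per_user) if per_user else 0.0

# Hypothetical recommendation edges; item "v2" is labeled radical.
recs = {"u1": ["v1", "v2"], "u2": ["v2", "v3", "v4"]}
rad_g = radicalization_score(recs, radical_items={"v2"})
# Rewiring would replace high-exposure edges (e.g. u1 -> v2) with safer
# recommendations, and the RL agent's reward would track the drop in Rad(G).
```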
Citations: 0
Computational analysis of Information Disorder in Cognitive Warfare
Q1 Social Sciences Pub Date: 2025-07-22 DOI: 10.1016/j.osnem.2025.100322
Angelo Gaeta , Vincenzo Loia , Angelo Lorusso , Francesco Orciuoli , Antonella Pascuzzo
Cognitive Warfare represents the modern evolution of traditional conflict, where the human mind emerges as the primary battleground and information serves as a weapon to influence people’s thoughts, perceptions, and behaviors. Adopting the Information Disorder perspective, this work meticulously explores the phenomena associated with Cognitive Warfare, particularly as they spread across online social networks and media, to better understand their textual nature. In particular, the work focuses on specific cognitive weapons predominantly used by malicious actors in this context, such as the dissemination of misleading political news, junk science, and conspiracy theories. Therefore, the paper proposes an approach to identify, extract, and assess text-based features that characterize the forms of Information Disorder involved in Cognitive Warfare. The proposed approach starts with a literature review and ends by assessing the identified and selected features through comprehensive experimentation based on a well-known dataset, conducted through the application of machine learning methods. In particular, by applying Rough Set Theory and explainable AI, it is found that features belonging to readability, psychological, and linguistic categories contribute significantly to classifying the aforementioned forms of disorder. The obtained results are highly valuable, as they can be leveraged to analyze critical aspects of Information Disorder, such as identifying the intent behind manipulated content and its targeted audience.
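A hedged sketch of the feature-extraction step: a few readability and linguistic features of the kind the study evaluates (average sentence length, average word length, first-person-pronoun rate). These particular features are illustrative stand-ins for the paper's full feature set, and the sketch assumes non-empty English text.

```python
import re

FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}

def text_features(text):
    """Extract simple readability/linguistic features from a text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())
    return {
        "avg_sentence_len": len(words) / len(sentences),
        "avg_word_len": sum(len(w) for w in words) / len(words),
        "first_person_rate": sum(w in FIRST_PERSON for w in words) / len(words),
    }

feats = text_features("We know the truth. They hide it from us!")
```

Feature vectors of this kind are what a Rough Set or explainable-AI analysis would then rank by their contribution to separating, e.g., junk science from conspiracy theories.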
Citations: 0
Detecting mental disorder on social media: A ChatGPT-augmented explainable approach
Q1 Social Sciences Pub Date : 2025-07-08 DOI: 10.1016/j.osnem.2025.100321
Loris Belcastro, Riccardo Cantini, Fabrizio Marozzo, Domenico Talia, Paolo Trunfio
In the digital era, the prevalence of depressive symptoms expressed on social media has raised serious concerns, necessitating advanced methodologies for timely detection. This paper addresses the challenge of interpretable depression detection by proposing a novel methodology that effectively combines Large Language Models (LLMs) with eXplainable Artificial Intelligence (XAI) and conversational agents like ChatGPT. In our methodology, explanations are achieved by integrating BERTweet, a Twitter-specific variant of BERT, into a novel self-explanatory model, namely BERT-XDD, capable of providing both classification and explanations via masked attention. The interpretability is further enhanced using ChatGPT to transform technical explanations into human-readable commentaries. By introducing an effective and modular approach for interpretable depression detection, our methodology can contribute to the development of socially responsible digital platforms, fostering early intervention and support for mental health challenges under the guidance of qualified healthcare professionals.
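The masked-attention explanations that BERT-XDD provides can be approximated in miniature: softmax-normalize per-token attention scores and surface the top-weighted words as the rationale for a prediction. The scores below are invented inputs; in the real model they are learned:

```python
import math

def explain_by_attention(tokens, attn_scores, top_k=3):
    """Rank tokens by normalized attention weight, as a self-explanatory
    classifier might surface the words driving its prediction."""
    exp = [math.exp(s) for s in attn_scores]
    total = sum(exp)
    weights = [e / total for e in exp]  # softmax normalization
    ranked = sorted(zip(tokens, weights), key=lambda tw: tw[1], reverse=True)
    return ranked[:top_k]

top = explain_by_attention(
    ["i", "feel", "hopeless", "today"], [0.1, 0.2, 2.0, 0.1], top_k=2
)
```

A conversational agent could then rephrase the ranked tokens into a human-readable commentary, which is the role ChatGPT plays in the methodology above.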
Citations: 0
Investigating the heterogeneous effects of a massive content moderation intervention via Difference-in-Differences
Q1 Social Sciences Pub Date : 2025-07-01 DOI: 10.1016/j.osnem.2025.100320
Lorenzo Cima , Benedetta Tessa , Amaury Trujillo , Stefano Cresci , Marco Avvenuti
In today’s online environments, users encounter harm and abuse on a daily basis. Therefore, content moderation is crucial to ensure their safety and well-being. However, the effectiveness of many moderation interventions is still uncertain. Here, we apply a causal inference approach to shed light on the effectiveness of The Great Ban, a massive social media deplatforming intervention on Reddit. We analyze 53M comments shared by nearly 34K users, providing in-depth results on both the intended and unintended consequences of the ban. Our causal analyses reveal that 15.6% of the moderated users abandoned the platform while the remaining ones decreased their overall toxicity by 4.1%. Nonetheless, a small subset of users exhibited marked increases in both the intensity and volume of toxic behavior, particularly among those whose activity levels changed after the intervention. However, these reactions were not accompanied by greater activity or engagement, suggesting that even the most toxic users maintained a limited overall impact. Our findings bring to light new insights on the effectiveness of deplatforming moderation interventions. Furthermore, they also contribute to informing future content moderation strategies and regulations.
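The Difference-in-Differences design behind these causal claims reduces to simple arithmetic: the change in the treated group minus the change in the control group, under a parallel-trends assumption. A sketch of the canonical two-group, two-period estimator (not the authors’ full specification, which handles heterogeneous effects) is:

```python
def did_estimate(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-Differences: the change in the treated group's mean
    outcome minus the change in the control group's mean outcome isolates
    the intervention effect, assuming both groups would otherwise have
    followed parallel trends."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change

# Toy toxicity scores: treated users drop 4 points, controls drop 1,
# so the estimated effect of the ban is a 3-point reduction.
effect = did_estimate([10, 10], [6, 6], [10, 10], [9, 9])
```

Applied per user subgroup, the same logic lets a study separate users who abandoned the platform from those who stayed and changed their behavior.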
Citations: 0
Safeguarding Decentralized Social Media: LLM Agents for Automating Community Rule Compliance
Q1 Social Sciences Pub Date : 2025-06-24 DOI: 10.1016/j.osnem.2025.100319
Lucio La Cava, Andrea Tagarelli
Ensuring content compliance with community guidelines is crucial for maintaining healthy online social environments. However, traditional human-based compliance checking struggles with scaling due to the increasing volume of user-generated content and a limited number of moderators. Recent advancements in Natural Language Understanding demonstrated by Large Language Models unlock new opportunities for automated content compliance verification. This work evaluates six AI-agents built on Open-LLMs for automated rule compliance checking in Decentralized Social Networks, a challenging environment due to heterogeneous community scopes and rules. By analyzing over 50,000 posts from hundreds of Mastodon servers, we find that AI-agents effectively detect non-compliant content, grasp linguistic subtleties, and adapt to diverse community contexts. Most agents also show high inter-rater reliability and consistency in score justification and suggestions for compliance. Human-based evaluation with domain experts confirmed the agents’ reliability and usefulness, rendering them promising tools for semi-automated or human-in-the-loop content moderation systems.
Warning: This manuscript may contain sensitive content as it quotes harmful/hateful social media posts.
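A compliance agent of this kind typically runs a prompt-and-parse loop: assemble the community rule and the post into an instruction, then extract a structured verdict from the model’s reply. The JSON answer convention and function names below are hypothetical, not the paper’s implementation:

```python
import json

def build_compliance_prompt(rule: str, post: str) -> str:
    """Assemble an instruction asking an LLM agent whether a post
    complies with one community rule."""
    return (
        "You are a content moderator for a Mastodon server.\n"
        f"Rule: {rule}\n"
        f"Post: {post}\n"
        'Answer only as JSON: {"compliant": true/false, "justification": "..."}'
    )

def parse_verdict(reply: str) -> bool:
    """Extract the boolean compliance verdict from the agent's JSON reply."""
    return bool(json.loads(reply)["compliant"])

prompt = build_compliance_prompt("No hate speech.", "Hello fediverse!")
```

Because the justification travels with the verdict, a human moderator in a semi-automated loop can audit each decision, which is the deployment mode the evaluation above supports.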
Citations: 0