A Graph-Based Context-Aware Model to Understand Online Conversations

ACM Transactions on the Web · IF 2.6 · Q2 (Computer Science, Information Systems) · CAS Tier 4 (Computer Science) · Pub Date: 2023-11-03 · DOI: 10.1145/3624579
Vibhor Agarwal, Anthony P. Young, Sagar Joglekar, Nishanth Sastry
Citations: 4

Abstract

Online forums that allow for participatory engagement between users have been transformative for the public discussion of many important issues. However, such conversations can sometimes escalate into full-blown exchanges of hate and misinformation. Existing approaches in natural language processing (NLP), such as deep learning models for classification tasks, use as input only a single comment or a pair of comments, depending upon whether the task concerns the inference of properties of individual comments or of the replies between pairs of comments, respectively. However, in online conversations, comments and replies may be based on external context beyond the immediately relevant information that is input to the model. Therefore, being aware of a conversation's surrounding context should improve the model's performance on the inference task at hand. We propose GraphNLI, a novel graph-based deep learning architecture that uses graph walks to incorporate the wider context of a conversation in a principled manner. Specifically, a graph walk starts from a given comment and samples "nearby" comments in the same or parallel conversation threads, which yields additional embeddings that are aggregated together with the initial comment's embedding. We then use these enriched embeddings for downstream NLP prediction tasks that are important for online conversations. We evaluate GraphNLI on two such tasks, polarity prediction and misogynistic hate speech detection, and find that our model consistently outperforms all relevant baselines on both tasks. Specifically, GraphNLI with a biased root-seeking random walk achieves macro-F1 scores 3 and 6 percentage points higher than the best-performing BERT-based baselines for the polarity prediction and hate speech detection tasks, respectively. We also perform extensive ablation experiments and hyperparameter searches to understand the efficacy of GraphNLI. This demonstrates the potential of context-aware models to capture the global context along with the local context of online conversations for these two tasks.
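The graph-walk sampling step described above can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's implementation: the reply tree is stored as parent pointers plus child lists, and the bias probability `p_root` and walk length `walk_len` are hypothetical parameter names chosen for clarity. The idea is that from a starting comment, each step moves toward the root of the thread with probability `p_root` (the "root-seeking" bias), and otherwise jumps to a random neighbour, so sampled comments tend to come from the ancestor context of the conversation.

```python
import random

def biased_root_seeking_walk(parents, children, start,
                             walk_len=4, p_root=0.75, seed=None):
    """Sample 'nearby' comments via a biased root-seeking random walk.

    parents:  dict mapping comment id -> parent comment id (root maps to None)
    children: dict mapping comment id -> list of direct reply ids
    With probability p_root the walk steps toward the root (parent);
    otherwise it steps to a uniformly random neighbour (parent or child).
    Returns the sequence of sampled comment ids.
    """
    rng = random.Random(seed)
    node, sampled = start, []
    for _ in range(walk_len):
        parent = parents.get(node)
        kids = children.get(node, [])
        neighbours = ([parent] if parent is not None else []) + kids
        if not neighbours:          # isolated node: nothing to sample
            break
        if parent is not None and rng.random() < p_root:
            node = parent           # biased step toward the thread root
        else:
            node = rng.choice(neighbours)
        sampled.append(node)
    return sampled

# Tiny reply tree: root r <- a <- b
parents = {"r": None, "a": "r", "b": "a"}
children = {"r": ["a"], "a": ["b"], "b": []}

# With p_root=1.0 the walk deterministically climbs toward the root.
print(biased_root_seeking_walk(parents, children, "b",
                               walk_len=2, p_root=1.0))  # ['a', 'r']
```

In the architecture described by the abstract, the embeddings of the sampled comments would then be aggregated (for example, averaged with distance-based weights) with the starting comment's embedding before the downstream classifier; that aggregation step is omitted here.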
Source journal: ACM Transactions on the Web (Engineering & Technology, Computer Science: Software Engineering)
CiteScore: 4.90
Self-citation rate: 0.00%
Articles per year: 26
Review time: 7.5 months
Journal description: Transactions on the Web (TWEB) is a journal publishing refereed articles reporting the results of research on Web content, applications, use, and related enabling technologies. Topics in the scope of TWEB include but are not limited to the following: Browsers and Web Interfaces; Electronic Commerce; Electronic Publishing; Hypertext and Hypermedia; Semantic Web; Web Engineering; Web Services; Service-Oriented Computing; and XML. In addition, papers addressing the intersection of the following broader technologies with the Web are also in scope: Accessibility; Business Services; Education; Knowledge Management and Representation; Mobility and Pervasive Computing; Performance and Scalability; Recommender Systems; Searching, Indexing, Classification, Retrieval and Querying; Data Mining and Analysis; Security and Privacy; and User Interfaces. Papers discussing specific Web technologies, applications, content generation, management, and use are within scope. Papers describing novel applications of the Web, as well as papers on the underlying technologies, are also welcome.
Latest articles in this journal:
- DCDIMB: Dynamic Community-based Diversified Influence Maximization using Bridge Nodes
- Know their Customers: An Empirical Study of Online Account Enumeration Attacks
- Learning Dynamic Multimodal Network Slot Concepts from the Web for Forecasting Environmental, Social and Governance Ratings
- MuLX-QA: Classifying Multi-Labels and Extracting Rationale Spans in Social Media Posts
- Heterogeneous Graph Neural Network with Personalized and Adaptive Diversity for News Recommendation