
Latest publications: Proceedings of the ... International AAAI Conference on Weblogs and Social Media

Unifying the Extremes: Developing a Unified Model for Detecting and Predicting Extremist Traits and Radicalization.
Allison Lahnala, Vasudha Varadarajan, Lucie Flek, H Andrew Schwartz, Ryan L Boyd

The proliferation of ideological movements into extremist factions via social media has become a global concern. While radicalization has been studied extensively within the context of specific ideologies, our ability to accurately characterize extremism in more generalizable terms remains underdeveloped. In this paper, we propose a novel method for extracting and analyzing extremist discourse across a range of online ideological community forums. By focusing on verbal behavioral signatures of extremist traits, we develop a framework for quantifying extremism at both user and community levels. Our research identifies 11 distinct factors, which we term "The Extremist Eleven," as a generalized psychosocial model of extremism. Applying our method to various online communities, we demonstrate an ability to characterize ideologically diverse communities across the 11 extremist traits. We demonstrate the power of this method by analyzing user histories from members of the incel community. We find that our framework accurately predicts which users join the incel community up to 10 months before their actual entry with an AUC of > 0.6, steadily increasing to AUC ~ 0.9 three to four months before the event. Further, we find that upon entry into an ideological forum, the users tend to maintain their level of extremist traits within the community, while still remaining distinguishable from the general online discourse. Our findings contribute to the study of extremism by introducing a more holistic, cross-ideological approach that transcends traditional, trait-specific models.
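A minimal sketch of the kind of evaluation behind the AUC figures above: given per-user extremist-trait scores at some lead time, a rank-based ROC AUC measures how well the scores separate eventual joiners from non-joiners. The scores and labels here are hypothetical toy numbers, not the paper's data or model.

```python
from itertools import product

def auc(scores, labels):
    """Rank-based ROC AUC: the probability that a randomly chosen
    positive outranks a randomly chosen negative (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical extremist-trait scores for users 10 months before
# some of them (label 1) join the community.
scores = [0.9, 0.8, 0.4, 0.7, 0.2, 0.3]
labels = [1,   1,   0,   1,   0,   0]
print(auc(scores, labels))  # 1.0 on this toy sample; the paper reports >0.6 ten months out
```

An AUC of 0.5 corresponds to chance ranking, which is why values climbing from >0.6 toward ~0.9 as entry approaches indicate a strengthening behavioral signal.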

DOI: 10.1609/icwsm.v19i1.35860 | Volume 19, pp. 1051-1067 | Published 2025-06-07 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12584583/pdf/
Citations: 0
Supporters and Skeptics: LLM-based Analysis of Engagement with Mental Health (Mis)Information Content on Video-sharing Platforms.
Viet Cuong Nguyen, Mini Jain, Abhijat Chauhan, Heather Jaime Soled, Santiago Alvarez Lesmes, Zihang Li, Michael L Birnbaum, Sunny X Tang, Srijan Kumar, Munmun De Choudhury

Over one in five adults in the US lives with a mental illness. In the face of a shortage of mental health professionals and offline resources, online short-form video content has grown to serve as a crucial conduit for disseminating mental health help and resources. However, the ease of content creation and access also contributes to the spread of misinformation, posing risks to accurate diagnosis and treatment. Detecting and understanding engagement with such content is crucial to mitigating its harmful effects on public health. We perform the first quantitative study of the phenomenon using YouTube Shorts and Bitchute as the sites of study. We contribute MentalMisinfo, a novel labeled mental health misinformation (MHMisinfo) dataset of 739 videos (639 from YouTube and 100 from Bitchute) and 135,372 comments in total, using an expert-driven annotation schema. We first find that few-shot in-context learning with large language models (LLMs) is effective in detecting MHMisinfo videos. Next, we discover distinct and potentially alarming linguistic patterns in how audiences engage with MHMisinfo videos through commentary on both video-sharing platforms. Across the two platforms, comments could exacerbate prevailing stigma, with some groups showing heightened susceptibility to and alignment with MHMisinfo. We discuss technical and public health-driven adaptive solutions to tackling the "epidemic" of mental health misinformation online.
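The few-shot in-context learning setup can be sketched as prompt assembly: a handful of labeled transcripts followed by the unlabeled target. The label names, example transcripts, and `build_fewshot_prompt` helper below are illustrative assumptions, not the paper's actual prompts.

```python
def build_fewshot_prompt(examples, transcript):
    """Assemble a few-shot classification prompt from labeled
    (transcript, label) pairs plus the unlabeled target transcript."""
    header = ("Classify the mental-health claim in each video "
              "transcript as MISINFO or NOT_MISINFO.\n\n")
    shots = "".join(
        f"Transcript: {t}\nLabel: {y}\n\n" for t, y in examples
    )
    # The LLM is asked to continue the text after the final "Label:".
    return header + shots + f"Transcript: {transcript}\nLabel:"

examples = [
    ("Vitamin megadoses cure depression in days.", "MISINFO"),
    ("Therapy and medication can both help with anxiety.", "NOT_MISINFO"),
]
prompt = build_fewshot_prompt(examples, "Sugar causes bipolar disorder.")
```

The assembled string would then be sent to an LLM, whose single-token continuation serves as the predicted label.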

DOI: 10.1609/icwsm.v19i1.35875 | Volume 19, pp. 1329-1345 | Published 2025-06-07 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12365693/pdf/
Citations: 0
Large-Scale Analysis of Online Questions Related to Opioid Use Disorder on Reddit.
Tanmay Laud, Akadia Kacha-Ochana, Steven A Sumner, Vikram Krishnasamy, Royal Law, Lyna Schieber, Munmun De Choudhury, Mai ElSherief

Opioid use disorder (OUD) is a leading health problem that affects individual well-being as well as general public health. Due to a variety of reasons, including the stigma faced by people using opioids, online communities for recovery and support have formed on different social media platforms. In these communities, people share their experiences and solicit information by asking questions to learn about opioid use and recovery. However, these communities do not always contain clinically verified information. In this paper, we study natural language questions asked in the context of OUD-related discourse on Reddit. We adopt transformer-based question detection along with hierarchical clustering across 19 subreddits to identify six coarse-grained and 69 fine-grained categories of OUD-related questions. Our analysis uncovers ten areas of information seeking from Reddit users in the context of OUD during the 2018-2021 study period: drug sales, specific drug-related questions, OUD treatment, drug uses, side effects, withdrawal, lifestyle, drug testing, pain management, and others. Our work provides a major step toward improving the understanding of OUD-related questions people ask unobtrusively on Reddit. Finally, we discuss technological interventions and public health harm-reduction techniques based on the topics of these questions.
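A toy sketch of the coarse-grained grouping step, assuming question embeddings are already computed. The paper uses transformer representations; the naive single-linkage routine and 2-D vectors here are stand-ins for illustration only.

```python
def single_linkage_clusters(points, k):
    """Naive agglomerative (single-linkage) clustering: repeatedly
    merge the two closest clusters until only k clusters remain."""
    clusters = [[p] for p in points]

    def dist(a, b):
        # Single linkage: distance between the closest pair of members.
        return min(sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
                   for p in a for q in b)

    while len(clusters) > k:
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] += clusters.pop(j)  # j > i, so pop is safe
    return clusters

# Hypothetical 2-D question embeddings; real ones would come from
# a transformer encoder and be cut at two granularities (6 and 69).
embeddings = [(0.0, 0.1), (0.1, 0.0), (5.0, 5.1), (5.1, 5.0)]
coarse = single_linkage_clusters(embeddings, 2)
```

Cutting the same merge hierarchy at two depths is what yields both a coarse-grained and a fine-grained categorization from one clustering run.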

DOI: 10.1609/icwsm.v19i1.35861 | Volume 19, pp. 1068-1084 | Published 2025-01-01 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12766712/pdf/
Citations: 0
Reliability Analysis of Psychological Concept Extraction and Classification in User-penned Text.
Muskan Garg, Msvpj Sathvik, Shaina Raza, Amrit Chadha, Sunghwan Sohn

The social NLP research community has witnessed a recent surge in computational advancements in mental health analysis, building responsible AI models for the complex interplay between language use and self-perception. Such responsible AI models aid in quantifying psychological concepts from user-penned texts on social media. Thinking beyond the low-level (classification) task, we advance the existing binary classification dataset toward a higher-level task of reliability analysis through the lens of explanations, posing it as one of the safety measures. We annotate the LoST dataset to capture nuanced textual cues that suggest the presence of low self-esteem in the posts of Reddit users. We further state that the NLP models developed for determining the presence of low self-esteem focus on three types of textual cues: (i) Trigger: words that trigger mental disturbance, (ii) LoST indicators: text indicators emphasizing low self-esteem, and (iii) Consequences: words describing the consequences of mental disturbance. We implement existing classifiers to examine the attention mechanism in pre-trained language models (PLMs) for this domain-specific, psychology-grounded task. Our findings suggest the need to shift the focus of PLMs from Trigger and Consequences to a more comprehensive explanation, emphasizing LoST indicators when determining low self-esteem in Reddit posts.
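One simple way to quantify where a model's attention falls across the three cue types is to sum attention mass over per-category lexicons. The tiny lexicons, tokens, and weights below are hypothetical stand-ins, not the LoST annotations or an actual PLM's attention.

```python
# Hypothetical cue lexicons for the three textual-cue types;
# the paper's actual annotations are far richer than this.
CUES = {
    "trigger": {"rejected", "ignored"},
    "lost_indicator": {"worthless", "unlovable"},
    "consequence": {"hopeless", "isolated"},
}

def attention_by_cue_type(tokens, weights):
    """Aggregate token-level attention weights into the share of
    total attention mass falling on each cue category."""
    total = sum(weights)
    mass = {name: 0.0 for name in CUES}
    for tok, w in zip(tokens, weights):
        for name, lexicon in CUES.items():
            if tok.lower() in lexicon:
                mass[name] += w
    return {name: m / total for name, m in mass.items()}

tokens = ["i", "feel", "worthless", "and", "hopeless"]
weights = [0.05, 0.10, 0.50, 0.05, 0.30]
shares = attention_by_cue_type(tokens, weights)
```

Comparing these per-category shares across models is one way to check whether a classifier leans on LoST indicators or mostly on Trigger and Consequence words.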

DOI: 10.1609/icwsm.v18i1.31324 | Volume 18, pp. 422-434 | Published 2024-05-31 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11881108/pdf/
Citations: 0
Negative Associations in Word Embeddings Predict Anti-black Bias across Regions-but Only via Name Frequency.
Austin van Loon, Salvatore Giorgi, Robb Willer, Johannes Eichstaedt

The word embedding association test (WEAT) is an important method for measuring linguistic biases against social groups such as ethnic minorities in large text corpora. It does so by comparing the semantic relatedness of words prototypical of the groups (e.g., names unique to those groups) and attribute words (e.g., 'pleasant' and 'unpleasant' words). We show that anti-Black WEAT estimates from geo-tagged social media data at the level of metropolitan statistical areas strongly correlate with several measures of racial animus-even when controlling for sociodemographic covariates. However, we also show that every one of these correlations is explained by a third variable: the frequency of Black names in the underlying corpora relative to White names. This occurs because word embeddings tend to group positive (negative) words and frequent (rare) words together in the estimated semantic space. As the frequency of Black names on social media is strongly correlated with Black Americans' prevalence in the population, this results in spuriously high anti-Black WEAT estimates wherever few Black Americans live. This suggests that research using the WEAT to measure bias should consider term frequency, and also demonstrates the potential consequences of using black-box models like word embeddings to study human cognition and behavior.
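For reference, the standard WEAT effect size compares the mean cosine association of two target-word sets with two attribute-word sets, scaled by the pooled standard deviation of the per-word associations. A minimal pure-Python sketch on toy 2-D vectors (real embeddings are high-dimensional):

```python
def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: difference in mean association of target
    sets X and Y with attribute sets A vs. B, divided by the
    standard deviation of associations over X union Y."""
    def s(w):  # differential association of word w with A vs. B
        return (sum(cosine(w, a) for a in A) / len(A)
                - sum(cosine(w, b) for b in B) / len(B))
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    all_s = sx + sy
    mean = sum(all_s) / len(all_s)
    std = (sum((v - mean) ** 2 for v in all_s) / len(all_s)) ** 0.5
    return (sum(sx) / len(sx) - sum(sy) / len(sy)) / std

# Toy 2-D "embeddings": X-words lean toward attribute A, Y-words
# toward attribute B, so the effect size comes out positive.
A = [(1.0, 0.0)]
B = [(0.0, 1.0)]
X = [(1.0, 0.1), (0.9, 0.2)]
Y = [(0.1, 1.0), (0.2, 0.9)]
effect = weat_effect_size(X, Y, A, B)
```

The paper's confound enters through `s(w)` itself: if rare words drift away from the pleasant region of the space, name frequency alone can move the effect size.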

DOI: 10.1609/icwsm.v16i1.19399 | Volume 16, pp. 1419-1424 | Published 2022-05-31 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10147343/pdf/nihms-1842382.pdf
Citations: 3
Correcting Sociodemographic Selection Biases for Population Prediction from Social Media.
Salvatore Giorgi, Veronica E Lynn, Keshav Gupta, Farhan Ahmed, Sandra Matz, Lyle H Ungar, H Andrew Schwartz

Social media is increasingly used for large-scale population predictions, such as estimating community health statistics. However, social media users are not typically a representative sample of the intended population - a "selection bias". Within the social sciences, such a bias is typically addressed with restratification techniques, where observations are reweighted according to how under- or over-sampled their socio-demographic groups are. Yet, restratification is rarely evaluated for improving prediction. In this two-part study, we first evaluate standard, "out-of-the-box" restratification techniques, finding they provide no improvement and often even degraded prediction accuracies across four tasks of estimating U.S. county population health statistics from Twitter. The core reasons for degraded performance seem to be tied to their reliance on either sparse or shrunken estimates of each population's socio-demographics. In the second part of our study, we develop and evaluate Robust Poststratification, which consists of three methods to address these problems: (1) estimator redistribution to account for shrinking, as well as (2) adaptive binning and (3) informed smoothing to handle sparse socio-demographic estimates. We show that each of these methods leads to significant improvement in prediction accuracies over the standard restratification approaches. Taken together, Robust Poststratification enables state-of-the-art prediction accuracies, yielding a 53.0% increase in variance explained (R²) in the case of surveyed life satisfaction, and a 17.8% average increase across all tasks.
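The basic restratification idea, reweighting stratum-level estimates by population rather than sample shares, can be illustrated with a two-stratum toy example. The numbers are hypothetical; the paper's Robust Poststratification adds estimator redistribution, adaptive binning, and informed smoothing on top of this base operation.

```python
def poststratify(estimates, sample_share, population_share):
    """Compare a naive sample-weighted aggregate against a
    poststratified one in which each socio-demographic stratum
    counts by its population share rather than its sample share."""
    naive = sum(estimates[g] * sample_share[g] for g in estimates)
    corrected = sum(estimates[g] * population_share[g] for g in estimates)
    return naive, corrected

# Hypothetical strata: young users over-sampled on Twitter.
estimates = {"young": 7.0, "old": 5.0}         # e.g., mean life satisfaction
sample_share = {"young": 0.8, "old": 0.2}      # share of the Twitter sample
population_share = {"young": 0.3, "old": 0.7}  # census share
naive, corrected = poststratify(estimates, sample_share, population_share)
```

The failure mode the paper documents appears when a stratum's `estimates[g]` rests on very few observations: reweighting then amplifies noise instead of removing bias, which is what the adaptive binning and smoothing steps address.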

Volume 16(1), pp. 228-240 | Published 2022-05-31 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9714525/pdf/nihms-1842768.pdf
Citations: 0
Classifying Minority Stress Disclosure on Social Media with Bidirectional Long Short-Term Memory.
Cory J Cascalheira, Shah Muhammad Hamdi, Jillian R Scheer, Koustuv Saha, Soukaina Filali Boubrahimi, Munmun De Choudhury

Because of their stigmatized social status, sexual and gender minority (SGM; e.g., gay, transgender) people experience minority stress (i.e., identity-based stress arising from adverse social conditions). Given that minority stress is the leading framework for understanding health inequity among SGM people, researchers and clinicians need accurate methods to detect minority stress. Since social media fulfills important developmental, affiliative, and coping functions for SGM people, social media may be an ecologically valid channel for detecting minority stress. In this paper, we propose a bidirectional long short-term memory (BI-LSTM) network for classifying minority stress disclosed on Reddit. Our experiments on a dataset of 12,645 Reddit posts resulted in an average accuracy of 65%.
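To show just the "bidirectional" part of the architecture, here is a toy recurrence that runs a forward and a backward pass over per-token scores and concatenates the two states at each position. This is not an LSTM: a real BI-LSTM replaces the fixed decay update with learned LSTM cells over word embeddings.

```python
def toy_birnn(sequence, decay=0.5):
    """Toy bidirectional recurrence: a forward pass and a backward
    pass over the token scores, paired per token. Each position
    thus sees context from both directions."""
    def run(xs):
        h, states = 0.0, []
        for x in xs:
            h = decay * h + x  # simple stand-in for an LSTM cell update
            states.append(h)
        return states

    fwd = run(sequence)
    bwd = run(sequence[::-1])[::-1]  # backward pass, realigned to token order
    return list(zip(fwd, bwd))      # per-token (forward, backward) states

states = toy_birnn([1.0, 0.0, 2.0])
```

In the real model, the concatenated forward and backward states feed a classification layer that outputs the minority-stress label for the post.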

DOI: 10.1609/icwsm.v16i1.19390 | pp. 1373-1377 | Published 2022-05-31 | Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9235017/pdf/nihms-1816009.pdf
Citations: 0
Classifying Minority Stress Disclosure on Social Media with Bidirectional Long Short-Term Memory 双向长短期记忆对少数民族社交媒体压力披露的分类研究
C. Cascalheira, S. M. Hamdi, Jillian R. Scheer, Koustuv Saha, S. F. Boubrahimi, M. Choudhury
Because of their stigmatized social status, sexual and gender minority (SGM; e.g., gay, transgender) people experience minority stress (i.e., identity-based stress arising from adverse social conditions). Given that minority stress is the leading framework for understanding health inequity among SGM people, researchers and clinicians need accurate methods to detect minority stress. Since social media fulfills important developmental, affiliative, and coping functions for SGM people, social media may be an ecologically valid channel for detecting minority stress. In this paper, we propose a bidirectional long short-term memory (BI-LSTM) network for classifying minority stress disclosed on Reddit. Our experiments on a dataset of 12,645 Reddit posts resulted in an average accuracy of 65%.
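The abstract above names a BI-LSTM classifier. As a hedged illustration only (not the authors' model), the sketch below replaces full LSTM cells with a single-unit tanh recurrence run in each direction, concatenating the two final states into a logistic output; the scalar "embeddings" and all weight values are made-up assumptions:

```python
import math

def simple_rnn_pass(embeds, w_x, w_h, reverse=False):
    """One recurrent pass (tanh cell); a simplified stand-in for one LSTM direction."""
    h = 0.0
    seq = reversed(embeds) if reverse else embeds
    for x in seq:
        h = math.tanh(w_x * x + w_h * h)
    return h

def bi_rnn_classify(token_scores, w_x=0.8, w_h=0.5, w_out=(1.2, 1.2), bias=-0.3):
    """Run forward and backward passes, combine final states, apply a logistic layer.

    token_scores: toy scalar embedding per token (a real model would use vectors).
    Returns an illustrative probability of a minority-stress disclosure.
    """
    h_fwd = simple_rnn_pass(token_scores, w_x, w_h, reverse=False)
    h_bwd = simple_rnn_pass(token_scores, w_x, w_h, reverse=True)
    logit = w_out[0] * h_fwd + w_out[1] * h_bwd + bias
    return 1.0 / (1.0 + math.exp(-logit))
```

The bidirectional design is the point of the sketch: the classifier sees context both before and after each token, which single-direction recurrences miss.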
DOI: 10.1609/icwsm.v16i1.19390 · Pages 1373–1377 · Published 2022-05-31 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9235017/pdf/nihms-1816009.pdf
Citations: 3
Tweet Classification to Assist Human Moderation for Suicide Prevention
Ramit Sawhney, Harshit Joshi, Alicia Nobles, Rajiv Ratn Shah

Social media platforms are already engaged in leveraging existing online socio-technical systems to employ just-in-time interventions for suicide prevention to the public. These efforts primarily rely on self-reports of potential self-harm content that is reviewed by moderators. Most recently, platforms have employed automated models to identify self-harm content, but acknowledge that these automated models still struggle to understand the nuance of human language (e.g., sarcasm). By explicitly focusing on Twitter posts that could easily be misidentified by a model as expressing suicidal intent (i.e., they contain similar phrases such as "wanting to die"), our work examines the temporal differences in historical expressions of general and emotional language prior to a clear expression of suicidal intent. Additionally, we analyze time-aware neural models that build on these language variants and factors in the historical, emotional spectrum of a user's tweeting activity. The strongest model achieves high (statistically significant) performance (macro F1=0.804, recall=0.813) to identify social media indicative of suicidal intent. Using three use cases of tweets with phrases common to suicidal intent, we qualitatively analyze and interpret how such models decided if suicidal intent was present and discuss how these analyses may be used to alleviate the burden on human moderators within the known constraints of how moderation is performed (e.g., no access to the user's timeline). Finally, we discuss the ethical implications of such data-driven models and inferences about suicidal intent from social media. Content warning: this article discusses self-harm and suicide.
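The reported scores (macro F1 = 0.804, recall = 0.813) follow the standard definitions: macro F1 averages per-class F1 over the classes, weighting each class equally. A minimal stdlib sketch of those computations (standard metric definitions, not code from the paper):

```python
def per_class_prf(y_true, y_pred, cls):
    """Precision, recall, and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true))
    return sum(per_class_prf(y_true, y_pred, c)[2] for c in classes) / len(classes)
```

Because macro averaging weights rare and common classes equally, it is a common choice when the positive class (here, suicidal intent) is much rarer than the negative class.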

Pages 609–620 · Published 2021-06-04 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8843106/pdf/nihms-1774843.pdf
Citations: 0
Well-Being Depends on Social Comparison: Hierarchical Models of Twitter Language Suggest That Richer Neighbors Make You Less Happy
Salvatore Giorgi, Sharath Chandra Guntuku, Johannes C Eichstaedt, Claire Pajot, H Andrew Schwartz, Lyle H Ungar

Psychological research has shown that subjective well-being is sensitive to social comparison effects; individuals report decreased happiness when their neighbors earn more than they do. In this work, we use Twitter language to estimate the well-being of users, and model both individual and neighborhood income using hierarchical modeling across counties in the United States (US). We show that language-based estimates from a sample of 5.8 million Twitter users replicate results obtained from large-scale well-being surveys - relatively richer neighbors leads to lower well-being, even when controlling for absolute income. Furthermore, predicting individual-level happiness using hierarchical models (i.e., individuals within their communities) out-predicts standard baselines. We also explore language associated with relative income differences and find that individuals with lower income than their community tend to swear (f*ck, sh*t, b*tch), express anger (pissed, bullsh*t, wtf), hesitation (don't, anymore, idk, confused) and acts of social deviance (weed, blunt, drunk). These results suggest that social comparison robustly affects reported well-being, and that Twitter language analyses can be used to both measure these effects and shed light on their underlying psychological dynamics.
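A minimal sketch of the social-comparison structure the abstract describes: each user's income is compared against their community's mean, and the predictor combines an absolute-income term with a neighborhood term. The coefficient signs follow the reported finding (richer neighbors lower predicted well-being at fixed own income), but `county_means`, `predicted_wellbeing`, and every coefficient value here are illustrative assumptions, not the paper's fitted hierarchical model:

```python
from collections import defaultdict

def county_means(users):
    """users: iterable of (county, income) pairs. Returns county -> mean income."""
    totals = defaultdict(lambda: [0.0, 0])
    for county, income in users:
        totals[county][0] += income
        totals[county][1] += 1
    return {c: s / n for c, (s, n) in totals.items()}

def predicted_wellbeing(income, county_mean, b0=5.0, b_abs=0.4, b_nbr=-0.6):
    """Toy two-level linear predictor: absolute income helps (b_abs > 0),
    richer neighbors hurt (b_nbr < 0) when own income is held fixed.
    Coefficients are illustrative, not estimated values."""
    return b0 + b_abs * income + b_nbr * county_mean
```

Holding `income` fixed while raising `county_mean` lowers the prediction, which is the social-comparison effect the study tests; a full analysis would fit such coefficients hierarchically across counties rather than fixing them by hand.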

DOI: 10.1609/icwsm.v15i1.18132 · Volume 15 · Pages 1069–1074 · Published 2021-01-01 · Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10099468/pdf/nihms-1854629.pdf
Citations: 4
Journal
Proceedings of the ... International AAAI Conference on Weblogs and Social Media
Copyright © 2023 Book学术 All rights reserved.
京公网安备 11010802042870号 京ICP备2023020795号-1