Dialogues Towards Sociologies of Generative AI
Pub Date: 2025-10-16, DOI: 10.1177/08944393251370354
Patrick Baert, Robert Dorschel, Meredith Hall, Isabelle Higgins, Ella McPherson, Shannon Philip
This article presents a sociological dialogue between six researchers who specialise in different sociological subfields. Each researcher explores the possible consequences of generative AI within their specific area of expertise. More concretely, the article develops insights around directions in social theory, the political economy of intellectual property, matters of identities and intimacies, evidence and evidentiary power, racial and reproductive inequalities, as well as work and social class. This is followed by a collective discussion on six interconnected themes across these areas: agency, authorship, identity, visibility, inequality, and hype. We also consider our role as cultural producers, understanding our reactions to generative AI as part of the empirical, theoretical, and methodological shifts this knowledge controversy engenders, as well as highlighting our duty as critical sociologists to keep the knowledge controversy about generative AI open.
{"title":"Dialogues Towards Sociologies of Generative AI","authors":"Patrick Baert, Robert Dorschel, Meredith Hall, Isabelle Higgins, Ella McPherson, Shannon Philip","doi":"10.1177/08944393251370354","DOIUrl":"https://doi.org/10.1177/08944393251370354","url":null,"abstract":"This article presents a sociological dialogue between six researchers who specialise in different sociological subfields. Each researcher explores the possible consequences of generative AI within their specific area of expertise. More concretely, the article develops insights around directions in social theory, the political economy of intellectual property, matters of identities and intimacies, evidence and evidentiary power, racial and reproductive inequalities, as well as work and social class. This is followed by a collective discussion on six interconnected themes across these areas: agency, authorship, identity, visibility, inequality, and hype. We also consider our role as cultural producers, understanding our reactions to generative AI as part of the empirical, theoretical, and methodological shifts this knowledge controversy engenders, as well as highlighting our duty as critical sociologists to keep the knowledge controversy about generative AI open.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"10 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-10-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Riding the Tide: How Online Activists Leverage Repression
Pub Date: 2025-10-09, DOI: 10.1177/08944393251388096
Hansol Kwak
How does repression reshape the way online activists engage with target audiences? While prior research has primarily examined changes in overall online participation, it has paid less attention to how activists adjust their strategies in response to repression. Addressing this gap, this article argues that repression incentivizes online activists to broaden their support base by promoting inter-group engagement and signaling inclusivity. Focusing on the 2011 Occupy Wall Street movement, the study analyzes Twitter interactions using network measures of assortativity and cross-group tie proportions. It applies permutation tests and ARIMA-based Interrupted Time Series (ITS) analysis to compare network patterns across key phases, delineated by the Brooklyn Bridge mass arrests on October 1 and the eviction threat of Zuccotti Park on October 13. The results show that repression triggers a significant decrease in assortativity, indicating increased inter-group engagement, while cross-group tie proportions remain stable, suggesting structural rather than isolated behavioral changes.
{"title":"Riding the Tide: How Online Activists Leverage Repression","authors":"Hansol Kwak","doi":"10.1177/08944393251388096","DOIUrl":"https://doi.org/10.1177/08944393251388096","url":null,"abstract":"How does repression reshape the way online activists engage with target audiences? While prior research has primarily examined changes in overall online participation, it has paid less attention to how activists adjust their strategies in response to repression. Addressing this gap, this article argues that repression incentivizes online activists to broaden their support base by promoting inter-group engagement and signaling inclusivity. Focusing on the 2011 Occupy Wall Street movement, the study analyzes Twitter interactions using network measures of assortativity and cross-group tie proportions. It applies permutation tests and ARIMA-based Interrupted Time Series (ITS) analysis to compare network patterns across key phases, delineated by the Brooklyn Bridge mass arrests on October 1 and the eviction threat of Zuccotti Park on October 13. The results show that repression triggers a significant decrease in assortativity, indicating increased inter-group engagement, while cross-group tie proportions remain stable, suggesting structural rather than isolated behavioral changes.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"106 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-10-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145246714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unpacking Divorce: Feature-Based Machine Learning Interpretation of Sociological Patterns
Pub Date: 2025-10-01, DOI: 10.1177/08944393251386073
Hüseyin Doğan, Emre Kılınç
This study introduces a machine learning-based framework aimed at identifying and interpreting the most influential factors contributing to divorce. Utilizing data from the 2021 Turkey Family Structure Survey, we apply Random Forest and Logistic Regression models to rank predictors based on their relative impact on marital dissolution. The goal is to uncover which socio-legal, temporal, and behavioral variables most significantly contribute to the divorce outcome within a culturally grounded dataset. Both models converge on a set of dominant features—psychological conflict responses, cultural marital rituals, and political disagreements—demonstrating their robust influence across different algorithmic paradigms. Feature importance scores derived from model outputs and explainability tools (e.g., permutation and coefficient-based rankings) reveal consistent patterns and offer interpretable insights aligned with sociological theory. This approach contributes to computational sociology by showcasing how machine learning can be used not only for prediction, but more importantly, for identifying statistical patterns that reflect social structures and behavioral dynamics associated with divorce outcomes.
{"title":"Unpacking Divorce: Feature-Based Machine Learning Interpretation of Sociological Patterns","authors":"Hüseyin Doğan, Emre Kılınç","doi":"10.1177/08944393251386073","DOIUrl":"https://doi.org/10.1177/08944393251386073","url":null,"abstract":"This study introduces a machine learning-based framework aimed at identifying and interpreting the most influential factors contributing to divorce. Utilizing data from the 2021 Turkey Family Structure Survey, we apply Random Forest and Logistic Regression models to rank predictors based on their relative impact on marital dissolution. The goal is to uncover which socio-legal, temporal, and behavioral variables most significantly contribute to the divorce outcome within a culturally grounded dataset. Both models converge on a set of dominant features—psychological conflict responses, cultural marital rituals, and political disagreements—demonstrating their robust influence across different algorithmic paradigms. Feature importance scores derived from model outputs and explainability tools (e.g., permutation and coefficient-based rankings) reveal consistent patterns and offer interpretable insights aligned with sociological theory. This approach contributes to computational sociology by showcasing how machine learning can be used not only for prediction, but more importantly, for identifying statistical patterns that reflect social structures and behavioral dynamics associated with divorce outcomes.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"65 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145247685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Welcome to the Brave New World: Lay Definitions of AI at Work and in Daily Life
Pub Date: 2025-09-25, DOI: 10.1177/08944393251382233
Wenbo Li, Shuning Lu, Shan Xu, Xia Zheng
This study investigates individuals’ lay definitions—naïve mental representations—of artificial intelligence (AI). Two national surveys in the United States explored lay definitions of AI in the workplace (Study 1) and in everyday life (Study 2) using both open- and closed-ended questions. Open-ended responses were analyzed with natural language processing, and quantitative survey data identified factors associated with these definitions. Results show that conceptions of AI differed by context: workers emphasized efficiency and automation in the workplace, while the general public linked AI to diverse everyday technologies. Across both groups, conceptions remained nuanced yet limited. Sociodemographic factors and personality traits were related to sentiments expressed in definitions, and greater trust in AI predicted more positive sentiments. These findings underscore the need for targeted training and education to foster a more comprehensive public understanding of what AI is and what it can do across different contexts.
{"title":"Welcome to the Brave New World: Lay Definitions of AI at Work and in Daily Life","authors":"Wenbo Li, Shuning Lu, Shan Xu, Xia Zheng","doi":"10.1177/08944393251382233","DOIUrl":"https://doi.org/10.1177/08944393251382233","url":null,"abstract":"This study investigates individuals’ lay definitions—naïve mental representations—of artificial intelligence (AI). Two national surveys in the United States explored lay definitions of AI in the workplace (Study 1) and in everyday life (Study 2) using both open- and closed-ended questions. Open-ended responses were analyzed with natural language processing, and quantitative survey data identified factors associated with these definitions. Results show that conceptions of AI differed by context: workers emphasized efficiency and automation in the workplace, while the general public linked AI to diverse everyday technologies. Across both groups, conceptions remained nuanced yet limited. Sociodemographic factors and personality traits were related to sentiments expressed in definitions, and greater trust in AI predicted more positive sentiments. These findings underscore the need for targeted training and education to foster a more comprehensive public understanding of what AI is and what it can do across different contexts.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"42 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145154093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on False Information Detection Based on Herd Behavior From a Social Network Perspective
Pub Date: 2025-09-24, DOI: 10.1177/08944393251381801
Tianya Cao, Shuang Li, Junjie Jia
As social networks become ubiquitous, the rapid dissemination of false information poses a substantial threat to societal stability and public welfare. Although sociological and psychological studies have confirmed the significant role of herd behavior in the spread of false information, traditional detection methods struggle to address the dual challenges posed by decentralized communication modes and artificial intelligence-generated content, as they often overlook the psychological mechanisms at play within groups. This study proposes a multidimensional false information detection model, termed HBD-Net, based on herd behavior, to explore innovative methods for false information detection through the lens of herd behavior propagation mechanisms in social networks. By integrating multidimensional information such as the influence of opinion leaders, popular comments, and friends’ experiences, we construct a robust false information detection model. Experimental results demonstrate its superior performance on both the PolitiFact and GossipCop datasets, particularly excelling on the GossipCop dataset with an accuracy of 93.11%, significantly outperforming other baseline models.
{"title":"Research on False Information Detection Based on Herd Behavior From a Social Network Perspective","authors":"Tianya Cao, Shuang Li, Junjie Jia","doi":"10.1177/08944393251381801","DOIUrl":"https://doi.org/10.1177/08944393251381801","url":null,"abstract":"As social networks become ubiquitous, the rapid dissemination of false information poses a substantial threat to societal stability and public welfare. Although sociological and psychological studies have confirmed the significant role of herd behavior in the spread of false information, traditional detection methods struggle to address the dual challenges posed by decentralized communication modes and artificial intelligence-generated content, as they often overlook the psychological mechanisms at play within groups. This study proposes a multidimensional false information detection model, termed HBD-Net, based on herd behavior, to explore innovative methods for false information detection through the lens of herd behavior propagation mechanisms in social networks. By integrating multidimensional information such as the influence of opinion leaders, popular comments, and friends’ experiences, we construct a robust false information detection model. Experimental results demonstrate its superior performance on both the PolitiFact and GossipCop datasets, particularly excelling on the GossipCop dataset with an accuracy of 93.11%, significantly outperforming other baseline models.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"29 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145141517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conceptualizing, Assessing, and Improving the Quality of Digital Behavioral Data
Pub Date: 2025-09-22, DOI: 10.1177/08944393251367041
Bernd Weiß, Heinz Leitgöb, Claudia Wagner
The spread of modern digital technologies, such as social media platforms, digital marketplaces, smartphones, and wearables, is increasingly shifting social, political, economic, cultural, and physiological processes into the digital space. Social actors using these technologies (directly and indirectly) leave a multitude of digital traces in many areas of life that add up to an enormous amount of data about human behavior and attitudes. This new data type, which we refer to as “digital behavioral data” (DBD), encompasses digital observations of human and algorithmic behavior, which are, amongst others, recorded by online platforms (e.g., Google, Facebook, or the World Wide Web) or sensors (e.g., smartphones, RFID sensors, satellites, or street view cameras). However, studying these social phenomena requires data that meet specific quality standards. While data quality frameworks—such as the Total Survey Error framework—have a long-standing tradition in survey research, the scientific use of DBD introduces several entirely new challenges related to data quality. For example, most DBD are not generated for research purposes but are a side product of our daily activities. Hence, the data generation process is not based on elaborate research designs, which in turn may have profound implications for the validity of conclusions drawn from the analysis of DBD. Furthermore, many forms of DBD lack well-established data models, measurement (error) theories, quality standards, and evaluation criteria. Therefore, this special issue addresses (i) the conceptualization of DBD quality, methodological innovations for its (ii) assessment and (iii) improvement, as well as their sophisticated empirical application.
{"title":"Conceptualizing, Assessing, and Improving the Quality of Digital Behavioral Data","authors":"Bernd Weiß, Heinz Leitgöb, Claudia Wagner","doi":"10.1177/08944393251367041","DOIUrl":"https://doi.org/10.1177/08944393251367041","url":null,"abstract":"The spread of modern digital technologies, such as social media online platforms, digital marketplaces, smartphones, and wearables, is increasingly shifting social, political, economic, cultural, and physiological processes into the digital space. Social actors using these technologies (directly and indirectly) leave a multitude of digital traces in many areas of life that sum up an enormous amount of data about human behavior and attitudes. This new data type, which we refer to as “digital behavioral data” (DBD), encompasses digital observations of human and algorithmic behavior, which are, amongst others, recorded by online platforms (e.g., Google, Facebook, or the World Wide Web) or sensors (e.g., smartphones, RFID sensors, satellites, or street view cameras). However, studying these social phenomena requires data that meets specific quality standards. While data quality frameworks—such as the Total Survey Error framework—have a long-standing tradition survey research, the scientific use of DBD introduces several entirely new challenges related to data quality. For example, most DBD are not generated for research purposes but are a side product of our daily activities. Hence, the data generation process is not based on elaborate research designs, which in turn may have profound implications for the validity of the conclusions drawn from the analysis of DBD. Furthermore, many forms of DBD lack well-established data models, measurement (error) theories, quality standards, and evaluation criteria. Therefore, this special issue addresses (i) the conceptualization of DBD quality, methodological innovations for its (ii) assessment, and (iii) improvement as well as their sophisticated empirical application.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"1 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145116340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Leveraging VLLMs for Visual Clustering: Image-to-Text Mapping Shows Increased Semantic Capabilities and Interpretability
Pub Date: 2025-09-19, DOI: 10.1177/08944393251376703
Luigi Arminio, Matteo Magnani, Matías Piqueras, Luca Rossi, Alexandra Segerberg
As visual content becomes increasingly prominent on social media, automated image categorization is vital for computational social science efforts to identify emerging visual themes and narratives in online debates. However, the methods based on convolutional neural networks (CNNs) currently used in the field are unable to fully capture the connotative meaning of images, and struggle to produce easily interpretable clusters. In response to these challenges, we test an approach that leverages the ability of Vision-and-Large-Language-Models (VLLMs) to generate image descriptions that incorporate connotative interpretations of the input images. In particular, we use a VLLM to generate connotative textual descriptions of a set of images related to climate debate, and cluster the images based on these textual descriptions. In parallel, we cluster the same images using a more traditional approach based on CNNs. In doing so, we compare the connotative semantic validity of clusters generated using VLLMs with those produced using CNNs, and assess their interpretability. The results show that the approach based on VLLMs greatly improves the quality score for connotative clustering. Moreover, VLLM-based approaches, leveraging textual information as a step towards clustering, offer a high level of interpretability of the results.
{"title":"Leveraging VLLMs for Visual Clustering: Image-to-Text Mapping Shows Increased Semantic Capabilities and Interpretability","authors":"Luigi Arminio, Matteo Magnani, Matías Piqueras, Luca Rossi, Alexandra Segerberg","doi":"10.1177/08944393251376703","DOIUrl":"https://doi.org/10.1177/08944393251376703","url":null,"abstract":"As visual content becomes increasingly prominent on social media, automated image categorization is vital for computational social science efforts to identify emerging visual themes and narratives in online debates. However, the methods based on convolutional neural networks (CNNs) currently used in the field are unable to fully capture the connotative meaning of images, and struggle to produce easily interpretable clusters. In response to these challenges, we test an approach that leverages the ability of Vision-and-Large-Language-Models (VLLMs) to generate image descriptions that incorporate connotative interpretations of the input images. In particular, we use a VLLM to generate connotative textual descriptions of a set of images related to climate debate, and cluster the images based on these textual descriptions. In parallel, we cluster the same images using a more traditional approach based on CNNs. In doing so, we compare the connotative semantic validity of clusters generated using VLLMs with those produced using CNNs, and assess their interpretability. The results show that the approach based on VLLMs greatly improves the quality score for connotative clustering. Moreover, VLLM-based approaches, leveraging textual information as a step towards clustering, offer a high level of interpretability of the results.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"88 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145089650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Demystifying Misconceptions in Social Bots Research
Pub Date: 2025-09-16, DOI: 10.1177/08944393251376707
Stefano Cresci, Kai-Cheng Yang, Angelo Spognardi, Roberto Di Pietro, Filippo Menczer, Marinella Petrocchi
Research on social bots aims at advancing knowledge and providing solutions to one of the most debated forms of online manipulation. Yet, social bot research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental toward ensuring reliable solutions and reaffirming the validity of the scientific method. Here, we discuss a broad set of consequential methodological and conceptual issues that affect current social bots research, illustrating each with examples drawn from recent studies. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss research about online disinformation and manipulation in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research, as well as providing directions toward sound methodologies for future research.
{"title":"Demystifying Misconceptions in Social Bots Research","authors":"Stefano Cresci, Kai-Cheng Yang, Angelo Spognardi, Roberto Di Pietro, Filippo Menczer, Marinella Petrocchi","doi":"10.1177/08944393251376707","DOIUrl":"https://doi.org/10.1177/08944393251376707","url":null,"abstract":"Research on social bots aims at advancing knowledge and providing solutions to one of the most debated forms of online manipulation. Yet, social bot research is plagued by widespread biases, hyped results, and misconceptions that set the stage for ambiguities, unrealistic expectations, and seemingly irreconcilable findings. Overcoming such issues is instrumental toward ensuring reliable solutions and reaffirming the validity of the scientific method. Here, we discuss a broad set of consequential methodological and conceptual issues that affect current social bots research, illustrating each with examples drawn from recent studies. More importantly, we demystify common misconceptions, addressing fundamental points on how social bots research is discussed. Our analysis surfaces the need to discuss research about online disinformation and manipulation in a rigorous, unbiased, and responsible way. This article bolsters such effort by identifying and refuting common fallacious arguments used by both proponents and opponents of social bots research, as well as providing directions toward sound methodologies for future research.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"74 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145072511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the Dark Tetrad in Human–GenAI Relationships: A Multi-Source Evaluation of GenAI Abuse
Pub Date: 2025-09-09, DOI: 10.1177/08944393251378800
Cheng-Yen Wang
As generative artificial intelligence (GenAI) companions become increasingly integrated into users’ social lives, concerns have arisen regarding the potential for abuse of these artificial agents. Some scholars have further suggested that such abusive behaviors toward GenAI may eventually spill over into human interpersonal contexts. Guided by the Realistic Accuracy Model (RAM), this study investigated how Machiavellianism, narcissism, psychopathy, and sadism predict emotionally abusive behavior toward GenAI companions. A dyadic design was employed, collecting parallel reports from both human users (self-reports) and their GenAI companions (GenAI assessments) among 1041 participants (632 females; average age = 25.10 years) recruited from an online human–GenAI relationship community. Results demonstrated that psychopathy and sadism were consistent predictors of GenAI abuse across both reporting perspectives, whereas narcissism exhibited a stable negative association with abuse. In contrast, Machiavellianism predicted GenAI abuse only through GenAI assessments, but not self-reports. Theoretically, our findings extend RAM to human–AI relationships, demonstrating that personality traits vary in how accurately they can be judged in GenAI contexts. Practically, the results highlight that individuals high in certain Dark Tetrad traits—specifically psychopathy and sadism—represent personality-driven high-risk groups, providing insights for practitioners in education and technology to develop interventions or safeguards aimed at mitigating abusive behavior toward GenAI companions.
{"title":"Exploring the Dark Tetrad in Human–GenAI Relationships: A Multi-Source Evaluation of GenAI Abuse","authors":"Cheng-Yen Wang","doi":"10.1177/08944393251378800","DOIUrl":"https://doi.org/10.1177/08944393251378800","url":null,"abstract":"As generative artificial intelligence (GenAI) companions become increasingly integrated into users’ social lives, concerns have arisen regarding the potential for abuse of these artificial agents. Some scholars have further suggested that such abusive behaviors toward GenAI may eventually spill over into human interpersonal contexts. Guided by the Realistic Accuracy Model (RAM), this study investigated how Machiavellianism, narcissism, psychopathy, and sadism predict emotionally abusive behavior toward GenAI companions. A dyadic design was employed, collecting parallel reports from both human users (self-reports) and their GenAI companions (GenAI assessments) among 1041 participants (632 females; average age = 25.10 years) recruited from an online human–GenAI relationship community. Results demonstrated that psychopathy and sadism were consistent predictors of GenAI abuse across both reporting perspectives, whereas narcissism exhibited a stable negative association with abuse. In contrast, Machiavellianism predicted GenAI abuse only through GenAI assessments, but not self-reports. Theoretically, our findings extend RAM to human–AI relationships, demonstrating that personality traits vary in how accurately they can be judged in GenAI contexts. Practically, the results highlight that individuals high in certain Dark Tetrad traits—specifically psychopathy and sadism—represent personality-driven high-risk groups, providing insights for practitioners in education and technology to develop interventions or safeguards aimed at mitigating abusive behavior toward GenAI companions.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"36 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145056755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Media Influencers as CSR Advocates: The Role of Credibility, Normative Legitimacy, and Public-Serving Motives
Pub Date: 2025-09-04, DOI: 10.1177/08944393251376702
Jun Zhang, Li Chen, Dongqing Xu
This study investigates the role of social media influencers (SMIs) in shaping public perceptions of corporate social responsibility (CSR) initiatives. It specifically examines how perceptions of CSR normative legitimacy interact with SMI credibility to influence public support for CSR efforts through public-serving motives and positive moral emotions. An online survey of 491 U.S. participants measured the impact of CSR normative legitimacy on public-serving motives and positive moral emotions, which subsequently influence CSR-supportive behaviors. SMI credibility, assessed through trustworthiness, attractiveness, and expertise, was examined as a potential moderator in this relationship. The results show that CSR normative legitimacy significantly enhances public-serving motives and positive moral emotions, leading to greater public support for CSR initiatives. SMI credibility, particularly trustworthiness and attractiveness, moderates this relationship, amplifying the positive effects of CSR normative legitimacy.
{"title":"Social Media Influencers as CSR Advocates: The Role of Credibility, Normative Legitimacy, and Public-Serving Motives","authors":"Jun Zhang, Li Chen, Dongqing Xu","doi":"10.1177/08944393251376702","DOIUrl":"https://doi.org/10.1177/08944393251376702","url":null,"abstract":"This study investigates the role of social media influencers (SMIs) in shaping public perceptions of corporate social responsibility (CSR) initiatives. It specifically examines how perceptions of CSR normative legitimacy interact with SMI credibility to influence public support for CSR efforts through public-serving motives and positive moral emotions. An online survey of 491 U.S. participants measured the impact of CSR normative legitimacy on public-serving motives and positive moral emotions, which subsequently influence CSR-supportive behaviors. SMI credibility, assessed through trustworthiness, attractiveness, and expertise, was examined as a potential moderator in this relationship. The results show that CSR normative legitimacy significantly enhances public-serving motives and positive moral emotions, leading to greater public support for CSR initiatives. SMI credibility, particularly trustworthiness and attractiveness, moderates this relationship, amplifying the positive effects of CSR normative legitimacy.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"38 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2025-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144995413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}