They’re Coming for You! How Perceptions of Automation Affect Public Support for Universal Basic Income
Pub Date: 2023-11-08 | DOI: 10.1177/08944393231212252
Kathryn Haglin, Soren Jordan, Grant Ferguson
Media stories on the economy tout automation as one of the biggest contemporary technological changes in America and argue that many Americans may lose their jobs because of it. Politicians and financial elites often promote a policy of Universal Basic Income (UBI) as a solution to the potential unemployment caused by automation, suggesting Americans should support UBI to protect themselves from this technological disruption. This linkage and the basic descriptive findings behind it are largely untested: we don’t know much about whether Americans support UBI, see automation as a threat to their jobs, or connect the two in any meaningful way. Using a Mechanical Turk survey of 3600 respondents, we examine the relationship between Americans’ perception of how much automation threatens their jobs, how much automation actually threatens their jobs, and their support for UBI. Our results indicate that while the public does not view automation as the same threat that elites do, Americans who believe their jobs will be automated are more likely to support UBI. These relationships, however, vary considerably by political party.
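The abstract does not reproduce the authors’ estimation code; as a hedged illustration of the kind of analysis it describes — support for UBI modelled as a function of perceived automation threat, with a party interaction — a minimal sketch on simulated data might look like the following. All variable names, codings, and data are invented for the example.

```python
# Illustrative sketch only: simulated survey data, not the authors' MTurk sample.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 3600
df = pd.DataFrame({
    # 1-5 perceived likelihood that one's own job will be automated (hypothetical scale)
    "automation_threat": rng.integers(1, 6, n),
    # party identification (hypothetical coding)
    "party": rng.choice(["Democrat", "Republican", "Independent"], n),
})
# Simulate a binary UBI-support outcome that depends on threat and party.
logit = -1.0 + 0.5 * df["automation_threat"] - 0.8 * (df["party"] == "Republican")
df["supports_ubi"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Logistic regression with a threat x party interaction, mirroring the idea
# that the threat-UBI relationship varies by political party.
model = smf.logit("supports_ubi ~ automation_threat * C(party)", data=df).fit()
print(model.summary())
```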
Media Representations of Healthcare Robotics in Norway 2000-2020: A Topic Modeling Approach
Pub Date: 2023-11-07 | DOI: 10.1177/08944393231212251
Mads Solberg, Ralf Kirchhoff
Robots are projected to affect healthcare services in significant, but unpredictable, ways. Many believe robots will add value to future healthcare, but their arrival has triggered controversy. Debates revolve around how robotics will impact healthcare provision, their effects on the future of labor and caregiver–patient relationships, and ethical dilemmas associated with autonomous machines. This study investigates media representations of healthcare robotics in Norway over a twenty-year period, using a mixed-methods design. Media representations affect public opinion in multiple ways. By assembling and presenting information through stories, they not only set the agenda by broadcasting values, experiences, and expectations about new technologies, but also frame and prime specific understandings of issues. First, we employ an inductive text-mining approach known as “topic modeling,” a computational method for eliciting abstract semantic structures from large text corpora. Using Non-Negative Matrix Factorization, we implement a topic model of manifest content from 752 articles, published in Norwegian print media between 1.1.2000 and 2.10.2020, sampled from a comprehensive database for news media (Atekst, Retriever). We complement this computational lens with a more fine-grained, qualitative analysis of content in exemplary texts sampled from each topic. Here, we identify prominent “frames,” discursive cues for interpreting how various stakeholders talk about healthcare robotics as a contested domain of policy and practice in a comprehensive welfare state. We also highlight some benefits of this approach for analyzing media discourse and stakeholder perspectives on controversial technologies.
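The abstract names the computational method (Non-Negative Matrix Factorization for topic modelling); the sketch below is only a generic illustration of that technique with scikit-learn, run on placeholder documents rather than the Atekst corpus. The TF-IDF weighting and parameter values are assumptions for the example, not details taken from the paper.

```python
# Minimal NMF topic-modelling sketch; the documents below stand in for the
# 752 Norwegian news articles, which are not reproduced here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

docs = [
    "robot assists nurses with lifting patients in hospital ward",
    "welfare state debates funding for care robots in municipalities",
    "ethical concerns about autonomous machines replacing caregivers",
    "pilot project tests feeding robot in nursing home",
]

# TF-IDF representation of the corpus (a Norwegian stop-word list would be
# used on the real data; English is used here purely for illustration).
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Factorise into a small number of topics; k would be tuned on the real corpus.
nmf = NMF(n_components=2, init="nndsvda", random_state=0)
doc_topics = nmf.fit_transform(X)

terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(nmf.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top)}")
```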
Measuring Hate: Does a Definition Affect Self-Reported Levels of Perpetration and Exposure to Online Hate in Surveys?
Pub Date: 2023-11-01 | DOI: 10.1177/08944393231211270
James Hawdon, Ashley Reichelmann, Matthew Costello, Vicente J. Llorent, Pekka Räsänen, Izabela Zych, Atte Oksanen, Catherine Blaya
The purpose of this research is to test the validity of commonly used measures of exposure to and production of online extremism. Specifically, we investigate whether a definition of hate influences survey responses about the production of and exposure to online hate. To explore the effects of a definition, we used a split experimental design on a sample of 18- to 25-year-old Americans in which half of the respondents were exposed to the European Union’s definition of hate speech and the other half were not. Then, all respondents completed a survey with commonly used items measuring exposure to and perpetration of online hate. The results reveal that providing a definition affects self-reported levels of exposure and perpetration, but the effects are dependent on race. The findings provide evidence that survey responses about online hate may be conditioned by social desirability and framing biases. The finding that groups differ in how they interpret questions about hate when no definition is provided means we must be careful when using measures that try to capture exposure to and production of hate. While more research is needed, we recommend providing a clear, unambiguous definition when using surveys to measure online hate.
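As a hedged illustration of analysing such a split design — comparing self-reported exposure between the definition and no-definition conditions, overall and within groups — a minimal sketch on simulated data could look like this. The variables, effect sizes, and grouping are invented for the example and are not the study’s materials.

```python
# Illustrative analysis of a split design: half the sample sees a definition
# of hate speech before answering. Data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 1000
definition_shown = rng.integers(0, 2, n)           # 0 = control, 1 = definition shown
group = rng.choice(["white", "non_white"], n)      # hypothetical grouping variable
# Simulated count of hate-exposure items endorsed, with the definition lowering
# reports slightly and the effect differing by group.
exposure = rng.poisson(3 - 0.5 * definition_shown
                       - 0.3 * definition_shown * (group == "non_white"))

# Overall effect of showing the definition on self-reported exposure.
t, p = stats.ttest_ind(exposure[definition_shown == 1], exposure[definition_shown == 0])
print(f"overall: t={t:.2f}, p={p:.3f}")

# Conditional effects, since the paper reports that effects depend on race.
for g in ["white", "non_white"]:
    mask = group == g
    t, p = stats.ttest_ind(exposure[mask & (definition_shown == 1)],
                           exposure[mask & (definition_shown == 0)])
    print(f"{g}: t={t:.2f}, p={p:.3f}")
```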
A comprehensive analysis of stroke admissions at a rural Nigerian tertiary health facility: Insights from a single-center study
Cyril Oshomah Erameh, Airenakho Emorinken, Blessyn Omoye Akpasubi
Pub Date: 2023-10-01 | DOI: 10.25259/JNRP_76_2023
Objectives: This research intended to examine the demographic and clinical attributes of stroke admissions in a rural Nigerian hospital.
Materials and methods: A retrospective analysis of stroke admissions was conducted over 1 year. All necessary data were obtained from patients' records and SPSS was employed for data analysis. P < 0.05 was deemed significant.
Results: There were 52 stroke cases, accounting for 5.9% of medical admissions. The patients' mean age was 62.81 ± 12.71 years, while females constituted 51.9% of cases. Common risk factors included hypertension (76.9%), hyperlipidemia (38.5%), alcohol (26.9%), and diabetes mellitus (26.9%). Clinical manifestations included hemiparesis/plegia (84.6%), altered consciousness (63.5%), slurred speech (61.5%), cranial nerve deficit (61.5%), aphasia (42.3%), and headache (34.6%). Ischemic stroke (71.2%) predominated over hemorrhagic stroke (28.8%). The average hospitalization duration was 17.62 ± 8.91 days, and the mean onset to arrival time was 121.31 ± 136.06 h. Discharge and mortality rates were 82.7% and 13.5%, respectively. The association between stroke subtypes and mortality was significant (P = 0.001).
Conclusion: Stroke constitutes a significant portion of medical admissions in Nigeria, with ischemic stroke being more prevalent. High mortality rates underscore the urgent need to manage risk factors to prevent stroke.
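The abstract reports the subtype–mortality association only as a P-value, without the underlying contingency table; the sketch below shows the general form of such a test on hypothetical counts that merely echo the reported totals (52 cases, 71.2% ischemic, 13.5% mortality), not the study’s actual data.

```python
# Chi-square test of association between stroke subtype and in-hospital outcome.
# The cell counts below are hypothetical placeholders, NOT the study's data;
# only the reported margins guided their rough shape.
from scipy.stats import chi2_contingency

#                survived  died
table = [[35, 2],   # ischemic    (37 of 52 cases)
         [10, 5]]   # hemorrhagic (15 of 52 cases)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
```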
Noteworthy Disparities With Four CAQDAS Tools: Explorations in Organising Live Twitter Data
Pub Date: 2023-09-26 | DOI: 10.1177/08944393231204163
Travis Noakes, Patricia Harpur, Corrie Uys
Qualitative data analysis software (QDAS) packages that support live data extraction are a relatively recent innovation. Little has been written concerning the research implications of differences in such QDAS packages’ functionalities, and how such disparities might contribute to contrasting analytical opportunities. Consequently, early-stage researchers may experience difficulties in choosing an apt QDAS for Twitter analysis. In response to both methodological gaps, this paper presents a software comparison across the four QDAS tools that support live Twitter data imports, namely, ATLAS.ti™, NVivo™, MAXQDA™ and QDA Miner™. The authors’ QDAS features checklist for these tools spotlights many differences in their functionalities. These disparities were tested through data imports and thematic coding derived from the same queries and codebook. The authors’ resultant QDAS experiences were compared during the first activity of a broad qualitative analysis process, ‘organising data’. Notwithstanding large differences in QDAS pricing, it was surprising how much the tools varied in aspects of qualitative research organisation. Notably, the quantum of data extracted for the same query differed, largely due to contrasts in the types and amount of data that the four QDAS could extract. Variations in how each supported visual organisation also shaped researchers’ opportunities for becoming familiar with Twitter users and their tweet content. Such disparities suggest that choosing a suitable QDAS for organising live Twitter data must dovetail with a researcher’s focus: ATLAS.ti accommodates scholars focused on wrangling unstructured data for personal meaning-making, while MAXQDA suits the mixed-methods researcher. QDA Miner’s easy-to-learn user interface suits a highly efficient implementation of methods, whilst NVivo supports relatively rapid analysis of tweet content. Such findings may help guide Twitter social science researchers and others in QDAS tool selection. Future research can explore disparities in other qualitative research phases, or contrast data extraction routes for a variety of microblogging services.
The Influence of Political Fit, Issue Fit, and Targeted Political Advertising Disclosures on Persuasion Knowledge, Party Evaluation, and Chilling Effects
Pub Date: 2023-09-12 | DOI: 10.1177/08944393231193731
Melanie Hirsch, Alice Binder, Jörg Matthes
The availability of online data has altered the role of social media. By offering targeted online advertising, that is, persuasive messages tailored to user groups, political parties profit from large data profiles to send fine-grained advertising appeals to susceptible voters. This between-subject experiment (N = 421) investigates the influence of targeted political advertising disclosures (targeting vs. no-targeting disclosure), political fit (high vs. low), and issue fit (high vs. low) on recipients’ party evaluation and chilling effect intentions. The mediating roles of targeting knowledge (TK) and perceived manipulative intent (PMI), two dimensions of persuasion knowledge, are investigated. The findings show that disclosing a targeting strategy and a high political fit activated individuals’ TK, that is, their recognition that their data had been used to show the ads, which then increased the evaluation of the political party and individuals’ intentions to engage in future chilling effect behaviors. High political fit decreased individuals’ reflections about the appropriateness of the targeted political ads (i.e., PMI), which then increased party evaluation. Issue fit did not affect individuals’ persuasion knowledge.
Cheap, Quick, and Rigorous: Artificial Intelligence and the Systematic Literature Review
Pub Date: 2023-08-26 | DOI: 10.1177/08944393231196281
Cameron F. Atkinson
The systematic literature review (SLR) is the gold standard in providing research with a firm evidence foundation to support decision-making. Researchers seeking to increase the rigour, transparency, and replicability of their SLRs are provided a range of guidelines towards these ends. Artificial Intelligence (AI) and Machine Learning Techniques (MLTs) developed with computer programming languages can provide methods to increase the speed, rigour, transparency, and repeatability of SLRs. Aimed at researchers with coding experience who want to utilise AI and MLTs to synthesise and abstract data obtained through an SLR, this article sets out how computer languages can be used to facilitate unsupervised machine learning for synthesising and abstracting data sets extracted during an SLR. Utilising an already known qualitative method, Deductive Qualitative Analysis, this article illustrates the supportive role that AI and MLTs can play in the coding and categorisation of extracted SLR data, and in synthesising SLR data. Using a data set extracted during an SLR as a proof of concept, this article includes the code used to create a well-established MLT, Topic Modelling using Latent Dirichlet Allocation. This technique provides a working example of how researchers can use AI and MLTs to automate the data synthesis and abstraction stage of their SLR, and aid in increasing the speed, frugality, and rigour of research projects.
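The article states that it includes the code for its topic model; that code is not reproduced in this listing, so the following is only a generic sketch of the named technique (LDA, here via scikit-learn) applied to placeholder abstracts rather than the article’s SLR corpus. All documents and parameter values are assumptions for the example.

```python
# Generic LDA sketch for grouping documents extracted during an SLR.
# The documents and topic count are placeholders, not the article's corpus or code.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "machine learning screening of titles and abstracts for review inclusion",
    "qualitative synthesis of governance frameworks for public sector AI",
    "automated data extraction pipelines for evidence mapping",
    "deductive coding of policy documents against an a priori framework",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)

# Assign each document its dominant topic, as a first pass at categorisation
# that a reviewer would then check against the SLR's codebook.
for doc, dist in zip(abstracts, doc_topic):
    print(f"topic {dist.argmax()}: {doc[:50]}...")
```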
Novelty in News Search: A Longitudinal Study of the 2020 US Elections
Pub Date: 2023-08-14 | DOI: 10.1177/08944393231195471
Roberto Ulloa, Mykola Makhortykh, Aleksandra Urman, Juhi Kulshrestha
The 2020 US elections news coverage was extensive, with new pieces of information generated rapidly. This evolving scenario presented an opportunity to study the performance of search engines in a context in which they had to quickly process information as it was published. We analyze novelty, a measurement of new items that emerge in the top news search results, to compare the coverage and visibility of different topics. Using virtual agents that simulate human web browsing behavior to collect search engine result pages, we conduct a longitudinal study of news results of five search engines collected in short bursts (every 21 minutes) from two regions (Oregon, US and Frankfurt, Germany), starting on election day and lasting until one day after the announcement of Biden as the winner. We find more new items emerging for election-related queries (“joe biden,” “donald trump,” and “us elections”) than for topical (e.g., “coronavirus”) or stable (e.g., “holocaust”) queries. We demonstrate that our method captures sudden changes in highly covered news topics as well as multiple differences across search engines and regions over time. We highlight novelty imbalances between candidate queries which affect their visibility during electoral periods, and conclude that, when it comes to news, search engines are responsible for such imbalances, either due to their algorithms or the set of news sources that they rely on.
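The abstract does not spell out the exact operationalisation of novelty; a minimal sketch of the underlying idea — the share of top news results that did not appear in the previous snapshot of the same query — might look like this. The function name, field names, and example URLs are illustrative, not taken from the paper.

```python
# Illustrative novelty computation: the share of items in the current top news
# results that did not appear in the previous snapshot of the same query.
from typing import List

def novelty(previous: List[str], current: List[str]) -> float:
    """Fraction of current result URLs that are new relative to the last snapshot."""
    if not current:
        return 0.0
    seen = set(previous)
    new_items = [url for url in current if url not in seen]
    return len(new_items) / len(current)

# Two consecutive 21-minute snapshots for a hypothetical "joe biden" query.
snapshot_t0 = ["nyt.com/a", "cnn.com/b", "wapo.com/c"]
snapshot_t1 = ["nyt.com/a", "cnn.com/d", "foxnews.com/e"]

print(novelty(snapshot_t0, snapshot_t1))  # 2 of 3 items are new -> 0.667
```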
The Impact of Economic Degradation on the Uí Bhriain Civil War (1276–1318): An Agent-Based Modeling Approach
Pub Date: 2023-08-12 | DOI: 10.1177/08944393231194983
Vinicius Marino Carvalho
Between 1276 and 1318, English magnates unsuccessfully attempted to establish a lordship in the Irish kingdom of Thomond, southwestern Ireland, by exploiting a dynastic feud dividing the then-ruling lineage, the Uí Bhriain. The conflict coincided with a series of extreme events that beset western Europe in the late 13th and early 14th centuries, such as the beginning of the Little Ice Age and the Great European Famine of 1315–1322. The goal of this work was to evaluate the extent to which economic degradation at the turn of the 14th century affected the outcome of the war. The hypothesis that such degradation affected the war’s outcome was tested using agent-based modeling, which involved the virtual reconstruction of Late Medieval Thomond to study past conditions by proxy. This article describes the historical research carried out to elaborate the conceptual model, the implementation of the model as a computer simulation, and the experiments carried out to virtually explore the Uí Bhriain Civil War. A quantitative analysis of the experimental results revealed some correlation between late 13th century economic degradation and the fortunes of belligerent factions in the wars of 1276–1318, although the effect was not sufficiently strong to have been a crucial factor in the outcome of the conflict.
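The article’s model is not reproduced here; the toy skeleton below only illustrates the general shape of such an experiment — sweeping an economic-degradation parameter and comparing faction outcomes across replications — with every name, parameter, and dynamic invented for illustration.

```python
# Toy agent-based sketch: two factions accumulate and lose resources each year;
# a "degradation" factor scales annual economic decline. Everything here is a
# hypothetical stand-in, not the article's Thomond model.
import random

def run(degradation: float, years: int = 40, seed: int = 0) -> str:
    rng = random.Random(seed)
    resources = {"faction_a": 100.0, "faction_b": 100.0}
    for _ in range(years):
        for name in resources:
            resources[name] *= (1.0 - degradation)       # economic decline
            resources[name] += rng.uniform(0.0, 10.0)    # stochastic harvests/raids
    return max(resources, key=resources.get)             # nominal "winner"

# Sweep the degradation parameter and tally outcomes across replications,
# mirroring the idea of correlating degradation with faction fortunes.
for degradation in (0.00, 0.05, 0.10):
    wins = [run(degradation, seed=s) for s in range(100)]
    print(degradation, {f: wins.count(f) for f in set(wins)})
```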
Cyberbullying and Traditional Bullying Victimization, Depressive Symptoms, and Suicidal Ideation Among Chinese Early Adolescents: Cognitive Reappraisal and Emotion Invalidation as Moderators
Pub Date: 2023-08-02 | DOI: 10.1177/08944393231192237
Jianhua Zhou, Haiyan Zhao, Yan Zou
This study examined how depressive symptoms play mediating roles between cyberbullying and traditional bullying victimization and suicidal ideation and the moderating roles of cognitive reappraisal and emotion invalidation. A total of 1,823 Chinese adolescents (mean age = 11.20, SD = 1.21, 47.8% girls) participated in this study. Results showed that cyberbullying victimization was more strongly related to suicidal ideation than traditional bullying victimization. Depressive symptoms played mediating roles between cyberbullying and traditional bullying victimization and suicidal ideation. Cognitive reappraisal mitigated the effects of cyberbullying and traditional bullying victimization on depressive symptoms, and perceived emotion invalidation strengthened the effect of depressive symptoms on suicidal ideation. Results further showed that the mediating effect of depressive symptoms was more prominent when there were low levels of cognitive reappraisal and more perceived emotion invalidation. Promoting youths’ cognitive reappraisal and providing validating responses to their depressive symptoms could mitigate the destructive effects of bullying victimization on suicidal ideation.
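As a hedged illustration of the moderated-mediation structure the abstract describes (depressive symptoms as mediator, cognitive reappraisal moderating the first path and emotion invalidation the second), a minimal sketch on simulated data might look like this; it is not the authors’ analysis, and all variable names and effect sizes are invented.

```python
# Sketch of a moderated-mediation setup fitted to simulated data
# (not the study's sample, measures, or code).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1823
d = pd.DataFrame({
    "cyber_victim": rng.normal(size=n),    # cyberbullying victimization
    "reappraisal": rng.normal(size=n),     # cognitive reappraisal (a-path moderator)
    "invalidation": rng.normal(size=n),    # emotion invalidation (b-path moderator)
})
d["depress"] = 0.4 * d.cyber_victim - 0.2 * d.cyber_victim * d.reappraisal + rng.normal(size=n)
d["suicidal"] = 0.5 * d.depress + 0.2 * d.depress * d.invalidation + rng.normal(size=n)

# a-path: victimization -> depressive symptoms, moderated by reappraisal.
a_model = smf.ols("depress ~ cyber_victim * reappraisal", data=d).fit()
# b-path: depressive symptoms -> suicidal ideation, moderated by invalidation.
b_model = smf.ols("suicidal ~ depress * invalidation + cyber_victim", data=d).fit()

# Conditional indirect effects at +/- 1 SD of each moderator.
for cr in (-1, 1):
    for inv in (-1, 1):
        a = a_model.params["cyber_victim"] + cr * a_model.params["cyber_victim:reappraisal"]
        b = b_model.params["depress"] + inv * b_model.params["depress:invalidation"]
        print(f"reappraisal={cr:+d} SD, invalidation={inv:+d} SD: indirect effect = {a * b:.3f}")
```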