Assessing Data Quality in the Age of Digital Social Research: A Systematic Review
Pub Date: 2024-04-27 | DOI: 10.1177/08944393241245395
Jessica Daikeler, Leon Fröhling, Indira Sen, Lukas Birkenmaier, Tobias Gummer, Jan Schwalbach, Henning Silber, Bernd Weiß, Katrin Weller, Clemens Lechner
While survey data have long been the focus of quantitative social science analyses, observational and content data, although long established, are gaining renewed attention, especially when such data are obtained by and for observing digital content and behavior. Today, digital technologies allow social scientists to track “everyday behavior” and to extract opinions from public discussions on online platforms. These new types of digital traces of human behavior, together with computational methods for analyzing them, have opened new avenues for analyzing, understanding, and addressing social science research questions. However, even the most innovative and extensive data are hollow if they are not of high quality. But what does data quality mean for modern social science data? To investigate this rather abstract question, the present study pursues four objectives. First, we provide researchers with a decision tree to identify appropriate data quality frameworks for a given use case. Second, we determine which data types and quality dimensions are already addressed in existing frameworks. Third, we identify gaps within the existing frameworks, with respect to different data types and data quality dimensions, that need to be filled. And fourth, we provide a detailed literature overview of the intrinsic and extrinsic perspectives on data quality. By conducting a systematic literature review based on text mining methods, we identified and reviewed 58 data quality frameworks. In our decision tree, three categories help researchers find an appropriate data quality framework: the data type it covers, the perspective it takes, and its level of granularity. Furthermore, we discovered gaps in the available frameworks with respect to visual and especially linked data, and our review points out that even well-known frameworks may miss important aspects. The article ends with a critical discussion of the current state of the literature and potential future research avenues.
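As a rough illustration of how such a decision tree can be operationalized, the Python sketch below filters a small framework catalogue on the three categories the abstract names. The catalogue entries and field names are invented placeholders, not the 58 frameworks the authors reviewed.

```python
# Minimal sketch of the three-category lookup a decision tree implies.
# CATALOGUE entries are hypothetical placeholders, not the reviewed frameworks.

CATALOGUE = [
    {"name": "Framework A", "data_types": {"survey", "text"},
     "perspective": "intrinsic", "granularity": "fine"},
    {"name": "Framework B", "data_types": {"visual", "linked"},
     "perspective": "extrinsic", "granularity": "coarse"},
]

def find_frameworks(data_type, perspective, granularity):
    """Filter the catalogue on data type, perspective, and granularity."""
    return [f["name"] for f in CATALOGUE
            if data_type in f["data_types"]
            and f["perspective"] == perspective
            and f["granularity"] == granularity]

print(find_frameworks("text", "intrinsic", "fine"))  # -> ['Framework A']
```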
{"title":"Assessing Data Quality in the Age of Digital Social Research: A Systematic Review","authors":"Jessica Daikeler, Leon Fröhling, Indira Sen, Lukas Birkenmaier, Tobias Gummer, Jan Schwalbach, Henning Silber, Bernd Weiß, Katrin Weller, Clemens Lechner","doi":"10.1177/08944393241245395","DOIUrl":"https://doi.org/10.1177/08944393241245395","url":null,"abstract":"While survey data has long been the focus of quantitative social science analyses, observational and content data, although long-established, are gaining renewed attention; especially when this type of data is obtained by and for observing digital content and behavior. Today, digital technologies allow social scientists to track “everyday behavior” and to extract opinions from public discussions on online platforms. These new types of digital traces of human behavior, together with computational methods for analyzing them, have opened new avenues for analyzing, understanding, and addressing social science research questions. However, even the most innovative and extensive amounts of data are hollow if they are not of high quality. But what does data quality mean for modern social science data? To investigate this rather abstract question the present study focuses on four objectives. First, we provide researchers with a decision tree to identify appropriate data quality frameworks for a given use case. Second, we determine which data types and quality dimensions are already addressed in the existing frameworks. Third, we identify gaps with respect to different data types and data quality dimensions within the existing frameworks which need to be filled. And fourth, we provide a detailed literature overview for the intrinsic and extrinsic perspectives on data quality. By conducting a systematic literature review based on text mining methods, we identified and reviewed 58 data quality frameworks. In our decision tree, the three categories, namely, data type, the perspective it takes, and its level of granularity, help researchers to find appropriate data quality frameworks. We, furthermore, discovered gaps in the available frameworks with respect to visual and especially linked data and point out in our review that even famous frameworks might miss important aspects. The article ends with a critical discussion of the current state of the literature and potential future research avenues.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"2016 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140808512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Serious Games, Knowledge Acquisition, and Conflict Resolution: The Case of PeaceMaker as a Peace Education Tool
Pub Date: 2024-04-27 | DOI: 10.1177/08944393241249724
Iolie Nicolaidou, Ronit Kampf
Israeli-Jews and Palestinians cannot easily be exposed to contradicting information about “the other” in the intractable Israeli-Palestinian conflict because of the emotionally charged situation and prevailing ethnocentrism. Serious games like PeaceMaker are used as innovative interventions for peace education. Winning PeaceMaker indicates better conflict resolution skills and the development of an informed viewpoint on the situation, which is required for conflict resolution and peacebuilding. However, the literature evaluating the effectiveness of prosocial games in educating about conflict and peace is severely lacking. We examine the effects of this computerized simulation of the Israeli-Palestinian conflict on enhancing knowledge about the conflict and “the other” among undergraduate players who are direct parties (i.e., Israeli-Jews and Palestinians) and third parties (i.e., Americans and Cypriots). In addition, we investigate the knowledge gap between direct parties and third parties who did and did not win the game. We conducted a quasi-experimental questionnaire study with 168 undergraduates using a pre- and post-intervention design. We found that after playing PeaceMaker, direct parties to the conflict acquired significantly more knowledge about the other side, and third parties acquired significantly more knowledge about the conflict. In addition, PeaceMaker minimized the knowledge gap between direct parties who won the game and those who did not, and increased the knowledge gap between third parties who won the game and those who did not. Our results suggest that serious games might be effective interventions for peace education, because they appear to enhance knowledge about the conflict and about “the other,” particularly for young people who are direct parties to this divide.
{"title":"Serious Games, Knowledge Acquisition, and Conflict Resolution: The Case of PeaceMaker as a Peace Education Tool","authors":"Iolie Nicolaidou, Ronit Kampf","doi":"10.1177/08944393241249724","DOIUrl":"https://doi.org/10.1177/08944393241249724","url":null,"abstract":"Israeli-Jews and Palestinians cannot easily be exposed to contradicting information about “the other” in the intractable Israeli-Palestinian conflict because of the emotionally charged situation and prevailing ethnocentrism. Serious games like PeaceMaker are used as innovative interventions for peace education. Winning PeaceMaker indicates better conflict resolution skills and developing an informative viewpoint regarding the situation, which is required for conflict resolution and peacebuilding. The evaluation of the effectiveness of prosocial games in educating about conflict and peace in the literature is severely lacking. We examine the effects of this computerized simulation of the Israeli–Palestinian conflict on enhancing knowledge about the conflict and “the other” among undergraduate players who are direct parties (i.e., Israeli-Jews and Palestinians) and third parties (i.e., Americans and Cypriots). In addition, we investigate the knowledge gap between direct parties and third parties who won and did not win the game. Using questionnaires, we conducted a quasi-experimental study with 168 undergraduates using a pre- and post-intervention research design. We found that direct parties to the conflict acquired significantly more knowledge about the other side, and third parties acquired significantly more knowledge about the conflict after playing PeaceMaker. In addition, PeaceMaker minimized the knowledge gap after playing the game among direct parties who won the game and those who did not win and increased the knowledge gap between third parties who won the game and those who did not win. Our results suggest that serious games might be effective interventions for peace education, because they appear to enhance knowledge about the conflict, and about “the other” particularly for young people who are direct parties to this divide.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"30 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140808520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Respondent Burden in Online Surveys: How Different Sources of Question Difficulty Influence Cursor Movements
Pub Date: 2024-04-25 | DOI: 10.1177/08944393241247425
Franziska M. Leipold, Pascal J. Kieslich, Felix Henninger, Amanda Fernández-Fontelo, Sonja Greven, Frauke Kreuter
Online surveys are a widely used mode of data collection. However, as no interviewer is present, respondents face any difficulties they encounter alone, which may lead to measurement error and biased or (at worst) invalid conclusions. Detecting response difficulty is therefore vital. Previous research has predominantly focused on response times to detect general response difficulty. However, response difficulty may stem from different sources, such as overly complex wording or similarity between response options. So far, the question of whether indicators can discriminate between these sources has not been addressed. The goal of the present study, therefore, was to evaluate whether specific characteristics of participants’ cursor movements are related to specific properties of survey questions that increase response difficulty. In a preregistered online experiment, we manipulated the length of the question text, the complexity of the question wording, and the difficulty of the response options orthogonally between questions. We hypothesized that these changes would lead to increased response times, hovers (movement pauses), and y-flips (changes in vertical movement direction), respectively. As expected, each manipulation led to an increase in the corresponding measure, although the other dependent variables were affected as well. However, the strengths of the effects did differ as expected between the mouse-tracking indices: Hovers were more sensitive to complex wording than to question difficulty, while the opposite was true for y-flips. These results indicate that differentiating sources of response difficulty might indeed be feasible using mouse-tracking.
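For readers unfamiliar with these indices, the sketch below shows one plausible way to compute response time, hovers, and y-flips from timestamped cursor samples. The 200 ms hover threshold and the trace data are assumptions for illustration, not the authors’ parameters.

```python
# Derive the three indices from (t_ms, x, y) cursor samples for one question.

def cursor_indices(samples, hover_ms=200):
    """Return (response_time, hovers, y_flips); thresholds are illustrative."""
    hovers = y_flips = 0
    pause = 0          # accumulated stillness in ms
    counted = False    # whether the current pause was already counted as a hover
    last_dir = 0       # last vertical movement direction: -1, 0, or 1
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        if (x1, y1) == (x0, y0):           # cursor did not move
            pause += t1 - t0
            if pause >= hover_ms and not counted:
                hovers += 1                # pause long enough to count as a hover
                counted = True
        else:
            pause, counted = 0, False
        dy = y1 - y0
        direction = (dy > 0) - (dy < 0)
        if direction and last_dir and direction != last_dir:
            y_flips += 1                   # vertical direction reversed
        if direction:
            last_dir = direction
    return samples[-1][0] - samples[0][0], hovers, y_flips

trace = [(0, 10, 10), (50, 10, 40), (100, 10, 40), (350, 10, 40), (400, 10, 20)]
print(cursor_indices(trace))  # -> (400, 1, 1)
```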
{"title":"Detecting Respondent Burden in Online Surveys: How Different Sources of Question Difficulty Influence Cursor Movements","authors":"Franziska M. Leipold, Pascal J. Kieslich, Felix Henninger, Amanda Fernández-Fontelo, Sonja Greven, Frauke Kreuter","doi":"10.1177/08944393241247425","DOIUrl":"https://doi.org/10.1177/08944393241247425","url":null,"abstract":"Online surveys are a widely used mode of data collection. However, as no interviewer is present, respondents face any difficulties they encounter alone, which may lead to measurement error and biased or (at worst) invalid conclusions. Detecting response difficulty is therefore vital. Previous research has predominantly focused on response times to detect general response difficulty. However, response difficulty may stem from different sources, such as overly complex wording or similarity between response options. So far, the question of whether indicators can discriminate between these sources has not been addressed. The goal of the present study, therefore, was to evaluate whether specific characteristics of participants’ cursor movements are related to specific properties of survey questions that increase response difficulty. In a preregistered online experiment, we manipulated the length of the question text, the complexity of the question wording, and the difficulty of the response options orthogonally between questions. We hypothesized that these changes would lead to increased response times, hovers (movement pauses), and y-flips (changes in vertical movement direction), respectively. As expected, each manipulation led to an increase in the corresponding measure, although the other dependent variables were affected as well. However, the strengths of the effects did differ as expected between the mouse-tracking indices: Hovers were more sensitive to complex wording than to question difficulty, while the opposite was true for y-flips. These results indicate that differentiating sources of response difficulty might indeed be feasible using mouse-tracking.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"151 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140651841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Covering the Campaign: Computational Tools for Measuring Differences in Candidate and Party News Coverage With Application to an Emerging Democracy
Pub Date: 2024-04-18 | DOI: 10.1177/08944393241247420
Aaron Erlich, Danielle F. Jung, James D. Long
How does media coverage of electoral campaigns distinguish parties and candidates in emerging democracies? To answer this question, we present a multi-step procedure that we apply in South Africa. First, we develop a theoretically informed classification of election coverage as either “narrow” or “broad” within the entire corpus of news coverage during an electoral campaign. Second, to deploy our classification scheme, we use a supervised machine learning approach to classify news as “broad,” “narrow,” or “not election-related.” Finally, we combine our supervised classification with a topic modeling algorithm (BERTopic) that is based on Bidirectional Encoder Representations from Transformers (BERT), in addition to other statistical and machine learning methods. The combination of our classification scheme, BERTopic, and associated methods allows us to identify the main election-related themes in broad and narrow election coverage, and how different candidates and parties are associated with these themes. We provide an in-depth discussion of our method for interested users in the social sciences. We then apply the proposed techniques to text from nearly 100,000 news articles during South Africa’s 2014 campaign and test our empirical predictions about candidate and party coverage of corruption, the economy, health, public infrastructure, and security. The application of our method highlights a nuanced campaign environment in South Africa; candidates and parties frequently receive distinct and substantive coverage on key campaign themes.
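A minimal sketch of the topic-modeling step, using the open-source BERTopic library’s standard fit_transform workflow; the 20-newsgroups corpus stands in for the (non-public) South African election articles, and the min_topic_size value is illustrative. Running it requires the bertopic package and downloads a sentence-embedding model on first use.

```python
# Discover themes in a document corpus with BERTopic (BERT embeddings +
# clustering). The corpus below is a public stand-in, not the study's articles.
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic(min_topic_size=20)   # parameter choice is illustrative
topics, probs = topic_model.fit_transform(docs)

print(topic_model.get_topic_info().head())  # largest discovered themes
```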
{"title":"Covering the Campaign: Computational Tools for Measuring Differences in Candidate and Party News Coverage With Application to an Emerging Democracy","authors":"Aaron Erlich, Danielle F. Jung, James D. Long","doi":"10.1177/08944393241247420","DOIUrl":"https://doi.org/10.1177/08944393241247420","url":null,"abstract":"How does media coverage of electoral campaigns distinguish parties and candidates in emerging democracies? To answer, we present a multi-step procedure that we apply in South Africa. First, we develop a theoretically informed classification of election coverage as either “narrow” or “broad” from within the entire corpus of news coverage during an electoral campaign. Second, to deploy our classification scheme, we use a supervised machine learning approach to classify news as “broad,” “narrow,” or “not election-related.” Finally, we combine our supervised classification with a topic modeling algorithm (BERTTopic) that is based on Bidirectional Encoder Representations from Transformers (BERT), in addition to other statistical and machine learning methods. The combination of our classification scheme, BERTTopic, and associated methods allows us to identify the main election-related themes among broad and narrow election-related coverage, and how different candidates and parties are associated with these themes. We provide an in-depth discussion of our method for interested users in the social sciences. We then apply our proposed techniques on text from nearly 100,000 news articles during South Africa’s 2014 campaign and test our empirical predictions about candidate and party coverage of corruption, the economy, health, public infrastructure, and security. The application of our method highlights a nuanced campaign environment in South Africa; candidates and parties frequently receive distinct and substantive coverage on key campaign themes.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"16 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140622915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How Elites Invigorate Emotionality and Extremity in Digital Networks
Pub Date: 2024-04-16 | DOI: 10.1177/08944393241247427
Anson Au
The October 2017 Las Vegas shooting was the deadliest shooting in modern American history, but little scholarship has examined the public uproar in its wake, particularly in digital networks. Drawing on a corpus of 100,000 public Tweets comprising 1,119,638 unique words written in reaction to the shooting, this article addresses this lacuna by investigating the topics of reactions and their linkages with elites. It theorizes that elites invigorate the emotionality of public reactions and broker the connection between discursive and affective content in digital networks. The results show that Tweets engaging with elites expressed significantly greater emotionality and more extreme emotional valences than Tweets written independently of elites. Additionally, the article identifies variations in the discursive themes invoked depending on the type of elite. Mentions of non-political elites drew on themes of expressive support and depictions of the immediate environment, with little emotional extremity. By contrast, mentions of political elites drew on themes of broader policy debates on gun ownership laws and attendant policy reforms. Unlike mentions of non-political elites, mentions of political elites also exhibited greater extremity in negative emotional valences, reflective of increasing polarization in American politics.
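One plausible way to operationalize such a comparison is sketched below: score each tweet’s emotional extremity with a valence lexicon (here VADER, which may or may not match the authors’ instrument) and compare groups with a Welch t-test. The tweets are made-up stand-ins.

```python
# Compare emotional extremity of elite-engaging vs. independent tweets.
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import ttest_ind

elite = ["@Senator act on gun laws NOW!", "@POTUS this is a disgrace.",
         "Thank you @FBI for your courage."]
non_elite = ["Thinking of everyone in Las Vegas.", "Stay safe out there.",
             "Heartbroken by the news tonight."]

analyzer = SentimentIntensityAnalyzer()

def extremity(texts):
    # Absolute compound score = distance from neutral valence, in [0, 1].
    return [abs(analyzer.polarity_scores(t)["compound"]) for t in texts]

t_stat, p_val = ttest_ind(extremity(elite), extremity(non_elite), equal_var=False)
print(t_stat, p_val)
```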
{"title":"How Elites Invigorate Emotionality and Extremity in Digital Networks","authors":"Anson Au","doi":"10.1177/08944393241247427","DOIUrl":"https://doi.org/10.1177/08944393241247427","url":null,"abstract":"The October 2017 Las Vegas shooting was the deadliest shooting in modern American history, but little scholarship has examined the public uproar in its wake, particularly in digital networks. Drawing on a corpus of 100,000 public Tweets and 1,119,638 unique words written in reaction to the shooting, this article addresses this lacuna by investigating the topics of reactions and their linkages with elites. This article theorizes that elites invigorate the emotionality of public reactions and broker the connection between discursive and affective content in digital networks. The results show that Tweets engaging with elites expressed statistically greater emotionality and extremity in emotional valences compared to Tweets written independent of elites. Additionally, this article identifies variations in the discursive themes invoked based on the types of elites. Mentions of non-political elites drew on themes about expressive support and depictions of the immediate environment with little emotional extremity. By contrast, mentions of political elites drew on themes about broader policy debates on gun ownership laws and adherent policy reforms. Unlike with non-political elites, mentions of political elites also exhibited greater extremity in negative emotional valences, reflective of increasing polarization in American politics.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"47 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140603847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Seed of Doubt: Examining the Role of Alternative Social and News Media for the Birth of a Conspiracy Theory
Pub Date: 2024-04-15 | DOI: 10.1177/08944393241246281
Tim Schatto-Eckrodt, Lena Clever, Lena Frischlich
Consuming conspiracy theories erodes trust in democratic institutions, while conspiracy beliefs demotivate democratic participation, posing a potential threat to democracy. The proliferation of social media, especially the emergence of numerous alternative platforms with minimal moderation, has greatly facilitated the dissemination and consumption of conspiracy theories. Nevertheless, little is known about the origin and evolution of specific conspiracy theories across different platforms. This study addresses this gap through a large-scale, cross-platform examination of the genesis of new conspiracy theories surrounding the death of Jeffrey Epstein. Through a (semi-)automated content analysis of a distinctive dataset comprising N = 8,020,314 Epstein-related posts published on both established platforms (Twitter, Reddit) and alternative platforms (Gab and 4Chan), we demonstrate that conspiracy theories emerge early and influence public discourse well in advance of reports from established media sources. Our data show that users of the studied platforms immediately turned to conspiratorial explanations, exhibiting skepticism towards the official representation of events. Especially on alternative platforms, this skepticism swiftly transformed into unwarranted conspiracy theorizing, partly bolstered by references to alternative news media sources. The study shows how conspiratorial explanations thrive in low-information environments and how alternative media play a role in turning rational skepticism into unwarranted conspiracy theories.
{"title":"The Seed of Doubt: Examining the Role of Alternative Social and News Media for the Birth of a Conspiracy Theory","authors":"Tim Schatto-Eckrodt, Lena Clever, Lena Frischlich","doi":"10.1177/08944393241246281","DOIUrl":"https://doi.org/10.1177/08944393241246281","url":null,"abstract":"Consuming conspiracy theories erodes trust in democratic institutions, while conspiracy beliefs demotivate democratic participation, posing a potential threat to democracy. The proliferation of social media, especially the emergence of numerous alternative platforms with minimal moderation, has greatly facilitated the dissemination and consumption of conspiracy theories. Nevertheless, there remains a dearth of knowledge concerning the origin and evolution of specific conspiracy theories across different platforms. This study aims to address this gap through a large-scale, cross-platform examination of the genesis of new conspiracy theories surrounding the death of Jeffrey Epstein. Through a (semi-) automated content analysis conducted on a distinctive dataset comprising N = 8,020,314 Epstein-related posts posted on both established platforms ( Twitter, Reddit) and alternative platforms ( Gab and 4Chan), we demonstrate that conspiracy theories emerge early and influence public discourse well in advance of reports from established media sources. Our data shows that users of the studied platforms immediately turn to conspirational explanations, exhibiting skepticism towards the official representation of events. Especially on alternative platforms, this skepticism swiftly transformed into unwarranted conspiracy theorizing, partly bolstered by references to alternative news media sources. The present study shows how conspirational explanations thrive in low information environments and how alternative media plays a role in turning rational skepticism into unwarranted conspiracy theories.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"47 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140553190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are Large-Scale Data From Private Companies Reliable? An Analysis of Machine-Generated Business Location Data in a Popular Dataset
Pub Date: 2024-04-15 | DOI: 10.1177/08944393241245390
Nikolitsa Grigoropoulou, Mario L. Small
Large-scale data from private companies offer new opportunities to examine topics of scientific and social significance, such as racial inequality, partisan polarization, and activity-based segregation. However, because such data are often generated through automated processes, their accuracy and reliability for social science research remain unclear. The present study examines how quality issues in large-scale data from private companies can afflict the reporting of even ostensibly uncomplicated values. We assess the reliability with which SafeGraph, an often-used device-tracking data source, sorted the data it acquired on financial institutions into categories such as banks and payday lenders, based on a standard classification system. We find major classification problems that vary by type of institution, and remarkably high rates of unidentified closures and duplicate records. We suggest that classification problems can affect research based on large-scale private data in four ways: detection, efficiency, validity, and bias. We discuss the implications of our findings and list a set of problems researchers should consider when using large-scale data from companies.
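The duplicate and misclassification checks described here are straightforward to express in pandas; the sketch below runs them on a toy frame. The column names and schema are assumptions, not SafeGraph’s actual format (NAICS 522110 is the commercial-banking code).

```python
# Two sanity checks on machine-generated business records: duplicates and
# implausible industry codes. The data and schema are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "name":  ["First Bank", "First Bank", "QuickCash Payday"],
    "lat":   [40.01, 40.01, 40.02],
    "lon":   [-75.00, -75.00, -75.01],
    "naics": ["522110", "522110", "522110"],  # 522110 = commercial banking
})

# Duplicate records: same name at the same coordinates.
dupes = df[df.duplicated(subset=["name", "lat", "lon"], keep=False)]
print(f"{len(dupes)} rows look like duplicates")

# Misclassification: a payday lender carrying a commercial-banking code.
suspect = df[df["name"].str.contains("Payday") & (df["naics"] == "522110")]
print(suspect)
```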
{"title":"Are Large-Scale Data From Private Companies Reliable? An Analysis of Machine-Generated Business Location Data in a Popular Dataset","authors":"Nikolitsa Grigoropoulou, Mario L. Small","doi":"10.1177/08944393241245390","DOIUrl":"https://doi.org/10.1177/08944393241245390","url":null,"abstract":"Large-scale data from private companies offer new opportunities to examine topics of scientific and social significance, such as racial inequality, partisan polarization, and activity-based segregation. However, because such data are often generated through automated processes, their accuracy and reliability for social science research remain unclear. The present study examines how quality issues in large-scale data from private companies can afflict the reporting of even ostensibly uncomplicated values. We assess the reliability with which an often-used device tracking data source, SafeGraph, sorted data it acquired on financial institutions into categories, such as banks and payday lenders, based on a standard classification system. We find major classification problems that vary by type of institution, and remarkably high rates of unidentified closures and duplicate records. We suggest that classification problems can affect research based on large-scale private data in four ways: detection, efficiency, validity, and bias. We discuss the implications of our findings, and list a set of problems researchers should consider when using large-scale data from companies.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"29 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140557300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unorthodox Information Sources of Coping With the COVID-19 Crisis in the Ultra-Orthodox Society
Pub Date: 2024-04-10 | DOI: 10.1177/08944393241246282
David Levine, Tali Gazit
This study examines the role of information sources in the ultra-Orthodox (Haredi) Jewish community’s coping with the coronavirus (COVID-19) pandemic in Israel by comparing their use of digital versus traditional information platforms. The study examined coping with COVID-19 in light of explanatory variables such as Community Sense of Coherence (C-SOC), Internet usage, and demographic variables. In an online survey, 212 participants who identified as ultra-Orthodox and had access to the Internet responded; 47.2% were women and 52.8% were men, with a mean age of 37.66 (SD = 12.60). Findings showed that members of ultra-Orthodox society who used digital information sources coped with COVID-19 emotionally and cognitively significantly better than community members who used traditional information sources. Furthermore, the more the Internet was used for informational or social needs, the more digital information sources helped community members cope with the crisis emotionally and cognitively. Conversely, the more participants felt that ultra-Orthodox society is a significant factor helping them face life’s challenges (C-SOC), the better they coped with the pandemic using traditional information sources. This study presents a previously unstudied aspect of ultra-Orthodox society’s methods of coping with a worldwide crisis, whether through digital or traditional information sources. The findings emphasize the need to make reliable and timely digital information accessible to this community, especially during a crisis, while respecting the culture and values of ultra-Orthodox society.
{"title":"Unorthodox Information Sources of Coping With the COVID-19 Crisis in the Ultra-Orthodox Society","authors":"David Levine, Tali Gazit","doi":"10.1177/08944393241246282","DOIUrl":"https://doi.org/10.1177/08944393241246282","url":null,"abstract":"This study examines the role of information sources in the ultra-Orthodox (Haredi) Jewish community’s coping with the coronavirus (COVID-19) pandemic in Israel by comparing their use of digital versus traditional information platforms. The study examined coping with COVID-19, considering explanatory variables such as Community Sense of Coherence (C-SOC), Internet usage, and other demographic variables. Using an online survey, 212 participants responded who identified as ultra-Orthodox and had access to the Internet, of which 47.2% were women and 52.8% were men, with a mean age of 37.66 ( SD = 12.60). Findings showed that the emotional and cognitive coping levels of members of ultra-Orthodox society with COVID-19 utilizing digital information sources were significantly better than those among community members using traditional information sources. Furthermore, the more the Internet was used for information or social needs, the more digital information sources helped community members cope with the crisis from an emotional and cognitive viewpoint. Conversely, the more participants felt that ultra-Orthodox society is a significant factor that helps them face life’s challenges (C-SOC), the better they coped with the pandemic utilizing traditional information sources. This study presents a novel, previously unstudied approach to ultra-Orthodox society’s coping methods with a worldwide crisis, whether through digital or traditional information sources. The study’s findings emphasize the need to make reliable and timely digital information accessible to this community, especially during a crisis, while respecting the culture and values of ultra-Orthodox society.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"71 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140545500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Artificial Intelligence, Rationalization, and the Limits of Control in the Public Sector: The Case of Tax Policy Optimization
Pub Date: 2024-03-14 | DOI: 10.1177/08944393241235175
Jakob Mökander, Ralph Schroeder
In this paper, we first frame the use of artificial intelligence (AI) systems in the public sector as a continuation and intensification of long-standing rationalization and bureaucratization processes. Drawing on Weber, we understand the core of these processes to be the replacement of traditions with instrumental rationality, that is, the most calculable and efficient way of achieving any given policy objective. Second, we demonstrate how much of the criticism directed towards AI systems, both among the public and in scholarship, springs from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment whereby AI systems are used to optimize tax policy to advance a specific normative end: reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, it also highlights that AI-driven policy optimization (i) comes at the cost of excluding other competing political values, (ii) overrides citizens’ sense of their (non-instrumental) obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Third, we observe that contemporary scholarship and advocacy directed towards ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that rescues humans from moral injustices. That is overly optimistic: science can only provide the means; it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends be made explicit and formalized, thereby subjecting them to public scrutiny, deliberation, and debate.
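A toy version of the thought experiment makes point (i) concrete: if the optimizer’s sole objective is minimizing inequality, it pushes redistribution to the maximum, excluding every competing value. The synthetic incomes and the flat-tax-plus-transfer scheme below are assumptions made for illustration, not the paper’s model.

```python
# Pick the tax-and-transfer rate that minimizes the Gini coefficient on
# synthetic incomes; the "optimum" ignores all other political values.
import numpy as np

def gini(x):
    """Gini coefficient of a 1-D income array."""
    x = np.sort(x)
    n = len(x)
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=10, sigma=1.0, size=10_000)

def post_tax(rate):
    # Flat tax at `rate`, revenue redistributed as an equal per-capita transfer.
    return incomes * (1 - rate) + rate * incomes.mean()

rates = np.linspace(0, 1, 101)
best = min(rates, key=lambda r: gini(post_tax(r)))
print(best)  # -> 1.0: full redistribution "wins" once equality is the only end
```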
{"title":"Artificial Intelligence, Rationalization, and the Limits of Control in the Public Sector: The Case of Tax Policy Optimization","authors":"Jakob Mökander, Ralph Schroeder","doi":"10.1177/08944393241235175","DOIUrl":"https://doi.org/10.1177/08944393241235175","url":null,"abstract":"In this paper, we first frame the use of artificial intelligence (AI) systems in the public sector as a continuation and intensification of long-standing rationalization and bureaucratization processes. Drawing on Weber, we understand the core of these processes to be the replacement of traditions with instrumental rationality, that is, the most calculable and efficient way of achieving any given policy objective. Second, we demonstrate how much of the criticisms, both among the public and in scholarship, directed towards AI systems spring from well-known tensions at the heart of Weberian rationalization. To illustrate this point, we introduce a thought experiment whereby AI systems are used to optimize tax policy to advance a specific normative end: reducing economic inequality. Our analysis shows that building a machine-like tax system that promotes social and economic equality is possible. However, our analysis also highlights that AI-driven policy optimization (i) comes at the exclusion of other competing political values, (ii) overrides citizens’ sense of their (non-instrumental) obligations to each other, and (iii) undermines the notion of humans as self-determining beings. Third, we observe that contemporary scholarship and advocacy directed towards ensuring that AI systems are legal, ethical, and safe build on and reinforce central assumptions that underpin the process of rationalization, including the modern idea that science can sweep away oppressive systems and replace them with a rule of reason that would rescue humans from moral injustices. That is overly optimistic: science can only provide the means – it cannot dictate the ends. Nonetheless, the use of AI in the public sector can also benefit the institutions and processes of liberal democracies. Most importantly, AI-driven policy optimization demands that normative ends are made explicit and formalized, thereby subjecting them to public scrutiny, deliberation, and debate.","PeriodicalId":49509,"journal":{"name":"Social Science Computer Review","volume":"19 1","pages":""},"PeriodicalIF":4.1,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140142191","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Uncovering the Missing Pieces: Predictors of Nonresponse in a Mobile Experience Sampling Study on Media Effects Among Youth
Pub Date: 2024-02-23 | DOI: 10.1177/08944393241235182
Anne Reinhardt, Sophie Mayen, Claudia Wilhelm
Mobile Experience Sampling (MES) is a promising tool for understanding youth digital media use and its effects. Unfortunately, the method suffers from high levels of missing data. Depending on whether data are missing randomly or non-randomly, this can severely affect the validity of findings. For this reason, we investigated predictors of non-response in an MES study on displacement effects of digital media use on adolescents’ well-being and academic performance (N = 347). Multilevel binary logistic regression identified significant predictors of response odds, such as afternoon beeps and being outside. Importantly, adolescents with poorer school grades were more likely to miss beeps. Because this missingness was related to the outcome variable, modern missing data methods such as multiple imputation should be applied before analyzing the data. Understanding the reasons for non-response is the first step towards preventing, minimizing, and handling missing data in MES studies, ultimately ensuring that the collected data are fully utilized to draw accurate conclusions.
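As a pointer to the kind of remedy recommended here, the sketch below produces m = 5 multiply imputed datasets with scikit-learn’s IterativeImputer (sample_posterior=True enables proper multiple imputation); the variables are invented, and a full analysis would pool estimates across the imputed datasets via Rubin’s rules.

```python
# Multiple imputation of beep-level missingness; columns are invented stand-ins.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Columns: afternoon beep (0/1), outside (0/1), school grade, outcome.
X = np.array([
    [1, 0, 2.0, 3.4],
    [0, 1, 1.7, np.nan],   # missed beep -> missing outcome
    [1, 1, 3.3, 2.1],
    [0, 0, 2.7, 2.9],
])

# m = 5 imputed datasets; posterior sampling gives each one different draws.
imputations = [
    IterativeImputer(sample_posterior=True, random_state=s).fit_transform(X)
    for s in range(5)
]
print(imputations[0])
```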