Title: Comparing Chatbots and Online Surveys for (Longitudinal) Data Collection: An Investigation of Response Characteristics, Data Quality, and User Evaluation
Authors: Brahim Zarouali, Theo Araujo, Jakob Ohme, Claes H. de Vreese
Journal: Communication Methods and Measures
Pub Date: 2023-01-12 | DOI: 10.1080/19312458.2022.2156489
Title: Conceptualizing and Examining Change in Communication Research
Authors: Miriam Brinberg, David M Lydon-Staley
Pub Date: 2023-01-01 (Epub 2023-01-18) | DOI: 10.1080/19312458.2023.2167197
Open Access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10139745/pdf/
Abstract: Communication research often focuses on processes of communication, such as how messages impact individuals over time or how interpersonal relationships develop and change. Despite their importance, these change processes are often implicit in much theoretical and empirical work in communication. Intensive longitudinal data are becoming increasingly feasible to collect and, when coupled with appropriate analytic frameworks, enable researchers to better explore and articulate the types of change underlying communication processes. To facilitate the study of change processes, we (a) describe advances in data collection and analytic methods that allow researchers to articulate complex change processes of phenomena in communication research, (b) provide an overview of change processes and how they may be captured with intensive longitudinal methods, and (c) discuss considerations of capturing change when designing and implementing studies. We are excited about the future of studying processes of change in communication research, and we look forward to the iterations between empirical tests and theory revision that will occur as researchers delve into studying change within communication processes.
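To make the idea of articulating change from intensive longitudinal data concrete, here is a minimal, hypothetical sketch (not from the article): fitting a person-specific linear change trajectory to repeated diary measurements with ordinary least squares. The participant IDs and diary scores are invented for illustration.

```python
# Hypothetical sketch: person-specific linear change from intensive
# longitudinal data, fit by ordinary least squares (pure Python).
# The diary data below are fabricated for illustration.

def ols_slope_intercept(times, values):
    """Fit y = a + b*t by least squares; returns (intercept, slope)."""
    n = len(times)
    mean_t = sum(times) / n
    mean_y = sum(values) / n
    sxx = sum((t - mean_t) ** 2 for t in times)
    sxy = sum((t - mean_t) * (y - mean_y) for t, y in zip(times, values))
    slope = sxy / sxx
    return mean_y - slope * mean_t, slope

# Daily self-reports (e.g., relational closeness) for two fictional people.
diary = {
    "p1": [(0, 2.0), (1, 2.4), (2, 2.9), (3, 3.5), (4, 3.9)],
    "p2": [(0, 4.0), (1, 3.8), (2, 3.9), (3, 3.7), (4, 3.6)],
}

for pid, obs in diary.items():
    times, values = zip(*obs)
    intercept, slope = ols_slope_intercept(times, values)
    print(f"{pid}: start={intercept:.2f}, change/day={slope:+.3f}")
```

Comparing the two slopes (one clearly increasing, one roughly flat) is the simplest instance of the between-person differences in within-person change that intensive longitudinal designs make visible; richer change processes would swap in nonlinear or multilevel models.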
Title: Investigating Opinions on Public Policies in Digital Media: Setting up a Supervised Machine Learning Tool for Stance Classification
Authors: Christina Viehmann, Tilman Beck, Marcus Maurer, Oliver Quiring, Iryna Gurevych
Pub Date: 2022-12-12 | DOI: 10.1080/19312458.2022.2151579
Abstract: Supervised machine learning (SML) provides tools to efficiently scrutinize large corpora of communication texts. Yet setting up such a tool involves many decisions, starting with the data needed for training, the selection of an algorithm, and the details of model training. We aim to establish a firm link between communication research tasks and the state of the art in natural language processing by systematically comparing the performance of different automatic text analysis approaches. We do this for a challenging task: stance detection of opinions, voiced on Twitter, on policy measures to tackle the COVID-19 pandemic in Germany. Our results add evidence that pre-trained language models such as BERT outperform feature-based and other neural network approaches. Yet the gains one can achieve differ greatly depending on the specifics of pre-training (i.e., which language model is used). Adding to the robustness of our conclusions, we run a generalizability check with a use case that differs in language and topic. Additionally, we illustrate how the amount and quality of training data affect model performance, pointing to potential compensation effects. Based on our results, we derive practical recommendations for setting up such SML tools to study communication texts.
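For readers new to the pipeline this abstract describes, here is a deliberately minimal feature-based baseline for stance classification: a bag-of-words naive Bayes classifier in pure Python. The article's point is that pre-trained language models such as BERT outperform feature-based approaches like this one; the training texts and stance labels below are invented toys, not the study's data.

```python
# Minimal feature-based stance baseline: multinomial naive Bayes over
# bag-of-words with Laplace smoothing. Toy data invented for illustration.
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (text, label) pairs. Returns a simple NB model."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for text, label in docs:
        class_counts[label] += 1
        for w in text.lower().split():
            word_counts[label][w] += 1
            vocab.add(w)
    return class_counts, word_counts, vocab

def predict_nb(model, text):
    class_counts, word_counts, vocab = model
    total_docs = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for label in class_counts:
        lp = math.log(class_counts[label] / total_docs)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / denom)  # smoothed
        if lp > best_lp:
            best, best_lp = label, lp
    return best

train = [
    ("masks protect everyone support the mandate", "favor"),
    ("the mandate protects public health", "favor"),
    ("the lockdown destroys businesses end it", "against"),
    ("end the restrictions they destroy freedom", "against"),
]
model = train_nb(train)
print(predict_nb(model, "support the mask mandate"))
```

A transformer-based setup replaces the hand-built word counts with contextual representations from a pre-trained model, which is precisely where the performance gains the study reports come from.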
Title: A New Scale for Measuring Identity Insecurity
Authors: Zachary B. Massey, Ioana A. Cionea
Pub Date: 2022-11-24 | DOI: 10.1080/19312458.2022.2144631
Abstract: Although theory and research have discussed identity insecurity as a factor that influences intercultural communication behaviors, no reliable measure has been presented to capture this construct. This paper reports results from three studies designed to test and validate a new measure of identity insecurity. Study 1 (N = 173) included item generation and exploratory factor analysis, revealing five unidimensional identity insecurity factors, termed individual identity insecurity, public presentation insecurity, dissimilar others insecurity, reactive insecurity, and social identity insecurity. Study 2 (N = 524) confirmed this five-factor structure. Study 3 (N = 807) further examined the structure and construct validity of the identity insecurity scale. Results confirmed the five-factor solution obtained in Study 2 and revealed the measure had good construct, convergent, and discriminant validity. Thus, we conclude that the proposed new measure for identity insecurity has a clear factor structure, strong factor loadings, and good reliability and validity.
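As an illustrative aside (not the authors' analysis), the reliability half of scale validation typically comes down to internal-consistency statistics such as Cronbach's alpha, which is easy to compute by hand. The item responses below are fictional 5-point ratings invented for the example.

```python
# Cronbach's alpha for a set of scale items (pure Python).
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))

def cronbach_alpha(items):
    """items: one list of scores per item, all over the same respondents."""
    k = len(items)
    n = len(items[0])

    def variance(xs):  # sample variance (denominator n-1)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var / variance(totals))

# Fictional 5-point responses to three insecurity items from 4 respondents.
items = [
    [4, 2, 5, 1],
    [5, 1, 4, 2],
    [4, 2, 4, 1],
]
print(round(cronbach_alpha(items), 3))  # → 0.948
```

Values above roughly .70 are conventionally read as acceptable reliability; a full validation, as in the three studies here, additionally requires the factor-analytic and convergent/discriminant evidence the abstract describes.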
Title: Lifting the Veil on the Use of Big Data News Repositories: A Documentation and Critical Discussion of A Protest Event Analysis
Authors: Matthias Hoffmann, Felipe G. Santos, Christina Neumayer, Dan Mercea
Pub Date: 2022-09-28 | DOI: 10.1080/19312458.2022.2128099
Abstract: This paper presents a critical discussion of the processing, reliability, and implications of free big data repositories. We argue that big data is not only the starting point of scientific analyses but also the outcome of a long string of invisible or semi-visible tasks, often masked by the fetish of size that supposedly lends validity to big data. We unpack these notions by illustrating the process of extracting protest event data from the Global Database of Events, Language and Tone (GDELT) for six European countries over a period of seven years. To stand up to rigorous scientific scrutiny, we collected additional data by computational means and undertook large-scale neural-network translation, dictionary-based content analyses, machine-learning classification, and human coding. In documenting and critically discussing this process, we render visible the opaque procedures that inevitably shape any dataset and show how freely available datasets of this type require significant additional resources of knowledge, labor, money, and computational power. We conclude that while these processes can ultimately yield more valid datasets, supposedly free and ready-to-use big news data repositories should not be taken at face value.
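One of the seemingly trivial early steps the paper renders visible is filtering raw event rows down to protest events. The sketch below shows this on GDELT-style records; field names follow the GDELT event table (with CAMEO root code "14" denoting protest), but the rows themselves are fabricated, and a real extraction would face exactly the validation burdens the abstract describes.

```python
# Hedged sketch: filtering GDELT-style event rows to protest events
# for a set of countries and a date window. Rows are invented toys.

def protest_events(rows, countries, start, end):
    """Keep rows whose CAMEO root code is '14' (protest), in scope."""
    return [
        r for r in rows
        if r["EventRootCode"] == "14"
        and r["ActionGeo_CountryCode"] in countries
        and start <= r["SQLDATE"] <= end
    ]

rows = [
    {"SQLDATE": "20190312", "EventRootCode": "14", "ActionGeo_CountryCode": "GM"},
    {"SQLDATE": "20190312", "EventRootCode": "04", "ActionGeo_CountryCode": "GM"},
    {"SQLDATE": "20210705", "EventRootCode": "14", "ActionGeo_CountryCode": "UK"},
    {"SQLDATE": "20150101", "EventRootCode": "14", "ActionGeo_CountryCode": "FR"},
]
hits = protest_events(rows, {"GM", "FR", "UK"}, "20160101", "20221231")
print(len(hits))  # two rows survive the filter
```

The paper's point is that everything after this line of code — translating source texts, validating the event coding against human judgment, deduplicating — is where the real, usually invisible, labor sits.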
Title: Computer Vision and Internet Meme Genealogy: An Evaluation of Image Feature Matching as a Technique for Pattern Detection
Authors: Cédric Courtois, Thomas Frissen
Pub Date: 2022-09-22 | DOI: 10.1080/19312458.2022.2122423
Abstract: Internet memes are a fundamental aspect of digital culture. Despite being individual expressions, they vastly transcend the individual level as windows into, and vehicles for, wide-reaching social, cultural, and political narratives. Empirical research into meme culture is thriving yet highly compartmentalized. In the humanities and social sciences, most efforts involve in-depth linguistic and visual analyses of mostly handpicked examples of memes, raising the question of the origins and meanings of those particular expressions. In technical disciplines such as computer science, efforts focus on the large-scale identification and classification of meme images, as well as patterns of “viral” spread at scale. This contribution aims to bridge the chasm between depth and scale by introducing a three-step approach suitable for “computational grounded theory” studies in which (1) an automated procedure establishes formal links between meme images drawn from a large-scale corpus, paving the way for (2) network analysis to infer patterns of relatedness and spread, and (3) practical classification of visually related images into file folders for further local, hermeneutic analysis. The procedure is demonstrated and evaluated on two datasets: an artificially constructed, structured dataset and a naturally harvested, unstructured dataset. Future horizons and domains of application are discussed.
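The paper evaluates keypoint-based image feature matching; as a much simpler stand-in that conveys the same idea of linking visually related images, the sketch below uses an average hash (aHash) and Hamming distance. A real pipeline would use keypoint detectors (e.g., ORB or SIFT via OpenCV) on actual images; the 4x4 grayscale "images" here are invented toys.

```python
# Linking visually related "images" with an average hash and Hamming
# distance — a simplified stand-in for keypoint feature matching.

def average_hash(pixels):
    """1 bit per pixel: is it brighter than the image's mean?"""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

template = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]
# A lightly edited variant of the template (one cell brightened),
# as when a meme image is re-captioned.
variant = [[200, 200, 30, 10],
           [200, 200, 10, 10],
           [10, 10, 200, 200],
           [10, 10, 200, 200]]
unrelated = [[10] * 4, [200] * 4, [10] * 4, [200] * 4]

h0, h1, h2 = map(average_hash, (template, variant, unrelated))
print(hamming(h0, h1), hamming(h0, h2))  # small vs. large distance
```

Pairs with small distances become edges in the meme-genealogy network of step (2); the compact folder-level grouping of step (3) then follows from the connected components of that network.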
Title: Promises and Pitfalls of Social Media Data Donations
Authors: Irene I. van Driel, Anastasia Giachanou, J. Pouwels, L. Boeschoten, Ine Beyens, P. Valkenburg
Pub Date: 2022-09-12 | DOI: 10.1080/19312458.2022.2109608
Abstract: Studies assessing the effects of social media use are largely based on measures of time spent on social media. In recent years, scholars have increasingly called for more insight into the social media activities and content people engage with. Data Download Packages (DDPs), the archives of social media platforms that each European user has the right to download, provide a new and promising method to collect timestamped and content-based information about social media use. In this paper, we first detail the experiences and insights from a collection of 110 Instagram DDPs gathered from 102 adolescents. We then discuss the challenges and opportunities of collecting and analyzing DDPs to help future researchers decide whether and how to use them. DDPs provide tremendous opportunities to gain insight into the frequency, range, and content of social media activities, from browsing to searching and posting. Yet collecting, processing, and analyzing DDPs is also complex and laborious, and demands numerous procedural and analytical choices and decisions.
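A hedged sketch of one DDP processing step: reading a JSON file from an Instagram-style export and tallying timestamped activities per day. DDP layouts differ by platform and change over time, so the key names below are illustrative only, and the payload is a fabricated stand-in for a file inside a downloaded archive.

```python
# Parsing an (Instagram-style) DDP JSON file and counting timestamped
# activities per day. Keys and data are illustrative, not a real export.
import json
from collections import Counter
from datetime import datetime, timezone

raw = json.dumps({  # stand-in for one file inside the downloaded archive
    "likes_media_likes": [
        {"title": "friend_a", "timestamp": 1661990400},
        {"title": "friend_b", "timestamp": 1661994000},
        {"title": "brand_x", "timestamp": 1662076800},
    ]
})

def likes_per_day(payload):
    data = json.loads(payload)
    days = Counter()
    for like in data["likes_media_likes"]:
        day = datetime.fromtimestamp(like["timestamp"], tz=timezone.utc).date()
        days[day.isoformat()] += 1
    return days

print(dict(likes_per_day(raw)))
```

Even this toy illustrates the procedural choices the abstract warns about: which timezone to count days in, which activity files to include, and how to handle participants who donate multiple or partial packages.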
Title: Strong-Form Frequentist Testing In Communication Science: Principles, Opportunities, And Challenges
Authors: L. Coenen, T. Smits
Pub Date: 2022-09-09 | DOI: 10.1080/19312458.2022.2086690
Abstract: This paper discusses ‘strong-form’ frequentist testing as a useful complement to null hypothesis testing in communication science. In a ‘strong-form’ set-up a researcher defines a hypothetical effect size of (minimal) theoretical interest and assesses to what extent her findings falsify or corroborate that particular hypothesis. We argue that the idea of ‘strong-form’ testing aligns closely with the ideals of the movements for scientific reform, discuss its technical application within the context of the General Linear Model, and show how the relevant P-value-like quantities can be calculated and interpreted. We also provide examples and a simulation to illustrate how a strong-form set-up requires more nuanced reflections about research findings. In addition, we discuss some pitfalls that might still hold back strong-form tests from widespread adoption.
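A generic, hedged illustration of the core idea (not the authors' exact formulation): instead of testing against a zero-effect null, compare the observed estimate to a hypothetical effect of minimal theoretical interest, d_min. The one-sided quantity below is the probability of an estimate at least this small if the true effect were d_min; a small value counts against the minimal-effect hypothesis. All numbers are invented.

```python
# P-value-like quantity against a minimal-effect hypothesis, using a
# normal approximation for the estimator. Numbers are illustrative.
import math

def normal_cdf(z):
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def strong_form_p(d_obs, d_min, se):
    """P(estimate <= d_obs | true effect == d_min); small values
    falsify 'the effect is at least d_min'."""
    return normal_cdf((d_obs - d_min) / se)

d_obs, se = 0.10, 0.08  # invented observed effect and standard error
for d_min in (0.05, 0.15, 0.30):
    print(f"d_min={d_min:.2f}: p={strong_form_p(d_obs, d_min, se):.4f}")
```

Note the more nuanced reading this invites: the same estimate of 0.10 is compatible with a minimal effect of 0.05 but decisively falsifies the claim that the effect is at least 0.30, even though a conventional zero-null test might simply report "non-significant."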
Title: Metrics of News Audience Polarization: Same or Different?
Authors: F. Mangold, Michael Scharkow
Pub Date: 2022-07-03 | DOI: 10.1080/19312458.2022.2085249
Abstract: Although media and communication scholars have suggested various analytical methods for measuring and comparing news audience polarization across countries, we lack a systematic assessment of the metrics these techniques produce. Using survey data on news use in 26 countries from the 2016 Reuters Institute Digital News Report, we address this gap through a resampling simulation experiment. Our simulation revealed a strong impact of analytical choices, which invited disparate interpretations of how polarized news audiences are, how strongly audience polarization varies structurally between news environments, and how news audience polarization is distributed cross-nationally. Alternative choices led to profound differences in the compatibility, consistency, and validity of the empirical news audience polarization estimates. We conclude that a more precise methodological understanding of news audience polarization metrics strengthens our ability to draw meaningful inferences from empirical work.
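The resampling logic in miniature, as an illustrative sketch only: bootstrap the survey respondents and recompute a polarization metric (here, simply the standard deviation of outlet-audience ideology scores) to see how stable one analytical choice makes the estimate. The ideology scores are fabricated; the paper compares many metrics across 26 real country samples.

```python
# Bootstrap resampling of a toy polarization metric (pure Python).
import random
import statistics

def bootstrap_se(sample, metric, reps=2000, seed=42):
    """Standard error of `metric` under resampling with replacement."""
    rng = random.Random(seed)
    stats = [
        metric([rng.choice(sample) for _ in sample])
        for _ in range(reps)
    ]
    return statistics.stdev(stats)

# Fabricated outlet-audience ideology scores for one country.
audience_ideology = [-1.8, -1.2, -0.9, -0.3, 0.1, 0.4, 0.8, 1.1, 1.6, 2.0]
point = statistics.pstdev(audience_ideology)
se = bootstrap_se(audience_ideology, statistics.pstdev)
print(f"polarization = {point:.2f} +/- {se:.2f}")
```

Running the same loop with a different metric function in place of `statistics.pstdev` is exactly the kind of comparison the study scales up, and where its disparate interpretations emerge.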
Title: An Empirical Investigation of Inadequate Statistical Reporting Practices in Communication Meta-Analyses and Their Consequences
Authors: Stephanie K Van Stee, Qinghua Yang, Stephen A. Rains
Pub Date: 2022-06-27 | DOI: 10.1080/19312458.2022.2088713
Abstract: Insufficient statistical reporting practices (ISRPs) involve failure to report effect sizes or the information necessary to compute them in quantitative research. ISRPs can present problems for advancing knowledge in the field of Communication. Because, among other issues, studies containing ISRPs cannot be included in meta-analytic reviews, this practice undermines our ability to quantitatively summarize the findings from communication research with precision. We examine the prevalence and consequences of ISRPs among 50 meta-analyses published in four flagship communication journals. Our findings indicate that 80% of meta-analyses excluded at least one otherwise qualified primary study due to ISRPs, with a median of 6.5% of studies (k = 2) excluded per meta-analysis. The amount of inaccuracy introduced by ISRPs in the results of meta-analyses was small. Implications of the findings for communication research are discussed.
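To see why "the information necessary to compute" an effect size matters, consider the standard conversion meta-analysts fall back on when a primary study reports only a t statistic and its degrees of freedom: r = sqrt(t² / (t² + df)). When even t or df is missing, the study drops out of the meta-analysis entirely. The reported statistics below are fictional.

```python
# Recovering an effect size (correlation r) from a reported t statistic
# and degrees of freedom — the fallback that ISRPs often make impossible.
import math

def r_from_t(t, df):
    """Standard t-to-r conversion: r = sqrt(t^2 / (t^2 + df))."""
    return math.sqrt(t * t / (t * t + df))

# A (fictional) primary study reporting only "t(98) = 2.50, p < .05":
print(round(r_from_t(2.50, 98), 3))  # → 0.245
```

A study that instead reported only "p < .05" with no test statistic would leave nothing to convert, which is precisely the exclusion scenario the meta-analyses in this investigation faced.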