{"title":"Missing data and careless responses: recommendations for instructional communication","authors":"Zac D. Johnson","doi":"10.1080/03634523.2023.2171445","DOIUrl":null,"url":null,"abstract":"Data collection is, without question, a resource intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the increase in online data collection, which often results in lower response rates and higher instances of careless respondents than paper-andpencil surveys, which are not without their own drawbacks (Lefever et al., 2007; Nichols & Edlund, 2020). The issues of missing data and careless responses ultimately equate to more sunk costs for researchers only for the data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of type I or type II error (see Allison, 2003), meaning that claims drawn from these datasets may not be easily replicated due to faulty parameter estimates related to the original dataset. These issues hinder the ability for researchers to more deeply explore the relationship between communication and learning. Thankfully, there are strategies that quantitative researchers may utilize to address these issues, and in so doing more thoroughly and accurately ascertain communication’s relationship to learning. Each of the following methodological strategies is largely absent from the current instructional communication research canon and is relatively accessible. First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct equates to more items to which participants must read and respond. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help to address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to consider more complex relationships between additional variables without overburdening participants. A reconsideration of these scales validity may also reveal factor structures that are more accurate representations of communication related to instruction (Reise, 2012).","PeriodicalId":0,"journal":{"name":"","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/03634523.2023.2171445","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Data collection is, without question, a resource-intensive process. Unfortunately, many survey responses are returned incomplete, or individuals respond carelessly. These issues are exacerbated by the growth of online data collection, which often yields lower response rates and more careless respondents than paper-and-pencil surveys (which carry their own drawbacks; Lefever et al., 2007; Nichols & Edlund, 2020). Missing data and careless responses ultimately mean more sunk costs for researchers, only for the resulting data to be incomplete or otherwise problematic. Notably, these issues are accompanied by higher rates of Type I and Type II error (see Allison, 2003), meaning that claims drawn from such datasets may not replicate easily because the parameter estimates from the original dataset were faulty. These issues hinder researchers' ability to explore the relationship between communication and learning more deeply.

Thankfully, there are strategies that quantitative researchers can use to address these issues and, in so doing, ascertain communication's relationship to learning more thoroughly and accurately. Each of the following methodological strategies is largely absent from the current instructional communication research canon, and each is relatively accessible.

First, instructional communication researchers should begin by considering the length of their measurement instruments. As our methods have grown more sophisticated, we have included more and more in our models and research questions; each additional construct means more items that participants must read and answer. Scholars routinely consider four, five, or even more variables, resulting in participants being asked to provide upwards of 100 responses (e.g., Schrodt et al., 2009; Sidelinger et al., 2011). Participants lose interest and stop responding carefully, or stop responding entirely; this, as described above, is a significant problem. Thus, instructional communication scholars should consider shortening their measurement instruments (see Raykov et al., 2015). Perhaps we do not need 18 items to assess teacher confirmation (Ellis, 2000) or teacher credibility (Teven & McCroskey, 1997); perhaps far fewer items would suffice while maintaining validity. Shorter instruments would help address some of the issues underlying missing data and careless responses. Additionally, shorter instruments may afford researchers the opportunity to examine more complex relationships among additional variables without overburdening participants. Reconsidering these scales' validity may also reveal factor structures that more accurately represent communication related to instruction (Reise, 2012).
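To make the careless-response problem concrete, the sketch below (not drawn from the article) computes two screening indicators that commonly appear in this literature: a longstring index (the longest run of identical consecutive answers) and intra-individual response variability. The DataFrame layout, column orientation, and cutoff values are illustrative assumptions, not recommendations from the author.

```python
# Hypothetical sketch: flagging potentially careless respondents.
# Assumes a pandas DataFrame with one Likert-type item per column and
# one respondent per row; cutoffs below are illustrative only.
import pandas as pd

def longstring(row: pd.Series) -> int:
    """Length of the longest run of identical consecutive responses."""
    values = row.to_numpy()
    longest = current = 1
    for prev, curr in zip(values, values[1:]):
        current = current + 1 if curr == prev else 1
        longest = max(longest, current)
    return longest

def screen_careless(items: pd.DataFrame,
                    max_longstring: int = 10,
                    min_irv: float = 0.3) -> pd.DataFrame:
    """Return per-respondent indicators plus a combined careless flag."""
    out = pd.DataFrame(index=items.index)
    out["longstring"] = items.apply(longstring, axis=1)
    # Intra-individual response variability: the standard deviation of a
    # respondent's own answers; near-zero values suggest straight-lining.
    out["irv"] = items.std(axis=1)
    out["flag"] = (out["longstring"] >= max_longstring) | (out["irv"] < min_irv)
    return out

# Usage: flags = screen_careless(responses); clean = responses[~flags["flag"]]
```

Flagged cases warrant inspection rather than automatic deletion; cutoffs should be justified for the specific instrument.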
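On the missing-data side, model-based approaches such as maximum likelihood and multiple imputation are generally preferable to listwise deletion (see Allison, 2003). The following hypothetical sketch shows a single model-based imputation pass with scikit-learn; full multiple imputation would repeat the process with different seeds and pool the resulting estimates. The toy data and settings are invented for illustration.

```python
# Hypothetical sketch: one model-based imputation pass, as an alternative
# to listwise deletion. Data and settings are illustrative only.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Toy dataset: 200 respondents, 5 items, ~10% responses missing at random.
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 5))
data[rng.random(data.shape) < 0.10] = np.nan

imputer = IterativeImputer(max_iter=10, random_state=0)
completed = imputer.fit_transform(data)  # NaNs replaced with model-based values
```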
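As a rough illustration of scale shortening, one simple heuristic (Raykov et al., 2015 describe more principled, validity-preserving procedures) is to inspect how Cronbach's alpha changes as each item is deleted and retain the items that carry the scale. The item columns and data in this sketch are assumed.

```python
# Hypothetical sketch: an alpha-if-item-deleted screen for exploring a
# short form. A heuristic only, not the Raykov et al. (2015) procedure.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def alpha_if_deleted(items: pd.DataFrame) -> pd.Series:
    """Alpha of the remaining scale after dropping each item in turn."""
    return pd.Series(
        {col: cronbach_alpha(items.drop(columns=col)) for col in items.columns}
    )

# Usage sketch: drop items whose removal barely lowers (or even raises)
# alpha, then confirm the short form's factor structure on an
# independent sample before substantive use.
```

Any short form derived this way should still be validated against the full scale, consistent with the validity concerns raised above.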