Title: Drivers of Data Quality in Advertising Research: Differences across MTurk and Professional Panel Samples
Authors: Christopher Berry, J. Kees, Scot Burton
Journal: Journal of Advertising, 51(1), 515–529
Publication date: 2022-08-03
DOI: 10.1080/00913367.2022.2079026 (https://doi.org/10.1080/00913367.2022.2079026)
Citations: 13
Abstract
Crowdsourcing has emerged as a preferred data collection methodology for advertising and social science researchers because these samples avoid the higher costs associated with professional panel data. Yet, there are ongoing concerns about data quality for online sources. This research examines differences in data quality for an advertising experiment across five popular online data sources, including professional panels and crowdsourced platforms. Effects of underlying mechanisms impacting data quality, including response satisficing, multitasking, and effort, are examined. As proposed, a serial mediation model shows that data source is, directly and indirectly, related to these antecedents of data quality. Satisficing is positively related to multitasking and negatively related to effort, and both mediators (in parallel) extend to data quality, indicating that the indirect effects on data quality through these mediating variables are significant. In general, a vetted MTurk sample (i.e., CloudResearch Approved) produces higher-quality data than the other four sources. Regardless of the data source, researchers should utilize safeguards to ensure data quality. Safeguards and other strategies to obtain high-quality data from online samples are offered.
Journal description:
The Journal of Advertising is the premier journal devoted to the development of advertising theory and its relationship to practice. The major purpose of the Journal is to provide a public forum where ideas about advertising can be expressed. Key foci of the Journal include research dealing with the economic, political, social, and environmental aspects of advertising, as well as methodological advances in advertising research. Other topics of interest recently covered by the Journal include the assessment of advertising effectiveness, advertising ethics, and global issues surrounding advertising.