{"title":"高质量在线数据收集的责任在于研究人员,而非招聘平台","authors":"Christine Cuskley, Justin Sulik","doi":"10.1177/17456916241242734","DOIUrl":null,"url":null,"abstract":"A recent article in Perspectives on Psychological Science (Webb & Tangney, 2022) reported a study in which just 2.6% of participants recruited on Amazon’s Mechanical Turk (MTurk) were deemed “valid.” The authors highlighted some well-established limitations of MTurk, but their central claims—that MTurk is “too good to be true” and that it captured “only 14 human beings . . . [out of] N = 529”—are radically misleading, yet have been repeated widely. This commentary aims to (a) correct the record (i.e., by showing that Webb and Tangney’s approach to data collection led to unusually low data quality) and (b) offer a shift in perspective for running high-quality studies online. Negative attitudes toward MTurk sometimes reflect a fundamental misunderstanding of what the platform offers and how it should be used in research. Beyond pointing to research that details strategies for effective design and recruitment on MTurk, we stress that MTurk is not suitable for every study. Effective use requires specific expertise and design considerations. Like all tools used in research—from advanced hardware to specialist software—the tool itself places constraints on what one should use it for. Ultimately, high-quality data is the responsibility of the researcher, not the crowdsourcing platform.","PeriodicalId":19757,"journal":{"name":"Perspectives on Psychological Science","volume":null,"pages":null},"PeriodicalIF":10.5000,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"The Burden for High-Quality Online Data Collection Lies With Researchers, Not Recruitment Platforms\",\"authors\":\"Christine Cuskley, Justin Sulik\",\"doi\":\"10.1177/17456916241242734\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"A recent article in Perspectives on Psychological Science (Webb & Tangney, 2022) reported a study in which just 2.6% of participants recruited on Amazon’s Mechanical Turk (MTurk) were deemed “valid.” The authors highlighted some well-established limitations of MTurk, but their central claims—that MTurk is “too good to be true” and that it captured “only 14 human beings . . . [out of] N = 529”—are radically misleading, yet have been repeated widely. This commentary aims to (a) correct the record (i.e., by showing that Webb and Tangney’s approach to data collection led to unusually low data quality) and (b) offer a shift in perspective for running high-quality studies online. Negative attitudes toward MTurk sometimes reflect a fundamental misunderstanding of what the platform offers and how it should be used in research. Beyond pointing to research that details strategies for effective design and recruitment on MTurk, we stress that MTurk is not suitable for every study. Effective use requires specific expertise and design considerations. Like all tools used in research—from advanced hardware to specialist software—the tool itself places constraints on what one should use it for. 
Ultimately, high-quality data is the responsibility of the researcher, not the crowdsourcing platform.\",\"PeriodicalId\":19757,\"journal\":{\"name\":\"Perspectives on Psychological Science\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":10.5000,\"publicationDate\":\"2024-04-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Perspectives on Psychological Science\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1177/17456916241242734\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Perspectives on Psychological Science","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1177/17456916241242734","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
The Burden for High-Quality Online Data Collection Lies With Researchers, Not Recruitment Platforms
A recent article in Perspectives on Psychological Science (Webb & Tangney, 2022) reported a study in which just 2.6% of participants recruited on Amazon’s Mechanical Turk (MTurk) were deemed “valid.” The authors highlighted some well-established limitations of MTurk, but their central claims—that MTurk is “too good to be true” and that it captured “only 14 human beings . . . [out of] N = 529”—are radically misleading, yet have been repeated widely. This commentary aims to (a) correct the record (i.e., by showing that Webb and Tangney’s approach to data collection led to unusually low data quality) and (b) offer a shift in perspective for running high-quality studies online. Negative attitudes toward MTurk sometimes reflect a fundamental misunderstanding of what the platform offers and how it should be used in research. Beyond pointing to research that details strategies for effective design and recruitment on MTurk, we stress that MTurk is not suitable for every study. Effective use requires specific expertise and design considerations. Like all tools used in research—from advanced hardware to specialist software—the tool itself places constraints on what one should use it for. Ultimately, high-quality data is the responsibility of the researcher, not the crowdsourcing platform.
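For reference, the two figures quoted above are mutually consistent: 14 participants deemed valid out of N = 529 gives

14 / 529 ≈ 0.0265 ≈ 2.6%

which matches the reported validity rate.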
Journal introduction:
Perspectives on Psychological Science publishes a diverse range of articles and reports in psychology, including broad integrative reviews, overviews of research programs, meta-analyses, theoretical statements, book reviews, articles on topics such as the philosophy of science, and opinion pieces on major issues in the field. It also features autobiographical reflections from senior members of the field, occasional humorous essays and sketches, and a section for invited and submitted articles.
The journal's impact can be seen in the continuing influence of a 2009 article on correlational analyses commonly used in neuroimaging studies. In addition, a recent special issue of Perspectives, in which prominent researchers discuss the "Next Big Questions in Psychology," is helping to shape the future trajectory of the discipline.
Perspectives on Psychological Science provides metrics that describe the journal's performance. However, the journal's publisher, the Association for Psychological Science, is a signatory of DORA (the San Francisco Declaration on Research Assessment) and recommends against using journal-based metrics to assess the contributions of individual scientists, for example in hiring, promotion, or funding decisions. The metrics provided by Perspectives on Psychological Science should therefore be used only to evaluate the journal itself.