E. Gibson, Evelina Fedorenko, Diogo Almeida, Leon Bergen, Joan Bresnan, David Caplan, Nick Chater, Morten H. Christiansen, Mike Frank, Adele Goldberg, Helen Goodluck, Greg Hickok, Ray Jackendoff, N. Kanwisher, R. Levy, Maryellen Macdonald, James Myers, Colin Phillips, Steven Piantadosi, Steve Pinker, D. Poeppel, Omer Preminger, Ian Roberts, Greg Scontras, Jon Sprouse, Carson Schütze, Mike Tanenhaus, Vince Walsh, Duane Watson, E. Zweig
{"title":"在语法和语义研究中需要定量方法","authors":"E. Gibson, Evelina Fedorenko, Diogo Almeida, Leon Bergen, Joan Bresnan, David Caplan, Nick Chater, Morten H. Christiansen, Mike Frank, Adele Goldberg, Helen Goodluck, Greg Hickok, Ray Jackendoff, N. Kanwisher, R. Levy, Maryellen Macdonald, James Myers, Colin Phillips, Steven Piantadosi, Steve Pinker, D. Poeppel, Omer Preminger, Ian Roberts, Greg Scontras, Jon Sprouse, Carson Schü, Mike Tanenhaus, Vince Walsh, Duane Watson, E. Zweig","doi":"10.1080/01690965.2010.515080","DOIUrl":null,"url":null,"abstract":"The prevalent method in syntax and semantics research involves obtaining a judgement of the acceptability of a sentence/meaning pair, typically by just the author of the paper, sometimes with feedback from colleagues. This methodology does not allow proper testing of scientific hypotheses because of (a) the small number of experimental participants (typically one); (b) the small number of experimental stimuli (typically one); (c) cognitive biases on the part of the researcher and participants; and (d) the effect of the preceding context (e.g., other constructions the researcher may have been recently considering). In the current paper we respond to some arguments that have been given in support of continuing to use the traditional nonquantitative method in syntax/semantics research. One recent defence of the traditional method comes from Phillips (2009), who argues that no harm has come from the nonquantitative approach in syntax research thus far. Phillips argues that there are no cases in the literature where an incorrect intuitive judgement has become the basis for a widely accepted generalisation or an important theoretical claim. He therefore concludes that there is no reason to adopt more rigorous data collection standards. 
We challenge Philips' conclusion by presenting three cases from the literature where a faulty intuition has led to incorrect generalisations and mistaken theorising, plausibly due to cognitive biases on the part of the researchers. Furthermore, we present additional arguments for rigorous data collection standards. For example, allowing lax data collection standards has the undesirable effect that the results and claims will often be ignored by researchers with stronger methodological standards. Finally, we observe that behavioural experiments are easier to conduct in English than ever before, with the advent of Amazon.com's Mechanical Turk, a marketplace interface that can be used for collecting behavioural data over the internet.","PeriodicalId":87410,"journal":{"name":"Language and cognitive processes","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2013-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/01690965.2010.515080","citationCount":"160","resultStr":"{\"title\":\"The need for quantitative methods in syntax and semantics research\",\"authors\":\"E. Gibson, Evelina Fedorenko, Diogo Almeida, Leon Bergen, Joan Bresnan, David Caplan, Nick Chater, Morten H. Christiansen, Mike Frank, Adele Goldberg, Helen Goodluck, Greg Hickok, Ray Jackendoff, N. Kanwisher, R. Levy, Maryellen Macdonald, James Myers, Colin Phillips, Steven Piantadosi, Steve Pinker, D. Poeppel, Omer Preminger, Ian Roberts, Greg Scontras, Jon Sprouse, Carson Schü, Mike Tanenhaus, Vince Walsh, Duane Watson, E. Zweig\",\"doi\":\"10.1080/01690965.2010.515080\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The prevalent method in syntax and semantics research involves obtaining a judgement of the acceptability of a sentence/meaning pair, typically by just the author of the paper, sometimes with feedback from colleagues. 
This methodology does not allow proper testing of scientific hypotheses because of (a) the small number of experimental participants (typically one); (b) the small number of experimental stimuli (typically one); (c) cognitive biases on the part of the researcher and participants; and (d) the effect of the preceding context (e.g., other constructions the researcher may have been recently considering). In the current paper we respond to some arguments that have been given in support of continuing to use the traditional nonquantitative method in syntax/semantics research. One recent defence of the traditional method comes from Phillips (2009), who argues that no harm has come from the nonquantitative approach in syntax research thus far. Phillips argues that there are no cases in the literature where an incorrect intuitive judgement has become the basis for a widely accepted generalisation or an important theoretical claim. He therefore concludes that there is no reason to adopt more rigorous data collection standards. We challenge Philips' conclusion by presenting three cases from the literature where a faulty intuition has led to incorrect generalisations and mistaken theorising, plausibly due to cognitive biases on the part of the researchers. Furthermore, we present additional arguments for rigorous data collection standards. For example, allowing lax data collection standards has the undesirable effect that the results and claims will often be ignored by researchers with stronger methodological standards. 
Finally, we observe that behavioural experiments are easier to conduct in English than ever before, with the advent of Amazon.com's Mechanical Turk, a marketplace interface that can be used for collecting behavioural data over the internet.\",\"PeriodicalId\":87410,\"journal\":{\"name\":\"Language and cognitive processes\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2013-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://sci-hub-pdf.com/10.1080/01690965.2010.515080\",\"citationCount\":\"160\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Language and cognitive processes\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/01690965.2010.515080\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Language and cognitive processes","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/01690965.2010.515080","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
The need for quantitative methods in syntax and semantics research
The prevalent method in syntax and semantics research involves obtaining a judgement of the acceptability of a sentence/meaning pair, typically by just the author of the paper, sometimes with feedback from colleagues. This methodology does not allow proper testing of scientific hypotheses because of (a) the small number of experimental participants (typically one); (b) the small number of experimental stimuli (typically one); (c) cognitive biases on the part of the researcher and participants; and (d) the effect of the preceding context (e.g., other constructions the researcher may have been recently considering). In the current paper we respond to some arguments that have been given in support of continuing to use the traditional nonquantitative method in syntax/semantics research. One recent defence of the traditional method comes from Phillips (2009), who argues that no harm has come from the nonquantitative approach in syntax research thus far. Phillips argues that there are no cases in the literature where an incorrect intuitive judgement has become the basis for a widely accepted generalisation or an important theoretical claim. He therefore concludes that there is no reason to adopt more rigorous data collection standards.

We challenge Phillips' conclusion by presenting three cases from the literature where a faulty intuition has led to incorrect generalisations and mistaken theorising, plausibly due to cognitive biases on the part of the researchers. Furthermore, we present additional arguments for rigorous data collection standards. For example, allowing lax data collection standards has the undesirable effect that the results and claims will often be ignored by researchers with stronger methodological standards. Finally, we observe that behavioural experiments are easier to conduct in English than ever before, with the advent of Amazon.com's Mechanical Turk, a marketplace interface that can be used for collecting behavioural data over the internet.
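As a minimal sketch of the kind of quantitative analysis the abstract advocates — many participants rating items in contrasting conditions, compared statistically rather than by a single author's intuition — the following illustration uses entirely hypothetical ratings and a simple paired t statistic; it is not code from the paper itself:

```python
# Hypothetical acceptability-judgement analysis: several participants each
# rate items in two conditions on a 1-7 scale, and the conditions are
# compared with a paired t statistic. All numbers below are invented.
import statistics

# Per-participant mean ratings for two hypothetical conditions.
ratings = {
    "grammatical":   [6.5, 6.0, 6.8, 5.9, 6.4, 6.1],
    "ungrammatical": [2.1, 3.0, 2.5, 2.8, 1.9, 2.6],
}

# Each participant contributes one difference score (within-subject design).
diffs = [a - b for a, b in zip(ratings["grammatical"],
                               ratings["ungrammatical"])]

mean_diff = statistics.mean(diffs)
se = statistics.stdev(diffs) / len(diffs) ** 0.5  # standard error of the mean
t = mean_diff / se  # paired t statistic, df = len(diffs) - 1

print(f"mean difference = {mean_diff:.2f}, t({len(diffs) - 1}) = {t:.2f}")
```

In practice one would use more participants and items, counterbalanced lists, and a mixed-effects model with participant and item random effects; the point here is only that the contrast rests on distributional evidence from many judgements, not on one researcher's intuition.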