The Strategy Method Risks Conflating Confusion with a Social Preference for Conditional Cooperation in Public Goods Games
M. Burton-Chellew, Victoire D'Amico, Claire Guérin
Games, vol. 13, article 69. Published 2022-10-25. DOI: 10.3390/g13060069. JCR: Q4 (Economics).
Citations: 2
Abstract
The strategy method is often used in public goods games to measure an individual’s willingness to cooperate depending on the level of cooperation by their groupmates (conditional cooperation). However, while the strategy method is informative, it risks conflating confusion with a desire for fair outcomes, and its presentation may risk inducing elevated levels of conditional cooperation. This problem was highlighted by two previous studies, which found that the strategy method detected equivalent levels of cooperation even among participants grouped with computerized groupmates, indicative of confusion or irrational responses. However, these studies did not use large samples (n = 40 or 72) and had participants complete the strategy method only once, with computerized groupmates, preventing within-participant comparisons. Here, in contrast, 845 participants completed the strategy method twice, once with human and once with computerized groupmates. Our research aims were twofold: (1) to check the robustness of previous results with a large sample under various presentation conditions; and (2) to use a within-participant design to categorize participants according to how they behaved across the two scenarios. Ideally, a clean and reliable measure of conditional cooperation would find participants conditionally cooperating with humans and not cooperating with computers. Worryingly, only 7% of participants met this criterion. Overall, 83% of participants cooperated with the computers, and mean contributions towards computers were 89% as large as those towards humans. These results, robust to the various presentation and order effects, pose serious concerns for the measurement of social preferences and question the idea that human cooperation is motivated by a concern for equal outcomes.
Games (Decision Sciences: Statistics, Probability and Uncertainty)
CiteScore
1.60
Self-citation rate
11.10%
Articles published
65
Review time
11 weeks
About the journal:
Games (ISSN 2073-4336) is an international, peer-reviewed, open access journal (free for readers) with rapid refereeing, which provides an advanced forum for studies related to strategic interaction, game theory and its applications, and decision making. The aim is to provide an interdisciplinary forum for all behavioral sciences and related fields, including economics, psychology, political science, mathematics, computer science, and biology (including animal behavior). To guarantee a rapid refereeing and editorial process, Games follows standard publication practices in the natural sciences.