How Prior Knowledge, Gesture Instruction, and Interference After Instruction Interact to Influence Learning of Mathematical Equivalence
Susan Wagner Cook, Elle M. D. Wernette, Madison Valentine, Mary Aldugom, Todd Pruner, Kimberly M. Fenn
Although children learn more when teachers gesture, it is not clear how gesture supports learning. Here, we investigated the nature of the memory processes that underlie the observed benefits of gesture on lasting learning. We hypothesized that instruction with gesture might create memory representations that are particularly resistant to interference. We tested this possibility in a classroom study with 402 second- and third-grade children. Participants received classroom-level instruction in mathematical equivalence using videos with or without accompanying gesture. To assess the role of gesture in resisting interference after learning, children then solved problems that were either visually similar to the taught problems and consistent with an operational interpretation of the equal sign (interference) or visually distinct from equivalence problems and lacking an equal sign (control). Gesture facilitated learning, but the effects of gesture and interference varied with the type of problem being solved and the strategies children used to solve problems before instruction. Some children benefitted from gesture, while others did not. These findings have implications for understanding the mechanisms underlying the beneficial effect of gesture on mathematical learning, revealing that gesture does not work via a general mechanism, such as enhanced attention or engagement, that would apply to children with all forms of prior knowledge.
{"title":"How Prior Knowledge, Gesture Instruction, and Interference After Instruction Interact to Influence Learning of Mathematical Equivalence","authors":"Susan Wagner Cook, Elle M. D. Wernette, Madison Valentine, Mary Aldugom, Todd Pruner, Kimberly M. Fenn","doi":"10.1111/cogs.13412","DOIUrl":"10.1111/cogs.13412","url":null,"abstract":"<p>Although children learn more when teachers gesture, it is not clear <i>how</i> gesture supports learning. Here, we sought to investigate the nature of the memory processes that underlie the observed benefits of gesture on lasting learning. We hypothesized that instruction with gesture might create memory representations that are particularly resistant to interference. We investigated this possibility in a classroom study with 402 second- and third-grade children. Participants received classroom-level instruction in mathematical equivalence using videos with or without accompanying gesture. After instruction, children solved problems that were either visually similar to the problems that were taught, and consistent with an operational interpretation of the equal sign (interference), or visually distinct from equivalence problems and without an equal sign (control) in order to assess the role of gesture in resisting interference after learning. Gesture facilitated learning, but the effects of gesture and interference varied depending on type of problem being solved and the strategies that children used to solve problems prior to instruction. Some children benefitted from gesture, while others did not. These findings have implications for understanding the mechanisms underlying the beneficial effect of gesture on mathematical learning, revealing that gesture does not work via a general mechanism like enhancing attention or engagement that would apply to children with all forms of prior knowledge.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139944558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Information-Processing Perspective on Categorization
Manolo Martínez
Categorization behavior can be fruitfully analyzed in terms of a trade-off: transmitting information about samples of the classes to be categorized as faithfully as possible, at the lowest possible transmission cost. The kinds of categorization behaviors we associate with conceptual atoms, prototypes, and exemplars emerge naturally from this trade-off, given certain natural constraints on the probability distribution of samples and on the ways in which faithfulness is measured. Beyond the general structure of categorization in these circumstances, the same information-centered perspective can shed light on other, more concrete properties of human categorization performance, such as the results of certain prominent experiments on supervised categorization.
{"title":"The Information-Processing Perspective on Categorization","authors":"Manolo Martínez","doi":"10.1111/cogs.13411","DOIUrl":"10.1111/cogs.13411","url":null,"abstract":"<p>Categorization behavior can be fruitfully analyzed in terms of the trade-off between as high as possible faithfulness in the transmission of information about samples of the classes to be categorized, and as low as possible transmission costs for that same information. The kinds of categorization behaviors we associate with conceptual atoms, prototypes, and exemplars emerge naturally as a result of this trade-off, in the presence of certain natural constraints on the probabilistic distribution of samples, and the ways in which we measure faithfulness. Beyond the general structure of categorization in these circumstances, the same information-centered perspective can shed light on other, more concrete properties of human categorization performance, such as the results of certain prominent experiments on supervised categorization.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13411","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139944559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Determining the Relativity of Word Meanings Through the Construction of Individualized Models of Semantic Memory
Brendan T. Johns
Distributional models of lexical semantics are capable of acquiring sophisticated representations of word meanings. The main theoretical insight provided by these models is that they demonstrate the systematic connection between the knowledge that people acquire and the experience that they have with the natural language environment. However, linguistic experience is inherently variable and differs radically across people due to demographic and cultural variables. Recently, distributional models have been used to examine how word meanings vary across languages, and considerable variability was found in the meanings of words across languages for most semantic categories. The goal of this article is to examine how variable word meanings are across individual language users within a single language. This was accomplished by assembling 500 individual user corpora obtained from the online forum Reddit. Each user corpus contained between 3.8 and 32.3 million words, and a count-based distributional framework was used to extract word meanings for each user. These representations were then used to estimate the semantic alignment of word meanings across individual language users. It was found that there are significant levels of relativity in word meanings across individuals, and these differences are partially explained by other psycholinguistic factors, such as concreteness, semantic diversity, and social aspects of language usage. These results point to word meanings being fundamentally relative and contextually fluid, with this relativity being related to the individualized nature of linguistic experience.
{"title":"Determining the Relativity of Word Meanings Through the Construction of Individualized Models of Semantic Memory","authors":"Brendan T. Johns","doi":"10.1111/cogs.13413","DOIUrl":"10.1111/cogs.13413","url":null,"abstract":"<p>Distributional models of lexical semantics are capable of acquiring sophisticated representations of word meanings. The main theoretical insight provided by these models is that they demonstrate the systematic connection between the knowledge that people acquire and the experience that they have with the natural language environment. However, linguistic experience is inherently variable and differs radically across people due to demographic and cultural variables. Recently, distributional models have been used to examine how word meanings vary across languages and it was found that there is considerable variability in the meanings of words across languages for most semantic categories. The goal of this article is to examine how variable word meanings are across individual language users within a single language. This was accomplished by assembling 500 individual user corpora attained from the online forum Reddit. Each user corpus ranged between 3.8 and 32.3 million words each, and a count-based distributional framework was used to extract word meanings for each user. These representations were then used to estimate the semantic alignment of word meanings across individual language users. It was found that there are significant levels of relativity in word meanings across individuals, and these differences are partially explained by other psycholinguistic factors, such as concreteness, semantic diversity, and social aspects of language usage. These results point to word meanings being fundamentally relative and contextually fluid, with this relativeness being related to the individualized nature of linguistic experience.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139944557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Icing on the Cake. Or Is it Frosting? The Influence of Group Membership on Children's Lexical Choices
Thomas St. Pierre, Jida Jaffan, Craig G. Chambers, Elizabeth K. Johnson
Adults are skilled at using language to construct/negotiate identity and to signal affiliation with others, but little is known about how these abilities develop in children. Clearly, children mirror statistical patterns in their local environment (e.g., Canadian children using zed instead of zee), but do they flexibly adapt their linguistic choices on the fly in response to the choices of different peers? To address this question, we examined the effect of group membership on 7- to 9-year-olds' labeling of objects in a trivia game, exploring whether they were more likely to use a particular label (e.g., sofa vs. couch) if members of their “team” also used that label. In a preregistered study, children (N = 72) were assigned to a team (red or green) and were asked during experimental trials to answer questions—which had multiple possible answers (e.g., blackboard or chalkboard)—after hearing two teammates and two opponents respond to the same question. Results showed that children were significantly more likely to produce labels less commonly used by the community (i.e., dispreferred labels) when their teammates had produced those labels. Crucially, this effect was tied to group membership, and could not be explained by children simply repeating the most recently used label. These findings demonstrate how social processes (i.e., group membership) can guide linguistic variation in children.
{"title":"The Icing on the Cake. Or Is it Frosting? The Influence of Group Membership on Children's Lexical Choices","authors":"Thomas St. Pierre, Jida Jaffan, Craig G. Chambers, Elizabeth K. Johnson","doi":"10.1111/cogs.13410","DOIUrl":"10.1111/cogs.13410","url":null,"abstract":"<p>Adults are skilled at using language to construct/negotiate identity and to signal affiliation with others, but little is known about how these abilities develop in children. Clearly, children mirror statistical patterns in their local environment (e.g., Canadian children using <i>zed</i> instead of <i>zee</i>), but do they flexibly adapt their linguistic choices on the fly in response to the choices of different peers? To address this question, we examined the effect of group membership on 7- to 9-year-olds' labeling of objects in a trivia game, exploring whether they were more likely to use a particular label (e.g., <i>sofa</i> vs. <i>couch</i>) if members of their “team” also used that label. In a preregistered study, children (<i>N</i> = 72) were assigned to a team (red or green) and were asked during experimental trials to answer questions—which had multiple possible answers (e.g., <i>blackboard</i> or <i>chalkboard</i>)—after hearing two teammates and two opponents respond to the same question. Results showed that children were significantly more likely to produce labels less commonly used by the community (i.e., dispreferred labels) when their teammates had produced those labels. Crucially, this effect was tied to group membership, and could not be explained by children simply repeating the most recently used label. These findings demonstrate how social processes (i.e., group membership) can guide linguistic variation in children.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13410","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139940964","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calculated Comparisons: Manufacturing Societal Causal Judgments by Implying Different Counterfactual Outcomes
Jamie Amemiya, Gail D. Heyman, Caren M. Walker
How do people come to opposite causal judgments about societal problems, such as whether a public health policy reduced COVID-19 cases? The current research tests an understudied cognitive mechanism in which people may agree about what actually happened (e.g., that a public health policy was implemented and COVID-19 cases declined), but can be made to disagree about the counterfactual, or what would have happened otherwise (e.g., whether COVID-19 cases would have declined naturally without intervention) via comparison cases. Across two preregistered studies (total N = 480), participants reasoned about the implementation of a public policy that was followed by an immediate decline in novel virus cases. Study 1 shows that people's judgments about the causal impact of the policy could be pushed in opposite directions by emphasizing comparison cases that imply different counterfactual outcomes. Study 2 finds that people recognize they can use such information to influence others. Specifically, in service of persuading others to support or reject a public health policy, people systematically showed comparison cases implying the counterfactual outcome that aligned with their position. These findings were robust across samples of U.S. college students and politically and socioeconomically diverse U.S. adults. Together, these studies suggest that implied counterfactuals are a powerful tool that individuals can use to manufacture others' causal judgments and warrant further investigation as a mechanism contributing to belief polarization.
{"title":"Calculated Comparisons: Manufacturing Societal Causal Judgments by Implying Different Counterfactual Outcomes","authors":"Jamie Amemiya, Gail D. Heyman, Caren M. Walker","doi":"10.1111/cogs.13408","DOIUrl":"10.1111/cogs.13408","url":null,"abstract":"<p>How do people come to opposite causal judgments about societal problems, such as whether a public health policy reduced COVID-19 cases? The current research tests an understudied cognitive mechanism in which people may agree about what <i>actually</i> happened (e.g., that a public health policy was implemented and COVID-19 cases declined), but can be made to disagree about the counterfactual, or what <i>would have</i> happened otherwise (e.g., whether COVID-19 cases would have declined naturally without intervention) via comparison cases. Across two preregistered studies (total <i>N</i> = 480), participants reasoned about the implementation of a public policy that was followed by an immediate decline in novel virus cases. Study 1 shows that people's judgments about the causal impact of the policy could be pushed in opposite directions by emphasizing comparison cases that imply different counterfactual outcomes. Study 2 finds that people recognize they can use such information to influence others. Specifically, in service of persuading others to support or reject a public health policy, people systematically showed comparison cases implying the counterfactual outcome that aligned with their position. These findings were robust across samples of U.S. college students and politically and socioeconomically diverse U.S. adults. Together, these studies suggest that implied counterfactuals are a powerful tool that individuals can use to manufacture others’ causal judgments and warrant further investigation as a mechanism contributing to belief polarization.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13408","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139698719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spontaneous Eye Blinks Map the Probability of Perceptual Reinterpretation During Visual and Auditory Ambiguity
Supriya Murali, Barbara Händel
Spontaneous eye blinks are modulated around perceptual events. Our previous study, using a visually ambiguous stimulus, indicated that blink probability decreases before a reported perceptual switch. In the current study, we tested our hypothesis that an absence of blinks marks a time in which perceptual switches are facilitated, both inside and outside the visual domain. In three experiments, presenting either a visual motion quartet in light or darkness or a bistable auditory streaming stimulus, we found that reductions in blink rate co-occurred with increased perceptual switch probability. In the visual domain, perceptual switches induced by a short interruption of visual input (a blank) allowed us to estimate the timing of the perceptual event with respect to the motor response. This provided the first evidence that the blink reduction was not a consequence of the perceptual switch. Importantly, by showing that the time between switches and the previous blink was significantly longer than the inter-blink interval, our studies allowed us to conclude that perceptual switches did not happen at random but followed a prolonged period of nonblinking. Correspondingly, blink rate and switch rate showed an inverse relationship. Our study supports the idea that the absence or presence of blinks maps perceptual processes independent of the sensory modality.
{"title":"Spontaneous Eye Blinks Map the Probability of Perceptual Reinterpretation During Visual and Auditory Ambiguity","authors":"Supriya Murali, Barbara Händel","doi":"10.1111/cogs.13414","DOIUrl":"10.1111/cogs.13414","url":null,"abstract":"<p>Spontaneous eye blinks are modulated around perceptual events. Our previous study, using a visual ambiguous stimulus, indicated that blink probability decreases before a reported perceptual switch. In the current study, we tested our hypothesis that an absence of blinks marks a time in which perceptual switches are facilitated in- and outside the visual domain. In three experiments, presenting either a visual motion quartet in light or darkness or a bistable auditory streaming stimulus, we found a co-occurrence of blink rate reduction with increased perceptual switch probability. In the visual domain, perceptual switches induced by a short interruption of visual input (blank) allowed an estimate of the timing of the perceptual event with respect to the motor response. This provided the first evidence that the blink reduction was not a consequence of the perceptual switch. Importantly, by showing that the time between switches and the previous blink was significantly longer than the inter-blink interval, our studies allowed to conclude that perceptual switches did not happen at random but followed a prolonged period of nonblinking. Correspondingly, blink rate and switch rate showed an inverse relationship. Our study supports the idea that the absence or presence of blinks maps perceptual processes independent of the sensory modality.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13414","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139698720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Putting it Together, Together
Chen Zheng, Barbara Tversky
People are not as fast or as strong as many other creatures that evolved around us. What gives us an evolutionary advantage is working together to achieve common aims. Coordinating joint action begins at a tender age with such cooperative activities as alternating babbling and clapping games. Adult joint activities are far more complex and use multiple means of coordination. Joint action has attracted qualitative analyses by sociolinguists, cognitive scientists, and philosophers as well as empirical analyses and theories by cognitive scientists. Here, we analyze how joint action is spontaneously coordinated from start to finish in a novel, complex, real-life joint activity, assembling a piece of furniture, a task that captures the essentials of joint action: collaborators, things in the world, and communicative devices. Pairs of strangers assembled a TV cart from a stack of parts and a photo of the completed cart. Coordination prior to each assembly action was coded as explicit (using speech or gesture) or implicit (actions that both advanced the task and communicated the next step). Initial planning relied on explicit communication about structure, but not about action or the division of labor, which were improvised. This served to establish a joint representation of the goal that informed actions and allowed partners to monitor progress. As assembly progressed, coordination became increasingly implicit, carried by actions alone. Joint action is a dynamic interplay of explicit and implicit signaling with respect to things in the world to coordinate ongoing progress, guided by a shared representation of the goal.
{"title":"Putting it Together, Together","authors":"Chen Zheng, Barbara Tversky","doi":"10.1111/cogs.13405","DOIUrl":"10.1111/cogs.13405","url":null,"abstract":"<p>People are not as fast or as strong as many other creatures that evolved around us. What gives us an evolutionary advantage is working together to achieve common aims. Coordinating joint action begins at a tender age with such cooperative activities as alternating babbling and clapping games. Adult joint activities are far more complex and use multiple means of coordination. Joint action has attracted qualitative analyses by sociolinguists, cognitive scientists, and philosophers as well as empirical analyses and theories by cognitive scientists. Here, we analyze how joint action is spontaneously coordinated from start to finish in a novel complex real-life joint activity, assembling a piece of furniture, a task that captures the essentials of joint action, collaborators, things in the world, and communicative devices. Pairs of strangers assembled a TV cart from a stack of parts and a photo of the completed cart. Coordination prior to each assembly action was coded as <i>explicit</i>, using speech or gesture, or <i>implicit</i>, actions that both advanced the task and communicated the next step. Initial planning relied on explicit communication about structure, but not action nor division of labor, which were improvised. That served to establish a joint representation of the goal that informed actions and monitored progress. As assembly progressed, coordination was increasingly implicit, through actions alone. Joint action is a dynamic interplay of explicit and implicit signaling with respect to things in the world to coordinate ongoing progress, guided by a shared representation of the goal.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139673280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Keys to the Future? An Examination of Statistical Versus Discriminative Accounts of Serial Pattern Learning
Fabian Tomaschek, Michael Ramscar, Jessie S. Nixon
Sequence learning is fundamental to a wide range of cognitive functions. Explaining how sequences—and the relations between the elements they comprise—are learned is a central challenge for cognitive science. However, although hundreds of articles addressing this question are published each year, the actual learning mechanisms involved are rarely investigated. We present three experiments that examine these mechanisms during a typing task. Experiments 1 and 2 tested learning while participants typed single letters on each trial. Experiment 3 tested for "chunking" of these letters into "words." The results of these experiments were used to examine the mechanisms that could best account for them, with a focus on two particular proposals: statistical transitional probability learning and discriminative error-driven learning. Experiments 1 and 2 showed that error-driven learning was a better predictor of response latencies than either n-gram frequencies or transitional probabilities. No evidence for chunking was found in Experiment 3, probably because visual cues were interspersed with the motor response. In addition, learning occurred across a greater distance in Experiment 1 than in Experiment 2, suggesting that the greater predictability that comes with increased structure leads to greater learnability. These results shed new light on the mechanisms responsible for sequence learning. Despite the widely held assumption that transitional probability learning is essential to this process, the present results suggest instead that sequences are learned through a process of discriminative learning, involving prediction and feedback from prediction error.
{"title":"The Keys to the Future? An Examination of Statistical Versus Discriminative Accounts of Serial Pattern Learning","authors":"Fabian Tomaschek, Michael Ramscar, Jessie S. Nixon","doi":"10.1111/cogs.13404","DOIUrl":"10.1111/cogs.13404","url":null,"abstract":"<p>Sequence learning is fundamental to a wide range of cognitive functions. Explaining how sequences—and the relations between the elements they comprise—are learned is a fundamental challenge to cognitive science. However, although hundreds of articles addressing this question are published each year, the actual learning mechanisms involved in the learning of sequences are rarely investigated. We present three experiments that seek to examine these mechanisms during a typing task. Experiments 1 and 2 tested learning during typing single letters on each trial. Experiment 3 tested for “chunking” of these letters into “words.” The results of these experiments were used to examine the mechanisms that could best account for them, with a focus on two particular proposals: statistical transitional probability learning and discriminative error-driven learning. Experiments 1 and 2 showed that error-driven learning was a better predictor of response latencies than either n-gram frequencies or transitional probabilities. No evidence for chunking was found in Experiment 3, probably due to interspersing visual cues with the motor response. In addition, learning occurred across a greater distance in Experiment 1 than Experiment 2, suggesting that the greater predictability that comes with increased structure leads to greater learnability. These results shed new light on the mechanism responsible for sequence learning. Despite the widely held assumption that transitional probability learning is essential to this process, the present results suggest instead that the sequences are learned through a process of discriminative learning, involving prediction and feedback from prediction error.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13404","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling Magnitude Discrimination: Effects of Internal Precision and Attentional Weighting of Feature Dimensions
Emily M. Sanford, Chad M. Topaz, Justin Halberda
Given a rich environment, how do we decide what information to use? A view of a single entity (e.g., a group of birds) affords many distinct interpretations, including their number, average size, and spatial extent. An enduring challenge for cognition, therefore, is to focus resources on the most relevant evidence for any particular decision. In the present study, subjects completed three tasks—number discrimination, surface area discrimination, and convex hull discrimination—with the same stimulus set, in which these three features were orthogonalized. Therefore, only the relevant feature provided consistent evidence for decisions in each task. This allowed us to determine how well humans discriminate each feature dimension and what evidence they relied on to do so. We introduce a novel computational approach that fits both feature precision and feature use. We found that the most relevant feature for each decision is extracted and relied on, with minor contributions from competing features. These results suggest that multiple feature dimensions are separately represented for each attended ensemble of many items and that cognition is efficient at selecting the appropriate evidence for a decision.
{"title":"Modeling Magnitude Discrimination: Effects of Internal Precision and Attentional Weighting of Feature Dimensions","authors":"Emily M. Sanford, Chad M. Topaz, Justin Halberda","doi":"10.1111/cogs.13409","DOIUrl":"10.1111/cogs.13409","url":null,"abstract":"<p>Given a rich environment, how do we decide on what information to use? A view of a single entity (e.g., a group of birds) affords many distinct interpretations, including their number, average size, and spatial extent. An enduring challenge for cognition, therefore, is to focus resources on the most relevant evidence for any particular decision. In the present study, subjects completed three tasks—number discrimination, surface area discrimination, and convex hull discrimination—with the same stimulus set, where these three features were orthogonalized. Therefore, only the relevant feature provided consistent evidence for decisions in each task. This allowed us to determine how well humans discriminate each feature dimension and what evidence they relied on to do so. We introduce a novel computational approach that fits both feature precision and feature use. We found that the most relevant feature for each decision is extracted and relied on, with minor contributions from competing features. These results suggest that multiple feature dimensions are separately represented for each attended ensemble of many items and that cognition is efficient at selecting the appropriate evidence for a decision.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139643154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluative Deflation, Social Expectations, and the Zone of Moral Indifference
Pascale Willemsen, Lucien Baumgartner, Bianca Cepollaro, Kevin Reuter
Acts that are considered undesirable standardly violate our expectations. In contrast, acts that count as morally desirable can either meet our expectations or exceed them. The zone in which an act can be morally desirable yet not exceed our expectations is what we call the zone of moral indifference, and it has so far been neglected. In this paper, we show that people can use positive terms in a deflated manner to refer to actions in the zone of moral indifference, whereas negative terms cannot be so interpreted.
{"title":"Evaluative Deflation, Social Expectations, and the Zone of Moral Indifference","authors":"Pascale Willemsen, Lucien Baumgartner, Bianca Cepollaro, Kevin Reuter","doi":"10.1111/cogs.13406","DOIUrl":"10.1111/cogs.13406","url":null,"abstract":"<p>Acts that are considered undesirable standardly violate our expectations. In contrast, acts that count as morally desirable can either meet our expectations or exceed them. The zone in which an act can be morally desirable yet not exceed our expectations is what we call the zone of moral indifference, and it has so far been neglected. In this paper, we show that people can use positive terms in a deflated manner to refer to actions in the zone of moral indifference, whereas negative terms cannot be so interpreted.</p>","PeriodicalId":48349,"journal":{"name":"Cognitive Science","volume":null,"pages":null},"PeriodicalIF":2.5,"publicationDate":"2024-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/cogs.13406","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139571846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}