Pub Date: 2025-08-01. Epub Date: 2025-05-29. DOI: 10.1016/j.jmp.2025.102925
Diana Karimova , Sara van Erp , Roger Th.A.J. Leenders , Joris Mulder
In the social and behavioral sciences and related fields, statistical models are becoming increasingly complex with more parameters to explain intricate dependency structures among larger sets of variables. Regularization techniques, like penalized regression, help identify key parameters by shrinking negligible effects to zero, resulting in parsimonious solutions with strong predictive performance. This paper introduces a simple and flexible approximate Bayesian regularization (ABR) procedure, combining a Gaussian approximation of the likelihood with a Bayesian shrinkage prior to obtain a regularized posterior. Parsimonious (interpretable) solutions are obtained by taking the posterior modes. Parameter uncertainty is quantified using the full posterior. Implemented in the R package shrinkem, the method is evaluated in synthetic and empirical applications. Its flexibility is demonstrated across various models, including linear regression, relational event models, mediation analysis, factor analysis, and Gaussian graphical models.
Title: Honey, I shrunk the irrelevant effects! Simple and flexible approximate Bayesian regularization. Journal of Mathematical Psychology, Volume 126, Article 102925.
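The ABR recipe described in the abstract (a Gaussian approximation of the likelihood combined with a shrinkage prior, with the posterior mode as the parsimonious estimate) can be sketched in a few lines. This is an illustrative reconstruction, not the shrinkem package: it uses a Laplace shrinkage prior and plain proximal-gradient iteration, and all data and tuning values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.normal(size=(n, p))
true_beta = np.array([2.0, -1.5, 0.0, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(size=n)

# Step 1: Gaussian approximation of the likelihood, N(beta_hat, Sigma).
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2 = np.sum((y - X @ beta_hat) ** 2) / (n - p)
Sigma_inv = (X.T @ X) / sigma2            # precision of the approximation

# Step 2: posterior mode under a Laplace shrinkage prior, i.e. minimize
#   0.5 * (b - beta_hat)' Sigma_inv (b - beta_hat) + lam * ||b||_1
# by proximal gradient descent (ISTA).
lam = 30.0                                # shrinkage strength (invented)
step = 1.0 / np.linalg.eigvalsh(Sigma_inv).max()
b = beta_hat.copy()
for _ in range(5000):
    z = b - step * (Sigma_inv @ (b - beta_hat))
    b = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft-threshold

print(np.round(b, 2))  # negligible effects are shrunk to (near) zero
```

The full regularized posterior, which the abstract says is used for uncertainty quantification, would combine the same Gaussian approximation with the prior via sampling or analytic updates; the mode above only reproduces the sparse point solution.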
Pub Date: 2025-05-01. Epub Date: 2025-04-22. DOI: 10.1016/j.jmp.2025.102918
Chris Thornton
In the Wason selection task, subjects show a tendency towards counter-logical behaviour. Evidence gained from this experiment raises questions about the role that deductive logic plays in human reasoning. A prominent explanation of the effect uses an information-gain model: rather than reasoning deductively, subjects are argued to seek to reduce uncertainty, and the observed bias is seen to stem from maximizing information gain in this adaptively rational way. This theoretical article shows that a Boolean generalization of the information-gain model can itself be taken as the normative foundation of reasoning, in which case several inferences traditionally considered errors turn out to be valid. The article examines how this affects inferences involving both over-extension of logical implication and overestimation of conjunctive probability.
Title: A Boolean generalization of the information-gain model can eliminate specific reasoning errors. Journal of Mathematical Psychology, Volume 125, Article 102918.
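The information-gain account that the article generalizes can be made concrete with a small calculation in the style of Oaksford and Chater's optimal data selection model: compare the expected reduction in uncertainty about "if p then q" from turning each card. The marginals, the two hypotheses, and the uniform prior below are illustrative assumptions, not the article's own model.

```python
import math

def H(x):  # binary entropy in bits
    return 0.0 if x in (0.0, 1.0) else -x * math.log2(x) - (1 - x) * math.log2(1 - x)

a, b = 0.1, 0.2  # P(p), P(q): both rare (the "rarity" assumption)
# Joint P(p-face, q-face) under each hypothesis:
# MD: "if p then q" holds without exception; MI: p and q independent.
MD = {('p', 'q'): a, ('p', 'nq'): 0.0, ('np', 'q'): b - a, ('np', 'nq'): 1 - b}
MI = {('p', 'q'): a * b, ('p', 'nq'): a * (1 - b),
      ('np', 'q'): (1 - a) * b, ('np', 'nq'): (1 - a) * (1 - b)}

def cond(model, card, o):
    """P(hidden face o | visible face card) under the model."""
    pair = (card, o) if card in ('p', 'np') else (o, card)
    marg = sum(v for k, v in model.items() if card in k)
    return model[pair] / marg

def gain(card):
    """Expected information gain about MD vs MI from turning the card."""
    others = ('q', 'nq') if card in ('p', 'np') else ('p', 'np')
    expected_post = 0.0
    for o in others:
        mix = 0.5 * cond(MD, card, o) + 0.5 * cond(MI, card, o)
        if mix > 0:
            post = 0.5 * cond(MD, card, o) / mix  # P(MD | card, o)
            expected_post += mix * H(post)
    return H(0.5) - expected_post  # 1 bit of prior uncertainty minus remainder

gains = {c: gain(c) for c in ('p', 'np', 'q', 'nq')}
print(gains)
```

With these rarity parameters the expected gains order the cards p > q > not-q > not-p, mirroring the selection frequencies the information-gain literature reports, which is the "adaptively rational" reading of the counter-logical pattern.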
Pub Date: 2025-05-01. Epub Date: 2025-05-16. DOI: 10.1016/j.jmp.2025.102921
Jesse van Oostrum , Carlotta Langer , Nihat Ay
In this paper we present a concise mathematical description of active inference in discrete time. The main part of the paper serves as a basic introduction to the topic, including a detailed example of the action selection mechanism. The appendix discusses the more subtle mathematical details, targeting readers who have already studied the active inference literature but struggle to make sense of the mathematical details and derivations. Throughout, we emphasize precise and standard mathematical notation, ensuring consistency with existing texts and linking all equations to widely used references on active inference. Additionally, we provide Python code that implements the action selection and learning mechanisms described in this paper and is compatible with pymdp environments.
Title: A concise mathematical description of active inference in discrete time. Journal of Mathematical Psychology, Volume 125, Article 102921.
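The action-selection mechanism the paper works through can be illustrated with a one-step expected-free-energy computation on a toy two-state, two-observation model, using the standard risk-plus-ambiguity decomposition. The generative model, preference vector, and precision below are invented for the illustration; this is not the paper's worked example or the pymdp API.

```python
import numpy as np

A = np.array([[0.9, 0.1],      # P(o | s): rows = observations, cols = states
              [0.1, 0.9]])
B = {0: np.array([[1.0, 1.0],  # action 0 drives both states to state 0
                  [0.0, 0.0]]),
     1: np.array([[0.0, 0.0],  # action 1 drives both states to state 1
                  [1.0, 1.0]])}
C = np.array([0.95, 0.05])     # preferred distribution over observations
qs = np.array([0.5, 0.5])      # current belief over hidden states

def efe(a):
    """One-step expected free energy G(a) = risk + ambiguity."""
    qs_next = B[a] @ qs                        # predicted state belief
    qo = A @ qs_next                           # predicted observation dist.
    risk = np.sum(qo * np.log(qo / C))         # KL(q(o|a) || C)
    ambiguity = -np.sum(qs_next * np.sum(A * np.log(A), axis=0))
    return risk + ambiguity

G = np.array([efe(0), efe(1)])
p_action = np.exp(-16.0 * G)                   # softmax with precision 16
p_action /= p_action.sum()
print(G, p_action)  # action 0 leads toward the preferred observation
```

Action 0 has lower expected free energy because it steers the agent toward the state whose likely observation matches the preference vector C, so the softmax puts almost all probability on it.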
Pub Date: 2025-05-01. Epub Date: 2025-05-23. DOI: 10.1016/j.jmp.2025.102926
Stefano Noventa , Jürgen Heller , Sangbeak Ye , Augustin Kelava
In recent years, several theories of assessment have been developed within the fields of Psychometrics and Mathematical Psychology. The most notable are Item Response Theory (IRT), Cognitive Diagnostic Assessment (CDA), and Knowledge Structure Theory (KST). In spite of their common goals, these theories have been developed largely independently, focusing on slightly different aspects. In Part I of this three-part work, a general framework was introduced with the aim of achieving a unified perspective. The framework consists of two primitives (structure and process) and two operations (factorization and reparametrization) that allow one to derive the models of these theories and systematize them within a general taxonomy. In this second contribution, the framework introduced in Part I is used to derive both KST and CDA models based on dichotomous latent variables, thus achieving a two-fold result: on the one hand, it settles the relation between the frameworks; on the other hand, it provides a simultaneous generalization of both, laying the foundations for the analysis of more general models and situations.
Title: Toward a unified perspective on assessment models, part II: Dichotomous latent variables. Journal of Mathematical Psychology, Volume 125, Article 102926.
Pub Date: 2025-05-01. Epub Date: 2025-05-09. DOI: 10.1016/j.jmp.2025.102924
Christopher R. Fisher , Joseph W. Houpt , Othalia Larue , Kevin Schmidt
Cognitive architectures (CAs) are unified theories of cognition which describe invariant properties in the structure and function of cognition, including how sub-systems (e.g., memory, vision) interact as a coherent system. One problem stemming from the size and flexibility of CAs is deriving critical tests of their core architectural assumptions. To address this issue, we combine systems factorial technology (SFT) and global model analysis (GMA) into a unified framework called SFT-GMA. In the framework, the prediction space is defined in terms of qualitative classes of SFT models, and GMA identifies constraints on this space based on core architectural assumptions. Critical tests are then derived and tested with SFT. Our application of SFT-GMA to ACT-R revealed two key insights: (1) we identified critical tests despite many degrees of freedom in model specification, and (2) ACT-R requires serial processing of perceptual stimuli under most conditions. These processing constraints on perception are at odds with data reported in several published experiments.
Title: Using systems factorial technology for global model analysis of ACT-R’s core architectural assumptions. Journal of Mathematical Psychology, Volume 125, Article 102924.
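The SFT side of the framework turns on interaction contrasts computed from factorial salience manipulations. A minimal illustration of the mean interaction contrast (MIC): additive serial processing predicts MIC = 0, while a parallel first-terminating (minimum-time) race predicts MIC > 0. The exponential stage durations below are invented for the demonstration and are not the paper's stimuli or ACT-R's mechanisms.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
mean = {'L': 300.0, 'H': 200.0}  # low/high salience stage means (ms)

def mic(rt_fn):
    """MIC = M_LL - M_LH - M_HL + M_HH over the four factorial cells."""
    m = {c: rt_fn(mean[c[0]], mean[c[1]]).mean() for c in ('LL', 'LH', 'HL', 'HH')}
    return m['LL'] - m['LH'] - m['HL'] + m['HH']

# Serial: total RT is the sum of the two stage durations (additive).
serial = lambda m1, m2: rng.exponential(m1, n) + rng.exponential(m2, n)
# Parallel first-terminating: RT is the faster of two racing channels.
parallel = lambda m1, m2: np.minimum(rng.exponential(m1, n), rng.exponential(m2, n))

mic_serial, mic_parallel = mic(serial), mic(parallel)
print(mic_serial, mic_parallel)  # near 0 for serial, positive for parallel
```

The survivor interaction contrast (SIC) used in full SFT analyses refines this mean-level signature into a function over time; the MIC is just its integrated summary.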
Pub Date: 2025-05-01. Epub Date: 2025-04-30. DOI: 10.1016/j.jmp.2025.102922
Federico Quartieri
The paper introduces a refinement of maximality, called secure maximality, and a refinement of secure maximality, called perfect maximality. The effectivity of these refinements and the connection with other relevant optimality notions are investigated. Furthermore, necessary and sufficient conditions are provided for the secure maximality of all maximals and for the perfect maximality of all maximals as well as for the perfect maximality of all secure maximals. Several sufficient conditions for (as well as two characterizations of) the existence of secure and perfect maximals are established. The precise structure of the entire sets of secure and perfect maximals is examined for some specific classes of relations like interval orders that admit a certain type of representability by means of two real-valued functions, relations induced by cones and relations that admit linear multi-utility representations.
Title: Secure and perfect maximality. Journal of Mathematical Psychology, Volume 125, Article 102922.
Pub Date: 2025-05-01. Epub Date: 2025-05-14. DOI: 10.1016/j.jmp.2025.102923
Andrei Khrennikov , Masanao Ozawa , Felix Benninger , Oded Shor
The past few years have seen a surge in the application of quantum-like (QL) modeling in fields such as cognition, psychology, and decision-making. Despite the success of this approach in explaining various psychological phenomena, there remains a potential dissatisfaction due to its lack of clear connection to neurophysiological processes in the brain. Currently, it remains a phenomenological approach. In this paper, we develop a QL representation of networks of communicating neurons. This representation is not based on standard quantum theory but on generalized probability theory (GPT), with a focus on the operational measurement framework (see section 2.1 for a comparison of classical, quantum, and generalized probability theories). Specifically, we use a version of GPT that relies on ordered linear state spaces rather than the traditional complex Hilbert spaces. A network of communicating neurons is modeled as a weighted directed graph, which is encoded by its weight matrix. The state space of these weight matrices is embedded within the GPT framework, incorporating effect-observables and state updates within the theory of measurement instruments, a critical aspect of this model. Under a specific assumption regarding neuronal connectivity, the compound system S = (S1, S2) of neuronal networks is represented using the tensor product. This S1 ⊗ S2 representation significantly enhances the computational power of S. The GPT-based approach successfully replicates key QL effects, such as order, non-repeatability, and disjunction effects, phenomena often associated with decision interference. Additionally, this framework enables QL modeling in medical diagnostics for neurological conditions like depression and epilepsy. While the focus of this paper is primarily on cognition and neuronal networks, the proposed formalism and methodology can be directly applied to a broad range of biological and social networks. Furthermore, it supports the claims of superiority made by quantum-inspired computing and can serve as the foundation for developing QL-based AI systems, specifically utilizing the QL representation of oscillator networks.
Title: Coupling quantum-like cognition with the neuronal networks within generalized probability theory. Journal of Mathematical Psychology, Volume 125, Article 102923.
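The order effect that the GPT-based approach replicates is usually introduced in the Hilbert-space formulation of QL modeling, which the paper's ordered-linear-space construction generalizes. A minimal sketch: two non-commuting projective "questions" applied to one belief state give different yes-yes probabilities depending on the order asked. The angles and initial state are arbitrary illustrative choices.

```python
import numpy as np

theta_a, theta_b = 0.3, 1.2                 # two "question" directions (radians)
a = np.array([np.cos(theta_a), np.sin(theta_a)])
b = np.array([np.cos(theta_b), np.sin(theta_b)])
Pa, Pb = np.outer(a, a), np.outer(b, b)     # rank-1 projectors ("yes" answers)
psi = np.array([1.0, 0.0])                  # initial belief state

p_ab = np.linalg.norm(Pb @ Pa @ psi) ** 2   # P(yes to A, then yes to B)
p_ba = np.linalg.norm(Pa @ Pb @ psi) ** 2   # P(yes to B, then yes to A)
print(p_ab, p_ba)                           # the two orders disagree
```

Because Pa and Pb do not commute, the sequential probabilities differ, which classical (commuting) event algebras cannot reproduce; this is the signature the abstract refers to as the order effect.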
Pub Date: 2025-05-01. Epub Date: 2025-03-02. DOI: 10.1016/j.jmp.2025.102907
Luca Stefanutti, Andrea Brancaccio
Procedural knowledge space theory aims to evaluate problem-solving skills using a formal representation of a problem space. Stefanutti et al. (2021) introduced the concept of the “shortest path space” to characterize optimal problem spaces when a task requires reaching a solution in the minimum number of moves. This paper takes that idea further. It expands the shortest-path space concept to include a wider range of optimization problems, where each move can be weighted by a real number representing its “value”. Depending on the application, the “value” could be a cost, waiting time, route length, etc. This new model, named the optimizing path space, comprises all the globally best solutions. Additionally, it sets the stage for evaluating human problem-solving skills in various areas, like cognitive and neuropsychological tests, experimental studies, and puzzles, where globally optimal solutions are required.
Title: The assessment of global optimization skills in procedural knowledge space theory. Journal of Mathematical Psychology, Volume 125, Article 102907.
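The move from shortest paths to value-weighted optimal paths amounts to finding globally cheapest solution paths in a weighted problem space, for which Dijkstra's algorithm is the standard tool. The toy problem space below (state names and move costs) is invented for illustration; the optimizing path space of the paper would collect all such globally best paths.

```python
import heapq

problem_space = {                      # state -> [(next_state, move_cost)]
    'start': [('a', 2.0), ('b', 5.0)],
    'a': [('b', 1.0), ('goal', 6.0)],
    'b': [('goal', 1.5)],
    'goal': [],
}

def cheapest_cost(graph, source, target):
    """Cost of a globally optimal solution path (Dijkstra)."""
    dist, frontier = {}, [(0.0, source)]
    while frontier:
        d, u = heapq.heappop(frontier)
        if u in dist:                  # already settled with a cheaper cost
            continue
        dist[u] = d
        for v, w in graph.get(u, []):
            if v not in dist:
                heapq.heappush(frontier, (d + w, v))
    return dist.get(target, float('inf'))

print(cheapest_cost(problem_space, 'start', 'goal'))  # start -> a -> b -> goal
```

Note that the globally cheapest route here takes three moves rather than the two-move route through b directly, which is exactly the distinction between shortest-path and optimizing-path spaces.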
Pub Date: 2025-05-01. Epub Date: 2025-02-27. DOI: 10.1016/j.jmp.2025.102906
Jiaqi Huang, Jerome Busemeyer
One of cognitive science’s core challenges is reconciling the success of probabilistic models in explaining human cognition with the observed fallacies in human probability judgments. This tutorial delves into models that address this discrepancy, shedding light on probabilistic fallacies. It encompasses earlier accounts like heuristics and averaging models, as well as contemporary, comprehensive models like quantum probability, the Probability Plus Noise model, and the Bayesian Sampler. The tutorial concludes by introducing the most recent accounts that integrate probability judgments with choice and response time, and highlighting ongoing challenges in the field.
Title: Models of human probability judgment errors. Journal of Mathematical Psychology, Volume 125, Article 102906.
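One of the contemporary accounts the tutorial covers, the Bayesian Sampler, has a closed-form mean judgment: with N mental samples and a symmetric Beta(beta, beta) prior over the queried probability, the expected estimate of a true probability p is (beta + N p) / (2 beta + N), so estimates regress toward 1/2, more strongly when N is small. The sketch below uses invented numbers to show how fewer samples for a complex (conjunctive) event can produce a conjunction fallacy from this otherwise rational process.

```python
def mean_judgment(p, n_samples, beta=1.0):
    """Expected Bayesian Sampler estimate: regression toward 1/2."""
    return (beta + n_samples * p) / (2 * beta + n_samples)

p_a, p_ab = 0.30, 0.28                    # true probabilities: P(A) > P(A and B)
j_a = mean_judgment(p_a, n_samples=10)    # simple event: more samples
j_ab = mean_judgment(p_ab, n_samples=4)   # conjunction: fewer samples
print(j_a, j_ab)
# Regression toward 0.5 is stronger for the conjunction, so the mean judged
# P(A and B) exceeds the judged P(A): a conjunction "fallacy" emerges.
```

The Probability Plus Noise model yields a formally similar regression effect via symmetric read-out noise, which is one reason the tutorial treats the two accounts side by side.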
Pub Date: 2025-05-01. Epub Date: 2025-03-10. DOI: 10.1016/j.jmp.2025.102905
Ronaldo Vigo
Invariance and symmetry principles have played a fundamental if not essential role in the theoretical development of the physical and mathematical sciences. More recently, Generalized Invariance Structure Theory (GIST; Vigo, 2013, 2015; Vigo et al., 2022) has extended this methodological trajectory to the study and formal modeling of human cognition. Indeed, GIST is the first systematic and extensively tested mathematical and computational theory of concept learning and categorization behavior (i.e., human generalization) based on such principles. The theory introduces an original mathematical and computational framework, with characterizations, constructs, and measures of invariance and symmetry that are novel and more appropriate and natural with respect to cognition than existing ones in the mathematical sciences and physics. These have proven effective in predicting and explaining empirically tested behavior in the domains of perception, concept learning, categorization, similarity assessment, aesthetic judgments, and decision making, among others. GIST has its roots in a precursor theory known as Categorical Invariance Theory (CIT; Vigo, 2009). This paper gives a basic introduction to two different notions of human invariance detection proposed by GIST and its precursor CIT: namely, a notion based on a cognitive mechanism of dimensional suppression, rapid attention shifting, and partial similarity assessment referred to as binding (s-invariance), and a perturbation notion based on perturbations of the values of the dimensions on which categories of object stimuli are defined (p-invariance). This is followed by the first simple formal proof of the invariance equivalence principle from GIST, which asserts that the two notions are equivalent under a set of strict conditions on categories. The paper ends with a brief discussion of how GIST, unlike CIT, may be used to model probabilistic process accounts of categorization, and how it naturally and directly applies to the learning of sequential categories and to multiset-based concept learning.
Title: Two formal notions of higher-order invariance detection in humans (A proof of the invariance equivalence principle in Generalized Invariance Structure Theory and ramifications for related computations). Journal of Mathematical Psychology, Volume 125, Article 102905.