Toward a unified perspective on assessment models, part II: Dichotomous latent variables
Stefano Noventa, Jürgen Heller, Sangbeak Ye, Augustin Kelava
Pub Date: 2025-05-01 · DOI: 10.1016/j.jmp.2025.102926 · Journal of Mathematical Psychology, vol. 125, Article 102926

In recent years, several theories of assessment have been developed within the fields of Psychometrics and Mathematical Psychology. The most notable are Item Response Theory (IRT), Cognitive Diagnostic Assessment (CDA), and Knowledge Structure Theory (KST). In spite of their common goals, these theories have been developed largely independently, focusing on slightly different aspects. In Part I of this three-part work, a general framework was introduced with the aim of achieving a unified perspective. The framework consists of two primitives (structure and process) and two operations (factorization and reparametrization) that allow one to derive the models of these theories and systematize them within a general taxonomy. In this second contribution, the framework introduced in Part I is used to derive both KST and CDA models based on dichotomous latent variables, achieving a two-fold result: on the one hand, it settles the relation between the frameworks; on the other hand, it provides a simultaneous generalization of both, laying the foundations for the analysis of more general models and situations.
Secure and perfect maximality
Federico Quartieri
Pub Date: 2025-05-01 · DOI: 10.1016/j.jmp.2025.102922 · Journal of Mathematical Psychology, vol. 125, Article 102922

The paper introduces a refinement of maximality, called secure maximality, and a refinement of secure maximality, called perfect maximality. The effectiveness of these refinements and their connection with other relevant optimality notions are investigated. Furthermore, necessary and sufficient conditions are provided for the secure maximality of all maximals, for the perfect maximality of all maximals, and for the perfect maximality of all secure maximals. Several sufficient conditions for (as well as two characterizations of) the existence of secure and perfect maximals are established. Finally, the precise structure of the entire sets of secure and perfect maximals is examined for some specific classes of relations: interval orders that admit a certain type of representability by means of two real-valued functions, relations induced by cones, and relations that admit linear multi-utility representations.
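The secure and perfect refinements are defined in the paper itself; as a baseline for readers, ordinary maximality over a finite set with a strict preference relation can be computed directly. A minimal sketch (the relation and data below are hypothetical, and the secure/perfect variants would require the paper's definitions):

```python
def maximals(elements, strictly_better):
    """Elements x such that no y in the collection is strictly better than x."""
    return [x for x in elements
            if not any(strictly_better(y, x) for y in elements)]

# Example: strict set inclusion on a small family of sets (hypothetical data).
family = [{1}, {2}, {1, 2}, {3}]
properly_contains = lambda y, x: x < y   # y strictly contains x
print(maximals(family, properly_contains))  # → [{1, 2}, {3}]
```

Here {1} and {2} are dominated by {1, 2}, while {1, 2} and {3} are maximal; refinements such as secure maximality would select a subset of this output.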
Using systems factorial technology for global model analysis of ACT-R's core architectural assumptions
Christopher R. Fisher, Joseph W. Houpt, Othalia Larue, Kevin Schmidt
Pub Date: 2025-05-01 · DOI: 10.1016/j.jmp.2025.102924 · Journal of Mathematical Psychology, vol. 125, Article 102924

Cognitive architectures (CAs) are unified theories of cognition that describe invariant properties in the structure and function of cognition, including how sub-systems (e.g., memory, vision) interact as a coherent system. One problem stemming from the size and flexibility of CAs is deriving critical tests of their core architectural assumptions. To address this issue, we combine systems factorial technology (SFT) and global model analysis (GMA) into a unified framework called SFT-GMA. In this framework, the prediction space is defined in terms of qualitative classes of SFT models, and GMA identifies constraints on this space based on core architectural assumptions. Critical tests are then derived and tested with SFT. Our application of SFT-GMA to ACT-R revealed two key insights: (1) we identified critical tests despite many degrees of freedom in model specification, and (2) ACT-R requires serial processing of perceptual stimuli under most conditions. These processing constraints on perception are at odds with data reported in several published experiments.
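The SFT side of the framework rests on standard SFT statistics such as the survivor interaction contrast (SIC), whose sign pattern over time distinguishes serial from parallel architectures. A minimal sketch of the empirical SIC computed from response times in the four factorial salience conditions (an illustration of the standard statistic, not the authors' code):

```python
import numpy as np

def survivor(rts, grid):
    """Empirical survivor function S(t) = P(RT > t) on a time grid."""
    rts = np.asarray(rts, dtype=float)
    return np.array([(rts > t).mean() for t in grid])

def sic(rt_ll, rt_lh, rt_hl, rt_hh, grid):
    """Survivor interaction contrast:
    SIC(t) = S_LL(t) - S_LH(t) - S_HL(t) + S_HH(t),
    where L/H index low/high salience of the two factors."""
    return (survivor(rt_ll, grid) - survivor(rt_lh, grid)
            - survivor(rt_hl, grid) + survivor(rt_hh, grid))
```

In the standard SFT taxonomy, serial self-terminating models predict SIC(t) = 0 for all t, parallel self-terminating models a positive SIC, parallel exhaustive models a negative SIC, and serial exhaustive models a negative-then-positive pattern.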
Coupling quantum-like cognition with the neuronal networks within generalized probability theory
Andrei Khrennikov, Masanao Ozawa, Felix Benninger, Oded Shor
Pub Date: 2025-05-01 · DOI: 10.1016/j.jmp.2025.102923 · Journal of Mathematical Psychology, vol. 125, Article 102923

The past few years have seen a surge in the application of quantum-like (QL) modeling in fields such as cognition, psychology, and decision-making. Despite the success of this approach in explaining various psychological phenomena, there remains a potential dissatisfaction due to its lack of a clear connection to neurophysiological processes in the brain; currently, it remains a phenomenological approach. In this paper, we develop a QL representation of networks of communicating neurons. This representation is based not on standard quantum theory but on generalized probability theory (GPT), with a focus on the operational measurement framework (see Section 2.1 for a comparison of classical, quantum, and generalized probability theories). Specifically, we use a version of GPT that relies on ordered linear state spaces rather than the traditional complex Hilbert spaces. A network of communicating neurons is modeled as a weighted directed graph, which is encoded by its weight matrix. The state space of these weight matrices is embedded within the GPT framework, incorporating effect-observables and state updates within the theory of measurement instruments, a critical aspect of this model. Under a specific assumption regarding neuronal connectivity, the compound system S = (S1, S2) of neuronal networks is represented using the tensor product. This S1 ⊗ S2 representation significantly enhances the computational power of S. The GPT-based approach successfully replicates key QL effects, such as order, non-repeatability, and disjunction effects, phenomena often associated with decision interference. Additionally, this framework enables QL modeling in medical diagnostics for neurological conditions like depression and epilepsy. While the focus of this paper is primarily on cognition and neuronal networks, the proposed formalism and methodology can be directly applied to a broad range of biological and social networks. Furthermore, it supports the claims of superiority made by quantum-inspired computing and can serve as the foundation for developing QL-based AI systems, specifically utilizing the QL representation of oscillator networks.
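The computational-power claim attached to the tensor-product construction can be made concrete: the joint space of a tensor-product composite has dimension equal to the product of the component dimensions, so it grows multiplicatively with the number of subsystems. A minimal sketch using the Kronecker product of two weight matrices (the example values are hypothetical, and the paper's actual GPT embedding is not reproduced here):

```python
import numpy as np

# Weight matrices of two small neuronal networks, each modeled as a
# weighted directed graph (example values are hypothetical).
W1 = np.array([[0.0, 0.7],
               [0.2, 0.0]])              # 2-node network S1
W2 = np.array([[0.0, 0.5, 0.1],
               [0.3, 0.0, 0.4],
               [0.0, 0.6, 0.0]])         # 3-node network S2

# Compound system S = (S1, S2): the tensor (Kronecker) product acts on a
# space whose dimension is the product of the component dimensions.
W = np.kron(W1, W2)
print(W.shape)  # (6, 6): 2 * 3 = 6
```

With n subsystems of dimension d each, the composite dimension is d**n, whereas a merely additive combination would give only n * d; this exponential gap is the usual sense in which tensor-product representations enhance computational power.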
A Boolean generalization of the information-gain model can eliminate specific reasoning errors
Chris Thornton
Pub Date: 2025-04-22 · DOI: 10.1016/j.jmp.2025.102918 · Journal of Mathematical Psychology, vol. 125, Article 102918

In the Wason selection task, subjects show a tendency towards counter-logical behaviour. Evidence from this experiment raises questions about the role that deductive logic plays in human reasoning. A prominent explanation of the effect uses an information-gain model: rather than reasoning deductively, subjects are argued to seek to reduce uncertainty, and the observed bias stems from maximizing information gain in this adaptively rational way. This theoretical article shows that a Boolean generalization of the information-gain model can be regarded as a normative foundation of reasoning, in which case several inferences traditionally considered errors turn out to be valid. The article examines how this affects inferences involving both over-extension of logical implication and overestimation of conjunctive probability.
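In its generic (non-Boolean) form, the information-gain account scores a possible observation by the expected reduction in uncertainty about the hypotheses, i.e., the mutual information between hypothesis and datum. A minimal sketch of that computation (a generic illustration, not Thornton's Boolean generalization):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits, ignoring zero-probability outcomes."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def expected_information_gain(prior, likelihoods):
    """Expected reduction in uncertainty about hypotheses H from datum D:
    I(H; D) = H(prior) - E_d[ H(posterior given d) ].
    prior[h] = P(h); likelihoods[h][d] = P(d | h)."""
    prior = np.asarray(prior, dtype=float)
    lik = np.asarray(likelihoods, dtype=float)
    p_d = prior @ lik                          # marginal P(d)
    post = (prior[:, None] * lik) / p_d        # posterior P(h | d), column per d
    exp_post = sum(p_d[j] * entropy(post[:, j]) for j in range(len(p_d)))
    return entropy(prior) - exp_post
```

With a uniform prior over two hypotheses, a fully diagnostic datum yields exactly 1 bit of expected gain, while uninformative likelihoods yield 0; on this account, card choices in the selection task track such expected gains rather than deductive validity.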
Traits and tangles: An analysis of the Big Five paradigm by tangle-based clustering
Hanno von Bergen, Reinhard Diestel
Pub Date: 2025-04-12 · DOI: 10.1016/j.jmp.2025.102920 · Journal of Mathematical Psychology, vol. 125, Article 102920

Using the recently developed mathematical theory of tangles, we re-assess the mathematical foundations for applications of the five-factor model in personality tests with a new, mathematically rigorous, quantitative method. Our findings broadly confirm the validity of current tests, but also show that more detailed information can be extracted from existing data.

We found that the Big Five traits appear at different levels of scrutiny. Some already emerge at a coarse resolution of our tools at which others cannot yet be discerned, while at a resolution where these can be discerned and distinguished, some of the former traits are no longer visible but have split into more refined traits or disintegrated altogether.

We also identified traits other than the five targeted in those tests. These include more general traits combining two or more of the Big Five, as well as more specific traits refining some of them.

All our analysis is structural and quantitative, and thus rigorous in explicitly defined mathematical terms. Since tangles, once computed, can be described concisely in terms of very few explicit statements referring only to the test questions used, our findings are also directly open to interpretation by experts in psychology.

Tangle analysis can be applied similarly to other topics in psychology. Our paper is intended as a first indication of what may be possible.
Cognitive models of decision-making with identifiable parameters: Diffusion decision models with within-trial noise
Michael D. Nunez, Anna-Lena Schubert, Gidon T. Frischkorn, Klaus Oberauer
Pub Date: 2025-04-09 · DOI: 10.1016/j.jmp.2025.102917 · Journal of Mathematical Psychology, vol. 125, Article 102917

Diffusion Decision Models (DDMs) are a widely used class of models that assume an accumulation of evidence during a quick decision. These models are often used as measurement models to assess individual differences in cognitive processes such as evidence accumulation rate and response caution. An underlying assumption of these models is that there is internal noise in the evidence accumulation process. We argue that this internal noise is a relevant psychological construct that is likely to vary over participants and to explain differences in cognitive ability. In some cases, a change in noise is a more parsimonious explanation of joint changes in speed-accuracy tradeoffs and ability. However, fitting traditional DDMs to behavioral data cannot yield estimates of an individual's evidence accumulation rate, caution, and internal noise at the same time, due to an intrinsic unidentifiability of these parameters in DDMs. We explored the practical consequences of this unidentifiability by estimating the Bayesian joint posterior distributions of parameters (and thus joint uncertainty) for simulated data, and we introduce methods of estimating these parameters. Fundamentally, these parameters can be identified in two ways: (1) We can assume that one of the three parameters is fixed to a constant. We show that fixing one parameter, as is typical in fitting DDMs, results in parameter estimates that are ratios of true cognitive parameters, including the parameter that is fixed. By fixing another parameter instead of noise, different ratios are estimated, which may be useful for measuring individual differences. (2) Alternatively, we can use additional observed variables that can reasonably be assumed to be related to model parameters; electroencephalographic (EEG) data or single-unit activity from animals can yield candidate measures. We show parameter recovery for models with true (simulated) connections to such additional covariates, as well as some recovery in misspecified models. We evaluate this approach with both single-trial and participant-level additional observed variables. Our findings reveal that, with the integration of additional data, it becomes possible to discern individual differences across all parameters, enhancing the utility of DDMs without relying on strong assumptions. There are, however, some important caveats with these new modeling approaches, and we provide recommendations for their use. This research paves the way for using the deeper theoretical understanding of sequential sampling models and the new modeling methods to measure individual differences in internal noise during decision-making.
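The unidentifiability in question is the familiar scale invariance of the DDM: multiplying drift rate, boundary separation, and the within-trial noise coefficient by the same constant leaves predicted behavior unchanged, so behavior alone identifies only ratios such as v/s and a/s. A minimal simulation sketch illustrating this (illustrative parameter values; not the authors' code):

```python
import numpy as np

def ddm_accuracy(v, a, s, dt=0.005, n=2000, seed=1):
    """Proportion of upper-boundary responses for a diffusion process
    starting at a/2 with drift v, absorbing boundaries at 0 and a, and
    within-trial noise s (Euler-Maruyama simulation)."""
    rng = np.random.default_rng(seed)
    x = np.full(n, a / 2.0)
    done = np.zeros(n, dtype=bool)
    upper = np.zeros(n, dtype=bool)
    while not done.all():
        active = ~done
        x[active] += v * dt + s * np.sqrt(dt) * rng.standard_normal(active.sum())
        upper |= active & (x >= a)
        done |= (x >= a) | (x <= 0.0)
    return float(upper.mean())

# Doubling all three parameters leaves simulated behavior unchanged
# (with a common seed, the scaled trajectories coincide):
acc_base = ddm_accuracy(1.0, 1.0, 1.0)
acc_scaled = ddm_accuracy(2.0, 2.0, 2.0)
```

Because the scaled process is just the base process multiplied by the constant, boundary crossings occur at the same moments, which is exactly why one parameter (conventionally s) must be fixed, or extra covariates used, before all three can be estimated.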
An entropy model of decision uncertainty
Keith A. Schneider
Pub Date: 2025-03-24 · DOI: 10.1016/j.jmp.2025.102919 · Journal of Mathematical Psychology, vol. 125, Article 102919

Studying metacognition, the introspection of one's own decisions, can provide insights into the mechanisms underlying those decisions. Here we show that observers' uncertainty about their decisions incorporates both the entropy of the stimuli and the entropy of their response probabilities across the psychometric function. Describing uncertainty data with a functional form permits the measurement of internal parameters not measurable from the decision responses alone. To test and demonstrate the utility of this novel model, we measured uncertainty in 11 participants as they judged the relative contrast appearance of two stimuli in several experiments employing implicit bias or attentional cues. The entropy model enabled an otherwise intractable quantitative analysis of participants' uncertainty, which in one case distinguished two comparative judgments that produced nearly identical psychometric functions. In contrast, comparative and equality judgments with different behavioral reports yielded uncertainty reports that were not significantly different. The entropy model successfully accounted for uncertainty in these two types of decisions, which resulted in differently shaped psychometric functions, and the entropy contribution from the stimuli, which were identical across experiments, was consistent. An observer's uncertainty can therefore be measured as the total entropy of the inputs and outputs of the stimulus-response system, i.e., the entropy of the stimuli plus the entropy of the observer's responses.
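The response-entropy component of such a model can be illustrated directly: for a binary judgment, the entropy of the response probability along the psychometric function peaks at 1 bit where the two responses are equiprobable and falls toward 0 in the tails. A minimal sketch (the logistic form and its parameters are illustrative assumptions, not the paper's fitted model):

```python
import numpy as np

def logistic_psychometric(x, alpha=0.0, beta=1.0):
    """P('test judged higher') as a function of contrast difference x
    (assumed logistic form; alpha = point of subjective equality)."""
    return 1.0 / (1.0 + np.exp(-beta * (x - alpha)))

def response_entropy(p):
    """Entropy in bits of a binary response made with probability p."""
    p = np.clip(p, 1e-12, 1.0 - 1e-12)
    return -(p * np.log2(p) + (1.0 - p) * np.log2(1.0 - p))

x = np.linspace(-3.0, 3.0, 61)
H = response_entropy(logistic_psychometric(x))
# H is maximal (1 bit) at x = alpha, where both responses are equiprobable
```

In the paper's terms, an observer's total uncertainty would add a stimulus-entropy term to this response-entropy curve.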
Pub Date : 2025-03-10DOI: 10.1016/j.jmp.2025.102905
Ronaldo Vigo
Invariance and symmetry principles have played a fundamental if not essential role in the theoretical development of the physical and mathematical sciences. More recently, Generalized Invariance Structure Theory (GIST; Vigo, 2013, 2015; Vigo et al., 2022) has extended this methodological trajectory with respect to the study and formal modeling of human cognition. Indeed, GIST is the first systematic and extensively tested mathematical and computational theory of concept learning and categorization behavior (i.e., human generalization) based on such principles. The theory introduces an original mathematical and computational framework, with novel, more appropriate, and more natural characterizations, constructs, and measures of invariance and symmetry with respect to cognition than existing ones in the mathematical sciences and physics. These have proven effective in predicting and explaining empirically tested behavior in the domains of perception, concept learning, categorization, similarity assessment, aesthetic judgments, and decision making, among others. GIST has its roots in a precursor theory known as Categorical Invariance Theory (CIT; Vigo, 2009). This paper gives a basic introduction to two different notions of human invariance detection proposed by GIST and its precursor CIT: namely, a notion based on a cognitive mechanism of dimensional suppression, rapid attention shifting, and partial similarity assessment referred to as binding (s-invariance) and a perturbation notion based on perturbations of the values of the dimensions on which categories of object stimuli are defined (p-invariance). This is followed by the first simple formal proof of the invariance equivalence principle from GIST which asserts that the two notions are equivalent under a set of strict conditions on categories. 
The paper ends with a brief discussion of how GIST, unlike CIT, may be used to model probabilistic process accounts of categorization, and how it naturally and directly applies to the learning of sequential categories and to multiset-based concept learning.
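The perturbation notion described above can be illustrated with a small sketch. This is not the exact GIST (or CIT) definition, only a minimal, assumed reading of p-invariance: for each dimension on which a category of binary stimuli is defined, perturb (flip) that dimension's value in every category member and record the fraction of perturbed members that remain inside the category. A dimension whose perturbations never leave the category is irrelevant to the concept and maximally invariant.

```python
def perturbation_invariance(category):
    """Illustrative sketch (hypothetical names, not the official GIST measure).

    category: a set of equal-length 0/1 tuples over binary dimensions.
    Returns one value per dimension: the fraction of members that stay
    in the category after that dimension's value is flipped.
    """
    dims = len(next(iter(category)))
    profile = []
    for d in range(dims):
        stays = sum(
            1 for obj in category
            # flip only dimension d, leave the other dimensions untouched
            if tuple(1 - v if i == d else v for i, v in enumerate(obj)) in category
        )
        profile.append(stays / len(category))
    return profile

# Example: in the category {00, 01} the second dimension varies freely
# (flips stay inside the category), while flipping the first always exits it.
print(perturbation_invariance({(0, 0), (0, 1)}))  # -> [0.0, 1.0]
```

The profile makes the intuition concrete: a value of 1.0 marks a dimension the category is fully invariant under, 0.0 a dimension that fully determines membership.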
{"title":"Two formal notions of higher-order invariance detection in humans (A proof of the invariance equivalence principle in Generalized Invariance Structure Theory and ramifications for related computations)","authors":"Ronaldo Vigo","doi":"10.1016/j.jmp.2025.102905","DOIUrl":"10.1016/j.jmp.2025.102905","url":null,"abstract":"<div><div>Invariance and symmetry principles have played a fundamental if not essential role in the theoretical development of the physical and mathematical sciences. More recently, Generalized Invariance Structure Theory (GIST; Vigo, 2013, 2015; Vigo et al., 2022) has extended this methodological trajectory with respect to the study and formal modeling of human cognition. Indeed, GIST is the first systematic and extensively tested mathematical and computational theory of concept learning and categorization behavior (i.e., human generalization) based on such principles. The theory introduces an original mathematical and computational framework, with novel, more appropriate, and more natural characterizations, constructs, and measures of invariance and symmetry with respect to cognition than existing ones in the mathematical sciences and physics. These have proven effective in predicting and explaining empirically tested behavior in the domains of perception, concept learning, categorization, similarity assessment, aesthetic judgments, and decision making, among others. GIST has its roots in a precursor theory known as Categorical Invariance Theory (CIT; Vigo, 2009). This paper gives a basic introduction to two different notions of human invariance detection proposed by GIST and its precursor CIT: namely, a notion based on a cognitive mechanism of dimensional suppression, rapid attention shifting, and partial similarity assessment referred to as <em>binding</em> (<em>s</em>-invariance) and a perturbation notion based on perturbations of the values of the dimensions on which categories of object stimuli are defined (<em>p</em>-invariance). 
This is followed by the first simple formal proof of the invariance equivalence principle from GIST which asserts that the two notions are equivalent under a set of strict conditions on categories. The paper ends with a brief discussion of how GIST, unlike CIT, may be used to model probabilistic process accounts of categorization, and how it naturally and directly applies to the learning of sequential categories and to multiset-based concept learning.</div></div>","PeriodicalId":50140,"journal":{"name":"Journal of Mathematical Psychology","volume":"125 ","pages":"Article 102905"},"PeriodicalIF":2.2,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143577240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-02DOI: 10.1016/j.jmp.2025.102907
Luca Stefanutti, Andrea Brancaccio
Procedural knowledge space theory aims to evaluate problem-solving skills using a formal representation of a problem space. Stefanutti et al. (2021) introduced the concept of the “shortest path space” to characterize optimal problem spaces when a task requires reaching a solution in the minimum number of moves. This paper takes that idea further. It expands the shortest-path space concept to include a wider range of optimization problems, where each move can be weighted by a real number representing its “value”. Depending on the application, the “value” could be a cost, waiting time, route length, etc. This new model, named the optimizing path space, comprises all the globally best solutions. Additionally, it sets the stage for evaluating human problem-solving skills in various areas, like cognitive and neuropsychological tests, experimental studies, and puzzles, where globally optimal solutions are required.
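The core idea above can be sketched computationally. In this assumed reading, the problem space is a weighted directed graph in which each move carries a real-valued "value" (here treated as a cost), and the optimizing path space collects every globally best solution path, not just one. Function names and the graph representation are illustrative, not taken from the paper.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, src):
    """Cheapest cost from src to every reachable state (standard Dijkstra)."""
    dist = defaultdict(lambda: float("inf"))
    dist[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        d, s = heapq.heappop(heap)
        if d > dist[s]:
            continue  # stale queue entry
        for t, w in adj.get(s, []):
            if d + w < dist[t]:
                dist[t] = d + w
                heapq.heappush(heap, (d + w, t))
    return dist

def optimal_paths(graph, start, goal):
    """graph: state -> list of (next_state, move_cost).
    Returns (best_cost, all start-to-goal paths achieving that cost)."""
    from_start = dijkstra(graph, start)
    best = from_start[goal]
    if best == float("inf"):
        return best, []
    # Reverse pass: cheapest remaining cost from each state to the goal.
    reverse = defaultdict(list)
    for s, edges in graph.items():
        for t, w in edges:
            reverse[t].append((s, w))
    to_goal = dijkstra(reverse, goal)
    paths = []
    def walk(path, cost):
        s = path[-1]
        if s == goal:
            paths.append(list(path))
            return
        for t, w in graph.get(s, []):
            # keep an edge only if it lies on some globally optimal path
            if cost + w + to_goal[t] == best:
                path.append(t)
                walk(path, cost + w)
                path.pop()
    walk([start], 0.0)
    return best, paths

# Two equally cheap routes from 'a' to 'd'; the costlier direct move is excluded.
space = {"a": [("b", 1), ("c", 1), ("d", 3)], "b": [("d", 1)], "c": [("d", 1)]}
print(optimal_paths(space, "a", "d"))
# -> (2.0, [['a', 'b', 'd'], ['a', 'c', 'd']])
```

Enumerating all cost-optimal paths, rather than a single shortest one, mirrors the claim that the optimizing path space "comprises all the globally best solutions" against which a solver's moves could be assessed.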
{"title":"The assessment of global optimization skills in procedural knowledge space theory","authors":"Luca Stefanutti, Andrea Brancaccio","doi":"10.1016/j.jmp.2025.102907","DOIUrl":"10.1016/j.jmp.2025.102907","url":null,"abstract":"<div><div>Procedural knowledge space theory aims to evaluate problem-solving skills using a formal representation of a problem space. Stefanutti et al. (2021) introduced the concept of the “shortest path space” to characterize optimal problem spaces when a task requires reaching a solution in the minimum number of moves. This paper takes that idea further. It expands the shortest-path space concept to include a wider range of optimization problems, where each move can be weighted by a real number representing its “value”. Depending on the application, the “value” could be a cost, waiting time, route length, etc. This new model, named the optimizing path space, comprises all the globally best solutions. Additionally, it sets the stage for evaluating human problem-solving skills in various areas, like cognitive and neuropsychological tests, experimental studies, and puzzles, where globally optimal solutions are required.</div></div>","PeriodicalId":50140,"journal":{"name":"Journal of Mathematical Psychology","volume":"125 ","pages":"Article 102907"},"PeriodicalIF":2.2,"publicationDate":"2025-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143526855","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}