The dual and the complement of a skill function
Gongxun Wang, Jinjin Li, Jun-e Feng
Pub Date: 2025-11-02 | DOI: 10.1016/j.jmp.2025.102953
Journal of Mathematical Psychology, Vol. 127, Article 102953
In knowledge structure theory, the conjunctive model is the dual of the disjunctive model. What, then, is the dual of the competence model? For the competence model, prior work has established necessary and sufficient conditions for delineating knowledge spaces and has thoroughly studied the fringe characterization of knowledge states in delineated knowledge spaces. Accordingly, what are the necessary and sufficient conditions for delineating simple closure spaces via the competence model? How can the fringe of knowledge states be characterized in delineated simple closure spaces? Furthermore, in the competence model, the characterization of top spaces is complex. How can it be simplified? To address these problems, this paper proposes the dual skill function (i.e., the dual competence model) and the complement skill function. The dual competence model provides a novel methodology for analyzing the competence model: it enables the transfer of results on delineated knowledge spaces to their dual closure spaces and offers a more direct characterization of top spaces. In doing so, it effectively addresses the latter three problems. These results refine knowledge structure theory.
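To make the duality concrete, here is a minimal sketch in Python (a hypothetical toy skill map, not an example from the paper): under the disjunctive rule a skill map delineates a union-closed knowledge space, while under the dual conjunctive rule it delineates an intersection-closed simple closure space.

```python
from itertools import combinations

# Hypothetical toy example: items Q, skills S, and a skill map tau
# assigning each item the set of skills relevant to it.
Q = ["a", "b", "c"]
S = {1, 2, 3}
tau = {"a": {1}, "b": {1, 2}, "c": {2, 3}}

def powerset(s):
    s = list(s)
    return [set(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# Disjunctive rule: item q is solvable with skill set T if T contains at
# least one skill in tau[q]; conjunctive rule: T must contain all of tau[q].
def delineate(rule):
    states = set()
    for T in powerset(S):
        states.add(frozenset(q for q in Q if rule(tau[q], T)))
    return states

space = delineate(lambda skills, T: bool(skills & T))   # disjunctive
closure = delineate(lambda skills, T: skills <= T)      # conjunctive

# The disjunctive family is closed under union (a knowledge space); the
# conjunctive family is closed under intersection (a simple closure space).
assert all(A | B in space for A in space for B in space)
assert all(A & B in closure for A in closure for B in closure)
```

The closure properties follow directly from the two rules: the disjunctive states satisfy K(T1) ∪ K(T2) = K(T1 ∪ T2), and the conjunctive states satisfy K(T1) ∩ K(T2) = K(T1 ∩ T2).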
Experiment-based calibration in psychology: Foundational and data-generating model
Dominik R. Bach
Pub Date: 2025-09-15 | DOI: 10.1016/j.jmp.2025.102950
Journal of Mathematical Psychology, Vol. 127, Article 102950
Experiment-based calibration is a novel method for measurement validation which – unlike classical validity metrics – does not require stable between-person variance. In this approach, the latent variable to be measured is manipulated by an experiment, and its predicted scores – termed standard scores – are compared against the measured scores. Previous work has shown that, under plausible boundary conditions, the correlation between standard and measured scores – termed retrodictive validity – is informative about measurement accuracy, i.e., combined trueness and precision. Here, I expand these findings in several directions. First, I formalise the approach in a probability-theoretic framework with the concept of a standardised calibration space. Second, I relate this framework to classical validity theory and show that the boundary conditions in fact apply to any form of criterion validity, including classical convergent validity. Thus, I state precise and empirically quantifiable boundary conditions under which criterion validity metrics are informative about validity. Third, I relate these boundary conditions to confounding variables, i.e., correlated latent variables. I show that, in the limit, calibration converges on the latent variable that is most closely related to the standard. Finally, I provide a framework for modelling the data-generating process with Markov kernels and identify sufficient conditions under which the data-generation model results in a calibration space. In sum, this article provides a formal probability-theoretic framework for experiment-based calibration and facilitates modelling and empirical assessment of the data-generating process.
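As a toy illustration of retrodictive validity (an assumed simulation setup, not the article's formal framework), one can manipulate a latent variable across experimental conditions, add measurement noise, and correlate the predicted standard scores with the measured scores:

```python
import random

# Illustrative simulation: an experiment manipulates the latent variable
# across four conditions; the condition means serve as predicted "standard
# scores", and the measurement adds Gaussian noise. All numbers are
# arbitrary illustrations.
random.seed(1)
STANDARD = [0.0, 1.0, 2.0, 3.0]   # hypothetical predicted latent values

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return sxy / (sx * sy)

def retrodictive_validity(noise_sd, n_per_cond=200):
    xs, ys = [], []
    for mu in STANDARD:
        for _ in range(n_per_cond):
            xs.append(mu)                               # standard score
            ys.append(mu + random.gauss(0, noise_sd))   # measured score
    return pearson(xs, ys)

# Less measurement noise -> higher retrodictive validity.
assert retrodictive_validity(0.5) > retrodictive_validity(5.0)
```

In this toy setup the retrodictive-validity correlation shrinks as measurement noise grows, which is the sense in which the correlation is informative about accuracy.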
On Iverson’s law of similarity
Eszter Gselmann, Christopher W. Doble, Yung-Fong Hsu
Pub Date: 2025-09-12 | DOI: 10.1016/j.jmp.2025.102943
Journal of Mathematical Psychology, Vol. 127, Article 102943
Iverson (2006b) proposed the law of similarity ξ_s(λx) = γ(λ, s) ξ_{η(λ,s)}(x) for the sensitivity functions ξ_s (s ∈ S). Compared to earlier models, the generality of this one lies in the fact that here γ and η may also depend on the variables λ and s. In the literature, this model (or its special cases) is usually considered together with a given psychophysical representation (e.g., Fechnerian, subtractive, or affine). Our goal, however, is first to study Iverson’s law of similarity on its own. We show that if certain mild assumptions are fulfilled, then ξ can be written in a rather simple form containing only one-variable functions. The obtained form proves to be very useful when some kind of representation is assumed.
Motivated by Hsu and Iverson (2016), we then study the above model assuming that the mapping η is multiplicatively translational. First, we show how these mappings can be characterized. We then turn to the examination of Falmagne’s power law. According to our results, the corresponding function ξ can have a Fechnerian representation, and it can also have a subtractive representation. We close the paper with a study of the shift-invariance property.
Causal analysis of absolute and relative risk reductions
Björn Meder, Charley M. Wu, Felix G. Rebitschek
Pub Date: 2025-09-03 | DOI: 10.1016/j.jmp.2025.102942
Journal of Mathematical Psychology, Vol. 127, Article 102942
Any medical innovation must first prove its benefits with reliable evidence from clinical trials. Evidence is commonly expressed using two metrics, summarizing treatment benefits based on either absolute risk reductions (ARRs) or relative risk reductions (RRRs). Both metrics are derived from the same data, but they implement conceptually distinct ideas. Here, we analyze these risk reduction measures from a causal modeling perspective. First, we show that ARR is equivalent to ΔP, while RRR is equivalent to causal power, thus clarifying the implicit causal assumptions. Second, we show how this formal equivalence establishes a relationship with causal Bayes nets theory, offering a basis for incorporating risk reduction metrics into a computational modeling framework. Leveraging these analyses, we demonstrate that under dynamically varying baseline risks, ARRs and RRRs lead to strongly diverging predictions. Specifically, the inherent assumption of a linear parameterization of the underlying causal graph can lead to incorrect conclusions when generalizing treatment benefits (e.g., predicting the effect of a vaccine in new populations with different baseline risks). Our analyses highlight the shared principles underlying risk reduction metrics and measures of causal strength, emphasizing the potential for explicating causal structure and inference in medical research.
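The two metrics can be sketched from their standard definitions (the trial numbers below are hypothetical): ARR is the difference between baseline and treatment risk, RRR is that difference normalized by the baseline risk, so transferring a fixed RRR to a new baseline risk implies a very different absolute benefit.

```python
# Standard definitions: ARR = baseline risk minus treatment risk (DeltaP for
# a preventive cause); RRR = ARR divided by baseline risk (preventive causal
# power). Risk values below are hypothetical illustrations.
def arr(p_control, p_treatment):
    return p_control - p_treatment

def rrr(p_control, p_treatment):
    return (p_control - p_treatment) / p_control

# Same trial data, diverging generalizations: holding RRR fixed while the
# baseline risk changes rescales the predicted absolute benefit.
trial_arr = arr(0.20, 0.10)                               # 0.10
trial_rrr = rrr(0.20, 0.10)                               # 0.50
new_baseline = 0.02
predicted_arr_if_rrr_transfers = trial_rrr * new_baseline  # 0.01

assert abs(trial_arr - 0.10) < 1e-9
assert abs(trial_rrr - 0.50) < 1e-9
assert abs(predicted_arr_if_rrr_transfers - 0.01) < 1e-9
```

In a population whose baseline risk drops from 20% to 2%, a transferred 50% RRR predicts an absolute benefit of one percentage point, an order of magnitude smaller than the trial's ARR of ten percentage points.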
Characterization of countable and continuous Richter–Peleg multi-utility representations
Gianni Bosi, Esteban Induráin, Ana Munárriz, Yeray R. Rincón
Pub Date: 2025-08-01 | DOI: 10.1016/j.jmp.2025.102940
Journal of Mathematical Psychology, Vol. 126, Article 102940
This paper contributes to the theoretical literature on decision models where agents may encounter challenges in comparing alternatives. We introduce a characterization of countable Richter–Peleg multi-utility representations, both semicontinuous (upper and lower) and continuous, within preorders that may not be total. The proposed theorems provide a comprehensive mathematical framework, complementing previous results of Alcantud et al. and Bosi on countable multi-utility representations. Our characterizations establish necessary and sufficient conditions through topological properties and constructive methods via indicator functions. Furthermore, we introduce a topological framework aligned with the property of strong local non-satiation and provide a novel theorem containing sufficient conditions for the existence of countable upper semi-continuous multi-utility representations of a preorder. The results demonstrate that preference representations can be achieved using countably many functions rather than uncountable families, with implications for computational tractability and the identification of maximal elements in optimization contexts.
An alternative attribute map for polytomous assessment structures
Bo Wang, Jinjin Li, Bochi Xu, Wen Sun, Yingru Lin
Pub Date: 2025-08-01 | DOI: 10.1016/j.jmp.2025.102941
Journal of Mathematical Psychology, Vol. 126, Article 102941
The present paper introduces an attribute map that offers an alternative approach to modeling polytomous item–response relationships. This new attribute map is based on the principle that each available attribute can independently enable an item to reach a specific observable response level. The paper rigorously defines this attribute map and establishes the corresponding item–response function. Using these two maps, a coherent attribute structure is constructed, leading to a competence-based polytomous assessment structure. Finally, a straightforward mathematical example is provided to illustrate the validity and practical applicability of this theoretical framework.
Random utility without regularity
Johannes Müller-Trede, Michel Regenwetter
Pub Date: 2025-07-27 | DOI: 10.1016/j.jmp.2025.102938
Journal of Mathematical Psychology, Vol. 126, Article 102938
Classical random utility models imply a consistency property called regularity. Decision makers who satisfy regularity are at least as likely to choose an option x from a set X of available options as from any larger set Y that contains X. In light of ample empirical evidence for context-dependent choice that violates regularity, some researchers have questioned the descriptive validity of all random utility models. In this article, we show that not all random utility models imply regularity. We propose a general framework for random utility models that accommodate context dependence and may violate regularity. Mathematically, like the classical models, context-dependent random utility models form convex polytopes. They yield behavioral predictions for those choice sets from which choices are made, by specifying combinations of preference rankings across two or more contexts. We discuss how context-dependent models can be less or more parsimonious than the classical models. Random utility models with or without regularity can be tested with contemporary methods of order-constrained inference.
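A minimal sketch of the classical case (illustrative, not the paper's general framework): enumerate rankings over three options, derive choice probabilities from a single random ranking, and check that regularity holds for every nested pair of choice sets.

```python
from itertools import permutations, combinations

# Classical random ranking model over three options: a ranking is drawn at
# random and the best-ranked available option is chosen. The uniform
# distribution below is an arbitrary illustration; any distribution works.
options = ("a", "b", "c")
rankings = list(permutations(options))
prob = {r: 1 / len(rankings) for r in rankings}

def choice_prob(x, available):
    # x is chosen iff it outranks every other available option.
    return sum(p for r, p in prob.items()
               if all(r.index(x) < r.index(y) for y in available if y != x))

# Regularity: shrinking the choice set never lowers x's choice probability,
# i.e. P(x | X) >= P(x | Y) whenever X is a subset of Y.
for size in (2, 3):
    for Y in combinations(options, size):
        for X in combinations(Y, 2):
            for x in X:
                assert choice_prob(x, X) >= choice_prob(x, Y) - 1e-12
```

With the uniform ranking distribution, P(a | {a, b}) = 1/2 while P(a | {a, b, c}) = 1/3, so every binary probability weakly dominates its triple counterpart, exactly as regularity requires.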
The stochastic 2-binary choice problem
Paola Manzini, Marco Mariotti, Henrik Petri
Pub Date: 2025-07-19 | DOI: 10.1016/j.jmp.2025.102939
Journal of Mathematical Psychology, Vol. 126, Article 102939
The classic (to date unsolved) stochastic binary choice problem asks under what conditions a given stochastic choice function defined on pairs of alternatives derives from a random ranking. We propose a solution to the problem for the case in which at most two rankings are assigned positive probability. This case is psychologically motivated and interesting for applications. It is structurally different from the general case in that the choice functions that are derived from a random ranking do not necessarily form a convex polytope, hence they are not even in principle described by a set of linear inequalities.
Expected exponential discounting in inter-temporal decision making
Tom H. Rosenström, Alasdair I. Houston
Pub Date: 2025-05-30 | DOI: 10.1016/j.jmp.2025.102927
Journal of Mathematical Psychology, Vol. 126, Article 102927
We present a novel interpretation of delay discounting – the theoretical mechanism by which decision-makers discount the current value of a reward if it is obtained at a future time rather than immediately. The theory proposes that decision-makers rationally account for the natural phenomenon of compound interest (i.e., use exponential discounting) but must take an average or expected value over some uncertainty distribution for the compound interest rate – hence the name Expected Exponential Discounting (EED) theory of inter-temporal choice. We show that EED provides a mechanism that unifies multiple empirically discovered descriptive discounting functions and fits key qualitative findings about delay discounting in humans in non-sequential contexts, such as hypothetical questions about delayed rewards. The general, falsifiable, and comparatively minimal EED theory provides a good sanity check for more complex accounts of delay discounting, and it also supports the derivation of new empirical predictions and reference points.
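A small numerical sketch of the EED mechanism (parameter values are arbitrary illustrations, not taken from the paper): averaging exponential discount factors over a Gamma-distributed interest rate reproduces, via the Gamma moment generating function, a generalized hyperbolic discount curve of the kind found empirically.

```python
import math
import random

# If the interest rate r follows a Gamma(k, theta) distribution, then
# E[exp(-r * t)] = (1 + theta * t) ** (-k) by the Gamma MGF, i.e. a
# generalized hyperbolic discount function. k and theta are illustrative.
k, theta = 2.0, 0.5
random.seed(0)
rates = [random.gammavariate(k, theta) for _ in range(200_000)]

def eed(t):
    # Monte Carlo estimate of the expected exponential discount factor.
    return sum(math.exp(-r * t) for r in rates) / len(rates)

def hyperbolic(t):
    # Closed form implied by the Gamma moment generating function.
    return (1 + theta * t) ** (-k)

for t in (0.5, 1.0, 4.0):
    assert abs(eed(t) - hyperbolic(t)) < 0.01
```

The Monte Carlo average of exponential curves matches the hyperbolic closed form at every tested delay, which is the sense in which rate uncertainty turns rational exponential discounting into apparently hyperbolic behavior.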
Honey, I shrunk the irrelevant effects! Simple and flexible approximate Bayesian regularization
Diana Karimova, Sara van Erp, Roger Th.A.J. Leenders, Joris Mulder
Pub Date: 2025-05-29 | DOI: 10.1016/j.jmp.2025.102925
Journal of Mathematical Psychology, Vol. 126, Article 102925
In the social and behavioral sciences and related fields, statistical models are becoming increasingly complex with more parameters to explain intricate dependency structures among larger sets of variables. Regularization techniques, like penalized regression, help identify key parameters by shrinking negligible effects to zero, resulting in parsimonious solutions with strong predictive performance. This paper introduces a simple and flexible approximate Bayesian regularization (ABR) procedure, combining a Gaussian approximation of the likelihood with a Bayesian shrinkage prior to obtain a regularized posterior. Parsimonious (interpretable) solutions are obtained by taking the posterior modes. Parameter uncertainty is quantified using the full posterior. Implemented in the R package shrinkem, the method is evaluated in synthetic and empirical applications. Its flexibility is demonstrated across various models, including linear regression, relational event models, mediation analysis, factor analysis, and Gaussian graphical models.
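A minimal sketch of the core idea (an assumed simplified form, not the shrinkem implementation): with a per-parameter Gaussian approximation of the likelihood (estimate plus standard error) and a Laplace shrinkage prior, the posterior mode reduces to soft-thresholding, which sets negligible effects exactly to zero.

```python
import math

# Assumed simplified setup: the likelihood of one parameter is approximated
# by a Gaussian centered at its estimate with its standard error, and the
# shrinkage prior is Laplace(0, prior_scale). Minimizing
#   (b - estimate)^2 / (2 se^2) + |b| / prior_scale
# gives the soft-thresholding rule below; parameter values are illustrative.
def posterior_mode(estimate, se, prior_scale):
    threshold = se ** 2 / prior_scale
    return math.copysign(max(abs(estimate) - threshold, 0.0), estimate)

# Negligible effects collapse exactly to zero; large effects are kept
# (though shrunk toward zero by the threshold amount).
assert posterior_mode(0.05, se=0.1, prior_scale=0.1) == 0.0
assert posterior_mode(0.80, se=0.1, prior_scale=0.1) > 0.0
```

Because the threshold scales with the squared standard error, noisier estimates are shrunk more aggressively, which is what yields the parsimonious posterior-mode solutions described above.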