Pub Date: 2024-05-14 | DOI: 10.1016/j.jmp.2024.102845
Fred S. Roberts, Clintin P. Davis-Stober, Michel Regenwetter
Title: The mathematical psychology of Peter Fishburn. Journal of Mathematical Psychology, Volume 120, Article 102845.
Pub Date: 2024-04-16 | DOI: 10.1016/j.jmp.2024.102857
Karl Christoph Klauer, Raphael Hartmann, Constantin G. Meyer-Grant
We propose an extension of the widely used class of multinomial processing tree models that incorporates response times via diffusion-model kernels. Multinomial processing tree models account for categorical data in terms of a number of cognitive and guessing processes, estimating the probability with which each process outcome occurs. The new method allows one to estimate the completion times of each process along with the outcome probabilities, and thereby provides process-oriented accounts of accuracy and latency data in all domains in which multinomial processing tree models have been applied. Furthermore, the new models are implemented hierarchically, so that individual differences are explicitly accounted for and do not bias the population-level estimates. The new approach overcomes a number of shortcomings of previous extensions of multinomial models to response times. We evaluate the new method’s performance via a recovery study and simulation-based calibration. The method allows one to test hypotheses about processing architecture, and it provides an extension of traditional diffusion-model analyses wherever multinomial models have been proposed for the modeled paradigm. We illustrate these and other benefits of the new model class using five existing data sets from recognition memory.
Title: RT-MPTs: Process models for response-time distributions with diffusion-model kernels. Journal of Mathematical Psychology, Volume 120, Article 102857 (open access).
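The categorical core of an MPT model is compact enough to state in code. The sketch below shows the classic two-high-threshold recognition MPT; this is a generic textbook example of the model class, not the authors’ RT-MPT implementation, and the parameter names are illustrative:

```python
def mpt_2ht_probs(d_old, d_new, g):
    """Category probabilities for the two-high-threshold recognition MPT.

    An old item is detected with probability d_old (respond 'old'); if not
    detected, the participant guesses 'old' with probability g. A new item
    is detected as new with probability d_new; otherwise the same guessing
    process applies.
    """
    p_old_resp_given_old = d_old + (1 - d_old) * g
    p_old_resp_given_new = (1 - d_new) * g
    return p_old_resp_given_old, p_old_resp_given_new
```

The RT-MPT extension described in the abstract additionally attaches a diffusion-model completion-time distribution to each processing branch, so that each category probability comes with a predicted latency distribution.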
Pub Date: 2024-04-11 | DOI: 10.1016/j.jmp.2024.102856
Niels Vanhasbroeck, Tim Loossens, Francis Tuerlinckx
In this paper, we establish a formal connection between two dynamic modeling approaches that are often taken to study affect dynamics. More specifically, we show that the exponential discounting model can be rewritten as a special case of the VARMAX model, thereby shedding light on the underlying similarities and assumptions of the two models. This derivation has some important consequences for research. First, it allows researchers who use discounting models in their studies to draw on the tools established within the broader time series literature to evaluate the applicability of their models. Second, it lays bare some of the implicit restrictions discounting models put on their parameters and, therefore, provides a foundation for empirical testing and validation of these models. One of these restrictions concerns the exponential shape of the discounting function that is often assumed in the affect-dynamics literature. As an alternative, we briefly introduce the quasi-hyperbolic discounting function.
Title: Two peas in a pod: Discounting models as a special case of the VARMAX. Journal of Mathematical Psychology, Volume 120, Article 102856.
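The central equivalence can be illustrated in a few lines: an exponentially discounted sum of past inputs is exactly an ARX(1) recursion, i.e. one member of the VARMAX family. A minimal numerical sketch, with notation and discount weight w in [0, 1) chosen for illustration:

```python
def discounted_sum(x, w):
    """Affect at time t as an exponentially discounted sum of inputs:
    a_t = sum_{k=0..t} w**k * x[t-k]."""
    return [sum((w ** k) * x[t - k] for k in range(t + 1))
            for t in range(len(x))]

def arx1(x, w):
    """The same series written recursively as an ARX(1) process,
    a_t = w * a_{t-1} + x_t, a special case of the VARMAX family."""
    a, prev = [], 0.0
    for xt in x:
        prev = w * prev + xt  # one recursive step replaces the full sum
        a.append(prev)
    return a
```

Unrolling the recursion term by term recovers the discounted sum, which is the identity the paper generalizes to the vector (VARMAX) setting.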
Pub Date: 2024-04-11 | DOI: 10.1016/j.jmp.2024.102855
Hau-Hung Yang, Yung-Fong Hsu
In classical psychophysics, the study of thresholds and underlying representations is of theoretical interest, and the related problem of finding the stimulus intensity corresponding to a certain threshold level is an important topic. In the literature, researchers have developed various adaptive (also known as ‘up-down’) methods, including fixed step-size and variable step-size methods, for the estimation of thresholds. A common feature of this family of methods is that the stimulus assigned to the current trial depends upon the participant’s response in the previous trial(s), and very often a binary response format is adopted. A well-known early example of the variable step-size methods is the Robbins–Monro process (and its accelerated version). However, previous studies have paid little attention to other response variables (in addition to the binary response variable) that could be jointly embedded into the process. This article concerns a generalization of the Robbins–Monro process that incorporates an additional response variable, such as response time or response confidence, into the process. We first prove the consistency of the estimator from the generalized method. We then conduct a Monte Carlo simulation study to explore some finite-sample properties of the estimator, with either response time or response confidence as the additional variable, and compare its performance with the original method. The results show that the two methods (and their accelerated versions) are comparable. The issue of relative efficiency is also discussed.
Title: The generalized Robbins–Monro process and its application to psychophysical experiments for threshold estimation. Journal of Mathematical Psychology, Volume 120, Article 102855.
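For readers unfamiliar with the original procedure, a minimal simulation of the classic binary-response Robbins–Monro rule is sketched below. The generalized method in the paper additionally feeds response time or confidence into the update, which is not shown here; the logistic psychometric function, gain, and trial count are illustrative assumptions.

```python
import math
import random

def robbins_monro(psychometric, target=0.5, x0=0.0, gain=4.0,
                  n_trials=5000, seed=7):
    """Classic Robbins-Monro threshold search with a binary response:
    x_{n+1} = x_n - (gain / n) * (y_n - target), y_n in {0, 1}."""
    rng = random.Random(seed)
    x = x0
    for n in range(1, n_trials + 1):
        y = 1 if rng.random() < psychometric(x) else 0  # simulated response
        x -= (gain / n) * (y - target)                  # 1/n steps shrink
    return x

# Illustrative logistic psychometric function with its 50% point at 1.5.
threshold = 1.5
psi = lambda s: 1.0 / (1.0 + math.exp(-(s - threshold)))
estimate = robbins_monro(psi)
```

The accelerated variant mentioned in the abstract shrinks the step size only after response reversals rather than on every trial, which speeds convergence in practice.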
Pub Date: 2024-03-26 | DOI: 10.1016/j.jmp.2024.102844
Bernard Sinclair-Desgagné
A wealth of empirical evidence shows that people display opposite behaviors when deciding whether to rely on an algorithm, even if it is inexpensive to do so and using the algorithm should enhance their own performance. This paper develops a formal theory to explain some of these conflicting facts and submit new testable predictions. Drawing from decision analysis, I invoke two key notions: the ‘value of information’ and the ‘value of control’. The value of information matters to users of algorithms like recommender systems and prediction machines, which essentially provide information. I find that ambiguity aversion or a subjective cost of employing an algorithm will tend to decrease the value of algorithmic information, while repeated exposure to an algorithm might not always increase this value. The value of control matters to users who may delegate decision making to an algorithm. I model how, under partial delegation, imperfect understanding of what the algorithm actually does (so the algorithm is in fact a black box) can cause algorithm aversion. Some possible remedies are formulated and discussed.
Title: On the (non-) reliance on algorithms—A decision-theoretic account. Journal of Mathematical Psychology, Volume 119, Article 102844.
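The ‘value of information’ invoked here has a standard computational form. As an illustration (a textbook expected-value-of-perfect-information calculation, not the paper’s own model): an algorithm’s advice is worth at most the gap between deciding with and without the extra information.

```python
def expected_value_of_perfect_information(priors, payoffs):
    """EVPI = E[payoff of the best act per state] - payoff of the best act
    chosen under the prior alone. payoffs[a][s] is the payoff of act a
    when the true state is s; priors[s] is the prior probability of s."""
    n_acts, n_states = len(payoffs), len(priors)
    best_without_info = max(
        sum(priors[s] * payoffs[a][s] for s in range(n_states))
        for a in range(n_acts))
    best_with_info = sum(
        priors[s] * max(payoffs[a][s] for a in range(n_acts))
        for s in range(n_states))
    return best_with_info - best_without_info
```

A user who is ambiguity-averse, or who pays a subjective cost c to consult the algorithm, effectively compares a quantity like this against c, which is one way such a model can generate non-reliance.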
Pub Date: 2024-03-10 | DOI: 10.1016/j.jmp.2024.102843
Ralf Engbert, Maximilian M. Rabe
Dynamical models are crucial for developing process-oriented, quantitative theories in cognition and behavior. Due to the impressive progress in cognitive theory, domain-specific dynamical models are complex, which typically creates challenges in statistical inference. Mathematical models of eye-movement control might be looked upon as a representative case study. In this tutorial, we introduce and analyze the SWIFT model (Engbert et al., 2002; Engbert et al., 2005), a dynamical modeling framework for eye-movement control in reading that was developed to explain all types of saccades observed in experiments from an activation-based approach. We provide an introduction to dynamical modeling, which explains the basic concepts of SWIFT and its statistical inference. We discuss the likelihood function of a simplified version of the SWIFT model as a key foundation for Bayesian parameter estimation (Rabe et al., 2021; Seelig et al., 2019). In posterior predictive checks, we demonstrate that the simplified model can reproduce interindividual differences via parameter variation. All computations in this tutorial are implemented in the R-Language for Statistical Computing and are made publicly available. We expect that the tutorial might be helpful for advancing dynamical models in other areas of cognitive science.
Title: A tutorial on Bayesian inference for dynamical modeling of eye-movement control during reading. Journal of Mathematical Psychology, Volume 119, Article 102843.
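SWIFT itself is far too large to reproduce here, but the inference workflow the tutorial teaches (likelihood, posterior, posterior predictive check) can be shown on a toy model. The sketch below grid-approximates a posterior for a single Bernoulli rate under a flat prior; the data and names are illustrative, not taken from the tutorial (which uses R).

```python
import math

def grid_posterior(loglik, grid):
    """Normalized posterior over `grid` with a flat prior:
    posterior(theta) proportional to exp(loglik(theta))."""
    logs = [loglik(th) for th in grid]
    m = max(logs)                        # subtract max for numerical stability
    weights = [math.exp(v - m) for v in logs]
    z = sum(weights)
    return [w / z for w in weights]

# Toy data: 7 "successes" in 10 Bernoulli trials.
k, n = 7, 10
grid = [i / 1000 for i in range(1, 1000)]             # theta in (0, 1)
loglik = lambda th: k * math.log(th) + (n - k) * math.log(1 - th)
post = grid_posterior(loglik, grid)
post_mean = sum(th * p for th, p in zip(grid, post))  # flat prior -> Beta(8, 4)
```

For models like SWIFT the same logic applies, except the likelihood is evaluated by the model's fixation-sequence machinery and the grid is replaced by MCMC sampling.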
Pub Date: 2024-02-28 | DOI: 10.1016/j.jmp.2024.102842
Jing-Jing Li, Chengchun Shi, Lexin Li, Anne G.E. Collins
Computational cognitive modeling is an important tool for understanding the processes supporting human and animal decision-making. Choice data in decision-making tasks are inherently noisy, and separating noise from signal can improve the quality of computational modeling. Common approaches to model decision noise often assume constant levels of noise or exploration throughout learning (e.g., the ϵ-softmax policy). However, this assumption is not guaranteed to hold – for example, a subject might disengage and lapse into an inattentive phase for a series of trials in the middle of otherwise low-noise performance. Here, we introduce a new, computationally inexpensive method to dynamically estimate the levels of noise fluctuations in choice behavior, under a model assumption that the agent can transition between two discrete latent states (e.g., fully engaged and random). Using simulations, we show that modeling noise levels dynamically instead of statically can substantially improve model fit and parameter estimation, especially in the presence of long periods of noisy behavior, such as prolonged lapses of attention. We further demonstrate the empirical benefits of dynamic noise estimation at the individual and group levels by validating it on four published datasets featuring diverse populations, tasks, and models. Based on the theoretical and empirical evaluation of the method reported in the current work, we expect that dynamic noise estimation will improve modeling in many decision-making paradigms over the static noise estimation method currently used in the modeling literature, while keeping additional model complexity and assumptions minimal.
Title: Dynamic noise estimation: A generalized method for modeling noise fluctuations in decision-making. Journal of Mathematical Psychology, Volume 119, Article 102842.
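The core computation behind such a two-latent-state noise model is a standard hidden-Markov forward pass. The sketch below is a minimal rendering of the idea, not the authors’ code: an ‘engaged’ state emits softmax choices over Q-values, a ‘random’ state emits uniform choices, and the latent state follows a two-state Markov chain.

```python
import math

def softmax_prob(q_values, beta, choice):
    """P(choice) under a softmax with inverse temperature beta."""
    m = max(beta * q for q in q_values)           # stabilize the exponentials
    exps = [math.exp(beta * q - m) for q in q_values]
    return exps[choice] / sum(exps)

def dynamic_noise_loglik(choices, q_trials, beta, stay_engaged, stay_random):
    """Scaled-forward log-likelihood of a choice sequence under two latent
    states ('engaged' -> softmax(Q), 'random' -> uniform) with Markov
    transitions. The 50/50 start is passed through the chain before the
    first emission."""
    f_eng, f_rand = 0.5, 0.5
    loglik = 0.0
    for choice, qs in zip(choices, q_trials):
        # latent-state transition
        f_eng, f_rand = (f_eng * stay_engaged + f_rand * (1 - stay_random),
                         f_eng * (1 - stay_engaged) + f_rand * stay_random)
        # emission, then renormalize (standard scaled forward recursion)
        w_eng = f_eng * softmax_prob(qs, beta, choice)
        w_rand = f_rand / len(qs)
        norm = w_eng + w_rand
        loglik += math.log(norm)
        f_eng, f_rand = w_eng / norm, w_rand / norm
    return loglik
```

As a sanity check on the construction, setting stay_engaged = 1 and stay_random = 0 absorbs the chain into the engaged state, and the likelihood collapses to a plain softmax likelihood.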
Pub Date: 2024-02-14 | DOI: 10.1016/j.jmp.2024.102841
Tadamasa Sawada, Denis Volk
A cusp of a curve in a 2D image is an important feature of the curve for visual perception. It is intuitively obvious that the cusp of the 2D curve can be attributed to an angular feature contained in the 3D scene. It is accidental when a space curve with a cusp in a 3D scene projects to a smooth curve without any cusp in a 2D image. Note that there is also an interesting case in which a smooth space curve without any cusp accidentally projects to a 2D curve with a cusp. The angle of the cusp of the 2D curve is arbitrary, but it is determined by the shape of the space curve. In this study, we show the necessary and sufficient conditions for a smooth space curve to project to a 2D curve with a cusp under both perspective and orthographic projections. We also show how the angle of the cusp is determined, and that these conditions are only satisfied accidentally.
Title: An accidental image feature that appears but not disappears. Journal of Mathematical Psychology, Volume 119, Article 102841.
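A concrete instance of the accidental alignment described above (an example curve of my choosing, not one from the paper): the space curve c(t) = (t², t³, t) is regular everywhere, since its tangent (2t, 3t², 1) never vanishes, yet the orthographic projection that drops the z-coordinate yields the standard cusp (t², t³), because the viewing direction is parallel to the tangent exactly at t = 0.

```python
def tangent(t):
    """Tangent of the regular space curve c(t) = (t**2, t**3, t)."""
    return (2 * t, 3 * t * t, 1.0)

def projected_speed(t):
    """Speed of the orthographic projection of c onto the xy-plane.
    A cusp of the image curve appears where this vanishes, i.e. where the
    3D tangent is parallel to the projection direction (the z-axis)."""
    dx, dy, _ = tangent(t)
    return (dx * dx + dy * dy) ** 0.5
```

Perturbing the viewing direction slightly destroys the alignment, which is why the paper calls such image features accidental.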
Pub Date: 2024-01-16 | DOI: 10.1016/j.jmp.2024.102840
Bo Wang, Jinjin Li
Well-gradedness plays a crucial role in knowledge structure theory: by establishing a systematic and progressive knowledge system, it enhances learning effectiveness and comprehension. Extensive research has been conducted in this domain, resulting in significant findings. This paper explores the properties of well-gradedness in polytomous knowledge structures, shedding light on both classical confirmations and exceptional cases. A key characteristic of well-gradedness is the presence, within a non-empty family, of adjacent elements that are at distance 1 from each other. The study investigates various manifestations of well-gradedness, including its discriminative properties and its appearance in discriminative factorial polytomous structures. Furthermore, intriguing deviations from classical standards in minimal polytomous states are uncovered, revealing unexpected behaviors.
Title: Exploring well-gradedness in polytomous knowledge structures. Journal of Mathematical Psychology, Volume 119, Article 102840.
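In the dichotomous special case, the defining ‘distance 1’ property can be checked directly: a family of knowledge states is well-graded when any two states are joined by a tight path inside the family, stepping one item at a time. A small sketch for dichotomous states only; the polytomous generalization studied in the paper replaces item sets by functions into response levels.

```python
from itertools import combinations

def is_well_graded(family):
    """True iff every pair of states K, L in `family` is connected by a path
    of states in the family whose consecutive members differ in exactly one
    item and whose length equals the symmetric-difference distance |K ^ L|."""
    states = [frozenset(s) for s in family]
    for k, l in combinations(states, 2):
        d = len(k ^ l)
        frontier = {k}
        for _ in range(d):       # a tight path uses exactly d unit steps
            frontier = {t for s in frontier for t in states
                        if len(s ^ t) == 1}
        # each unit step changes the distance to l by exactly 1, so any
        # length-d walk from k that reaches l must shrink it every step
        if l not in frontier:
            return False
    return True
```

The check is exponential in the worst case but transparent, which suits small hand-built structures.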
Pub Date: 2023-12-01 | DOI: 10.1016/j.jmp.2023.102817
Alexander Karpov
The paper studies a variety of domains of preference orders that are closely related to single-peaked preferences. We develop recursive formulas for the number of single-peaked preference profiles and the number of preference profiles that are single-peaked on a circle. The number of Arrow’s single-peaked preference profiles is found for three, four, and five alternatives. Random sampling applications are discussed. For restricted tier preference profiles, a forbidden-subprofiles characterization and an exact enumeration formula are obtained. It is also shown that every one of Fishburn’s preference profiles is single-peaked on a circle, and that Fishburn’s preference profiles cannot be characterized by forbidden subprofiles.
Title: Structure of single-peaked preferences. Journal of Mathematical Psychology, Volume 117, Article 102817.
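For readers new to this domain, single-peakedness has a convenient operational test: a linear order is single-peaked on an axis exactly when, for every k, its k most-preferred alternatives form a contiguous interval of the axis. A minimal sketch of that test (the ‘single-peaked on a circle’ variant counted in the paper checks contiguous arcs instead of intervals):

```python
def is_single_peaked(ranking, axis):
    """True iff `ranking` (alternatives listed best to worst) is
    single-peaked with respect to the linear order `axis`: every top-k set
    must occupy a contiguous interval of axis positions."""
    pos = {alt: i for i, alt in enumerate(axis)}
    top = []
    for alt in ranking:
        top.append(pos[alt])
        if max(top) - min(top) != len(top) - 1:   # gap in the interval
            return False
    return True
```

Enumerating all rankings that pass this test for a fixed axis is one way to cross-check the recursive counting formulas developed in the paper on small numbers of alternatives.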