Pub Date : 2025-02-02 DOI: 10.1177/01466216251316277
Semi-Parametric Item Response Theory With O'Sullivan Splines for Item Responses and Response Time.
Chen-Wei Liu
Response time (RT) has become an essential resource for improving the estimation accuracy of latent traits and item parameters in educational testing. Most item response theory (IRT) approaches are based on parametric RT models. However, because test takers may alter their behavior during a test due to motivation or strategy shifts, fatigue, or other causes, parametric IRT models are unlikely to capture such subtle and nonlinear information. In this work, we propose a novel semi-parametric IRT model with O'Sullivan splines to accommodate flexible mean RT shapes and explore the underlying nonlinear relationships between latent traits and RT. A simulation study was conducted to demonstrate the substantial improvement in parameter estimation achieved by the new model, as well as the biases and measurement errors incurred by using parametric models. Using this model, a dataset of mathematics test scores and RT from the Programme for International Student Assessment was analyzed to demonstrate the evident nonlinearity and to compare the proposed model with existing models in terms of model fit. The findings presented in this study indicate the promise of the new approach, suggesting its potential as an additional psychometric tool to enhance test reliability and reduce measurement errors.
{"title":"Semi-Parametric Item Response Theory With O'Sullivan Splines for Item Responses and Response Time.","authors":"Chen-Wei Liu","doi":"10.1177/01466216251316277","DOIUrl":"10.1177/01466216251316277","url":null,"abstract":"<p><p>Response time (RT) has been an essential resource for supplementing the estimation accuracy of latent traits and item parameters in educational testing. Most item response theory (IRT) approaches are based on parametric RT models. However, since test takers may alter their behaviors during a test due to motivation or strategy shifts, fatigue, or other causes, parametric IRT models are unlikely to capture such subtle and nonlinear information. In this work, we propose a novel semi-parametric IRT model with O'Sullivan splines to accommodate the flexible mean RT shapes and explore the underlying nonlinear relationships between latent traits and RT. A simulation study was conducted to demonstrate the substantial improvement in parameter estimation achieved by the new model, as well as the detriment of using parametric models in terms of biases and measurement errors. Using this model, a dataset of mathematics test scores and RT from the Programme for International Student Assessment was analyzed to demonstrate the evident nonlinearity and to compare the proposed model with existing models in terms of model fitting. The findings presented in this study indicate the promising nature of the new approach, suggesting its potential as an additional psychometric tool to enhance test reliability and reduce measurement errors.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316277"},"PeriodicalIF":1.0,"publicationDate":"2025-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11789044/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143190883","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-28 DOI: 10.1177/01466216251316276
Compound Optimal Design for Online Item Calibration Under the Two-Parameter Logistic Model.
Lihong Song, Wenyi Wang
Under the theory of sequential design, a compound optimal design with two optimality criteria can be used to calibrate the item parameters of an item response theory model efficiently. To this end, a compound optimal design is proposed for the simultaneous estimation of the item difficulty and discrimination parameters of the two-parameter logistic model in computerized testing; the design adaptively concentrates on optimizing whichever parameter is harder to estimate. Using an acceptance probability, the compound optimal design provides ability design points that optimize the difficulty and discrimination parameters, respectively. Simulation and real-data analyses showed that the compound optimal design outperformed the D-optimal and random designs in the recovery of both discrimination and difficulty parameters.
{"title":"Compound Optimal Design for Online Item Calibration Under the Two-Parameter Logistic Model.","authors":"Lihong Song, Wenyi Wang","doi":"10.1177/01466216251316276","DOIUrl":"10.1177/01466216251316276","url":null,"abstract":"<p><p>Under the theory of sequential design, compound optimal design with two optimality criteria can be used to solve the problem of efficient calibration of item parameters of item response theory model. In order to efficiently calibrate item parameters in computerized testing, a compound optimal design is proposed for the simultaneous estimation of item difficulty and discrimination parameters under the two-parameter logistic model, which adaptively focuses on optimizing the parameter which is difficult to estimate. The compound optimal design using the acceptance probability can provide ability design points to optimize the item difficulty and discrimination parameters, respectively. Simulation and real data analysis studies showed that the compound optimal design outperformed than the D-optimal and random design in terms of the recovery of both discrimination and difficulty parameters.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316276"},"PeriodicalIF":1.0,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11775943/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-27 DOI: 10.1177/01466216251316278
Comparing Approaches to Estimating Person Parameters for the MUPP Model.
David M LaHuis, Caitlin E Blackmore, Gage M Ammons
This study compared maximum a posteriori (MAP), expected a posteriori (EAP), and Markov chain Monte Carlo (MCMC) approaches to computing person scores from the Multi-Unidimensional Pairwise Preference (MUPP) model. The MCMC approach used the No-U-Turn Sampler (NUTS). Results suggested that EAP with fully crossed quadrature and NUTS outperformed the other approaches when there were fewer dimensions, and that NUTS produced the most accurate estimates in higher-dimensional conditions. The number of items per dimension had the largest effect on person parameter recovery.
{"title":"Comparing Approaches to Estimating Person Parameters for the MUPP Model.","authors":"David M LaHuis, Caitlin E Blackmore, Gage M Ammons","doi":"10.1177/01466216251316278","DOIUrl":"10.1177/01466216251316278","url":null,"abstract":"<p><p>This study compared maximum a posteriori (MAP), expected a posteriori (EAP), and Markov Chain Monte Carlo (MCMC) approaches to computing person scores from the Multi-Unidimensional Pairwise Preference Model. The MCMC approach used the No-U-Turn sampling (NUTS). Results suggested the EAP with fully crossed quadrature and the NUTS outperformed the others when there were fewer dimensions. In addition, the NUTS produced the most accurate estimates in larger dimension conditions. The number of items per dimension had the largest effect on person parameter recovery.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316278"},"PeriodicalIF":1.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11775930/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-27 DOI: 10.1177/01466216251316559
Application of Bayesian Decision Theory in Detecting Test Fraud.
Sandip Sinharay, Matthew S Johnson
This article suggests a new approach based on Bayesian decision theory (e.g., Cronbach & Gleser, 1965; Ferguson, 1967) for detecting test fraud. The approach leads to a simple decision rule that involves computing the posterior probability that an examinee committed test fraud given the data. The suggested approach was applied to a real data set involving actual test fraud.
{"title":"Application of Bayesian Decision Theory in Detecting Test Fraud.","authors":"Sandip Sinharay, Matthew S Johnson","doi":"10.1177/01466216251316559","DOIUrl":"https://doi.org/10.1177/01466216251316559","url":null,"abstract":"<p><p>This article suggests a new approach based on Bayesian decision theory (e.g., Cronbach & Gleser, 1965; Ferguson, 1967) for detection of test fraud. The approach leads to a simple decision rule that involves the computation of the posterior probability that an examinee committed test fraud given the data. The suggested approach was applied to a real data set that involved actual test fraud.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316559"},"PeriodicalIF":1.0,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11773507/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143068974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-26 DOI: 10.1177/01466216251316275
R Package for Calculating Estimators of the Proportion of Explained Variance and Standardized Regression Coefficients in Multiply Imputed Datasets.
Joost R van Ginkel, Julian D Karch
{"title":"R Package for Calculating Estimators of the Proportion of Explained Variance and Standardized Regression Coefficients in Multiply Imputed Datasets.","authors":"Joost R van Ginkel, Julian D Karch","doi":"10.1177/01466216251316275","DOIUrl":"10.1177/01466216251316275","url":null,"abstract":"","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316275"},"PeriodicalIF":1.0,"publicationDate":"2025-01-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11770685/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143060765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-01-24 DOI: 10.1177/01466216251316282
An Experimental Design to Investigate Item Parameter Drift.
Peter Baldwin, Irina Grabovsky, Kimberly A Swygert, Thomas Fogle, Pilar Reid, Brian E Clauser
Methods for detecting item parameter drift may be inadequate when every exposed item is at risk of drift. To address this scenario, a strategy for detecting item parameter drift is proposed that uses only unexposed items, deployed in a stratified random manner within an experimental design. The proposed method is illustrated by investigating unexpected score increases on a high-stakes licensure exam. Results for this example were suggestive of item parameter drift but not significant at the .05 level.
{"title":"An Experimental Design to Investigate Item Parameter Drift.","authors":"Peter Baldwin, Irina Grabovsky, Kimberly A Swygert, Thomas Fogle, Pilar Reid, Brian E Clauser","doi":"10.1177/01466216251316282","DOIUrl":"10.1177/01466216251316282","url":null,"abstract":"<p><p>Methods for detecting item parameter drift may be inadequate when every exposed item is at risk for drift. To address this scenario, a strategy for detecting item parameter drift is proposed that uses only unexposed items deployed in a stratified random method within an experimental design. The proposed method is illustrated by investigating unexpected score increases on a high-stakes licensure exam. Results for this example were suggestive of item parameter drift but not significant at the .05 level.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216251316282"},"PeriodicalIF":1.0,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11760077/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143048347","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-30 DOI: 10.1177/01466216241310599
Adaptive Measurement of Change in the Context of Item Parameter Drift.
Allison W Cooperman, Ming Him Tai, Joseph N DeWeese, David J Weiss
Adaptive measurement of change (AMC) uses computerized adaptive testing (CAT) to measure and test the significance of intraindividual change on one or more latent traits. Extant AMC research has assumed that item parameter values are constant across testing occasions. Yet item parameters might change over time, a phenomenon termed item parameter drift (IPD). The current study examined AMC's performance in the context of IPD with unidimensional, dichotomous CATs across two testing occasions. A Monte Carlo simulation revealed that AMC false and true positive rates were primarily affected by changes in the difficulty parameter. False positive rates were related to the location of the drift items on the latent trait continuum, as the administration of more drift items spuriously increased the magnitude of estimated trait change. Moreover, true positive rates depended upon an interaction between the direction of difficulty parameter drift and the latent trait change trajectory. A follow-up simulation further showed that the number of CAT items with parameter drift impacted AMC false and true positive rates, with these relationships moderated by IPD characteristics and the latent trait change trajectory. It is recommended that test administrators confirm the absence of IPD prior to using AMC for measuring intraindividual change with educational and psychological tests.
{"title":"Adaptive Measurement of Change in the Context of Item Parameter Drift.","authors":"Allison W Cooperman, Ming Him Tai, Joseph N DeWeese, David J Weiss","doi":"10.1177/01466216241310599","DOIUrl":"10.1177/01466216241310599","url":null,"abstract":"<p><p>Adaptive measurement of change (AMC) uses computerized adaptive testing (CAT) to measure and test the significance of intraindividual change on one or more latent traits. The extant AMC research has so far assumed that item parameter values are constant across testing occasions. Yet item parameters might change over time, a phenomenon termed item parameter drift (IPD). The current study examined AMC's performance in the context of IPD with unidimensional, dichotomous CATs across two testing occasions. A Monte Carlo simulation revealed that AMC false and true positive rates were primarily affected by changes in the difficulty parameter. False positive rates were related to the location of the drift items relative to the latent trait continuum, as the administration of more drift items spuriously increased the magnitude of estimated trait change. Moreover, true positive rates depended upon an interaction between the direction of difficulty parameter drift and the latent trait change trajectory. A follow-up simulation further showed that the number of items in the CAT with parameter drift impacted AMC false and true positive rates, with these relationships moderated by IPD characteristics and the latent trait change trajectory. It is recommended that test administrators confirm the absence of IPD prior to using AMC for measuring intraindividual change with educational and psychological tests.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216241310599"},"PeriodicalIF":1.0,"publicationDate":"2024-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11683792/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142915981","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-26 DOI: 10.1177/01466216241310598
Inference of Correlations Among Testlet Effects: A Latent Variable Selection Method.
Xin Xu, Jinxin Guo, Tao Xin
In psychological and educational measurement, the testlet-based test is a common and popular format, especially in large-scale assessments. In modeling testlet effects, a standard bifactor model, as a common strategy, assumes the testlet effects and the main effect to be fully independent. However, it is difficult to construct item clusters that satisfy this independence assumption perfectly. To address this issue, correlations among testlets can be taken into account when fitting the data, while preserving a practically interpretable sparse loading matrix. In this paper, we propose data-driven learning of the significant correlations in the covariance matrix through a latent variable selection method. Under the proposed method, a regularization is imposed on the weak correlations of the extended bifactor model, and a stochastic expectation-maximization algorithm is employed for efficient computation. Results from simulation studies show the consistency of the proposed method in selecting significant correlations. As an illustration, empirical data from the 2015 Programme for International Student Assessment are analyzed using the proposed method.
{"title":"Inference of Correlations Among Testlet Effects: A Latent Variable Selection Method.","authors":"Xin Xu, Jinxin Guo, Tao Xin","doi":"10.1177/01466216241310598","DOIUrl":"10.1177/01466216241310598","url":null,"abstract":"<p><p>In psychological and educational measurement, a testlet-based test is a common and popular format, especially in some large-scale assessments. In modeling testlet effects, a standard bifactor model, as a common strategy, assumes different testlet effects and the main effect to be fully independently distributed. However, it is difficult to establish perfectly independent clusters as this assumption. To address this issue, correlations among testlets could be taken into account in fitting data. Moreover, one may desire to maintain a good practical interpretation of the sparse loading matrix. In this paper, we propose data-driven learning of significant correlations in the covariance matrix through a latent variable selection method. Under the proposed method, a regularization is performed on the weak correlations for the extended bifactor model. Further, a stochastic expectation maximization algorithm is employed for efficient computation. Results from simulation studies show the consistency of the proposed method in selecting significant correlations. Empirical data from the 2015 Program for International Student Assessment is analyzed using the proposed method as an example.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216241310598"},"PeriodicalIF":1.0,"publicationDate":"2024-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11670239/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142903933","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-20 DOI: 10.1177/01466216241310600
An Information Manifold Perspective for Analyzing Test Data.
James O Ramsay, Juan Li, Joakim Wallmark, Marie Wiberg
Modifications of current psychometric models for analyzing test data are proposed that produce an additive scale measure of information. This information measure is a one-dimensional space curve or curved surface manifold that is invariant across varying manifold indexing systems. The arc length along a curve manifold is used because it is an additive metric with a defined zero and a version of the bit as its unit. This property, referred to here as the scope of the test or an item, facilitates the evaluation of graphs and numerical summaries. The measurement power of the test is defined by the length of the manifold, and the performance or experiential level of a person by a position along the curve. In this study, we also use all of the information in the items, including that contributed by the distractors. Test data from a large-scale college admissions test are used to illustrate the test information manifold perspective and to compare it with the well-known nominal item response theory model. It is illustrated that the use of information theory opens a vista of new ways of assessing item performance and inter-item dependency, as well as test takers' knowledge.
{"title":"An Information Manifold Perspective for Analyzing Test Data.","authors":"James O Ramsay, Juan Li, Joakim Wallmark, Marie Wiberg","doi":"10.1177/01466216241310600","DOIUrl":"10.1177/01466216241310600","url":null,"abstract":"<p><p>Modifications of current psychometric models for analyzing test data are proposed that produce an additive scale measure of information. This information measure is a one-dimensional space curve or curved surface manifold that is invariant across varying manifold indexing systems. The arc length along a curve manifold is used as it is an additive metric having a defined zero and a version of the bit as a unit. This property, referred to here as the scope of the test or an item, facilitates the evaluation of graphs and numerical summaries. The measurement power of the test is defined by the length of the manifold, and the performance or experiential level of a person by a position along the curve. In this study, we also use all information from the items including the information from the distractors. Test data from a large-scale college admissions test are used to illustrate the test information manifold perspective and to compare it with the well-known item response theory nominal model. It is illustrated that the use of information theory opens a vista of new ways of assessing item performance and inter-item dependency, as well as test takers' knowledge.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216241310600"},"PeriodicalIF":1.0,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11662344/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142878097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-12-19 DOI: 10.1177/01466216241310602
A Generalized Multi-Detector Combination Approach for Differential Item Functioning Detection.
Shan Huang, Hidetoki Ishii
Many studies on differential item functioning (DIF) detection rely on single detection methods (SDMs), each of which rests on specific assumptions that may not always hold. Using an inappropriate SDM can diminish the accuracy of DIF detection. To address this limitation, a novel multi-detector combination (MDC) approach is proposed. Unlike SDMs, MDC evaluates the relevance of different SDMs under various test conditions and integrates them using supervised learning, thereby mitigating the risk of selecting a suboptimal SDM for DIF detection. This study aimed to validate the accuracy of the MDC approach by applying five types of SDMs and four distinct supervised learning methods in MDC modeling. Model performance was assessed using the area under the curve (AUC), a comprehensive measure of a model's ability to distinguish between classes across all threshold levels, with higher values indicating higher accuracy. The MDC methods consistently achieved higher average AUC values than the SDMs on both matched test sets (where test conditions align with the training set) and unmatched test sets, and MDC outperformed all SDMs under each test condition. These findings indicate that MDC is accurate and robust across diverse test conditions, establishing it as a viable method for practical DIF detection.
{"title":"A Generalized Multi-Detector Combination Approach for Differential Item Functioning Detection.","authors":"Shan Huang, Hidetoki Ishii","doi":"10.1177/01466216241310602","DOIUrl":"10.1177/01466216241310602","url":null,"abstract":"<p><p>Many studies on differential item functioning (DIF) detection rely on single detection methods (SDMs), each of which necessitates specific assumptions that may not always be validated. Using an inappropriate SDM can lead to diminished accuracy in DIF detection. To address this limitation, a novel multi-detector combination (MDC) approach is proposed. Unlike SDMs, MDC effectively evaluates the relevance of different SDMs under various test conditions and integrates them using supervised learning, thereby mitigating the risk associated with selecting a suboptimal SDM for DIF detection. This study aimed to validate the accuracy of the MDC approach by applying five types of SDMs and four distinct supervised learning methods in MDC modeling. Model performance was assessed using the area under the curve (AUC), which provided a comprehensive measure of the ability of the model to distinguish between classes across all threshold levels, with higher AUC values indicating higher accuracy. The MDC methods consistently achieved higher average AUC values compared to SDMs in both matched test sets (where test conditions align with the training set) and unmatched test sets. Furthermore, MDC outperformed all SDMs under each test condition. These findings indicated that MDC is highly accurate and robust across diverse test conditions, establishing it as a viable method for practical DIF detection.</p>","PeriodicalId":48300,"journal":{"name":"Applied Psychological Measurement","volume":" ","pages":"01466216241310602"},"PeriodicalIF":1.0,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11660104/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142878074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}