{"title":"Comparison of different reliability estimation methods for single-item assessment: a simulation study.","authors":"Sijun Zhang, Kimberly Colvin","doi":"10.3389/fpsyg.2024.1482016","DOIUrl":null,"url":null,"abstract":"<p><p>Single-item assessments have recently become popular in various fields, and researchers have developed methods for estimating the reliability of single-item assessments, some based on factor analysis and correction for attenuation, and others using the double monotonicity model, Guttman's λ<sub>6</sub>, or the latent class model. However, no empirical study has investigated which method best estimates the reliability of single-item assessments. This study investigated this question using a simulation study. To represent assessments as they are found in practice, the simulation study varied several aspects: the item discrimination parameter, the test length of the multi-item assessment of the same construct, the sample size, and the correlation between the single-item assessment and the multi-item assessment of the same construct. The results suggest that by using the method based on the double monotonicity model and the method based on correction for attenuation simultaneously, researchers can obtain the most precise estimate of the range of reliability of a single-item assessment in 94.44% of cases. The test length of a multi-item assessment of the same construct, the item discrimination parameter, the sample size, and the correlation between the single-item assessment and the multi-item assessment of the same construct did not influence the choice of method choice.</p>","PeriodicalId":12525,"journal":{"name":"Frontiers in Psychology","volume":"15 ","pages":"1482016"},"PeriodicalIF":2.6000,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11568483/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Frontiers in Psychology","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3389/fpsyg.2024.1482016","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
Single-item assessments have recently become popular in various fields, and researchers have developed methods for estimating their reliability, some based on factor analysis and correction for attenuation, and others using the double monotonicity model, Guttman's λ6, or the latent class model. However, no empirical study has investigated which method best estimates the reliability of single-item assessments; this study addressed that question through a simulation study. To represent assessments as they are found in practice, the simulation varied several factors: the item discrimination parameter, the test length of the multi-item assessment of the same construct, the sample size, and the correlation between the single-item assessment and the multi-item assessment of the same construct. The results suggest that by using the method based on the double monotonicity model and the method based on correction for attenuation simultaneously, researchers can obtain the most precise estimate of the range of reliability of a single-item assessment in 94.44% of cases. The test length of the multi-item assessment of the same construct, the item discrimination parameter, the sample size, and the correlation between the single-item assessment and the multi-item assessment did not influence the choice of method.
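Two of the methods named in the abstract have well-known closed forms, and a minimal sketch may help make them concrete. The Python snippet below is an illustration only, not the authors' code: the function names and example values are hypothetical, and the correction-for-attenuation estimate assumes the single item and the multi-item scale measure the same construct (true correlation near 1), in line with the usual derivation of that approach.

import numpy as np

def single_item_reliability_attenuation(r_xy, rel_y):
    # Correction for attenuation: if the single item x and the multi-item
    # scale y tap the same construct, then r_xy ~= sqrt(rel_x * rel_y),
    # so the single-item reliability is rel_x = r_xy**2 / rel_y.
    return r_xy ** 2 / rel_y

def guttman_lambda6(item_scores):
    # Guttman's lambda-6 for a multi-item scale (rows = persons, columns = items):
    # 1 minus the sum of each item's residual variance (from regressing the
    # item on all other items) divided by the variance of the total score.
    x = np.asarray(item_scores, dtype=float)
    cov = np.cov(x, rowvar=False)
    residual_var = 1.0 / np.diag(np.linalg.inv(cov))  # per-item regression residual variance
    total_var = np.var(x.sum(axis=1), ddof=1)
    return 1.0 - residual_var.sum() / total_var

# Hypothetical example: observed correlation of 0.60 between the single item
# and a multi-item scale whose reliability is 0.85.
print(single_item_reliability_attenuation(0.60, 0.85))  # about 0.42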
Journal overview:
Frontiers in Psychology is the largest journal in its field, publishing rigorously peer-reviewed research across the psychological sciences, from clinical research to cognitive science, from perception to consciousness, from imaging studies to human factors, and from animal cognition to social psychology. Field Chief Editor Axel Cleeremans at the Free University of Brussels is supported by an outstanding Editorial Board of international researchers. This multidisciplinary open-access journal is at the forefront of disseminating and communicating scientific knowledge and impactful discoveries to researchers, academics, clinicians and the public worldwide. The journal publishes the best research across the entire field of psychology. Today, psychological science is becoming increasingly important at all levels of society, from the treatment of clinical disorders to our basic understanding of how the mind works. It is highly interdisciplinary, borrowing questions from philosophy, methods from neuroscience and insights from clinical practice, all with the goal of furthering our grasp of human nature and society, as well as our ability to develop new intervention methods.