{"title":"Reliability representativeness: How well does coefficient alpha summarize reliability across the score distribution?","authors":"Daniel McNeish, Denis Dumas","doi":"10.3758/s13428-025-02611-8","DOIUrl":null,"url":null,"abstract":"<p><p>Scale scores in psychology studies are commonly accompanied by a reliability coefficient like alpha. Coefficient alpha is an index that summarizes reliability across the entire score distribution, implying equal precision for all scores. However, an underappreciated fact is that reliability can be conditional such that scores in certain parts of the score distribution may be more reliable than others. This conditional perspective of reliability is common in item response theory (IRT), but psychologists are generally not well versed in IRT. Correspondingly, the representativeness of a single summary index like alpha across the entire score distribution can be unclear but is rarely considered. If conditional reliability is fairly homogeneous across the score distribution, coefficient alpha may be sufficiently representative and a useful summary. But, if conditional reliability is heterogeneous across the score distribution, alpha may be unrepresentative and may not align with the reliability of a typical score in the data or with a particularly important score like a cut point where decisions are made. This paper proposes a method, R package, and Shiny application to quantify the potential differences between coefficient alpha and conditional reliability across the score distribution. 
The goal is to facilitate comparisons between conditional reliability and reliability summary indices so that psychologists can contextualize the reliability of their scores more clearly and comprehensively.</p>","PeriodicalId":8717,"journal":{"name":"Behavior Research Methods","volume":"57 3","pages":"93"},"PeriodicalIF":4.6000,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Behavior Research Methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.3758/s13428-025-02611-8","RegionNum":2,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Reliability representativeness: How well does coefficient alpha summarize reliability across the score distribution?
Scale scores in psychology studies are commonly accompanied by a reliability coefficient like alpha. Coefficient alpha is an index that summarizes reliability across the entire score distribution, implying equal precision for all scores. However, an underappreciated fact is that reliability can be conditional, such that scores in certain parts of the score distribution may be more reliable than others. This conditional perspective on reliability is common in item response theory (IRT), but psychologists are generally not well versed in IRT. Consequently, the representativeness of a single summary index like alpha across the entire score distribution can be unclear, but it is rarely considered. If conditional reliability is fairly homogeneous across the score distribution, coefficient alpha may be sufficiently representative and a useful summary. But if conditional reliability is heterogeneous across the score distribution, alpha may be unrepresentative and may not align with the reliability of a typical score in the data or with a particularly important score, such as a cut point where decisions are made. This paper proposes a method, R package, and Shiny application to quantify the potential differences between coefficient alpha and conditional reliability across the score distribution. The goal is to facilitate comparisons between conditional reliability and reliability summary indices so that psychologists can contextualize the reliability of their scores more clearly and comprehensively.
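To make the contrast concrete, the sketch below is a minimal illustration (not the authors' R package or method) of the two quantities being compared: coefficient alpha computed from an item-response matrix, and conditional reliability derived from 2PL item information under the standard IRT result that, with a standard-normal latent trait, reliability at ability level theta equals I(theta) / (I(theta) + 1). The item discriminations and difficulties used in the demo are hypothetical.

```python
import numpy as np

def coefficient_alpha(X):
    """Coefficient alpha for an n_persons x n_items score matrix X.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = X.shape[1]
    item_var_sum = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

def conditional_reliability_2pl(theta, a, b):
    """Conditional reliability at each theta for a 2PL model.

    theta : array of ability values
    a, b  : arrays of item discriminations and difficulties
    Item information is a_i^2 * P * (1 - P); conditional reliability
    under a standard-normal latent trait is I(theta) / (I(theta) + 1).
    """
    P = 1.0 / (1.0 + np.exp(-a[:, None] * (theta[None, :] - b[:, None])))
    info = (a[:, None] ** 2 * P * (1 - P)).sum(axis=0)
    return info / (info + 1.0)

# Hypothetical 2PL item parameters for a 5-item scale
a = np.array([1.2, 0.8, 1.5, 1.0, 0.9])    # discriminations (assumed)
b = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])  # difficulties (assumed)
theta = np.linspace(-4, 4, 161)
r_theta = conditional_reliability_2pl(theta, a, b)
# Reliability peaks near the middle of the score distribution and falls
# off in the tails, which is exactly the heterogeneity the paper targets.
```

Plotting `r_theta` against `theta` with a horizontal line at the alpha value makes the representativeness question visual: a flat curve near alpha suggests alpha is a good summary, while a sharply peaked curve means alpha overstates precision for extreme scores.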
Journal introduction:
Behavior Research Methods publishes articles concerned with the methods, techniques, and instrumentation of research in experimental psychology. The journal focuses particularly on the use of computer technology in psychological research. An annual special issue is devoted to this field.