{"title":"改进了标准化效应大小之间差异的置信区间。","authors":"Kevin D Bird","doi":"10.1037/met0000494","DOIUrl":null,"url":null,"abstract":"<p><p>An evaluation of a difference between effect sizes from two dependent variables in a single study is likely to be based on differences between standard scores if raw scores on those variables are not scaled in comparable units of measurement. The standardization used for this purpose is usually sample-based rather than population-based, but the consequences of this distinction for the construction of confidence intervals on differential effects have not been systematically examined. In this article I show that differential effect confidence intervals (CIs) constructed from differences between the standard scores produced by sample-based standardization can be too narrow when those effects are large and dependent variables are highly correlated, particularly in within-subjects designs. I propose a new approach to the construction of differential effect CIs based on differences between adjusted sample-based standard scores that allow conventional CI procedures to produce Bonett-type CIs (Bonett, 2008) on individual effects. Computer simulations show that differential effect CIs constructed from adjusted standard scores can provide much better coverage probabilities than CIs constructed from unadjusted standard scores. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>","PeriodicalId":20782,"journal":{"name":"Psychological methods","volume":"28 5","pages":"1142-1153"},"PeriodicalIF":7.6000,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Improved confidence intervals for differences between standardized effect sizes.\",\"authors\":\"Kevin D Bird\",\"doi\":\"10.1037/met0000494\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>An evaluation of a difference between effect sizes from two dependent variables in a single study is likely to be based on differences between standard scores if raw scores on those variables are not scaled in comparable units of measurement. The standardization used for this purpose is usually sample-based rather than population-based, but the consequences of this distinction for the construction of confidence intervals on differential effects have not been systematically examined. In this article I show that differential effect confidence intervals (CIs) constructed from differences between the standard scores produced by sample-based standardization can be too narrow when those effects are large and dependent variables are highly correlated, particularly in within-subjects designs. I propose a new approach to the construction of differential effect CIs based on differences between adjusted sample-based standard scores that allow conventional CI procedures to produce Bonett-type CIs (Bonett, 2008) on individual effects. Computer simulations show that differential effect CIs constructed from adjusted standard scores can provide much better coverage probabilities than CIs constructed from unadjusted standard scores. 
(PsycInfo Database Record (c) 2023 APA, all rights reserved).</p>\",\"PeriodicalId\":20782,\"journal\":{\"name\":\"Psychological methods\",\"volume\":\"28 5\",\"pages\":\"1142-1153\"},\"PeriodicalIF\":7.6000,\"publicationDate\":\"2023-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Psychological methods\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://doi.org/10.1037/met0000494\",\"RegionNum\":1,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2022/4/11 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"PSYCHOLOGY, MULTIDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Psychological methods","FirstCategoryId":"102","ListUrlMain":"https://doi.org/10.1037/met0000494","RegionNum":1,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/4/11 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"PSYCHOLOGY, MULTIDISCIPLINARY","Score":null,"Total":0}
Improved confidence intervals for differences between standardized effect sizes.
An evaluation of a difference between effect sizes from two dependent variables in a single study is likely to be based on differences between standard scores if raw scores on those variables are not scaled in comparable units of measurement. The standardization used for this purpose is usually sample-based rather than population-based, but the consequences of this distinction for the construction of confidence intervals on differential effects have not been systematically examined. In this article I show that differential effect confidence intervals (CIs) constructed from differences between the standard scores produced by sample-based standardization can be too narrow when those effects are large and dependent variables are highly correlated, particularly in within-subjects designs. I propose a new approach to the construction of differential effect CIs based on differences between adjusted sample-based standard scores that allow conventional CI procedures to produce Bonett-type CIs (Bonett, 2008) on individual effects. Computer simulations show that differential effect CIs constructed from adjusted standard scores can provide much better coverage probabilities than CIs constructed from unadjusted standard scores. (PsycInfo Database Record (c) 2023 APA, all rights reserved).
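The abstract's central claim — that conventional CIs built from differences between sample-based standard scores undercover when effects are large and the dependent variables are highly correlated — can be illustrated with a small coverage simulation. The sketch below implements only the *unadjusted* construction the article critiques (per-subject standard scores divided by each variable's sample SD, then a paired-t CI on their differences); the adjusted-score procedure and the Bonett-type (2008) CIs proposed in the article are not reproduced here. All parameter values (means, correlation, sample size) are illustrative assumptions, not taken from the article.

```python
import numpy as np
from scipy import stats


def differential_effect_ci(x, y, alpha=0.05):
    """Naive CI on a differential standardized effect (unadjusted version).

    Each variable is divided by its own sample SD, so mean(zx) - mean(zy)
    estimates mu_x/sigma_x - mu_y/sigma_y. A conventional paired-t interval
    is then placed on the per-subject differences. This is the construction
    the article shows can be too narrow, not its proposed remedy.
    """
    zx = x / x.std(ddof=1)            # sample-based standard scores for x
    zy = y / y.std(ddof=1)            # sample-based standard scores for y
    d = zx - zy                       # per-subject differential standard scores
    n = len(d)
    tcrit = stats.t.ppf(1 - alpha / 2, n - 1)
    half = tcrit * d.std(ddof=1) / np.sqrt(n)
    return d.mean() - half, d.mean() + half


def coverage(mu=(2.0, 1.0), rho=0.9, n=20, reps=2000, seed=0):
    """Monte Carlo coverage of the naive CI under bivariate normality.

    With unit population SDs, the true differential effect is simply
    mu[0] - mu[1]. Large means and high rho mimic the 'large effects,
    highly correlated DVs' regime discussed in the abstract.
    """
    rng = np.random.default_rng(seed)
    cov_mat = np.array([[1.0, rho], [rho, 1.0]])
    delta = mu[0] - mu[1]             # true differential effect
    hits = 0
    for _ in range(reps):
        sample = rng.multivariate_normal(mu, cov_mat, size=n)
        lo, hi = differential_effect_ci(sample[:, 0], sample[:, 1])
        hits += lo <= delta <= hi
    return hits / reps
```

Running `coverage()` with large standardized means and a high correlation tends to return a proportion below the nominal .95, consistent with the undercoverage the article reports for unadjusted standard scores; the exact shortfall depends on the chosen parameters.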
Journal introduction:
Psychological Methods is devoted to the development and dissemination of methods for collecting, analyzing, understanding, and interpreting psychological data. Its purpose is the dissemination of innovations in research design, measurement, methodology, and quantitative and qualitative analysis to the psychological community; its further purpose is to promote effective communication about related substantive and methodological issues. The audience is expected to be diverse and to include those who develop new procedures, those who are responsible for undergraduate and graduate training in design, measurement, and statistics, as well as those who employ those procedures in research.