Title: The impact of response-guided designs on count outcomes in single-case experimental design baselines
Authors: Daniel M. Swan, J. Pustejovsky, Natasha Beretvas
DOI: 10.1080/17489539.2020.1739048
Journal: Evidence-Based Communication Assessment and Intervention, vol. 28, no. 1, pp. 82-107
Publication date: 2020-03-26 (Journal Article; JCR Q2, Social Sciences)
Citation count: 17
Platform: Semantic Scholar
The impact of response-guided designs on count outcomes in single-case experimental design baselines
Abstract In single-case experimental design (SCED) research, researchers often choose when to start treatment based on whether the baseline data collected so far are stable, using what is called a response-guided design. There is evidence that response-guided designs are common, and researchers have described a variety of criteria for assessing stability. With many of these criteria, making judgments about stability could yield data with limited variability, which may have consequences for statistical inference and effect size estimates. However, little research has examined the impact of response-guided design on the resulting data. Drawing on both applied and methodological research, we describe several algorithms as models for response-guided design. We use simulation methods to assess how using a response-guided design impacts the baseline data pattern. The simulations generate baseline data in the form of frequency counts, a common type of outcome in SCEDs. Most of the response-guided algorithms we identified lead to baselines with approximately unbiased mean levels, but nearly all of them lead to underestimates in the baseline variance. We discuss implications for the use of response-guided designs in practice and for the plausibility of specific algorithms as representations of actual research practice.
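The selection mechanism the abstract describes can be illustrated with a small simulation. The sketch below is a toy model, not one of the paper's actual algorithms: it assumes a simple "last k points within a band around their mean" stability rule (the rule, window, band, and Poisson(10) data-generating process are all illustrative assumptions), stops the baseline once that rule fires, and then summarizes the mean and variance of many simulated baselines.

```python
import random
import statistics

def stable(points, window=5, band=0.25):
    """Toy stability criterion (illustrative only, not from the paper):
    the last `window` counts must all lie within +/- band * m of their
    own mean m."""
    recent = points[-window:]
    m = statistics.mean(recent)
    if m == 0:
        return max(recent) == 0
    return all(abs(x - m) <= band * m for x in recent)

def response_guided_baseline(rate=10.0, window=5, band=0.25,
                             max_len=30, rng=random):
    """Collect count outcomes until the stability rule fires (or a cap
    is reached), mimicking a response-guided baseline phase."""
    points = []
    while len(points) < max_len:
        # Poisson draw via Knuth's multiplication algorithm
        # (the stdlib random module has no Poisson generator).
        threshold, k, p = pow(2.718281828459045, -rate), 0, 1.0
        while p > threshold:
            k += 1
            p *= rng.random()
        points.append(k - 1)
        if len(points) >= window and stable(points, window, band):
            break
    return points

rng = random.Random(1)
baselines = [response_guided_baseline(rng=rng) for _ in range(2000)]
mean_of_means = statistics.mean(statistics.mean(b) for b in baselines)
mean_of_vars = statistics.mean(statistics.variance(b) for b in baselines)
# For a Poisson(10) process, the true mean and variance are both 10;
# comparing the summaries above against 10 shows how the stopping rule
# distorts (or preserves) each moment.
print(round(mean_of_means, 2), round(mean_of_vars, 2))
```

Because baselines are only terminated when recent observations happen to cluster together, the retained data are selected for low variability, which is the mechanism behind the variance underestimation the abstract reports; the mean, by contrast, is not directly selected on.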
About the journal:
Evidence-Based Communication Assessment and Intervention (EBCAI) brings together professionals who work in clinical and educational practice as well as researchers from all disciplines to promote evidence-based practice (EBP) in serving individuals with communication impairments. The primary aims of EBCAI are to: Promote evidence-based practice (EBP) in communication assessment and intervention; Appraise the latest and best communication assessment and intervention studies so as to facilitate the use of research findings in clinical and educational practice; Provide a forum for discussions that advance EBP; and Disseminate research on EBP. We target speech-language pathologists, special educators, regular educators, applied behavior analysts, clinical psychologists, physical therapists, and occupational therapists who serve children or adults with communication impairments.