Pub Date: 2020-06-23 | DOI: 10.1080/17489539.2020.1776936
Title: Language for learning is a promising intervention for promoting generalization across novel stimuli, but methodological concerns limit further conclusions
Authors: Nataly Lim, Fabiola Vargas Londono
Journal: Evidence-Based Communication Assessment and Intervention
Pages: 175-178
Pub Date: 2020-06-15 | DOI: 10.1080/17489539.2020.1765472
Title: Preliminary evidence suggests that functional reinforcement contingencies may result in more rapid acquisition of initial auditory-visual discriminations for some individuals with autism spectrum disorder
Authors: Ciara L. Ousley, Tracy J. Raulston
Pages: 152-159
Abstract: (1) Which is more effective at establishing initial auditory-visual discriminations for individuals with autism spectrum disorder: functional reinforcement or arbitrary reinforcement contingencies...
Pub Date: 2020-06-10 | DOI: 10.1080/17489539.2020.1764204
Title: Systematic review suggests social-communication interventions can be effective when implemented in inclusive schools with children with autism spectrum disorders
Authors: Reem Muharib, R. Lang
Pages: 109-112
Abstract: (1) What interventions have been used to increase social-communication behaviors of students with autism spectrum disorders (ASD) in inclusive elementary school settings? (2) What are the outcomes of social-communication interventions for students with ASD in inclusive elementary school settings? (3) What resources (i.e., personnel, peers, and setting characteristics) were required to implement social-communication interventions for students with ASD in inclusive elementary school settings?
Pub Date: 2020-06-01 | DOI: 10.1080/17489539.2020.1762971
Title: Brief intervention targeting letter sounds, letter naming, and segmenting frequency skills show promise for improving spelling accuracy
Authors: Tonya N. Davis
Pages: 138-145
Abstract: All three participants were exposed to the same three conditions: baseline, intervention, and instruction. The introduction of intervention and instruction was staggered across participants. To transition from baseline to the intervention condition, participants must first have completed a minimum of five daily baseline sessions in which the improvement index indicated maintenance or deterioration of spelling accuracy. Although not explicitly reported, the graph of results appears to indicate that all three participants met this criterion simultaneously, on the fifth daily baseline session. The authors did not report how they determined the order in which participants would be transitioned to the intervention condition. To transition from intervention to the instruction condition, participants must have met accuracy- and frequency-related criteria in three frequency-building practices conducted during the intervention phase.
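The phase-change rule described in the Davis commentary — at least five baseline sessions showing maintenance or deterioration of spelling accuracy before intervention begins — can be sketched as a simple check. This is an illustrative reading of the rule, not the study's actual decision procedure; the function name and the slope-based trend test are assumptions.

```python
import numpy as np

def ready_to_intervene(baseline, min_sessions=5):
    """Hypothetical phase-change check: at least `min_sessions`
    baseline data points, and a fitted trend line indicating
    maintenance or deterioration (slope <= 0) of accuracy."""
    if len(baseline) < min_sessions:
        return False
    slope = np.polyfit(np.arange(len(baseline)), baseline, 1)[0]
    return bool(slope <= 0)

# A flat/deteriorating baseline qualifies; an improving one does not
ready_to_intervene([50, 49, 51, 50, 48])  # True
ready_to_intervene([40, 45, 50, 55, 60])  # False
```

Under such a rule, all participants could indeed meet the criterion on the same (fifth) session, as the commentary infers from the graph.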
Pub Date: 2020-05-14 | DOI: 10.1080/17489539.2020.1759766
Title: Effective reduction in vocal stereotypy across natural settings through response interruption and redirection and the potential for maintained effects
Authors: Catharine Lory, Mandy Rispoli
Pages: 123-130
Abstract: (1) What are the effects of a 5-min response interruption and redirection (RIRD) procedure on vocal stereotypy across settings? (2) What are the immediate and subsequent effects of RIRD on vocal ste...
Pub Date: 2020-05-13 | DOI: 10.1080/17489539.2020.1753292
Title: Controlled data supports the effectiveness of ultrasound feedback in treatment of vocalic /r/ errors in children with speech sound disorders
Authors: Sue Ann S. Lee (Commentary author)
Pages: 118-122
Abstract: Study duration: The data were collected twice a week. However, the total duration of the study is not clear. The number of baseline sessions varied from two to four. Treatment sessions were consistent across participants, with a total of 16 sessions. Eight weeks were devoted to each treatment condition. Between each treatment condition, two midpoint probes were collected. Finally, two to three maintenance probes were collected after the 16 treatment sessions were completed. It is not clear when the midpoint or maintenance probes were collected.
Pub Date: 2020-04-02 | DOI: 10.1080/17489539.2020.1747146
Title: Effect size estimation for combined single-case experimental designs
Authors: Mariola Moeyaert, Diana Akhmedjanova, J. Ferron, S. N. Beretvas, W. Noortgate
Pages: 28-51
Abstract: The methodology of single-case experimental designs (SCEDs) has been expanding its efforts toward rigorous design tactics to address a variety of research questions related to intervention effectiveness. Effect size indicators appropriate to quantify the magnitude and the direction of interventions have been recommended and intensively studied for the major SCED design tactics, such as reversal designs, multiple-baseline designs across participants, and alternating treatment designs. To address complex and more sophisticated research questions, two or more different single-case design tactics can be merged (i.e., "combined SCEDs"). The two most common combined SCEDs are (a) a combination of a multiple-baseline design across participants with an embedded ABAB reversal design, and (b) a combination of a multiple-baseline design across participants with an embedded alternating treatment design. While these combined designs have the potential to address complex research questions and demonstrate functional relations, the development and use of proper effect size indicators lag behind and remain unexplored. Therefore, this study probes into the quantitative analysis of combined SCEDs using regression-based effect size estimates and two-level hierarchical linear modeling. This study is the first demonstration of effect size estimation for combined designs.
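The regression-based, two-level idea in the Moeyaert et al. abstract can be illustrated in miniature: estimate a level change per case by regressing the outcome on a 0/1 phase dummy, then aggregate across cases. The data and the plain average at level 2 are illustrative assumptions; the article's actual models are richer hierarchical linear models with appropriate variance components.

```python
import numpy as np

def case_effect(y, phase):
    """Regression-based effect for one case: OLS of the outcome on a
    0/1 phase dummy; the dummy coefficient is the A-to-B level change."""
    X = np.column_stack([np.ones(len(y)), phase])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1]

# Illustrative data for three cases of a multiple-baseline design
cases = [
    (np.array([2.0, 3, 2, 8, 9, 10]), np.array([0, 0, 0, 1, 1, 1])),
    (np.array([1.0, 2, 2, 2, 7, 8, 9]), np.array([0, 0, 0, 0, 1, 1, 1])),
    (np.array([3.0, 2, 3, 9, 9, 8]), np.array([0, 0, 0, 1, 1, 1])),
]
effects = [case_effect(y, p) for y, p in cases]
overall = np.mean(effects)  # crude stand-in for the level-2 average effect
```

With a simple phase dummy, each case's coefficient equals its B-phase mean minus its A-phase mean; a hierarchical model would additionally weight cases by their precision.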
Pub Date: 2020-04-02 | DOI: 10.1080/17489539.2020.1741842
Title: Exploring new directions in statistical analysis of single-case experimental designs
Authors: Oliver Wendt, D. Rindskopf
Pages: 1-5
Abstract: We are pleased to introduce the first of two special issues dedicated to statistical analysis and meta-analysis of single-case experimental designs (SCEDs). This first issue focuses on the analysis of data from SCEDs, while the forthcoming second issue will document the state of the art in SCED research synthesis. In the field of communication disorders, SCEDs play a pivotal role in the evaluation of treatment effects. The methodology has become increasingly used in clinical research, especially when dealing with very heterogeneous populations such as, for example, autism spectrum and other developmental disorders, behavior disorders, communication disorders, learning disabilities, mental health disorders, and physical impairments. The problem of obtaining homogeneous samples of participants with similar characteristics and the high cost of clinical research make group-comparison designs difficult to implement with these populations. Consequently, SCEDs constitute a considerable percentage of treatment studies across the fields of behavioral, disability, educational, and rehabilitation research (e.g., Schlosser, 2009; Wendt, 2007). A growing array of scholarly disciplines has incorporated SCEDs into their methodological repertoire, which is reflected by over 45 professional, peer-reviewed journals now reporting single-subject experimental research (Anderson, 2001; American Psychological Association, 2002). Despite their widespread use, SCEDs were not always recognized as a valuable source of evidence for the identification of effective clinical treatments (Evans et al., 2014). When the evidence-based practice (EBP) movement originated, the initial emphasis was on randomized controlled trials (RCTs), and on systematic reviews and meta-analyses of RCTs, as the preferred sources of evidence. It took certain efforts to raise interest in and recognition of SCEDs. For example, Horner et al. (2005) pointed out the value of SCEDs in documenting EBP. Schlosser and Raghavendra (2004) explained why SCEDs should be considered Level 2 evidence alongside RCTs and quasi-experimental group designs on hierarchies of evidence for low-incidence populations. Later on, the Oxford Center for Evidence-based Medicine brought attention to small-sample research by classifying the randomized N=1 trial as Level 1 evidence for deriving treatment decisions in individual patients (Howick et al., 2011). Finally, the American Speech-Language-Hearing Association (2020) included SCEDs under Experimental Study Designs suitable to answer questions about the efficacy of interventions. The increasing interest in SCEDs gained further momentum when applied research started to discuss issues of quality criteria and appraisal, as well as consistency in reporting (e.g., Kratochwill et al., 2013; Tate et al., 2014; Wendt & Miller, 2012). Similar to other areas of applied sciences, ...
Correspondence: Oliver Wendt, School of Communication Sciences and Disorders, University of Central Florida, Orlando, FL 32816-2215. E-mail: oliver.wendt@ucf.edu
Pub Date: 2020-03-26 | DOI: 10.1080/17489539.2020.1739048
Title: The impact of response-guided designs on count outcomes in single-case experimental design baselines
Authors: Daniel M. Swan, J. Pustejovsky, Natasha Beretvas
Pages: 82-107
Abstract: In single-case experimental design (SCED) research, researchers often choose when to start treatment based on whether the baseline data collected so far are stable, using what is called a response-guided design. There is evidence that response-guided designs are common, and researchers have described a variety of criteria for assessing stability. With many of these criteria, making judgments about stability could yield data with limited variability, which may have consequences for statistical inference and effect size estimates. However, little research has examined the impact of response-guided design on the resulting data. Drawing on both applied and methodological research, we describe several algorithms as models for response-guided design. We use simulation methods to assess how using a response-guided design impacts the baseline data pattern. The simulations generate baseline data in the form of frequency counts, a common type of outcome in SCEDs. Most of the response-guided algorithms we identified lead to baselines with approximately unbiased mean levels, but nearly all of them lead to underestimates in the baseline variance. We discuss implications for the use of response-guided designs in practice and for the plausibility of specific algorithms as representations of actual research practice.
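One way to see the variance-underestimation result from the Swan et al. abstract is to simulate a response-guided baseline for Poisson counts and compare the variance of the stability-selected points with the nominal Poisson variance. The specific stability rule below (the last five points all within ±25% of their own mean) is a hypothetical stand-in for the algorithms the authors study, not any particular one of them.

```python
import numpy as np

rng = np.random.default_rng(0)

def response_guided_baseline(lam=10, min_n=5, max_n=30, band=0.25):
    """Collect Poisson counts until the last `min_n` points all fall
    within +/- `band` of their own mean (a hypothetical stability
    criterion), or until `max_n` sessions have been run."""
    y = list(rng.poisson(lam, min_n))
    while len(y) < max_n:
        tail = np.array(y[-min_n:])
        m = tail.mean()
        if m > 0 and np.all(np.abs(tail - m) <= band * m):
            break
        y.append(rng.poisson(lam))
    return np.array(y)

# For Poisson(10), an unselected sample variance averages ~10; the
# stability-selected tails should average well below that.
tails = [response_guided_baseline()[-5:] for _ in range(500)]
mean_tail_var = float(np.mean([t.var(ddof=1) for t in tails]))
```

The mean of the selected points stays near the true rate (stability does not systematically shift the level), which mirrors the abstract's finding of approximately unbiased means alongside underestimated variance.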
Pub Date: 2020-03-17 | DOI: 10.1080/17489539.2020.1738625
Title: Estimating effect size with respect to variance in baseline to treatment phases of single-case experimental designs: A Bayesian simulation study
Authors: L. Barnard‐Brak, Laci Watkins, D. Richman
Pages: 69-81
Abstract: The current study examined the relation between the ratio of baseline to treatment sessions and how differences in this ratio can influence estimation of treatment effect size from temporally adjacent baseline and treatment phases of any single-case experimental design (SCED). The current study describes how Bayesian statistical analyses can be used to aggregate treatment outcomes across subjects to meta-analyze SCED data. One-third of all A versus B comparisons (based upon simulated average values) had a bias of 10% or more, with the vast majority of the bias reflecting substantially fewer data points in baseline than in treatment sessions. SCEDs require relatively steady-state responding; thus researchers may run relatively more B sessions than A sessions in the course of visually inspecting graphically depicted data. When the standard deviation for the number of A sessions was approximately twice as large as the B-phase standard deviation or larger, the degree of A-to-B session ratio bias decreased substantially. SCED practitioners can use the results of the current study to determine the potential benefits of running additional baseline or treatment sessions.
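The A-versus-B session imbalance discussed in the Barnard-Brak et al. abstract can be made concrete with a small helper. The abstract does not specify the exact bias metric, so the function below is one plausible operationalization (share of sessions in baseline relative to a balanced 50/50 split); the name and the 10% flag threshold are assumptions for illustration.

```python
def session_ratio_imbalance(n_a, n_b):
    """Hypothetical imbalance index for an AB comparison: how far the
    baseline's share of all sessions departs from a balanced 0.5."""
    total = n_a + n_b
    return abs(n_a / total - 0.5)

# Example: 4 baseline vs 12 treatment sessions -> 0.25 (25 points off
# balance), which would exceed an illustrative 10% flag threshold.
imbalance = session_ratio_imbalance(4, 12)
flagged = imbalance >= 0.10
```

A perfectly balanced design (e.g., 8 A and 8 B sessions) scores 0 and would not be flagged.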