{"title":"探索单例实验设计统计分析的新方向","authors":"Oliver Wendt, D. Rindskopf","doi":"10.1080/17489539.2020.1741842","DOIUrl":null,"url":null,"abstract":"We are pleased to introduce the first of two special issues dedicated to statistical andmetaanalysis of single-case experimental designs (SCEDs). This first issue is focused on the analysis of data from SCEDs while the forthcoming second issue will document the state-of-the-art in SCED research synthesis. In the field of communication disorders, SCEDs play a pivotal role for the evaluation of treatment effects. The methodology has become increasingly used in clinical research, especially when dealing with very heterogeneous populations such as, for example, autism spectrum and other developmental disorders, behavior disorders, communication disorders, learning disabilities, mental health disorders, and physical impairments. The problem of obtaining homogeneous samples of participants with similar characteristics and the high cost of clinical research make groupcomparison designs difficult to implement with these populations. Consequently, SCEDs constitute a considerable percentage of treatment studies across the fields of behavioral, disability, educational, and rehabilitation research (e.g., Schlosser, 2009; Wendt, 2007). A growing array of scholarly disciplines has incorporated SCEDs into their methodological repertoire, which is reflected by over 45 professional, peer-reviewed journals now reporting single-subject experimental research (Anderson, 2001; American Psychological Association, 2002). Despite the widespread use, SCEDs were not always recognized as a valuable source of evidence for the identification of effective clinical treatments (Evans et al., 2014). When the evidence-based practice (EBP) movement originated, the initial emphasis was on randomized-controlled trials (RCTs), and systematic reviews and metaanalyses of RCTs as preferred sources of evidence. 
It took certain efforts to raise the interest in and recognition of SCEDs, for example: Horner et al. (2005) pointed out the value of SCEDs in documenting EBP. Schlosser and Raghavendra (2004) explained why SCEDs should be considered Level 2 evidence alongside RCTs and quasi-experimental group designs on hierarchies of evidence for low incidence populations. Later on, the Oxford Center for Evidence-based Medicine brought attention to small sample research by classifying the randomizedN=1 trial as Level 1 evidence for deriving treatment decisions in individual patients (Howick et al., 2011). Finally, the American Speech-LanguageHearing Association (2020) included SCEDs under Experimental Study Designs suitable to answer questions about the efficacy of interventions. The increasing interest in SCEDs gained further momentum when applied research started to discuss issues of quality criteria and appraisal, as well as consistency in reporting (e.g., Kratochwill et al., 2013; Tate et al., 2014; Wendt & Miller, 2012). Similar to other areas of applied sciences, For correspondence: Oliver Wendt, School of Communication Sciences and Disorders, University of Central Florida, Orlando, FL 32816-2215. E-mail: oliver.wendt@ucf.edu Evidence-Based Communication Assessment and Intervention, 2020 Vol. 14, Nos. 1–2, 1–5, https://doi.org/10.1080/17489539.2020.1741842","PeriodicalId":39977,"journal":{"name":"Evidence-Based Communication Assessment and Intervention","volume":"36 1","pages":"1 - 5"},"PeriodicalIF":0.0000,"publicationDate":"2020-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"6","resultStr":"{\"title\":\"Exploring new directions in statistical analysis of single-case experimental designs\",\"authors\":\"Oliver Wendt, D. 
Rindskopf\",\"doi\":\"10.1080/17489539.2020.1741842\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"We are pleased to introduce the first of two special issues dedicated to statistical andmetaanalysis of single-case experimental designs (SCEDs). This first issue is focused on the analysis of data from SCEDs while the forthcoming second issue will document the state-of-the-art in SCED research synthesis. In the field of communication disorders, SCEDs play a pivotal role for the evaluation of treatment effects. The methodology has become increasingly used in clinical research, especially when dealing with very heterogeneous populations such as, for example, autism spectrum and other developmental disorders, behavior disorders, communication disorders, learning disabilities, mental health disorders, and physical impairments. The problem of obtaining homogeneous samples of participants with similar characteristics and the high cost of clinical research make groupcomparison designs difficult to implement with these populations. Consequently, SCEDs constitute a considerable percentage of treatment studies across the fields of behavioral, disability, educational, and rehabilitation research (e.g., Schlosser, 2009; Wendt, 2007). A growing array of scholarly disciplines has incorporated SCEDs into their methodological repertoire, which is reflected by over 45 professional, peer-reviewed journals now reporting single-subject experimental research (Anderson, 2001; American Psychological Association, 2002). Despite the widespread use, SCEDs were not always recognized as a valuable source of evidence for the identification of effective clinical treatments (Evans et al., 2014). When the evidence-based practice (EBP) movement originated, the initial emphasis was on randomized-controlled trials (RCTs), and systematic reviews and metaanalyses of RCTs as preferred sources of evidence. It took certain efforts to raise the interest in and recognition of SCEDs, for example: Horner et al. 
(2005) pointed out the value of SCEDs in documenting EBP. Schlosser and Raghavendra (2004) explained why SCEDs should be considered Level 2 evidence alongside RCTs and quasi-experimental group designs on hierarchies of evidence for low incidence populations. Later on, the Oxford Center for Evidence-based Medicine brought attention to small sample research by classifying the randomizedN=1 trial as Level 1 evidence for deriving treatment decisions in individual patients (Howick et al., 2011). Finally, the American Speech-LanguageHearing Association (2020) included SCEDs under Experimental Study Designs suitable to answer questions about the efficacy of interventions. The increasing interest in SCEDs gained further momentum when applied research started to discuss issues of quality criteria and appraisal, as well as consistency in reporting (e.g., Kratochwill et al., 2013; Tate et al., 2014; Wendt & Miller, 2012). Similar to other areas of applied sciences, For correspondence: Oliver Wendt, School of Communication Sciences and Disorders, University of Central Florida, Orlando, FL 32816-2215. E-mail: oliver.wendt@ucf.edu Evidence-Based Communication Assessment and Intervention, 2020 Vol. 14, Nos. 
1–2, 1–5, https://doi.org/10.1080/17489539.2020.1741842\",\"PeriodicalId\":39977,\"journal\":{\"name\":\"Evidence-Based Communication Assessment and Intervention\",\"volume\":\"36 1\",\"pages\":\"1 - 5\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-04-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"6\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evidence-Based Communication Assessment and Intervention\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/17489539.2020.1741842\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"Social Sciences\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evidence-Based Communication Assessment and Intervention","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/17489539.2020.1741842","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
引用次数: 6
Abstract
We are pleased to introduce the first of two special issues dedicated to statistical and meta-analysis of single-case experimental designs (SCEDs). This first issue focuses on the analysis of data from SCEDs, while the forthcoming second issue will document the state of the art in SCED research synthesis.

In the field of communication disorders, SCEDs play a pivotal role in the evaluation of treatment effects. The methodology has become increasingly used in clinical research, especially with very heterogeneous populations such as autism spectrum and other developmental disorders, behavior disorders, communication disorders, learning disabilities, mental health disorders, and physical impairments. The difficulty of obtaining homogeneous samples of participants with similar characteristics, together with the high cost of clinical research, makes group-comparison designs difficult to implement with these populations. Consequently, SCEDs constitute a considerable percentage of treatment studies across the fields of behavioral, disability, educational, and rehabilitation research (e.g., Schlosser, 2009; Wendt, 2007). A growing array of scholarly disciplines has incorporated SCEDs into their methodological repertoire, reflected by over 45 professional, peer-reviewed journals now reporting single-subject experimental research (Anderson, 2001; American Psychological Association, 2002).

Despite their widespread use, SCEDs were not always recognized as a valuable source of evidence for the identification of effective clinical treatments (Evans et al., 2014). When the evidence-based practice (EBP) movement originated, the initial emphasis was on randomized controlled trials (RCTs), and on systematic reviews and meta-analyses of RCTs, as preferred sources of evidence. It took sustained effort to raise interest in and recognition of SCEDs. For example, Horner et al. (2005) pointed out the value of SCEDs in documenting EBP. Schlosser and Raghavendra (2004) explained why SCEDs should be considered Level 2 evidence, alongside RCTs and quasi-experimental group designs, on hierarchies of evidence for low-incidence populations. Later, the Oxford Centre for Evidence-Based Medicine brought attention to small-sample research by classifying the randomized N-of-1 trial as Level 1 evidence for deriving treatment decisions in individual patients (Howick et al., 2011). Finally, the American Speech-Language-Hearing Association (2020) included SCEDs under experimental study designs suitable to answer questions about the efficacy of interventions. The increasing interest in SCEDs gained further momentum when applied research started to discuss issues of quality criteria and appraisal, as well as consistency in reporting (e.g., Kratochwill et al., 2013; Tate et al., 2014; Wendt & Miller, 2012). Similar to other areas of applied sciences, …

For correspondence: Oliver Wendt, School of Communication Sciences and Disorders, University of Central Florida, Orlando, FL 32816-2215. E-mail: oliver.wendt@ucf.edu
About the journal:
Evidence-Based Communication Assessment and Intervention (EBCAI) brings together professionals who work in clinical and educational practice as well as researchers from all disciplines to promote evidence-based practice (EBP) in serving individuals with communication impairments. The primary aims of EBCAI are to: Promote evidence-based practice (EBP) in communication assessment and intervention; Appraise the latest and best communication assessment and intervention studies so as to facilitate the use of research findings in clinical and educational practice; Provide a forum for discussions that advance EBP; and Disseminate research on EBP. We target speech-language pathologists, special educators, regular educators, applied behavior analysts, clinical psychologists, physical therapists, and occupational therapists who serve children or adults with communication impairments.