Pub Date: 2023-11-01. Epub Date: 2019-08-30. DOI: 10.1177/0145445519863035
Michael Perdices, Robyn L Tate, Ulrike Rosenkoetter
Critical appraisal scales play an important role in evaluating the methodological rigor (MR) of between-groups and single-case designs (SCDs). For intervention research, this forms an essential basis for ascertaining the strength of evidence. Yet few such scales provide classifications that take into account the differential weighting of items contributing to internal validity. This study aimed to develop an algorithm derived from the Risk of Bias in N-of-1 Trials (RoBiNT) Scale to classify MR and the magnitude of risk of bias in SCDs. The algorithm was applied to 46 SCD experiments. Two experiments (4%) were classified as Very High MR, 14 (30%) as High, 5 (11%) as Moderate, 2 (4%) as Fair, 2 (4%) as Low, and 21 (46%) as Very Low. These proportions were comparable to the What Works Clearinghouse classifications: 13 (28%) met standards, 8 (17%) met standards with reservations, and 25 (54%) did not meet standards. There was a strong association between the two classification systems.
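The reported percentages follow directly from the classification counts over the 46 experiments. As a quick check, the short Python sketch below (counts copied from the abstract; the RoBiNT-derived classification algorithm itself is not reproduced here) regenerates the rounded figures for both systems.

```python
# Reproduce the rounded percentages reported for the 46 single-case experiments.
# Counts are taken from the abstract; the RoBiNT-derived algorithm is not shown here.
robint_counts = {
    "Very High": 2, "High": 14, "Moderate": 5,
    "Fair": 2, "Low": 2, "Very Low": 21,
}
wwc_counts = {
    "Met standards": 13,
    "Met standards with reservations": 8,
    "Did not meet standards": 25,
}

total = sum(robint_counts.values())  # 46 experiments in the sample

for label, n in robint_counts.items():
    print(f"MR = {label}: {n}/{total} = {round(100 * n / total)}%")
for label, n in wwc_counts.items():
    print(f"WWC = {label}: {n}/{total} = {round(100 * n / total)}%")
```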
{"title":"An Algorithm to Evaluate Methodological Rigor and Risk of Bias in Single-Case Studies.","authors":"Michael Perdices, Robyn L Tate, Ulrike Rosenkoetter","doi":"10.1177/0145445519863035","DOIUrl":"10.1177/0145445519863035","url":null,"abstract":"<p><p>Critical appraisal scales play an important role in evaluating methodological rigor (MR) of between-groups and single-case designs (SCDs). For intervention research this forms an essential basis for ascertaining the strength of evidence. Yet, few such scales provide classifications that take into account the differential weighting of items contributing to internal validity. This study aimed to develop an algorithm derived from the Risk of Bias in N-of-1 Trials (RoBiNT) Scale to classify MR and risk of bias magnitude in SCDs. The algorithm was applied to 46 SCD experiments. Two experiments (4%) were classified as Very High MR, 14 (30%) as High, 5 (11%) as Moderate, 2 (4%) as Fair, 2 (4%) as Low, and 21 (46%) as Very Low. These proportions were comparable to the What Works Clearinghouse classifications: 13 (28%) met standards, 8 (17%) met standards with reservations, and 25 (54%) did not meet standards. There was strong association between the two classification systems.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":"1 1","pages":"1482-1509"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519863035","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49024358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-06-19. DOI: 10.1177/0145445519853793
René Tanious, Tamal Kumar De, Bart Michiels, Wim Van den Noortgate, Patrick Onghena
The current article presents a systematic review of consistency in single-case ABAB phase designs. We applied the CONsistency of DAta Patterns (CONDAP) measure to a sample of 460 data sets retrieved from 119 applied studies published over the past 50 years. The main purpose was to (a) identify typical CONDAP values found in published ABAB designs and (b) develop interpretational guidelines for CONDAP to be used in future studies to assess the consistency of data patterns from similar phases. The overall distribution of CONDAP values is right-skewed, with several extreme values to the right of the center of the distribution. The B-phase CONDAP values fall within a narrower range than the A-phase CONDAP values. Based on the cumulative distribution of CONDAP values, we offer the following interpretational guidelines in terms of consistency: very high, 0 ≤ CONDAP ≤ 0.5; high, 0.5 < CONDAP ≤ 1; medium, 1 < CONDAP ≤ 1.5; low, 1.5 < CONDAP ≤ 2; very low, CONDAP > 2. We give examples of combining CONDAP benchmarks with visual analysis of single-case ABAB phase designs and conclude that the largest share of data patterns (41.2%) in published ABAB phase designs shows medium consistency.
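The benchmarks translate directly into a lookup. The minimal Python sketch below encodes the quoted cutoffs; the function name and the handling of negative input are ours.

```python
def condap_consistency(condap: float) -> str:
    """Map a CONDAP value to the consistency benchmark proposed in the review.

    Lower CONDAP values indicate more consistent data patterns across similar
    phases; boundaries follow the guidelines quoted above.
    """
    if condap < 0:
        raise ValueError("CONDAP is a non-negative measure")  # assumption, for safety
    if condap <= 0.5:
        return "very high"
    if condap <= 1:
        return "high"
    if condap <= 1.5:
        return "medium"
    if condap <= 2:
        return "low"
    return "very low"


# Example: a CONDAP value of 1.2 for an ABAB data set falls in the "medium" band.
print(condap_consistency(1.2))
```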
{"title":"Consistency in Single-Case ABAB Phase Designs: A Systematic Review.","authors":"René Tanious, Tamal Kumar De, Bart Michiels, Wim Van den Noortgate, Patrick Onghena","doi":"10.1177/0145445519853793","DOIUrl":"10.1177/0145445519853793","url":null,"abstract":"<p><p>The current article presents a systematic review of consistency in single-case ABAB phase designs. We applied the CONsistency of DAta Patterns (CONDAP) measure to a sample of 460 data sets retrieved from 119 applied studies published over the past 50 years. The main purpose was to (a) identify typical CONDAP values found in published ABAB designs and (b) develop interpretational guidelines for CONDAP to be used for future studies to assess the consistency of data patterns from similar phases. The overall distribution of CONDAP values is right-skewed with several extreme values to the right of the center of the distribution. The B-phase CONDAP values fall within a narrower range than the A-phase CONDAP values. Based on the cumulative distribution of CONDAP values, we offer the following interpretational guidelines in terms of consistency: very high, 0 ≤ CONDAP ≤ 0.5; high, 0.5 < CONDAP ≤ 1; medium, 1 < CONDAP < 1.5; low, 1.5 < CONDAP ≤ 2; very low, CONDAP > 2. We give examples of combining CONDAP benchmarks with visual analysis of single-case ABAB phase designs and conclude that the majority of data patterns (41.2%) in published ABAB phase designs is medium consistent.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1377-1406"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519853793","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37347823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2018-08-05. DOI: 10.1177/0145445518792251
Andrew A Cooper, Alexander C Kline, Allison L Baier, Norah C Feeny
Dropout is a ubiquitous psychotherapy outcome in clinical practice and treatment research alike, yet it remains a poorly understood problem. Contemporary dropout research is dominated by models of prediction that lack a strong theoretical foundation, often drawing on data from clinical trials that report on dropout in an inconsistent and incomplete fashion. In this article, we assert that dropout is a critical treatment outcome that is worthy of investigation as a mechanistic process. After briefly describing the scope of the dropout problem, we discuss the many factors that limit the field’s present understanding of dropout. We then articulate and illustrate a transdiagnostic conceptual framework for examining psychotherapy dropout in contemporary research, concluding with recommendations for future research. With a more comprehensive understanding of the factors affecting retention, research efforts can shift toward investigating key processes underlying treatment dropout, thus boosting prediction and informing strategies to mitigate dropout in clinical practice.
{"title":"Rethinking Research on Prediction and Prevention of Psychotherapy Dropout: A Mechanism-Oriented Approach.","authors":"Andrew A Cooper, Alexander C Kline, Allison L Baier, Norah C Feeny","doi":"10.1177/0145445518792251","DOIUrl":"10.1177/0145445518792251","url":null,"abstract":"Dropout is a ubiquitous psychotherapy outcome in clinical practice and treatment research alike, yet it remains a poorly understood problem. Contemporary dropout research is dominated by models of prediction that lack a strong theoretical foundation, often drawing on data from clinical trials that report on dropout in an inconsistent and incomplete fashion. In this article, we assert that dropout is a critical treatment outcome that is worthy of investigation as a mechanistic process. After briefly describing the scope of the dropout problem, we discuss the many factors that limit the field’s present understanding of dropout. We then articulate and illustrate a transdiagnostic conceptual framework for examining psychotherapy dropout in contemporary research, concluding with recommendations for future research. With a more comprehensive understanding of the factors affecting retention, research efforts can shift toward investigating key processes underlying treatment dropout, thus, boosting prediction and informing strategies to mitigate dropout in clinical practice.","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1195-1218"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445518792251","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36373624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2018-08-23. DOI: 10.1177/0145445518796202
James F Boswell, Carly M Schwartzman
Recent work has highlighted that process-outcome relationships are likely to vary depending on the client, yet much work remains to be done in the area of tailoring interventions to a given client. This naturalistic single-case analysis provides an example of augmenting a treatment protocol with "off protocol" relaxation methods, based on routinely collected outcome information to guide shared decision making. Intensive case study analyses were applied to one client with principal generalized anxiety disorder and comorbid major depressive disorder receiving transdiagnostic cognitive-behavioral therapy. The client completed two routine anxiety and depression symptom and functioning scales prior to each session of naturalistic treatment. Time series analyses were applied to the two symptom measures. Among the results, (a) significant linear decreases in anxiety and depression from baseline to posttreatment were observed; and (b) the introduction of relaxation methods had a significant impact on the course of anxiety symptom change. In conclusion, routine outcome assessment can be used to inform intervention augmentation with individual clients. Furthermore, regular assessment is needed to determine if a client may benefit from an alternative set of specific intervention strategies.
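The abstract does not state which time-series model was fitted. One common choice for session-by-session symptom scores with a mid-treatment change point (here, the session at which relaxation methods were introduced) is a segmented regression with level- and slope-change terms; the sketch below illustrates that approach with made-up scores and an assumed change point at session 8, and ignores autocorrelation for brevity.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical session-by-session anxiety scores; the case's real data are not public.
anxiety = np.array([18, 17, 18, 16, 15, 16, 14, 12, 11, 10, 9, 9, 8, 7], dtype=float)
session = np.arange(1, len(anxiety) + 1)
augmented = (session >= 8).astype(float)  # 0 before relaxation was added, 1 after

# Segmented regression: baseline trend plus a level shift and slope change at augmentation.
X = sm.add_constant(np.column_stack([
    session,                      # linear trend across all sessions
    augmented,                    # level change when relaxation methods are introduced
    augmented * (session - 8),    # slope change after the augmentation point
]))
fit = sm.OLS(anxiety, X).fit()
print(fit.params)  # [intercept, trend, level change, slope change]
```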
{"title":"An Exploration of Intervention Augmentation in a Single Case.","authors":"James F Boswell, Carly M Schwartzman","doi":"10.1177/0145445518796202","DOIUrl":"10.1177/0145445518796202","url":null,"abstract":"<p><p>Recent work has highlighted that process-outcome relationships are likely to vary depending on the client, yet much work remains to be done in the area of tailoring interventions to a given client. This naturalistic single-case analysis provides an example of augmenting a treatment protocol with \"off protocol\" relaxation methods, based on routinely collected outcome information to guide shared decision making. Intensive case study analyses were applied to one client with principal generalized anxiety disorder and comorbid major depressive disorder receiving transdiagnostic cognitive-behavioral therapy. The client completed two routine anxiety and depression symptom and functioning scales prior to each session of naturalistic treatment. Time series analyses were applied to the two symptom measures. Among the results, (a) significant linear decreases in anxiety and depression from baseline to posttreatment were observed; and (b) the introduction of relaxation methods had a significant impact on the course of anxiety symptom change. In conclusion, routine outcome assessment can be used to inform intervention augmentation with individual clients. Furthermore, regular assessment is needed to determine if a client may benefit from an alternative set of specific intervention strategies.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1219-1241"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445518796202","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"36422140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-08-02. DOI: 10.1177/0145445519864264
James E Pustejovsky, Daniel M Swan, Kyle W English
There has been growing interest in using statistical methods to analyze data and estimate effect size indices from studies that use single-case designs (SCDs), as a complement to traditional visual inspection methods. The validity of a statistical method rests on whether its assumptions are plausible representations of the process by which the data were collected, yet there is evidence that some assumptions-particularly regarding normality of error distributions-may be inappropriate for single-case data. To develop more appropriate modeling assumptions and statistical methods, researchers must attend to the features of real SCD data. In this study, we examine several features of SCDs with behavioral outcome measures in order to inform development of statistical methods. Drawing on a corpus of over 300 studies, including approximately 1,800 cases, from seven systematic reviews that cover a range of interventions and outcome constructs, we report the distribution of study designs, distribution of outcome measurement procedures, and features of baseline outcome data distributions for the most common types of measurements used in single-case research. We discuss implications for the development of more realistic assumptions regarding outcome distributions in SCD studies, as well as the design of Monte Carlo simulation studies evaluating the performance of statistical analysis techniques for SCD data.
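The concern about normality can be made concrete with a small simulation: many behavioral outcomes are counts or rates collected in short observation sessions, so a Poisson-like data-generating process is arguably a more realistic baseline model than a normal one. The sketch below is purely illustrative; the session count and rate are assumptions, not values estimated from the reviewed corpus.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Simulate one baseline phase of count data (e.g., responses per 10-minute session).
n_sessions = 8
mean_rate = 3.0  # illustrative rate, not estimated from the 300-study corpus
baseline_counts = rng.poisson(lam=mean_rate, size=n_sessions)

print("baseline counts:", baseline_counts)
print("mean:", baseline_counts.mean(), "variance:", baseline_counts.var(ddof=1))
# Count data like these are bounded at zero, skewed, and have variance tied to the
# mean, which a normal-errors model does not capture.
```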
{"title":"An Examination of Measurement Procedures and Characteristics of Baseline Outcome Data in Single-Case Research.","authors":"James E Pustejovsky, Daniel M Swan, Kyle W English","doi":"10.1177/0145445519864264","DOIUrl":"10.1177/0145445519864264","url":null,"abstract":"<p><p>There has been growing interest in using statistical methods to analyze data and estimate effect size indices from studies that use single-case designs (SCDs), as a complement to traditional visual inspection methods. The validity of a statistical method rests on whether its assumptions are plausible representations of the process by which the data were collected, yet there is evidence that some assumptions-particularly regarding normality of error distributions-may be inappropriate for single-case data. To develop more appropriate modeling assumptions and statistical methods, researchers must attend to the features of real SCD data. In this study, we examine several features of SCDs with behavioral outcome measures in order to inform development of statistical methods. Drawing on a corpus of over 300 studies, including approximately 1,800 cases, from seven systematic reviews that cover a range of interventions and outcome constructs, we report the distribution of study designs, distribution of outcome measurement procedures, and features of baseline outcome data distributions for the most common types of measurements used in single-case research. We discuss implications for the development of more realistic assumptions regarding outcome distributions in SCD studies, as well as the design of Monte Carlo simulation studies evaluating the performance of statistical analysis techniques for SCD data.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":"1 1","pages":"1423-1454"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519864264","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47986897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-04-09. DOI: 10.1177/0145445519839213
Kristen M Brogan, John T Rapp, Bailey R Sturdivant
The continuation of a baseline pattern of responding into a treatment phase, sometimes referred to as a "transition state," can obscure interpretation of data depicted in single-case experimental designs (SCEDs). For example, when using visual analysis, transition states may lead to the conclusion that the treatment is ineffective. Likewise, the inclusion of overlapping data points in some statistical analyses may lead to conclusions that the treatment had a small effect size and give rise to publication bias. This study reviewed 20 volumes of a journal that publishes primarily SCED studies. We defined a transition state as a situation wherein at least the first three consecutive data points of a treatment phase or condition are within the range of the baseline phase or condition. Results indicate that transition states (a) were present for 7.4% of graphs that met inclusion criteria and (b) occurred for a mean of 4.9 data points before leading to behavior change. We discuss some implications and directions for future research on transition states.
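The operational definition used in the review (the first three or more consecutive treatment-phase data points falling within the range of the baseline phase) is straightforward to automate. The minimal Python sketch below implements that rule; the function name and example data are ours.

```python
def count_transition_points(baseline, treatment):
    """Count how many consecutive points at the start of the treatment phase
    fall within the observed range of the baseline phase.

    A "transition state," as defined in the review, is present when this
    count is at least 3.
    """
    lo, hi = min(baseline), max(baseline)
    count = 0
    for y in treatment:
        if lo <= y <= hi:
            count += 1
        else:
            break
    return count


# Example: the first four treatment points overlap the baseline range of 5-9.
baseline = [6, 8, 5, 9, 7]
treatment = [7, 8, 6, 9, 3, 2, 1, 0]
n = count_transition_points(baseline, treatment)
print(n, "transition-state data points" if n >= 3 else "no transition state")
```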
{"title":"Transition States in Single Case Experimental Designs.","authors":"Kristen M Brogan, John T Rapp, Bailey R Sturdivant","doi":"10.1177/0145445519839213","DOIUrl":"10.1177/0145445519839213","url":null,"abstract":"<p><p>The continuation of a baseline pattern of responding into a treatment phase, sometimes referred to as a \"transition state,\" can obscure interpretation of data depicted in single-case experimental designs (SCEDs). For example, when using visual analysis, transition states may lead to the conclusion that the treatment is ineffective. Likewise, the inclusion of overlapping data points in some statistical analyses may lead to conclusions that the treatment had a small effect size and give rise to publication bias. This study reviewed 20 volumes in a journal that publishes primarily SCEDs studies. We defined a transition state as a situation wherein at least the first three consecutive data points of a treatment phase or condition are within the range of the baseline phase or condition. Results indicate that transitions states (a) were present for 7.4% of graphs that met inclusion criteria and (b) occurred for a mean of 4.9 data points before leading to behavior change. We discuss some implications and directions for future research on transition states.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1269-1291"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519839213","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37132409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2023-01-16. DOI: 10.1177/01454455221144034
Eunkyeng Baek, Wen Luo, Kwok Hap Lam
Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental designs (SCED). In this paper, we provide a step-by-step guideline for using the MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model, then gradually extended to represent more realistic situations of SCED data, such as modeling a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The presented approach is then illustrated using real SCED data. Practical recommendations using the MLM approach are also provided for applied researchers based on the current methodological literature. Available free and commercial software programs to meta-analyze SCED data are also introduced, along with several hands-on software codes for applied researchers to implement their own studies. Potential advantages and limitations of using the MLM approach to meta-analyzing SCED are discussed.
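For orientation, the basic three-level model that serves as the starting point nests measurements (level 1) within cases (level 2) within studies (level 3). A conventional formulation of that baseline model, before the time, moderator, and heterogeneous-variance extensions described above, looks roughly as follows (notation is ours, not necessarily the paper's):

$$y_{ijk} = \beta_{0jk} + \beta_{1jk}\,T_{ijk} + e_{ijk}$$
$$\beta_{0jk} = \theta_{00k} + u_{0jk}, \qquad \beta_{1jk} = \theta_{10k} + u_{1jk}$$
$$\theta_{00k} = \gamma_{000} + v_{00k}, \qquad \theta_{10k} = \gamma_{100} + v_{10k}$$

where $y_{ijk}$ is measurement $i$ for case $j$ in study $k$, $T_{ijk}$ is a treatment-phase indicator, $e_{ijk}$ is the within-case error, and the $u$ and $v$ terms are random effects capturing between-case and between-study variation in baseline level and treatment effect.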
{"title":"Meta-Analysis of Single-Case Experimental Design using Multilevel Modeling.","authors":"Eunkyeng Baek, Wen Luo, Kwok Hap Lam","doi":"10.1177/01454455221144034","DOIUrl":"10.1177/01454455221144034","url":null,"abstract":"<p><p>Multilevel modeling (MLM) is an approach for meta-analyzing single-case experimental designs (SCED). In this paper, we provide a step-by-step guideline for using the MLM to meta-analyze SCED time-series data. The MLM approach is first presented using a basic three-level model, then gradually extended to represent more realistic situations of SCED data, such as modeling a time variable, moderators representing different design types and multiple outcomes, and heterogeneous within-case variance. The presented approach is then illustrated using real SCED data. Practical recommendations using the MLM approach are also provided for applied researchers based on the current methodological literature. Available free and commercial software programs to meta-analyze SCED data are also introduced, along with several hands-on software codes for applied researchers to implement their own studies. Potential advantages and limitations of using the MLM approach to meta-analyzing SCED are discussed.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1546-1573"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9086083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-07-13. DOI: 10.1177/0145445519860219
Antonia R Giannakakos, Marc J Lanovaz
Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the type I error rate would remain below 0.05 and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size to rigorously assess progress in practice.
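The abstract does not name the three nonoverlap indices that were simulated, so the sketch below uses Nonoverlap of All Pairs (NAP), a widely used member of the same family, simply to show how a nonoverlap effect size is computed from AB data; the example data and any decision threshold a practitioner would apply are placeholders, not the study's validated cutoffs.

```python
def nap(baseline, treatment, increase_is_improvement=True):
    """Nonoverlap of All Pairs: share of (baseline, treatment) pairs in which the
    treatment observation improves on the baseline observation (ties count 0.5)."""
    pairs = 0.0
    for a in baseline:
        for b in treatment:
            if b == a:
                pairs += 0.5
            elif (b > a) == increase_is_improvement:
                pairs += 1.0
    return pairs / (len(baseline) * len(treatment))


# Hypothetical AB data where an increase in the behavior is the goal.
baseline = [4, 5, 3, 6, 5]
treatment = [7, 8, 6, 9, 8, 10]
effect = nap(baseline, treatment)
print(f"NAP = {effect:.2f}")
# A practitioner would compare this value against a validated threshold before
# concluding that behavior changed; the thresholds themselves are reported in the study.
```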
{"title":"Using AB Designs With Nonoverlap Effect Size Measures to Support Clinical Decision-Making: A Monte Carlo Validation.","authors":"Antonia R Giannakakos, Marc J Lanovaz","doi":"10.1177/0145445519860219","DOIUrl":"10.1177/0145445519860219","url":null,"abstract":"<p><p>Single-case experimental designs often require extended baselines or the withdrawal of treatment, which may not be feasible or ethical in some practical settings. The quasi-experimental AB design is a potential alternative, but more research is needed on its validity. The purpose of our study was to examine the validity of using nonoverlap measures of effect size to detect changes in AB designs using simulated data. In our analyses, we determined thresholds for three effect size measures beyond which the type I error rate would remain below 0.05 and then examined whether using these thresholds would provide sufficient power. Overall, our analyses show that some effect size measures may provide adequate control over type I error rate and sufficient power when analyzing data from AB designs. In sum, our results suggest that practitioners may use quasi-experimental AB designs in combination with effect size to rigorously assess progress in practice.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1407-1422"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519860219","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37143481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-08-23. DOI: 10.1177/0145445519867054
Jennifer Ninci
Practitioners frequently use single-case data for decision-making related to behavioral programming and progress monitoring. Visual analysis is an important and primary tool for reporting results of graphed single-case data because it provides immediate, contextualized information. Criticisms exist concerning the objectivity and reliability of the visual analysis process. When practitioners are equipped with knowledge about single-case designs, including threats and safeguards to internal validity, they can make technically accurate conclusions and reliable data-based decisions with relative ease. This paper summarizes single-case experimental design and considerations for professionals to improve the accuracy and reliability of judgments made from single-case data. This paper can also help practitioners to appropriately incorporate single-case research design applications in their practice.
{"title":"Single-Case Data Analysis: A Practitioner Guide for Accurate and Reliable Decisions.","authors":"Jennifer Ninci","doi":"10.1177/0145445519867054","DOIUrl":"10.1177/0145445519867054","url":null,"abstract":"<p><p>Practitioners frequently use single-case data for decision-making related to behavioral programming and progress monitoring. Visual analysis is an important and primary tool for reporting results of graphed single-case data because it provides immediate, contextualized information. Criticisms exist concerning the objectivity and reliability of the visual analysis process. When practitioners are equipped with knowledge about single-case designs, including threats and safeguards to internal validity, they can make technically accurate conclusions and reliable data-based decisions with relative ease. This paper summarizes single-case experimental design and considerations for professionals to improve the accuracy and reliability of judgments made from single-case data. This paper can also help practitioners to appropriately incorporate single-case research design applications in their practice.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":"1 1","pages":"1455-1481"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519867054","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49533134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-01. Epub Date: 2019-04-29. DOI: 10.1177/0145445519845603
Nicole R Nugent, Sachin R Pendse, Heather T Schatten, Michael F Armey
The purpose of this manuscript is to provide an overview of, and rationale for, the increasing adoption of a wide range of cutting-edge technological methods in assessment and intervention which are relevant for treatment. First, we review traditional approaches to measuring and monitoring affect, behavior, and cognition in behavior and cognitive-behavioral therapy. Second, we describe evolving active and passive technology-enabled approaches to behavior assessment including emerging applications of digital phenotyping facilitated through fitness trackers, smartwatches, and social media. Third, we describe ways that these emerging technologies may be used for intervention, focusing on novel applications for the use of technology in intervention efforts. Importantly, though some of the methods and approaches we describe here warrant future testing, many aspects of technology can already be easily incorporated within an established treatment framework.
{"title":"Innovations in Technology and Mechanisms of Change in Behavioral Interventions.","authors":"Nicole R Nugent, Sachin R Pendse, Heather T Schatten, Michael F Armey","doi":"10.1177/0145445519845603","DOIUrl":"10.1177/0145445519845603","url":null,"abstract":"<p><p>The purpose of this manuscript is to provide an overview of, and rationale for, the increasing adoption of a wide range of cutting-edge technological methods in assessment and intervention which are relevant for treatment. First, we review traditional approaches to measuring and monitoring affect, behavior, and cognition in behavior and cognitive-behavioral therapy. Second, we describe evolving active and passive technology-enabled approaches to behavior assessment including emerging applications of digital phenotyping facilitated through fitness trackers, smartwatches, and social media. Third, we describe ways that these emerging technologies may be used for intervention, focusing on novel applications for the use of technology in intervention efforts. Importantly, though some of the methods and approaches we describe here warrant future testing, many aspects of technology can already be easily incorporated within an established treatment framework.</p>","PeriodicalId":48037,"journal":{"name":"Behavior Modification","volume":" ","pages":"1292-1319"},"PeriodicalIF":2.3,"publicationDate":"2023-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1177/0145445519845603","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"37189082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}