{"title":"在离散试验指导过程中,使用从丰富到精益的过渡","authors":"Joshua Jessel, Einar T. Ingvarsson","doi":"10.1080/15021149.2017.1404396","DOIUrl":null,"url":null,"abstract":"ABSTRACT Research on error correction procedures often includes the manipulation of prompting strategies or reinforcement schedules during discrete-trial instruction (DTI). We extended this research by incorporating rich-to-lean reinforcement transitions following incorrect responses with four boys diagnosed with autism spectrum disorder. With the rich-to-lean error correction procedure, less-preferred reinforcers were delivered for the next two to three correct responses following an error. This condition was compared to non-differential reinforcement, in which more-preferred items were delivered for both correct responses or attempts, and a traditional differential reinforcement procedure, in which more-preferred items were delivered for correct responding and no items were delivered following attempts. The rich-to-lean condition resulted in the acquisition of the targeted skills with two of the four participants. The traditional differential reinforcement procedure was effective with three participants (and most efficient for two of those), and the non-differential procedure was effective (and most efficient) for one participant. We suggest that rich-to-lean transitions might function to correct errors in the context of DTI.","PeriodicalId":37052,"journal":{"name":"European Journal of Behavior Analysis","volume":"30 1","pages":"291 - 306"},"PeriodicalIF":0.0000,"publicationDate":"2017-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Using rich-to-lean transitions following errors during discrete-trial instruction\",\"authors\":\"Joshua Jessel, Einar T. Ingvarsson\",\"doi\":\"10.1080/15021149.2017.1404396\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"ABSTRACT Research on error correction procedures often includes the manipulation of prompting strategies or reinforcement schedules during discrete-trial instruction (DTI). We extended this research by incorporating rich-to-lean reinforcement transitions following incorrect responses with four boys diagnosed with autism spectrum disorder. With the rich-to-lean error correction procedure, less-preferred reinforcers were delivered for the next two to three correct responses following an error. This condition was compared to non-differential reinforcement, in which more-preferred items were delivered for both correct responses or attempts, and a traditional differential reinforcement procedure, in which more-preferred items were delivered for correct responding and no items were delivered following attempts. The rich-to-lean condition resulted in the acquisition of the targeted skills with two of the four participants. The traditional differential reinforcement procedure was effective with three participants (and most efficient for two of those), and the non-differential procedure was effective (and most efficient) for one participant. 
We suggest that rich-to-lean transitions might function to correct errors in the context of DTI.\",\"PeriodicalId\":37052,\"journal\":{\"name\":\"European Journal of Behavior Analysis\",\"volume\":\"30 1\",\"pages\":\"291 - 306\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2017-07-03\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"European Journal of Behavior Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1080/15021149.2017.1404396\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"European Journal of Behavior Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1080/15021149.2017.1404396","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
ABSTRACT Research on error correction procedures often includes the manipulation of prompting strategies or reinforcement schedules during discrete-trial instruction (DTI). We extended this research by incorporating rich-to-lean reinforcement transitions following incorrect responses with four boys diagnosed with autism spectrum disorder. In the rich-to-lean error correction procedure, less-preferred reinforcers were delivered for the two to three correct responses that followed an error. This condition was compared to non-differential reinforcement, in which more-preferred items were delivered for both correct responses and attempts, and to a traditional differential reinforcement procedure, in which more-preferred items were delivered for correct responses and no items were delivered following attempts. The rich-to-lean condition resulted in acquisition of the targeted skills by two of the four participants. The traditional differential reinforcement procedure was effective for three participants (and most efficient for two of those), and the non-differential procedure was effective (and most efficient) for one participant. We suggest that rich-to-lean transitions might function to correct errors in the context of DTI.