{"title":"2020 年罗西奖讲座:不断发展的计划评估艺术。","authors":"Randall S Brown","doi":"10.1177/0193841X221121241","DOIUrl":null,"url":null,"abstract":"<p><p>Evaluation of public programs has undergone many changes over the past four decades since Peter Rossi coined his \"Iron Law\" of program evaluation: \"The expected value of any net impact assessment of any large-scale social program is zero.\" While that assessment may be somewhat overstated, the essence still holds. The failures far outnumber the successes, and the estimated favorable effects are rarely sizeable. Despite this grim assessment, much can be learned from \"failed\" experiments, and from ones that are successful in only some sites or subgroups. Advances in study design, statistical models, data, and how inferences are drawn from estimates have substantially improved our analyses and will continue to do so. However, the most actual learning about \"what works\" (and why, when, and where) is likely to come from gathering more detailed and comprehensive data on how the intervention was implemented and attempting to link that data to estimated impacts. Researchers need detailed data on the target population served, the content of the intervention, and the process by which it is delivered to participating service providers and individuals. Two examples presented here illustrate how researchers drew useful broader lessons from impact estimates for a set of related programs. Rossi posited three reasons most interventions fail-wrong question, wrong intervention, poor implementation. Speeding the accumulation of wisdom about how social programs can best help vulnerable populations will require that researchers work closely with program funders, developers, operators, and participants to gather and interpret these detailed data about program implementation.</p>","PeriodicalId":47533,"journal":{"name":"Evaluation Review","volume":"47 2","pages":"209-230"},"PeriodicalIF":3.0000,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"2020 Rossi Award Lecture: The Evolving Art of Program Evaluation.\",\"authors\":\"Randall S Brown\",\"doi\":\"10.1177/0193841X221121241\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><p>Evaluation of public programs has undergone many changes over the past four decades since Peter Rossi coined his \\\"Iron Law\\\" of program evaluation: \\\"The expected value of any net impact assessment of any large-scale social program is zero.\\\" While that assessment may be somewhat overstated, the essence still holds. The failures far outnumber the successes, and the estimated favorable effects are rarely sizeable. Despite this grim assessment, much can be learned from \\\"failed\\\" experiments, and from ones that are successful in only some sites or subgroups. Advances in study design, statistical models, data, and how inferences are drawn from estimates have substantially improved our analyses and will continue to do so. However, the most actual learning about \\\"what works\\\" (and why, when, and where) is likely to come from gathering more detailed and comprehensive data on how the intervention was implemented and attempting to link that data to estimated impacts. Researchers need detailed data on the target population served, the content of the intervention, and the process by which it is delivered to participating service providers and individuals. 
Two examples presented here illustrate how researchers drew useful broader lessons from impact estimates for a set of related programs. Rossi posited three reasons most interventions fail-wrong question, wrong intervention, poor implementation. Speeding the accumulation of wisdom about how social programs can best help vulnerable populations will require that researchers work closely with program funders, developers, operators, and participants to gather and interpret these detailed data about program implementation.</p>\",\"PeriodicalId\":47533,\"journal\":{\"name\":\"Evaluation Review\",\"volume\":\"47 2\",\"pages\":\"209-230\"},\"PeriodicalIF\":3.0000,\"publicationDate\":\"2023-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Evaluation Review\",\"FirstCategoryId\":\"90\",\"ListUrlMain\":\"https://doi.org/10.1177/0193841X221121241\",\"RegionNum\":4,\"RegionCategory\":\"社会学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"2022/8/29 0:00:00\",\"PubModel\":\"Epub\",\"JCR\":\"Q1\",\"JCRName\":\"SOCIAL SCIENCES, INTERDISCIPLINARY\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Evaluation Review","FirstCategoryId":"90","ListUrlMain":"https://doi.org/10.1177/0193841X221121241","RegionNum":4,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2022/8/29 0:00:00","PubModel":"Epub","JCR":"Q1","JCRName":"SOCIAL SCIENCES, INTERDISCIPLINARY","Score":null,"Total":0}
2020 Rossi Award Lecture: The Evolving Art of Program Evaluation.
Evaluation of public programs has undergone many changes in the four decades since Peter Rossi coined his "Iron Law" of program evaluation: "The expected value of any net impact assessment of any large-scale social program is zero." While that assessment may be somewhat overstated, its essence still holds. The failures far outnumber the successes, and the estimated favorable effects are rarely sizeable. Despite this grim assessment, much can be learned from "failed" experiments, and from ones that succeed in only some sites or subgroups. Advances in study design, statistical models, data, and how inferences are drawn from estimates have substantially improved our analyses and will continue to do so. However, most of the actual learning about "what works" (and why, when, and where) is likely to come from gathering more detailed and comprehensive data on how an intervention was implemented and attempting to link those data to estimated impacts. Researchers need detailed data on the target population served, the content of the intervention, and the process by which it is delivered to participating service providers and individuals. Two examples presented here illustrate how researchers drew broader, useful lessons from impact estimates for a set of related programs. Rossi posited three reasons most interventions fail: the wrong question, the wrong intervention, and poor implementation. Speeding the accumulation of wisdom about how social programs can best help vulnerable populations will require that researchers work closely with program funders, developers, operators, and participants to gather and interpret these detailed data about program implementation.
Journal Introduction:
Evaluation Review is the forum for researchers, planners, and policy makers engaged in the development, implementation, and utilization of studies aimed at the betterment of the human condition. The Editors invite submission of papers reporting the findings of evaluation studies in such fields as child development, health, education, income security, manpower, mental health, criminal justice, and the physical and social environments. In addition, Evaluation Review will contain articles on methodological developments, discussions of the state of the art, and commentaries on issues related to the application of research results. Special features will include periodic review essays, "research briefs", and "craft reports".