How relationship and dialogue facilitate evidence use: Lessons from African countries
Matodzi M. Amisi, Mohammed S. Awal, Mine Pabari, Dede Bedu-Addo
African Evaluation Journal, published 2021-12-09. DOI: https://doi.org/10.4102/aej.v9i1.559
Background: This article shares lessons from four case studies documenting experiences of evidence use in different public policies in South Africa, Kenya, Ghana and the Economic Community of West African States (ECOWAS). Objectives: Most literature on evidence use in Africa focuses either on one form of evidence, such as evaluations or systematic reviews, or on the systems governments develop to support evidence use. However, the use of evidence in policy is complex and requires systems, processes, tools and information to flow between different stakeholders. In this article, we demonstrate how relationships between knowledge generators and users were built and maintained in the case studies, and how these relationships were critical for evidence use. Method: The case studies were amongst eight case studies carried out for the book ‘Using Evidence in Policy and Practice: Lessons from Africa’. Ethnographic case studies drawing on both secondary and primary research, including interviews with key informants and extensive document reviews, were carried out. The research and writing process involved policymakers, enabling the researchers to access participants’ rich observations. Results: The case studies demonstrate that initiatives to build relationships between different state agencies, between state and non-state actors, and amongst non-state actors are critical to enabling organisations to use evidence. Such relationships can be fostered by creating spaces for dialogue that are sensitively facilitated and ongoing, so that actors become aware of the evidence, understand it and are motivated to use it. Conclusion: Mutually beneficial and trusting relationships between individuals and institutions in different sectors are the conduits through which information flows between sectors, new insights are generated and evidence is used.
Erratum: The Seventh Biennial South African Monitoring and Evaluation Association Conference 2019: Shaping M&E for a sustainable future – Editorial
Mark Abrahams, Matodzi M. Amisi, Cara H. Hartley, Caitlin Blaser-Mapitsa, Volker Schöer, N. Pophiwa
African Evaluation Journal, published 2021-12-09. DOI: https://doi.org/10.4102/aej.v9i1.579
No abstract available.
Towards defining and advancing ‘Made in Africa Evaluation’
Oladayo Omosa, T. Archibald, K. Niewolny, Max Stephenson, James C. Anderson
African Evaluation Journal, published 2021-11-09. DOI: https://doi.org/10.4102/aej.v9i1.564
How to measure monitoring and evaluation system effectiveness?
Abdourahmane Ba
African Evaluation Journal, published 2021-09-29. DOI: https://doi.org/10.4102/aej.v9i1.553
Background: Although the roadblocks to development achievement in Africa emerge noticeably from resource scarcity, lack of security and good governance, or poor economic approaches, they also surface from ineffective development management practices. The World Bank’s 2007 assessment of the effectiveness of monitoring and evaluation (ME) systems revealed little effectiveness, mainly in cases studied in Africa. Objective: This research investigates a framework for monitoring and evaluation system effectiveness as a development management tool and shapes its measurements. It creates a framework to help better understand the success factors of an effective ME system and how they contribute to improved development management. Methods: A trifold approach comprising three iterations was used: a literature review, case studies and a survey. The first revisited the most relevant literature on development management and performance monitoring systems; the second was a qualitative study of three cases in the West Africa region; and the third was a survey of a sample of practitioners and managers in West Africa, with the data analysed using correlations and regressions. Results: There are significant linkages between ‘ME-System Quality’, ‘ME-Information Quality’ and ‘ME-Service Quality’. The results highlighted that organisations’ ‘Results-Based Management Practice’, an effective ‘Knowledge and Information Management Culture’ (including learning) and ‘Evidence-Based Decision-Making Practice’ are directly influenced by an effective ME system. Conclusions: An effective ME system contributes greatly to ‘Improved Policy and Program Design’, ‘Improved Operational Decisions’, ‘Improved Tactical and Strategic Decisions’ and ‘Improved Capability to Advance Development Objectives’.
What works for poor farmers? Insights from South Africa’s national policy evaluations
S. Chapman, Katherine Tjasink, J. Louw
African Evaluation Journal, published 2021-08-10. DOI: https://doi.org/10.4102/aej.v9i1.548
Background: Growing numbers of developing countries are investing in National Evaluation Systems (NESs). A key question is whether these have the potential to bring about meaningful policy change, and if so, what evaluation approaches are appropriate to support reflection and learning throughout the change process. Objectives: We describe the efforts of commissioned external evaluators in developing an evaluation approach to help critically assess the efficacy of some of the most important policies and programmes aimed at supporting South African farmers over the past two decades. Method: We present the diagnostic evaluation approach we developed. The approach guides evaluation end users through a series of logical steps to help make sense of an existing evidence base in relation to the root problems addressed and the specific needs of the target populations. No additional evaluation data were collected. Groups that participated included government representatives, academics and representatives from non-governmental organisations and national associations supporting emerging farmers. Results: Our main evaluation findings relate to a lack of policy coherence in key areas, most notably extension and advisory services, and microfinance and grants. This was characterised by: (1) an absence of common understanding of policies and objectives; (2) overly ambitious objectives often not directly linked to the policy frameworks; (3) a lack of logical connections between target groups and interventions; and (4) inadequate identification, selection, targeting and retention of beneficiaries. Conclusion: The diagnostic evaluation allowed for uniquely cross-cutting and interactive engagement with a complex evidence base. The evaluation process shed light on new evaluation review methods that might work to support a NES.
Evaluation education in South Africa: Characteristics and challenges in a changing world
L. Wildschut, T. R. Silubonde
African Evaluation Journal, published 2020-10-23. DOI: https://doi.org/10.4102/aej.v8i1.476
Background: South Africa and other developing countries are facing an ever-increasing demand for competent evaluators. In addition, increasing demands are being placed on those who become evaluators. What does this mean for evaluation education in its current form and state in South Africa? And what possible responses are there to the diverse drivers of change within the dynamic social context in which evaluators operate? Objectives: This article aims to address some of the questions related to the supply and demand profile of evaluation in South Africa, which may be useful for other developing countries. Method: A literature review and key informant interviews were carried out to answer the key research questions. Results: The article describes the provision of formal evaluation education and the challenges currently facing university-based offerings. The study provides a framework for considering the interaction between the supply and demand elements in the field of evaluation. Strategies are proposed for strengthening the supply of evaluators and ensuring that they can respond to the growing demands being placed on them. Conclusion: This article is valuable for all evaluation stakeholders as it provides insight into the academic landscape of evaluation in a developing context and explores practical ways to support and strengthen capacity-building efforts in similar contexts.
Scoping the impact evaluation capacity in sub-Saharan Africa
Yvonne Erasmus, S. Jordaan, Ruth Stewart
African Evaluation Journal, published 2020-10-23. DOI: https://doi.org/10.4102/AEJ.V8I1.473
Background: There has long been an assumption that Africa has low levels of impact evaluation capacity and that when impact evaluations are conducted in the region, they need to be led and conducted by researchers from the North. The Africa Centre for Evidence at the University of Johannesburg conducted a scoping study on impact evaluation capacity in sub-Saharan Africa to test this assumption. Methodology: We used a multicomponent design, which included a systematic author search, desk review, online survey (with 353 respondents) and key informant discussions. Results: Contrary to previous assumptions, we found a large number of researchers with impact evaluation capacity across sub-Saharan Africa. We identified 490 impact evaluation publications, to which 1520 unique African researchers from 34 countries had contributed. South Africa had the most impact evaluation researchers who had published, followed by Kenya and Uganda, illustrating a concentration of capacity in Southern and Eastern Africa. Authors largely resided within schools of public health and health science faculties at universities. The study showed that modules and elements of impact evaluation training had been offered in 32 countries, indicating more training opportunities than anticipated, although formal, accredited training in impact evaluation was mostly presented outside Africa. Conclusion: Contrary to previous assumptions, widespread capacity to conduct impact evaluations exists in sub-Saharan Africa, reducing the need for researcher capacity from the Global North to deliver impact evaluations in the region. However, our evidence suggests that capacity gaps exist in non-health sectors, creating an opportunity for further capacity support in these areas.
Beware of ‘but’ – Donna Podems’s Being an Evaluator
B. Klugman
African Evaluation Journal, published 2020-06-11. DOI: https://doi.org/10.4102/aej.v8i1.482
How to cite this book review: Klugman, B., 2020, ‘Beware of ‘but’ – Donna Podems’s Being an Evaluator’, African Evaluation Journal 8(1), a482. https://doi.org/10.4102/aej.v8i1.482
‘An evaluator can provide eight positive findings; however, if BUT is used as a connector in the sentence before negative evaluation findings are shared, everything up to the word BUT will likely be forgotten. Try to use the word AND if you must use a connector – or, even better, just start a new sentence.’ (p. 270)
An evaluation of eThekwini Municipality’s regeneration programmes on littering and dumping
N. Govender, P. Reddy
African Evaluation Journal, published 2020-05-27. DOI: https://doi.org/10.4102/aej.v8i1.415
Background: South African cities have been beleaguered by urban deterioration challenges, especially dumping and littering, notwithstanding the regulatory framework and the continuous clean-up programmes undertaken by municipalities. This article identifies the challenges within eThekwini Metropolitan Municipality in addressing littering and dumping, and recommends improvements to urban regeneration efforts. Objectives: To critically evaluate compliance with legislation and the efficacy of the urban regeneration programmes implemented to address littering and dumping within the inner city of eThekwini Metropolitan Municipality. Method: This convergent mixed-method research followed a case study approach and involved the analysis of primary qualitative data, including semi-structured interviews and questionnaires, and secondary analysis of quantitative data in the form of documents and reports obtained from the municipality. Results: The article identified that urban degeneration, specifically litter and dumping, occurred as a result of ineffective compliance with regulations and lack of enforcement; outdated service levels; lack of monitoring and evaluation of programmes; lack of education initiatives; ineffective leadership and governance; lack of involvement of citizens and businesses in clean-city initiatives; and the negative behavioural patterns of citizens. Conclusion: The key recommendations for municipalities include implementing an integrated strategic plan for urban regeneration within a proactive policy and regulatory environment; monitoring and evaluating programmes related to urban regeneration; increasing the resources dedicated to waste management and enforcement; implementing enforcement and consequence management strategies; and stimulating change in the behavioural patterns of citizens, businesses and municipal employees.
A quasi-experimental evaluation of a skills capacity workshop in the South African public service
P. Jonck, R. D. Coning
African Evaluation Journal, published 2020-03-31. DOI: https://doi.org/10.4102/aej.v8i1.421
Background: Few evaluation studies could be identified that investigated the impact of training. This research gap should be viewed in light of austerity measures, as well as the inability to measure the return on investment of training expenditure, which is substantial year on year, especially in the context of the public service. Objectives: This article reports on an impact evaluation of a research methodology skills capacity workshop. Method: A quasi-experimental evaluation design was used in which comparison groups served to evaluate the impact of a research methodology skills development intervention. A paired-sample t-test was used to measure the increase in knowledge, whilst the influence of the comparison groups was controlled for by means of an analysis of variance. A hierarchical multiple regression analysis was performed to determine how much of the variance in research methodology knowledge could be attributed to the intervention whilst controlling for the facilitator effect. Results: The intervention had a statistically significant impact on research methodology knowledge. Furthermore, the intervention group differed statistically significantly from the control and comparison groups with respect to research methodology knowledge. The facilitator effect was found to be a moderating variable. A hierarchical regression analysis performed to isolate the impact of the intervention in the absence of the facilitator effect revealed a statistically significant result. Conclusion: The study augments the corpus of knowledge by providing evidence of training impact within the South African public service, particularly by utilising a quasi-experimental pre-test–post-test research design and isolating the facilitator effect from the intervention itself.