Using co-creation to address monitoring and evaluation challenges: The experience of South Africa’s evaluation hackathon
Pub Date: 2024-01-10 | DOI: 10.1177/13563890231223174
Eleanor Hazell, Ian Goldman, B. Rabie, Jen Norins, T. Chirau, Taruna Gupta
In 2021, the South African Monitoring and Evaluation Association facilitated an evaluation hackathon that engaged diverse stakeholders in co-creation processes to develop practical solutions to complex problems facing the monitoring and evaluation sector. The event catalysed broad-based ownership and enabled the Association to coordinate the creative energy, commitment and resources of its members, government and other partners to achieve outcomes that would not otherwise have been possible. The article analyses the co-creation approach adopted for the hackathon across four phases: initiation; process design/planning; co-design and development; and application/follow-up. A retrospective analysis of the process and results identified eight key elements that enabled or impeded the successful completion of hackathon outputs and their conversion into useful products: facilitative leadership, purposive stakeholder selection, a well-delimited task, preparation, process facilitation, a valued product, voluntary contributions and further capacity. The lessons learnt provide useful insight for future efforts to generate localised, contextualised responses to evaluation problems.
{"title":"Using co-creation to address monitoring and evaluation challenges: The experience of South Africa’s evaluation hackathon","authors":"Eleanor Hazell, Ian Goldman, B. Rabie, Jen Norins, T. Chirau, Taruna Gupta","doi":"10.1177/13563890231223174","DOIUrl":"https://doi.org/10.1177/13563890231223174","url":null,"abstract":"In 2021, the South African Monitoring and Evaluation Association facilitated an evaluation hackathon that engaged diverse stakeholders in co-creation processes to develop practical solutions to address complex problems facing the monitoring and evaluation sector. The event catalysed broad-based ownership and enabled the South African Monitoring and Evaluation Association to coordinate the creative energy, commitment and resources of its members, government and other partners to achieve outcomes that would not be possible to achieve otherwise. The article analyses the co-creation approach adopted for the hackathon across four phases, namely initiation, process design/planning, co-design and development and application/follow-up. A retrospective analysis of the process and results identified eight key elements that enabled or impeded the successful completion of hackathon outputs and their conversion into useful products. These elements are facilitative leadership, purposive stakeholder selection, a well-delimited task, preparation, process facilitation, a valued product, voluntary contributions and further capacity. The lessons learnt provide useful insight for future efforts to generate localised, contextualised responses to evaluation problems.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"75 9","pages":""},"PeriodicalIF":2.4,"publicationDate":"2024-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139440413","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What role should we play to be effective evaluators? – practitioner reflections
Pub Date: 2023-12-30 | DOI: 10.1177/13563890231220032
Callum Donaldson-Murdoch, Rebecca Adler, Dui Jasinghe
There is still widespread debate about the role an evaluator could, should or even must play. Evaluations are often complex and require a broad array of interlinked roles and areas of expertise, making it difficult to arrive at a single definition of an evaluator. Furthermore, an evaluator may be expected to change their responsibilities and actions across the stages of an evaluation, thereby changing the role they play. As practitioners, we believe there is value in contributing to this discussion, offering our perspectives based on personal experience and data collected during the 2022 European Evaluation Society conference. In this article, we describe six evaluator traits that we believe most influence the roles we play, and six evaluator roles that best illustrate the impact of varying these traits.
{"title":"What role should we play to be effective evaluators? – practitioner reflections","authors":"Callum Donaldson-Murdoch, Rebecca Adler, Dui Jasinghe","doi":"10.1177/13563890231220032","DOIUrl":"https://doi.org/10.1177/13563890231220032","url":null,"abstract":"There is still widespread debate about the role that an evaluator could, should or even must play. Evaluations are often complex and require a broad array of interlinked roles and areas of expertise, making it difficult to generate a singular definition of an evaluator. Furthermore, an evaluator may be expected to change their responsibilities and actions throughout the stages of an evaluation, changing the role they play. As practitioners, we believe there is value in contributing to this discussion, providing our perspectives based on personal experience and data collected during the 2022 European Evaluation Society conference. In this article we seek to describe six evaluator traits which we believe to be the most influential to the roles which we play, and six evaluator roles which best illustrate the impact of changing these traits.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":" 12","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139138202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating site selection at design in food systems interventions: A formative geospatial approach
Pub Date: 2023-12-28 | DOI: 10.1177/13563890231218275
Gabriel Sidman, Carlo Carugi
Within international development, formative evaluation is becoming increasingly important for making rapid assessments of project design and supporting adaptive learning early in the implementation of ongoing interventions. Such evaluation is critical for institutions with short funding cycles, which need early evidence on the utility of new initiatives to inform donors’ decision-making for upcoming funding cycles. However, obtaining quantitative evidence is difficult in formative evaluation because results are not yet available or visible early in the project cycle. Geospatial multi-criteria suitability analysis offers one method for evaluating the relevance of program and project design: it creates a quantitative spatial index that combines data on several spatial indicators to evaluate project site selection and help inform future priority geographies. This study demonstrates the use of such a geospatial analysis in the formative evaluation of the Global Environment Facility’s food systems integrated programs.
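As an illustration of the general technique the abstract describes, a minimal sketch of a weighted multi-criteria suitability index follows, in Python with NumPy. The indicator names, weights and grid are hypothetical stand-ins, not values from the study; the abstract confirms only the idea of combining several spatial indicator layers into one index.

import numpy as np

def suitability_index(indicators, weights):
    # Min-max normalise each indicator layer to 0-1, weight it, and sum.
    # Each layer is a 2-D array on a shared grid; weights should sum to 1.
    total = np.zeros_like(next(iter(indicators.values())), dtype=float)
    for name, layer in indicators.items():
        lo, hi = np.nanmin(layer), np.nanmax(layer)
        scaled = (layer - lo) / (hi - lo) if hi > lo else np.zeros_like(layer, dtype=float)
        total += weights[name] * scaled
    return total

# Hypothetical indicator layers on a shared 100x100 grid.
rng = np.random.default_rng(0)
indicators = {
    "soil_quality": rng.random((100, 100)),
    "market_access": rng.random((100, 100)),
    "food_insecurity": rng.random((100, 100)),
}
weights = {"soil_quality": 0.4, "market_access": 0.3, "food_insecurity": 0.3}

index = suitability_index(indicators, weights)
print(f"mean suitability: {index.mean():.2f}")

Candidate or actual project sites could then be ranked by the index value at their locations, which is the sense in which such an index can evaluate site selection at design.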
{"title":"Evaluating site selection at design in food systems interventions: A formative geospatial approach","authors":"Gabriel Sidman, Carlo Carugi","doi":"10.1177/13563890231218275","DOIUrl":"https://doi.org/10.1177/13563890231218275","url":null,"abstract":"Within international development, formative evaluation is becoming increasingly important to make rapid assessments of project design and support adaptive learning in early implementation of ongoing interventions. Such evaluation is critical for institutions with short funding cycles, which need early evidence to assess the utility of new initiatives to inform donors’ decision-making for upcoming funding cycles. However, obtaining quantitative evidence is difficult in formative evaluation as results are not yet available or visible early in the project cycle. Geospatial multi-criteria suitability analysis provides one method for evaluating the relevance of program and project design through creating a quantitative spatial index, combining data on several spatial indicators to evaluate project site selection and help inform future priority geographies. This study demonstrates the use of such a geospatial analysis in the formative evaluation of the Global Environment Facility’s food systems integrated programs.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"14 6","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139150106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Principles and methods to advance value for money
Pub Date: 2023-12-28 | DOI: 10.1177/13563890231221526
J. Gargani, Julian King
Value for money poses the question, “What is good resource use?” It is often answered with a narrow economic analysis that does not adequately address what diverse people value. We suggest new principles and methods that may help evaluators answer the question better. First, we define value for money, which sits at the intersection of evaluation and economics. Next, we make the case for a holistic assessment of value for money that evaluators can conduct with tools they already have, like rubrics. We introduce three principles that further align value for money with evaluation: value depends on the credibility of estimates; things do not have value, people place value on things; and people value the same things differently. Together, they suggest evaluators should arrive at multiple, possibly conflicting conclusions that represent diverse value perspectives. We demonstrate how this may be done using a value-for-money rubric to improve resource allocation for impact.
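A minimal sketch of what a rubric that keeps diverse value perspectives visible might look like in code follows. The criteria, rating levels and stakeholder groups are hypothetical illustrations, not the rubric from the article; the point is only that ordinal judgements from different groups are reported side by side rather than collapsed into a single monetary figure.

# Ordinal rubric levels in place of a single monetary metric.
LEVELS = {1: "poor", 2: "adequate", 3: "good", 4: "excellent"}

# Hypothetical ratings of the same programme by two stakeholder groups,
# per value-for-money criterion; disagreement stays visible rather than
# being averaged away.
ratings = {
    "funders":      {"economy": 3, "efficiency": 4, "effectiveness": 2, "equity": 2},
    "participants": {"economy": 2, "efficiency": 3, "effectiveness": 3, "equity": 4},
}

for group, scores in ratings.items():
    summary = ", ".join(f"{criterion}: {LEVELS[score]}" for criterion, score in scores.items())
    print(f"{group} -> {summary}")

The two lines of output embody the article’s suggestion that evaluators may arrive at multiple, possibly conflicting value-for-money conclusions, one per value perspective.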
{"title":"Principles and methods to advance value for money","authors":"J. Gargani, Julian King","doi":"10.1177/13563890231221526","DOIUrl":"https://doi.org/10.1177/13563890231221526","url":null,"abstract":"Value for money poses the question, “What is good resource use?” It is often answered with a narrow economic analysis that does not adequately address what diverse people value. We suggest new principles and methods that may help evaluators answer the question better. First, we define value for money, which sits at the intersection of evaluation and economics. Next, we make the case for a holistic assessment of value for money that evaluators can conduct with tools they already have, like rubrics. We introduce three principles that further align value for money with evaluation: value depends on the credibility of estimates; things do not have value, people place value on things; and people value the same things differently. Together, they suggest evaluators should arrive at multiple, possibly conflicting conclusions that represent diverse value perspectives. We demonstrate how this may be done using a value-for-money rubric to improve resource allocation for impact.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"17 4","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139150030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What works in democracy support? How to fill evidence and usability gaps through evaluation
Pub Date: 2023-12-21 | DOI: 10.1177/13563890231218276
Julia Leininger, Armin von Schiller
The evidence generated and used in development cooperation has changed remarkably over recent decades. In the field of democracy support, however, these developments have been less significant: routinised, evidence-based programming is far from a reality here. Compared with other fields, the goals of interventions and the assumed theories of change remain underspecified. Under these circumstances, evaluation and learning are difficult; as a result, evidence gaps remain large and the translation of evidence into action is often unsuccessful. This is particularly dramatic at a time when the field is regaining attention amid global autocratisation trends. In this article, we analyse the specific barriers and challenges democracy support faces in generating and using evidence. Furthermore, we identify evidence gaps and propose impact-oriented accompanying research as an evaluation approach that can make a significant contribution towards advancing the evidence agenda in this field.
{"title":"What works in democracy support? How to fill evidence and usability gaps through evaluation","authors":"Julia Leininger, Armin von Schiller","doi":"10.1177/13563890231218276","DOIUrl":"https://doi.org/10.1177/13563890231218276","url":null,"abstract":"The evidence generated and used in development cooperation has changed remarkably over the last decades. When it comes to the field of democracy support, these developments have been less significant. Routinised, evidence-based programming is far from a reality here. Compared to other fields, the goals of the interventions and assumed theories of change remain underspecified. Under these circumstances, evaluating and learning is difficult, and as a result, evidence gaps remain large and the translation of evidence into action often unsuccessful. This is particularly dramatic at a time when this field is regaining attention amid global autocratisation trends. In this article, we analyse the specific barriers and challenges democracy support faces to generate and use evidence. Furthermore, we identify evidence gaps and propose impact-oriented accompanying research as an evaluation approach that can make a significant contribution towards advancing the evidence agenda in this field.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"57 49","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138949517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating the boundaries between evaluators and similar applied professionals
Pub Date: 2023-12-21 | DOI: 10.1177/13563890231213643
Dana Jayne Linnell, Bianca Montrosse‐Moorhead
This article is part of a larger project examining who calls themselves an evaluator and why, as well as how evaluators differ from non-evaluators. For the present article, 40 professionals doing applied work (e.g. evaluators, researchers) participated in hour-long semi-structured interviews, with questions about their journey into the field, their applied practice and their professional identity. The research questions were: What does the journey into the field look like for evaluators and similar professionals, and how do they describe the similarities and differences between evaluators and other similar professionals? Results showed that evaluators and non-evaluators have distinct journeys into the field. Furthermore, evaluators and other similar professionals describe the similarities and differences in similar ways, yet similar professionals also hold some misconceptions about evaluators and evaluation. This article contributes to the larger conversation on the professionalization of evaluation by helping to clarify the jurisdictional boundaries between evaluation and other related fields.
{"title":"Navigating the boundaries between evaluators and similar applied professionals","authors":"Dana Jayne Linnell, Bianca Montrosse‐Moorhead","doi":"10.1177/13563890231213643","DOIUrl":"https://doi.org/10.1177/13563890231213643","url":null,"abstract":"This article is part of a larger project to examine who calls themselves an evaluator and why, as well as how evaluators differ from non-evaluators. For the present article, 40 professionals doing applied work (e.g. evaluators, researchers) participated in an hour-long semi-structured interview, which involved questions about their journey into the field, applied practice, and professional identity. Research questions were: what does the journey into the field look like for evaluators and similar professionals, and how do they describe the similarities and differences between evaluators and other similar professionals? Results showed evaluators and non-evaluators have unique journeys into the field. Furthermore, evaluators and other similar professionals describe the similarities and differences similarly, yet there are also some misconceptions similar professionals have regarding evaluators and evaluation. This article contributes to the larger conversation on the professionalization of evaluation by helping understand the jurisdictional boundaries between evaluation and other related fields.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"50 3","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138949960","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interorganizational evaluation capacity building in the public, health and community sectors
Pub Date: 2023-12-11 | DOI: 10.1177/13563890231207115
Charlotte Laubek, Isabelle Bourgeois
Evaluation capacity building is generally conceptualized as occurring at either the individual or the organizational level. However, ongoing societal crises require organizations within or across sectors to work together to find solutions to complex problems and to evaluate joint initiatives. Interorganizational evaluation capacity is required to ensure the ongoing conduct and use of evaluations in support of interorganizational decision-making and improvement. This exploratory study describes and analyzes four cases of interorganizational evaluation capacity building in the public, health and community sectors in Canada and Denmark to identify their key dimensions. Preliminary findings highlight the importance of developing individual and organizational evaluation capacity, as well as the need to provide stakeholders with interorganizational evaluation training and joint projects in which they can learn about each other’s organizations and challenges and find solutions to common problems.
{"title":"Interorganizational evaluation capacity building in the public, health and community sectors","authors":"Charlotte Laubek, Isabelle Bourgeois","doi":"10.1177/13563890231207115","DOIUrl":"https://doi.org/10.1177/13563890231207115","url":null,"abstract":"Evaluation capacity building is generally conceptualized as occurring either at the individual or the organizational levels. However, ongoing societal crises require organizations within or across sectors to work together to find solutions to complex problems and to evaluate joint initiatives. Interorganizational evaluation capacity is required to ensure the ongoing conduct and use of evaluations to support interorganizational decision-making and improvement. This exploratory study describes and analyzes four cases of interorganizational evaluation capacity building initiatives in the public, health and community sectors in Canada and Denmark to identify their key dimensions. Preliminary findings highlight the importance of developing individual and organizational evaluation capacity as well as the need to provide stakeholders with interorganizational evaluation training and projects in which they can work together to better learn about each other’s organizations and challenges and find solutions to common problems.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"8 2","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138980210","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Navigating competing demands in monitoring and evaluation: Five key paradoxes
Pub Date: 2023-11-29 | DOI: 10.1177/13563890231215075
M. Faling, Greetje Schouten, Sietze Vellema
Evaluation in complex programs that assemble multiple actors and combine various interventions faces contradictory requirements. In this article, we take a management perspective to show how to recognize and accommodate these contradictory elements as paradoxes. Through reflective practice we identify five paradoxes, each consisting of two contradictory logics: the paradox of purpose, between accountability and learning; the paradox of position, between autonomy and involvement; the paradox of permeability, between openness and closedness; the paradox of method, between rigor and flexibility; and the paradox of acceptance, between credibility and feasibility. We infer the paradoxes from our work in monitoring and evaluation and action research embedded in 2SCALE, a program working on inclusive agribusiness and food security in a complex environment. Because paradoxes are intractable, they cannot be permanently resolved; making productive use of them is likely to raise new contradictions, which well-functioning monitoring and evaluation systems must continuously acknowledge and accommodate.
{"title":"Navigating competing demands in monitoring and evaluation: Five key paradoxes","authors":"M. Faling, Greetje Schouten, Sietze Vellema","doi":"10.1177/13563890231215075","DOIUrl":"https://doi.org/10.1177/13563890231215075","url":null,"abstract":"Evaluation in complex programs assembling multiple actors and combining various interventions faces contradictory requirements. In this article, we take a management perspective to show how to recognize and accommodate these contradictory elements as paradoxes. Through reflective practice we identify five paradoxes, each consisting of two contradicting logics: the paradox of purpose—between accountability and learning; the paradox of position—between autonomy and involvement; the paradox of permeability—between openness and closedness; the paradox of method—between rigor and flexibility; and the paradox of acceptance—between credibility and feasibility. We infer the paradoxes from our work in monitoring and evaluation and action research embedded in 2SCALE, a program working on inclusive agribusiness and food security in a complex environment. The intractable nature of paradoxes means they cannot be permanently resolved. Making productive use of paradoxes most likely raises new contradictions, which merit a continuous acknowledging and accommodating for well-functioning monitoring and evaluation systems.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"6 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139212311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic evaluation of social inclusion
Pub Date: 2023-11-29 | DOI: 10.1177/13563890231210328
Marko Nousiainen, L. Leemann
This study introduces a mixed-method model for the realistic evaluation of programmes promoting the experience of social inclusion among people in disadvantaged positions. It combines qualitative and quantitative methods to explore the context-mechanism-outcome configurations of four cases consisting of development projects. Qualitative analyses depict the context-mechanism-outcome configurations using participants’ interviews and small success stories as data. Quantitative analyses of a longitudinal survey, including the Experiences of Social Inclusion Scale, examine the configurations in a larger group of participants and re-test the qualitative findings, thereby helping to overcome the positive selection bias of the small success stories. The mixed-method approach is fruitful especially because the qualitative and quantitative analyses offset each other’s shortcomings. In promoting social inclusion, it is important to help people see themselves as active agents and to enable them to connect to larger social domains.
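A minimal sketch of the quantitative strand of such a design follows, in Python with pandas. The column names, scores and case labels are hypothetical; it assumes a long-format longitudinal survey with a summed inclusion-scale score per respondent and wave, and computes mean change per case as a crude check on whether qualitatively identified mechanisms show up as outcome change in each context.

import pandas as pd

# Hypothetical long-format survey data: one row per respondent and wave.
df = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3, 4, 4],
    "case":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "wave":       ["baseline", "follow_up"] * 4,
    "inclusion_score": [12, 18, 15, 16, 10, 17, 14, 15],
})

# Reshape to one row per respondent and compute change between waves.
wide = df.pivot_table(index=["respondent", "case"],
                      columns="wave", values="inclusion_score").reset_index()
wide["change"] = wide["follow_up"] - wide["baseline"]

# Mean change per case (i.e. per context), across all survey respondents.
print(wide.groupby("case")["change"].mean())

Unlike the success stories, the survey covers the full group of participants, which is why this strand can re-test the qualitative findings against positive selection bias.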
{"title":"Realistic evaluation of social inclusion","authors":"Marko Nousiainen, L. Leemann","doi":"10.1177/13563890231210328","DOIUrl":"https://doi.org/10.1177/13563890231210328","url":null,"abstract":"This study introduces a mixed-method model for the realistic evaluation of programmes promoting the experience of social inclusion of people in disadvantaged positions. It combines qualitative and quantitative methods for exploring the context-mechanism-outcome- configurations of four cases consisting of development projects. Qualitative analyses depict the context-mechanism-outcome-configurations using participants’ interviews and small success stories as data. Quantitative analyses of a longitudinal survey including the Experiences of Social Inclusion Scale examine the context-mechanism-outcome-configurations in a larger group of participants and re-test the qualitative findings. Thus, they help to overcome the positive selection bias of the small success stories. The mixed-method approach is fruitful especially because the qualitative and the quantitative analyses amend each other’s shortcomings. In the promotion of social inclusion, it is important to help people to see themselves as active agents and allow them to connect to larger social domains.","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"38 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139210388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The sceptical turn in evaluation and what to do with it: Keynote presentation delivered by Peter Dahler-Larsen and Estelle Raimondo at the EES conference in Copenhagen, June 10, 2022
Pub Date: 2023-11-28 | DOI: 10.1177/13563890231208468
Peter Dahler-Larsen, Estelle Raimondo
{"title":"The sceptical turn in evaluation and what to do with it: Keynote presentation delivered by Peter Dahler-Larsen and Estelle Raimondo at the EES conference in Copenhagen, June 10, 2022","authors":"Peter Dahler-Larsen, Estelle Raimondo","doi":"10.1177/13563890231208468","DOIUrl":"https://doi.org/10.1177/13563890231208468","url":null,"abstract":"","PeriodicalId":47511,"journal":{"name":"Evaluation","volume":"76 1","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139221908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}