Socio-spatial aspects of creativity and their role in the planning and design of university campuses’ public spaces: A practitioners’ perspective
Isabelle Soares, Thai N. Van Quoc, C. Yamu, Gerd Weitkamp
Abstract: This paper investigates how socio-spatial aspects of creativity, operationalized as the causal relations between the built environment and perceived creativity in university campuses’ public spaces, are currently applied in practice. Moreover, it discusses practitioners’ perceptions of research-generated evidence on socio-spatial aspects of creativity along three effectiveness aspects: credibility, relevance, and applicability. The “research-generated evidence” is herein derived from data-driven knowledge produced by multi-disciplinary methodologies (e.g., self-reported perceptions, participatory tools, geospatial analysis, and observations). Through a thematic analysis of interviews with practitioners involved in the (re)development of public spaces at inner-city campuses and science parks in Amsterdam, Utrecht, and Groningen, we found that socio-spatial aspects of creativity were addressed at the decision-making level only for Utrecht Science Park. Correspondingly, while most practitioners considered the presented evidence relevant for practice, perceptions of credibility and applicability varied according to institutional goals, practitioners’ habits, and the roles and project phases in which they were involved. The interrelationships found between the three effectiveness aspects highlight (a) institutional fragmentation in campus and public space projects, (b) the research-practice gap in such projects, which extends beyond the university campus context, and (c) insights into the relationship between evidence generated through research-based, data-driven knowledge and the urban planning practice, policy, and governance of knowledge environments. We conclude that if research-generated evidence on socio-spatial aspects of creativity is to be integrated into the evidence-based practice of campuses’ public spaces, alignment between researchers, the multiple actors involved, policy framing, and goal achievement is fundamental.
Data & Policy, published 2022-10-18. https://doi.org/10.1017/dap.2022.27

Food security analysis and forecasting: A machine learning case study in southern Malawi
Shahrzad Gholami, Erwin Knippenberg, James Campbell, Daniel Andriantsimba, Anusheel Kamle, Pavitraa Parthasarathy, Ria Sankar, Cameron Birge, J. L. Lavista Ferres
Abstract: Chronic food insecurity remains a challenge globally, exacerbated by climate change-driven shocks such as droughts and floods. Forecasting food insecurity levels and targeting vulnerable households are priorities for humanitarian programming to ensure timely delivery of assistance. In this study, we propose to harness a machine learning approach trained on high-frequency household survey data to infer the predictors of food insecurity and forecast household-level outcomes in near real-time. Our empirical analyses leverage the Measurement Indicators for Resilience Analysis (MIRA) data collection protocol implemented by Catholic Relief Services (CRS) in southern Malawi through a series of sentinel sites that collect household data monthly. When focusing on predictors of community-level vulnerability, we show that a random forest model outperforms other algorithms and that location and self-reported welfare are the best predictors of food insecurity. We also report performance results across several neural networks and classical models under various data modeling scenarios for forecasting food security. We pose this problem as binary classification, dichotomizing the food security score at two different thresholds, which yields two different positive-to-negative class ratios. Our best-performing model achieves an F1 score of 81% and an accuracy of 83% in predicting food security outcomes when the outcome is dichotomized at a threshold of 16 and the predictor features consist of the historical food security score along with 20 variables selected by artificial-intelligence explainability frameworks. These results showcase the value of combining high-frequency sentinel-site data with machine learning algorithms to predict future food insecurity outcomes.
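The dichotomize-then-classify setup described above can be sketched in a few lines. This is a hedged illustration rather than the authors' code: the MIRA data are not public, so the file name, the column names (food_security_score, location_code, self_reported_welfare), and the direction of the dichotomization (whether a higher score means more or less food insecurity) are all assumptions.

```python
# Hedged sketch of the dichotomize-then-classify setup; file and column names are assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("mira_household_rounds.csv")  # hypothetical export of the sentinel-site surveys

# Dichotomize the continuous food security score at a threshold (the paper reports two
# thresholds; 16 is the one behind the best-performing model). The ">=" direction is an
# assumption about how the score is oriented.
THRESHOLD = 16
df["food_insecure"] = (df["food_security_score"] >= THRESHOLD).astype(int)

# Assumed predictors: the lagged score plus survey covariates such as location and welfare.
features = ["food_security_score_lag1", "location_code", "self_reported_welfare"]
X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["food_insecure"],
    test_size=0.2, random_state=0, stratify=df["food_insecure"],
)

# A random forest, which the paper found to perform well for community-level vulnerability.
model = RandomForestClassifier(n_estimators=500, class_weight="balanced", random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print(f"F1: {f1_score(y_test, pred):.2f}  accuracy: {accuracy_score(y_test, pred):.2f}")
```

Note that a time-based split (training on earlier survey rounds, testing on later ones) would be more faithful to the forecasting task than the random split used in this sketch.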
Data & Policy, published 2022-10-11. https://doi.org/10.1017/dap.2022.25

Lesson (un)replicated: Predicting levels of political violence in Afghan administrative units per month using ARFIMA and ICEWS data
Tamir Libel
Abstract: The aim of the present article is to evaluate the use of the Autoregressive Fractionally Integrated Moving Average (ARFIMA) model in predicting spatially and temporally localized violent political events using the Integrated Crisis Early Warning System (ICEWS). The performance of the ARFIMA model is compared to that of a naïve model with reference to two hypotheses: that the ARFIMA model would outperform the naïve model, and that the margin of outperformance would deteriorate at higher levels of spatial aggregation. This analytical strategy is used to predict violent political events in Afghanistan. The analysis consists of three parts. The first is a replication of Yonamine’s study for the period from April 2010 to March 2012. The second compares the results to those of Yonamine; this comparison assesses whether the conclusions of the original study, which was based on the Global Database of Events, Language, and Tone, remain valid when the approach is applied to ICEWS data. Building on the conclusions of this comparison, the third part uses Yonamine’s approach to predict violent events in Afghanistan over a significantly longer period (January 1995–August 2021). The conclusions provide an assessment of the utility of short-term localized forecasting.
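For readers unfamiliar with ARFIMA, the core idea is an ARMA model applied to a fractionally differenced series, with the differencing order d allowed to take non-integer values to capture long memory. The sketch below is an assumption-laden illustration, not the article's code: statsmodels has no native ARFIMA class, so the fractional differencing is done manually with truncated binomial weights, the input file and the value of d are placeholders, and the back-transformation of the ARFIMA forecasts to the original scale is omitted for brevity.

```python
# Hedged sketch: fractional differencing + ARMA as a stand-in for ARFIMA, plus the naïve
# last-value benchmark. The input series and the value of d are placeholders.
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

def frac_diff(series: pd.Series, d: float, n_weights: int = 100) -> pd.Series:
    """Apply (1 - B)^d with the binomial expansion truncated at n_weights terms."""
    w = [1.0]
    for k in range(1, n_weights):
        w.append(-w[-1] * (d - k + 1) / k)
    w = np.array(w)
    x = series.to_numpy(dtype=float)
    out = np.full(len(x), np.nan)
    for t in range(n_weights - 1, len(x)):
        out[t] = np.dot(w, x[t - n_weights + 1 : t + 1][::-1])  # w[0] multiplies the current value
    return pd.Series(out, index=series.index)

# Hypothetical monthly counts of violent events, aggregated here to a single series for simplicity.
events = pd.read_csv("icews_afghanistan_monthly.csv", index_col="month", parse_dates=True)["violent_events"]
train, test = events[:-12], events[-12:]

d = 0.35  # long-memory parameter; in practice estimated from the data, not fixed by hand
arma = ARIMA(frac_diff(train, d).dropna(), order=(1, 0, 1)).fit()  # an "ARFIMA(1, d, 1)" stand-in
print(arma.summary())

# Naïve benchmark: carry the last observed value forward over the test horizon.
naive = np.repeat(train.iloc[-1], len(test))
print("Naïve MAE:", np.mean(np.abs(test.to_numpy() - naive)))
```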
Data & Policy, published 2022-10-04. https://doi.org/10.1017/dap.2022.26

A journey toward an open data culture through transformation of shared data into a data resource
Scott D. Kahn, A. Koralova
Abstract: The transition to open data practices is conceptually straightforward, albeit surprisingly challenging to implement, largely due to cultural and policy issues. A general data sharing framework is presented along with two case studies that highlight these challenges and offer practical solutions that can be adjusted depending on the type of data collected, the country in which the study is initiated, and the prevailing research culture. The constraints imposed by data privacy considerations, especially for biomedical data, must be embraced for data outside of the United States until data privacy laws are established at the federal and/or state level.
Data & Policy, published 2022-09-06. https://doi.org/10.1017/dap.2022.22

Building better global data governance – CORRIGENDUM
J. Kuzio, Mohammad Ahmadi, Kyoung-Cheol Kim, Michael R. Migaud, Yi-Fan Wang, Justin B. Bullock
Data & Policy, published 2022-08-26. https://doi.org/10.1017/dap.2022.20

Crisis as driver of digital transformation? Scottish local governments’ response to COVID-19
Justine Gangneux, Simon Joss
Abstract: The response to the COVID-19 pandemic has, from the outset, been characterized by a strong focus on real-time data intelligence and the use of data-driven technologies. Against this backdrop, this article investigates the impacts of the pandemic on Scottish local governments’ data practices and, in turn, whether the crisis acted as a driver of digital transformation. Mobilizing the literatures on digital government transformation and on the impacts of crises on public administrations, the article provides insights into the dynamics of digital transformation during a heightened period of acute demands on the public sector. The research evidences an intensification of public sector data use and sharing in Scottish local authorities, with a focus on health-related data and the integration of existing datasets to gather local intelligence. It also reveals significant changes in the technical and social systems of local government organizations, including the repurposing and adoption of information systems, the acceleration of inter- and intra-organizational data sharing processes, and changes in ways of working and in attitudes toward data sharing and collaboration. Drawing on these findings, the article highlights the importance of identifying and articulating specific data needs in relation to concrete policy questions in order to render digital transformation relevant and effective. It also points to the need to address the persistent systemic challenges underlying public sector data engagement through, on the one hand, sustained investment in data capabilities and infrastructures and, on the other, support for cross-organizational collaborative spaces and networks.
Data & Policy, published 2022-08-24. https://doi.org/10.1017/dap.2022.18

Building better global data governance
J. Kuzio, Mohammad Ahmadi, Kyoung-Cheol Kim, Michael R. Migaud, Yi-Fan Wang, Justin B. Bullock
Abstract: In this article, we explore the challenges of global governance and the particular challenge presented by global data governance. We discuss a range of challenges to developing meaningful global governance institutions for regulating how companies and governments around the world manage and utilize consumer data. These challenges are compounded by their global nature and the complexities of Internet-based technologies. We argue that the following gaps stand in the way of effective global data governance: (a) there is no overarching global framework for protecting consumer data, and what does exist is partial and incomplete; (b) there is a lack of data protection for international data transfers, as much of the regulation being developed is not global in scale; and (c) new areas of data collection and use compound the challenges to effective data governance in a globalized digital world. Moreover, we highlight important needs regarding both global governance and the impending challenges related to current and new uses of data. Any global governance framework should recognize the need for an iterative process in which communication is ongoing between the necessary stakeholders. Agreements should incorporate common goals to maximize the potential development of global data governance norms. However, goals must remain flexible enough to accommodate the different data environments across nation-states while maintaining a global scope to ensure data protection. In addition, any agreement should consider the emerging challenges in this area, including new methods of data collection and use, as well as protecting individuals from manipulation and undue influence based on how their data are being used, processed, and collected.
Data & Policy, published 2022-08-08. https://doi.org/10.1017/dap.2022.17

The changing roles of frontline bureaucrats in the digital welfare state: The case of a data dashboard in Rotterdam’s Work and Income department
Margot Kersing, L. van Zoonen, Kim Putters, L. Oldenhof
Abstract: The welfare state is currently undergoing a transition toward data-driven policies, management, and execution. This has important repercussions for frontline bureaucrats in such a “digital welfare state.” So far, the impact of data-driven tools on frontline bureaucrats has primarily been described in terms of curtailing or enlarging their discretionary space to make decisions. It is unclear, however, how frontline bureaucrats’ daily work practices and role identities change in situ and which norms they develop to work with new data tools. In this article, we present an empirical study of the impact of a data dashboard in the Work and Income department of the municipality of Rotterdam. We answer the following research question: Which role identities, work practices, and norms of appropriate behavior of frontline bureaucrats in the social domain are reshaped by the introduction of a data dashboard? We use a multi-method design consisting of semi-structured interviews, ethnographic observations, and document analysis. Our results reveal two role identities among frontline bureaucrats: (a) the client coach and (b) the caseload manager. We show that the implementation of the dashboard stimulates a shift from the client coach role identity toward the caseload manager role identity. This shift is contested, as it leads to role-identity conflicts among frontline bureaucrats with a client coach role. Furthermore, we establish that the accommodation of the institutional void in which the dashboard is introduced centers on three themes of contestation: (a) data quality, (b) quality of service provision, and (c) data representations.
Data & Policy, published 2022-08-02. https://doi.org/10.1017/dap.2022.16

Supporting peace negotiations in the Yemen war through machine learning
M. Arana-Catania, F. V. Lier, R. Procter
Abstract: Today’s conflicts are becoming increasingly complex, fluid, and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to address these challenges successfully. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, the study shows how machine learning can effectively support mediation teams by providing them with tools for knowledge management, knowledge extraction, and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the article also emphasizes the importance of an interdisciplinary, participatory co-creation methodology for developing context-sensitive and targeted tools and for ensuring meaningful and responsible implementation.
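As a purely illustrative sketch of the kind of knowledge-extraction tool described here, the snippet below clusters transcript segments into candidate "conflict issue" topics using TF-IDF and non-negative matrix factorization. Everything in it is an assumption: the transcript file, the segmentation into speaker turns, the English-language preprocessing, and the choice of model, which may differ entirely from the pipeline the authors actually built.

```python
# Hedged sketch: surfacing candidate conflict issues from negotiation transcripts.
# The file name, segmentation, and English stop words are assumptions for illustration only.
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer

with open("yemen_dialogue_transcripts.txt", encoding="utf-8") as f:
    # Assume one speaker turn per blank-line-separated block.
    segments = [s.strip() for s in f.read().split("\n\n") if s.strip()]

tfidf = TfidfVectorizer(max_df=0.9, min_df=2, stop_words="english")
X = tfidf.fit_transform(segments)

nmf = NMF(n_components=8, random_state=0)  # 8 candidate issue clusters, chosen arbitrarily
weights = nmf.fit_transform(X)
terms = tfidf.get_feature_names_out()
for i, topic in enumerate(nmf.components_):
    top_terms = [terms[j] for j in topic.argsort()[-8:][::-1]]
    print(f"Issue cluster {i}: {', '.join(top_terms)}")
```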
Data & Policy, published 2022-07-23. https://doi.org/10.1017/dap.2022.19

Application of recommender systems and time series models to monitor quality at HIV/AIDS health facilities
J. Friedman, Zola Allen, Allison Fox, Jose Webert, A. Devlin
Abstract: The US government invests substantial sums to control the HIV/AIDS epidemic. To monitor progress toward epidemic control, PEPFAR, the President’s Emergency Plan for AIDS Relief, oversees a data reporting system that includes standard indicators, reporting formats, information systems, and data warehouses. These data, reported quarterly, inform understanding of the global epidemic, resource allocation, and identification of trouble spots. PEPFAR has developed tools to assess the quality of the data reported. These tools have made important contributions but are limited in the methods used to identify anomalous data points. The most advanced consider univariate probability distributions, whereas correlations between indicators suggest that a multivariate approach is better suited. For temporal analysis, the same tool compares values to the averages of preceding periods but does not consider underlying trends and seasonal factors. To that end, we apply two methods to identify anomalous data points in routinely collected facility-level HIV/AIDS data. One is a recommender system, an unsupervised machine learning method that captures relationships between users and items. We apply the approach in a novel way by predicting reported values, comparing predicted to reported values, and identifying the greatest deviations. For a temporal perspective, we apply time series models that are flexible enough to include trend and seasonality. Results of these methods were validated against manual review (95% agreement on non-anomalies and 56% agreement on anomalies for the recommender system; 96% agreement on non-anomalies and 91% agreement on anomalies for the time series models). This tool will apply greater methodological sophistication to monitoring data quality in an accelerated and standardized manner.
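The recommender-style check lends itself to a compact sketch: treat facilities as "users" and indicators as "items", predict every reported value from a low-rank reconstruction of the facility-by-indicator matrix, and flag the largest gaps between predicted and reported values. This is an assumption-based illustration, not PEPFAR's tool: the file and column names are invented, truncated SVD stands in for whatever factorization the authors used, and the seasonal time series component is omitted.

```python
# Hedged sketch of the recommender-style anomaly check; file/column names and the use of
# truncated SVD (rather than the authors' actual factorization) are assumptions.
import numpy as np
import pandas as pd
from sklearn.decomposition import TruncatedSVD

reports = pd.read_csv("facility_quarterly_reports.csv")  # columns: facility_id, indicator, value
matrix = reports.pivot_table(index="facility_id", columns="indicator", values="value").fillna(0.0)

# Low-rank reconstruction: facilities play the role of "users", indicators the role of "items".
svd = TruncatedSVD(n_components=10, random_state=0)
predicted = svd.inverse_transform(svd.fit_transform(matrix))

# Large gaps between reported and predicted values are candidate anomalies for manual review.
residuals = pd.DataFrame(np.abs(matrix.to_numpy() - predicted),
                         index=matrix.index, columns=matrix.columns)
print(residuals.stack().sort_values(ascending=False).head(20))
```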
Data & Policy, published 2022-07-11. https://doi.org/10.1017/dap.2022.15