The role of data in sustainability assessment of urban mobility policies
Xu Liu, M. Dijk. Data & Policy, 2022-01-11. https://doi.org/10.1017/dap.2021.32

Abstract: Data have played a role in urban mobility policy planning for decades, especially in forecasting demand, but much less so in policy evaluation and assessment. The surge in the availability and openness of (big) data over the last decade seems to provide new opportunities to meet the demand for evidence-based policymaking. This paper reviews how different types of data are employed in assessments published in academic journals by analyzing 74 cases. Our review finds that (a) the academic literature has so far provided limited insight into new data developments in policy practice; (b) research shows that new types of big data provide new opportunities for evidence-based policymaking; however, (c) they cannot replace traditional data sources (surveys and statistics). Instead, combining big data with survey and Geographic Information System data in ex-ante assessments, as well as in developing decision support tools, is found to be the most effective approach. This could help policymakers not only gain much more insight from policy assessments but also avoid the limitations of any single type of data. Finally, current research projects are rather data supply-driven. Future research should engage with policy practitioners to reveal best practices, constraints, and the potential of more demand-driven data use in mobility policy assessments in practice.
Know Your Customer: Balancing innovation and regulation for financial inclusion
Karen Elliott, Kovila P. L. Coopamootoo, Edward Curran, P. Ezhilchelvan, S. Finnigan, David A C Horsfall, Zhichao Ma, Magdalene Ng, Tasos Spiliotopoulos, Han Wu, Aad van Moorsel. Data & Policy, 2021-12-17. https://doi.org/10.1017/dap.2022.23

Abstract: Financial inclusion depends on providing adjusted services for citizens with disclosed vulnerabilities. At the same time, the financial industry needs to adhere to a strict regulatory framework, which is often in conflict with the desire for inclusive, adaptive, and privacy-preserving services. In this article, we study how this tension affects the deployment of privacy-sensitive technologies aimed at financial inclusion. We conduct a qualitative study with banking experts to understand their perspectives on service development for financial inclusion. We build and demonstrate a prototype solution based on open-source decentralized identifiers and verifiable credentials software, and report on feedback from the banking experts on this system. The technology is promising thanks to its support for selective disclosure of vulnerabilities under the full control of the individual. This supports GDPR requirements, but at the same time there is a clear tension between introducing these technologies and fulfilling other regulatory requirements, particularly with respect to “Know Your Customer.” We consider the policy implications stemming from these tensions and provide guidelines for the further design of related technologies.
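The prototype described above builds on decentralized identifiers and verifiable credentials; the sketch below only illustrates the selective-disclosure idea, not the authors' implementation. The function names and data are hypothetical, and salted hash commitments stand in for the signing and DID-resolution machinery of real VC stacks: the holder reveals a disclosed vulnerability while keeping every other attribute hidden.

```python
# Illustrative sketch of selective disclosure in the spirit of verifiable
# credentials (not the paper's prototype). The issuer commits to each claim
# with a salted hash; the holder discloses only chosen claims; the verifier
# checks the disclosed claims against the issuer's commitments.
import hashlib
import json
import secrets

def issue_credential(claims: dict) -> dict:
    """Issuer: salt and hash every claim (the digests would be signed in a real system)."""
    salts = {k: secrets.token_hex(8) for k in claims}
    digests = {k: hashlib.sha256((salts[k] + json.dumps(v)).encode()).hexdigest()
               for k, v in claims.items()}
    return {"claims": claims, "salts": salts, "digests": digests}

def present(credential: dict, disclose: set) -> dict:
    """Holder: reveal only the chosen claims and their salts; all commitments travel along."""
    return {"disclosed": {k: credential["claims"][k] for k in disclose},
            "salts": {k: credential["salts"][k] for k in disclose},
            "digests": credential["digests"]}

def verify(presentation: dict) -> bool:
    """Verifier: recompute each disclosed claim's digest and compare with the commitment."""
    return all(
        hashlib.sha256((presentation["salts"][k] + json.dumps(v)).encode()).hexdigest()
        == presentation["digests"][k]
        for k, v in presentation["disclosed"].items())

# A customer proves a disclosed vulnerability for service adjustment without
# revealing unrelated attributes such as income band (hypothetical data).
cred = issue_credential({"name": "A. Customer",
                         "vulnerability": "hearing impairment",
                         "income_band": "low"})
print(verify(present(cred, {"vulnerability"})))  # True
```

In a deployed system the commitments would be signed and anchored to the issuer's decentralized identifier, which is what supports GDPR-style data minimization while leaving the Know Your Customer tension described above unresolved.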
Relativistic conceptions of trustworthiness: Implications for the trustworthy status of national identification systems
P. Smart, Wendy Hall, M. Boniface. Data & Policy, 2021-12-17. https://doi.org/10.1017/dap.2022.13

Abstract: Trustworthiness is typically regarded as a desirable feature of national identification systems (NISs); but the variegated nature of the trustor communities associated with such systems makes it difficult to see how a single system could be equally trustworthy to all actual and potential trustors. This worry is accentuated by common theoretical accounts of trustworthiness. According to such accounts, trustworthiness is relativized to particular individuals and particular areas of activity, such that one can be trustworthy with regard to some individuals in respect of certain matters, but not trustworthy with regard to all trustors in respect of every matter. The present article challenges this relativistic approach to trustworthiness by outlining a new account of trustworthiness, dubbed the expectation-oriented account. This account allows for the possibility of an absolutist (or one-place) approach to trustworthiness. Such an account, we suggest, is the approach that best supports the effort to develop NISs. To be trustworthy, we suggest, is to minimize the error associated with trustor expectations in situations of social dependency (commonly referred to as trust situations), and to be trustworthy in an absolute sense is to assign equal value to all expectation-related errors in all trust situations. In addition to outlining the features of the expectation-oriented account, we describe some of the implications of this account for the design, development, and management of trustworthy NISs.
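Read quasi-formally (our gloss, not the authors' notation), the expectation-oriented account scores an agent by its aggregate expectation error across trustors and trust situations:

```latex
% Hedged formalization of the expectation-oriented account (a gloss, not the
% paper's notation). e_{i,s}(a): error between trustor i's expectation and
% agent a's performance in trust situation s; w_{i,s}: weight on that error.
\[
  \mathrm{TW}(a) \;=\; -\sum_{i \in \mathcal{T}} \sum_{s \in \mathcal{S}_i} w_{i,s}\,\bigl|e_{i,s}(a)\bigr|
\]
% Relativistic accounts let w_{i,s} vary across trustors and activities
% (two- or three-place trustworthiness); the absolutist, one-place reading
% sets all w_{i,s} equal, so being trustworthy is simply minimizing
% expectation error in every trust situation alike.
```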
Data linkage for early intervention in the UK: Parental social license and social divisions
R. Edwards, Val Gillies, S. Gorin. Data & Policy, 2021-12-07. https://doi.org/10.1017/dap.2021.34

Abstract: Electronic linking of public records and predictive analytics to identify families for preventive early intervention is increasingly promoted by governments. We use the concept of social license to address questions of social legitimacy, agreement, and trust in data linkage and analytics for parents of dependent children, who are the focus of early intervention initiatives in the UK. We review data-steered family policy and early intervention operational service practices. We draw on a consensus baseline analysis of data from a probability-based panel survey of parents to show that informed consent to data linkage and use is important to all parents, but that there are social divisions of knowledge, agreement, and trust. There is more social license for data linkage by services among parents in higher occupation, qualification, and income groups than among Black parents, lone parents, younger parents, and parents in larger households. These marginalized groups of parents, collectively, are more likely to be the focus of identification for early intervention. We argue that government awareness-raising exercises about the merits of data linkage are likely to bolster existing social license among advantaged parents while running the risk of further disengagement among disadvantaged groups. This is especially the case where inequalities and forecasting inaccuracies are encoded into early intervention data gathering, linking, and predictive practices, with consequences for a cohesive and equal society.
The role of artificial intelligence in disinformation
Noémi Bontridder, Y. Poullet. Data & Policy, 3, 2021-11-25. https://doi.org/10.1017/dap.2021.20

Abstract: Artificial intelligence (AI) systems play an overarching role in the disinformation phenomenon our world is currently facing. Such systems exacerbate the problem not only by increasing opportunities to create realistic AI-generated fake content, but also, and essentially, by facilitating the dissemination of disinformation to targeted audiences at scale by malicious stakeholders. This situation raises multiple ethical and human rights concerns, in particular regarding human dignity, autonomy, democracy, and peace. In reaction, other AI systems are being developed to detect and moderate disinformation online. Such systems do not escape ethical and human rights concerns either, especially regarding freedom of expression and information. Having originally started with ascending co-regulation, the European Union (EU) is now heading toward descending co-regulation of the phenomenon. In particular, the Digital Services Act proposal provides for transparency obligations and external audits of very large online platforms’ recommender systems and content moderation. While the Commission’s proposal focuses on regulating content considered problematic, the EU Parliament and the EU Council call for enhancing access to trustworthy content. In light of our study, we stress that the disinformation problem is mainly caused by the web’s advertising-revenue business model, and that adapting this model would reduce the problem considerably. We also observe that while AI systems are inappropriate for moderating disinformation content online, and even for detecting such content, they may be more appropriate for countering the manipulation of the digital ecosystem.
Knowledge politics in the smart city: A case study of strategic urban planning in Cambridge, UK
T. Nochta, N. Wahby, J. Schooling. Data & Policy, 2021-11-17. https://doi.org/10.1017/dap.2021.28

Abstract: This paper highlights the need and opportunities for constructively combining different types of (analogue and data-driven) knowledges in evidence-informed policy decision-making in future smart cities. Problematizing the assumed universality and objectivity of data-driven knowledge, we call attention to notions of “positionality” and “situatedness” in knowledge production relating to the urban present and possible futures. In order to illustrate our arguments, we draw on a case study of strategic urban (spatial) planning in the Cambridge city region in the United Kingdom. Tracing diverse knowledge production processes, including top-down data-driven knowledges derived from urban modeling, and bottom-up analogue community-based knowledges, allows us to identify locationally specific knowledge politics around evidence for policy. The findings highlight how evidence-informed urban policy can benefit from political processes of competition, contestation, negotiation, and complementarity that arise from interactions between diverse “digital” and “analogue” knowledges. We argue that studying such processes can help in assembling a more multifaceted, diverse and inclusive knowledge base on which to found policy decisions, as well as to raise awareness and improve active participation in the ongoing “smartification” of cities.
Increasing resilience via the use of personal data: Lessons from COVID-19 dashboards on data governance for the public good
V. Li, Masaru Yarime. Data & Policy, 3, 2021-11-12. https://doi.org/10.1017/dap.2021.27

Abstract: Contemporary data tools such as online dashboards have been instrumental in monitoring the spread of the COVID-19 pandemic. These real-time interactive platforms allow citizens to understand the local, regional, and global spread of COVID-19 in a consolidated and intuitive manner. Despite this, little research has been conducted on how citizens respond to the data on the dashboards in terms of the pandemic and data governance issues such as privacy. In this paper, we seek to answer the research question: how can governments use data tools, such as dashboards, to balance the trade-offs between safeguarding public health and protecting data privacy during a public health crisis? This study used surveys and semi-structured interviews to understand the perspectives of the developers and users of COVID-19 dashboards in Hong Kong. A typology was also developed to assess how Hong Kong’s dashboards navigated trade-offs between data disclosure and privacy at a time of crisis compared to dashboards in other jurisdictions. Results reveal that two key factors were present in the design and improvement of COVID-19 dashboards in Hong Kong: informed actions based on open COVID-19 case data, and significant public trust built on data transparency. Finally, this study argues that norms surrounding reporting on COVID-19 cases, as well as cases for future pandemics, should be co-constructed among citizens and governments so that policies founded on such norms can be acknowledged as salient, credible, and legitimate.
Impact of data accuracy on the evaluation of COVID-19 mitigation policies
Michele Starnini, A. Aleta, M. Tizzoni, Y. Moreno. Data & Policy, 2021-10-28. https://doi.org/10.1017/dap.2021.25

Abstract: Evaluating the effectiveness of nonpharmaceutical interventions (NPIs) to mitigate the COVID-19 pandemic is crucial to maximize epidemic containment while minimizing the social and economic impact of these measures. However, this endeavor crucially relies on surveillance data publicly released by health authorities, which can conceal several limitations. In this article, we quantify the impact of inaccurate data on the estimation of the time-varying reproduction number $R(t)$, a pivotal quantity for gauging the variation in transmissibility brought about by the implementation of different NPIs. We focus on Italy and Spain, two European countries among the most severely hit by the COVID-19 pandemic. For these two countries, we highlight several biases in case-based surveillance data and temporal and spatial limitations in the data on the implementation of NPIs. We also demonstrate that an unbiased estimation of $R(t)$ could have had direct consequences for the decisions taken by the Spanish and Italian governments during the first wave of the pandemic. Our study shows that extreme care should be taken when evaluating intervention policies through publicly available epidemiological data, and calls for an improvement in the process of COVID-19 data collection, management, storage, and release. Better data policies will allow a more precise evaluation of the effects of containment measures, empowering public health authorities to make more informed decisions.
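For readers unfamiliar with $R(t)$, the sketch below shows the kind of renewal-equation estimator the abstract refers to, in which reported incidence is weighted by the serial-interval distribution. It is illustrative only: the case counts and serial-interval weights are invented, and published estimators (e.g., Cori-style methods) add smoothing windows and credible intervals.

```python
# Minimal renewal-equation estimate of the time-varying reproduction number:
# R(t) ~= I_t / sum_{s>=1} I_{t-s} * w_s, with I the daily incidence and w a
# discretized serial-interval distribution. Illustrative numbers only.
import numpy as np

def estimate_rt(incidence: np.ndarray, serial_interval: np.ndarray) -> np.ndarray:
    """Return R(t) per day; NaN where there is not enough history."""
    rt = np.full(len(incidence), np.nan)
    for t in range(1, len(incidence)):
        s = np.arange(1, min(t, len(serial_interval)) + 1)
        infection_pressure = np.sum(incidence[t - s] * serial_interval[s - 1])
        if infection_pressure > 0:
            rt[t] = incidence[t] / infection_pressure
    return rt

cases = np.array([5, 8, 13, 21, 30, 41, 55, 70, 85, 98], dtype=float)  # hypothetical
w = np.array([0.10, 0.25, 0.30, 0.20, 0.10, 0.05])                     # sums to 1
print(np.round(estimate_rt(cases, w), 2))
```

Because reporting artifacts enter the numerator and the weighted denominator at different times, even modest inaccuracies in the incidence series can shift the estimate around the critical threshold of $R(t)=1$, which is precisely why the data limitations discussed in the article matter for policy.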
Evaluating the trade-off between privacy, public health safety, and digital security in a pandemic
Titilope Akinsanmi, A. Salami. Data & Policy, 2021-10-28. https://doi.org/10.1017/dap.2021.24

Abstract: COVID-19 has impacted all aspects of everyday normalcy globally. During the height of the pandemic, people shared their personal information (PI) with one goal: to protect themselves from contracting an “unknown and rapidly mutating” virus. The technologies deployed (from mobile-device applications to online platforms) collected, with or without informed consent, large amounts of PI, including location, travel, and personal health information, in order to monitor, track, and control the spread of the virus. However, many of these measures encouraged trading off privacy for safety. In this paper, we reexamine the nature of privacy through the lens of safety, focusing on the health sector, digital security, and what does or does not constitute an infraction of the privacy rights of individuals in a pandemic, as experienced over the past 18 months. The paper makes a case for balancing the benefits that contact-tracing apps offer in containing COVID-19 against the need to ensure end-user privacy and data security. Specifically, it strengthens the case for designing with transparency and accountability measures and safeguards in place, as these are critical to protecting the privacy and digital security of users in the collection, use, and retention of their data. We recommend oversight measures to ensure compliance with the principles of lawful processing, knowing that these, among others, would ensure the integration of privacy-by-design principles even in unforeseen crises like an ongoing pandemic, entrench public trust and acceptance, and protect the digital security of people.
Data sharing and collaborations with Telco data during the COVID-19 pandemic: A Vodafone case study
Pedro Rente Lourenco, Gurjeet Kaur, Matthew Allison, Terry Evetts. Data & Policy, 3:e33, 2021-10-22 (eCollection 2021-01-01). https://doi.org/10.1017/dap.2021.26. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8649407/pdf/

Abstract: With the outbreak of COVID-19 across Europe, anonymized telecommunications data provides key insight into population-level mobility and into assessing the impact and effectiveness of containment measures. Vodafone's response across its global footprint was fast and delivered key new metrics for the pandemic that have proven useful to a number of external entities. Cooperation with national governments and supra-national entities to help fight the COVID-19 pandemic was a key part of Vodafone's response; in this article, we analyze the different methodologies developed, as well as the key collaborations established in this context. We also analyze the regulatory challenges encountered, and how they risk leaving the full benefits of these insights unharnessed, despite clear and efficient Privacy and Ethics assessments to ensure individual safety and data privacy.
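The article does not detail Vodafone's metrics, but a minimal sketch of the kind of population-level mobility indicator derivable from anonymized, aggregated network counts might look as follows; the areas, baselines, and counts are hypothetical.

```python
# Sketch of a population-level mobility index from anonymized, aggregated
# counts (not Vodafone's actual methodology): percentage change in daily
# trips per area relative to a pre-pandemic baseline. Aggregates only; no
# individual-level data is involved.
def mobility_index(daily_trips: dict, baseline: dict) -> dict:
    """Percent change versus baseline for each (area, day) pair."""
    return {(area, day): round(100 * (count - baseline[area]) / baseline[area], 1)
            for (area, day), count in daily_trips.items()}

baseline = {"area_A": 12000, "area_B": 8500}        # mean trips, pre-pandemic weekdays
daily_trips = {("area_A", "2020-03-20"): 4100,      # aggregated counts under lockdown
               ("area_B", "2020-03-20"): 3900}
print(mobility_index(daily_trips, baseline))
# {('area_A', '2020-03-20'): -65.8, ('area_B', '2020-03-20'): -54.1}
```

Indicators of this sort can be shared with governments and supra-national bodies without exposing individual trajectories, which is the balance between insight and privacy the case study describes.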