Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility
D. Widder, D. Nafus
Big Data & Society | Pub Date: 2022-09-20 | DOI: 10.1177/20539517231177620
Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.
The effectiveness of embedded values analysis modules in Computer Science education: An empirical study
Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells
Big Data & Society | Pub Date: 2022-08-10 | DOI: 10.1177/20539517231176230
Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.
Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science
Z. Tacheva
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221112901
Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus the solutions, that critical data scholars and scientists can consider. A resolutely transnational feminist approach, on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.
Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders
Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221124586
In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.
Towards a political economy of technical systems: The case of Google
Bernhard Rieder
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135162
This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as an example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the notion of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.
Learning accountable governance: Challenges and perspectives for data-intensive health research networks
Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221136078
Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.
Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading
C. Borch, Bo Hee Min
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111361
Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
Social data governance: Towards a definition and model
Jun Liu
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111352
With the surge in the number of data and datafied governance initiatives, arrangements, and practices across the globe, understanding various types of such initiatives, arrangements, and their structural causes has become a daunting task for scholars, policy makers, and the public. This complexity additionally generates substantial difficulties in considering different data(fied) governances commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate the macro, societal dimension of data governance, this study then suggests the term “social data governance” to bring forth the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories of political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model consists of a two-dimensional continuum: one dimension spans state intervention and societal autonomy, the other national cultures. It accounts for variations in social data governance across societies as a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. Finally, we conduct an extreme case study of governing digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model of social data governance.
Algorithmic empowerment: A comparative ethnography of two open-source algorithmic platforms – Decide Madrid and vTaiwan
Yu-Shan Tseng
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221123505
Scholars of critical algorithmic studies, including those from geography, anthropology, Science and Technology Studies and communication studies, have begun to consider how algorithmic devices and platforms facilitate democratic practices. In this article, I draw on a comparative ethnography of two alternative open-source algorithmic platforms – Decide Madrid and vTaiwan – to consider how they are dynamically constituted by differing algorithmic–human relationships. I compare how different algorithmic–human relationships empower citizens to influence political decision-making through proposing, commenting, and voting on the urban issues that should receive political resources in Taipei and Madrid. I argue that algorithmic empowerment is an emerging process in which algorithmic–human relationships orient away from limitations and towards conditions of plurality, actionality, and power decentralisation. This argument frames algorithmic empowerment as bringing about empowering conditions that allow (underrepresented) individuals to shape policy-making and consider plural perspectives for political change and action, not as an outcome-driven, binary assessment (i.e. yes/no). This article contributes a novel, situated, and comparative conceptualisation of algorithmic empowerment that moves beyond technological determinism and universalism.
Privacy at risk? Understanding the perceived privacy protection of health code apps in China
Gejun Huang, A. Hu, Wenhong Chen
Big Data & Society | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135132
As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve the pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users via the lens of contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find users’ perceived convenience, attention towards privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection in using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. Also, the findings foreground the heuristic value of contextual integrity theory to examine controversial digital surveillance in non-Western contexts. Taken together, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.