Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility
Pub Date: 2022-09-20 | DOI: 10.1177/20539517231177620
D. Widder, D. Nafus
Responsible artificial intelligence guidelines ask engineers to consider how their systems might cause harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines as within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low-status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.
{"title":"Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility","authors":"D. Widder, D. Nafus","doi":"10.1177/20539517231177620","DOIUrl":"https://doi.org/10.1177/20539517231177620","url":null,"abstract":"Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44303049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The effectiveness of embedded values analysis modules in Computer Science education: An empirical study
Pub Date: 2022-08-10 | DOI: 10.1177/20539517231176230
Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells
Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.
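As a rough illustration of how a pre/post attitude gain like the one reported might be tested, here is a minimal sketch using a paired t-test on simulated scores. The sample size, attitude scale, and effect size are hypothetical assumptions, not the study's actual data or analysis.

```python
# Minimal sketch: testing a pre/post attitude shift with a paired t-test.
# All numbers below are synthetic illustrations, not the Northeastern data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_students = 120  # hypothetical cohort size

# Simulated attitude scores on a 1-5 scale, with a modest assumed gain
pre = rng.normal(loc=3.4, scale=0.6, size=n_students)
post = pre + rng.normal(loc=0.25, scale=0.4, size=n_students)

# Paired test: each student serves as their own control
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```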
{"title":"The effectiveness of embedded values analysis modules in Computer Science education: An empirical study","authors":"Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells","doi":"10.1177/20539517231176230","DOIUrl":"https://doi.org/10.1177/20539517231176230","url":null,"abstract":"Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":"10 1","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43917079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards a political economy of technical systems: The case of Google
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135162
Bernhard Rieder
This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as an example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the concept of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.
{"title":"Towards a political economy of technical systems: The case of Google","authors":"Bernhard Rieder","doi":"10.1177/20539517221135162","DOIUrl":"https://doi.org/10.1177/20539517221135162","url":null,"abstract":"This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the notion of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47079658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221124586
Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret
In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had previously been involved in the development of databases for border and migration management. While a growing number of publicly available documents show what kinds of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries, and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.
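To make the method concrete, here is a minimal sketch of the kind of agency-to-company contract network the authors describe building, using networkx. The contract records and values are hypothetical placeholders, not figures from the paper's dataset.

```python
# Minimal sketch: a bipartite network of EU agencies and contracted companies.
# The records below are hypothetical illustrations, not data from the paper.
import networkx as nx

contracts = [
    # (awarding agency, company, contract value in EUR) -- illustrative only
    ("eu-LISA", "IDEMIA", 140_000_000),
    ("eu-LISA", "Sopra Steria", 160_000_000),
    ("Frontex", "Sopra Steria", 30_000_000),
]

G = nx.Graph()
for agency, company, value in contracts:
    G.add_node(agency, kind="agency")
    G.add_node(company, kind="company")
    # Accumulate value when several contracts link the same pair
    if G.has_edge(agency, company):
        G[agency][company]["value"] += value
    else:
        G.add_edge(agency, company, value=value)

# Companies contracted by more than one agency sit at network intersections,
# the kind of pattern close reading of tender documents can then contextualize
for node, data in G.nodes(data=True):
    if data["kind"] == "company" and G.degree(node) > 1:
        print(f"{node} holds contracts with multiple agencies")
```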
{"title":"Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders","authors":"Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret","doi":"10.1177/20539517221124586","DOIUrl":"https://doi.org/10.1177/20539517221124586","url":null,"abstract":"In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41974968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221112901
Z. Tacheva
Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that, while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus solutions, that critical data scholars and scientists can consider. A resolutely transnational feminist approach, on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.
{"title":"Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science","authors":"Z. Tacheva","doi":"10.1177/20539517221112901","DOIUrl":"https://doi.org/10.1177/20539517221112901","url":null,"abstract":"Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus—solutions, critical data scholars, and scientists can consider. A resolutely transnational feminist approach on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42126394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning accountable governance: Challenges and perspectives for data-intensive health research networks
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221136078
Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel
Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.
{"title":"Learning accountable governance: Challenges and perspectives for data-intensive health research networks","authors":"Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel","doi":"10.1177/20539517221136078","DOIUrl":"https://doi.org/10.1177/20539517221136078","url":null,"abstract":"Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46307312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111361
C. Borch, Bo Hee Min
Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
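For readers unfamiliar with what explainability work can look like in practice, the sketch below shows one generic technique, gradient-based saliency, applied to a toy price-prediction network. It is an illustrative assumption about the genre of methods involved, not the trading firm's actual system or the authors' method.

```python
# Minimal sketch: gradient-based saliency on a toy price-prediction network.
# A generic illustration of explainability, not the firm's deep trading system.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy model: 10 hypothetical market features -> one predicted price move
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))

features = torch.randn(1, 10, requires_grad=True)  # one synthetic observation
prediction = model(features)
prediction.backward()  # propagate gradients back to the inputs

# Gradient magnitude per input is a crude "explanation": how sensitive
# the prediction is to each feature for this particular observation.
saliency = features.grad.abs().squeeze()
for i, score in enumerate(saliency.tolist()):
    print(f"feature {i}: sensitivity {score:.4f}")
```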
{"title":"Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading","authors":"C. Borch, Bo Hee Min","doi":"10.1177/20539517221111361","DOIUrl":"https://doi.org/10.1177/20539517221111361","url":null,"abstract":"Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48201896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social data governance: Towards a definition and model
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111352
Jun Liu
With the surge in data and datafied governance initiatives, arrangements, and practices across the globe, understanding the various types of such initiatives and arrangements, and their structural causes, has become a daunting task for scholars, policy makers, and the public. This complexity also makes it substantially more difficult to treat different data(fied) governance regimes as commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective, one that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate this macro, societal dimension of data governance, the study suggests the term “social data governance” to bring forward the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories from political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model, a two-dimensional continuum with state intervention versus societal autonomy on one dimension and national cultures on the other, accounts for variations in social data governance across societies and offers a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. Finally, we conduct an extreme case study of the governance of digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model.
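One way to see what a two-dimensional continuum buys analytically is a minimal sketch that scores societies on both dimensions and compares their coordinates. The societies, scores, and comparison rule below are hypothetical illustrations, not the paper's model specification.

```python
# Minimal sketch: placing societies on the two dimensions the model proposes.
# Names and scores are hypothetical placeholders, not the paper's cases.
from dataclasses import dataclass

@dataclass
class SocietyProfile:
    name: str
    state_intervention: float  # 0 = full societal autonomy, 1 = full state intervention
    culture: float             # placeholder scale for a national-culture dimension

profiles = [
    SocietyProfile("Society A", state_intervention=0.8, culture=0.7),
    SocietyProfile("Society B", state_intervention=0.3, culture=0.4),
]

def compare(a: SocietyProfile, b: SocietyProfile) -> str:
    # Coordinates make otherwise disparate governance regimes commensurable
    delta = a.state_intervention - b.state_intervention
    lean = "more state-led" if delta > 0 else "more society-led"
    return f"{a.name} is {lean} than {b.name} (delta = {delta:+.2f})"

print(compare(*profiles))
```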
{"title":"Social data governance: Towards a definition and model","authors":"Jun Liu","doi":"10.1177/20539517221111352","DOIUrl":"https://doi.org/10.1177/20539517221111352","url":null,"abstract":"With the surge in the number of data and datafied governance initiatives, arrangements, and practices across the globe, understanding various types of such initiatives, arrangements, and their structural causes has become a daunting task for scholars, policy makers, and the public. This complexity additionally generates substantial difficulties in considering different data(fied) governances commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate the macro, societal dimension of data governance, this study then suggests the term “social data governance” to bring forth the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories of political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model, consisting of a two-dimensional continuum, state intervention and societal autonomy for the one, and national cultures for the other, accounts for variations in social data governance across societies as a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. Finally, we conduct an extreme case study of governing digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model of social data governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47421750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Privacy at risk? Understanding the perceived privacy protection of health code apps in China
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135132
Gejun Huang, A. Hu, Wenhong Chen
As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users through the lens of contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find that users’ perceived convenience, attention to privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection when using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. They also foreground the heuristic value of contextual integrity theory for examining controversial digital surveillance in non-Western contexts. Put together, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.
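As an illustration of the kind of analysis that such survey findings typically rest on, here is a minimal regression sketch with simulated data. The predictor names mirror the abstract, but the values, coefficients, and model are assumptions for demonstration, not the authors' Wuhan/Hangzhou data or their statistical procedure.

```python
# Minimal sketch: regressing perceived privacy protection on survey predictors.
# The data are simulated; only the sample size matches the reported N = 1551.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1551

# Four standardized predictors: convenience, policy attention, trust, acceptance
X = rng.normal(size=(n, 4))
beta = np.array([0.4, 0.2, 0.5, 0.3])  # assumed effect sizes, illustrative only
y = X @ beta + rng.normal(scale=1.0, size=n)  # perceived privacy protection

X_design = sm.add_constant(X)
result = sm.OLS(y, X_design).fit()
print(result.summary(xname=["const", "convenience", "policy_attention",
                            "trust_in_govt", "acceptance"]))
```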
{"title":"Privacy at risk? Understanding the perceived privacy protection of health code apps in China","authors":"Gejun Huang, A. Hu, Wenhong Chen","doi":"10.1177/20539517221135132","DOIUrl":"https://doi.org/10.1177/20539517221135132","url":null,"abstract":"As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve the pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users via the lens of the contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find users’ perceived convenience, attention towards privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection in using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. Also, the findings foreground the heuristic value of contextual integrity theory to examine controversial digital surveillance in non-Western contexts. Put tougher, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43299140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital phenotyping – Editorial
Pub Date: 2022-07-01 | DOI: 10.1177/20539517221113775
Lukas Engelmann, G. Wackers
There is an astonishing posthuman promise in digital phenotyping, as Beth Semel recently argued (Semel, 2022). The goal of digital phenotyping enthusiasts is no less than to bypass the human observer as a deeply flawed threshold of medical knowledge production. The second goal is then – ultimately – to rid the human body and mind of its frailty and to utilise technology for a ‘world without disease’ (Topol and Corr, 2019). This promissory rhetoric is not only geared towards the disruption of dated medical conventions but comes equipped with bold, revolutionary concepts: objective knowledge, based on aggregated, automated, and sweeping data collection, delivering granular, minute, and personalised healthcare. Digital phenotyping is a collection of ideas, technologies, and practices intended to realise a powerful and futuristic vision of a medicine far beyond human capacities. This posthuman promise might be naive and driven by an abundant positivism, but as a small movement, made up of medical researchers and digital disruptors alike, it has continuously gathered steam over the last decade. The purpose of this collection is foremost to take stock and to collect a range of critical questions for a first revision of what digital phenotyping might be and what it could potentially become.

The meaning of digital phenotyping is not as well defined as the many publications in this growing body of scholarship might suggest. Some of that vagueness has been captured in the critical literature. Birk and Samuel, in their sociological analysis, have recently described the term in more general terms as an analytical concept that presumes simply that diseases and illness are by and large ‘measurable by digital devices’ (Birk and Samuel, 2020). This assumes that a person’s experience of any kind of suffering is always in one way or another expressed in the digital traces of their behaviour. The leg injury that might result in a different mobility pattern; measurable tremors in the thumb control of smartphones as a sign of Parkinson’s; a sudden lack of social interaction as a sign of depression: digital phenotypes can in theory be defined for any illness and disease and captured by any of the sensors, devices, and technologies through which humans leave digital traces. Loi, in his ethical and philosophical exploration of the digital phenotype, assumes it in more general terms to be ‘an assemblage of information in digital form, that humans produce intentionally or as a by-product of other activities, and which affects human behaviour’ (Loi, 2018). Many questions remain, not least why and how this concept seeks association with genetic terminology. What does the wholesale capturing of a human’s digital traces as phenotype imply? What does it mean to group a sheer endless range of symptoms within the paradigm of inheritable traits, and how does this framing structure research on and with digital phenotypes?

The phrase itself was coined by the Harvard physician Sachin Jain and colleagues in a 2015 letter to Nature Biotechnology. Conceptually, they conceived of the digital phenotype with reference to Richard Dawkins’s elaboration of the ‘extended phenotype’ (Jain et al., 2015; Dawkins, 1982). They saw digital technologies not only as providing unprecedented quantities of potentially valuable data for diagnosis and prognosis but, importantly, as generating those data beyond the brief and cursory encounters between patients and doctors. Comprehensive use of such data promised new insight into the expression of disease over a lifetime. This would be not merely an extension of monitoring but the opening of a new paradigm of medical knowledge production: rather than just recording symptoms in medical consultations, the digital phenotype redefines disease expression in terms of the lived experience of individuals, expanding the ability to categorize and understand disease (Jain et al., 2015). In a 2017 article in JAMA, the American neuroscientist Thomas R. Insel conceptualized digital phenotyping as a ‘new science of behavior’ (Insel, 2017). Since then, the phrase has circulated.
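As a concrete illustration of the editorial's point that digital phenotypes can in theory be defined for any illness from ordinary digital traces, here is a minimal sketch that derives a behavioural signal from a hypothetical smartphone event log and flags a sudden drop in social interaction. The log, baseline window, and threshold are illustrative assumptions, not a validated clinical marker.

```python
# Minimal sketch: a toy "digital phenotype" built from a hypothetical log of
# outgoing messages/calls. All data and thresholds are illustrative only.
from collections import Counter
from datetime import date

# Hypothetical event log: one entry per outgoing message or call
events = [date(2022, 7, d) for d in (1, 1, 1, 2, 2, 3, 3, 3, 4, 5)]

daily_counts = Counter(events)
days = sorted(daily_counts)
baseline = sum(daily_counts[d] for d in days[:3]) / 3  # early-window average

# Flag days where interaction falls well below the personal baseline,
# the kind of trace sometimes read as a sign of withdrawal
for d in days[3:]:
    if daily_counts[d] < 0.5 * baseline:  # arbitrary 50% drop threshold
        print(f"{d}: social interaction below baseline "
              f"({daily_counts[d]} vs {baseline:.1f})")
```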
{"title":"Digital phenotyping – Editorial","authors":"Lukas Engelmann, G. Wackers","doi":"10.1177/20539517221113775","DOIUrl":"https://doi.org/10.1177/20539517221113775","url":null,"abstract":"There is an astonishing posthuman promise in digital phenotyping, as Beth Semel recently argued (Semel, 2022). The goal of digital phenotyping enthusiasts is no less than to bypass the human observer as a deeply flawed threshold of medical knowledge production. The second goal is then – ultimately – to rid the human body and mind of its frailty and to utilise technology for a ‘world without disease’ (Topol and Corr, 2019). This promissory rhetoric is not only geared towards the disruption of dated medical conventions but comes equipped with bold, revolutionary concepts. Objective knowledge, based on aggregated, automated, and sweeping data collection to deliver granular, minute, and personalised healthcare; digital phenotyping is a collection of ideas, technologies, and practices to realise a powerful and futuristic vision of a medicine far beyond human capacities. This posthuman promise might be naive and driven by an abundant positivism, but as a small movement, made up of medical researchers and digital disruptors alike, it has continuously gathered steam over the last decade. The purpose of this collection is foremost to take stock and to collect a range of critical questions for a first revision of what digital phenotyping might be and what it could potentially become. The meaning of digital phenotyping is not as well defined as the many publications in this growing body of scholarship might suggest. Some of that vagueness has been captured in the critical literature. Birk and Samuel, in their sociological analysis, have described the term recently in more general terms as an analytical concept that presumes simply that diseases and illness are by and large ‘measurable by digital devices’ (Birk and Samuel, 2020). This assumes that a person’s experience of any kind of suffering is always in one way or another expressed in the digital traces of their behaviour. The leg injury that might result in a different mobility pattern; measurable tremors in the thumb control of smartphones as a sign of Parkinson’s; sudden lack of social interaction as a sign of depression: digital phenotypes can in theory be defined for any illness and disease and captured by any of the sensors, devices, and technologies, through which humans leave digital traces. Loi, in his ethical and philosophical exploration of the digital phenotype, assumes it in more general terms to be ‘an assemblage of information in digital form, that humans produce intentionally or as a by-product of other activities, and which affects human behaviour’ (Loi, 2018). Many questions remain, not least why and how this concept seeks association with genetic terminology. What does the wholesale capturing of a human’s digital traces as phenotype imply? What does it mean to group a sheer endless range of symptoms within the paradigm of inheritable traits and how does this framing structure research on and with digital phenotypes? 
The phrase itself was coined by the physician Sachin Jain and colleague","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":" ","pages":""},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41616997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}