
Big Data & Society: Latest Publications

Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-09-20 | DOI: 10.1177/20539517231177620
D. Widder, D. Nafus
Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.
{"title":"Dislocated accountabilities in the “AI supply chain”: Modularity and developers’ notions of responsibility","authors":"D. Widder, D. Nafus","doi":"10.1177/20539517231177620","DOIUrl":"https://doi.org/10.1177/20539517231177620","url":null,"abstract":"Responsible artificial intelligence guidelines ask engineers to consider how their systems might harm. However, contemporary artificial intelligence systems are built by composing many preexisting software modules that pass through many hands before becoming a finished product or service. How does this shape responsible artificial intelligence practice? In interviews with 27 artificial intelligence engineers across industry, open source, and academia, our participants often did not see the questions posed in responsible artificial intelligence guidelines to be within their agency, capability, or responsibility to address. We use Suchman's “located accountability” to show how responsible artificial intelligence labor is currently organized and to explore how it could be done differently. We identify cross-cutting social logics, like modularizability, scale, reputation, and customer orientation, that organize which responsible artificial intelligence actions do take place and which are relegated to low status staff or believed to be the work of the next or previous person in the imagined “supply chain.” We argue that current responsible artificial intelligence interventions, like ethics checklists and guidelines that assume panoptical knowledge and control over systems, could be improved by taking a located accountability approach, recognizing where relations and obligations might intertwine inside and outside of this supply chain.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-09-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44303049","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 14
The effectiveness of embedded values analysis modules in Computer Science education: An empirical study
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-08-10 | DOI: 10.1177/20539517231176230
Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells
Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.
{"title":"The effectiveness of embedded values analysis modules in Computer Science education: An empirical study","authors":"Matthew Kopec, Meica Magnani, Vance Ricks, R. Torosyan, John Basl, Nicholas Miklaucic, Felix Muzny, R. Sandler, Christopher D. Wilson, Adam Wisniewski-Jensen, Cora Lundgren, Ryan Baylon, Kevin Mills, Marcy Wells","doi":"10.1177/20539517231176230","DOIUrl":"https://doi.org/10.1177/20539517231176230","url":null,"abstract":"Embedding ethics modules within computer science courses has become a popular response to the growing recognition that computer science programs need to better equip their students to navigate the ethical dimensions of computing technologies such as artificial intelligence, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern University's program that embeds values analysis modules into computer science courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they will likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43917079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221112901
Z. Tacheva
Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus solutions, that critical data scholars and scientists can consider. A resolutely transnational feminist approach, on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.
{"title":"Taking a critical look at the critical turn in data science: From “data feminism” to transnational feminist data science","authors":"Z. Tacheva","doi":"10.1177/20539517221112901","DOIUrl":"https://doi.org/10.1177/20539517221112901","url":null,"abstract":"Through a critical analysis of recent developments in the theory and practice of data science, including nascent feminist approaches to data collection and analysis, this commentary aims to signal the need for a transnational feminist orientation towards data science. I argue that while much needed in the context of persistent algorithmic oppression, a Western feminist lens limits the scope of problems, and thus—solutions, critical data scholars, and scientists can consider. A resolutely transnational feminist approach on the other hand, can provide data theorists and practitioners with the hermeneutic tools necessary to identify and disrupt instances of injustice in a more inclusive and comprehensive manner. A transnational feminist orientation to data science can pay particular attention to the communities rendered most vulnerable by algorithmic oppression, such as women of color and populations in non-Western countries. I present five ways in which transnational feminism can be leveraged as an intervention into the current data science canon.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42126394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221124586
Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret
In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.
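To give a concrete sense of the quantitative half of this methodology, the sketch below assembles a toy contract-award table and reads it as an agency-to-contractor network. It is a minimal illustration under stated assumptions: the rows, contractor names, and amounts are invented placeholders rather than the authors' eu-LISA and Frontex dataset, and pandas and networkx are assumed tooling, not the tools reported in the article.

```python
# Minimal, hypothetical sketch of a contract-award network analysis.
# The records below are placeholders, not the dataset built in the article.
import pandas as pd
import networkx as nx

contracts = pd.DataFrame([
    {"agency": "eu-LISA", "contractor": "Vendor A", "value_eur": 100_000_000, "year": 2020},
    {"agency": "eu-LISA", "contractor": "Vendor B", "value_eur": 90_000_000, "year": 2020},
    {"agency": "Frontex", "contractor": "Vendor A", "value_eur": 25_000_000, "year": 2019},
])

# Directed bipartite graph: agencies award contracts to contractors; edge
# weights accumulate the awarded amounts across tenders.
G = nx.DiGraph()
for row in contracts.itertuples():
    if G.has_edge(row.agency, row.contractor):
        G[row.agency][row.contractor]["value_eur"] += row.value_eur
    else:
        G.add_edge(row.agency, row.contractor, value_eur=row.value_eur)

# Descriptive readings: money flows per tie, and contractors linked to
# more than one agency (a simple proxy for cross-agency dependencies).
for agency, contractor, data in G.edges(data=True):
    print(f"{agency} -> {contractor}: {data['value_eur']:,} EUR")

multi_agency = [node for node in G.nodes if G.in_degree(node) > 1]
print("Contractors working for multiple agencies:", multi_agency)
```

In the article this kind of digital analysis is deliberately paired with close reading of the tender documents themselves; a sketch like this stands in only for the network-analysis and visualisation side of that pairing.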
{"title":"Neither opaque nor transparent: A transdisciplinary methodology to investigate datafication at the EU borders","authors":"Ana Valdivia, Claudia Aradau, Tobias Blanke, S. Perret","doi":"10.1177/20539517221124586","DOIUrl":"https://doi.org/10.1177/20539517221124586","url":null,"abstract":"In 2020, the European Union announced the award of the contract for the biometric part of the new database for border control, the Entry Exit System, to two companies: IDEMIA and Sopra Steria. Both companies had been previously involved in the development of databases for border and migration management. While there has been a growing amount of publicly available documents that show what kind of technologies are being implemented, for how much money, and by whom, there has been limited engagement with digital methods in this field. Moreover, critical border and security scholarship has largely focused on qualitative and ethnographic methods. Building on a data feminist approach, we propose a transdisciplinary methodology that goes beyond binaries of qualitative/quantitative and opacity/transparency, examines power asymmetries and makes the labour of coding visible. Empirically, we build and analyse a dataset of the contracts awarded by two European Union agencies key to its border management policies – the European Agency for Large-Scale Information Systems (eu-LISA) and the European Border and Coast Guard Agency (Frontex). We supplement the digital analysis and visualisation of networks of companies with close reading of tender documents. In so doing, we show how a transdisciplinary methodology can be a device for making datafication ‘intelligible’ at the European Union borders.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41974968","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Towards a political economy of technical systems: The case of Google
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135162
Bernhard Rieder
This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as an example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the notion of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.
{"title":"Towards a political economy of technical systems: The case of Google","authors":"Bernhard Rieder","doi":"10.1177/20539517221135162","DOIUrl":"https://doi.org/10.1177/20539517221135162","url":null,"abstract":"This research commentary proposes a conceptual framework for studying big tech companies as “technical systems” that organize much of their operation around the mastery and operationalization of key technologies that facilitate and drive their continuous expansion. Drawing on the study of Large Technical Systems (LTS), on the work of historian Bertrand Gille, and on the economics of General Purpose Technologies (GPTs), it outlines a way to study the “tech” in “big tech” more attentively, looking for compatibilities, synergies, and dependencies between the technologies created and deployed by these companies. Using Google as example, the paper shows how to interrogate software and hardware through the lens of transversal applicability, discusses software and hardware integration, and proposes the notion of “data amalgams” to contextualize and complicate the notion of data. The goal is to complement existing vectors of “big tech” critique with a perspective sensitive to the specific materialities of specific technologies and their possible consequences.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47079658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Learning accountable governance: Challenges and perspectives for data-intensive health research networks
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221136078
Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel
Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.
{"title":"Learning accountable governance: Challenges and perspectives for data-intensive health research networks","authors":"Sam H A Muller, M. Mostert, J. V. van Delden, Thomas Schillemans, G. V. van Thiel","doi":"10.1177/20539517221136078","DOIUrl":"https://doi.org/10.1177/20539517221136078","url":null,"abstract":"Current challenges to sustaining public support for health data research have directed attention to the governance of data-intensive health research networks. Accountability is hailed as an important element of trustworthy governance frameworks for data-intensive health research networks. Yet the extent to which adequate accountability regimes in data-intensive health research networks are currently realized is questionable. Current governance of data-intensive health research networks is dominated by the limitations of a drawing board approach. As a way forward, we propose a stronger focus on accountability as learning to achieve accountable governance. As an important step in that direction, we provide two pathways: (1) developing an integrated structure for decision-making and (2) establishing a dialogue in ongoing deliberative processes. Suitable places for learning accountability to thrive are dedicated governing bodies as well as specialized committees, panels or boards which bear and guide the development of governance in data-intensive health research networks. A continuous accountability process which comprises learning and interaction accommodates the diversity of expectations, responsibilities and tasks in data-intensive health research networks to achieve responsible and effective governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46307312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111361
C. Borch, Bo Hee Min
Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.
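For readers unfamiliar with what “explaining” a deep neural network prediction can look like in practice, the sketch below shows one generic technique, input-gradient saliency, on a toy model. It is not the trading firm's system or the method discussed in the article; the architecture, feature count, and data are assumptions made purely for illustration.

```python
# Generic illustration of input-gradient saliency for a toy model.
# Nothing here reproduces the trading system studied in the article.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy "signal" model: 8 hypothetical market features -> 1 prediction.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))

features = torch.randn(1, 8, requires_grad=True)  # one hypothetical observation
prediction = model(features).sum()
prediction.backward()  # d(prediction)/d(feature_i) for each input feature

# Gradient magnitudes are read as a rough, local account of which inputs
# mattered most for this particular prediction.
saliency = features.grad.abs().squeeze()
for i, score in enumerate(saliency.tolist()):
    print(f"feature_{i}: {score:.4f}")
```

Attributions of this kind are exactly the sort of partial, human-facing account whose production and interpretation the article examines sociologically.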
{"title":"Toward a sociology of machine learning explainability: Human–machine interaction in deep neural network-based automated trading","authors":"C. Borch, Bo Hee Min","doi":"10.1177/20539517221111361","DOIUrl":"https://doi.org/10.1177/20539517221111361","url":null,"abstract":"Machine learning systems are making considerable inroads in society owing to their ability to recognize and predict patterns. However, the decision-making logic of some widely used machine learning models, such as deep neural networks, is characterized by opacity, thereby rendering them exceedingly difficult for humans to understand and explain and, as a result, potentially risky to use. Considering the importance of addressing this opacity, this paper calls for research that studies empirically and theoretically how machine learning experts and users seek to attain machine learning explainability. Focusing on automated trading, we take steps in this direction by analyzing a trading firm’s quest for explaining its deep neural network system’s actionable predictions. We demonstrate that this explainability effort involves a particular form of human–machine interaction that contains both anthropomorphic and technomorphic elements. We discuss this attempt to attain machine learning explainability in light of reflections on cross-species companionship and consider it an example of human–machine companionship.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48201896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Social data governance: Towards a definition and model
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221111352
Jun Liu
With the surge in the number of data and datafied governance initiatives, arrangements, and practices across the globe, understanding various types of such initiatives, arrangements, and their structural causes has become a daunting task for scholars, policy makers, and the public. This complexity additionally generates substantial difficulties in considering different data(fied) governances commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate the macro, societal dimension of data governance, this study then suggests the term “social data governance” to bring forth the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories of political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model, consisting of a two-dimensional continuum, state intervention and societal autonomy for the one, and national cultures for the other, accounts for variations in social data governance across societies as a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. Finally, we conduct an extreme case study of governing digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model of social data governance.
{"title":"Social data governance: Towards a definition and model","authors":"Jun Liu","doi":"10.1177/20539517221111352","DOIUrl":"https://doi.org/10.1177/20539517221111352","url":null,"abstract":"With the surge in the number of data and datafied governance initiatives, arrangements, and practices across the globe, understanding various types of such initiatives, arrangements, and their structural causes has become a daunting task for scholars, policy makers, and the public. This complexity additionally generates substantial difficulties in considering different data(fied) governances commensurable with each other. To advance the discussion, this study argues that existing scholarship is inclined to embrace an organization-centric perspective that primarily concerns factors and dynamics regarding data and datafication at the organizational level at the expense of macro-level social, political, and cultural factors of both data and governance. To explicate the macro, societal dimension of data governance, this study then suggests the term “social data governance” to bring forth the consideration that data governance not only reflects the society from which it emerges but also (re)produces the policies and practices of the society in question. Drawing on theories of political science and public management, a model of social data governance is proposed to elucidate the ideological and conceptual groundings of various modes of governance from a comparative perspective. This preliminary model, consisting of a two-dimensional continuum, state intervention and societal autonomy for the one, and national cultures for the other, accounts for variations in social data governance across societies as a complementary way of conceptualizing and categorizing data governance beyond the European standpoint. Finally, we conduct an extreme case study of governing digital contact-tracing techniques during the pandemic to exemplify the explanatory power of the proposed model of social data governance.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47421750","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 9
Algorithmic empowerment: A comparative ethnography of two open-source algorithmic platforms – Decide Madrid and vTaiwan
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221123505
Yu-Shan Tseng
Scholars of critical algorithmic studies, including those from geography, anthropology, Science and Technology Studies and communication studies, have begun to consider how algorithmic devices and platforms facilitate democratic practices. In this article, I draw on a comparative ethnography of two alternative open-source algorithmic platforms – Decide Madrid and vTaiwan – to consider how they are dynamically constituted by differing algorithmic–human relationships. I compare how different algorithmic–human relationships empower citizens to influence political decision-making through proposing, commenting, and voting on the urban issues that should receive political resources in Taipei and Madrid. I argue that algorithmic empowerment is an emerging process in which algorithmic–human relationships orient away from limitations and towards conditions of plurality, actionality, and power decentralisation. This argument frames algorithmic empowerment as bringing about empowering conditions that allow (underrepresented) individuals to shape policy-making and consider plural perspectives for political change and action, not as an outcome-driven, binary assessment (i.e. yes/no). This article contributes a novel, situated, and comparative conceptualisation of algorithmic empowerment that moves beyond technological determinism and universalism.
{"title":"Algorithmic empowerment: A comparative ethnography of two open-source algorithmic platforms – Decide Madrid and vTaiwan","authors":"Yu-Shan Tseng","doi":"10.1177/20539517221123505","DOIUrl":"https://doi.org/10.1177/20539517221123505","url":null,"abstract":"Scholars of critical algorithmic studies, including those from geography, anthropology, Science and Technology Studies and communication studies, have begun to consider how algorithmic devices and platforms facilitate democratic practices. In this article, I draw on a comparative ethnography of two alternative open-source algorithmic platforms – Decide Madrid and vTaiwan – to consider how they are dynamically constituted by differing algorithmic–human relationships. I compare how different algorithmic–human relationships empower citizens to influence political decision-making through proposing, commenting, and voting on the urban issues that should receive political resources in Taipei and Madrid. I argue that algorithmic empowerment is an emerging process in which algorithmic–human relationships orient away from limitations and towards conditions of plurality, actionality, and power decentralisation. This argument frames algorithmic empowerment as bringing about empowering conditions that allow (underrepresented) individuals to shape policy-making and consider plural perspectives for political change and action, not as an outcome-driven, binary assessment (i.e. yes/no). This article contributes a novel, situated, and comparative conceptualisation of algorithmic empowerment that moves beyond technological determinism and universalism.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43211524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Privacy at risk? Understanding the perceived privacy protection of health code apps in China
IF 8.5 | Tier 1, Sociology | Q1 Social Sciences | Pub Date: 2022-07-01 | DOI: 10.1177/20539517221135132
Gejun Huang, A. Hu, Wenhong Chen
As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve the pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users via the lens of the contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find users’ perceived convenience, attention towards privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection in using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. Also, the findings foreground the heuristic value of contextual integrity theory to examine controversial digital surveillance in non-Western contexts. Put together, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.
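In form, the reported analysis relates perceived privacy protection to a set of attitudinal predictors. The sketch below shows one conventional way such a relationship could be modelled, ordinary least squares on simulated Likert-style responses; the variable names, simulated coefficients, and data are placeholders, not the authors' Wuhan and Hangzhou sample or their actual model, and statsmodels is an assumed tool.

```python
# Illustrative sketch only: a survey-style regression on simulated data,
# loosely shaped after the predictors the abstract names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1551  # mirrors the reported sample size; all responses here are simulated

df = pd.DataFrame({
    "perceived_convenience": rng.integers(1, 6, n),
    "privacy_policy_attention": rng.integers(1, 6, n),
    "trust_in_government": rng.integers(1, 6, n),
    "acceptance_of_purposes": rng.integers(1, 6, n),
    "protection_behaviors": rng.integers(1, 6, n),
})

# Simulated outcome loosely tied to the named predictors; protection
# behaviors are deliberately given no weight in the simulation.
df["perceived_privacy_protection"] = (
    0.3 * df["perceived_convenience"]
    + 0.2 * df["privacy_policy_attention"]
    + 0.3 * df["trust_in_government"]
    + 0.2 * df["acceptance_of_purposes"]
    + rng.normal(0, 1, n)
)

model = smf.ols(
    "perceived_privacy_protection ~ perceived_convenience"
    " + privacy_policy_attention + trust_in_government"
    " + acceptance_of_purposes + protection_behaviors",
    data=df,
).fit()
print(model.summary())
```

Because the simulated outcome gives protection behaviors no weight, its fitted coefficient should come out near zero, loosely echoing the limited influence the abstract reports for that variable.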
{"title":"Privacy at risk? Understanding the perceived privacy protection of health code apps in China","authors":"Gejun Huang, A. Hu, Wenhong Chen","doi":"10.1177/20539517221135132","DOIUrl":"https://doi.org/10.1177/20539517221135132","url":null,"abstract":"As a key constituent of China's approach to fighting COVID-19, Health Code apps (HCAs) not only serve the pandemic control imperatives but also exercise the agency of digital surveillance. As such, HCAs pave a new avenue for ongoing discussions on contact tracing solutions and privacy amid the global pandemic. This article attends to the perceived privacy protection among HCA users via the lens of the contextual integrity theory. Drawing on an online survey of adult HCA users in Wuhan and Hangzhou (N = 1551), we find users’ perceived convenience, attention towards privacy policy, trust in government, and acceptance of government purposes regarding HCA data management are significant contributors to users’ perceived privacy protection in using the apps. By contrast, users’ frequency of mobile privacy protection behaviors has limited influence, and their degrees of perceived protection do not vary by sociodemographic status. These findings shed new light on China's distinctive approach to pandemic control with respect to the state's expansion of big data-driven surveillance capacity. Also, the findings foreground the heuristic value of contextual integrity theory to examine controversial digital surveillance in non-Western contexts. Put tougher, our findings contribute to the thriving scholarly conversations around digital privacy and surveillance in China, as well as contact tracing solutions and privacy amid the global pandemic.","PeriodicalId":47834,"journal":{"name":"Big Data & Society","volume":null,"pages":null},"PeriodicalIF":8.5,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43299140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"社会学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3