
Morals & Machines: Latest Publications

Right to vote for robots?
DOI: 10.5771/2747-5174-2022-1-44
Gunther Pallaver, T. Hug
In 2017, the Kingdom of Saudi Arabia granted citizenship to the robot Sophia, a social humanoid robot equipped with humanoid artificial intelligence (AI). Our thesis is that, if Saudi Arabia's move is pursued consistently, it is impossible to avoid granting such new citizens the right to vote, especially since liberal pluralist democracies are based on the principle of equality and thus on the participation of their citizens. This in turn raises a number of legal-policy questions, which are also being discussed by the European Parliament and which are comparable in scope to those of the period from the 19th century onwards, when Europe fought for universal and equal suffrage. This contribution discusses a series of these questions and offers initial answers.
Citations: 0
Please Explain: Key Questions for Explainable AI Research from an Organizational Perspective
DOI: 10.5771/2747-5174-2021-2-10
Ella Hafermalz, M. Huysman
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay, we point out that technical and ethical approaches to Explainable AI (XAI) rest on different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response, we formulate key questions for Explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across the disciplines working on Explainable AI.
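To make the contrast concrete, the sketch below shows one common 'technical' notion of an explanation: a post-hoc, global feature-importance score. It is an illustration only, not a method from the paper; the dataset, model, and choice of scikit-learn's permutation_importance are assumptions made for brevity.

    # Illustrative only: a post-hoc "explanation" as a global feature-importance ranking.
    # Dataset, model, and metric are assumptions, not taken from the essay above.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy;
    # a larger drop means the model leans more heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

    ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: -p[1])
    for name, score in ranked[:5]:
        print(f"{name}: {score:.3f}")

Notably, such a ranking says nothing about the organizational questions the essay raises: who the 'user' of these scores is, what 'purpose' they serve, and where the explanation 'resides'.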
Citations: 1
AI Errors in Health?
DOI: 10.5771/2747-5174-2022-1-34
Veronica Barassi, Rahi Patra
The ever-greater use of AI-driven technologies in the health sector raises moral questions about what it means for algorithms to misunderstand and mismeasure human health, and about how we, as a society, understand AI errors in health. This article argues that AI errors in health confront us with the problem that our AI technologies do not grasp the full pluriverse of human experience and rely on data and measures with a long history of scientific bias. However, as we show in this paper, contemporary public debate on the issue is very limited. Drawing on a discourse analysis of 520 European news media articles reporting on AI errors, the article argues that the 'media frame' on AI errors in health is often defined by a techno-solutionist perspective and only rarely sheds light on the relationship between AI technologies and scientific bias. Yet public awareness of the issue is of central importance, because it shows us that rather than 'fixing' or 'finding solutions' for AI errors, we need to learn how to coexist with the fact that technologies, because they are human-made, will always be inevitably biased.
Citations: 0
Deskilling, Upskilling, and Reskilling: A Case for Hybrid Intelligence
DOI: 10.5771/2747-5174-2021-2-24
J. Rafner, Dominik Dellermann, A. Hjorth, Dóra Verasztó, C. Kampf, Wendy Mackay, J. Sherson
Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, they can also, as an unintended consequence, deskill workers, who lose valuable skills that would otherwise be maintained as part of their daily work. Such deskilling has a wide range of negative effects on multiple stakeholders: employees, organizations, and society at large. This essay discusses deskilling in the age of AI on three levels: individual, organizational, and societal. Deskilling is further analyzed through the lens of four different levels of human-AI configuration, and we argue that one of them, Hybrid Intelligence, could be particularly suitable for managing the risk of deskilling human experts. Hybrid Intelligence system design and implementation can explicitly take such risks into account and instead foster the upskilling of workers. Hybrid Intelligence may thus, in the long run, lower costs, improve performance and job satisfaction, and prevent management from creating unintended organization-wide deskilling.
Citations: 8
We are what we create.
DOI: 10.5771/2747-5174-2022-1-22
Claudia Gerstl
Since ancient times, people have dreamt of becoming creators, and today they are approaching this dream in small but impressive steps. In fiction, the goal of creating an artificial human has already been achieved. Moreover, as a mirror of society, the medium of film offers an opportunity to address current questions and problems of humanity. This paper asks how artificial agents are depicted across more than 30 years of film history, from 1990 to 2021, and explores roboethics through the lens of movies, focusing on social humanoid robots.
Citations: 0
Some Black Boxes are made of Flesh; Others are made of Silicon
DOI: 10.5771/2747-5174-2022-2-64
Tobias Mast
Automated decisions, especially those that operate using AI, face accusations of being opaque black boxes. This accusation should be taken seriously, but it is put into perspective when contrasted with the criticism leveled at human-generated reasoning for decisions in the judiciary and administration. This article explains, from the perspective of legal scholarship, the functions that decision rationales fulfill, and examines the qualities and shortcomings of human and machine approaches.
Citations: 0
Collectively Reimagining Technology
DOI: 10.5771/2747-5174-2021-2-86
K. Andersen, M. Overgaard, Mirabelle Jones, I. Shklovski
This article explores the use of participatory art and technology workshops as an approach to creating more diverse and inclusive modes of engagement in the design of digital technologies. Taking diverse works of science fiction as a starting point, we draw on the concept of critical making and on Ursula Le Guin's disdain for the distinction between hard and soft technology to discuss the role of collaborative reimagining in the creation of technological futures. Such an approach facilitates a nuanced and reflective understanding of how technologies come to be designed and can empower more open and diverse participation in technology development.
Citations: 0
Recommender Systems in the EU: from Responsibility to Regulation
DOI: 10.5771/2747-5174-2021-2-60
Sebastian Felix Schwemer
Over recent years, the EU has increasingly looked at regulating various forms of automation and the use of algorithms. For recommender systems specifically, two recent legislative proposals by the European Commission are of interest: the Digital Services Act of December 2020 and the Artificial Intelligence Act of April 2021. This article analyses these proposals with a view to identifying the regulatory trajectory. Whereas the two instruments differ in scope, it argues that both may, directly and indirectly, regulate various aspects of recommender systems and thereby influence the debate on how to ensure responsible, rather than opaque, machines that recommend information to humans.
Citations: 4
Legal Theory and the Problem of Machine Opacity: Spinoza on Intentionality, Prediction and Law
DOI: 10.5771/2747-5174-2021-2-50
M. Dahlbeck
In this paper, I approach the problem of machine opacity in law by understanding it as a problem that revolves around the underlying philosophical tension between description and prescription in law and legal theory. I use the problem of machine opacity, and its effects on the lawmaker's activity, as a practical backdrop for discussing the associations that legal theory upholds between law's normative ideals and its preferred, normatively neutral, method for achieving them. My discussion of this problem provides a preliminary answer to the question of whether it is machine opacity, by introducing into the legal sphere an unfamiliar kind of intentionality that disturbs the predictability of law, that renders the contemporary lawmaker's job difficult, or whether this difficulty in fact comes with the lawmaker's job description. I turn to the early rationalist Benedict Spinoza (1632-1677), and particularly his explanation of law in the Theological Political Treatise (TTP), for analytical assistance in my discussion.
Citations: 0
Critical Dataset and Machine Learning Art.
DOI: 10.5771/2747-5174-2022-2-22
Hanna L. Grønneberg, Ana Alacovska
The paper proposes that artistic practices engaging with the datasets on which machine learning (ML) systems are trained can provide caring or curative resistance to digital toxicity and furnish models for imagining more equitable digital futures. The paper critiques harmful practices of data mining and data extraction in ML system development by focusing on artworks that engage pharmacologically, that is, critically and restoratively, with these technologies. The three works analyzed are ImageNet Roulette (2019) by Kate Crawford and Trevor Paglen, This Person does Exist (2020) by Mathias Schäfer, and Feminist Dataset (2017-ongoing) by Caroline Sinders. These works, we argue, use defamiliarization as a critical practice when engaging with datasets and ML, providing vital counter-narratives and curative strategies, yet in some cases also deepening and exacerbating the very technological toxicity they set out to remedy.
Citations: 0