Right to vote for robots?
DOI: 10.5771/2747-5174-2022-1-44
Gunther Pallaver, T. Hug
In 2017, the Kingdom of Saudi Arabia granted citizenship to Sophia, a social humanoid robot equipped with artificial intelligence (AI). Our thesis is that, if Saudi Arabia’s move is pursued consistently, granting these new citizens the right to vote becomes unavoidable, especially since liberal pluralist democracies are based on the principle of equality and thus on the participation of their citizens. This in turn raises a number of legal-policy questions, also discussed by the European Parliament, whose scope is comparable to that of Europe’s struggles for universal and equal suffrage from the 19th century onwards. The contribution discusses a series of these questions and offers initial answers.
{"title":"Right to vote for robots?","authors":"Gunther Pallaver, T. Hug","doi":"10.5771/2747-5174-2022-1-44","DOIUrl":"https://doi.org/10.5771/2747-5174-2022-1-44","url":null,"abstract":"In 2017, the Kingdom of Saudi Arabia granted citizenship to the robot Sophia, a social humanoid robot equipped with humanoid artificial intelligence (AI). Our thesis is that, if Saudi Arabia’s move is consistently pursued, it is impossible to avoid granting the new citizens the right to vote, especially since liberal pluralist democracies are based on the principle of equality and thus the participation of its citizens. This in turn raises a number of questions of legal policy which are also discussed by the European Parliament and which in their scope are comparable to the time when Europe fought for universal and equal suffrage from the 19th century onwards. The contribution aims at discussing a series of questions as well as initial answers.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129585886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Please Explain: Key Questions for Explainable AI research from an Organizational perspective
DOI: 10.5771/2747-5174-2021-2-10
Ella Hafermalz, M. Huysman
There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response we formulate key questions for explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across disciplines working on Explainable AI.
{"title":"Please Explain: Key Questions for Explainable AI research from an Organizational perspective","authors":"Ella Hafermalz, M. Huysman","doi":"10.5771/2747-5174-2021-2-10","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-2-10","url":null,"abstract":"There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay we point out that technical and ethical approaches to Explainable AI (XAI) have different assumptions and aims. Further, the organizational perspective is missing from this discourse. In response we formulate key questions for explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across disciplines working on Explainable AI.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129739113","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AI Errors in Health?
DOI: 10.5771/2747-5174-2022-1-34
Veronica Barassi, Rahi Patra
The ever-greater use of AI-driven technologies in the health sector raises moral questions about what it means for algorithms to misunderstand and mismeasure human health, and about how we as a society understand AI errors in health. This article argues that AI errors in health confront us with the problem that our AI technologies do not grasp the full pluriverse of human experience and rely on data and measures with a long history of scientific bias. Yet contemporary public debate on the issue is very limited. Drawing on a discourse analysis of 520 European news media articles reporting on AI errors, the article argues that the ‘media frame’ on AI errors in health is often defined by a techno-solutionist perspective and only rarely sheds light on the relationship between AI technologies and scientific bias. Public awareness of the issue is nonetheless of central importance, because it shows us that rather than ‘fixing’ or ‘finding solutions’ for AI errors, we need to learn to coexist with the fact that technologies, because they are human-made, will inevitably be biased.
{"title":"AI Errors in Health?","authors":"Veronica Barassi, Rahi Patra","doi":"10.5771/2747-5174-2022-1-34","DOIUrl":"https://doi.org/10.5771/2747-5174-2022-1-34","url":null,"abstract":"The ever-greater use of AI-driven technologies in the health sector begs moral questions regarding what it means for algorithms to mis-understand and mis-measure human health and how as a society we are understanding AI errors in health. This article argues that AI errors in health are putting us in front of the problem that our AI technologies do not grasp the full pluriverse of human experience, and rely on data and measures that have a long history of scientific bias. However, as we shall see in this paper, contemporary public debate on the issue is very limited. Drawing on a discourse analysis of 520 European news media articles reporting on AI-errors the article will argue that the ‘media frame’ on AI errors in health is often defined by a techno-solutionist perspective, and only rarely it sheds light on the relationship between AI technologies and scientific bias. Yet public awareness on the issue is of central importance because it shows us that rather than ‚fixing‘ or ‚finding solutions‘ for AI errors we need to learn how to coexist with the fact that technlogies – because they are human made, are always going to be inevitably biased.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"335 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121601432","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deskilling, Upskilling, and Reskilling: a Case for Hybrid Intelligence
DOI: 10.5771/2747-5174-2021-2-24
J. Rafner, Dominik Dellermann, A. Hjorth, Dóra Verasztó, C. Kampf, Wendy Mackay, J. Sherson
Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, they can also, as an unintended consequence, deskill workers, who lose valuable skills that would otherwise be maintained as part of their daily work. Such deskilling has a wide range of negative effects on multiple stakeholders: employees, organizations, and society at large. This essay discusses deskilling in the age of AI on three levels: individual, organizational, and societal. Deskilling is then analyzed through the lens of four levels of human-AI configuration, and we argue that one of them, Hybrid Intelligence, could be particularly suitable for managing the risk of deskilling human experts. Hybrid Intelligence system design and implementation can explicitly take such risks into account and instead foster the upskilling of workers. Hybrid Intelligence may thus, in the long run, lower costs, improve performance and job satisfaction, and prevent management from creating unintended organization-wide deskilling.
{"title":"Deskilling, Upskilling, and Reskilling: a Case for Hybrid Intelligence","authors":"J. Rafner, Dominik Dellermann, A. Hjorth, Dóra Verasztó, C. Kampf, Wendy Mackay, J. Sherson","doi":"10.5771/2747-5174-2021-2-24","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-2-24","url":null,"abstract":"Advances in AI technology affect knowledge work in diverse fields, including healthcare, engineering, and management. Although automation and machine support can increase efficiency and lower costs, it can also, as an unintended consequence, deskill workers, who lose valuable skills that would otherwise be maintained as part of their daily work. Such deskilling has a wide range of negative effects on multiple stakeholders -- employees, organizations, and society at large. This essay discusses deskilling in the age of AI on three levels - individual, organizational and societal. Deskilling is furthermore analyzed through the lens of four different levels of human-AI configurations and we argue that one of them, Hybrid Intelligence, could be particularly suitable to help manage the risk of deskilling human experts. Hybrid Intelligence system design and implementation can explicitly take such risks into account and instead foster upskilling of workers. Hybrid Intelligence may thus, in the long run, lower costs and improve performance and job satisfaction, as well as prevent management from creating unintended organization-wide deskilling.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131450169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are what we create.
DOI: 10.5771/2747-5174-2022-1-22
Claudia Gerstl
Since ancient times, people have dreamt of becoming creators, and today they are approaching this dream in small but impressive steps. In fiction, the goal of creating an artificial human has already been achieved. As a mirror of society, the medium of film also offers the opportunity to address current questions and problems of humanity. This paper asks how artificial agents have been depicted across more than 30 years of film history, from 1990 to 2021, and explores roboethics through the lens of movies, focusing on social humanoid robots.
{"title":"We are what we create.","authors":"Claudia Gerstl","doi":"10.5771/2747-5174-2022-1-22","DOIUrl":"https://doi.org/10.5771/2747-5174-2022-1-22","url":null,"abstract":"Already in ancient times, people dreamt of becoming creators, and nowadays they are approaching this dream in small but impressive steps. In fiction, the goal of creating an artificial human has already been achieved. Additionally, as a mirror of society, the medium of film offers the opportunity to address current questions and problems of humanity. This paper raises the question of how artificial agents are depicted in over 30 years of film history from 1990 to 2021, and it explores roboethics through the lens of movies, focusing on social humanoid robots.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131789465","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Some Black Boxes are made of Flesh; Others are made of Silicon
DOI: 10.5771/2747-5174-2022-2-64
Tobias Mast
Automated decisions, especially those made using AI, face the accusation of being opaque black boxes. This accusation should be taken seriously, but it is put into perspective when contrasted with criticism of the human-generated reasoning behind judicial and administrative decisions. This article explains the functions that decision rationales fulfill from the perspective of legal scholarship and examines the qualities and shortcomings of human and machine approaches.
{"title":"Some Black Boxes are made of Flesh; Others are made of Silicon","authors":"Tobias Mast","doi":"10.5771/2747-5174-2022-2-64","DOIUrl":"https://doi.org/10.5771/2747-5174-2022-2-64","url":null,"abstract":"Automated decisions, especially those that operate using AI, face accusations of being opaque black boxes. This accusation is to be taken seriously, but it is put into perspective to a certain extent if one contrasts it with the criticism of human-generated reasoning for decisions in the judiciary and administration. This article explains the functions that decision rationales fulfill from the perspective of legal scholarship and examines the qualities and shortcomings of human and machine approaches.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131794776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Collectively Reimagining Technology
DOI: 10.5771/2747-5174-2021-2-86
K. Andersen, M. Overgaard, Mirabelle Jones, I. Shklovski
This article explores the use of participatory art and technology workshops as an approach to creating more diverse and inclusive modes of engagement in the design of digital technologies. Taking diverse works of science fiction as our starting point, we draw on the concept of critical making and Ursula Le Guin’s disdain for the distinction between hard and soft technology to discuss the role of collaborative reimagining in the creation of technological futures. Such an approach facilitates a nuanced and reflective understanding of how technologies come to be designed and can empower more open and diverse participation in technology development.
{"title":"Collectively Reimagining Technology","authors":"K. Andersen, M. Overgaard, Mirabelle Jones, I. Shklovski","doi":"10.5771/2747-5174-2021-2-86","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-2-86","url":null,"abstract":"This article explores the use of participatory art and technology workshops as an approach to create more diverse and inclusive modes of engagement in the design of digital technologies. Taking the starting point in diverse works of science fiction, we draw on the concept of critical making and Ursula Le Guin’s disdain for the distinction between hard and soft technology to discuss the role of collaborative reimagining in the creation of technological futures. Such an approach facilitates a nuanced and reflected understanding for how technologies come to be designed and can empower more open and diverse participation in technology development.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125257797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recommender Systems in the EU: from Responsibility to Regulation
DOI: 10.5771/2747-5174-2021-2-60
Sebastian Felix Schwemer
Over recent years, the EU has increasingly looked at regulating various forms of automation and the use of algorithms. For recommender systems specifically, two recent legislative proposals by the European Commission are of interest: the Digital Services Act of December 2020 and the Artificial Intelligence Act of April 2021. This article analyses these proposals with a view to identifying the regulatory trajectory. While the instruments differ in scope, it argues that both may, directly and indirectly, regulate various aspects of recommender systems and thereby influence the debate on how to ensure responsible, rather than opaque, machines that recommend information to humans.
{"title":"Recommender Systems in the EU: from Responsibility to Regulation","authors":"Sebastian Felix Schwemer","doi":"10.5771/2747-5174-2021-2-60","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-2-60","url":null,"abstract":"Over recent years, the EU has increasingly looked at the regulation of various forms of automation and the use of algorithms. For recommender systems specifically, two recent legislative proposals by the European Commission, the Digital Services Act from December 2020 and the Artificial Intelligence Act from April 2021, are of interest. This article analyses the recent legislative proposals with a view to identify the regulatory trajectory. Whereas the instruments differ in scope, it argues that both may -directly and indirectly- regulate various aspects of recommender systems and thereby influence the debate on how to ensure responsible, not opaque, machines that recommend information to humans.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116941251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Legal Theory and the Problem of Machine Opacity: Spinoza on Intentionality, Prediction and Law
DOI: 10.5771/2747-5174-2021-2-50
M. Dahlbeck
In this paper I approach the problem of machine opacity in law, understanding it as a problem that revolves around the underlying philosophical tension between description and prescription in law and legal theory. I use the problem of machine opacity, and its effects on the lawmaker’s activity, as a practical backdrop for discussing the associations that legal theory upholds between law’s normative ideals and its preferred, normatively neutral, method for achieving them. This discussion provides a preliminary answer to the question of whether it is machine opacity (which introduces an unfamiliar kind of intentionality into the legal sphere and disturbs the predictability of law) that renders the contemporary lawmaker’s job difficult, or whether this difficulty indeed comes with the lawmaker’s job description. I turn to the early rationalist Benedict Spinoza (1632-1677), and particularly his explanation of law in the Theological Political Treatise (TTP), for analytical assistance.
{"title":"Legal Theory and the Problem of Machine Opacity: Spinoza on Intentionality, Prediction and Law","authors":"M. Dahlbeck","doi":"10.5771/2747-5174-2021-2-50","DOIUrl":"https://doi.org/10.5771/2747-5174-2021-2-50","url":null,"abstract":"In this paper I will approach the problem of machine opacity in law, according to an understanding of it as a problem revolving around the underlying philosophical tension between description and prescription in law and legal theory. I will use the problem of machine opacity, and its effects on the lawmaker’s activity, as a practical backdrop for a discussion of the associations upheld by legal theory between law’s normative ideals and its preferred, normatively neutral, method for achieving these. My discussion of this problem will provide a preliminary answer to the question whether it is machine opacity - by introducing an unfamiliar kind of intentionality into the legal sphere which disturbs the predictability of law - that renders the contemporary lawmaker’s job difficult, or, whether this difficulty indeed comes with the lawmaker’s job description. I will turn to early rationalist Benedict Spinoza (1632-1677) and particularly his explanation of law in the Theological Political Treatise (TTP) for analytical assistance in my discussion.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134538532","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Critical Dataset and Machine Learning Art.
DOI: 10.5771/2747-5174-2022-2-22
Hanna L. Grønneberg, Ana Alacovska
The paper proposes that artistic practices engaging with the datasets on which machine learning (ML) systems are trained can provide caring or curative resistance to digital toxicity and furnish models for imagining more equitable digital futures. It offers a critique of harmful practices of data mining and data extraction in ML system development by focusing on artworks that engage pharmacologically, that is, critically and restoratively, with these technologies. The three works analyzed are ImageNet Roulette (2019) by Kate Crawford and Trevor Paglen, This Person does Exist (2020) by Mathias Schäfer, and Feminist Dataset (2017-ongoing) by Caroline Sinders. These works, we argue, use defamiliarization as a critical practice when engaging with datasets and ML, providing vital counter-narratives and curative strategies, yet in some cases also deepening and exacerbating the very technological toxicity they set out to remedy.
{"title":"Critical Dataset and Machine Learning Art.","authors":"Hanna L. Grønneberg, Ana Alacovska","doi":"10.5771/2747-5174-2022-2-22","DOIUrl":"https://doi.org/10.5771/2747-5174-2022-2-22","url":null,"abstract":"The paper proposes that artistic practices engaging with datasets on which machine learning (ML) systems are trained can provide caring or curative resistance to digital toxicity and furnish models for imagining more equitable digital futures. The paper provides a critique of harmful practices of data mining and data extraction in ML system development by focusing on artworks that engage pharmacologically, that is critically and restoratively, with these technologies. The three works analyzed are ImageNet Roulette (2019) by Kate Crawford and Trevor Paglen, This Person does Exist (2020) by Mathias Schäfer, and Feminist Dataset (2017-ongoing) by Caroline Sinders. These works, we ague, use defamiliarization as a critical practice when engaging with datasets and ML, providing vital counter-narratives and curative strategies, yet in some cases also deepening and exacerbating the very same technological toxicity they set out to remedy.","PeriodicalId":377128,"journal":{"name":"Morals & Machines","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125439301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}