Pub Date: 2022-10-29  DOI: 10.1007/s11023-022-09614-w
B. Townsend, Colin Paterson, T. Arvind, G. Nemirovsky, R. Calinescu, Ana Cavalcanti, I. Habli, Alan Thomas
{"title":"From Pluralistic Normative Principles to Autonomous-Agent Rules","authors":"B. Townsend, Colin Paterson, T. Arvind, G. Nemirovsky, R. Calinescu, Ana Cavalcanti, I. Habli, Alan Thomas","doi":"10.1007/s11023-022-09614-w","DOIUrl":"https://doi.org/10.1007/s11023-022-09614-w","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"47 7","pages":"683 - 715"},"PeriodicalIF":7.4,"publicationDate":"2022-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41274682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-10-24  DOI: 10.1007/s11023-022-09613-x
David Wong, L. Floridi
{"title":"Meta’s Oversight Board: A Review and Critical Assessment","authors":"David Wong, L. Floridi","doi":"10.1007/s11023-022-09613-x","DOIUrl":"https://doi.org/10.1007/s11023-022-09613-x","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"33 1","pages":"261 - 284"},"PeriodicalIF":7.4,"publicationDate":"2022-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43714661","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-01  DOI: 10.1007/s11023-022-09609-7
Alison Duncan Kerr, Kevin Scharp
{"title":"The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence","authors":"Alison Duncan Kerr, Kevin Scharp","doi":"10.1007/s11023-022-09609-7","DOIUrl":"https://doi.org/10.1007/s11023-022-09609-7","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"585 - 611"},"PeriodicalIF":7.4,"publicationDate":"2022-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47535062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-18  DOI: 10.1007/s11023-022-09612-y
Jakob Mökander, Prathm Juneja, David Watson, Luciano Floridi
{"title":"The US Algorithmic Accountability Act of 2022 vs. The EU Artificial Intelligence Act: what can they learn from each other?","authors":"Jakob Mökander, Prathm Juneja, David Watson, Luciano Floridi","doi":"10.1007/s11023-022-09612-y","DOIUrl":"https://doi.org/10.1007/s11023-022-09612-y","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"751 - 758"},"PeriodicalIF":7.4,"publicationDate":"2022-08-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44497626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-08-13  DOI: 10.1007/s11023-022-09611-z
K. Alfrink, I. Keller, Gerd Kortuem, N. Doorn
{"title":"Contestable AI by Design: Towards a Framework","authors":"K. Alfrink, I. Keller, Gerd Kortuem, N. Doorn","doi":"10.1007/s11023-022-09611-z","DOIUrl":"https://doi.org/10.1007/s11023-022-09611-z","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2022-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49576317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-28  DOI: 10.1007/s11023-022-09608-8
Filippo Santoni de Sio, Giulio Mecacci, Simeon Calvert, Daniel Heikoop, Marjan Hagenzieker, Bart van Arem
"Realising Meaningful Human Control Over Automated Driving Systems: A Multidisciplinary Approach", Minds and Machines, pp. 1–25. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9330947/pdf/
Abstract: The paper presents a framework to realise "meaningful human control" over Automated Driving Systems. The framework is based on an original synthesis of the results of the multidisciplinary research project "Meaningful Human Control over Automated Driving Systems", led by a team of engineers, philosophers, and psychologists at Delft University of Technology from 2017 to 2021. Meaningful human control aims at protecting safety and reducing responsibility gaps. The framework rests on the core assumption that human persons and institutions, not hardware, software, and their algorithms, should remain ultimately (though not necessarily directly) in control of, and thus morally responsible for, the potentially dangerous operation of driving in mixed traffic. We propose that an Automated Driving System is under meaningful human control if it behaves according to the relevant reasons of the relevant human actors (tracking) and if any potentially dangerous event can be related to a human actor (tracing). We operationalise the requirements for meaningful human control through multidisciplinary work in philosophy, behavioural psychology, and traffic engineering. The tracking condition is operationalised via a proximal scale of reasons and the tracing condition via an evaluation cascade table. We review the implications and requirements for the behaviour and skills of human actors, in particular those related to supervisory control and driver education. We show how the evaluation cascade table can be applied in concrete engineering use cases, in combination with the definition of core components, to expose deficiencies in traceability and thereby avoid so-called responsibility gaps. Future research directions are proposed to expand the philosophical framework and use cases, supervisory control and driver education, real-world pilots, and institutional embedding.
Pub Date: 2022-06-29  DOI: 10.1007/s11023-022-09607-9
Stefan Buijsman
{"title":"Defining Explanation and Explanatory Depth in XAI","authors":"Stefan Buijsman","doi":"10.1007/s11023-022-09607-9","DOIUrl":"https://doi.org/10.1007/s11023-022-09607-9","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"563 - 584"},"PeriodicalIF":7.4,"publicationDate":"2022-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49352469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-06-09  DOI: 10.1007/s11023-022-09604-y
Selene Arfini, D. Spinelli, D. Chiffi
{"title":"Ethics of Self-driving Cars: A Naturalistic Approach","authors":"Selene Arfini, D. Spinelli, D. Chiffi","doi":"10.1007/s11023-022-09604-y","DOIUrl":"https://doi.org/10.1007/s11023-022-09604-y","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"32 1","pages":"717 - 734"},"PeriodicalIF":7.4,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41756533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}