Pub Date: 2024-01-01 | Epub Date: 2023-04-06 | DOI: 10.1007/s11023-023-09633-1
Cesare Carissimo, Marcin Korecki
Optimization is about finding the best available object with respect to an objective function. Mathematics and quantitative sciences have been highly successful in formulating problems as optimization problems, and constructing clever processes that find optimal objects from sets of objects. As computers have become readily available to most people, optimization and optimized processes play a very broad role in societies. It is not obvious, however, that the optimization processes that work for mathematics and abstract objects should be readily applied to complex and open social systems. In this paper we set forth a framework to understand when optimization is limited, particularly for complex and open social systems.
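The abstract's opening definition — finding the best available object with respect to an objective function — can be sketched in a few lines. This is a generic illustration of that definition, not code from the paper; the candidate set and objective are toy assumptions.

```python
# Optimization as the abstract defines it: select, from a set of
# candidate objects, the one that is best with respect to an
# objective function.

def optimize(candidates, objective):
    """Return the candidate that maximizes the objective function."""
    return max(candidates, key=objective)

# Toy example: over the integers 0..9, f(x) = -(x - 3)^2 is
# maximized at x = 3.
best = optimize(range(10), lambda x: -(x - 3) ** 2)
print(best)  # prints 3
```

The paper's point is precisely that this clean picture — a fixed candidate set and a fixed objective — need not transfer to complex, open social systems, where both the set of options and the objective itself can shift.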
{"title":"Limits of Optimization.","authors":"Cesare Carissimo, Marcin Korecki","doi":"10.1007/s11023-023-09633-1","DOIUrl":"10.1007/s11023-023-09633-1","url":null,"abstract":"<p><p>Optimization is about finding the best available object with respect to an objective function. Mathematics and quantitative sciences have been highly successful in formulating problems as optimization problems, and constructing clever processes that find optimal objects from sets of objects. As computers have become readily available to most people, optimization and optimized processes play a very broad role in societies. It is not obvious, however, that the optimization processes that work for mathematics and abstract objects should be readily applied to complex and open social systems. In this paper we set forth a framework to understand when optimization is limited, particularly for complex and open social systems.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"1 1","pages":"117-137"},"PeriodicalIF":7.4,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10948533/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42980662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-18 | DOI: 10.1007/s11023-023-09654-w
André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea, Jean Enno Charton
Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at many organisations – the work of an interdisciplinary ethics panel. The PaRA tool serves to guide and harmonise the work of the Digital Ethics Advisory Panel at the multinational science and technology company Merck KGaA in alignment with the principles outlined in the company’s Code of Digital Ethics. We examine how such a tool can be used as part of a multifaceted approach to operationalise high-level principles at an organisational level and provide general requirements for its implementation. We showcase its application in an example case dealing with the comprehensibility of consent forms in a data-sharing context at Syntropy, a collaborative technology platform for clinical research.
{"title":"The Principle-at-Risk Analysis (PaRA): Operationalising Digital Ethics by Bridging Principles and Operations of a Digital Ethics Advisory Panel","authors":"André T. Nemat, Sarah J. Becker, Simon Lucas, Sean Thomas, Isabel Gadea, Jean Enno Charton","doi":"10.1007/s11023-023-09654-w","DOIUrl":"https://doi.org/10.1007/s11023-023-09654-w","url":null,"abstract":"<p>Recent attempts to develop and apply digital ethics principles to address the challenges of the digital transformation leave organisations with an operationalisation gap. To successfully implement such guidance, they must find ways to translate high-level ethics frameworks into practical methods and tools that match their specific workflows and needs. Here, we describe the development of a standardised risk assessment tool, the Principle-at-Risk Analysis (PaRA), as a means to close this operationalisation gap for a key level of the ethics infrastructure at many organisations – the work of an interdisciplinary ethics panel. The PaRA tool serves to guide and harmonise the work of the Digital Ethics Advisory Panel at the multinational science and technology company Merck KGaA in alignment with the principles outlined in the company’s Code of Digital Ethics. We examine how such a tool can be used as part of a multifaceted approach to operationalise high-level principles at an organisational level and provide general requirements for its implementation. 
We showcase its application in an example case dealing with the comprehensibility of consent forms in a data-sharing context at Syntropy, a collaborative technology platform for clinical research.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"121 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138716548","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-12-13 | DOI: 10.1007/s11023-023-09653-x
Juan Luis Gastaldi
{"title":"Computing Cultures: Historical and Philosophical Perspectives","authors":"Juan Luis Gastaldi","doi":"10.1007/s11023-023-09653-x","DOIUrl":"https://doi.org/10.1007/s11023-023-09653-x","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"28 6","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139005660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-29 | DOI: 10.1007/s11023-023-09652-y
Igor Douven
This paper studies the learnability of natural concepts in the context of the conceptual spaces framework. Previous work proposed that natural concepts are represented by the cells of optimally partitioned similarity spaces, where optimality was defined in terms of a number of constraints. Among these is the constraint that optimally partitioned similarity spaces result in easily learnable concepts. While there is evidence that systems of concepts generally regarded as natural satisfy a number of the proposed optimality constraints, the connection between naturalness and learnability has been less well studied. To fill this gap, we conduct a computational study employing two standard models of concept learning. Applying these models to the learning of color concepts, we examine whether natural color concepts are more readily learned than nonnatural ones. Our findings warrant a positive answer to this question for both models employed, thus lending empirical support to the notion that learnability is a distinctive characteristic of natural concepts.
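In the conceptual spaces framework the abstract refers to, concepts correspond to cells of a partitioned similarity space — typically the Voronoi cells around prototype points. The sketch below illustrates that idea with nearest-prototype classification; the three-dimensional "color space" coordinates and prototype values are invented for illustration and are not the paper's actual models or data.

```python
import math

# Hypothetical prototypes in a 3-D color similarity space
# (axes loosely modeled on CIELAB: lightness, red-green, yellow-blue).
prototypes = {
    "red":   (60.0, 60.0, 40.0),
    "green": (55.0, -50.0, 45.0),
    "blue":  (35.0, 10.0, -45.0),
}

def classify(point):
    """Assign a point to the concept whose prototype is nearest,
    i.e. the Voronoi cell of the partitioned space it falls into."""
    return min(prototypes, key=lambda c: math.dist(point, prototypes[c]))

# A stimulus close to the red prototype lands in the "red" cell.
print(classify((58.0, 55.0, 38.0)))  # prints red
```

Learnability questions of the kind the paper studies then ask how quickly a learner converges on such a partition from labeled examples, and whether "natural" partitions are reached faster than gerrymandered ones.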
{"title":"The Role of Naturalness in Concept Learning: A Computational Study","authors":"Igor Douven","doi":"10.1007/s11023-023-09652-y","DOIUrl":"https://doi.org/10.1007/s11023-023-09652-y","url":null,"abstract":"<p>This paper studies the learnability of natural concepts in the context of the conceptual spaces framework. Previous work proposed that natural concepts are represented by the cells of optimally partitioned similarity spaces, where optimality was defined in terms of a number of constraints. Among these is the constraint that optimally partitioned similarity spaces result in easily learnable concepts. While there is evidence that systems of concepts generally regarded as natural satisfy a number of the proposed optimality constraints, the connection between naturalness and learnability has been less well studied. To fill this gap, we conduct a computational study employing two standard models of concept learning. Applying these models to the learning of color concepts, we examine whether natural color concepts are more readily learned than nonnatural ones. Our findings warrant a positive answer to this question for both models employed, thus lending empirical support to the notion that learnability is a distinctive characteristic of natural concepts.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"2 3","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-22 | DOI: 10.1007/s11023-023-09649-7
Marc Serramia, Manel Rodriguez-Soto, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar, Filippo Bistaffa, Paula Boddington, Michael Wooldridge, Carlos Ansotegui
Norms have been widely enacted in human and agent societies to regulate individuals’ actions. However, although legislators may have ethics in mind when establishing norms, moral values are not always explicitly considered. This paper advances the state of the art by providing a method for selecting the norms to enact within a society that best align with the moral values of that society. Our approach to aligning norms and values is grounded in the ethics literature. Specifically, from the literature’s study of the relations between norms, actions, and values, we formally define how actions and values relate through the so-called value judgment function and how norms and values relate through the so-called norm promotion function. We show that both functions provide the means to compute value alignment for a set of norms. Moreover, we detail how to cast our decision-making problem as an optimisation problem: finding the norms that maximise value alignment. We also show how to solve our problem using off-the-shelf optimisation tools. Finally, we illustrate our approach with a specific case study on the European Value Study.
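The optimisation problem the abstract describes — choosing the set of norms that maximises value alignment — can be sketched with a brute-force search. The norm names, promotion scores, and value weights below are invented placeholders standing in for the paper's norm promotion function and the society's value priorities; an off-the-shelf solver, as the authors suggest, would replace the exhaustive search at realistic scales.

```python
from itertools import combinations

# Hypothetical norm-promotion scores: how strongly each candidate norm
# promotes (+) or demotes (-) each moral value. Illustrative numbers,
# not taken from the paper.
promotion = {
    "n1": {"fairness": 0.8, "freedom": -0.2},
    "n2": {"fairness": 0.1, "freedom": 0.6},
    "n3": {"fairness": -0.4, "freedom": 0.9},
}
value_weights = {"fairness": 0.7, "freedom": 0.3}  # the society's priorities

def alignment(norms):
    """Weighted value alignment of a set of norms."""
    return sum(value_weights[v] * promotion[n][v]
               for n in norms for v in value_weights)

# Exhaustive search over all non-empty norm sets.
best = max((s for r in range(1, len(promotion) + 1)
            for s in combinations(promotion, r)),
           key=alignment)
print(best)  # the norm set with maximal value alignment
```

Here n3 promotes freedom but demotes the more heavily weighted fairness, so the optimum enacts only n1 and n2.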
{"title":"Encoding Ethics to Compute Value-Aligned Norms","authors":"Marc Serramia, Manel Rodriguez-Soto, Maite Lopez-Sanchez, Juan A. Rodriguez-Aguilar, Filippo Bistaffa, Paula Boddington, Michael Wooldridge, Carlos Ansotegui","doi":"10.1007/s11023-023-09649-7","DOIUrl":"https://doi.org/10.1007/s11023-023-09649-7","url":null,"abstract":"<p>Norms have been widely enacted in human and agent societies to regulate individuals’ actions. However, although legislators may have ethics in mind when establishing norms, moral values are only sometimes explicitly considered. This paper advances the state of the art by providing a method for selecting the norms to enact within a society that best aligns with the moral values of such a society. Our approach to aligning norms and values is grounded in the ethics literature. Specifically, from the literature’s study of the relations between norms, actions, and values, we formally define how actions and values relate through the so-called <i>value judgment function</i> and how norms and values relate through the so-called <i>norm promotion function</i>. We show that both functions provide the means to compute value alignment for a set of norms. Moreover, we detail how to cast our decision-making problem as an optimisation problem: finding the norms that maximise value alignment. We also show how to solve our problem using off-the-shelf optimisation tools. 
Finally, we illustrate our approach with a specific case study on the European Value Study.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"41 7","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-22 | DOI: 10.1007/s11023-023-09650-0
Alessandro G. Buda, Giuseppe Primiero
Some computational phenomena rely essentially on pragmatic considerations, and seem to undermine the independence of the specification from the implementation. These include software development, deviant uses, esoteric languages and recent data-driven applications. To account for them, the interaction between pragmatics, epistemology and ontology in computational artefacts seems essential, indicating the need to recover the role of the language metaphor. We propose a User Levels (ULs) structure as a pragmatic complement to the Levels of Abstraction (LoAs)-based structure defining the ontology and epistemology of computational artefacts. ULs identify a flexible hierarchy in which users bear their own semantic and normative requirements, possibly competing with the logical specification. We formulate a notion of computational act intended in its pragmatic sense, alongside pragmatic versions of implementation and correctness.
{"title":"A Pragmatic Theory of Computational Artefacts","authors":"Alessandro G. Buda, Giuseppe Primiero","doi":"10.1007/s11023-023-09650-0","DOIUrl":"https://doi.org/10.1007/s11023-023-09650-0","url":null,"abstract":"<p>Some computational phenomena rely essentially on pragmatic considerations, and seem to undermine the independence of the specification from the implementation. These include software development, deviant uses, esoteric languages and recent data-driven applications. To account for them, the interaction between pragmatics, epistemology and ontology in computational artefacts seems essential, indicating the need to recover the role of the language metaphor. We propose a User Levels (ULs) structure as a pragmatic complement to the Levels of Abstraction (LoAs)-based structure defining the ontology and epistemology of computational artefacts. ULs identify a flexible hierarchy in which users bear their own semantic and normative requirements, possibly competing with the logical specification. We formulate a notion of computational act intended in its pragmatic sense, alongside pragmatic versions of implementation and correctness.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"30 8","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-11-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-11-18 | DOI: 10.1007/s11023-023-09651-z
Merel Noorman, Tsjalling Swierstra
Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether they like it or not, and they usually do not have much say in them. The democratic challenge for those working on AI technologies with collectively binding effects is both to develop and deploy technologies in such a way that the democratic legitimacy of the relevant decisions is safeguarded. In this paper, we develop a conceptual framework to help policymakers, project managers, innovators, and technologists to assess and develop approaches to democratize AI. This framework embraces a broad sociotechnical perspective that highlights the interactions between technology and the complexities and contingencies of the context in which these technologies are embedded. We start from the problem-based and practice-oriented approach to democracy theory as developed by political theorist Mark Warren. We build on this approach to describe practices that can enhance or challenge democracy in political systems and extend it to integrate a sociotechnical perspective and make the role of technology explicit. We then examine how AI technologies can play a role in these practices to improve or inhibit the democratic nature of political systems. We focus in particular on AI-supported political systems in the energy domain.
{"title":"Democratizing AI from a Sociotechnical Perspective","authors":"Merel Noorman, Tsjalling Swierstra","doi":"10.1007/s11023-023-09651-z","DOIUrl":"https://doi.org/10.1007/s11023-023-09651-z","url":null,"abstract":"<p>Artificial Intelligence (AI) technologies offer new ways of conducting decision-making tasks that influence the daily lives of citizens, such as coordinating traffic, energy distributions, and crowd flows. They can sort, rank, and prioritize the distribution of fines or public funds and resources. Many of the changes that AI technologies promise to bring to such tasks pertain to decisions that are collectively binding. When these technologies become part of critical infrastructures, such as energy networks, citizens are affected by these decisions whether they like it or not, and they usually do not have much say in them. The democratic challenge for those working on AI technologies with collectively binding effects is both to <i>develop</i> and <i>deploy</i> technologies in such a way that the democratic legitimacy of the relevant decisions is safeguarded. In this paper, we develop a conceptual framework to help policymakers, project managers, innovators, and technologists to assess and develop approaches to democratize AI. This framework embraces a broad sociotechnical perspective that highlights the interactions between technology and the complexities and contingencies of the context in which these technologies are embedded. We start from the problem-based and practice-oriented approach to democracy theory as developed by political theorist Mark Warren. We build on this approach to describe practices that can enhance or challenge democracy in political systems and extend it to integrate a sociotechnical perspective and make the role of technology explicit. We then examine how AI technologies can play a role in these practices to improve or inhibit the democratic nature of political systems. 
We focus in particular on AI-supported political systems in the energy domain.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"43 1","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525377","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-28 | DOI: 10.1007/s11023-023-09648-8
Katherine Lou, Luciano Floridi
Altruism is a well-studied phenomenon in the social sciences, but online altruism has received relatively little attention. In this article, we examine several cases of online altruism and analyse the key characteristics of the phenomenon, in particular comparing and contrasting it against models of traditional donor behaviour. We suggest a novel definition of online altruism, and provide an in-depth, mixed-method study of a significant case, represented by the r/Assistance subreddit. We argue that online altruism is characterized, from a giver’s point of view, by experiences that differ from those of traditional giving, and by unique mechanisms and actions made possible by the internet. These findings explain why people give to anonymous strangers online and provide a new perspective on altruism that is important in building a more altruistic internet and society.
{"title":"Online Altruism: What it is and how it Differs from Other Kinds of Altruism","authors":"Katherine Lou, Luciano Floridi","doi":"10.1007/s11023-023-09648-8","DOIUrl":"https://doi.org/10.1007/s11023-023-09648-8","url":null,"abstract":"<p>Altruism is a well-studied phenomenon in the social sciences, but online altruism has received relatively little attention. In this article, we examine several cases of online altruism, and analyse the key characteristics of the phenomenon, in particular comparing and contrasting it against models of traditional donor behaviour. We suggest a novel definition of online altruism, and provide an in-depth, mixed-method study of a significant case, represented by the r/Assistance subreddit. We argue that online altruism can be characterized by its differing experiences compared to traditional giving, from a giver’s point of view, and unique mechanisms and actions made possible by the internet. These findings explain why people give to anonymous strangers online and provide a new perspective on altruism that is important in building a more altruistic internet and society.</p>","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"6 2","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138525400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-22 | DOI: 10.1007/s11023-023-09644-y
Andrea Polonioli, Riccardo Ghioni, Ciro Greco, Prathm Juneja, Jacopo Tagliabue, David Watson, Luciano Floridi
Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This article fills this gap in the literature by introducing a new, soft ethics and governance framework that explicitly recognizes how the rise of an experimentation culture in industry settings brings not only unprecedented opportunities to businesses but also significant responsibilities. More precisely, the article (a) introduces a set of principles to encourage ethical and responsible experimentation to protect users, customers, and society; (b) argues that ensuring compliance with the proposed principles is a complex challenge unlikely to be addressed by resorting to a one-solution response; (c) discusses the relevance and effectiveness of several mechanisms and policies in educating, governing, and incentivizing companies conducting online controlled experiments; and (d) offers a list of prompting questions specifically designed to help and empower practitioners by stimulating specific ethical deliberations and facilitating coordination among different groups of stakeholders.
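For readers unfamiliar with the experiments under discussion: a typical A/B test compares a conversion rate between a control and a treatment group. The sketch below shows the standard two-proportion z-test used to analyse such experiments; the sample sizes and conversion counts are made-up numbers, and this is the generic statistical procedure, not part of the article's ethics framework.

```python
from math import sqrt
from statistics import NormalDist

def ab_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test for an A/B experiment.
    Returns the z statistic and its p-value."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)               # rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # std. error
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical experiment: 2400 users per arm, 5.0% vs 6.5% conversion.
z, p = ab_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(z, p)
```

Even this routine calculation presupposes ethically loaded choices — who is enrolled without consent, what counts as a "conversion", when to stop — which is the gap the article's principles and prompting questions target.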
{"title":"The Ethics of Online Controlled Experiments (A/B Testing)","authors":"Andrea Polonioli, Riccardo Ghioni, Ciro Greco, Prathm Juneja, Jacopo Tagliabue, David Watson, Luciano Floridi","doi":"10.1007/s11023-023-09644-y","DOIUrl":"https://doi.org/10.1007/s11023-023-09644-y","url":null,"abstract":"Abstract Online controlled experiments, also known as A/B tests, have become ubiquitous. While many practical challenges in running experiments at scale have been thoroughly discussed, the ethical dimension of A/B testing has been neglected. This article fills this gap in the literature by introducing a new, soft ethics and governance framework that explicitly recognizes how the rise of an experimentation culture in industry settings brings not only unprecedented opportunities to businesses but also significant responsibilities. More precisely, the article (a) introduces a set of principles to encourage ethical and responsible experimentation to protect users, customers, and society; (b) argues that ensuring compliance with the proposed principles is a complex challenge unlikely to be addressed by resorting to a one-solution response; (c) discusses the relevance and effectiveness of several mechanisms and policies in educating, governing, and incentivizing companies conducting online controlled experiments; and (d) offers a list of prompting questions specifically designed to help and empower practitioners by stimulating specific ethical deliberations and facilitating coordination among different groups of stakeholders.","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-22","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136061704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-09-08 | DOI: 10.1007/s11023-023-09645-x
Fabian Beigang
{"title":"Yet Another Impossibility Theorem in Algorithmic Fairness","authors":"Fabian Beigang","doi":"10.1007/s11023-023-09645-x","DOIUrl":"https://doi.org/10.1007/s11023-023-09645-x","url":null,"abstract":"","PeriodicalId":51133,"journal":{"name":"Minds and Machines","volume":"759 ","pages":""},"PeriodicalIF":7.4,"publicationDate":"2023-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41281802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}