Pub Date: 2025-05-07 | DOI: 10.1007/s11948-025-00535-2
Simon Rosenqvist, Magnus Dustler, Johan Brännmark
This paper argues that we have a moral obligation to implement certain health technologies even if we have limited or incomplete evidence of their effectiveness. The focus is on technologies used in non-emergency settings, as opposed to "exceptional cases" such as compassionate use and emergency approvals during public health emergencies. A broadly plausible moral principle, the Ecumenical Principle, is introduced and applied to a test case: the use of Digital Breast Tomosynthesis in mammographic screening. The paper concludes by exploring the implications of the Ecumenical Principle for the adoption of other new health technologies.
Health Technologies and Impermissible Delays: The Case of Digital Breast Tomosynthesis. Science and Engineering Ethics 31(3): 13. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12058816/pdf/
Pub Date: 2025-04-22 | DOI: 10.1007/s11948-025-00533-4
Ryan Jenkins
This paper engages with the problem of toxic speech online and suggests remedies inspired by the value-sensitive design (VSD) literature. First, it argues that the designers of online platforms should explore methods of adding friction to online conversations. Second, it examines a historical case of a communications platform designed to offer users methods of inculcating norms of acceptable behavior by introducing friction into synchronous conversations: America Online (AOL) Instant Messenger, also known as AIM, which included a feature whereby users could "warn" other users, attaching a cost to, and thus disincentivizing, certain kinds of speech. The nuances of this feature's design make it especially well suited as a subject of study in value-sensitive design, as it seems to be the product of significant reflection and foresight by its designers. In the course of examining this case, the paper proposes two novel and generalizable processes for integrating values into the design of technology, inspired by the VSD approach: a "method of decomposition," which reconstructs a user journey to identify possible moments of intervention; and an iterative "Innovation-Abuse-Innovation" branching diagram, which systematizes the process of anticipating abuse cases and designing responses to them. These methods build upon recent work in the literature on operationalizing ethical values in the design process. I close by illustrating the flexibility and generalizability of these methods and speculating on how they might be applied to contemporary platforms.
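The warn-as-friction mechanism described above can be sketched in miniature. This is a hypothetical reconstruction, not AIM's actual implementation: the class name, warning increments, and rate formula are all illustrative assumptions about how a warning level might throttle a user's permitted message rate.

```python
# Hypothetical sketch of an AIM-style "warn" mechanism: each warning a user
# receives raises their warning level, and a higher level reduces how many
# messages they may send per minute. All names and thresholds are invented
# for illustration.

class WarnableUser:
    BASE_RATE = 10  # assumed messages per minute at warning level 0

    def __init__(self, name: str):
        self.name = name
        self.warning_level = 0  # percent, 0-100

    def warn(self, amount: int = 10) -> None:
        # Each warning adds friction by raising the level (capped at 100).
        self.warning_level = min(100, self.warning_level + amount)

    def allowed_rate(self) -> int:
        # Friction: permitted message rate falls as the warning level rises.
        return round(self.BASE_RATE * (1 - self.warning_level / 100))

user = WarnableUser("example_user")
for _ in range(3):
    user.warn()
print(user.warning_level)  # 30
print(user.allowed_rate())  # 7
```

The design point the sketch illustrates is that the cost is attached to the speaker's future capacity to speak, rather than removing past speech, which is one way of disincentivizing toxicity without outright censorship.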
Threads and Needles: A Value-Sensitive Design Approach to Online Toxicity. Science and Engineering Ethics 31(3): 12. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12014825/pdf/
Pub Date: 2025-04-09 | DOI: 10.1007/s11948-025-00538-z
Fei Wang, Yuanbao Hou, Lingling Zhang
The medical field is highly susceptible to research misconduct, making research integrity in medical universities and colleges crucial for its prevention and management. While both Chinese and international researchers have conducted extensive studies on fostering research integrity in higher education institutions, comparative analyses focusing specifically on medical universities and colleges in China remain insufficient. To address this gap, this study examines the state of research integrity construction in 83 Chinese public medical universities/colleges from 2020 to 2024, exploring the underlying factors influencing this development. The findings indicate that research integrity initiatives in Chinese medical universities and colleges are predominantly reactive, driven by compliance with government regulations and mandated tasks, rather than proactive, guided by intrinsic awareness and moral commitment. These results underscore the need to go beyond addressing "Two-Points" to emphasize "Key-Points," advocating for a greater role of scientific autonomy in shaping research integrity, as opposed to reliance on government oversight.
A Comparative Study on the Construction of Research Integrity in Public Medical Universities/Colleges in China: 2020-2024. Science and Engineering Ethics 31(2): 11. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11982101/pdf/
Pub Date: 2025-04-01 | DOI: 10.1007/s11948-025-00536-1
Rockwell F Clancy, Qin Zhu, Scott Streiner, Andrea Gammon, Ryan Thorpe
This paper describes the motivations and some directions for bringing insights and methods from moral and cultural psychology to bear on how engineering ethics is conceived, taught, and assessed. The audience for this paper is therefore not only engineering ethics educators and researchers but also administrators and organizations concerned with ethical behaviors. Engineering ethics has typically been conceived and taught as a branch of professional and applied ethics with pedagogical aims, where students and practitioners learn about professional codes and/or Western ethical theories and then apply these resources to address issues presented in case studies about engineering and/or technology. As a result, accreditation and professional bodies have generally adopted ethical reasoning skills and/or moral knowledge as learning outcomes. However, this paper argues that such frameworks are psychologically "irrealist" and culturally biased: it is not clear that ethical judgments or behaviors are primarily the result of applying principles, or that ethical concerns captured in professional codes or Western ethical theories do or should reflect the engineering ethical concerns of global populations. Individuals from Western, educated, industrialized, rich, and democratic (WEIRD) cultures are outliers on various psychological and social constructs, including self-concepts, thought styles, and ethical concerns. Yet engineering is more cross-cultural and international than ever before, with engineers and technologies spanning multiple cultures and countries. For instance, different national regulations and cultural values can come into conflict while performing engineering work. Additionally, ethical judgments may also result from intuitions, closer to emotions than reflective thought, and behaviors can be affected by unconscious, social, and environmental factors. To address these issues, this paper surveys work in engineering ethics education and assessment to date, shortcomings within these approaches, and how insights and methods from moral and cultural psychology could be used to improve engineering ethics education and assessment, making them more culturally responsive and psychologically realist at the same time.
Towards a Psychologically Realist, Culturally Responsive Approach to Engineering Ethics in Global Contexts. Science and Engineering Ethics 31(2): 10. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11961465/pdf/
Pub Date: 2025-03-28 | DOI: 10.1007/s11948-025-00532-5
Cindy Friedman
One characteristic of socially disruptive technologies is that they have the potential to cause uncertainty about the application conditions of a concept, i.e., they are conceptually disruptive. Humanoid robots have done just this, as evidenced by discussions about whether, and under what conditions, humanoid robots could be classified as, for example, moral agents, moral patients, or legal and/or moral persons. This paper frames the disruptive effect of humanoid robots differently by taking the discussion beyond that of classificatory concerns. It does so by showing that humanoid robots are socially disruptive because they also transform how we experience and understand the world. By inviting us to relate to a technological artefact as if it were human, humanoid robots have a profound impact upon the way in which we relate to different elements of our world. Specifically, I focus on three types of human relational experiences, and how the norms that surround them may be transformed by humanoid robots: (1) human-technology relations; (2) human-human relations; and (3) human-self relations. Anticipating the ways in which humanoid robots may change society is important given that once a technology is entrenched, it is difficult to counteract negative impacts. Therefore, we should try to anticipate them while we can still do something to prevent them. Since humanoid robots are currently relatively rudimentary, yet there is incentive to invest more in their development, now is a good time to think carefully about how this technology may impact us.
Artefacts of Change: The Disruptive Nature of Humanoid Robots Beyond Classificatory Concerns. Science and Engineering Ethics 31(2): 9. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11953219/pdf/
Pub Date: 2025-03-18 | DOI: 10.1007/s11948-025-00537-0
Tomasz Żuradzki, Piotr Bystranowski, Vilius Dranseika
Correction: Discussions on Human Enhancement Meet Science: A Quantitative Analysis. Science and Engineering Ethics 31(2): 8. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11919922/pdf/
Pub Date: 2025-02-10 | DOI: 10.1007/s11948-025-00530-7
Stefan Heuser, Jochen Steil, Sabine Salloch
The search for ethical guidance in the development of artificial intelligence (AI) systems, especially in healthcare and decision support, remains a crucial effort. So far, principles usually serve as the main reference points for achieving ethically correct implementations. Based on a review of classical criticisms of principle-based ethics, and taking into account the severity and potentially life-changing relevance of decisions assisted by AI-driven systems, we argue for strengthening a complementary perspective that focuses on the life-world as ensembles of practices which shape people's lives. This perspective centers on the notion of ethical judgment sensitive to life forms, arguing that principles alone do not guarantee ethicality in a moral world that is a joint construction of reality rather than a matter of mere control. We conclude that it is essential to support and supplement the implementation of moral principles in the development of AI systems for decision-making in healthcare by recognizing the normative relevance of life forms and practices in ethical judgment.
AI Ethics beyond Principles: Strengthening the Life-world Perspective. Science and Engineering Ethics 31(1): 7. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11811459/pdf/
Pub Date: 2025-02-05 | DOI: 10.1007/s11948-025-00531-6
Tomasz Żuradzki, Piotr Bystranowski, Vilius Dranseika
The analysis of citation flow from a collection of scholarly articles might provide valuable insights into their thematic focus and the genealogy of their main concepts. In this study, we employ a topic model to delineate a subcorpus of 1,360 papers representative of bioethical discussions on enhancing human life. We subsequently conduct an analysis of almost 11,000 references cited in that subcorpus to examine quantitatively, from a bird's-eye view, the degree of openness of this part of scholarship to the specialized knowledge produced in biosciences. Although almost half of the analyzed references point to journals classified as Natural Science and Engineering (NSE), we do not find strong evidence of the intellectual influence of recent discoveries in biosciences on discussions on human enhancement. We conclude that a large part of the discourse surrounding human enhancement is inflected with "science-fictional habits of mind." Our findings point to the need for a more science-informed approach in discussions on enhancing human life.
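The citation-flow tally the study describes can be sketched in a few lines: given a list of cited references, each tagged with its venue's field classification, compute the share pointing to Natural Science and Engineering (NSE) journals. The data, journal names, and field labels below are invented for illustration; the paper's own corpus holds roughly 11,000 references drawn from a topic-modeled subcorpus of 1,360 papers.

```python
# Minimal sketch of the kind of reference-classification tally described in
# the abstract. Journals, labels ("NSE" vs. "SSH"), and counts here are
# illustrative assumptions, not the study's actual data.

from collections import Counter

references = [
    {"journal": "Nature Genetics", "field": "NSE"},
    {"journal": "Bioethics", "field": "SSH"},
    {"journal": "Cell", "field": "NSE"},
    {"journal": "Journal of Medical Ethics", "field": "SSH"},
]

# Tally references by field classification, then compute the NSE share.
counts = Counter(ref["field"] for ref in references)
nse_share = counts["NSE"] / len(references)
print(f"NSE share: {nse_share:.0%}")  # NSE share: 50%
```

A share near one half, as in this toy example, would match the abstract's observation that almost half of the analyzed references point to NSE journals, while the paper's further point is that such raw citation counts need not indicate genuine intellectual influence.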
Discussions on Human Enhancement Meet Science: A Quantitative Analysis. Science and Engineering Ethics 31(1): 6. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11799069/pdf/
Pub Date: 2025-01-24 | DOI: 10.1007/s11948-025-00528-1
Dario Cecchini, Veljko Dubljević
The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency, is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: driving style, traffic rules compliance, and risk distribution. We then suggest distinguishable ethical settings for each traffic component.
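The three-component taxonomy described above can be sketched as a small data structure: one setting per component, mapping roughly onto the Agent (driving style), Deed (rule compliance), and Consequences (risk distribution) dimensions of the ADC model. The option names below are illustrative assumptions, not the paper's actual proposed settings.

```python
# Hedged sketch of a three-component ADS ethical-settings record, following
# the Agent-Deed-Consequences structure named in the abstract. The allowed
# option values are invented for illustration.

from dataclasses import dataclass

ALLOWED = {
    "driving_style": {"defensive", "balanced", "assertive"},          # Agent
    "rule_compliance": {"strict", "context-sensitive"},               # Deed
    "risk_distribution": {"egalitarian", "occupant-prioritising"},    # Consequences
}

@dataclass(frozen=True)
class ADSEthicalSettings:
    driving_style: str
    rule_compliance: str
    risk_distribution: str

    def __post_init__(self):
        # Reject any setting outside the taxonomy's allowed options.
        for component, value in vars(self).items():
            if value not in ALLOWED[component]:
                raise ValueError(f"{component}: unknown option {value!r}")

settings = ADSEthicalSettings("defensive", "strict", "egalitarian")
```

Keeping the three components as separate fields reflects the paper's point that the ethical considerations are distinguishable and can be configured, and checked for consistency, independently.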
Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems. Science and Engineering Ethics 31(1): 5. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11761772/pdf/
Pub Date: 2025-01-23 | DOI: 10.1007/s11948-025-00529-0
Mark Coeckelbergh
While there are many public concerns about the impact of AI on truth and knowledge, especially when it comes to the widespread use of LLMs, there is not much systematic philosophical analysis of these problems and their political implications. This paper aims to assist this effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest. It offers arguments for why these problems are not only epistemic issues but also raise problems for democracy, since they undermine its epistemic basis, especially if we assume theories of democracy that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.
LLMs, Truth, and Democracy: An Overview of Risks. Science and Engineering Ethics 31(1): 4. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759458/pdf/