{"title":"On the ethical and moral dimensions of using artificial intelligence for evidence synthesis.","authors":"Soumyadeep Bhaumik","doi":"10.1371/journal.pgph.0004348","DOIUrl":null,"url":null,"abstract":"<p><p>Artificial intelligence (AI) is increasingly being used in the field of medicine and healthcare. However, there are no articles specifically examining ethical and moral dimensions of AI use for evidence synthesis. This article attempts to fills this gap. In doing so, I deploy in written form, what in Bengali philosophy and culture, is the Adda (আড্ডা) approach, a form of oral exchange, which involves deep but conversational style discussion. Adda developed as a form of intellectual resistance against the cultural hegemony of British Imperialism and entails asking provocative question to encourage critical discourse.The raison d'être for using AI is that it would enhance efficiency in the conduct of evidence synthesis, thus leading to greater evidence uptake. I question whether assuming so without any empirical evidence is ethical. I then examine the challenges posed by the lack of moral agency of AI; the issue of bias and discrimination being amplified through AI driven evidence synthesis; ethical and moral dimensions of epistemic (knowledge-related) uncertainty on AI; impact of knowledge systems (training of future scientists, and epistemic conformity), and the need for looking at ethical and moral dimensions beyond technical evaluation of AI models. I then discuss ethical and moral responsibilities of government, multi-laterals, research institutions and funders in regulating and having an oversight role in development, validation, and conduct of evidence synthesis. I argue that industry self-regulation for responsible use of AI is unlikely to address ethical and moral concerns, and that there is a need to develop legal frameworks, ethics codes, and of bringing such work within the ambit of institutional ethics committees to enable appreciation of the complexities around use of AI for evidence synthesis, mitigate against moral hazards, and ensure that evidence synthesis leads to improvement of health of individuals, nations and societies.</p>","PeriodicalId":74466,"journal":{"name":"PLOS global public health","volume":"5 3","pages":"e0004348"},"PeriodicalIF":0.0000,"publicationDate":"2025-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PLOS global public health","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1371/journal.pgph.0004348","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2025/1/1 0:00:00","PubModel":"eCollection","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial intelligence (AI) is increasingly being used in medicine and healthcare. However, no articles have specifically examined the ethical and moral dimensions of using AI for evidence synthesis. This article attempts to fill this gap. In doing so, I deploy in written form the Adda (আড্ডা) approach from Bengali philosophy and culture: a form of oral exchange involving deep but conversational discussion. Adda developed as a form of intellectual resistance against the cultural hegemony of British imperialism and entails asking provocative questions to encourage critical discourse. The raison d'être for using AI is that it would enhance efficiency in the conduct of evidence synthesis, thus leading to greater evidence uptake. I question whether assuming so without any empirical evidence is ethical. I then examine the challenges posed by AI's lack of moral agency; the risk of bias and discrimination being amplified through AI-driven evidence synthesis; the ethical and moral dimensions of epistemic (knowledge-related) uncertainty in AI; the impact on knowledge systems (the training of future scientists, and epistemic conformity); and the need to look at ethical and moral dimensions beyond the technical evaluation of AI models. I then discuss the ethical and moral responsibilities of governments, multilaterals, research institutions, and funders in regulating and overseeing the development, validation, and conduct of AI-driven evidence synthesis. I argue that industry self-regulation for responsible use of AI is unlikely to address ethical and moral concerns, and that there is a need to develop legal frameworks and ethics codes, and to bring such work within the ambit of institutional ethics committees, so as to enable appreciation of the complexities around the use of AI for evidence synthesis, mitigate moral hazards, and ensure that evidence synthesis leads to improved health for individuals, nations, and societies.