Title: Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice
Authors: Thomas Grote, Philipp Berens
Journal: Journal of Medicine and Philosophy (JCR: Q3, Ethics; Impact Factor: 1.3)
DOI: 10.1093/jmp/jhac034
Publication Date: 2023-02-17
Publication Type: Journal Article
Open Access: No
Citations: 7
Abstract
In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. Before machine learning algorithms find their way into clinical practice, however, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty that arise for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. In doing so, we examine many of the limitations of current machine learning algorithms (deep learning in particular) and highlight their relevance for medical diagnostics. Among the problems we inspect are the theoretical foundations of deep learning (which are not yet adequately understood), the opacity of algorithmic decisions, and the vulnerabilities of machine learning models, as well as concerns regarding the quality of the medical data used to train them. Building on this, we discuss desiderata for an uncertainty-amelioration strategy that ensures the integration of machine learning into clinical settings proves medically beneficial in a meaningful way.
About the Journal
This bimonthly publication explores the shared themes and concerns of philosophy and the medical sciences. Central issues in medical research and practice have important philosophical dimensions, for, in treating disease and promoting health, medicine involves presuppositions about human goals and values. Conversely, the concerns of philosophy often relate significantly to those of medicine, as philosophers seek to understand the nature of medical knowledge and the human condition in the modern world. In addition, recent developments in medical technology and treatment create moral problems that raise important philosophical questions. The Journal of Medicine and Philosophy aims to provide an ongoing forum for the discussion of such themes and issues.