Hermeneutics as impediment to AI in medicine
Kyle Karches
Theoretical Medicine and Bioethics, pp. 31-49
Published: 2025-02-01 (Epub 2025-02-26)
DOI: 10.1007/s11017-025-09701-w
Citations: 0
Abstract
Predictions that artificial intelligence (AI) will become capable of replacing human beings in domains such as medicine rest implicitly on a theory of mind according to which knowledge can be captured propositionally without loss of meaning. Generative AIs, for example, draw upon billions of written sources to produce the text that, according to their probability heuristics, most likely responds to a user's query. Such programs can replace human beings in practices such as medicine only if human language functions similarly and, like AI, does not rely on meta-textual resources to convey meaning. In this essay, I draw on the hermeneutic philosophy of Hans-Georg Gadamer to challenge this conception of human knowledge. I follow Gadamer in arguing that human understanding of texts is an interpretive process relying on previously received judgments that derive from the human person's situatedness in history, and these judgments differ from the rules guiding generative AI. Human understanding is also dialogical, as it depends on the 'fusion of horizons' with another person to the extent that one's own prejudices may come under question, something AI cannot achieve. Furthermore, artificial intelligence lacks a human body, which conditions human perception and understanding. I contend that these non-textual sources of meaning, which must remain obscure to AI, are important in moral practices such as medicine, particularly in history-taking, physical examination, diagnostic reasoning, and negotiating a treatment plan. Although AI can undoubtedly aid physicians in certain ways, it faces inherent limitations in replicating these core tasks of the physician-patient relationship.