AI practical wisdom and compassion
Pub Date: 2025-12-05 | DOI: 10.1007/s43681-025-00877-4
Mark Graves
Practical wisdom (phronesis) is the context-sensitive capacity to skillfully achieve morally good outcomes in complex situations. Developing artificial practical wisdom is a more ethically robust and achievable goal for AI development than artificial general intelligence (AGI). While identifying what is morally good in ethically complex situations remains challenging, grounding artificial practical wisdom explicitly in compassion reduces the ethical risks associated with AI-induced suffering more effectively than conventional alignment strategies such as rule-based guardrails or predefined reward systems. As a theoretical foundation for the initial development of artificial practical wisdom, this virtue ethics approach integrates Aristotelian practical wisdom with cross-cultural perspectives on suffering and compassion drawn from utilitarianism, the Capability Approach, Buddhism, and contemporary moral psychology. Operationalizing compassionate AI involves four components: recognizing suffering, empathetic engagement, context-sensitive moral decision-making, and motivational responses. Compassionate AI not only serves as a foundation for broader practical-wisdom development but also offers immediate practical benefits, particularly in healthcare, by measurably improving patient outcomes, enhancing well-being, and reducing caregiver burdens.
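To make the four operational components concrete, here is a minimal sketch of how they might compose into a single pipeline. Everything here is a hypothetical illustration, not the paper's implementation: the DISTRESS_CUES lexicon, the thresholds, and the function names are assumptions, and a real system would replace the keyword lookup with a trained affect model.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Response(Enum):
    """Possible motivational responses, from least to most active."""
    ACKNOWLEDGE = auto()
    COMFORT = auto()
    ESCALATE_TO_HUMAN = auto()


@dataclass
class Assessment:
    suffering_score: float   # 0.0 (none) .. 1.0 (acute)
    empathic_summary: str    # reflection of the person's state
    response: Response


# Hypothetical lexicon; stands in for a trained suffering-recognition model.
DISTRESS_CUES = {"pain": 0.6, "hopeless": 0.9, "alone": 0.5, "scared": 0.7}


def recognize_suffering(utterance: str) -> float:
    """Component 1: estimate suffering from distress cues in the text."""
    words = utterance.lower().split()
    return max((DISTRESS_CUES.get(w, 0.0) for w in words), default=0.0)


def empathic_engagement(utterance: str, score: float) -> str:
    """Component 2: reflect the person's state back before acting."""
    if score >= 0.5:
        return f"It sounds like you are going through something hard: '{utterance}'"
    return "Noted; no acute distress detected."


def decide(score: float, clinical_context: bool) -> Response:
    """Component 3: context-sensitive decision. The same score warrants
    a stronger response in a clinical context than in casual chat."""
    threshold = 0.5 if clinical_context else 0.8
    if score >= threshold:
        return Response.ESCALATE_TO_HUMAN
    return Response.COMFORT if score > 0.0 else Response.ACKNOWLEDGE


def compassionate_pipeline(utterance: str, clinical_context: bool = False) -> Assessment:
    """Components 1-4 composed: recognize, engage, decide, respond."""
    score = recognize_suffering(utterance)
    summary = empathic_engagement(utterance, score)
    return Assessment(score, summary, decide(score, clinical_context))


if __name__ == "__main__":
    print(compassionate_pipeline("I feel hopeless and alone", clinical_context=True))
```

The clinical_context flag is the point of the sketch: it shows the context sensitivity the abstract emphasizes, where the same suffering score triggers escalation to a human in a clinical setting but only comfort in casual conversation.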
{"title":"AI practical wisdom and compassion","authors":"Mark Graves","doi":"10.1007/s43681-025-00877-4","DOIUrl":"10.1007/s43681-025-00877-4","url":null,"abstract":"<div><p>Practical wisdom (<i>phronesis</i>) is the context-sensitive capacity to skillfully achieve morally good outcomes in complex situations. Developing artificial practical wisdom offers a more ethically robust and achievable goal for AI development than artificial general intelligence (AGI). While identifying what is morally good in ethically complex situations remains challenging, grounding artificial practical wisdom explicitly in compassion effectively reduces ethical risks associated with AI-induced suffering, surpassing conventional alignment strategies like rule-based guardrails or predefined reward systems. As a theoretical foundation for initial development of artificial practical wisdom, this virtue ethics approach integrates Aristotelian practical wisdom with cross-cultural perspectives on suffering and compassion from utilitarianism, the Capability Approach, Buddhism, and contemporary moral psychology. Operationalizing compassionate AI involves recognizing suffering, empathetic engagement, context-sensitive moral decision making, and motivational responses. Compassionate AI not only serves as a foundation for broader practical wisdom development but also demonstrates immediate practical benefits, particularly in healthcare, by measurably improving patient outcomes, enhancing well-being, and reducing caregiver burdens.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s43681-025-00877-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675296","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
What does an oral lesion look like in the global south? Rethinking AI, equity, and data justice in oral cancer diagnosis
Pub Date: 2025-12-05 | DOI: 10.1007/s43681-025-00875-6
Vidith Phillips, Paravreet Woodwal
Oral cancer remains a major yet underdiagnosed burden in low- and middle-income countries (LMICs), where high-risk behaviors, late-stage presentation, and limited diagnostic infrastructure contribute to poor outcomes. Artificial intelligence (AI), particularly vision-language foundation models such as CLIP and SAM, has emerged as a potential tool for zero-shot oral lesion detection, offering scalable diagnostic support without requiring large annotated datasets. However, these models are predominantly trained on Western-centric, light-skinned image corpora, raising concerns about fairness and generalizability in global health contexts. This review critically examines the diagnostic capabilities, limitations, and ethical considerations of using foundation models for oral lesion detection in LMICs, emphasizing their implications for equity and data justice. We analyzed interdisciplinary literature spanning medical imaging, oral oncology, digital health ethics, and AI fairness, examining empirical studies of AI performance across photographic, histological, and cytological data alongside implementation case studies and policy frameworks. Key themes included model generalization, domain adaptation, and data governance. While foundation models achieve competitive zero-shot performance on select lesion classification and segmentation tasks, their sensitivity declines on images of darker-skinned individuals and on field-acquired images from LMICs. Bias mitigation strategies such as prompt engineering, few-shot fine-tuning, and federated learning show promise but remain underused. Current development pipelines often lack transparency, community participation, and subgroup validation, producing epistemic and diagnostic inequities. Foundation models thus hold significant potential for democratizing oral cancer screening in resource-constrained settings, but realizing that potential requires locally grounded adaptation, inclusive dataset design, and participatory governance; without these, zero-shot AI may reinforce rather than resolve existing disparities in oral healthcare. These findings underscore the need for collaboration among policymakers, clinicians, and AI developers: integrating fairness metrics into regulatory review and encouraging locally validated deployment protocols can help translate these technologies into safe, equitable screening pathways in low-resource settings.
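As an illustration of the zero-shot setup the review discusses, the sketch below classifies an oral-cavity photograph with CLIP and then computes per-subgroup sensitivity, the metric the review reports degrading on darker-skinned, field-acquired images. The checkpoint, the text prompts, and the skin-tone grouping are assumptions for illustration; the review itself provides no code.

```python
# Zero-shot oral-lesion triage with CLIP, plus the kind of subgroup
# validation the review calls for. Prompts and grouping are illustrative.
from collections import defaultdict

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompt pair; prompt engineering is one of the bias
# mitigation strategies the review mentions.
PROMPTS = [
    "a photograph of a healthy oral cavity",
    "a photograph of a suspicious oral lesion",
]


def predict_lesion(image: Image.Image) -> int:
    """Zero-shot: returns 1 if the 'suspicious lesion' prompt scores higher."""
    inputs = processor(text=PROMPTS, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        probs = model(**inputs).logits_per_image.softmax(dim=-1)
    return int(probs.argmax(dim=-1).item())


def sensitivity_by_subgroup(samples):
    """samples: iterable of (PIL image, true_label, skin_tone_group).
    Reports per-group sensitivity (recall on positives), exposing the
    performance gaps that aggregate accuracy alone would hide."""
    tp, pos = defaultdict(int), defaultdict(int)
    for image, label, group in samples:
        if label == 1:
            pos[group] += 1
            tp[group] += predict_lesion(image)
    return {g: tp[g] / pos[g] for g in pos if pos[g]}
```

Reporting sensitivity_by_subgroup alongside overall accuracy is one concrete way to operationalize the subgroup validation and fairness metrics the review argues should be part of regulatory review.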
{"title":"What does an oral lesion look like in the global south? Rethinking AI, equity, and data justice in oral cancer diagnosis","authors":"Vidith Phillips, Paravreet Woodwal","doi":"10.1007/s43681-025-00875-6","DOIUrl":"10.1007/s43681-025-00875-6","url":null,"abstract":"<div><p>Oral cancer remains a major yet underdiagnosed burden in low- and middle-income countries (LMICs), where high-risk behaviors, late-stage presentation, and limited diagnostic infrastructure contribute to poor outcomes. Artificial intelligence (AI), particularly vision-language foundation models such as CLIP and SAM, has emerged as a potential tool for zero-shot oral lesion detection, offering scalable diagnostic support without requiring large annotated datasets. However, these models are predominantly trained on Western-centric, light-skinned image corpora, raising concerns about fairness and generalizability in global health contexts. This review critically explores the diagnostic capabilities, limitations, and ethical considerations of using foundation models for oral lesion detection in LMICs, emphasizing their potential impact on equity and data justice. We analyzed interdisciplinary literature spanning medical imaging, oral oncology, digital health ethics, and AI fairness. Empirical studies evaluating AI performance across photographic, histological, and cytological data types were examined alongside implementation case studies and policy frameworks. Key themes included model generalization, domain adaptation, and data governance. While foundation models achieve competitive zero-shot performance in select lesion classification and segmentation tasks, their sensitivity declines when applied to darker-skinned individuals and field-acquired images from LMICs. Bias mitigation strategies such as prompt engineering, few-shot fine-tuning, and federated learning show promise but remain underutilized. Current development pipelines often lack transparency, community participation, and subgroup validation, leading to epistemic and diagnostic inequities. Foundation models offer significant potential for democratizing oral cancer screening, particularly in resource-constrained settings. However, realizing this promise requires locally grounded adaptation, inclusive dataset design, and participatory governance. Without these, zero-shot AI may reinforce rather than resolve existing disparities in oral healthcare. These findings emphasize the need for actionable collaboration among policymakers, clinicians, and AI developers. Integrating fairness metrics into regulatory review and encouraging locally validated deployment protocols can help translate these technologies into safe, equitable screening pathways in low-resource settings.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145675235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}