Pub Date: 2025-04-01 | Epub Date: 2024-08-03 | DOI: 10.1007/s00424-024-03002-2
David L Hölscher, Roman D Bülow
Traditional histopathology, characterized by manual quantifications and assessments, faces challenges such as low throughput and inter-observer variability that hinder the introduction of precision medicine in pathology diagnostics and research. The advent of digital pathology enabled the emergence of computational pathology, a discipline that leverages computational methods, especially deep learning (DL) techniques, to analyze histopathology specimens. A growing body of research shows impressive performance of DL-based models in pathology across a multitude of tasks, such as mutation prediction, large-scale pathomics analyses, and prognosis prediction. New approaches integrate multimodal data sources and increasingly rely on multi-purpose foundation models. This review provides an introductory overview of advancements in computational pathology and discusses their implications for the future of histopathology in research and diagnostics.
Title: "Decoding pathology: the role of computational pathology in research and diagnostics." Pflugers Archiv : European journal of physiology, pp. 555-570. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958429/pdf/
Pub Date: 2025-04-01 | Epub Date: 2024-10-08 | DOI: 10.1007/s00424-024-03016-w
Moritz Seiler, Kerstin Ritter
Recently, deep generative modelling has become an increasingly powerful tool, with seminal work across a myriad of disciplines. This modelling approach is expected not only to help solve current problems in the medical field but also to enable personalised precision medicine and to revolutionise healthcare through applications such as digital twins of patients. Here, the core concepts of generative modelling and popular modelling approaches are first introduced, in order to consider their potential for generating synthetic data and for learning a representation of observed data. These potentials are then reviewed using current applications in neuroimaging for data synthesis and disease decomposition in Alzheimer's disease and multiple sclerosis. Finally, challenges for further research and applications are discussed, including computational and data requirements, model evaluation, and potential privacy risks.
Title: "Pioneering new paths: the role of generative modelling in neurological disease research." Pflugers Archiv : European journal of physiology, pp. 571-589. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958445/pdf/
Pub Date: 2025-04-01 | Epub Date: 2024-08-01 | DOI: 10.1007/s00424-024-02997-y
Jhonatan Contreras, Thomas Bocklitz
Explainable artificial intelligence (XAI) has gained significant attention in various domains, including natural and medical image analysis. However, its application in spectroscopy remains relatively unexplored. This systematic review aims to fill this gap by providing a comprehensive overview of the current landscape of XAI in spectroscopy and identifying potential benefits and challenges associated with its implementation. Following the PRISMA 2020 guideline, we conducted a systematic search across major journal databases, resulting in 259 initial search results. After removing duplicates and applying inclusion and exclusion criteria, 21 scientific studies were included in this review. Notably, most of the studies focused on using XAI methods for spectral data analysis, emphasizing the identification of significant spectral bands rather than specific intensity peaks. Among the most utilized AI techniques were SHapley Additive exPlanations (SHAP), masking methods inspired by Local Interpretable Model-agnostic Explanations (LIME), and Class Activation Mapping (CAM). These methods were favored due to their model-agnostic nature and ease of use, enabling interpretable explanations without modifying the original models. Future research should propose new methods and explore the adaptation of XAI methods employed in other domains to better suit the unique characteristics of spectroscopic data.
Title: "Explainable artificial intelligence for spectroscopy data: a review." Pflugers Archiv : European journal of physiology, pp. 603-615. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958459/pdf/
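The masking methods the spectroscopy review highlights can be illustrated with a toy occlusion experiment: zero out one spectral band at a time and measure how much a model's prediction changes. The synthetic spectrum and the stand-in model below are hypothetical, not taken from the paper; this is a minimal sketch of the masking idea, not any reviewed method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "spectrum": 100 wavenumber bins; the informative band is bins 40-49.
spectrum = rng.normal(0.0, 0.05, 100)
spectrum[40:50] += 1.0  # a peak the model relies on

def model(x):
    # Hypothetical trained-model stand-in: responds only to bins 40-49.
    return x[40:50].mean()

def occlusion_importance(x, predict, band=10):
    """Mask one band at a time and record the prediction change."""
    base = predict(x)
    scores = np.zeros(len(x) // band)
    for i in range(len(scores)):
        masked = x.copy()
        masked[i * band:(i + 1) * band] = 0.0  # occlude this band
        scores[i] = abs(base - predict(masked))
    return scores

scores = occlusion_importance(spectrum, model)
print(scores.argmax())  # band index 4, i.e. bins 40-49
```

The per-band scores play the role of an importance map over the spectrum: only the occluded band that the model actually uses shifts the prediction, which mirrors why such model-agnostic masking explanations need no access to model internals.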
Pub Date: 2025-04-01 | Epub Date: 2025-02-25 | DOI: 10.1007/s00424-025-03067-7
Bettina Finzel
Explainable artificial intelligence (XAI) is gaining importance in physiological research, where artificial intelligence is now used as an analytical and predictive tool for many medical research questions. The primary goal of XAI is to make AI models understandable for human decision-makers. This can be achieved in particular by providing inherently interpretable AI methods or by making opaque models and their outputs transparent using post hoc explanations. This review introduces XAI core topics and provides a selective overview of current XAI methods in physiology. It further illustrates solved challenges and discusses open ones in XAI research, using practical examples from the medical field. The article gives an outlook on two possible future prospects: (1) using XAI methods to provide trustworthy AI for integrative physiological research and (2) integrating physiological expertise about human explanation into XAI method development for useful and beneficial human-AI partnerships.
Title: "Current methods in explainable artificial intelligence and future prospects for integrative physiology." Pflugers Archiv : European journal of physiology, pp. 513-529. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958383/pdf/
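The distinction this review draws between inherently interpretable models and post hoc explanations can be sketched on hypothetical data: a linear model explains itself through its coefficients, while permutation importance explains an arbitrary predictor from the outside. All data and names below are illustrative assumptions, not content from the article.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical physiological data: two predictors, only the first matters.
X = rng.normal(size=(200, 2))
y = 3.0 * X[:, 0] + rng.normal(0.0, 0.1, 200)

# Inherently interpretable: a linear model's coefficients are its explanation.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef.round(1))  # first coefficient near 3.0, second near 0.0

def permutation_importance(X, y, predict, rng):
    """Post hoc explanation: treat the model as a black box and measure
    how much shuffling each input column degrades predictive error."""
    base = np.mean((predict(X) - y) ** 2)
    scores = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])  # destroy this feature's relationship to y
        scores.append(np.mean((predict(Xp) - y) ** 2) - base)
    return np.array(scores)

imp = permutation_importance(X, y, lambda X: X @ coef, rng)
print(imp.argmax())  # feature 0 dominates
```

The same permutation routine would work unchanged on an opaque model, which is the practical appeal of post hoc methods; the trade-off is that the explanation is about behavior, not mechanism.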
Pub Date: 2025-04-01 | Epub Date: 2025-03-11 | DOI: 10.1007/s00424-025-03071-x
Anika Westphal, Ralf Mrowka
This special issue presents a collection of reviews on the recent advancements and applications of artificial intelligence (AI) in medicine and physiology. The topics covered include digital histopathology, generative AI, explainable AI (XAI), and ethical considerations in AI development and implementation. The reviews highlight the potential of AI to transform medical diagnostics, personalized medicine, and clinical decision making, while also addressing challenges such as data quality, interpretability, and trustworthiness. The contributions demonstrate the growing importance of AI in physiological research and medicine, the need for multi-level ethics approaches in AI development, and the potential benefits of generative AI in medical applications. Overall, this special issue showcases some of the pioneering aspects of AI in medicine and physiology, covering technical, applicative, and ethical viewpoints, and underlines the remarkable impact of AI on these fields.
Title: "Special issue European Journal of Physiology: Artificial intelligence in the field of physiology and medicine." Pflugers Archiv : European journal of physiology, pp. 509-512. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958393/pdf/
Pub Date: 2025-04-01 | Epub Date: 2024-10-29 | DOI: 10.1007/s00424-024-03033-9
Florian Boge, Axel Mosig
With the rapid advances of deep neural networks over the past decade, artificial intelligence (AI) systems are now commonplace in many applications in biomedicine. These systems often achieve high predictive accuracy in clinical studies, and increasingly in clinical practice. Yet, despite their commonly high predictive accuracy, the trustworthiness of AI systems needs to be questioned when it comes to decision-making that affects the well-being of patients or the fairness towards patients or other stakeholders affected by AI-based decisions. To address this, the field of explainable artificial intelligence, or XAI for short, has emerged, seeking to provide means by which AI-based decisions can be explained to experts, users, or other stakeholders. While it is commonly claimed that explanations of artificial intelligence (AI) establish the trustworthiness of AI-based decisions, it remains unclear what traits of explanations cause them to foster trustworthiness. Building on historical cases of scientific explanation in medicine, we here advance our perspective that, in order to foster trustworthiness, explanations in biomedical AI should meet the criteria of being scientific explanations. To further underpin our approach, we discuss its relation to the concepts of causality and randomized intervention. In our perspective, we combine aspects from the three disciplines of biomedicine, machine learning, and philosophy. From this interdisciplinary angle, we shed light on how the explanation and trustworthiness of artificial intelligence relate to the concepts of causality and robustness. To connect our perspective with AI research practice, we review recent cases of AI-based studies in pathology and, finally, provide guidelines on how to connect AI in biomedicine with scientific explanation.
Title: "Causality and scientific explanation of artificial intelligence systems in biomedicine." Pflugers Archiv : European journal of physiology, pp. 543-554. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958387/pdf/
Pub Date: 2025-04-01 | Epub Date: 2024-10-17 | DOI: 10.1007/s00424-024-03024-w
Chaithra Umesh, Manjunath Mahendra, Saptarshi Bej, Olaf Wolkenhauer, Markus Wolfien
Recent advancements in generative approaches in AI have opened up the prospect of synthetic tabular clinical data generation. From filling in missing values in real-world data, these approaches have now advanced to creating complex multi-tables. This review explores the development of techniques capable of synthesizing patient data and modeling multiple tables. We highlight the challenges and opportunities of these methods for analyzing patient data in physiology. Additionally, we discuss the challenges and potential of these approaches in improving clinical research, personalized medicine, and healthcare policy. The integration of these generative models into physiological settings may represent both a theoretical advancement and a practical tool that has the potential to improve mechanistic understanding and patient care. By providing a reliable source of synthetic data, these models can also help mitigate privacy concerns and facilitate large-scale data sharing.
Title: "Challenges and applications in generative AI for clinical tabular data in physiology." Pflugers Archiv : European journal of physiology, pp. 531-542. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958401/pdf/
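The core idea behind synthetic tabular data, as surveyed in this review, can be sketched in a deliberately minimal form: learn per-column statistics from a real table and sample new rows from them. Real generative models (GANs, VAEs, diffusion) also capture inter-column dependencies, which this toy version ignores; the clinical columns below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "real" clinical table: columns = age, systolic BP, heart rate.
real = np.column_stack([
    rng.normal(60, 10, 500),   # age (years)
    rng.normal(130, 15, 500),  # systolic blood pressure (mmHg)
    rng.normal(75, 12, 500),   # heart rate (bpm)
])

def fit_and_sample(table, n_rows, rng):
    """Fit an independent Gaussian per column and sample synthetic rows."""
    mu = table.mean(axis=0)
    sigma = table.std(axis=0)
    return rng.normal(mu, sigma, size=(n_rows, table.shape[1]))

synthetic = fit_and_sample(real, 1000, rng)
# Column means of the synthetic table track those of the real table,
# while no synthetic row corresponds to an actual patient record.
print(np.abs(synthetic.mean(axis=0) - real.mean(axis=0)))
```

Even this crude generator shows the privacy appeal the abstract mentions: downstream analyses see realistic marginal distributions without any individual patient's row being shared.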
Pub Date: 2025-04-01 | DOI: 10.1007/s00424-025-03076-6
Olivia Chevalier, Gérard Dubey, Amine Benkabbou, Mohammed Anass Majbar, Amine Souadka
The rapid integration of artificial intelligence (AI) into surgical practice necessitates a comprehensive evaluation of its applications, challenges, and physiological impact. This systematic review synthesizes current AI applications in surgery, with a particular focus on machine learning (ML) and its role in optimizing preoperative planning, intraoperative decision-making, and postoperative patient management. Using PRISMA guidelines and PICO criteria, we analyzed key studies addressing AI's contributions to surgical precision, outcome prediction, and real-time physiological monitoring. While AI has demonstrated significant promise, from enhancing diagnostics to improving intraoperative safety, many surgeons remain skeptical due to concerns over algorithmic unpredictability, surgeon autonomy, and ethical transparency. This review explores AI's physiological integration into surgery, discussing its role in real-time hemodynamic assessments, AI-guided tissue characterization, and intraoperative physiological modeling. Ethical concerns, including algorithmic opacity and liability in high-stakes scenarios, are critically examined alongside AI's potential to augment surgical expertise. We conclude that longitudinal validation, improved AI explainability, and adaptive regulatory frameworks are essential to ensure safe, effective, and ethically sound integration of AI into surgical decision-making. Future research should focus on bridging AI-driven analytics with real-time physiological feedback to refine precision surgery and patient safety strategies.
Title: "Comprehensive overview of artificial intelligence in surgery: a systematic review and perspectives." Olivia Chevalier, Gérard Dubey, Amine Benkabbou, Mohammed Anass Majbar, Amine Souadka. DOI: 10.1007/s00424-025-03076-6. Pflugers Archiv : European journal of physiology, pp. 617-626.
Pub Date: 2025-04-01 | Epub Date: 2024-07-06 | DOI: 10.1007/s00424-024-02984-3
Tijs Vandemeulebroucke
Artificial intelligence systems (ai-systems) (e.g. machine learning, generative artificial intelligence), in healthcare and medicine, have been received with hopes of better care quality, more efficiency, lower care costs, etc. Simultaneously, these systems have been met with reservations regarding their impacts on stakeholders' privacy, on changing power dynamics, on systemic biases, etc. Fortunately, healthcare and medicine have been guided by a multitude of ethical principles, frameworks, or approaches, which also guide the use of ai-systems in healthcare and medicine, in one form or another. Nevertheless, in this article, I argue that most of these approaches are inspired by a local isolationist view on ai-systems, here exemplified by the principlist approach. Despite positive contributions to laying out the ethical landscape of ai-systems in healthcare and medicine, such ethics approaches are too focused on a specific local healthcare and medical setting, be it a particular care relationship, a particular care organisation, or a particular society or region. By doing so, they lose sight of the global impacts ai-systems have, especially environmental impacts and related social impacts, such as increased health risks. To meet this gap, this article presents a global approach to the ethics of ai-systems in healthcare and medicine which consists of five levels of ethical impacts and analysis: individual-relational, organisational, societal, global, and historical. As such, this global approach incorporates the local isolationist view by integrating it in a wider landscape of ethical consideration, so as to ensure ai-systems meet the needs of everyone everywhere.
Title: "The ethics of artificial intelligence systems in healthcare and medicine: from a local to a global perspective, and back." Pflugers Archiv : European journal of physiology, pp. 591-601. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11958494/pdf/
Pub Date: 2025-03-31 | DOI: 10.1007/s00424-025-03081-9
Eric Feraille, Ali Sassi, Monika Gjorgjieva
Title: "The enigma of ENaC activation by proteolytic cleavage: a never ending quest?" Pflugers Archiv : European journal of physiology.