Pub Date : 2025-07-12eCollection Date: 2025-09-01DOI: 10.1007/s13534-025-00490-8
Huashan Chen, Yongxu Liu, Chen Liu, Qiuli Wang, Rongping Wang
Generated lung nodule data play an indispensable role in the development of intelligent assisted diagnosis of lung cancer. Existing generative models, primarily based on Generative Adversarial Networks (GANs) and Denoising Diffusion Probabilistic Models (DDPMs), have demonstrated effectiveness but come with certain limitations: GANs often produce artifacts and unnatural boundaries and, owing to dataset limitations, struggle with irregular nodules, while DDPMs can generate a diverse range of nodules but their inherent randomness and lack of control limit their applicability in tasks such as segmentation. To synthesize lung nodules with controllable shapes and details, we propose a unified model that combines a GAN and a DDPM. Guided by multi-confidence masks, our method can synthesize customized lung nodule images by adding spikes or dents to the input mask, allowing control over shape, size, and other medical image features. The model consists of two parts: (1) a Rough Lung Nodule Generator, based on a GAN, which synthesizes rough lung nodules of specified sizes and shapes from a multi-confidence mask, and (2) a Lung Nodule Optimizer, based on a DDPM, which refines the rough results from the first part to produce more authentic boundaries. We validate our method on the LIDC-IDRI dataset. Experimental results demonstrate that our unified model achieves the best FID score and that the synthetic lung nodules it generates can serve as a valuable supplement to training datasets for segmentation tasks. In summary, our unified model effectively combines a GAN and a DDPM to generate high-quality, customized lung nodule images, addressing the limitations of existing models by leveraging the strengths of both techniques. Our code is available at https://github.com/UtaUtaUtaha/CMCMGN.
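The coarse-to-fine control flow described above (a GAN proposing a rough nodule from a mask, a diffusion model refining it) can be sketched schematically. The stand-in "generator" and "refiner" below are toy placeholders for the paper's trained networks; only the two-stage structure reflects the abstract.

```python
import numpy as np

def rough_generator(mask):
    """Stage 1 stand-in: map a multi-confidence mask (values in [0, 1],
    higher = more confidently nodule) to a coarse intensity image.
    A real implementation would be a trained GAN generator."""
    padded = np.pad(mask.astype(float), 1, mode="edge")
    out = np.zeros_like(mask, dtype=float)
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            out[i, j] = padded[i:i + 3, j:j + 3].mean()  # 3x3 box blur
    return out

def ddpm_refiner(rough, steps=4):
    """Stage 2 stand-in: start from a noisy version of the coarse image
    and iteratively pull it back, mimicking a reverse-diffusion loop."""
    rng = np.random.default_rng(0)
    x = rough + 0.1 * rng.standard_normal(rough.shape)
    for _ in range(steps):
        x = 0.5 * x + 0.5 * rough  # a real DDPM predicts each denoising step
    return x

def synthesize(mask):
    """Mask -> rough nodule -> refined nodule."""
    return ddpm_refiner(rough_generator(mask))
```

Editing the input mask (adding spikes or dents) before calling `synthesize` is what gives the pipeline its shape control.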
"Lung nodule synthesis guided by customized multi-confidence masks." Biomedical Engineering Letters 15(5): 917-927. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411368/pdf/
Pub Date : 2025-06-27eCollection Date: 2025-09-01DOI: 10.1007/s13534-025-00488-2
Vesper Evereux, Sunjeet Saha, Chandrabali Bhattacharya, Seungman Park
Alginate is known to readily aggregate and form a physical gel when exposed to cations, making it a promising material for bioprinting applications. Alginate and its derivatives exhibit viscoelastic behavior owing to their combination of solid and fluid components, necessitating characterization of both elastic and viscous properties. However, a comprehensive investigation into the time-dependent viscoelastic properties of alginate hydrogels specifically optimized for bioprinting is still lacking. In this study, we investigated and quantified the time-dependent viscoelastic properties (elastic modulus, shear modulus, and viscosity) of calcium chloride (CaCl2)-crosslinked alginate hydrogels across five alginate concentrations, two environmental conditions, and three indentation depths using the Prony series. Moreover, we evaluated the printability of alginate solutions at different concentrations through bioprinted-filament collapse and fusion tests to assess their potential for bioprinting applications. The results demonstrated significant effects of alginate concentration, indentation depth, and environmental conditions on the viscoelastic behavior of alginate-based hydrogels. Furthermore, we identified 5% alginate as the optimal concentration for bioprinting. This study establishes a foundational workflow for characterizing various biomaterials, enabling assessment of their suitability for bioprinting and other tissue engineering applications.
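The Prony series mentioned above represents a relaxation modulus as a constant plus a sum of decaying exponentials. A minimal sketch, with placeholder parameter values rather than the paper's fitted ones:

```python
import numpy as np

def prony_relaxation(t, g_inf, g_terms):
    """Relaxation modulus G(t) = G_inf + sum_i G_i * exp(-t / tau_i),
    where g_terms is a list of (G_i, tau_i) pairs."""
    t = np.asarray(t, dtype=float)
    g = np.full_like(t, g_inf)
    for g_i, tau_i in g_terms:
        g = g + g_i * np.exp(-t / tau_i)
    return g

# Hypothetical two-term series for a crosslinked alginate gel;
# the values are placeholders, not the paper's fitted parameters.
terms = [(2.0, 0.5), (1.0, 10.0)]
g_instant = prony_relaxation(0.0, 1.5, terms)   # G(0) = G_inf + sum of G_i
g_longterm = prony_relaxation(1e6, 1.5, terms)  # G(t) -> G_inf as t grows
```

In practice the (G_i, tau_i) pairs are obtained by fitting this form to measured stress-relaxation data, which is how time-dependent moduli were quantified in the study.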
"Characterization of time-dependent viscoelastic behaviors of alginate-calcium chloride hydrogels for bioprinting applications." Biomedical Engineering Letters 15(5): 891-901. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411344/pdf/
Pub Date : 2025-06-25eCollection Date: 2025-07-01DOI: 10.1007/s13534-025-00486-4
Dogeun Park, Kwangsub So, Sunil Kumar Prabhakar, Chulho Kim, Jae Jun Lee, Jong-Hee Sohn, Jong-Ho Kim, Sang-Hwa Lee, Dong-Ok Won
Early warning scores (EWS) have become an essential component of patient safety strategies in healthcare environments worldwide. These systems aim to identify patients at risk of clinical deterioration by evaluating vital signs and other physiological parameters, enabling timely intervention by rapid response teams. Despite proven benefits and widespread adoption, conventional EWS have limitations that may affect their ability to detect and respond to patient deterioration effectively. There is growing interest in integrating continuous multimodal monitoring technologies and advanced analytics, particularly artificial intelligence (AI) and machine learning (ML)-based approaches, to address these limitations and enhance EWS performance. This review provides a comprehensive overview of the current state and potential future directions of AI-based bio-signal monitoring in early warning systems. It examines emerging trends and techniques in AI and ML for bio-signal analysis, exploring the possibilities and potential applications of bio-signals such as electroencephalography, electrocardiography, and electromyography in early warning systems. However, significant challenges exist in developing and implementing AI-based bio-signal monitoring systems for early warning, including data acquisition strategies, data quality and standardization, interpretability and explainability, validation and regulatory approval, integration into clinical workflows, and ethical and legal considerations. Addressing these challenges requires a multidisciplinary approach involving close collaboration between healthcare professionals, data scientists, engineers, and other stakeholders. Future research should focus on developing advanced data fusion techniques, personalized adaptive models, real-time and continuous monitoring, explainable and reliable AI, and regulatory and ethical frameworks.
By addressing these challenges and opportunities, the integration of AI and bio-signals into early warning systems can enhance patient monitoring and clinical decision support, ultimately improving healthcare quality and safety. In conclusion, integrating AI and bio-signals into the early warning system represents a promising approach to improve patient care outcomes and support clinical decision-making. As research in this field continues to evolve, it is crucial to develop safe, effective, and ethically responsible solutions that can be seamlessly integrated into clinical practice, harnessing the power of innovative technology to enhance patient care and improve individual and population health and well-being.
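For readers unfamiliar with how an EWS aggregates vital signs, a toy scorer is sketched below. The threshold bands are invented for illustration and deliberately do not reproduce any validated system such as NEWS2.

```python
def simple_ews(resp_rate, spo2, heart_rate, temp_c):
    """Toy aggregate early-warning score from four vital signs.
    Each sign contributes 0 (normal band), 1 (mildly deranged), or
    3 (markedly deranged); the bands are illustrative only."""
    score = 0
    score += 0 if 12 <= resp_rate <= 20 else (1 if 9 <= resp_rate <= 24 else 3)
    score += 0 if spo2 >= 96 else (1 if spo2 >= 94 else 3)
    score += 0 if 51 <= heart_rate <= 90 else (1 if 41 <= heart_rate <= 110 else 3)
    score += 0 if 36.1 <= temp_c <= 38.0 else 1
    return score
```

A patient with normal vitals scores 0; each derangement adds weight, and a threshold on the total is what triggers a rapid-response review. AI-based approaches replace these fixed bands with models learned from continuous bio-signals.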
"Early warning score and feasible complementary approach using artificial intelligence-based bio-signal monitoring system: a review." Biomedical Engineering Letters 15(4): 717-734. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12226448/pdf/
Pub Date : 2025-06-21eCollection Date: 2025-09-01DOI: 10.1007/s13534-025-00489-1
Ki Chang Nam, Bong Joo Park
Purpose: This study investigates the antibacterial and anticancer activity of previously reported iron oxide (Fe3O4)-based nanoparticles (NPs) conjugated with chlorin e6 and folic acid (FCF) in photodynamic therapy (PDT) using a human bladder cancer (BC) (T-24) cell line and three bacterial strains.
Method: To investigate the potential applicability of the synthesized NPs as therapeutic agents for image-based photodynamic BC therapy, their photodynamic anticancer activity was analyzed and the mechanisms of cell death in T-24 cells treated with these NPs were assessed qualitatively and quantitatively through atomic absorption spectroscopy, fluorescence imaging, and transmission electron microscopy.
Results: The effective localization of FCF NPs in T-24 cells was confirmed, validating their excellent cellular fluorescence and magnetic resonance imaging capabilities. Moreover, the FCF NPs exhibited excellent anticancer activity via distinct mechanisms of cell death; they induced apoptotic cancer cell death by strongly upregulating apoptosis-related genes at the mRNA level, such as Bcl-2-interacting killer; growth arrest and DNA damage-inducible protein 45 beta; and Caspase-3, -6, and -9. Furthermore, the FCF NPs showed significant antibacterial activity against Escherichia coli, Staphylococcus aureus, and a clinically isolated methicillin-resistant Staphylococcus aureus (MRSA) strain.
Conclusion: FCF NPs effectively induce cancer cell death, show excellent photodynamic anticancer efficacy against BC cells, and exhibit potent antibacterial activity against uropathogenic bacterial strains via PDT, exhibiting high potential for application in versatile imaging-based diagnostics and therapeutics in BC treatment and urinary tract infection management. However, prior to their clinical application, in vivo studies using animal models are required to validate these biological and physiological effects.
"Antibacterial and anticancer activity of multifunctional iron-based magnetic nanoparticles against urinary tract infection and cystitis-related bacterial strains and bladder cancer cells." Biomedical Engineering Letters 15(5): 903-915. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411361/pdf/
Pub Date : 2025-06-06eCollection Date: 2025-09-01DOI: 10.1007/s13534-025-00484-6
Ji Seung Ryu, Hyunyoung Kang, Yuseong Chu, Sejung Yang
Foundation models, including large language models and vision-language models (VLMs), have revolutionized artificial intelligence by enabling efficient, scalable, and multimodal learning across diverse applications. By leveraging advancements in self-supervised and semi-supervised learning, these models integrate computer vision and natural language processing to address complex tasks, such as disease classification, segmentation, cross-modal retrieval, and automated report generation. Their ability to pretrain on vast, uncurated datasets minimizes reliance on annotated data while improving generalization and adaptability for a wide range of downstream tasks. In the medical domain, foundation models address critical challenges by combining the information from various medical imaging modalities with textual data from radiology reports and clinical notes. This integration has enabled the development of tools that streamline diagnostic workflows, enhance accuracy (ACC), and enable robust decision-making. This review provides a systematic examination of the recent advancements in medical VLMs from 2022 to 2024, focusing on modality-specific approaches and tailored applications in medical imaging. The key contributions include the creation of a structured taxonomy to categorize existing models, an in-depth analysis of datasets essential for training and evaluation, and a review of practical applications. This review also addresses ongoing challenges and proposes future directions for enhancing the accessibility and impact of foundation models in healthcare.
Supplementary information: The online version contains supplementary material available at 10.1007/s13534-025-00484-6.
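The cross-modal retrieval task mentioned above typically reduces to ranking text embeddings against an image embedding by cosine similarity, as in CLIP-style models. A minimal sketch, assuming the encoders' output vectors are already available:

```python
import numpy as np

def retrieve(image_emb, text_embs):
    """Rank candidate text embeddings against one image embedding by
    cosine similarity (the core of CLIP-style cross-modal retrieval).
    Returns candidate indices, best match first."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(np.asarray(text_embs, float)) @ unit(np.asarray(image_emb, float))
    return np.argsort(-sims)
```

In a medical VLM the image embedding would come from a radiograph encoder and the text embeddings from report sentences; the ranking itself is modality-agnostic.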
"Vision-language foundation models for medical imaging: a review of current practices and innovations." Biomedical Engineering Letters 15(5): 809-830. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411343/pdf/
Pub Date : 2025-06-05eCollection Date: 2025-07-01DOI: 10.1007/s13534-025-00483-7
Wonbum Sohn, M Hongchul Sohn, Jongsang Son
Myographic signals can effectively detect and assess subtle changes in muscle function; however, their measurement and analysis are often limited in clinical settings compared to inertial measurement units. Recently, the advent of artificial intelligence (AI) has made the analysis of complex myographic signals more feasible. This scoping review aims to examine the use of myographic signals in conjunction with AI for assessing motor impairments and highlight potential limitations and future directions. We conducted a systematic search using specific keywords in the Scopus and PubMed databases. After a thorough screening process, 111 relevant studies were selected for review. These studies were organized based on target applications (measurement modality, measurement location, and AI application task), sample demographics (age, sex, ethnicity, and pathology), and AI models (general approach and algorithm type). Among various myographic measurement modalities, surface electromyography was the most commonly used. In terms of AI approaches, machine learning with feature engineering was the predominant method, with classification tasks being the most common application of AI. Our review also noted a significant bias in participant demographics, with a greater representation of males compared to females and healthy individuals compared to clinical populations. Overall, our findings suggest that integrating myographic signals with AI has the potential to provide more objective and clinically relevant assessments of motor impairments.
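The "machine learning with feature engineering" approach the review identifies as predominant usually starts from hand-crafted time-domain EMG features. A small illustrative selection (not an exhaustive or study-specific feature set):

```python
import numpy as np

def emg_features(x):
    """Classic time-domain surface-EMG features commonly fed to
    feature-engineering classifiers (illustrative selection)."""
    x = np.asarray(x, dtype=float)
    rms = np.sqrt(np.mean(x ** 2))              # root mean square
    mav = np.mean(np.abs(x))                    # mean absolute value
    zc = int(np.sum(np.diff(np.sign(x)) != 0))  # zero crossings
    return {"rms": rms, "mav": mav, "zc": zc}
```

Features like these, computed per sliding window, form the input vectors for the classification tasks the review found most common.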
"Insights into motor impairment assessment using myographic signals with artificial intelligence: a scoping review." Biomedical Engineering Letters 15(4): 693-716. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229422/pdf/
Pub Date : 2025-05-30eCollection Date: 2025-09-01DOI: 10.1007/s13534-025-00482-8
Woo-Young Seo, Sang-Wook Lee, Yong-Seok Park, Hyun-Seok Kim, Jae-Man Shin, Dong-Kyu Kim, Woo-Jin Kim, Sung-Hoon Kim
Heart sounds provide essential information about cardiac function; however, their clinical meaning and potential for minimally invasive hemodynamic monitoring in real-world clinical settings remain underexplored. This study assessed relationships between heart sound indices and hemodynamic parameters during liver transplant surgery. Data from 80 liver transplant recipients were analyzed across five procedural phases (approximately 1.68 million cardiac beats). The heart sound indices (S1 amplitude, S2 amplitude, systolic time interval, and systolic time variation (STV)) were compared with hemodynamic parameters (mean blood pressure, peak arterial pressure gradient, stroke volume, systemic vascular resistance (SVR), and stroke volume variation (SVV)). Relationships were assessed using Pearson's correlation, Bland-Altman analysis, and the concordance correlation coefficient (CCC). The heart sound indices showed significant correlations with hemodynamic parameters during liver transplantation: S1 amplitude correlated positively with dP/dt_max (r = 0.467-0.548), while S2 amplitude correlated with SVR (r = 0.364-0.406). The STV showed the strongest and most consistent correlations with SVV across surgical phases (r = 0.687-0.721). Agreement metrics between STV and SVV showed mean biases ranging from -0.34 to 0.28, with limits of agreement from -6.20 to 6.10 and CCC values from 0.55 to 0.69. The amplitudes of S1 and S2 and their interval variation may reflect changes in dP/dt_max, SVR, and SVV, respectively. These results suggest that heart sound parameters can serve as valuable minimally invasive indicators of hemodynamic changes during complex surgical procedures such as liver transplantation.
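Lin's concordance correlation coefficient used above combines Pearson correlation with agreement in scale and location. A minimal sketch (population variances, ddof = 0):

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between two
    measurement series: 2*cov(x, y) / (var(x) + var(y) + (mx - my)^2)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)
```

Unlike Pearson's r, CCC is penalized when two perfectly correlated series differ in mean or spread, which is why it is paired with Bland-Altman analysis for agreement studies like this one.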
"Unobtrusive continuous hemodynamic monitoring method using processed heart sound signals in patients undergoing surgery: a proof of concept study." Biomedical Engineering Letters 15(5): 865-875. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12411395/pdf/
Pub Date : 2025-05-29eCollection Date: 2025-07-01DOI: 10.1007/s13534-025-00481-9
Guangxing Du, Rui Wu, Jinming Xu, Xiang Zeng, Shengwu Xiong
Semi-supervised learning has become a favorable approach for medical image segmentation because of the high cost of obtaining labeled data in medical image analysis. However, magnetic resonance images often have low contrast, and the scale and shape of organs vary greatly across slice perspectives. Although existing methods have made some progress, they still cannot handle these challenging samples well. To this end, we propose a semi-supervised magnetic resonance image segmentation method based on informative patch learning (IPNet), which focuses on learning challenging regions. Specifically, we design a novel informative patch scoring strategy based on prediction uncertainty and category diversity, which can accurately identify challenging regions in samples. To ensure that informative patches are fully learned, the patch with the lowest score in one sample is replaced with the patch with the highest score in another sample, yielding a new pair of training samples. Furthermore, we introduce global and local consistency losses to supervise the new samples, guiding the model to focus on the global and local features of the informative patches. To evaluate the effectiveness of the method, we conducted experiments on three magnetic resonance image datasets (ACDC, PROMISE12, and LA). Extensive experimental results demonstrate the effectiveness and superior performance of the proposed method.
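The abstract names the ingredients of the scoring strategy (prediction uncertainty and category diversity) and the lowest-for-highest patch swap, but not the exact formula. The sketch below is a hypothetical stand-in: per-patch softmax entropy plus the number of predicted classes as the score, then the described swap between two samples:

```python
import numpy as np

def patch_scores(prob_map, patch=32):
    """Score non-overlapping patches of a softmax map (H, W, C) by
    prediction entropy (uncertainty) plus class count (category diversity).
    This scoring formula is an illustrative assumption, not the paper's."""
    H, W, C = prob_map.shape
    scores = {}
    for i in range(0, H - patch + 1, patch):
        for j in range(0, W - patch + 1, patch):
            p = prob_map[i:i + patch, j:j + patch]
            entropy = -(p * np.log(p + 1e-8)).sum(-1).mean()   # uncertainty term
            diversity = len(np.unique(p.argmax(-1)))           # classes present
            scores[(i, j)] = entropy + diversity
    return scores

def swap_informative_patch(img_a, img_b, scores_a, scores_b, patch=32):
    """Replace the lowest-scoring patch of img_a with the highest-scoring
    patch of img_b, producing a new training sample as described above."""
    ia, ja = min(scores_a, key=scores_a.get)   # least informative region in A
    ib, jb = max(scores_b, key=scores_b.get)   # most informative region in B
    out = img_a.copy()
    out[ia:ia + patch, ja:ja + patch] = img_b[ib:ib + patch, jb:jb + patch]
    return out
```

In a full pipeline the same swap would be applied to the (pseudo-)labels, and the global/local consistency losses would then supervise the mixed sample.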
{"title":"Ipnet: informative patches learning for semi-supervised magnetic resonance image segmentation.","authors":"Guangxing Du, Rui Wu, Jinming Xu, Xiang Zeng, Shengwu Xiong","doi":"10.1007/s13534-025-00481-9","DOIUrl":"https://doi.org/10.1007/s13534-025-00481-9","url":null,"abstract":"<p><p>Semi-supervised learning has become a favorable method for medical image segmentation due to the high cost of obtaining labeled data in the field of medical image analysis. However, existing magnetic resonance images have low contrast, the scale and shape of organs vary greatly under different slice perspectives. Although existing methods have made some progress, they still cannot handle these challenging samples well. To this end, we propose a semi-supervised magnetic resonance images segmentation method based on informative patches learning (IPNet), which focuses on the learning of challenging regions. Specifically, we design a novel informative patch scoring strategy based on prediction uncertainty and category diversity, which can accurately identify challenging regions in samples. And to ensure that the informative patch is fully learned, the patch with the lowest score in one sample is replaced with the patch with the highest score in another sample to obtain a new pair of training samples. Furthermore, we introduce global and local consistency losses to supervise the new samples, guide the model to focus on the global and local features of the informative patches. To evaluate the effectiveness of the method, we conducted experiments on three magnetic resonance image datasets (ACDC, PROMISE 12 and LA datasets). 
Extensive experimental results demonstrate the effectiveness and superior performance of the proposed method.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"15 4","pages":"797-807"},"PeriodicalIF":3.2,"publicationDate":"2025-05-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229363/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144585256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-05-27eCollection Date: 2025-07-01DOI: 10.1007/s13534-025-00480-w
Nawara Mahmood Broti, Masaki Iwasaki, Yumie Ono
Accurate identification of the seizure onset zone (SOZ) is essential for the surgical treatment of epilepsy. This narrative review examines recent advances in machine learning approaches for SOZ localization using intracranial electroencephalography (iEEG) data. Existing studies are analyzed while addressing key questions: What machine learning techniques are used for SOZ localization? How effective are these methods? What are the limitations, and what solutions can drive further progress in the field? The review examined peer-reviewed studies that employed machine learning techniques for SOZ localization using iEEG data. The selected studies were analyzed to identify trends in machine learning applications, performance metrics, benefits, and challenges associated with SOZ identification. The review highlights the increasing adoption of machine learning for SOZ localization, mostly with supervised approaches, with support vector machines (SVMs) using high-frequency oscillation (HFO) biomarker features being the most prevalent. High accuracy and sensitivity are reported, especially in studies with smaller sample sizes. However, patient-wise validation reveals limited generalizability. Additionally, ambiguity in the definition of the SOZ and the scarcity of open-access iEEG datasets continue to hinder progress and reproducibility in the field. Machine learning offers significant potential for advancing SOZ localization. Development of more robust algorithms, integration of multimodal data, and greater model interpretability can improve model reliability, ensure consistency, and enhance real-world applicability, thereby transforming the future of SOZ localization.
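The review's point about patient-wise validation can be made concrete: iEEG channels from the same patient must never be split across training and test folds, otherwise pooled-channel evaluation overstates generalizability. A minimal grouped-split sketch (the function name and fold assignment are ours, not from any reviewed study):

```python
import numpy as np

def patient_wise_folds(patient_ids, n_splits=5):
    """Yield (train_idx, test_idx) index arrays such that no patient's
    channels appear in both sets -- the patient-wise validation that
    exposes the limited generalizability noted in the review."""
    ids = np.asarray(patient_ids)
    unique = np.unique(ids)
    for fold in range(n_splits):
        test_patients = unique[fold::n_splits]       # strided patient assignment
        test_mask = np.isin(ids, test_patients)
        yield np.where(~test_mask)[0], np.where(test_mask)[0]
```

An SVM on per-channel HFO features (e.g., rate, amplitude, duration) would then be trained and scored once per fold, reporting the spread across held-out patients rather than a single pooled accuracy.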
{"title":"Machine learning detection of epileptic seizure onset zone from iEEG.","authors":"Nawara Mahmood Broti, Masaki Iwasaki, Yumie Ono","doi":"10.1007/s13534-025-00480-w","DOIUrl":"10.1007/s13534-025-00480-w","url":null,"abstract":"<p><p>Accurate identification of seizure onset zones (SOZ) is essential for the surgical treatment of epilepsy. This narrative review examines recent advances in machine learning approaches for SOZ localization using intracranial electroencephalography (iEEG) data. Existing studies are analyzed while addressing key questions: What machine learning techniques are used for SOZ localization? How effective are these methods? What are the limitations, and what solutions can drive further progress in the field? This narrative review examined peer-reviewed studies that employed machine learning techniques for SOZ localization using iEEG data. The selected studies were analyzed to identify trends in machine learning applications, performance metrics, benefits, and challenges associated with SOZ identification. The review highlights the increasing adoption of machine learning for SOZ localization, mostly with supervised approaches. Particularly support vector machine (SVM) using high frequency oscillation (HFO) biomarker feature being the most prevalent. High accuracy and sensitivity, especially in studies with smaller sample sizes are reported. However, patient-wise validation reveals limited generalizability. Additionally, ambiguity in SOZ definition and the scarcity of open-access iEEG datasets continue to hinder progress and reproducibility in the field. Machine learning offers significant potential for advancing SOZ localization. 
Development of more robust algorithms, integration of multimodal data, and greater model interpretability, can improve model reliability, ensure consistency, and enhance real-world applicability, thereby transforming the future of SOZ localization.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"15 4","pages":"677-692"},"PeriodicalIF":3.2,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229413/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-05-22eCollection Date: 2025-07-01DOI: 10.1007/s13534-025-00478-4
Seunghyun Gwak, Sooyoung Yang, Heawon Jeong, Junhu Park, Myungjoo Kang
This study proposes a deep learning-based diagnostic model, the Projection-wise Masked Autoencoder (ProMAE), for rapid and accurate COVID-19 diagnosis from sparse-view CT data. ProMAE employs a column-wise masking strategy during pre-training to effectively learn critical diagnostic features from sinograms, even under extremely sparse conditions. The trained ProMAE can directly classify sparse-view sinograms without requiring CT image reconstruction. Experiments on sparse-view data with 50%, 75%, 85%, 95%, and 99% sparsity show that ProMAE achieves a diagnostic accuracy of over 95% at all sparsity levels and, in particular, outperforms ResNet, ConvNeXt, and conventional MAE models at sparsity levels of 85% or higher. This capability is especially advantageous for developing portable and flexible imaging systems during large-scale outbreaks such as COVID-19, as it ensures accurate diagnosis while minimizing radiation exposure, making it a vital tool in resource-limited, high-demand settings.
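The column-wise (projection-wise) masking idea can be illustrated on a sinogram array: masking whole projections, rather than random pixels, mimics sparse-view acquisition. The axis convention (views on the first axis) and function name below are assumptions for illustration, not the paper's implementation:

```python
import numpy as np

def mask_projections(sinogram, sparsity, seed=0):
    """Zero out whole projection views of a sinogram at random.
    `sinogram` has shape (n_views, n_detectors); a sparsity of 0.95 keeps
    only ~5% of views, as in the most aggressive setting reported above.
    Returns the masked sinogram and the sorted indices of kept views."""
    rng = np.random.default_rng(seed)
    n_views = sinogram.shape[0]
    n_keep = max(1, int(round(n_views * (1 - sparsity))))
    keep = rng.choice(n_views, size=n_keep, replace=False)
    masked = np.zeros_like(sinogram)
    masked[keep] = sinogram[keep]
    return masked, np.sort(keep)
```

In ProMAE-style pre-training the encoder would see only the kept views and be trained to reconstruct the masked ones, so that at inference time sparse-view sinograms can be classified directly.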
{"title":"Efficient sparse-view medical image classification for low radiation and rapid COVID-19 diagnosis.","authors":"Seunghyun Gwak, Sooyoung Yang, Heawon Jeong, Junhu Park, Myungjoo Kang","doi":"10.1007/s13534-025-00478-4","DOIUrl":"10.1007/s13534-025-00478-4","url":null,"abstract":"<p><p>This study proposes a deep learning-based diagnostic model called the Projection-wise Masked Autoencoder (ProMAE) for rapid and accurate COVID-19 diagnosis using sparse-view CT images. ProMAE employs a column-wise masking strategy during pre-training to effectively learn critical diagnostic features from sinograms, even under extremely sparse conditions. The trained ProMAE can directly classify sparse-view sinograms without requiring CT image reconstruction. Experiments on sparse-view data with 50%, 75%, 85%, 95%, and 99% sparsity show that ProMAE achieves a diagnostic accuracy of over 95% at all sparsity levels and, in particular, outperforms ResNet, ConvNeXt, and conventional MAE models in COVID-19 diagnosis in environments with 85% or higher sparsity. 
This capability is especially advantageous for the development of portable and flexible imaging systems during large-scale outbreaks such as COVID-19, as it ensures accurate diagnosis while minimizing radiation exposure, making it a vital tool in resource-limited and high-demand settings.</p>","PeriodicalId":46898,"journal":{"name":"Biomedical Engineering Letters","volume":"15 4","pages":"785-795"},"PeriodicalIF":3.2,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12229398/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144576648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}