Pub Date: 2025-12-01 | DOI: 10.1007/s11517-025-03429-4
Soudabeh Kavousipour, Mahdi Barazesh, Shiva Mohammadi
Antibodies are a key therapeutic class in pharma, enabling precise targeting of disease agents. Traditional methods for their design are slow, costly, and limited. Advances in high-throughput data and artificial intelligence (AI) including machine learning, deep learning, and reinforcement learning have revolutionized antibody sequence design, 3D structure prediction, and optimization of affinity and specificity. Computational approaches enable rapid library generation and efficient screening, reduce experimental sampling, and support rational design with improved immune response. Combining AI with experimental methods allows for de novo, multifunctional antibody development. AI also accelerates the discovery process, target identification, and candidate prioritization by analyzing large datasets, predicting interactions, and guiding modifications to enhance efficacy and safety. Despite challenges, ongoing research continues to expand the potential of AI and transform antibody development and the pharmaceutical industry.
"Artificial intelligence in antibody design and development: harnessing the power of computational approaches." Medical & Biological Engineering & Computing, pp. 3475-3501.
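To make the kind of sequence-to-property modelling surveyed above concrete, the sketch below one-hot encodes synthetic CDR-H3-like sequences, fits a random-forest regressor to made-up affinity labels, and ranks a candidate library by predicted affinity. Every sequence, label, length, and parameter here is an illustrative assumption; the review does not prescribe this workflow.

```python
# Illustrative sketch only (not from the paper): rank candidate antibody CDR-H3
# variants with a simple sequence-to-affinity regressor. All data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AA = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {a: i for i, a in enumerate(AA)}

def one_hot(seq: str, length: int) -> np.ndarray:
    """Flatten a fixed-length amino-acid sequence into a one-hot vector."""
    x = np.zeros((length, len(AA)))
    for pos, aa in enumerate(seq[:length]):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

rng = np.random.default_rng(0)
L = 12  # assumed CDR-H3 length for this toy example

# Synthetic training set: random sequences with made-up binding affinities (pKd).
train_seqs = ["".join(rng.choice(list(AA), L)) for _ in range(500)]
train_y = rng.normal(7.0, 1.0, size=len(train_seqs))  # placeholder labels

X = np.stack([one_hot(s, L) for s in train_seqs])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, train_y)

# Score and rank a generated candidate library, keeping the top predictions.
candidates = ["".join(rng.choice(list(AA), L)) for _ in range(1000)]
scores = model.predict(np.stack([one_hot(s, L) for s in candidates]))
for score, seq in sorted(zip(scores, candidates), reverse=True)[:5]:
    print(f"{seq}  predicted pKd ~ {score:.2f}")
```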
Pub Date: 2025-12-01 | Epub Date: 2025-08-21 | DOI: 10.1007/s11517-025-03433-8
Prabhakar Agarwal, Sandeep Kumar, Rishav Singh
Nowadays, deep network-based classification algorithms are used in a myriad of applications for brain-computer interfaces (BCIs). These interfaces can enhance the daily lives of quadriplegic patients. Electroencephalography (EEG) based motor imagery (MI) is an integral part of BCI, and the performance of the available deep classifiers is still limited. This paper presents a novel convolutional neural network (CNN) architecture designed to enhance the multiclass classification accuracy of motor imagery (MI) signals acquired through EEG-based sensing. We have selected the electrodes over the sensorimotor cortex region of the brain in the 8-30 Hz EEG frequency band. Further, we have computed the classification accuracy and kappa scores in an end-to-end deep classification network. Our framework surpasses the contemporary literature algorithms in classifying BCI competition IV-2a, a four-class MI dataset of nine subjects (left hand, right hand, both feet, tongue). The proposed network architecture has achieved an average and maximum accuracy of 95.19% and 99.28%, respectively. We have outperformed state-of-the-art accuracies of the individual subjects S1, S2, S3, S4, S5, S6, S8, and the average accuracy of the dataset by 8.28%, 40.97%, 5.54%, 14.83%, 19.09%, 25.5%, 10.43%, and 12.82% respectively.
"Motor imagery-based neural networks for assisting tetraplegic patients." Medical & Biological Engineering & Computing, pp. 3793-3807.
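The following is a minimal sketch of the generic pipeline described in the abstract above: band-pass filtering EEG to the 8-30 Hz band and classifying four motor-imagery classes with a small CNN. The layer sizes, sampling rate, and trial shape (22 channels, 1000 samples at 250 Hz, roughly matching BCI competition IV-2a) are assumptions, not the authors' architecture.

```python
# Sketch of a generic MI-EEG pipeline (not the paper's exact network):
# band-pass EEG to 8-30 Hz, then classify 4 motor-imagery classes with a small CNN.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs=250.0, lo=8.0, hi=30.0, order=4):
    """Zero-phase band-pass filter; eeg has shape (channels, samples)."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

class SmallMICNN(nn.Module):
    """Minimal temporal+spatial convolution network for 4-class MI EEG."""
    def __init__(self, n_channels=22, n_samples=1000, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=(1, 64), padding=(0, 32)),  # temporal conv
            nn.BatchNorm2d(16),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),          # spatial conv
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Dropout(0.5),
        )
        with torch.no_grad():  # infer flattened feature size from a dummy trial
            n_feat = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
        self.classifier = nn.Linear(n_feat, n_classes)

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.classifier(self.features(x).flatten(1))

# One synthetic trial: filter to the MI band, then run a forward pass.
raw = np.random.randn(22, 1000)
filtered = bandpass(raw)
x = torch.tensor(filtered.copy(), dtype=torch.float32).view(1, 1, 22, 1000)
print(SmallMICNN()(x).shape)  # torch.Size([1, 4])
```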
Pub Date: 2025-12-01 | Epub Date: 2025-07-16 | DOI: 10.1007/s11517-025-03407-w
Sha Yuan, Jiwen Hu, Chuangjian Xia, Qinlin Li, Chang Li
How to utilize focused ultrasound to achieve rapid, efficient, and safe ablation of atherosclerotic plaques (APs) is a significant challenge in clinical medicine. On the basis of the thermal damage effect of ultrasound on biological tissues, this paper proposes a thermal ablation mode for AP therapy with a single-focus, variable-frequency scanning model using a phased array. An AP model combined with fluid‒solid‒thermal conjugation is established and solved by the finite element method. The results show that the acoustic energy excited by a phased array can be precisely localized at the preset focal points in the plaque, and auto-focused heating is achieved under temperature control at 43 °C. Multiple autofocus scans increase the area of plaque thermal ablation while protecting the normal tissue surrounding the plaque. This model provides a potential treatment option for the thermal ablation of plaques with different depths and sizes.
"Thermal therapy of atherosclerotic plaques using ultrasonic phased-array system." Medical & Biological Engineering & Computing, pp. 3563-3576.
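As a rough illustration of how a phased array localizes acoustic energy at a preset focal point, the sketch below computes the geometric per-element delays (and equivalent phases) that bring all wavefronts into phase at a chosen focus. The element count, pitch, frequency, and sound speed are assumed values, not parameters from the paper's model.

```python
# Sketch: geometric focusing delays for a 1-D ultrasonic phased array.
# Element count, pitch, frequency, and sound speed below are assumptions.
import numpy as np

c = 1540.0          # speed of sound in soft tissue, m/s
f = 1.0e6           # driving frequency, Hz (assumed)
n_elem = 64         # number of array elements
pitch = 0.5e-3      # element spacing, m

# Element positions along x, array centred at the origin.
x_elem = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch
elem_pos = np.stack([x_elem, np.zeros(n_elem)], axis=1)   # (n_elem, 2)

def focusing_delays(focus_xy):
    """Per-element time delays so that all wavefronts arrive at the focus in phase."""
    focus = np.asarray(focus_xy, dtype=float)
    dist = np.linalg.norm(elem_pos - focus, axis=1)        # path length per element
    t_flight = dist / c
    return t_flight.max() - t_flight                       # fire farthest element first

focus = (2.0e-3, 30.0e-3)            # example focal point: 30 mm deep, 2 mm lateral
tau = focusing_delays(focus)
phase = 2 * np.pi * f * tau          # equivalent phase offsets for CW excitation
print(np.round(tau[:5] * 1e9, 1), "ns")
print(np.round(np.degrees(phase[:5]) % 360, 1), "deg")
```

Changing the focal point and recomputing the delays is what a single-focus scanning scheme amounts to at the excitation level; the paper's contribution lies in the frequency scheduling and temperature-controlled scanning built on top of this.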
Pub Date: 2025-12-01 | Epub Date: 2025-07-24 | DOI: 10.1007/s11517-025-03412-z
Luis Serrador, Pedro Varanda, Bruno Direito-Santos, Cristina P Santos
This paper introduces SpineAlign, a novel radiation-free clinical decision support system (CDSS) designed to address the challenge of intraoperative spinal alignment assessment during spinal deformity (SD) correction surgeries. SpineAlign aims to overcome the current limitations of existing systems by providing a quantitative assessment without radiation exposure in the operating room (OR), thus enhancing the safety and precision of computer-assisted spinal surgeries (CASS). The system focuses on spinal alignment calculation, leveraging Bézier curves and algorithm development to track vertebrae and estimate spinal curvature. Collaborative meetings with clinical experts identified challenges such as patient positioning complexities and limitations of minimal invasiveness. Thus, the method developed involves four algorithms: (1) tracking anatomical planes; (2) estimating the Bézier curve; (3) determining vertebrae positions; and (4) adjusting orientation. A proof of concept (PoC) using a porcine spinal segment validated SpineAlign's integrated algorithms and functionalities. The PoC demonstrated the system's accuracy and clinical applicability, successfully transitioning a spine without curvature to a lordotic spine. Quantitative evaluation of spinal alignment by the system showed high accuracy, with a maximum root mean squared error of 6°. The successful PoC marks an initial step towards developing a reliable CDSS for intraoperative spinal alignment assessment without medical image acquisition. Future steps will focus on enhancing system robustness and performing multi-surgeon serial studies to advance SpineAlign towards widespread clinical adoption.
"Towards a radiation-free clinical decision support system for intraoperative spinal alignment assessment." Medical & Biological Engineering & Computing, pp. 3669-3694. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12675762/pdf/
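The sketch below illustrates the Bézier-curve idea behind curvature estimation of this kind: sample a cubic Bézier curve through tracked landmarks and derive a simple curvature angle from the endpoint tangents. The control points and the angle definition are illustrative assumptions, not SpineAlign's actual algorithms.

```python
# Sketch: evaluate a cubic Bezier curve through tracked spinal landmarks and
# estimate a sagittal curvature angle from the endpoint tangents.
# Control points below are synthetic, not measured data from the paper.
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Points on a cubic Bezier curve for parameter values t in [0, 1]."""
    t = t[:, None]
    return ((1 - t) ** 3 * p0 + 3 * (1 - t) ** 2 * t * p1
            + 3 * (1 - t) * t ** 2 * p2 + t ** 3 * p3)

def bezier_tangent(p0, p1, p2, p3, t):
    """First derivative of the cubic Bezier curve at parameter t."""
    t = t[:, None]
    return (3 * (1 - t) ** 2 * (p1 - p0) + 6 * (1 - t) * t * (p2 - p1)
            + 3 * t ** 2 * (p3 - p2))

# Synthetic sagittal-plane control points (x = antero-posterior, y = cranio-caudal), mm.
p0, p1, p2, p3 = map(np.array, ([0.0, 0.0], [15.0, 40.0], [20.0, 80.0], [5.0, 120.0]))

t = np.linspace(0.0, 1.0, 50)
curve = cubic_bezier(p0, p1, p2, p3, t)                       # sampled spinal midline
tangents = bezier_tangent(p0, p1, p2, p3, np.array([0.0, 1.0]))

# Angle between the endpoint tangents, a simple Cobb-like curvature measure.
u, v = (tangents.T / np.linalg.norm(tangents, axis=1)).T
angle = np.degrees(np.arccos(np.clip(np.dot(u, v), -1.0, 1.0)))
print(f"curve samples: {curve.shape}, endpoint tangent angle: {angle:.1f} deg")
```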
Pub Date: 2025-12-01 | DOI: 10.1007/s11517-025-03420-z
Shouhui Deng, Haojun Li, Yuxuan Lin, Aiguo Song, Lifeng Zhu
Thyroid nodules often necessitate surgical intervention, where traditional retractors may cause muscle damage due to prolonged use. This study introduces a slippage-suppression robotic system for thyroid surgery, featuring a conformal force and torque sensing module integrated with a robotic manipulator for compliant force control. The system features five-dimensional (5DoF) contact force sensing, achieving accurate force measurement with a relative error of ≤ 1.5%. Experiments performed on phantoms and porcine tissues demonstrate the system's ability to suppress slippage effectively, ensure reliable force feedback, and improve safety and precision during thyroid surgery.
"Slippage-suppression robot-assisted retraction for thyroid surgery with 5DoF contact force sensing." Medical & Biological Engineering & Computing, pp. 3655-3668.
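As a loose illustration of compliant force control with a slippage check, the sketch below runs one admittance-style control step on a 5-D force/torque reading and flags slippage when the tangential load leaves an assumed friction cone. The gains, friction coefficient, and force values are invented for the example and do not come from the paper's controller.

```python
# Sketch: one control step of an admittance-style retraction force regulator with a
# friction-cone slippage check. Gains, friction coefficient, and force values are
# illustrative assumptions only.
import numpy as np

MU = 0.4              # assumed tissue-retractor friction coefficient
F_TARGET = 2.0        # desired normal retraction force, N
K_ADMITTANCE = 0.002  # m/s of commanded velocity per N of force error
DT = 0.01             # control period, s

def control_step(force_5d, position):
    """force_5d: (Fx, Fy, Fz, Tx, Ty) from the sensing module; Fz is the normal force."""
    fx, fy, fz, _, _ = force_5d
    tangential = np.hypot(fx, fy)

    # Slippage suppression: if the tangential load leaves the friction cone,
    # raise the normal force set-point instead of holding it.
    slipping = tangential > MU * abs(fz)
    target = F_TARGET * 1.2 if slipping else F_TARGET

    # Admittance law: convert force error into a small normal-direction motion.
    error = target - fz
    velocity = K_ADMITTANCE * error
    return position + velocity * DT, slipping

pos = 0.0
for measured in [(0.2, 0.1, 1.5, 0, 0), (0.9, 0.6, 1.8, 0, 0), (0.3, 0.2, 2.1, 0, 0)]:
    pos, slip = control_step(np.array(measured), pos)
    print(f"position={pos * 1000:.3f} mm  slipping={slip}")
```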
Pub Date: 2025-12-01 | Epub Date: 2025-08-05 | DOI: 10.1007/s11517-025-03422-x
Ang Li, Long Zhao, Chenyang Wu, Zhanxiao Geng, Lihui Yang, Fei Tang
Currently, non-invasive continuous blood glucose monitoring technology remains insufficient in terms of clinical validation data. Existing approaches predominantly depend on statistical models to predict blood glucose levels, which often suffer from limited data samples. This leads to significant individual differences in non-invasive continuous glucose monitoring, limiting its scope and promotion. We propose a neural network that uses metabolic characteristics as inputs to predict the rate of insulin-facilitated glucose uptake by cells and postprandial glucose gradient changes (glucose gradient: the rate of change of blood glucose concentration within a unit of time (dG/dt), with the unit of mg/(dL × min), reflects the dynamic change trend of blood glucose levels). This neural network utilises non-invasive continuous glucose monitoring method based on the Bergman minimal model (BM-NCGM) while considering the effects of the glucose gradient, insulin action, and the digestion process on glucose changes, achieving non-invasive continuous glucose monitoring. This work involved 161 subjects in a controlled clinical trial, collecting over 15,000 valid data sets. The predictive results of BM-NCGM for glucose showed that the CEG A area accounted for 77.58% and the A + B area for 99.57%. The correlation coefficient (0.85), RMSE (1.48 mmol/L), and MARD (11.51%) showed an improvement of over 32% compared to the non-use of BM-NCGM. The dynamic time warping algorithm was used to calculate the distance between the predicted blood glucose spectrum and the reference blood glucose spectrum, with an average distance of 21.80, demonstrating the excellent blood glucose spectrum tracking ability of BM-NCGM. This study is the first to apply the Bergman minimum model to non-invasive continuous blood glucose monitoring research, supported by a large amount of clinical trial data, bringing non-invasive continuous blood glucose monitoring closer to its true application in daily blood glucose monitoring. CLINICAL TRIAL REGISTRY NUMBER: ChiCTR1900028100.
"A non-invasive continuous glucose monitoring method based on the Bergman minimal model." Medical & Biological Engineering & Computing, pp. 3749-3760.
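The Bergman minimal model this method builds on can be written as two coupled ODEs for plasma glucose G(t) and remote insulin action X(t): dG/dt = -p1(G - Gb) - X·G + Ra(t) and dX/dt = -p2·X + p3(I(t) - Ib), where Gb and Ib are basal levels and Ra(t) is the meal-related glucose appearance. The sketch below integrates this classical model with SciPy and computes a toy MARD; the parameter values and input profiles are generic textbook-style choices, not the model identified in the paper.

```python
# Sketch: the classical Bergman minimal model of glucose-insulin dynamics,
# integrated with SciPy. Parameter values and meal/insulin input profiles are
# generic illustrative choices, not the paper's identified model.
import numpy as np
from scipy.integrate import solve_ivp

P1, P2, P3 = 0.03, 0.02, 1.3e-5   # illustrative rate constants
GB, IB = 90.0, 7.0                # basal glucose (mg/dL) and insulin (uU/mL)

def plasma_insulin(t):
    """Toy postprandial insulin excursion above basal (uU/mL)."""
    return IB + 40.0 * np.exp(-((t - 40.0) / 25.0) ** 2)

def meal_appearance(t):
    """Toy rate of glucose appearance from digestion (mg/dL per min)."""
    return 4.0 * np.exp(-((t - 30.0) / 20.0) ** 2)

def minimal_model(t, y):
    G, X = y                      # G: glucose; X: remote insulin action
    dG = -P1 * (G - GB) - X * G + meal_appearance(t)
    dX = -P2 * X + P3 * (plasma_insulin(t) - IB)
    return [dG, dX]

sol = solve_ivp(minimal_model, (0.0, 180.0), [GB, 0.0],
                t_eval=np.arange(0.0, 181.0, 5.0), rtol=1e-6)
G = sol.y[0]

# Example CGM accuracy metric: MARD against a noisy reference trace.
reference = G + np.random.default_rng(1).normal(0.0, 5.0, G.size)
mard = 100.0 * np.mean(np.abs(G - reference) / reference)
print(f"peak glucose {G.max():.1f} mg/dL, toy MARD {mard:.1f}%")
```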
Pub Date: 2025-12-01 | DOI: 10.1007/s11517-025-03432-9
Jiaoyang Wang, Kei Ichiji, Yuwen Zeng, Xiaoyong Zhang, Yoshihiro Takai, Noriyasu Homma
Markerless tumor tracking in x-ray fluoroscopic images is an important technique for achieving precise dose delivery for moving lung tumors during radiation therapy. However, accurate tumor tracking is challenging due to the poor visibility of the target tumor overlapped by other organs such as rib bones. Dual-energy (DE) x-ray fluoroscopy can enhance tracking accuracy with improved tumor visibility by suppressing bones. However, DE x-ray imaging requires special hardware, limiting its clinical use. This study presents a deep learning-based DE subtraction (DES) synthesis method to avoid hardware limitations and enhance tracking accuracy. The proposed method employs a residual U-Net model trained on a simulated DES dataset from a digital phantom to synthesize DES from single-energy (SE) fluoroscopy. Experimental results using a digital phantom showed quantitative evaluation results of synthesis quality. Also, experimental results using clinical SE fluoroscopic images of ten lung cancer patients showed improved tumor tracking accuracy using synthesized DES images, reducing errors from 1.80 to 1.68 mm on average. The tracking success rate within a 25% movement range increased from 50.2% (SE) to 54.9% (DES). These findings indicate the feasibility of deep learning-based DES synthesis for markerless tumor tracking, offering a potential alternative to hardware-dependent DE imaging.
"Deep learning-based dual-energy subtraction synthesis from single-energy kV x-ray fluoroscopy for markerless tumor tracking." Medical & Biological Engineering & Computing, pp. 3857-3872. Open access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12675645/pdf/
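To ground the terms, the sketch below shows (1) the conventional weighted log-subtraction that a DES image approximates and (2) a brute-force normalized cross-correlation template search of the kind used for markerless tracking. The weighting factor and the synthetic images are assumptions; the paper's method instead learns to synthesize DES with a residual U-Net.

```python
# Sketch: (1) weighted logarithmic subtraction for bone suppression and
# (2) a brute-force normalized cross-correlation (NCC) template search.
# The weighting factor and the random test images are assumptions.
import numpy as np

def log_subtraction(high_kv, low_kv, w=0.6):
    """Bone-suppressed image via weighted logarithmic subtraction."""
    return np.log(np.clip(high_kv, 1e-6, None)) - w * np.log(np.clip(low_kv, 1e-6, None))

def ncc_track(image, template):
    """Return ((row, col), score) of the best NCC match of template in image."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float((p * t).mean())
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc, best

rng = np.random.default_rng(0)
high = rng.uniform(0.2, 1.0, (64, 64))
low = rng.uniform(0.2, 1.0, (64, 64))
des = log_subtraction(high, low)

template = des[30:40, 30:40].copy()      # pretend this patch is the tumor template
print(ncc_track(des, template))          # best match should land at (30, 30)
```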
Pub Date: 2025-12-01 | Epub Date: 2025-08-27 | DOI: 10.1007/s11517-025-03431-w
Thu Ha Ngo, Minh Hieu Tran, Hoang Bach Nguyen, Van Nam Hoang, Thi Lan Le, Hai Vu, Trung Kien Tran, Huu Khanh Nguyen, Van Mao Can, Thanh Bac Nguyen, Thanh-Hai Tran
Traumatic brain injury (TBI) is one of the most prevalent health conditions, with severity assessment serving as an initial step for management, prognosis, and targeted therapy. Existing studies on automated outcome prediction using machine learning (ML) often overlook the importance of TBI features in decision-making and the challenges posed by limited and imbalanced training data. Furthermore, many attempts have focused on quantitatively evaluating ML algorithms without explaining the decisions, making the outcomes difficult to interpret and apply for less-experienced doctors. This study presents a novel supportive tool, named E-TBI (explainable outcome prediction after TBI), designed with a user-friendly web-based interface to assist doctors in outcome prediction after TBI using machine learning. The tool is developed with the capability to visualize rules applied in the decision-making process. At the tool's core is a feature selection and classification module that receives multimodal data from TBI patients (demographic data, clinical data, laboratory test results, and CT findings). It then infers one of four TBI severity levels. This research investigates various machine learning models and feature selection techniques, ultimately identifying the optimal combination of gradient boosting machine and random forest for the task, which we refer to as GBMRF. This method enabled us to identify a small set of essential features, reducing patient testing costs by 35%, while achieving the highest accuracy rates of 88.82% and 89.78% on two datasets (a public TBI dataset and our self-collected dataset, TBI_MH103). Classification modules are available at https://github.com/auverngo110/Traumatic_Brain_Injury_103 .
"E-TBI: explainable outcome prediction after traumatic brain injury using machine learning." Medical & Biological Engineering & Computing, pp. 3839-3856.
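A minimal sketch of the random-forest-then-gradient-boosting idea described above: rank features with a random forest, keep the top-ranked subset, and train a gradient boosting classifier on it. The synthetic data, the 12-feature cut-off, and the hyperparameters are assumptions rather than the paper's GBMRF configuration.

```python
# Sketch: random-forest feature ranking followed by a gradient boosting classifier
# on the selected features. Data are synthetic; the selection cut-off is assumed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for multimodal TBI features (demographics, labs, CT findings).
X, y = make_classification(n_samples=600, n_features=40, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y,
                                          random_state=0)

# Step 1: rank features with a random forest and keep the top 12 (assumed cut-off).
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:12]

# Step 2: train a gradient boosting machine on the reduced feature set.
gbm = GradientBoostingClassifier(random_state=0).fit(X_tr[:, top], y_tr)
pred = gbm.predict(X_te[:, top])
print(f"selected features: {sorted(top.tolist())}")
print(f"accuracy on held-out synthetic data: {accuracy_score(y_te, pred):.3f}")
```

Dropping low-ranked features is also what drives the reduction in testing cost reported above: features that never enter the final model never need to be measured.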
Pub Date: 2025-12-01 | Epub Date: 2025-07-14 | DOI: 10.1007/s11517-025-03402-1
Jinghang Li, Keyi Wang, Yanzhuo Wang, Yi Yuan
Reconfigurable cable-driven parallel robots (RCDPRs) have attracted much attention as a novel type of cable-driven robot that can change their cable anchor position. The reconfigurable cable-driven lower limb rehabilitation robot (RCDLR) employs RCDPRs in lower limb rehabilitation to achieve multiple training modes. This paper investigates the reconfiguration planning and structural parameter design of the RCDLR. The RCDLR aims to fulfill the requirements of early passive rehabilitation training. Therefore, motion capture data are analyzed and mapped to the target trajectory of the RCDLR. Through dynamics modeling, the Wrench-Feasible Anchor-point Space (WFAS) is defined, from which an objective function for optimal reconfiguration planning is derived. The genetic algorithm is used to solve the optimal reconfiguration planning problem. Additionally, we propose the reconfigurability and safety coefficients as components of a structure parameter design method aimed at satisfying multiple target rehabilitation trajectories. Finally, numerical simulations are implemented based on the instance data and target trajectories to compute the specific structure parameters and verify the effectiveness of the reconfiguration planning method.
"Reconfiguration planning and structure parameter design of a reconfigurable cable-driven lower limb rehabilitation robot." Medical & Biological Engineering & Computing, pp. 3531-3547.
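The wrench-feasible anchor-point space rests on a standard feasibility test: at a given pose, can bounded non-negative cable tensions produce the required wrench? The sketch below runs that test for one planar pose with a linear program; the anchor layout, tension limits, and required force are illustrative assumptions, not the paper's robot geometry.

```python
# Sketch: wrench-feasibility check for one pose of a cable-driven robot, the kind of
# test behind a wrench-feasible anchor-point space (WFAS). Anchor positions, tension
# limits, and the required wrench are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

T_MIN, T_MAX = 5.0, 300.0     # allowable cable tensions, N (assumed)

def wrench_feasible(anchors, ee_pos, wrench):
    """True if bounded cable tensions can produce `wrench` at `ee_pos`.

    anchors: (m, 2) cable anchor points; ee_pos: (2,) end-effector position;
    wrench:  (2,) planar force the cables must apply (e.g. to balance gravity).
    """
    directions = anchors - ee_pos
    units = directions / np.linalg.norm(directions, axis=1, keepdims=True)
    A_eq = units.T                      # 2 x m structure matrix
    m = anchors.shape[0]
    res = linprog(c=np.ones(m),         # any feasible tension vector will do
                  A_eq=A_eq, b_eq=wrench,
                  bounds=[(T_MIN, T_MAX)] * m, method="highs")
    return res.success, (res.x if res.success else None)

anchors = np.array([[-1.0, 2.0], [1.0, 2.0], [0.0, -0.5]])   # assumed anchor layout, m
ee = np.array([0.1, 0.8])                                    # leg-attachment point, m
required = np.array([0.0, 60.0])                             # support ~6 kg limb weight, N

ok, tensions = wrench_feasible(anchors, ee, required)
print("feasible:", ok, "tensions:", None if tensions is None else np.round(tensions, 1))
```

Sweeping candidate anchor positions through this test is one way to map out a wrench-feasible region that an optimizer (such as the genetic algorithm mentioned above) can then search.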
Pub Date: 2025-12-01 | DOI: 10.1007/s11517-025-03419-6
Yagang Wu, Tianli Zhao, Shijun Hu, Qin Wu, Xin Huang, Yingxu Chen, Pengzhi Yin, Zhoushun Zheng
Echocardiography sequence segmentation is vital in modern cardiology. While the Segment Anything Model (SAM) excels in general segmentation, its direct use in echocardiography faces challenges due to complex cardiac anatomy and subtle ultrasound boundaries. We introduce SAID (Segment Anything with Implicit Decoding), a novel framework integrating implicit neural representations (INR) with SAM to enhance accuracy, adaptability, and robustness. SAID employs a Hiera-based encoder for multi-scale feature extraction and a Mask Unit Attention Decoder for fine detail capture, critical for cardiac delineation. Orthogonalization boosts feature diversity, and I²Net improves handling of misaligned contextual features. Tested on CAMUS and EchoNet-Dynamics datasets, SAID outperforms state-of-the-art methods, achieving a Dice Similarity Coefficient (DSC) of 93.2% and Hausdorff Distance (HD95) of 5.02 mm on CAMUS, and a DSC of 92.3% and HD95 of 4.05 mm on EchoNet-Dynamics, confirming its efficacy and robustness for echocardiography sequence segmentation.
"SAID-Net: enhancing segment anything model with implicit decoding for echocardiography sequences segmentation." Medical & Biological Engineering & Computing, pp. 3577-3587.
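For reference, the two metrics reported above can be computed from binary masks as in the sketch below: the Dice similarity coefficient and a 95th-percentile symmetric surface distance (HD95). The circular test masks and 1-mm pixel spacing are assumptions for illustration, not data from either benchmark.

```python
# Sketch: Dice similarity coefficient (DSC) and 95th-percentile Hausdorff distance
# (HD95) for binary segmentation masks. Test masks and pixel spacing are assumed.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def hd95(pred, gt, spacing=1.0):
    """95th-percentile symmetric surface distance between two binary masks."""
    pred_border = pred & ~binary_erosion(pred)
    gt_border = gt & ~binary_erosion(gt)
    dist_to_gt = distance_transform_edt(~gt_border) * spacing
    dist_to_pred = distance_transform_edt(~pred_border) * spacing
    surface_dists = np.concatenate([dist_to_gt[pred_border], dist_to_pred[gt_border]])
    return np.percentile(surface_dists, 95)

# Two overlapping circles standing in for predicted and reference cardiac masks.
yy, xx = np.mgrid[:128, :128]
gt = (yy - 64) ** 2 + (xx - 64) ** 2 < 30 ** 2
pred = (yy - 66) ** 2 + (xx - 62) ** 2 < 29 ** 2

print(f"DSC  = {dice(pred, gt):.3f}")
print(f"HD95 = {hd95(pred, gt):.2f} mm")
```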