
Medical & Biological Engineering & Computing: Latest Publications

Diffusion-driven multi-modality medical image fusion.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-11 | DOI: 10.1007/s11517-025-03300-6
Jiantao Qu, Dongjin Huang, Yongsheng Shi, Jinhua Liu, Wen Tang

Multi-modality medical image fusion (MMIF) technology utilizes the complementarity of different modalities to provide more comprehensive diagnostic insights for clinical practice. Existing deep learning-based methods often focus on extracting the primary information from individual modalities while ignoring the correlation of information distribution across modalities, which leads to insufficient fusion of image details and color information. To address this problem, a diffusion-driven MMIF method is proposed to leverage the information distribution relationship among multi-modality images in the latent space. To better preserve the complementary information from different modalities, a local and global network (LAGN) is introduced. Additionally, a loss strategy is designed to establish robust constraints among the diffusion-generated images, original images, and fused images. This strategy supervises the training process and prevents information loss in the fused images. Experimental results demonstrate that the proposed method surpasses state-of-the-art image fusion methods in terms of unsupervised metrics on three datasets: MRI/CT, MRI/PET, and MRI/SPECT images. The proposed method successfully captures rich details and color information. Furthermore, 16 doctors and medical students were invited to evaluate the effectiveness of the method in assisting clinical diagnosis and treatment.
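The loss strategy described here, constraining the fused image against both the originals and the diffusion-generated reconstructions, can be illustrated with a minimal PyTorch sketch. The weighting, the max-intensity target, and the max-gradient detail term are illustrative assumptions, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def _grads(x):
    # Horizontal and vertical finite differences of a (B, C, H, W) image.
    return x[..., :, 1:] - x[..., :, :-1], x[..., 1:, :] - x[..., :-1, :]

def fusion_loss(fused, mri, func_img, mri_recon, func_recon,
                w_orig=1.0, w_recon=0.5, w_grad=1.0):
    """Composite constraint among original, diffusion-generated, and fused images."""
    # Intensity fidelity to the brighter of the two source modalities.
    l_orig = F.l1_loss(fused, torch.maximum(mri, func_img))
    # Consistency with the diffusion-generated reconstructions.
    l_recon = F.l1_loss(fused, 0.5 * (mri_recon + func_recon))
    # Detail preservation: match the stronger local gradient of the two inputs.
    fdx, fdy = _grads(fused)
    mdx, mdy = _grads(mri)
    pdx, pdy = _grads(func_img)
    l_grad = (F.l1_loss(fdx.abs(), torch.maximum(mdx.abs(), pdx.abs())) +
              F.l1_loss(fdy.abs(), torch.maximum(mdy.abs(), pdy.abs())))
    return w_orig * l_orig + w_recon * l_recon + w_grad * l_grad
```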

{"title":"Diffusion-driven multi-modality medical image fusion.","authors":"Jiantao Qu, Dongjin Huang, Yongsheng Shi, Jinhua Liu, Wen Tang","doi":"10.1007/s11517-025-03300-6","DOIUrl":"https://doi.org/10.1007/s11517-025-03300-6","url":null,"abstract":"<p><p>Multi-modality medical image fusion (MMIF) technology utilizes the complementarity of different modalities to provide more comprehensive diagnostic insights for clinical practice. Existing deep learning-based methods often focus on extracting the primary information from individual modalities while ignoring the correlation of information distribution across different modalities, which leads to insufficient fusion of image details and color information. To address this problem, a diffusion-driven MMIF method is proposed to leverage the information distribution relationship among multi-modality images in the latent space. To better preserve the complementary information from different modalities, a local and global network (LAGN) is suggested. Additionally, a loss strategy is designed to establish robust constraints among diffusion-generated images, original images, and fused images. This strategy supervises the training process and prevents information loss in fused images. The experimental results demonstrate that the proposed method surpasses state-of-the-art image fusion methods in terms of unsupervised metrics on three datasets: MRI/CT, MRI/PET, and MRI/SPECT images. The proposed method successfully captures rich details and color information. Furthermore, 16 doctors and medical students were invited to evaluate the effectiveness of our method in assisting clinical diagnosis and treatment.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143392336","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving movement decoding performance under joint constraints based on a neural-driven musculoskeletal model.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-11 | DOI: 10.1007/s11517-025-03321-1
Lizhi Pan, Xingyu Yan, Shizhuo Yue, Jianmin Li

The electromyography-driven musculoskeletal model (E-DMM) connects the user's control commands with joint positions from a physiological perspective. However, features extracted directly from surface EMG signals may be affected by signal crosstalk and amplitude cancellation. This limitation can be addressed with decomposition algorithms for high-density (HD) EMG signals, which have demonstrated the capability of extracting neural drives for human-machine interfaces. On this basis, we propose a neural-driven musculoskeletal model (N-DMM) with improved movement decoding performance for estimating wrist and metacarpophalangeal (MCP) joint positions under joint constraints. Eight limb-intact subjects participated in a mirrored bilateral training experiment. The wrist and MCP joints on one side were constrained, and HD EMG signals from that side were recorded. Meanwhile, the unconstrained side mirrored the joint movements of the phantom limb while the joint angles were measured simultaneously. The recorded EMG signals were processed with the fast independent component analysis algorithm to extract motor unit discharges, enabling the estimation of neural drives. The neural drives were then taken as inputs to the N-DMM to estimate joint movements. For comparison, an E-DMM was also employed for joint angle prediction. The results indicated that our N-DMM demonstrated superior performance compared to the E-DMM, potentially allowing for more accurate and robust decoding of continuous movements under joint constraints. Further improvement of the proposed model could offer a promising approach for practical applications in amputees.
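As a rough sketch of the decomposition step described above: FastICA separates the HD EMG into candidate motor-unit sources, and the smoothed discharge trains serve as a proxy for the neural drive. The thresholding rule, window length, and component count are assumptions for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

def estimate_neural_drive(emg, fs, n_units=10, win_s=0.4):
    """emg: (n_samples, n_channels) HD EMG; returns (n_samples, n_units)
    smoothed discharge trains approximating the neural drive."""
    ica = FastICA(n_components=n_units, whiten="unit-variance", max_iter=1000)
    sources = ica.fit_transform(emg)                    # (n_samples, n_units)
    win = np.hanning(max(int(win_s * fs), 3))
    win /= win.sum()
    drives = np.zeros_like(sources)
    for k in range(n_units):
        s = np.abs(sources[:, k])
        spikes = (s > s.mean() + 3.0 * s.std()).astype(float)  # crude discharge detection
        drives[:, k] = np.convolve(spikes, win, mode="same")   # smooth to a drive estimate
    return drives
```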

{"title":"Improving movement decoding performance under joint constraints based on a neural-driven musculoskeletal model.","authors":"Lizhi Pan, Xingyu Yan, Shizhuo Yue, Jianmin Li","doi":"10.1007/s11517-025-03321-1","DOIUrl":"https://doi.org/10.1007/s11517-025-03321-1","url":null,"abstract":"<p><p>Electromyography-driven musculoskeletal model (E-DMM) connects the user's control commands with the joint positions from a physiological perspective. However, features extracted directly from the surface EMG signals may be affected by signal crosstalk and amplitude cancellation. This limitation can be addressed with the decomposition algorithms for high-density (HD) EMG signals, which demonstrate the capability of extracting neural drives for the human-machine interface. On this basis, we proposed a neural-driven musculoskeletal model (N-DMM) with improved movement decoding performance for estimating wrist and metacarpophalangeal (MCP) joint positions under joint constraints. Eight limb-intact subjects participated in the experiment of mirrored bilateral training. The wrist and MCP joints of the subjects on one side were constrained, and the HD EMG signals from the same side were recorded. Moreover, the unconstrained side mirrored the joint movements of the phantom limb, while the joint angles were measured simultaneously. The obtained EMG signals were processed with the fast independent component analysis algorithm to extract motor unit discharges, enabling the estimation of neural drives. Then the neural drives were taken as inputs for the N-DMM to estimate joint movements. For comparison, an E-DMM was also employed for joint angle prediction. The results indicated that our N-DMM demonstrated superior performance compared to the E-DMM, potentially allowing for more accurate and robust decoding of continuous movements under joint constraints. Further improvement of the proposed model could offer a promising approach for practical applications in amputees.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143400612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Closed loop automated drug infusion regulation based on optimal 2-DOF TID control approach for the mean arterial blood pressure.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-10 | DOI: 10.1007/s11517-025-03313-1
Oguzhan Karahan, Hasan Karci

This work aims to design an optimal controller for regulating mean arterial blood pressure (MAP) during the cardiac cycle in surgical and post-surgical conditions to enhance automated drug infusion. MAP controllers must address uncertainties such as external disturbances, time-varying parameters, and noise. Thus, closed-loop control is essential to normalize MAP regardless of the patient's pharmacokinetics during surgery. A two-degree-of-freedom tilt integral derivative (2-DOF TID) controller, tuned by the Chernobyl Disaster Optimizer (CDO) algorithm, is proposed to dynamically adjust sodium nitroprusside (SNP) infusion rates under various conditions. The performance of this 2-DOF TID controller is compared with CDO-based PID, 2-DOF PID, and TID controllers. The results demonstrate the effectiveness and robustness of the proposed controller in achieving and maintaining MAP at 100 mmHg. All controllers are evaluated on different patient responses to SNP infusion, including fixed and time-varying sensitivities, as well as external disturbances and noise. The study reveals which controller performs best in terms of overshoot, settling time, error, disturbance rejection, and anti-interference ability, confirming the 2-DOF TID controller as a strong candidate for automated drug infusion systems in clinical settings.
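A minimal discrete-time sketch can clarify the 2-DOF TID structure: the tilt term 1/s^(1/n) is approximated here with Grünwald-Letnikov fractional-integrator weights, and the two degrees of freedom enter as setpoint weights b and c. The gains, n, and weights below are placeholders, not the CDO-tuned values from the study:

```python
import numpy as np

class TwoDofTID:
    """2-DOF tilt-integral-derivative controller, discrete time."""
    def __init__(self, Kt, Ki, Kd, n=3.0, h=0.01, b=0.7, c=0.0, mem=2000):
        self.Kt, self.Ki, self.Kd, self.h = Kt, Ki, Kd, h
        self.b, self.c = b, c                 # setpoint weights (the 2-DOF part)
        alpha = -1.0 / n                      # tilt action = fractional integral of order 1/n
        w = np.ones(mem)
        for j in range(1, mem):               # Grunwald-Letnikov binomial weights, recursive form
            w[j] = w[j - 1] * (1.0 - (alpha + 1.0) / j)
        self.w, self.scale = w, h ** (1.0 / n)   # scale = h**(-alpha)
        self.hist = np.zeros(mem)             # tilt-path error history, newest sample first
        self.i_sum, self.e_d_prev = 0.0, 0.0

    def step(self, r, y):
        e_t, e_i, e_d = self.b * r - y, r - y, self.c * r - y
        self.hist = np.roll(self.hist, 1)
        self.hist[0] = e_t
        tilt = self.scale * float(self.w @ self.hist)
        self.i_sum += e_i * self.h
        deriv = (e_d - self.e_d_prev) / self.h
        self.e_d_prev = e_d
        return self.Kt * tilt + self.Ki * self.i_sum + self.Kd * deriv
```

In a closed-loop simulation, `step` would be called once per sample with the 100 mmHg MAP reference and the measured MAP, with its output saturated to the admissible SNP infusion range.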

{"title":"Closed loop automated drug infusion regulation based on optimal 2-DOF TID control approach for the mean arterial blood pressure.","authors":"Oguzhan Karahan, Hasan Karci","doi":"10.1007/s11517-025-03313-1","DOIUrl":"https://doi.org/10.1007/s11517-025-03313-1","url":null,"abstract":"<p><p>This work aims to design an optimal controller for regulating mean arterial blood pressure (MAP) during the cardiac cycle in surgical and post-surgical conditions to enhance automated drug infusion. MAP controllers must address uncertainties like external disturbances, time-varying parameters, and noise. Thus, closed-loop control is essential to normalize MAP regardless of the patient's pharmacokinetics during surgery. A two-degree-of-freedom tilt integral derivative (2-DOF TID) controller, tuned by the Chernobyl Disaster Optimizer (CDO) algorithm, is proposed to dynamically adjust sodium nitroprusside (SNP) infusion rates in various conditions. The performance of this 2-DOF TID controller is compared with CDO-based PID, 2-DOF PID, and TID controllers. The results demonstrate the effectiveness and robustness of the proposed controller in achieving and maintaining MAP at 100 mmHg. All controllers are evaluated on different patient responses, including fixed and time-varying sensitivities, to SNP infusion, external disturbances, and noise. The study reveals which controller performs best in terms of overshoot, settling time, error, disturbance rejection, and anti-interference ability, confirming the 2-DOF TID controller as a strong candidate for automated drug infusion systems in clinical settings.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143383914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deformation registration based on reconstruction of brain MRI images with pathologies.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-10 | DOI: 10.1007/s11517-025-03319-9
Li Lian, Qing Chang

Deformable registration between brain tumor images and a brain atlas has been an important tool for facilitating pathological analysis. However, registration of images with tumors is challenging due to the absent correspondences induced by the tumor. Furthermore, tumor growth may displace surrounding tissue, causing larger deformations than those observed in healthy brains. Therefore, we propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce the symmetric-constrained feature reasoning (SFR) module, which reconstructs the missing normal appearance within tumor regions, allowing a dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. A dilated multi-receptive feature fusion module is further introduced, which collects long-range features from different dimensions to facilitate tumor region reconstruction, especially for large tumors. The reconstructed tumor images and the atlas are then jointly fed into the multi-stage feature warping module (MFW) to progressively predict spatial transformations. The method was evaluated on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results showed that the proposed method effectively handles brain tumor image registration, maintaining smooth deformation in the tumor region while maximizing image similarity in normal regions.
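The warping step that cascade registration networks repeat at each stage reduces to resampling an image with a predicted dense displacement field. A minimal PyTorch sketch, with shapes and names as assumptions:

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """image: (B, C, H, W); flow: (B, 2, H, W) displacement in pixels, (x, y) order."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=image.device),
                            torch.arange(W, device=image.device), indexing="ij")
    base = torch.stack((xs, ys), dim=0).float().unsqueeze(0)  # (1, 2, H, W) identity grid
    coords = base + flow                                      # absolute sampling positions
    # Normalize to [-1, 1] as required by grid_sample.
    xs_n = 2.0 * coords[:, 0] / (W - 1) - 1.0
    ys_n = 2.0 * coords[:, 1] / (H - 1) - 1.0
    grid = torch.stack((xs_n, ys_n), dim=-1)                  # (B, H, W, 2)
    return F.grid_sample(image, grid, align_corners=True)
```

Composing such warps across stages yields the progressive spatial transformation that a module like MFW predicts.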

{"title":"Deformation registration based on reconstruction of brain MRI images with pathologies.","authors":"Li Lian, Qing Chang","doi":"10.1007/s11517-025-03319-9","DOIUrl":"https://doi.org/10.1007/s11517-025-03319-9","url":null,"abstract":"<p><p>Deformable registration between brain tumor images and brain atlas has been an important tool to facilitate pathological analysis. However, registration of images with tumors is challenging due to absent correspondences induced by the tumor. Furthermore, the tumor growth may displace the tissue, causing larger deformations than what is observed in healthy brains. Therefore, we propose a new reconstruction-driven cascade feature warping (RCFW) network for brain tumor images. We first introduce the symmetric-constrained feature reasoning (SFR) module which reconstructs the missed normal appearance within tumor regions, allowing a dense spatial correspondence between the reconstructed quasi-normal appearance and the atlas. The dilated multi-receptive feature fusion module is further introduced, which collects long-range features from different dimensions to facilitate tumor region reconstruction, especially for large tumor cases. Then, the reconstructed tumor images and atlas are jointly fed into the multi-stage feature warping module (MFW) to progressively predict spatial transformations. The method was performed on the Multimodal Brain Tumor Segmentation (BraTS) 2021 challenge database and compared with six existing methods. Experimental results showed that the proposed method effectively handles the problem of brain tumor image registration, which can maintain the smooth deformation of the tumor region while maximizing the image similarity of normal regions.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143383915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A hardware-efficient on-implant spike compression processor based on VQ-DAE for brain-implantable microsystems.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-08 | DOI: 10.1007/s11517-025-03317-x
Nazanin Ahmadi-Dastgerdi, Hossein Hosseini-Nejad, Hamid Alinejad-Rokny

High-density implantable neural recording microsystems deal with a huge amount of data. Since wireless transmission of the raw recorded data leads to excessive bandwidth requirements, spike compression approaches have become vital to such systems. Because the compression processor is implemented on the implant, its hardware cost is of great importance to avoid any tissue damage. The vector quantization (VQ) algorithm has proven effective in compression applications, including spike compression systems. In this paper, benefiting from the capabilities of denoising autoencoders (DAE), we propose a solution that enhances the compression performance of the VQ-based approach in terms of both reconstruction accuracy and hardware efficiency. Moreover, we develop a hardware-efficient multi-channel architecture for the proposed VQ-DAE processor. The processor has been implemented in a 180-nm CMOS technology, and the validation and verification processes confirm that it provides satisfactory results. It achieves an average signal-to-noise-and-distortion ratio (SNDR) of 14.51 at a spike compression ratio (SCR) of 30. Operated at a clock frequency of 192 kHz and a supply voltage of 1.8 V, the circuit consumes 4.88 μW of power and occupies a silicon area of 0.14 mm² per channel.
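The core vector-quantization idea is compact enough to sketch: each detected spike waveform is replaced by the index of its nearest codebook vector, so only log2(K) bits per spike cross the wireless link. The codebook size, spike length, and bit depth below are assumptions:

```python
import numpy as np

def vq_encode(spikes, codebook):
    """spikes: (n, L) waveforms; codebook: (K, L). Returns (n,) codeword indices."""
    d = ((spikes[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)  # nearest codeword per spike

def vq_decode(indices, codebook):
    # Receiver-side reconstruction: look up the codeword for each index.
    return codebook[indices]

# Illustrative numbers: 64 codewords for 48-sample, 16-bit spikes send
# 6 bits instead of 768 per spike, before sync and framing overheads.
```

In the paper's design, the DAE is what lifts this VQ baseline's reconstruction accuracy and hardware efficiency.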

{"title":"A hardware-efficient on-implant spike compression processor based on VQ-DAE for brain-implantable microsystems.","authors":"Nazanin Ahmadi-Dastgerdi, Hossein Hosseini-Nejad, Hamid Alinejad-Rokny","doi":"10.1007/s11517-025-03317-x","DOIUrl":"https://doi.org/10.1007/s11517-025-03317-x","url":null,"abstract":"<p><p>High-density implantable neural recording microsystems deal with a huge amount of data. Since the wireless transmission of the raw recorded data leads to excessive bandwidth requirements, spike compression approaches have become vital to such systems. The compression processor is designed to be implemented on the implant and so to avoid any tissue damage, the hardware cost of the processor is of great importance. The vector quantization (VQ) algorithm has proven to be effective in compression applications and spike compression systems as well. In this paper, benefiting from the capabilities of the denoising autoencoders (DAE), we propose a solution to enhance the compression performance of the VQ-based approach in terms of both reconstruction accuracy and hardware efficiency. Moreover, we develop a hardware-efficient multi-channel architecture for the proposed VQ-DAE processor. The processor has been implemented in a 180-nm CMOS technology and the validation and verification processes confirm that it provides satisfactory results. It achieves an average signal-to-noise-distortion (SNDR) of 14.51 at a spike compression ratio (SCR) of 30. Operated at a clock frequency of 192 kHz and a supply voltage of 1.8 V, the circuit consumes a power of 4.88 <math><mrow><mi>μ</mi> <mi>W</mi></mrow> </math> and a silicon area of 0.14 mm<sup>2</sup> per channel.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A monocular thoracoscopic 3D scene reconstruction framework based on NeRF.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-08 | DOI: 10.1007/s11517-025-03316-y
Juntao Han, Ziming Zhang, Wenjun Tan, Yufei Wang, Mingxiao Li

With the increasing use of image-based 3D reconstruction in medical procedures, accurate scene reconstruction plays a crucial role in surgical navigation and assisted treatment. However, the monotonous colors, limited image features, and obvious brightness fluctuations of thoracoscopic scenes make the feature point matching process, on which traditional 3D reconstruction methods rely, unstable and unreliable, posing a great challenge to accurate 3D reconstruction. In this study, a new method for implicit 3D reconstruction of monocular thoracoscopic scenes is proposed. The method combines a pre-trained metric depth estimation model with the neural radiance field (NeRF) technique and uses dense SLAM to accurately compute the camera pose. To ensure the accuracy of the depth values and the structural consistency of the reconstructed scene, depth and normal constraints are added to the original color constraints of the NeRF network to achieve high-quality scene reconstruction. We conducted experiments on the SCARED dataset and a clinical dataset. Compared with other methods, our approach outperforms existing methods in depth estimation accuracy and point cloud reconstruction quality. The proposed method provides more accurate 3D reconstruction of complex thoracic surgical scenes, which can significantly improve the accuracy and therapeutic efficacy of surgical navigation.
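The depth and normal constraints added to NeRF's color objective can be written as a short loss sketch. The lambda weights and the use of monocular priors as supervision targets are assumptions for illustration, not the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def nerf_loss(rgb_pred, rgb_gt, depth_pred, depth_prior,
              normal_pred, normal_prior, lam_d=0.1, lam_n=0.05):
    """Per-batch NeRF objective with added geometric terms.
    rgb_*: (N, 3); depth_*: (N,); normal_*: (N, 3) unit vectors."""
    l_color = F.mse_loss(rgb_pred, rgb_gt)            # original NeRF photometric term
    l_depth = F.l1_loss(depth_pred, depth_prior)      # metric-depth consistency
    # Normal consistency: 1 - cosine similarity, averaged over sampled rays.
    l_normal = (1.0 - F.cosine_similarity(normal_pred, normal_prior, dim=-1)).mean()
    return l_color + lam_d * l_depth + lam_n * l_normal
```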

{"title":"A monocular thoracoscopic 3D scene reconstruction framework based on NeRF.","authors":"Juntao Han, Ziming Zhang, Wenjun Tan, Yufei Wang, Mingxiao Li","doi":"10.1007/s11517-025-03316-y","DOIUrl":"https://doi.org/10.1007/s11517-025-03316-y","url":null,"abstract":"<p><p>With the increasing use of image-based 3D reconstruction in medical procedures, accurate scene reconstruction plays a crucial role in surgical navigation and assisted treatment. However, the monotonous colors, limited image features, and obvious brightness fluctuations of thoracoscopic scenes make the feature point matching process, on which traditional 3D reconstruction methods rely, unstable and unreliable. It brings a great challenge to accurate 3D reconstruction. In this study, a new method for implicit 3D reconstruction of monocular thoracoscopic scenes is proposed. The method combines a pre-trained metric depth estimation model with neural radiation field (NeRF) technique and uses dense SLAM to accurately compute the camera pose. To ensure the accuracy of the depth values and the structural consistency of the reconstructed scene, depth and normal constraints are added to the original color constraints of the NeRF network to achieve high-quality scene reconstruction results. We conducted experiments on the SCARED dataset and the clinical dataset. After comparing with other methods, the depth estimation accuracy and point cloud reconstruction quality of this paper outperform the existing methods. The method in this paper can provide more accurate 3D reconstruction of complex thoracic surgical scenes, which can significantly improve the accuracy and therapeutic efficacy of surgical navigation.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143374947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Patient performance assessment methods for upper extremity rehabilitation in assist-as-needed therapy strategies: a comprehensive review.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-07 | DOI: 10.1007/s11517-025-03315-z
Erkan Ödemiş, Cabbar Veysel Baysal, Mustafa İnci

This paper comprehensively reviews patient performance assessment (PPA) methods used in assist-as-needed (AAN) robotic therapy for upper extremity rehabilitation. AAN strategies adjust robotic assistance according to the patient's performance, aiming to enhance engagement and recovery in individuals with motor impairments. This review is the first to categorize the PPA methods implemented in the literature at such a broad scope, and it suggests future research directions to improve adaptive and personalized therapy. The studies are first examined to evaluate PPA methods, which are then categorized according to their underlying implementation strategies: position error-based methods, force-based methods, electromyography (EMG)- and electroencephalography (EEG)-based methods, performance indicator-based methods, and physiological signal-based methods. The advantages and limitations of each method are discussed. In addition to the classification of PPA methods, the current study also examines clinically tested AAN strategies applied in upper extremity rehabilitation and their clinical outcomes. Clinical findings from these trials demonstrate the potential of AAN strategies in improving motor function and patient engagement. Nevertheless, more extensive clinical testing is necessary to establish the long-term benefits of these strategies over conventional therapies. Ultimately, this review aims to guide future developments in the field of robotic rehabilitation, providing researchers with insights into optimizing AAN strategies for enhanced patient outcomes.
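The adaptation loop common to the surveyed AAN strategies can be reduced to a toy rule: assistance rises when a position-error-based performance score is worse than a target and decays as the patient improves. The constants and the RMSE-based score are illustrative assumptions:

```python
def update_assistance(gain, tracking_rmse, target_rmse=0.02,
                      rate=0.1, g_min=0.0, g_max=1.0):
    """One AAN update: scale the assistance gain by normalized tracking error."""
    gain += rate * (tracking_rmse - target_rmse) / target_rmse
    return min(max(gain, g_min), g_max)  # keep assistance within actuator limits
```

Force-, EMG-, EEG-, and physiological-signal-based PPA methods differ mainly in how the performance score feeding such a rule is computed.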

{"title":"Patient performance assessment methods for upper extremity rehabilitation in assist-as-needed therapy strategies: a comprehensive review.","authors":"Erkan Ödemiş, Cabbar Veysel Baysal, Mustafa İnci","doi":"10.1007/s11517-025-03315-z","DOIUrl":"https://doi.org/10.1007/s11517-025-03315-z","url":null,"abstract":"<p><p>This paper aims to comprehensively review patient performance assessment (PPA) methods used in assist-as-needed (AAN) robotic therapy for upper extremity rehabilitation. AAN strategies adjust robotic assistance according to the patient's performance, aiming to enhance engagement and recovery in individuals with motor impairments. This review categorizes the implemented PPA methods in the literature for the first time in such a wide scope and suggests future research directions to improve adaptive and personalized therapy. At first, the studies are examined to evaluate PPA methods, which are subsequently categorized according to their underlying implementation strategies: position error-based methods, force-based methods, electromyography (EMG), electroencephalography (EEG)-based methods, performance indicator-based methods, and physiological signal-based methods. The advantages and limitations of each method are discussed. In addition to the classification of PPA methods, the current study also examines clinically tested AAN strategies applied in upper extremity rehabilitation and their clinical outcomes. Clinical findings from these trials demonstrate the potential of AAN strategies in improving motor function and patient engagement. Nevertheless, more extensive clinical testing is necessary to establish the long-term benefits of these strategies over conventional therapies. Ultimately, this review aims to guide future developments in the field of robotic rehabilitation, providing researchers with insights into optimizing AAN strategies for enhanced patient outcomes.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-07 | DOI: 10.1007/s11517-025-03314-0
Sultan Daud Khan, Saleh Basalamah, Ahmed Lbath

Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to inherent issues associated with fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification, designed to achieve high accuracy in identifying various retinal diseases while overcoming the inherent challenges associated with fundus images. The framework consists of three main modules. The first module is a Densely Connected Multidilated Convolutional Neural Network (DCM-CNN) that extracts global contextual information by effectively integrating novel Causal Dilated Dense Convolutional Blocks (CDDCBs). The second module, the Local-Patch-based Convolutional Neural Network (LP-CNN), utilizes the Class Activation Map (CAM) obtained from DCM-CNN to extract local and fine-grained information. A synergic network then takes the feature maps of both DCM-CNN and LP-CNN and connects them in a fully connected fashion to identify the correct class and minimize the error. The framework is evaluated through a comprehensive set of experiments, both quantitative and qualitative, on two publicly available benchmark datasets, RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework, which achieves higher performance on the RFMiD and ODIR-5K datasets than reference methods.
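The CAM step that links the two branches can be sketched briefly: the class activation map from the global branch locates the most discriminative region, and that crop becomes the local branch's input. Tensor shapes and the crop size are assumptions:

```python
import torch
import torch.nn.functional as F

def cam_patch(features, fc_weight, cls, image, patch=224):
    """features: (1, C, h, w) last conv maps; fc_weight: (n_cls, C);
    image: (1, 3, H, W) with H, W >= patch. Returns the peak-activation crop."""
    cam = torch.einsum("c,chw->hw", fc_weight[cls], features[0])     # class activation map
    cam = F.interpolate(cam[None, None], size=image.shape[-2:],
                        mode="bilinear", align_corners=False)[0, 0]  # upsample to image size
    y, x = divmod(int(cam.argmax()), cam.shape[1])                   # peak location (row, col)
    H, W = image.shape[-2:]
    top = min(max(y - patch // 2, 0), H - patch)                     # clamp crop to image bounds
    left = min(max(x - patch // 2, 0), W - patch)
    return image[..., top:top + patch, left:left + patch]
```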

{"title":"A novel deep learning framework for retinal disease detection leveraging contextual and local features cues from retinal images.","authors":"Sultan Daud Khan, Saleh Basalamah, Ahmed Lbath","doi":"10.1007/s11517-025-03314-0","DOIUrl":"https://doi.org/10.1007/s11517-025-03314-0","url":null,"abstract":"<p><p>Retinal diseases are a serious global threat to human vision, and early identification is essential for effective prevention and treatment. However, current diagnostic methods rely on manual analysis of fundus images, which heavily depends on the expertise of ophthalmologists. This manual process is time-consuming and labor-intensive and can sometimes lead to missed diagnoses. With advancements in computer vision technology, several automated models have been proposed to improve diagnostic accuracy for retinal diseases and medical imaging in general. However, these methods face challenges in accurately detecting specific diseases within images due to inherent issues associated with fundus images, including inter-class similarities, intra-class variations, limited local information, insufficient contextual understanding, and class imbalances within datasets. To address these challenges, we propose a novel deep learning framework for accurate retinal disease classification. This framework is designed to achieve high accuracy in identifying various retinal diseases while overcoming inherent challenges associated with fundus images. Generally, the framework consists of three main modules. The first module is Densely Connected Multidilated Convolution Neural Network (DCM-CNN) that extracts global contextual information by effectively integrating novel Casual Dilated Dense Convolutional Blocks (CDDCBs). The second module of the framework, namely, Local-Patch-based Convolution Neural Network (LP-CNN), utilizes Class Activation Map (CAM) (obtained from DCM-CNN) to extract local and fine-grained information. To identify the correct class and minimize the error, a synergic network is utilized that takes the feature maps of both DCM-CNN and LP-CNN and connects both maps in a fully connected fashion to identify the correct class and minimize the errors. The framework is evaluated through a comprehensive set of experiments, both quantitatively and qualitatively, using two publicly available benchmark datasets: RFMiD and ODIR-5K. Our experimental results demonstrate the effectiveness of the proposed framework and achieves higher performance on RFMiD and ODIR-5K datasets compared to reference methods.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143366651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transformer-based fusion model for mild depression recognition with EEG and pupil area signals.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-06 | DOI: 10.1007/s11517-024-03269-8
Jing Zhu, Yuanlong Li, Changlin Yang, Hanshu Cai, Xiaowei Li, Bin Hu

Early detection and treatment are crucial for the prevention and treatment of depression; compared with major depression, current research pays less attention to mild depression. Meanwhile, analysis of multimodal biosignals such as EEG, eye movement data, and magnetic resonance imaging provides reliable technical means for the quantitative analysis of depression. However, effectively capturing relevant and complementary information across multimodal data to achieve efficient and accurate depression recognition remains a challenge. This paper proposes a novel Transformer-based fusion model using EEG and pupil area signals for mild depression recognition. We first introduce CSP into the Transformer to construct single-modal models of the EEG and pupil data, and then utilize an attention bottleneck to construct a mid-fusion model that facilitates information exchange between the two modalities; this strategy enables the model to learn the most relevant and complementary information for each modality and share only the necessary information, which improves accuracy while reducing the computational cost. Experimental results show that the single-modal EEG and pupil-area models achieve accuracies of 89.75% and 84.17%, precisions of 92.04% and 95.21%, recalls of 89.5% and 71%, specificities of 90% and 97.33%, and F1 scores of 89.41% and 78.44%, respectively, while the mid-fusion model reaches an accuracy of 93.25%. Our study demonstrates that the Transformer model can learn the long-term time-dependent relationship between EEG and pupil area signals, providing an idea for designing a reliable multimodal fusion model for mild depression recognition based on EEG and pupil area signals.
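The attention-bottleneck idea can be illustrated in a few lines: a small set of shared fusion tokens is appended to each modality's token sequence, and only those tokens carry information across modalities. Dimensions and layer choices are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class BottleneckFusion(nn.Module):
    def __init__(self, dim=64, n_bottleneck=4, n_heads=4):
        super().__init__()
        self.btk = nn.Parameter(torch.randn(1, n_bottleneck, dim))  # shared fusion tokens
        self.eeg_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.pupil_layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)

    def forward(self, eeg_tokens, pupil_tokens):
        B, n = eeg_tokens.size(0), self.btk.size(1)
        b = self.btk.expand(B, -1, -1)
        # The EEG stream writes into the bottleneck tokens...
        z_eeg = self.eeg_layer(torch.cat([eeg_tokens, b], dim=1))
        b = z_eeg[:, -n:]
        # ...which then pass only the shared information into the pupil stream.
        z_pup = self.pupil_layer(torch.cat([pupil_tokens, b], dim=1))
        return z_eeg[:, :-n], z_pup[:, :-n], z_pup[:, -n:]
```

Restricting cross-modal exchange to a handful of tokens is what keeps the fusion's computational cost low relative to full cross-attention.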

{"title":"Transformer-based fusion model for mild depression recognition with EEG and pupil area signals.","authors":"Jing Zhu, Yuanlong Li, Changlin Yang, Hanshu Cai, Xiaowei Li, Bin Hu","doi":"10.1007/s11517-024-03269-8","DOIUrl":"https://doi.org/10.1007/s11517-024-03269-8","url":null,"abstract":"<p><p>Early detection and treatment are crucial for the prevention and treatment of depression; compared with major depression, current researches pay less attention to mild depression. Meanwhile, analysis of multimodal biosignals such as EEG, eye movement data, and magnetic resonance imaging provides reliable technical means for the quantitative analysis of depression. However, how to effectively capture relevant and complementary information between multimodal data so as to achieve efficient and accurate depression recognition remains a challenge. This paper proposes a novel Transformer-based fusion model using EEG and pupil area signals for mild depression recognition. We first introduce CSP into the Transformer to construct single-modal models of EEG and pupil data and then utilize attention bottleneck to construct a mid-fusion model to facilitate information exchange between the two modalities; this strategy enables the model to learn the most relevant and complementary information for each modality and only share the necessary information, which improves the model accuracy while reducing the computational cost. Experimental results show that the accuracy of the EEG and pupil area signals of single-modal models we constructed is 89.75% and 84.17%, the precision is 92.04% and 95.21%, the recall is 89.5% and 71%, the specificity is 90% and 97.33%, the F1 score is 89.41% and 78.44%, respectively, and the accuracy of mid-fusion model can reach 93.25%. Our study demonstrates that the Transformer model can learn the long-term time-dependent relationship between EEG and pupil area signals, providing an idea for designing a reliable multimodal fusion model for mild depression recognition based on EEG and pupil area signals.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143257180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Class-aware multi-level attention learning for semi-supervised breast cancer diagnosis under imbalanced label distribution.
IF 2.6 | Medicine (CAS Zone 4) | Q2 COMPUTER SCIENCE, INTERDISCIPLINARY APPLICATIONS | Pub Date: 2025-02-05 | DOI: 10.1007/s11517-025-03291-4
Renjun Wen, Yufei Ma, Changdong Liu, Renwei Feng

Breast cancer affects a significant number of patients worldwide, and early diagnosis is critical for improving cure rates and prognosis. Deep learning-based breast cancer classification algorithms have substantially alleviated the burden on medical personnel. However, existing breast cancer diagnosis models face notable limitations: reliance on a large volume of labeled samples, which are challenging to obtain in clinical settings; an inability to comprehensively extract features from breast cancer images; and susceptibility to overfitting on account of imbalanced class distribution. Therefore, we propose a class-aware multi-level attention learning model for semi-supervised breast cancer diagnosis that effectively reduces the dependency on extensive data annotation. Additionally, we develop a multi-level fusion attention learning module, which integrates multiple mutual attention components across different layers, allowing the model to precisely identify critical regions for lesion categorization. Finally, we design a class-aware adaptive pseudo-labeling module, which adaptively predicts the category distribution in unlabeled data and directs the model to focus on underrepresented categories, ensuring a balanced learning process. Experimental results on the BACH dataset demonstrate that our proposed model achieves an accuracy of 86.7% with only 40% of the microscopic data labeled, showcasing its outstanding contribution to semi-supervised breast cancer diagnosis.
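A rough sketch of class-aware adaptive pseudo-labeling, under assumptions: per-class confidence thresholds are relaxed for classes the model currently predicts rarely, so minority classes still receive pseudo-labels. The threshold rule below is illustrative, not the paper's exact design:

```python
import torch

def pseudo_labels(logits, base_tau=0.95, floor=0.5):
    """logits: (N, n_cls) on unlabeled data. Returns (labels, keep_mask)."""
    probs = torch.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    # Estimated class frequency among the current predictions.
    freq = torch.bincount(labels, minlength=logits.size(1)).float()
    freq = freq / freq.sum().clamp(min=1.0)
    # Rarer classes get proportionally lower thresholds (never below `floor`).
    tau = (base_tau * (freq / freq.max().clamp(min=1e-8)).sqrt()).clamp(min=floor)
    keep = conf >= tau[labels]
    return labels, keep
```

Only samples with `keep` set would enter the supervised loss, weighted or resampled to preserve class balance.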

{"title":"Class-aware multi-level attention learning for semi-supervised breast cancer diagnosis under imbalanced label distribution.","authors":"Renjun Wen, Yufei Ma, Changdong Liu, Renwei Feng","doi":"10.1007/s11517-025-03291-4","DOIUrl":"https://doi.org/10.1007/s11517-025-03291-4","url":null,"abstract":"<p><p>Breast cancer affects a significant number of patients worldwide, and early diagnosis is critical for improving cure rates and prognosis. Deep learning-based breast cancer classification algorithms have substantially alleviated the burden on medical personnel. However, existing breast cancer diagnosis models face notable limitations which are challenging to obtain in clinical settings, such as reliance on a large volume of labeled samples, an inability to comprehensively extract features from breast cancer images, and susceptibility to overfitting on account of imbalanced class distribution. Therefore, we propose the class-aware multi-level attention learning model focused on semi-supervised breast cancer diagnosis to effectively reduce the dependency on extensive data annotation. Additionally, we develop the multi-level fusion attention learning module, which integrates multiple mutual attention components across different layers, allowing the model to precisely identify critical regions for lesion categorization. Finally, we design the class-aware adaptive pseudo-labeling module which adaptively predicts category distribution in unlabeled data, and directs the model to focus on underrepresented categories, ensuring a balanced learning process. Experimental results on the BACH dataset demonstrate that our proposed model achieves an accuracy of 86.7% with only 40% labeled microscopic data, showcasing its outstanding contribution to semi-supervised breast cancer diagnosis.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143191137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0