The aim of this study was to investigate and compare the biomechanical properties of a conventional and a novel hip prosthetic socket using finite element and gait analysis. Based on CT scans of the subject's residual limb, three-dimensional models of the bones, soft tissues, and socket were reconstructed by inverse modeling. The distribution of normal and shear stresses at the residual limb-socket interface during standing was investigated using the finite element method and verified with a purpose-built pressure acquisition module system. A gait experiment compared the conventional and novel sockets. The simulation results are consistent with the experimental data, and the novel socket exhibited superior stress performance and gait outcomes compared to the conventional design. Our findings provide a research basis for evaluating the comfort of hip prosthetic sockets and for optimizing their structural design.
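A pressure comparison of this kind can be summarized with simple agreement statistics. The Python sketch below computes RMSE and Pearson correlation between finite-element-predicted and sensor-measured interface pressures; the arrays are placeholder values, not data from the study.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical interface pressures (kPa) at matching sensor sites:
# values are placeholders, not measurements from the study.
fe_predicted = np.array([48.2, 35.6, 29.1, 41.0, 22.7, 18.9])   # finite element results
measured     = np.array([45.9, 37.2, 27.4, 43.5, 21.1, 20.3])   # pressure acquisition module

rmse = np.sqrt(np.mean((fe_predicted - measured) ** 2))          # root-mean-square error
r, p_value = pearsonr(fe_predicted, measured)                    # linear agreement

print(f"RMSE = {rmse:.2f} kPa, Pearson r = {r:.3f} (p = {p_value:.3f})")
```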
{"title":"Comparative biomechanical analysis of a conventional/novel hip prosthetic socket.","authors":"Yu Qian, Yunzhang Cheng, Shiyao Chen, Mingwei Zhang, Yingyu Fang, Tianyi Zhang","doi":"10.1007/s11517-024-03206-9","DOIUrl":"https://doi.org/10.1007/s11517-024-03206-9","url":null,"abstract":"<p><p>The aim of this study was to investigate and compare the biomechanical properties of the conventional and novel hip prosthetic socket by using the finite element and gait analysis. According to the CT scan model of the subject's residual limb, the bones, soft tissues, and the socket model were reconstructed in three dimensions by using inverse modeling. The distribution of normal and shear stresses at the residual limb-socket interface under the standing condition was investigated using the finite element method and verified by designing a pressure acquisition module system. The gait experiment compared and analyzed the conventional and novel sockets. The results show that the simulation results are consistent with the experimental data. The novel socket exhibited superior stress performance and gait outcomes compared to the conventional design. Our findings provide a research basis for evaluating the comfort of the hip prosthetic socket, optimizing and designing the structure of the socket of the hip.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367226","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-03 | DOI: 10.1007/s11517-024-03197-7
Liangquan Yan, Jumin Zhao, Danyang Shi, Dengao Li, Yi Liu
Heart failure represents the final stage in the progression of diverse cardiac ailments. Throughout the management of heart failure, physicians must review medical imagery to formulate therapeutic regimens for patients. Automated report generation technology serves as a tool to aid physicians in patient management. However, previous studies have failed to generate targeted reports for specific diseases. To produce high-quality medical reports with greater relevance across diverse conditions, we introduce HF-CMN, an automatic report generation model tailored to heart failure. First, the generated report includes comprehensive information pertaining to heart failure gleaned from chest radiographs. Additionally, we construct a storage query matrix grouping based on multi-label types, enhancing the accuracy of our model in aligning images with text. Experimental results demonstrate that our method generates reports strongly correlated with heart failure and outperforms most other advanced methods on the benchmark datasets MIMIC-CXR and IU X-Ray. Further analysis confirms that our method achieves superior alignment between images and texts, resulting in higher-quality reports.
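To illustrate the general idea of a label-grouped memory queried during report generation, here is a minimal PyTorch sketch; the grouping scheme, shapes, and function names are illustrative assumptions, not the actual HF-CMN module.

```python
import torch
import torch.nn.functional as F

# Minimal sketch: a memory bank whose rows are grouped by disease label,
# read out with dot-product attention. Shapes and grouping are illustrative.
num_labels, slots_per_label, dim = 4, 8, 256
memory = torch.randn(num_labels, slots_per_label, dim)   # label-grouped memory matrix

def query_memory(image_feat, label_ids):
    """image_feat: (B, dim); label_ids: list of active label index tensors per sample."""
    responses = []
    for feat, labels in zip(image_feat, label_ids):
        slots = memory[labels].reshape(-1, dim)            # slots of the active labels only
        attn = F.softmax(slots @ feat / dim ** 0.5, dim=0) # attention over those slots
        responses.append(attn @ slots)                     # attention-weighted readout
    return torch.stack(responses)                          # (B, dim)

feats = torch.randn(2, dim)
out = query_memory(feats, [torch.tensor([0, 2]), torch.tensor([1])])
print(out.shape)  # torch.Size([2, 256])
```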
{"title":"HF-CMN: a medical report generation model for heart failure.","authors":"Liangquan Yan, Jumin Zhao, Danyang Shi, Dengao Li, Yi Liu","doi":"10.1007/s11517-024-03197-7","DOIUrl":"https://doi.org/10.1007/s11517-024-03197-7","url":null,"abstract":"<p><p>Heart failure represents the ultimate stage in the progression of diverse cardiac ailments. Throughout the management of heart failure, physicians require observation of medical imagery to formulate therapeutic regimens for patients. Automated report generation technology serves as a tool aiding physicians in patient management. However, previous studies failed to generate targeted reports for specific diseases. To produce high-quality medical reports with greater relevance across diverse conditions, we introduce an automatic report generation model HF-CMN, tailored to heart failure. Firstly, the generated report includes comprehensive information pertaining to heart failure gleaned from chest radiographs. Additionally, we construct a storage query matrix grouping based on a multi-label type, enhancing the accuracy of our model in aligning images with text. Experimental results demonstrate that our method can generate reports strongly correlated with heart failure and outperforms most other advanced methods on benchmark datasets MIMIC-CXR and IU X-Ray. Further analysis confirms that our method achieves superior alignment between images and texts, resulting in higher-quality reports.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2024-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142367227","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-05-09 | DOI: 10.1007/s11517-024-03111-1
Ruiyu Qiu, Mengqiang Zhou, Jieyun Bai, Yaosheng Lu, Huijin Wang
The accurate selection of the ultrasound plane for the fetal head and pubic symphysis is critical for precisely measuring the angle of progression. The traditional method depends heavily on sonographers manually selecting the imaging plane. This process is not only time-intensive and laborious but also prone to variability based on the clinicians' expertise. Consequently, there is a significant need for an automated method driven by artificial intelligence. To enhance the efficiency and accuracy of identifying the pubic symphysis-fetal head standard plane (PSFHSP), we proposed a streamlined neural network, PSFHSP-Net, based on a modified version of ResNet-18. This network comprises a single convolutional layer and three residual blocks designed to mitigate noise interference and bolster feature extraction capabilities. The model's adaptability was further refined by expanding the shared feature layer into task-specific layers. We assessed its performance against both traditional heavyweight and other lightweight models by evaluating metrics such as F1-score, accuracy (ACC), recall, precision, area under the ROC curve (AUC), model parameter count, and frames per second (FPS). The PSFHSP-Net recorded an ACC of 0.8995, an F1-score of 0.9075, a recall of 0.9191, and a precision of 0.9022. This model surpassed other heavyweight and lightweight models in these metrics. Notably, it featured the smallest model size (1.48 MB) and the highest processing speed (65.7909 FPS), meeting the real-time processing criterion of over 24 images per second. While the AUC of our model was 0.930, slightly lower than that of ResNet34 (0.935), it showed a marked improvement over ResNet-18 in testing, with increases in ACC and F1-score of 0.0435 and 0.0306, respectively. However, precision saw a slight decrease from 0.9184 to 0.9022, a reduction of 0.0162. Despite these trade-offs, the compression of the model significantly reduced its size from 42.64 to 1.48 MB and increased its inference speed by 4.4753 to 65.7909 FPS. The results confirm that the PSFHSP-Net is capable of swiftly and effectively identifying the PSFHSP, thereby facilitating accurate measurements of the angle of progression. This development represents a significant advancement in automating fetal imaging analysis, promising enhanced consistency and reduced operator dependency in clinical settings.
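As a rough illustration of the kind of truncated ResNet-18-style architecture described (one stem convolution, three residual blocks, and a shared feature layer expanded into task-specific heads), here is a hedged PyTorch sketch; channel widths, input size, and the head definitions are assumptions, not the published PSFHSP-Net.

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """ResNet-18-style residual block (the building block the paper modifies)."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.short = (nn.Sequential() if stride == 1 and in_ch == out_ch else
                      nn.Sequential(nn.Conv2d(in_ch, out_ch, 1, stride, bias=False),
                                    nn.BatchNorm2d(out_ch)))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.short(x))

class TinyPlaneNet(nn.Module):
    """Illustrative lightweight net: one stem convolution plus three residual blocks,
    with the shared feature layer expanded into two task-specific heads.
    Channel widths and head meanings are assumptions, not the published model."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 7, 2, 3, bias=False),
                                  nn.BatchNorm2d(32), nn.ReLU(inplace=True),
                                  nn.MaxPool2d(3, 2, 1))
        self.blocks = nn.Sequential(BasicBlock(32, 64, 2),
                                    BasicBlock(64, 128, 2),
                                    BasicBlock(128, 256, 2))
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head_plane = nn.Linear(256, num_classes)   # standard-plane vs. non-standard
        self.head_aux = nn.Linear(256, 2)               # hypothetical auxiliary task head

    def forward(self, x):
        feat = self.pool(self.blocks(self.stem(x))).flatten(1)
        return self.head_plane(feat), self.head_aux(feat)

logits_plane, logits_aux = TinyPlaneNet()(torch.randn(1, 1, 224, 224))
print(logits_plane.shape, logits_aux.shape)  # torch.Size([1, 2]) torch.Size([1, 2])
```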
{"title":"PSFHSP-Net: an efficient lightweight network for identifying pubic symphysis-fetal head standard plane from intrapartum ultrasound images.","authors":"Ruiyu Qiu, Mengqiang Zhou, Jieyun Bai, Yaosheng Lu, Huijin Wang","doi":"10.1007/s11517-024-03111-1","DOIUrl":"10.1007/s11517-024-03111-1","url":null,"abstract":"<p><p>The accurate selection of the ultrasound plane for the fetal head and pubic symphysis is critical for precisely measuring the angle of progression. The traditional method depends heavily on sonographers manually selecting the imaging plane. This process is not only time-intensive and laborious but also prone to variability based on the clinicians' expertise. Consequently, there is a significant need for an automated method driven by artificial intelligence. To enhance the efficiency and accuracy of identifying the pubic symphysis-fetal head standard plane (PSFHSP), we proposed a streamlined neural network, PSFHSP-Net, based on a modified version of ResNet-18. This network comprises a single convolutional layer and three residual blocks designed to mitigate noise interference and bolster feature extraction capabilities. The model's adaptability was further refined by expanding the shared feature layer into task-specific layers. We assessed its performance against both traditional heavyweight and other lightweight models by evaluating metrics such as F1-score, accuracy (ACC), recall, precision, area under the ROC curve (AUC), model parameter count, and frames per second (FPS). The PSFHSP-Net recorded an ACC of 0.8995, an F1-score of 0.9075, a recall of 0.9191, and a precision of 0.9022. This model surpassed other heavyweight and lightweight models in these metrics. Notably, it featured the smallest model size (1.48 MB) and the highest processing speed (65.7909 FPS), meeting the real-time processing criterion of over 24 images per second. While the AUC of our model was 0.930, slightly lower than that of ResNet34 (0.935), it showed a marked improvement over ResNet-18 in testing, with increases in ACC and F1-score of 0.0435 and 0.0306, respectively. However, precision saw a slight decrease from 0.9184 to 0.9022, a reduction of 0.0162. Despite these trade-offs, the compression of the model significantly reduced its size from 42.64 to 1.48 MB and increased its inference speed by 4.4753 to 65.7909 FPS. The results confirm that the PSFHSP-Net is capable of swiftly and effectively identifying the PSFHSP, thereby facilitating accurate measurements of the angle of progression. This development represents a significant advancement in automating fetal imaging analysis, promising enhanced consistency and reduced operator dependency in clinical settings.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2975-2986"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11379789/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140899400","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-05-18 | DOI: 10.1007/s11517-024-03118-8
Keqin Ding, Mohsen Rakhshan, Natalia Paredes-Acuña, Gordon Cheng, Nitish V Thakor
In the field of sensory neuroprostheses, one ultimate goal is for individuals to perceive artificial somatosensory information and use the prosthesis with high complexity that resembles an intact system. To this end, research has shown that stimulation-elicited somatosensory information improves prosthesis perception and task performance. While studies strive to achieve sensory integration, a crucial phenomenon that entails naturalistic interaction with the environment, this topic has not been commensurately reviewed. Therefore, here we present a perspective for understanding sensory integration in neuroprostheses. First, we review the engineering aspects and functional outcomes in sensory neuroprosthesis studies. In this context, we summarize studies that have suggested sensory integration. We focus on how they have used stimulation-elicited percepts to maximize and improve the reliability of somatosensory information. Next, we review studies that have suggested multisensory integration. These works have demonstrated that congruent and simultaneous multisensory inputs provided cognitive benefits such that an individual experiences a greater sense of authority over prosthesis movements (i.e., agency) and perceives the prosthesis as part of their own (i.e., ownership). Thereafter, we present the theoretical and neuroscience framework of sensory integration. We investigate how behavioral models and neural recordings have been applied in the context of sensory integration. Sensory integration models developed from intact-limb individuals have led the way to sensory neuroprosthesis studies to demonstrate multisensory integration. Neural recordings have been used to show how multisensory inputs are processed across cortical areas. Lastly, we discuss some ongoing research and challenges in achieving and understanding sensory integration in sensory neuroprostheses. Resolving these challenges would help to develop future strategies to improve the sensory feedback of a neuroprosthetic system.
{"title":"Sensory integration for neuroprostheses: from functional benefits to neural correlates.","authors":"Keqin Ding, Mohsen Rakhshan, Natalia Paredes-Acuña, Gordon Cheng, Nitish V Thakor","doi":"10.1007/s11517-024-03118-8","DOIUrl":"10.1007/s11517-024-03118-8","url":null,"abstract":"<p><p>In the field of sensory neuroprostheses, one ultimate goal is for individuals to perceive artificial somatosensory information and use the prosthesis with high complexity that resembles an intact system. To this end, research has shown that stimulation-elicited somatosensory information improves prosthesis perception and task performance. While studies strive to achieve sensory integration, a crucial phenomenon that entails naturalistic interaction with the environment, this topic has not been commensurately reviewed. Therefore, here we present a perspective for understanding sensory integration in neuroprostheses. First, we review the engineering aspects and functional outcomes in sensory neuroprosthesis studies. In this context, we summarize studies that have suggested sensory integration. We focus on how they have used stimulation-elicited percepts to maximize and improve the reliability of somatosensory information. Next, we review studies that have suggested multisensory integration. These works have demonstrated that congruent and simultaneous multisensory inputs provided cognitive benefits such that an individual experiences a greater sense of authority over prosthesis movements (i.e., agency) and perceives the prosthesis as part of their own (i.e., ownership). Thereafter, we present the theoretical and neuroscience framework of sensory integration. We investigate how behavioral models and neural recordings have been applied in the context of sensory integration. Sensory integration models developed from intact-limb individuals have led the way to sensory neuroprosthesis studies to demonstrate multisensory integration. Neural recordings have been used to show how multisensory inputs are processed across cortical areas. Lastly, we discuss some ongoing research and challenges in achieving and understanding sensory integration in sensory neuroprostheses. Resolving these challenges would help to develop future strategies to improve the sensory feedback of a neuroprosthetic system.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2939-2960"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140959670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-05-25 | DOI: 10.1007/s11517-024-03108-w
Jing Liang, Yun Jiang, Hao Yan
Many major retinal diseases manifest as lesions in the fundus of the eye, so extracting blood vessels from retinal fundus images is essential to assist doctors. Some existing methods do not fully extract the detailed features of retinal images or lose information, making it difficult to accurately segment the capillaries located at image edges. In this paper, we propose a multi-scale retinal vessel segmentation network (SCIE_Net) based on skip connection information enhancement. First, the network processes retinal images at multiple scales to capture features at different scales. Second, a feature aggregation module is proposed to aggregate the rich information of the shallow network. Further, a skip connection information enhancement module is proposed to combine the detailed features of the shallow layers with the advanced features of the deeper network, avoiding incomplete information interaction between network layers. Finally, SCIE_Net achieves better vessel segmentation performance on the publicly available retinal image standard datasets DRIVE, CHASE_DB1, and STARE.
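The sketch below shows one generic way a skip connection could be "enhanced" by letting deep features gate the detail-rich shallow features before decoding; it is an assumption-laden stand-in for illustration, not the actual SCIE_Net module.

```python
import torch
import torch.nn as nn

class SkipEnhance(nn.Module):
    """Illustrative skip-connection enhancement: deep features gate the shallow
    (detail-rich) features before they are passed on. Generic fusion block only."""
    def __init__(self, shallow_ch, deep_ch):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(deep_ch, shallow_ch, 1), nn.Sigmoid())
        self.fuse = nn.Sequential(nn.Conv2d(shallow_ch * 2, shallow_ch, 3, padding=1),
                                  nn.BatchNorm2d(shallow_ch), nn.ReLU(inplace=True))

    def forward(self, shallow, deep):
        # Upsample deep features to the shallow resolution, derive a spatial gate,
        # then fuse the gated detail features with the original skip connection.
        deep_up = nn.functional.interpolate(deep, size=shallow.shape[2:],
                                            mode="bilinear", align_corners=False)
        gated = shallow * self.gate(deep_up)
        return self.fuse(torch.cat([shallow, gated], dim=1))

block = SkipEnhance(shallow_ch=64, deep_ch=256)
out = block(torch.randn(1, 64, 128, 128), torch.randn(1, 256, 32, 32))
print(out.shape)  # torch.Size([1, 64, 128, 128])
```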
{"title":"Skip connection information enhancement network for retinal vessel segmentation.","authors":"Jing Liang, Yun Jiang, Hao Yan","doi":"10.1007/s11517-024-03108-w","DOIUrl":"10.1007/s11517-024-03108-w","url":null,"abstract":"<p><p>Many major diseases of the retina often show symptoms of lesions in the fundus of the eye. The extraction of blood vessels from retinal fundus images is essential to assist doctors. Some of the existing methods do not fully extract the detailed features of retinal images or lose some information, making it difficult to accurately segment capillaries located at the edges of the images. In this paper, we propose a multi-scale retinal vessel segmentation network (SCIE_Net) based on skip connection information enhancement. Firstly, the network processes retinal images at multiple scales to achieve network capture of features at different scales. Secondly, the feature aggregation module is proposed to aggregate the rich information of the shallow network. Further, the skip connection information enhancement module is proposed to take into account the detailed features of the shallow layer and the advanced features of the deeper network to avoid the problem of incomplete information interaction between the layers of the network. Finally, SCIE_Net achieves better vessel segmentation performance and results on the publicly available retinal image standard datasets DRIVE, CHASE_DB1, and STARE.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3163-3178"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141092554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Functional near-infrared spectroscopy (fNIRS), an optical neuroimaging technique, has been widely used in the fields of brain activity recognition and brain-computer interfaces. Existing works have proposed deep learning-based algorithms for the fNIRS classification problem. In this paper, a novel approach based on a convolutional neural network and a Transformer, named CT-Net, is established to guide deep modeling for the classification of mental arithmetic (MA) tasks. We explore the effect of data representations and design a temporal-level combination of the two raw chromophore signals to improve data utilization and enrich the feature learning of the model. We evaluate our model on two open-access datasets and achieve classification accuracies of 98.05% and 77.61%, respectively. Moreover, we explain our model with gradient-weighted class activation mapping, which shows high consistency between the contribution of features learned by the model and the mapping of brain activity in the MA task. The results suggest the feasibility and interpretability of CT-Net for decoding MA tasks.
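A temporal-level combination of the two chromophore signals could look like the following minimal sketch, in which oxy- (HbO) and deoxy-hemoglobin (HbR) trials are concatenated along the time axis so one sample carries both signals; the shapes are illustrative, not the datasets' actual dimensions.

```python
import torch

# Placeholder fNIRS trials: (batch, measurement channels, time steps).
batch, channels, timesteps = 8, 36, 128
hbo = torch.randn(batch, channels, timesteps)   # raw HbO trials (illustrative)
hbr = torch.randn(batch, channels, timesteps)   # raw HbR trials (illustrative)

# Temporal-level combination: stack the two chromophore signals along the time axis.
combined = torch.cat([hbo, hbr], dim=-1)        # (batch, channels, 2 * timesteps)
print(combined.shape)                           # torch.Size([8, 36, 256])
```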
{"title":"CT-Net: an interpretable CNN-Transformer fusion network for fNIRS classification.","authors":"Lingxiang Liao, Jingqing Lu, Lutao Wang, Yongqing Zhang, Dongrui Gao, Manqing Wang","doi":"10.1007/s11517-024-03138-4","DOIUrl":"10.1007/s11517-024-03138-4","url":null,"abstract":"<p><p>Functional near-infrared spectroscopy (fNIRS), an optical neuroimaging technique, has been widely used in the field of brain activity recognition and brain-computer interface. Existing works have proposed deep learning-based algorithms for the fNIRS classification problem. In this paper, a novel approach based on convolutional neural network and Transformer, named CT-Net, is established to guide the deep modeling for the classification of mental arithmetic (MA) tasks. We explore the effect of data representations, and design a temporal-level combination of two raw chromophore signals to improve the data utilization and enrich the feature learning of the model. We evaluate our model on two open-access datasets and achieve the classification accuracy of 98.05% and 77.61%, respectively. Moreover, we explain our model by the gradient-weighted class activation mapping, which presents a high consistent between the contributing value of features learned by the model and the mapping of brain activity in the MA task. The results suggest the feasibility and interpretability of CT-Net for decoding MA tasks.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3233-3247"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141181077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Systematic research into device-induced red blood cell (RBC) damage beyond hemolysis, including correlations between hemolysis and RBC-derived extracellular vesicles, remains limited. This study investigated non-physiological shear stress-induced RBC damage and changes in related biochemical indicators under two blood pump clinical support conditions. Pressure heads of 100 and 350 mmHg, numerical simulation methods, and two in vitro loops were utilized to analyze the shear stress and changes in RBC morphology, hemolysis, biochemistry, metabolism, and oxidative stress. The blood pump created higher shear stress in the 350-mmHg condition than in the 100-mmHg condition. With prolonged blood pump operation, plasma-free hemoglobin and cholesterol increased, whereas plasma glucose and nitric oxide decreased in both loops. Notably, plasma iron and triglyceride concentrations increased only in the 350-mmHg condition. The RBC count and morphology, plasma lactic dehydrogenase, and oxidative stress across loops did not differ significantly. Plasma extracellular vesicles, including RBC-derived microparticles, increased significantly at 600 min in both loops. Hemolysis correlated with plasma triglyceride, cholesterol, glucose, and nitric oxide levels. Shear stress, but not oxidative stress, was the main cause of RBC damage. Hemolysis alone inadequately reflects overall blood pump-induced RBC damage, suggesting the need for additional biomarkers for comprehensive assessments.
{"title":"Analysis of non-physiological shear stress-induced red blood cell trauma across different clinical support conditions of the blood pump.","authors":"Xinyu Liu, Yuan Li, Jinze Jia, Hongyu Wang, Yifeng Xi, Anqiang Sun, Lizhen Wang, Xiaoyan Deng, Zengsheng Chen, Yubo Fan","doi":"10.1007/s11517-024-03121-z","DOIUrl":"10.1007/s11517-024-03121-z","url":null,"abstract":"<p><p>Systematic research into device-induced red blood cell (RBC) damage beyond hemolysis, including correlations between hemolysis and RBC-derived extracellular vesicles, remains limited. This study investigated non-physiological shear stress-induced RBC damage and changes in related biochemical indicators under two blood pump clinical support conditions. Pressure heads of 100 and 350 mmHg, numerical simulation methods, and two in vitro loops were utilized to analyze the shear stress and changes in RBC morphology, hemolysis, biochemistry, metabolism, and oxidative stress. The blood pump created higher shear stress in the 350-mmHg condition than in the 100-mmHg condition. With prolonged blood pump operation, plasma-free hemoglobin and cholesterol increased, whereas plasma glucose and nitric oxide decreased in both loops. Notably, plasma iron and triglyceride concentrations increased only in the 350-mmHg condition. The RBC count and morphology, plasma lactic dehydrogenase, and oxidative stress across loops did not differ significantly. Plasma extracellular vesicles, including RBC-derived microparticles, increased significantly at 600 min in both loops. Hemolysis correlated with plasma triglyceride, cholesterol, glucose, and nitric oxide levels. Shear stress, but not oxidative stress, was the main cause of RBC damage. Hemolysis alone inadequately reflects overall blood pump-induced RBC damage, suggesting the need for additional biomarkers for comprehensive assessments.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3209-3223"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141158720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Three-dimensional vessel model reconstruction from patient-specific magnetic resonance angiography (MRA) images often requires manual maneuvers. This study aimed to establish a deep learning (DL)-based method for vessel model reconstruction. Time-of-flight MRA scans of 40 patients with internal carotid artery aneurysms were prepared, and three-dimensional vessel models were constructed using the threshold and region-growing method. Using those datasets, supervised deep learning with a 2D U-Net was performed to reconstruct 3D vessel models. The accuracy of the DL-based vessel segmentations was assessed using 20 MRA images outside the training dataset, with the Dice coefficient as the indicator of model accuracy, and blood flow simulation was performed using the DL-based vessel model. The DL model successfully reconstructed a three-dimensional model in all 60 cases, and the Dice coefficient in the test dataset was 0.859. Of note, the DL-generated model proved its efficacy even for large aneurysms (> 10 mm in diameter). The reconstructed model was suitable for blood flow simulation to assist clinical decision-making. Our DL-based method successfully reconstructed three-dimensional vessel models with moderate accuracy. Future studies are warranted to demonstrate that DL-based technology can advance medical image processing.
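The Dice coefficient used as the accuracy indicator can be computed as in the short sketch below; the random masks are placeholders for the DL-generated and reference vessel segmentations.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-7):
    """Dice coefficient between two binary masks (the overlap measure used here)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy example with random masks; real inputs would be the DL and reference vessel models.
rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5
truth = rng.random((64, 64, 64)) > 0.5
print(f"Dice = {dice_coefficient(pred, truth):.3f}")
```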
{"title":"Patient-specific cerebral 3D vessel model reconstruction using deep learning.","authors":"Satoshi Koizumi, Taichi Kin, Naoyuki Shono, Satoshi Kiyofuji, Motoyuki Umekawa, Katsuya Sato, Nobuhito Saito","doi":"10.1007/s11517-024-03136-6","DOIUrl":"10.1007/s11517-024-03136-6","url":null,"abstract":"<p><p>Three-dimensional vessel model reconstruction from patient-specific magnetic resonance angiography (MRA) images often requires some manual maneuvers. This study aimed to establish the deep learning (DL)-based method for vessel model reconstruction. Time of flight MRA of 40 patients with internal carotid artery aneurysms was prepared, and three-dimensional vessel models were constructed using the threshold and region-growing method. Using those datasets, supervised deep learning using 2D U-net was performed to reconstruct 3D vessel models. The accuracy of the DL-based vessel segmentations was assessed using 20 MRA images outside the training dataset. The dice coefficient was used as the indicator of the model accuracy, and the blood flow simulation was performed using the DL-based vessel model. The created DL model could successfully reconstruct a three-dimensional model in all 60 cases. The dice coefficient in the test dataset was 0.859. Of note, the DL-generated model proved its efficacy even for large aneurysms (> 10 mm in their diameter). The reconstructed model was feasible in performing blood flow simulation to assist clinical decision-making. Our DL-based method could successfully reconstruct a three-dimensional vessel model with moderate accuracy. Future studies are warranted to exhibit that DL-based technology can promote medical image processing.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3225-3232"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11379798/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141158722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-05-10 | DOI: 10.1007/s11517-024-03114-y
Pei Fang, Renwei Feng, Changdong Liu, Renjun Wen
Medical image classification plays a pivotal role within the field of medicine. Existing models predominantly rely on supervised learning methods, which necessitate large volumes of labeled data for effective training. However, acquiring and annotating medical image data is both expensive and time-consuming. In contrast, semi-supervised learning methods offer a promising approach by harnessing limited labeled data alongside abundant unlabeled data to enhance the performance of medical image classification. Nonetheless, current methods often encounter confirmation bias due to noise inherent in self-generated pseudo-labels and the presence of boundary samples from different classes. To overcome these challenges, this study introduces a novel framework known as boundary sample-based class-weighted semi-supervised learning (BSCSSL) for medical image classification. Our method aims to alleviate the impact of intra- and inter-class boundary samples derived from unlabeled data. Specifically, we handle reliable high-confidence data and inter-class boundary samples separately through an inter-class boundary sample mining module. Additionally, we implement an intra-class boundary sample weighting mechanism to extract class-aware features specific to intra-class boundary samples. Rather than discarding such intra-class boundary samples outright, our approach acknowledges their intrinsic value despite the difficulty of classifying them accurately, as they contribute significantly to model prediction. Experimental results on widely recognized medical image datasets demonstrate the superiority of the proposed BSCSSL method over existing semi-supervised learning approaches. By enhancing the accuracy and robustness of medical image classification, the BSCSSL approach has considerable implications for advancing medical diagnosis and future research.
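One simple way to down-weight, rather than discard, low-margin (boundary) unlabeled samples in a pseudo-labeling loss is sketched below; the confidence threshold and margin-based weighting rule are assumptions for illustration, not the exact BSCSSL mechanism.

```python
import torch
import torch.nn.functional as F

def weighted_pseudo_label_loss(logits_unlabeled, tau=0.95):
    """Illustrative boundary-aware pseudo-label loss: high-confidence samples get
    full weight, low-margin (boundary) samples are down-weighted, not discarded."""
    probs = F.softmax(logits_unlabeled, dim=1)
    top2 = probs.topk(2, dim=1).values
    confidence, margin = top2[:, 0], top2[:, 0] - top2[:, 1]
    pseudo = probs.argmax(dim=1)                      # self-generated pseudo-labels
    # Confident samples: weight 1; boundary samples: weight proportional to their margin.
    weights = torch.where(confidence >= tau, torch.ones_like(margin), margin)
    loss = F.cross_entropy(logits_unlabeled, pseudo, reduction="none")
    return (weights * loss).mean()

loss = weighted_pseudo_label_loss(torch.randn(16, 5))
print(loss.item())
```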
{"title":"Boundary sample-based class-weighted semi-supervised learning for malignant tumor classification of medical imaging.","authors":"Pei Fang, Renwei Feng, Changdong Liu, Renjun Wen","doi":"10.1007/s11517-024-03114-y","DOIUrl":"10.1007/s11517-024-03114-y","url":null,"abstract":"<p><p>Medical image classification plays a pivotal role within the field of medicine. Existing models predominantly rely on supervised learning methods, which necessitate large volumes of labeled data for effective training. However, acquiring and annotating medical image data is both an expensive and time-consuming endeavor. In contrast, semi-supervised learning methods offer a promising approach by harnessing limited labeled data alongside abundant unlabeled data to enhance the performance of medical image classification. Nonetheless, current methods often encounter confirmation bias due to noise inherent in self-generated pseudo-labels and the presence of boundary samples from different classes. To overcome these challenges, this study introduces a novel framework known as boundary sample-based class-weighted semi-supervised learning (BSCSSL) for medical image classification. Our method aims to alleviate the impact of intra- and inter-class boundary samples derived from unlabeled data. Specifically, we address reliable confidential data and inter-class boundary samples separately through the utilization of an inter-class boundary sample mining module. Additionally, we implement an intra-class boundary sample weighting mechanism to extract class-aware features specific to intra-class boundary samples. Rather than discarding such intra-class boundary samples outright, our approach acknowledges their intrinsic value despite the difficulty associated with accurate classification, as they contribute significantly to model prediction. Experimental results on widely recognized medical image datasets demonstrate the superiority of our proposed BSCSSL method over existing semi-supervised learning approaches. By enhancing the accuracy and robustness of medical image classification, our BSCSSL approach yields considerable implications for advancing medical diagnosis and future research endeavors.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"2987-2997"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140899395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-01 | Epub Date: 2024-05-21 | DOI: 10.1007/s11517-024-03125-9
Mayadeh Kouti, Karim Ansari-Asl, Ehsan Namjoo
One of the most important needs in neuroimaging is brain dynamic source imaging with high spatial and temporal resolution. EEG source imaging estimates the underlying sources from EEG recordings, providing enhanced spatial resolution with intrinsically high temporal resolution. To ensure identifiability in the underdetermined source reconstruction problem, constraints on EEG sources are essential. This paper introduces a novel method for estimating source activities based on spatio-temporal constraints and a dynamic source imaging algorithm. The method enhances time resolution by incorporating the temporal evolution of neural activity into a regularization function. Additionally, two spatial regularization constraints based on the L1 and L2 norms are applied in the transformed domain to address both focal and spread neural activities, achieved through the spatial gradient and Laplacian transform. Performance evaluation, conducted quantitatively using synthetic datasets, discusses the influence of parameters such as source extent, number of sources, correlation level, and SNR level on temporal and spatial metrics. Results demonstrate that the proposed method provides superior spatial and temporal reconstructions compared to state-of-the-art inverse solutions including STRAPS, sLORETA, SBL, dSPM, and MxNE. This improvement is attributed to the simultaneous integration of transformed spatial and temporal constraints. When applied to a real auditory ERP dataset, the algorithm accurately reconstructs brain source time series and locations, effectively identifying the origins of auditory evoked potentials. In conclusion, the proposed method with spatio-temporal constraints outperforms state-of-the-art algorithms in estimating source distributions and time courses.
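A representative form of such a spatio-temporally regularized objective, consistent with the description above but not necessarily the authors' exact functional, is:

```latex
\min_{S}\;
\underbrace{\lVert Y - L S \rVert_F^2}_{\text{data fit}}
\;+\; \lambda_1 \lVert \nabla_s S \rVert_1
\;+\; \lambda_2 \lVert \Delta_s S \rVert_2^2
\;+\; \lambda_t \sum_{k} \lVert s_k - A\, s_{k-1} \rVert_2^2
```

Here Y is the EEG data, L the lead-field matrix, S the source time courses with columns s_k, ∇_s and Δ_s the spatial gradient and Laplacian operators applied in the transformed domain, and A a linear model of the temporal evolution of neural activity; the λ terms balance the spatial and temporal constraints.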
{"title":"EEG dynamic source imaging using a regularized optimization with spatio-temporal constraints.","authors":"Mayadeh Kouti, Karim Ansari-Asl, Ehsan Namjoo","doi":"10.1007/s11517-024-03125-9","DOIUrl":"10.1007/s11517-024-03125-9","url":null,"abstract":"<p><p>One of the most important needs in neuroimaging is brain dynamic source imaging with high spatial and temporal resolution. EEG source imaging estimates the underlying sources from EEG recordings, which provides enhanced spatial resolution with intrinsically high temporal resolution. To ensure identifiability in the underdetermined source reconstruction problem, constraints on EEG sources are essential. This paper introduces a novel method for estimating source activities based on spatio-temporal constraints and a dynamic source imaging algorithm. The method enhances time resolution by incorporating temporal evolution of neural activity into a regularization function. Additionally, two spatial regularization constraints based on <math><msub><mi>L</mi> <mn>1</mn></msub> </math> and <math><msub><mi>L</mi> <mn>2</mn></msub> </math> norms are applied in the transformed domain to address both focal and spread neural activities, achieved through spatial gradient and Laplacian transform. Performance evaluation, conducted quantitatively using synthetic datasets, discusses the influence of parameters such as source extent, number of sources, correlation level, and SNR level on temporal and spatial metrics. Results demonstrate that the proposed method provides superior spatial and temporal reconstructions compared to state-of-the-art inverse solutions including STRAPS, sLORETA, SBL, dSPM, and MxNE. This improvement is attributed to the simultaneous integration of transformed spatial and temporal constraints. When applied to a real auditory ERP dataset, our algorithm accurately reconstructs brain source time series and locations, effectively identifying the origins of auditory evoked potentials. In conclusion, our proposed method with spatio-temporal constraints outperforms the state-of-the-art algorithms in estimating source distribution and time courses.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"3073-3088"},"PeriodicalIF":2.6,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141072140","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}