Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202309047
An Zeng, Xianyang Lin, Jingliang Zhao, Dan Pan, Baoyao Yang, Xin Liu
In the segmentation of aortic dissection, there are issues such as low contrast between the aortic dissection and surrounding organs and vessels, large variation in dissection morphology, and high background noise. To address these issues, this paper proposed a reinforcement learning-based method for type B aortic dissection localization. Within a two-stage segmentation model, deep reinforcement learning was used to perform the first-stage localization task, ensuring the integrity of the localization target. In the second stage, the coarse segmentation results from the first stage were used as input to obtain refined segmentation results. To improve the recall of the first-stage segmentation results and include the segmentation target more completely in the localization results, this paper designed a reinforcement learning reward function based on the direction of recall change. Additionally, the localization window was separated from the field-of-view window to reduce loss of the segmentation target. Unet, TransUnet, SwinUnet, and MT-Unet were selected as benchmark segmentation models. Experiments verified that most metrics of the proposed two-stage segmentation pipeline outperformed the benchmark results; specifically, the Dice index improved by 1.34%, 0.89%, 27.66%, and 7.37% for the respective models. In conclusion, incorporating the proposed type B aortic dissection localization method into the segmentation process improves overall segmentation accuracy compared with the benchmark models, and the improvement is particularly significant for models with poorer segmentation performance.
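The abstract describes the reward only as "based on the direction of recall changes"; the sketch below is one minimal reading of that idea for axis-aligned boxes `(x1, y1, x2, y2)`. The function names and the ±1 reward magnitudes are illustrative assumptions, not the authors' implementation.

```python
def recall(box, target):
    """Fraction of the target region covered by the localization box."""
    x1 = max(box[0], target[0]); y1 = max(box[1], target[1])
    x2 = min(box[2], target[2]); y2 = min(box[3], target[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    target_area = (target[2] - target[0]) * (target[3] - target[1])
    return inter / target_area if target_area > 0 else 0.0

def recall_reward(prev_box, new_box, target):
    """Reward keyed to the *direction* of the recall change: the agent is
    rewarded whenever its move covers more of the target, penalized otherwise."""
    delta = recall(new_box, target) - recall(prev_box, target)
    if delta > 0:
        return 1.0
    if delta < 0:
        return -1.0
    return 0.0
```

Rewarding recall rather than IoU biases the agent toward windows that fully contain the dissection, matching the stated goal of keeping the segmentation target inside the localization result.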
Title: [Reinforcement learning-based method for type B aortic dissection localization]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 878-885. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527745/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202210059
Yao Xie, Dong Yang, Honglong Yu, Qilian Xie
Impedance cardiography (ICG) is essential in evaluating cardiac function in patients with cardiovascular diseases. To address the problem that ICG measurement is easily disturbed by motion artifacts, this paper introduces a de-noising method based on two-step spectral ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA). First, spectral EEMD-CCA was performed between the ICG and motion signals, and between the electrocardiogram (ECG) and motion signals, respectively; the component with the strongest correlation coefficient was set to zero to suppress the main motion artifacts. Second, the resulting ECG and ICG signals were subjected to a second spectral EEMD-CCA for further denoising. Lastly, the ICG signal was reconstructed from the shared components. The method was tested on 30 subjects, and the results showed that ICG signal quality was greatly improved after denoising, which could support the subsequent diagnosis and analysis of cardiovascular diseases.
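EEMD itself is omitted here (libraries such as PyEMD provide it); the NumPy sketch below shows only the CCA suppression step, under the assumption that `X` holds decomposed components of the cardiac signal and `Y` components of the motion reference. It is one plausible reading of "set the most correlated component to zero", not the authors' exact pipeline.

```python
import numpy as np

def cca_suppress(X, Y, n_zero=1):
    """Zero the canonical components of X most correlated with Y, then reconstruct X.
    X: (samples, channels) signal components; Y: (samples, channels) motion reference."""
    Xc = X - X.mean(0)
    Yc = Y - Y.mean(0)

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive (semi-)definite matrix
        w, V = np.linalg.eigh(S)
        w = np.clip(w, 1e-12, None)
        return V @ np.diag(w ** -0.5) @ V.T

    Wx = inv_sqrt(Xc.T @ Xc)
    Wy = inv_sqrt(Yc.T @ Yc)
    # Singular values of the whitened cross-covariance are the canonical correlations
    U, s, Vt = np.linalg.svd(Wx @ (Xc.T @ Yc) @ Wy)
    A = Wx @ U                   # projection onto canonical variates of X
    Z = Xc @ A                   # column 0 is most correlated with Y
    Z[:, :n_zero] = 0.0          # suppress the strongest shared (artifact) component
    return Z @ np.linalg.pinv(A) + X.mean(0)
```

In the two-step scheme of the paper, a call like this would run once against the motion signal and once more on the resulting ECG/ICG pair.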
Title: [Research on motion impedance cardiography de-noising method based on two-step spectral ensemble empirical mode decomposition and canonical correlation analysis]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 986-994. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527760/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202311061
Liang Jiang, Cheng Zhang, Hui Cao, Baihao Jiang
Breast cancer is a malignancy caused by the abnormal proliferation of breast epithelial cells, predominantly affecting female patients, and it is commonly diagnosed using histopathological images. Currently, deep learning techniques have made significant breakthroughs in medical image processing, outperforming traditional detection methods in breast cancer pathology classification tasks. This paper first reviewed the advances in applying deep learning to breast pathology images, focusing on three key areas: multi-scale feature extraction, cellular feature analysis, and classification. Next, it summarized the advantages of multimodal data fusion methods for breast pathology images. Finally, the study discussed the challenges and future prospects of deep learning in breast cancer pathology image diagnosis, providing important guidance for advancing the use of deep learning in breast diagnosis.
Title: [Research progress of breast pathology image diagnosis based on deep learning]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 1072-1077. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527764/pdf/
Early diagnosis and treatment of colorectal polyps are crucial for preventing colorectal cancer. This paper proposes a lightweight convolutional neural network for the automatic detection and auxiliary diagnosis of colorectal polyps. Initially, a 53-layer convolutional backbone network is used, incorporating a spatial pyramid pooling module to achieve feature extraction with different receptive field sizes. Subsequently, a feature pyramid network is employed to perform cross-scale fusion of feature maps from the backbone network. A spatial attention module is utilized to enhance the perception of polyp image boundaries and details. Further, a positional pattern attention module is used to automatically mine and integrate key features across different levels of feature maps, achieving rapid, efficient, and accurate automatic detection of colorectal polyps. The proposed model is evaluated on a clinical dataset, achieving an accuracy of 0.9982, recall of 0.9988, F1 score of 0.9984, and mean average precision (mAP) of 0.9953 at an intersection over union (IOU) threshold of 0.5, with a frame rate of 74 frames per second and a parameter count of 9.08 M. Compared to existing mainstream methods, the proposed method is lightweight, has low operating configuration requirements, high detection speed, and high accuracy, making it a feasible technical method and important tool for the early detection and diagnosis of colorectal cancer.
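The spatial attention module is not specified beyond its name; below is a hedged CBAM-style sketch in NumPy (channel-wise average and max pooling, a small convolution over the pooled maps, a sigmoid gate). The `kernel` is assumed to be learned elsewhere and passed in.

```python
import numpy as np

def spatial_attention(fmap, kernel):
    """CBAM-style spatial attention sketch.
    fmap: (C, H, W) feature map; kernel: (2, k, k) conv weights (assumed learned)."""
    avg = fmap.mean(axis=0)            # (H, W) channel-average map
    mx = fmap.max(axis=0)              # (H, W) channel-max map
    pooled = np.stack([avg, mx])       # (2, H, W)
    k = kernel.shape[-1]
    pad = k // 2
    padded = np.pad(pooled, ((0, 0), (pad, pad), (pad, pad)))
    H, W = avg.shape
    logits = np.zeros((H, W))
    for i in range(H):                 # naive "same"-padding convolution
        for j in range(W):
            logits[i, j] = np.sum(padded[:, i:i + k, j:j + k] * kernel)
    gate = 1.0 / (1.0 + np.exp(-logits))   # sigmoid attention map in (0, 1)
    return fmap * gate                      # gate broadcast across channels
```

Because the gate lies in (0, 1), the module can only down-weight spatial positions, sharpening the network's focus on polyp boundaries and details as the abstract describes.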
Title: [Colon polyp detection based on multi-scale and multi-level feature fusion and lightweight convolutional neural network]. Yiyang Li, Jiayi Zhao, Ruoyi Yu, Huixiang Liu, Shuang Liang, Yu Gu. DOI: 10.7507/1001-5515.202312014. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 911-918. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527748/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202312023
Shijia Yan, Ye Yang, Peng Yi
This study aims to optimize surface electromyography (sEMG)-based gesture recognition, focusing on the impact of muscle fatigue on recognition performance. An innovative real-time analysis algorithm is proposed that extracts muscle fatigue features in real time and fuses them into the gesture recognition process. Based on self-collected data, the paper applies algorithms such as convolutional neural networks and long short-term memory networks to analyze in depth the feature extraction of muscle fatigue, and compares the impact of fatigue features on the performance of sEMG-based gesture recognition tasks. The results show that by fusing muscle fatigue features in real time, the proposed algorithm improves gesture recognition accuracy at different fatigue levels, and the average recognition accuracy across subjects is also improved. In summary, the algorithm not only improves the adaptability and robustness of the gesture recognition system, but the research process can also provide new insights into the development of gesture recognition technology in biomedical engineering.
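The abstract does not name the fatigue features extracted; median frequency (MDF) of the sEMG power spectrum, which falls as a muscle fatigues, is a standard candidate, so the sketch below uses it as a stand-in. The choice of MDF is an assumption.

```python
import numpy as np

def median_frequency(window, fs):
    """Median frequency of an sEMG window: the frequency that splits the
    power spectrum into two halves of equal energy. MDF drops with fatigue."""
    spectrum = np.abs(np.fft.rfft(window - np.mean(window))) ** 2
    freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
    cumulative = np.cumsum(spectrum)
    return freqs[np.searchsorted(cumulative, cumulative[-1] / 2.0)]
```

A per-window MDF stream computed like this could then be concatenated with the gesture features before classification, which is one way to realize the real-time fusion the paper describes.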
Title: [Enhancement algorithm for surface electromyographic-based gesture recognition based on real-time fusion of muscle fatigue features]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 958-968. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527766/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202407066
Siting Xiang, Shenying Liu, Kuangzheng Li, Tongjin Zhao, Xu Wang
Amine oxidase copper-containing 1 (AOC1) is a key member of the copper amine oxidase family and is responsible for the oxidative deamination of histamine and putrescine. In recent years, AOC1 has been reported to be associated with various cancers, with its expression levels significantly elevated in certain cancer cells, suggesting a potential role in cancer progression. However, its function in lipid metabolism remains unclear. Through genetic analysis, we discovered a potential relationship between AOC1 and lipid metabolism. To investigate further, we generated Aoc1-/- mice and characterized their metabolic phenotypes under both chow-diet and high-fat-diet (HFD) feeding conditions. Under HFD feeding, Aoc1-/- mice exhibited significantly higher fat mass and impaired glucose sensitivity, and lipid accumulation in white adipose tissue and liver was also increased. This study uncovers the potential role of AOC1 in lipid metabolism and its implications in metabolic disorders such as obesity and type 2 diabetes, providing new targets and research directions for treating metabolic diseases.
Title: [Functional study of amine oxidase copper-containing 1 (AOC1) in lipid metabolism]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 1019-1025. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527758/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202310011
Yong Fan, Zhengbo Zhang, Jing Wang
Currently, deep learning-based multimodal learning is advancing rapidly and is widely used in artificial intelligence-generated content, such as image-text conversion and image-text generation. Electronic health records are digital information, such as numbers, charts, and texts, generated by medical staff using information systems during medical activities. Deep learning-based multimodal fusion of electronic health records can help medical staff comprehensively analyze the large volumes of multimodal medical data generated during diagnosis and treatment, thereby achieving accurate diagnosis and timely intervention for patients. In this article, we first introduce the methods and development trends of deep learning-based multimodal data fusion. Second, we summarize and compare the fusion of structured electronic medical records with other medical data such as images and texts, focusing on the clinical application types, sample sizes, and fusion methods involved in the research. From the analysis and summary of the literature, the deep learning methods for fusing different medical data modalities fall into two groups: selecting an appropriate pre-trained model for each modality for feature representation followed by late fusion, and fusion based on the attention mechanism. Lastly, the difficulties encountered in multimodal medical data fusion and its future directions, including modeling methods and the evaluation and application of models, are discussed. Through this review, we expect to provide reference information for building models that can comprehensively utilize various modalities of medical data.
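As an illustration of the second family (fusion based on the attention mechanism), here is a minimal scaled dot-product sketch over per-modality embeddings; the shapes and names are hypothetical, not drawn from any surveyed system.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_fuse(modality_vecs, query):
    """Attention-weighted fusion of per-modality embeddings of dimension d.
    modality_vecs: (m, d) one embedding per modality (e.g., notes, images, labs);
    query: (d,) context vector; weights come from scaled dot-product scores."""
    d = query.shape[0]
    scores = modality_vecs @ query / np.sqrt(d)
    weights = softmax(scores)
    return weights @ modality_vecs, weights
```

The attention weights make the fusion interpretable: a clinician can inspect which modality dominated a given prediction, which is one reason this family is popular for electronic health records.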
Title: [Research progress on electronic health records multimodal data fusion based on deep learning]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 1062-1071. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527755/pdf/
Pub Date: 2024-10-25 · DOI: 10.7507/1001-5515.202310072
Wo Wang, Xiujuan Zheng, Zhiqing Lyu, Ni Li, Jun Chen
Glaucoma is the leading cause of irreversible blindness worldwide. Regular visual field examinations play a crucial role in both diagnosing and treating glaucoma, and predicting future visual field changes can help clinicians intervene in time to manage disease progression. To integrate temporal and spatial features from past visual field examinations and enhance visual field prediction, a convolutional long short-term memory (ConvLSTM) network was employed to construct a predictive model. The predictive performance of the ConvLSTM model was validated and compared with other methods on a dataset of perimetry tests from the Humphrey field analyzer at the University of Washington (UWHVF). Compared with traditional methods, the ConvLSTM model demonstrated higher prediction accuracy. The relationship between visual field series length and prediction performance was also investigated: when predicting the visual field from the previous three examinations taken over the past 1.5-6.0 years, the ConvLSTM model performed better, achieving a mean absolute error of 2.255 dB, a root mean squared error of 3.457 dB, and a coefficient of determination of 0.960. The experimental results show that the proposed method effectively utilizes existing visual field examination results to achieve more accurate visual field prediction for the next 0.5-2.0 years. This approach holds promise in assisting clinicians in diagnosing and treating visual field progression in glaucoma patients.
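The three reported metrics follow their standard definitions; for reference, a small NumPy helper (not code from the paper):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """MAE, RMSE, and coefficient of determination (R^2) - the three metrics
    reported for the visual-field prediction, in the same units as the input (dB)."""
    err = y_pred - y_true
    mae = np.mean(np.abs(err))
    rmse = np.sqrt(np.mean(err ** 2))
    r2 = 1.0 - np.sum(err ** 2) / np.sum((y_true - y_true.mean()) ** 2)
    return mae, rmse, r2
```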
Title: [Visual field prediction based on temporal-spatial feature learning]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 1003-1011. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527743/pdf/
Hemodynamic parameters in arteries are difficult to measure non-invasively, so the analysis and prediction of hemodynamic parameters based on computational fluid dynamics (CFD) has become an important research hotspot in biomechanics. This article establishes 15 idealized left coronary artery bifurcation models with concomitant stenosis and aneurysm lesions and numerically simulates them with the CFD method, exploring the effects of left anterior descending (LAD) branch stenosis rate and curvature radius on the hemodynamics inside the aneurysm. Comparing models with different stenosis rates and curvature radii, the study found that as the stenosis rate increased, the oscillatory shear index (OSI) and relative residence time (RRT) showed an increasing trend; in addition, a decrease in curvature radius increased the degree of vascular curvature and the risk of aneurysm rupture. When the stenosis rate was below 60%, the stenosis rate had the greater impact on aneurysm rupture; when it exceeded 60%, the curvature radius became more significant. Based on these results, the risk of aneurysm rupture can be analyzed and predicted by jointly considering the effects of stenosis rate and curvature radius on hemodynamic parameters. By using CFD methods to explore in depth the effects of stenosis rate and curvature radius on aneurysm hemodynamics, this article provides a new theoretical basis and prediction methods for assessing aneurysm rupture risk, with important academic value and practical guidance significance.
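OSI and RRT have standard definitions in terms of the time-resolved wall shear stress (WSS) vector; the discrete form below is a sketch of that post-processing (the uniform-sampling assumption over one cardiac cycle is mine, not stated in the paper):

```python
import numpy as np

def osi_rrt(wss):
    """Oscillatory shear index and relative residence time at one wall point.
    wss: (T, 3) WSS vectors sampled uniformly over one cardiac cycle.
    OSI = 0.5 * (1 - |mean tau| / mean |tau|); RRT = 1 / ((1 - 2*OSI) * TAWSS)."""
    mean_vec = np.mean(wss, axis=0)                  # time-averaged WSS vector
    mag_mean = np.linalg.norm(mean_vec)              # |mean of tau|
    tawss = np.mean(np.linalg.norm(wss, axis=1))     # mean of |tau| (TAWSS)
    osi = 0.5 * (1.0 - mag_mean / tawss)
    # Note: fully oscillatory flow (OSI -> 0.5) makes RRT diverge.
    rrt = 1.0 / ((1.0 - 2.0 * osi) * tawss)          # simplifies to 1 / |mean of tau|
    return osi, rrt
```

High OSI and high RRT both flag disturbed, slowly clearing near-wall flow, which is why the article tracks them as rupture-risk indicators.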
Title: [Hemodynamics simulation and analysis of left coronary artery aneurysms with concomitant stenosis]. Zhengjia Shi, Jianbing Sang, Lifang Sun, Fengtao Li, Yaping Tao, Peng Yang. DOI: 10.7507/1001-5515.202310038. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 1026-1034. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527752/pdf/
Pub Date : 2024-10-25 DOI: 10.7507/1001-5515.202403052
Qian Zang, Xiaoming Zhao, Tie Liang, Xiuling Liu, Cunguang Lou
Fear is a typical negative emotion that is common in daily life and significantly influences human behavior. A deeper understanding of the mechanisms underlying negative emotions contributes to improving the diagnosis and treatment of disorders related to negative emotions. However, the neural mechanisms of the brain when faced with fearful emotional stimuli remain unclear. To this end, this study combined electroencephalogram (EEG) source analysis and cortical brain network construction, based on early posterior negativity (EPN) analysis, to explore the differences in brain information-processing mechanisms under fearful and neutral emotional picture stimuli from a spatiotemporal perspective. The results revealed that neutral emotional stimuli elicited higher EPN amplitudes than fearful stimuli. Further source analysis of EEG data containing EPN components revealed significant differences in cortical activation areas between fearful and neutral emotional stimuli. Subsequently, more functional connections were observed in the alpha-band brain network for fearful emotions than for neutral emotions. By quantifying brain network properties, we found that the average node degree and average clustering coefficient under fearful emotional stimuli were significantly larger than under neutral emotions. These results indicate that combining EPN analysis with EEG source and brain network analysis helps to explore brain functional modulation in the processing of fearful emotions with higher spatiotemporal resolution, providing a new perspective on the neural mechanisms of negative emotions.
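The two network properties the abstract quantifies, average node degree and average clustering coefficient, are standard graph metrics. A minimal NumPy sketch (an illustration, not the study's code) shows how they can be computed from a binarized functional-connectivity adjacency matrix:

```python
import numpy as np

def degree_and_clustering(adj):
    """Average node degree and average clustering coefficient of an
    undirected network, given a (possibly weighted) adjacency matrix
    that is binarized by thresholding at zero."""
    a = (np.asarray(adj) > 0).astype(int)
    np.fill_diagonal(a, 0)                     # ignore self-connections
    deg = a.sum(axis=1)                        # degree of each node
    triangles = np.diag(a @ a @ a) / 2.0       # closed triangles through node i
    possible = deg * (deg - 1) / 2.0           # neighbour pairs of node i
    with np.errstate(divide="ignore", invalid="ignore"):
        ci = np.where(possible > 0, triangles / possible, 0.0)
    return deg.mean(), ci.mean()
```

On a fully connected 3-node network the sketch returns an average degree of 2 and a clustering coefficient of 1; in the study, larger values of both metrics under fearful stimuli indicate denser, more clustered alpha-band connectivity.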
[Neural mechanisms of fear responses to emotional stimuli: a preliminary study combining early posterior negativity and electroencephalogram source network analysis]. Qian Zang, Xiaoming Zhao, Tie Liang, Xiuling Liu, Cunguang Lou. 生物医学工程学杂志 (Journal of Biomedical Engineering), 41(5): 951-957. DOI: 10.7507/1001-5515.202403052. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11527751/pdf/