
Latest articles from 生物医学工程学杂志 (Journal of Biomedical Engineering)

[Gesture accuracy recognition based on grayscale image of surface electromyogram signal and multi-view convolutional neural network].
Q4 Medicine Pub Date: 2024-12-25 DOI: 10.7507/1001-5515.202309007
Qingzheng Chen, Qing Tao, Xiaodong Zhang, Xuezheng Hu, Tianle Zhang

This study aims to address the limitations in gesture recognition caused by the susceptibility of temporal- and frequency-domain features extracted from surface electromyography signals to interference, as well as the low recognition rates of conventional classifiers. A novel gesture recognition approach was proposed, which transformed surface electromyography signals into grayscale images and employed convolutional neural networks as classifiers. The method began by segmenting the active portions of the surface electromyography signals using an energy threshold approach. Temporal voltage values were then processed through linear scaling and power transformations to generate grayscale images for convolutional neural network input. Subsequently, a multi-view convolutional neural network model was constructed, utilizing asymmetric convolutional kernels of sizes 1 × n and 3 × n within the same layer to enhance the representation capability of surface electromyography signals. Experimental results showed that the proposed method achieved recognition accuracies of 98.11% for 13 gestures and 98.75% for 12 multi-finger movements, significantly outperforming existing machine learning approaches. The proposed gesture recognition method, based on surface electromyography grayscale images and multi-view convolutional neural networks, demonstrates simplicity and efficiency, substantially improving recognition accuracy and exhibiting strong potential for practical applications.
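As a rough illustration of the preprocessing and layer design described above (not the authors' implementation), the sketch below maps an sEMG window to a grayscale image via linear scaling and a power transform, and defines a single "multi-view" layer holding 1 × n and 3 × n kernels in parallel; the channel count, window length, kernel width n = 5, and power exponent are assumed values.

```python
# Minimal sketch, assuming PyTorch and illustrative parameter values.
import numpy as np
import torch
import torch.nn as nn

def semg_to_grayscale(window: np.ndarray, gamma: float = 0.5) -> np.ndarray:
    """Map an sEMG window (channels x samples) to an 8-bit grayscale image.

    Linear scaling to [0, 1] followed by a power transform, as the abstract
    describes; gamma = 0.5 is an assumed exponent.
    """
    lo, hi = window.min(), window.max()
    scaled = (window - lo) / (hi - lo + 1e-12)   # linear scaling to [0, 1]
    transformed = np.power(scaled, gamma)        # power (gamma) transform
    return (transformed * 255).astype(np.uint8)  # grayscale pixel values

class MultiViewConv(nn.Module):
    """One layer holding asymmetric 1×n and 3×n kernels in parallel ("views")."""
    def __init__(self, in_ch: int, out_ch: int, n: int = 5):
        super().__init__()
        self.view1 = nn.Conv2d(in_ch, out_ch, kernel_size=(1, n), padding=(0, n // 2))
        self.view2 = nn.Conv2d(in_ch, out_ch, kernel_size=(3, n), padding=(1, n // 2))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the two views along the channel dimension.
        return torch.cat([self.view1(x), self.view2(x)], dim=1)

if __name__ == "__main__":
    emg = np.random.randn(8, 200)                      # 8 channels x 200 samples (assumed)
    img = semg_to_grayscale(emg)
    x = torch.from_numpy(img).float().unsqueeze(0).unsqueeze(0) / 255.0
    print(MultiViewConv(1, 16)(x).shape)               # torch.Size([1, 32, 8, 200])
```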

{"title":"[Gesture accuracy recognition based on grayscale image of surface electromyogram signal and multi-view convolutional neural network].","authors":"Qingzheng Chen, Qing Tao, Xiaodong Zhang, Xuezheng Hu, Tianle Zhang","doi":"10.7507/1001-5515.202309007","DOIUrl":"https://doi.org/10.7507/1001-5515.202309007","url":null,"abstract":"<p><p>This study aims to address the limitations in gesture recognition caused by the susceptibility of temporal and frequency domain feature extraction from surface electromyography signals, as well as the low recognition rates of conventional classifiers. A novel gesture recognition approach was proposed, which transformed surface electromyography signals into grayscale images and employed convolutional neural networks as classifiers. The method began by segmenting the active portions of the surface electromyography signals using an energy threshold approach. Temporal voltage values were then processed through linear scaling and power transformations to generate grayscale images for convolutional neural network input. Subsequently, a multi-view convolutional neural network model was constructed, utilizing asymmetric convolutional kernels of sizes 1 × <i>n</i> and 3 × <i>n</i> within the same layer to enhance the representation capability of surface electromyography signals. Experimental results showed that the proposed method achieved recognition accuracies of 98.11% for 13 gestures and 98.75% for 12 multi-finger movements, significantly outperforming existing machine learning approaches. The proposed gesture recognition method, based on surface electromyography grayscale images and multi-view convolutional neural networks, demonstrates simplicity and efficiency, substantially improving recognition accuracy and exhibiting strong potential for practical applications.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 6","pages":"1153-1160"},"PeriodicalIF":0.0,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[A study on post-traumatic stress disorder classification based on multi-atlas multi-kernel graph convolutional network].
Q4 Medicine Pub Date: 2024-12-25 DOI: 10.7507/1001-5515.202407031
Lijun Zhou, Hongru Zhu, Yunfei Liu, Xian Mo, Jun Yuan, Changyu Luo, Junran Zhang

Post-traumatic stress disorder (PTSD) presents with complex and diverse clinical manifestations, making accurate and objective diagnosis challenging when relying solely on clinical assessments. Therefore, there is an urgent need to develop reliable and objective auxiliary diagnostic models to provide effective diagnosis for PTSD patients. Currently, the application of graph neural networks for representing PTSD is limited by the expressiveness of existing models, which does not yield optimal classification results. To address this, we proposed a multi-graph multi-kernel graph convolutional network (MK-GCN) model for classifying PTSD data. First, we constructed functional connectivity matrices at different scales for the same subjects using different atlases, followed by employing the k-nearest neighbors algorithm to build the graphs. Second, we introduced the MK-GCN methodology to enhance the feature extraction capability of brain structures at different scales for the same subjects. Finally, we classified the extracted features from multiple scales and utilized graph class activation mapping to identify the top 10 brain regions contributing to classification. Experimental results on seismic-induced PTSD data demonstrated that our model achieved an accuracy of 84.75%, a specificity of 84.02%, and an AUC of 85% in the classification task distinguishing between PTSD patients and non-affected subjects. The findings provide robust evidence for the auxiliary diagnosis of PTSD following earthquakes and hold promise for reliably identifying specific brain regions in other PTSD diagnostic contexts, offering valuable references for clinicians.
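As a hedged sketch of the graph-construction step mentioned above (not the paper's code), the snippet below derives a k-nearest-neighbour adjacency matrix from a functional connectivity matrix; the 90-region atlas size, the Pearson-correlation connectivity, and k = 10 are illustrative assumptions.

```python
# Minimal sketch: build a kNN graph from a functional connectivity (FC) matrix.
import numpy as np

def knn_graph_from_fc(fc: np.ndarray, k: int = 10) -> np.ndarray:
    """Return a symmetric binary adjacency matrix keeping each region's k
    strongest functional connections (self-connections excluded)."""
    n = fc.shape[0]
    strength = np.abs(fc).copy()
    np.fill_diagonal(strength, -np.inf)            # never pick self-loops
    adj = np.zeros((n, n), dtype=np.float32)
    for i in range(n):
        neighbours = np.argsort(strength[i])[-k:]  # indices of the k strongest edges
        adj[i, neighbours] = 1.0
    return np.maximum(adj, adj.T)                  # symmetrise

if __name__ == "__main__":
    ts = np.random.randn(200, 90)                  # 200 time points x 90 regions (assumed)
    fc = np.corrcoef(ts.T)                         # Pearson functional connectivity
    adj = knn_graph_from_fc(fc, k=10)
    print(adj.shape, int(adj.sum()))               # (90, 90) and the number of nonzero entries
```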

{"title":"[A study on post-traumatic stress disorder classification based on multi-atlas multi-kernel graph convolutional network].","authors":"Lijun Zhou, Hongru Zhu, Yunfei Liu, Xian Mo, Jun Yuan, Changyu Luo, Junran Zhang","doi":"10.7507/1001-5515.202407031","DOIUrl":"https://doi.org/10.7507/1001-5515.202407031","url":null,"abstract":"<p><p>Post-traumatic stress disorder (PTSD) presents with complex and diverse clinical manifestations, making accurate and objective diagnosis challenging when relying solely on clinical assessments. Therefore, there is an urgent need to develop reliable and objective auxiliary diagnostic models to provide effective diagnosis for PTSD patients. Currently, the application of graph neural networks for representing PTSD is limited by the expressiveness of existing models, which does not yield optimal classification results. To address this, we proposed a multi-graph multi-kernel graph convolutional network (MK-GCN) model for classifying PTSD data. First, we constructed functional connectivity matrices at different scales for the same subjects using different atlases, followed by employing the k-nearest neighbors algorithm to build the graphs. Second, we introduced the MK-GCN methodology to enhance the feature extraction capability of brain structures at different scales for the same subjects. Finally, we classified the extracted features from multiple scales and utilized graph class activation mapping to identify the top 10 brain regions contributing to classification. Experimental results on seismic-induced PTSD data demonstrated that our model achieved an accuracy of 84.75%, a specificity of 84.02%, and an AUC of 85% in the classification task distinguishing between PTSD patients and non-affected subjects. The findings provide robust evidence for the auxiliary diagnosis of PTSD following earthquakes and hold promise for reliably identifying specific brain regions in other PTSD diagnostic contexts, offering valuable references for clinicians.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 6","pages":"1110-1118"},"PeriodicalIF":0.0,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[Research progress on endoscopic image diagnosis of gastric tumors based on deep learning].
Q4 Medicine Pub Date: 2024-12-25 DOI: 10.7507/1001-5515.202404004
Yuan Gao, Guohui Wei

Gastric tumors are neoplastic lesions that occur in the stomach, posing a great threat to human health. Gastric cancer represents the malignant form of gastric tumors, and early detection and treatment are crucial for patient recovery. Endoscopic examination is the primary method for diagnosing gastric tumors. Deep learning techniques can automatically extract features from endoscopic images and analyze them, significantly improving the detection rate of gastric cancer and serving as an important tool for auxiliary diagnosis. This paper reviews relevant literature in recent years, presenting the application of deep learning methods in the classification, object detection, and segmentation of gastric tumor endoscopic images. In addition, this paper also summarizes several computer-aided diagnosis (CAD) systems and multimodal algorithms related to gastric tumors, highlights the issues with current deep learning methods, and provides an outlook on future research directions, aiming to promote the clinical application of deep learning methods in the endoscopic diagnosis of gastric tumors.

{"title":"[Research progress on endoscopic image diagnosis of gastric tumors based on deep learning].","authors":"Yuan Gao, Guohui Wei","doi":"10.7507/1001-5515.202404004","DOIUrl":"https://doi.org/10.7507/1001-5515.202404004","url":null,"abstract":"<p><p>Gastric tumors are neoplastic lesions that occur in the stomach, posing a great threat to human health. Gastric cancer represents the malignant form of gastric tumors, and early detection and treatment are crucial for patient recovery. Endoscopic examination is the primary method for diagnosing gastric tumors. Deep learning techniques can automatically extract features from endoscopic images and analyze them, significantly improving the detection rate of gastric cancer and serving as an important tool for auxiliary diagnosis. This paper reviews relevant literature in recent years, presenting the application of deep learning methods in the classification, object detection, and segmentation of gastric tumor endoscopic images. In addition, this paper also summarizes several computer-aided diagnosis (CAD) systems and multimodal algorithms related to gastric tumors, highlights the issues with current deep learning methods, and provides an outlook on future research directions, aiming to promote the clinical application of deep learning methods in the endoscopic diagnosis of gastric tumors.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 6","pages":"1293-1300"},"PeriodicalIF":0.0,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[Study on the regulatory effect of low intensity retinal ultrasound stimulation on the neural activity of visual cortex].
Q4 Medicine Pub Date: 2024-12-25 DOI: 10.7507/1001-5515.202401047
Qianqian Wang, Yi Yuan, Jiaqing Yan

Low-intensity ultrasound stimulation of the retina can modulate neural activity in the primary visual cortex (V1); however, it is currently unclear how different intensities and durations of ultrasonic stimulation of the retina modulate neural activity in V1. In this paper, we recorded local field potential (LFP) signals in the V1 brain region of mice under different ultrasound intensities and stimulation durations. The LFP segment from 1 s before to 2 s after ultrasound stimulation (-1 to 2 s) was analyzed, including its amplitude and the power and sample entropy of the delta, theta, alpha, beta, and low-gamma frequency bands. The experimental results showed that, as the stimulation intensity increased, the peak value of the LFP in the visual cortex showed a linear upward trend; the power in the delta and theta frequency bands showed a linear upward trend, and the sample entropy showed a linear downward trend. As the stimulation duration increased, the peak value of the LFP in the visual cortex showed an upward trend that gradually weakened; the power in the delta frequency band showed an upward trend, the sample entropy showed a linear upward trend, and the sample entropy in the theta frequency band showed a downward trend. The results show that low-intensity ultrasonic stimulation of the retina has a significant modulatory effect on neural activity in the visual cortex. The study provides insights into the mechanisms by which ultrasonic stimulation regulates visual system function. Furthermore, it clarifies the patterns of parameter selection, facilitating the development of personalized multi-parameter modulation for the treatment of visual neural degeneration and retinal disorders, and for related research areas.
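For readers unfamiliar with the two measures, the sketch below computes band power (Welch's method) and a simplified sample entropy for one LFP segment; the band cut-offs, sampling rate, and entropy parameters (m = 2, r = 0.2 × SD) are common choices assumed here rather than the authors' exact settings.

```python
# Minimal sketch, assuming NumPy/SciPy and illustrative parameter values.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "low_gamma": (30, 45)}   # assumed cut-offs in Hz

def band_power(sig, fs, lo, hi):
    """Integrated spectral power of sig between lo and hi Hz."""
    f, pxx = welch(sig, fs=fs, nperseg=min(len(sig), fs))
    mask = (f >= lo) & (f < hi)
    return float(np.trapz(pxx[mask], f[mask]))

def sample_entropy(sig, m=2, r_factor=0.2):
    """Simplified sample entropy: -ln(A/B) with template length m and tolerance r."""
    sig = np.asarray(sig, dtype=float)
    r = r_factor * sig.std()
    def matches(length):
        templates = np.array([sig[i:i + length] for i in range(len(sig) - length)])
        total = 0
        for i, t in enumerate(templates):
            dist = np.max(np.abs(templates - t), axis=1)
            total += int(np.sum(dist <= r)) - 1      # exclude the self-match
        return total
    b, a = matches(m), matches(m + 1)
    return float(-np.log(a / b)) if a > 0 and b > 0 else float("inf")

if __name__ == "__main__":
    fs = 1000                                        # assumed sampling rate (Hz)
    t = np.arange(-1, 2, 1 / fs)                     # -1 s to 2 s around stimulation
    lfp = np.random.randn(len(t))                    # placeholder LFP segment
    for name, (lo, hi) in BANDS.items():
        print(name, round(band_power(lfp, fs, lo, hi), 4))
    print("sample entropy:", round(sample_entropy(lfp[:600]), 3))
```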

{"title":"[Study on the regulatory effect of low intensity retinal ultrasound stimulation on the neural activity of visual cortex].","authors":"Qianqian Wang, Yi Yuan, Jiaqing Yan","doi":"10.7507/1001-5515.202401047","DOIUrl":"https://doi.org/10.7507/1001-5515.202401047","url":null,"abstract":"<p><p>Low-intensity ultrasound stimulation of the retina has the ability to modulate neural activity in the primary visual cortex (V1), however, it is currently unclear how different intensities and durations of ultrasonic stimulation of the retina modulate neural activity in V1. In this paper, we recorded local field potential (LFP) signals in the V1 brain region of mice under different ultrasound intensities and different stimulation times. The amplitude of LFP corresponding to 1 s before ultrasound stimulation to 2 s after stimulation (-1-2 s) was analyzed, including the power and sample entropy of delta, theta, alpha beta, and low gamma frequency bands. The experimental results showed that, as the stimulation intensity increased, the peak value of the LFP in the visual cortex showed a linear upward trend; the power in the delta and theta frequency bands showed a linear upward trend, and the sample entropy showed a linear downward trend. With increases of stimulation duration, the peak value of the LFP in the visual cortex showed an upward trend, and the upward trend gradually weakened; the power in the delta frequency band showed an upward trend, the sample entropy showed a linear upward trend, and the sample entropy in the theta frequency band showed a downward trend. The results show that low-intensity ultrasonic stimulation of the retina has a significant modulatory effect on neural activity in the visual cortex. The study provides insights into the mechanisms by which ultrasonic stimulation regulates visual system function. Furthermore, it clarifies the patterns of parameter selection, facilitating the development of personalized multi-parameter modulation for the treatment of visual neural degeneration, retinal disorders and related research areas.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"41 6","pages":"1161-1168"},"PeriodicalIF":0.0,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143504716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
[Reinforcement learning-based method for type B aortic dissection localization].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202309047
An Zeng, Xianyang Lin, Jingliang Zhao, Dan Pan, Baoyao Yang, Xin Liu

In the segmentation of aortic dissection, there are issues such as low contrast between the aortic dissection and surrounding organs and vessels, significant differences in dissection morphology, and high background noise. To address these issues, this paper proposed a reinforcement learning-based method for type B aortic dissection localization. With the assistance of a two-stage segmentation model, the deep reinforcement learning was utilized to perform the first-stage aortic dissection localization task, ensuring the integrity of the localization target. In the second stage, the coarse segmentation results from the first stage were used as input to obtain refined segmentation results. To improve the recall rate of the first-stage segmentation results and include the segmentation target more completely in the localization results, this paper designed a reinforcement learning reward function based on the direction of recall changes. Additionally, the localization window was separated from the field of view window to reduce the occurrence of segmentation target loss. Unet, TransUnet, SwinUnet, and MT-Unet were selected as benchmark segmentation models. Through experiments, it was verified that the majority of the metrics in the two-stage segmentation process of this paper performed better than the benchmark results. Specifically, the Dice index improved by 1.34%, 0.89%, 27.66%, and 7.37% for each respective model. In conclusion, by incorporating the type B aortic dissection localization method proposed in this paper into the segmentation process, the overall segmentation accuracy is improved compared to the benchmark models. The improvement is particularly significant for models with poorer segmentation performance.
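The recall-direction reward can be illustrated with a small sketch; the specific reward values and the mask-based recall definition below are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch: a step reward that depends on the direction of the recall change
# between successive localization steps.
import numpy as np

def recall(pred_box: np.ndarray, target_mask: np.ndarray) -> float:
    """Fraction of target voxels covered by the current localization window."""
    covered = np.logical_and(pred_box, target_mask).sum()
    total = target_mask.sum()
    return covered / total if total > 0 else 0.0

def step_reward(prev_recall: float, curr_recall: float) -> float:
    """Positive reward when recall increases, negative when it decreases."""
    if curr_recall > prev_recall:
        return 1.0
    if curr_recall < prev_recall:
        return -1.0
    return -0.1          # small penalty for no progress, to discourage stalling (assumed)

if __name__ == "__main__":
    mask = np.zeros((64, 64), dtype=bool); mask[20:40, 20:40] = True   # target region
    box_t0 = np.zeros_like(mask); box_t0[25:35, 25:35] = True          # window at step t
    box_t1 = np.zeros_like(mask); box_t1[18:42, 18:42] = True          # window at step t+1
    r0, r1 = recall(box_t0, mask), recall(box_t1, mask)
    print(round(r0, 2), round(r1, 2), step_reward(r0, r1))             # 0.25 1.0 1.0
```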

Citations: 0
[Research on motion impedance cardiography de-noising method based on two-step spectral ensemble empirical mode decomposition and canonical correlation analysis].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202210059
Yao Xie, Dong Yang, Honglong Yu, Qilian Xie

Impedance cardiography (ICG) is essential in evaluating cardiac function in patients with cardiovascular diseases. To address the problem that ICG measurements are easily disturbed by motion artifacts, this paper introduces a de-noising method based on two-step spectral ensemble empirical mode decomposition (EEMD) and canonical correlation analysis (CCA). Firstly, the first spectral EEMD-CCA was performed between the ICG and motion signals, and between the electrocardiogram (ECG) and motion signals, respectively. The component with the strongest correlation coefficient was set to zero to suppress the main motion artifacts. Secondly, the obtained ECG and ICG signals were subjected to a second spectral EEMD-CCA for further denoising. Lastly, the ICG signal was reconstructed from the resulting shared components. The method was tested on 30 subjects, and the results showed that the quality of the ICG signal is greatly improved after using the proposed denoising method, which could support the subsequent diagnosis and analysis of cardiovascular diseases.
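A much-simplified, single-channel sketch of the underlying idea is given below: decompose the signal with EEMD, identify the component most correlated with the motion reference, suppress it, and reconstruct. It substitutes a plain Pearson correlation for the full two-step spectral EEMD-CCA and assumes the third-party PyEMD ("EMD-signal") package; the synthetic signals and sampling rate are placeholders.

```python
# Simplified sketch of motion-artifact suppression via EEMD, assuming PyEMD is installed.
import numpy as np
from PyEMD import EEMD

def suppress_motion_component(icg: np.ndarray, motion: np.ndarray) -> np.ndarray:
    imfs = EEMD().eemd(icg)                              # intrinsic mode functions
    corrs = [abs(np.corrcoef(imf, motion)[0, 1]) for imf in imfs]
    worst = int(np.argmax(corrs))                        # IMF most tied to motion
    keep = [imf for i, imf in enumerate(imfs) if i != worst]
    return np.sum(keep, axis=0)                          # reconstruct without it

if __name__ == "__main__":
    fs = 250
    t = np.arange(0, 10, 1 / fs)
    icg_clean = np.sin(2 * np.pi * 1.2 * t)              # pseudo cardiac component
    motion = 0.8 * np.sin(2 * np.pi * 0.3 * t)           # pseudo motion artifact
    denoised = suppress_motion_component(icg_clean + motion, motion)
    print(np.corrcoef(denoised, icg_clean)[0, 1])        # correlation with the clean component
```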

Citations: 0
[Research progress of breast pathology image diagnosis based on deep learning].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202311061
Liang Jiang, Cheng Zhang, Hui Cao, Baihao Jiang

Breast cancer is a malignancy caused by the abnormal proliferation of breast epithelial cells, predominantly affecting female patients, and it is commonly diagnosed using histopathological images. Currently, deep learning techniques have made significant breakthroughs in medical image processing, outperforming traditional detection methods in breast cancer pathology classification tasks. This paper first reviewed the advances in applying deep learning to breast pathology images, focusing on three key areas: multi-scale feature extraction, cellular feature analysis, and classification. Next, it summarized the advantages of multimodal data fusion methods for breast pathology images. Finally, the study discussed the challenges and future prospects of deep learning in breast cancer pathology image diagnosis, providing important guidance for advancing the use of deep learning in breast diagnosis.

Citations: 0
[Colon polyp detection based on multi-scale and multi-level feature fusion and lightweight convolutional neural network].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202312014
Yiyang Li, Jiayi Zhao, Ruoyi Yu, Huixiang Liu, Shuang Liang, Yu Gu

Early diagnosis and treatment of colorectal polyps are crucial for preventing colorectal cancer. This paper proposes a lightweight convolutional neural network for the automatic detection and auxiliary diagnosis of colorectal polyps. Initially, a 53-layer convolutional backbone network is used, incorporating a spatial pyramid pooling module to achieve feature extraction with different receptive field sizes. Subsequently, a feature pyramid network is employed to perform cross-scale fusion of feature maps from the backbone network. A spatial attention module is utilized to enhance the perception of polyp image boundaries and details. Further, a positional pattern attention module is used to automatically mine and integrate key features across different levels of feature maps, achieving rapid, efficient, and accurate automatic detection of colorectal polyps. The proposed model is evaluated on a clinical dataset, achieving an accuracy of 0.9982, recall of 0.9988, F1 score of 0.9984, and mean average precision (mAP) of 0.9953 at an intersection over union (IOU) threshold of 0.5, with a frame rate of 74 frames per second and a parameter count of 9.08 M. Compared to existing mainstream methods, the proposed method is lightweight, has low operating configuration requirements, high detection speed, and high accuracy, making it a feasible technical method and important tool for the early detection and diagnosis of colorectal cancer.
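As a hedged illustration of the spatial pyramid pooling idea mentioned above (not the paper's network), the sketch below concatenates max-pooled views of a feature map at several receptive-field sizes; the 5/9/13 kernel sizes and feature-map shape are assumed, following common detector designs.

```python
# Minimal sketch of a spatial pyramid pooling block, assuming PyTorch.
import torch
import torch.nn as nn

class SpatialPyramidPooling(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original features plus three pooled views, fused along the channel dimension.
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

if __name__ == "__main__":
    feat = torch.randn(1, 256, 20, 20)          # backbone feature map (assumed size)
    print(SpatialPyramidPooling()(feat).shape)  # torch.Size([1, 1024, 20, 20])
```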

Citations: 0
[Enhancement algorithm for surface electromyographic-based gesture recognition based on real-time fusion of muscle fatigue features].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202312023
Shijia Yan, Ye Yang, Peng Yi

This study aims to optimize surface electromyography-based gesture recognition technique, focusing on the impact of muscle fatigue on the recognition performance. An innovative real-time analysis algorithm is proposed in the paper, which can extract muscle fatigue features in real time and fuse them into the hand gesture recognition process. Based on self-collected data, this paper applies algorithms such as convolutional neural networks and long short-term memory networks to provide an in-depth analysis of the feature extraction method of muscle fatigue, and compares the impact of muscle fatigue features on the performance of surface electromyography-based gesture recognition tasks. The results show that by fusing the muscle fatigue features in real time, the algorithm proposed in this paper improves the accuracy of hand gesture recognition at different fatigue levels, and the average recognition accuracy for different subjects is also improved. In summary, the algorithm in this paper not only improves the adaptability and robustness of the hand gesture recognition system, but its research process can also provide new insights into the development of gesture recognition technology in the field of biomedical engineering.
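The fusion idea can be sketched as follows: compute a per-window fatigue indicator from the sEMG spectrum and append it to the gesture feature vector before classification. Median frequency is a standard fatigue measure but is an assumption here, since the abstract does not name the exact fatigue features used.

```python
# Minimal sketch: per-window fatigue feature (median frequency) fused with gesture features.
import numpy as np
from scipy.signal import welch

def median_frequency(window: np.ndarray, fs: int = 1000) -> float:
    """Frequency below which half of the total sEMG spectral power lies."""
    f, pxx = welch(window, fs=fs, nperseg=min(len(window), 256))
    cumulative = np.cumsum(pxx)
    return float(f[np.searchsorted(cumulative, cumulative[-1] / 2)])

def fuse_features(gesture_features: np.ndarray, window: np.ndarray, fs: int = 1000) -> np.ndarray:
    """Append the fatigue feature to the existing gesture feature vector."""
    return np.concatenate([gesture_features, [median_frequency(window, fs)]])

if __name__ == "__main__":
    emg_window = np.random.randn(1000)                      # 1 s of single-channel sEMG (assumed)
    gesture_feat = np.random.randn(64)                      # e.g. a CNN/LSTM embedding (assumed)
    print(fuse_features(gesture_feat, emg_window).shape)    # (65,)
```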

Citations: 0
[Functional study of amine oxidase copper-containing 1 (AOC1) in lipid metabolism].
Q4 Medicine Pub Date: 2024-10-25 DOI: 10.7507/1001-5515.202407066
Siting Xiang, Shenying Liu, Kuangzheng Li, Tongjin Zhao, Xu Wang

Amine oxidase copper-containing 1 (AOC1) is a key member of the copper amine oxidase family and is responsible for the oxidative deamination of histamine and putrescine. In recent years, AOC1 has been reported to be associated with various cancers, with its expression levels significantly elevated in certain cancer cells, suggesting a potential role in cancer progression. However, its function in lipid metabolism remains unclear. Through genetic analysis, we discovered a potential relationship between AOC1 and lipid metabolism. To investigate further, we generated Aoc1 -/- mice and characterized their metabolic phenotypes under both chow diet and high-fat diet (HFD) feeding conditions. Under HFD feeding, Aoc1 -/- mice exhibited significantly higher fat mass and impaired glucose sensitivity, and lipid accumulation in white adipose tissue and the liver was also increased. This study uncovers the potential role of AOC1 in lipid metabolism and its implications in metabolic disorders such as obesity and type 2 diabetes, providing new targets and research directions for treating metabolic diseases.

Citations: 0