Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202405059
Jiayi Li, Wenxin Luo, Zhoufeng Wang, Weimin Li
Lung cancer is a leading cause of cancer-related deaths worldwide, with its high mortality rate primarily attributed to delayed diagnosis. Radiomics, by extracting abundant quantitative features from medical images, offers novel possibilities for early diagnosis and precise treatment of lung cancer. This article reviewed the latest advancements in radiomics for lung cancer management, particularly its integration with artificial intelligence (AI) to optimize diagnostic processes and personalize treatment strategies. Despite existing challenges, such as non-standardized image acquisition parameters and limitations in model reproducibility, the incorporation of AI significantly enhanced the precision and efficiency of image analysis, thereby improving the prediction of disease progression and the formulation of treatment plans. We emphasized the critical importance of standardizing image acquisition parameters and discussed the role of AI in advancing the clinical application of radiomics, alongside future research directions.
[Advances in radiomics for early diagnosis and precision treatment of lung cancer]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 1062-1068. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568731/pdf/
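The "quantitative features" that radiomics extracts are concrete statistics computed over a segmented region of interest. As an illustrative sketch only (not the review's pipeline — real toolkits follow the IBSI standard and compute hundreds of features, with standardized discretization), a few first-order features might be computed like this:

```python
import math
from collections import Counter

def first_order_features(intensities, bins=8):
    """Compute a few first-order radiomic features from a flat list of
    ROI voxel intensities. Illustrative only: real pipelines also cover
    shape, texture (GLCM/GLRLM), and wavelet-filtered features."""
    n = len(intensities)
    mean = sum(intensities) / n
    var = sum((x - mean) ** 2 for x in intensities) / n
    lo, hi = min(intensities), max(intensities)
    width = (hi - lo) / bins or 1.0  # guard against a constant ROI
    # Discretize into a fixed bin count, then take Shannon entropy of the histogram.
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in intensities)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {"mean": mean, "variance": var, "range": hi - lo, "entropy": entropy}
```

Feature vectors like this, computed per lesion, are what downstream AI models consume for diagnosis and progression prediction.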
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202503025
Naigong Yu, Jingsen Huang, Ke Lin, Zhiwen Zhang
In animal navigation, head direction is encoded by head direction cells within the olfactory-hippocampal structures of the brain. Even in darkness or unfamiliar environments, animals can estimate their head direction by integrating self-motion cues, though this process accumulates errors over time and undermines navigational accuracy. Traditional strategies rely on visual input to correct head direction, but visual scenes combined with self-motion information offer only partially accurate estimates. This study proposed an innovative calibration mechanism that dynamically adjusts the association between visual scenes and head direction based on the historical firing rates of head direction cells, without relying on specific landmarks. It also introduced a method to fine-tune error correction by modulating the strength of self-motion input to control the movement speed of the head direction cell activity bump. Experimental results showed that this approach effectively reduced the accumulation of self-motion-related errors and significantly enhanced the accuracy and robustness of the navigation system. These findings offer a new perspective for biologically inspired robotic navigation systems and underscore the potential of neural mechanisms in enabling efficient and reliable autonomous navigation.
[A head direction cell model based on a spiking neural network with landmark-free calibration]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 970-976. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568723/pdf/
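The drift-and-calibration idea in this abstract can be caricatured in a few lines. The sketch below is an illustrative stand-in, not the authors' spiking model: heading is path-integrated from biased angular-velocity cues (so error accumulates), and a periodic visual correction — a simple weighted pull toward the true heading, standing in for the firing-rate-weighted calibration — keeps the accumulated error bounded:

```python
import math

def wrap(a):
    # Wrap an angle to (-pi, pi].
    return math.atan2(math.sin(a), math.cos(a))

def simulate(steps, omega, bias, correct_every=0, gain=0.5):
    """Integrate head direction from angular-velocity cues. `bias` models
    systematic self-motion error; every `correct_every` steps a visual cue
    pulls the estimate toward the true heading with weight `gain`
    (correct_every=0 disables calibration). Returns the final |error|."""
    true_h, est_h = 0.0, 0.0
    for t in range(1, steps + 1):
        true_h = wrap(true_h + omega)
        est_h = wrap(est_h + omega + bias)      # biased path integration
        if correct_every and t % correct_every == 0:
            err = wrap(true_h - est_h)
            est_h = wrap(est_h + gain * err)    # partial visual correction
    return abs(wrap(true_h - est_h))

drift_only = simulate(200, omega=0.05, bias=0.01)                    # error grows linearly
calibrated = simulate(200, omega=0.05, bias=0.01, correct_every=10)  # error stays bounded
```

With these numbers the uncorrected error reaches 2.0 rad while the calibrated run settles near `bias * correct_every * gain / (1 - gain)` = 0.1 rad, mirroring the paper's point that correction strength controls how far the activity bump can drift.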
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202507024
Yuyu Cao, Yuhang Xue, Hengyuan Yang, Fan Wang, Tianwen Li, Lei Zhao, Yunfa Fu
Artificial intelligence-enhanced brain-computer interfaces (BCI) are expected to significantly improve the performance of traditional BCIs in multiple aspects, including usability, user experience, and user satisfaction, particularly in terms of intelligence. However, such AI-integrated or AI-based BCI systems may introduce new ethical issues. This paper first evaluated the potential of AI technology, especially deep learning, in enhancing the performance of BCI systems, including improving decoding accuracy, information transfer rate, real-time performance, and adaptability. Building on this, it was considered that AI-enhanced BCI systems might introduce new or more severe ethical issues compared to traditional BCI systems. These include the possibility of making users' intentions and behaviors more predictable and manipulable, as well as the increased likelihood of technological abuse. The discussion also addressed measures to mitigate the ethical risks associated with these issues. It is hoped that this paper will promote a deeper understanding and reflection on the ethical risks and corresponding regulations of AI-enhanced BCIs.
[Ethical considerations for artificial intelligence-enhanced brain-computer interface]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 1085-1091. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568730/pdf/
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202412057
Shuo Zhu, Xukang Zhang, Zongyang Wang, Rui Jiang, Zhengda Liu
To address the challenges in blood cell recognition caused by diverse morphology, dense distribution, and the abundance of small targets, this paper proposes a blood cell detection algorithm: a "You Only Look Once" model based on hybrid attention and deep over-parameterization (HADO-YOLO). First, a hybrid attention mechanism is introduced into the backbone network to enhance the model's sensitivity to detailed features. Second, the standard downsampling convolution layers in the neck network are replaced with deep over-parameterized convolutions to expand the receptive field and improve feature representation. Finally, the detection head is decoupled to enhance the model's robustness in detecting abnormal cells. Experimental results on the Blood Cell Counting Dataset (BCCD) demonstrate that the HADO-YOLO algorithm achieves a mean average precision of 90.2% and a precision of 93.8%, outperforming the baseline YOLO model. Compared with existing blood cell detection methods, the proposed algorithm achieves state-of-the-art detection performance. In conclusion, HADO-YOLO offers a more efficient and accurate solution for identifying various types of blood cells, providing valuable technical support for future clinical diagnostic applications.
[Deep overparameterized blood cell detection algorithm utilizing hybrid attention mechanisms]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 936-944. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568728/pdf/
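Detection scores like the reported mean average precision rest on IoU-based matching between predicted and ground-truth boxes. A minimal sketch of that evaluation step (hypothetical helper names; full mAP additionally averages precision over recall levels and, often, IoU thresholds):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def precision_at_iou(preds, gts, thr=0.5):
    """Greedy matching: each prediction, in descending confidence order,
    claims at most one unmatched ground-truth box with IoU >= thr.
    `preds` is a list of (confidence, box) pairs."""
    matched, tp = set(), 0
    for conf, box in sorted(preds, reverse=True):
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i in matched:
                continue
            iou = box_iou(box, g)
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / len(preds) if preds else 0.0
```

Small, densely packed cells make this matching step hard — overlapping boxes depress IoU — which is why the paper targets small-object sensitivity in the backbone and neck.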
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202505020
Youzhi Zhao, Qianwen Hou, Jianye Zhou, Shiliang Chen, Hanbing Zhang, Aike Qiao
Stent migration is one of the common complications following transcatheter valve implantation. This study aims to design a "drum-shaped" balloon-expandable aortic valve stent to address this issue and to analyze its mechanics. The implantation process of the stent was evaluated using a method that combines numerical simulation and in vitro experiments. Furthermore, the fatigue process of the stent under pulsatile cyclic loading was simulated, and its fatigue performance was assessed using a Goodman diagram. The process of the stent migrating toward the left ventricular side was simulated, and the force-displacement curve of the stent was extracted to evaluate its anti-migration performance. The results showed that all five stent models could be crimped into a 14F sheath and enabled uniform expansion of the native valve leaflets. The stress in each stent was below the ultimate stress, so no fatigue fracture occurred. As the cell height ratio decreased, the contact area fraction between the stent and the aortic root gradually decreased, whereas the mean contact force and the maximum anti-migration force first decreased and then increased. Specifically, model S5 had the smallest contact area fraction but the largest mean contact force and maximum anti-migration force, reaching approximately 0.16 MPa and 10.73 N, respectively. The designed stent assumes a "drum-shaped" configuration after expansion and shows good anti-migration performance.
[Structural design and mechanical analysis of a "drum-shaped" balloon-expandable valve stent in expanded configuration]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 945-953. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568740/pdf/
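The Goodman-diagram assessment reduces each pulsatile stress cycle to a mean and an alternating component and checks them against the material's endurance and ultimate limits. A minimal check under the modified Goodman criterion, with purely hypothetical material constants (the abstract does not report these values):

```python
def goodman_safety_factor(s_max, s_min, s_endurance, s_ultimate):
    """Modified Goodman criterion: sigma_a/Se + sigma_m/Su = 1/n.
    Returns the safety factor n; n > 1 means the stress cycle lies
    inside the Goodman line, i.e., no predicted fatigue failure."""
    s_a = (s_max - s_min) / 2.0   # alternating stress amplitude
    s_m = (s_max + s_min) / 2.0   # mean stress
    return 1.0 / (s_a / s_endurance + s_m / s_ultimate)

# Hypothetical stress cycle and limits in MPa, for illustration only.
n = goodman_safety_factor(s_max=300.0, s_min=100.0,
                          s_endurance=207.0, s_ultimate=1000.0)
```

In a finite-element study this check is applied per element over one loading cycle; plotting each (sigma_m, sigma_a) pair against the Goodman line is exactly the "Goodman diagram" the paper refers to.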
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202412012
Wen Guo, Xiangyang Chen, Jian Wu, Jiaqi Li, Pengxue Zhu
Colorectal polyps are important early markers of colorectal cancer, and their early detection is crucial for cancer prevention. Although existing polyp segmentation models have achieved certain results, they still face challenges such as diverse polyp morphology, blurred boundaries, and insufficient feature extraction. To address these issues, this study proposes a parallel coordinate fusion network (PCFNet), aiming to improve the accuracy and robustness of polyp segmentation. PCFNet integrates parallel convolutional modules and a coordinate attention mechanism, enabling the preservation of global feature information while precisely capturing detailed features, thereby effectively segmenting polyps with complex boundaries. Experimental results on Kvasir-SEG and CVC-ClinicDB demonstrate the outstanding performance of PCFNet across multiple metrics. Specifically, on the Kvasir-SEG dataset, PCFNet achieved an F1-score of 0.8974 and a mean intersection over union (mIoU) of 0.8358; on the CVC-ClinicDB dataset, it attained an F1-score of 0.9398 and an mIoU of 0.8923. Compared with other methods, PCFNet shows significant improvements across all performance metrics, particularly in multi-scale feature fusion and spatial information capture, demonstrating its innovativeness. The proposed method provides a more reliable AI-assisted diagnostic tool for early colorectal cancer screening.
[A multi-scale feature capturing and spatial position attention model for colorectal polyp image segmentation]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 910-918. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568743/pdf/
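The reported F1-score and mIoU are computed from per-pixel confusion counts; on binary masks the F1-score coincides with the Dice coefficient. A minimal sketch (illustrative helper, not the authors' evaluation code):

```python
def seg_metrics(pred, target):
    """Binary segmentation metrics from flat 0/1 mask lists.
    F1 on masks equals the Dice coefficient; the mIoU reported in
    papers averages per-image (or per-class) IoU scores."""
    tp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 1)
    fp = sum(1 for p, t in zip(pred, target) if p == 1 and t == 0)
    fn = sum(1 for p, t in zip(pred, target) if p == 0 and t == 1)
    denom = tp + fp + fn
    iou = tp / denom if denom else 1.0        # empty masks match trivially
    f1 = 2 * tp / (2 * tp + fp + fn) if denom else 1.0
    return {"iou": iou, "f1": f1}
```

The two metrics are monotonically related (F1 = 2·IoU / (1 + IoU)), which is why models that improve boundary segmentation lift both numbers together.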
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202502053
Xin Meng, Sunjie Zhang
Heart sounds are critical for early detection of cardiovascular diseases, yet existing studies mostly focus on traditional signal segmentation, feature extraction, and shallow classifiers, which often fail to sufficiently capture the dynamic and nonlinear characteristics of heart sounds, limit recognition of complex heart sound patterns, and are sensitive to data imbalance, resulting in poor classification performance. To address these limitations, this study proposes a novel heart sound classification method that integrates improved Mel-frequency cepstral coefficients (MFCC) for feature extraction with a convolutional neural network (CNN) and a deep Transformer model. In the preprocessing stage, a Butterworth filter is applied for denoising, and continuous heart sound signals are directly processed without segmenting the cardiac cycles, allowing the improved MFCC features to better capture dynamic characteristics. These features are then fed into a CNN for feature learning, followed by global average pooling (GAP) to reduce model complexity and mitigate overfitting. Lastly, a deep Transformer module is employed to further extract and fuse features, completing the heart sound classification. To handle data imbalance, the model uses focal loss as the objective function. Experiments on two public datasets demonstrate that the proposed method performs effectively in both binary and multi-class classification tasks. This approach enables efficient classification of continuous heart sound signals, provides a reference methodology for future heart sound research for disease classification, and supports the development of wearable devices and home monitoring systems.
[A study on heart sound classification algorithm based on improved Mel-frequency cepstrum coefficient feature extraction and deep Transformer]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 1012-1020. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568742/pdf/
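Focal loss, which the model adopts against data imbalance, down-weights well-classified examples through a modulating factor so that rare, hard cases dominate training. A minimal binary form (the standard formulation; the paper's exact alpha and gamma are not stated in the abstract):

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss for one prediction: p is the predicted probability
    of the positive class, y the true label (0 or 1). The (1 - p_t)^gamma
    factor shrinks the loss of easy examples; alpha rebalances the classes.
    With alpha=1 and gamma=0 it reduces to plain cross-entropy."""
    p_t = p if y == 1 else 1.0 - p
    a_t = alpha if y == 1 else 1.0 - alpha
    return -a_t * (1.0 - p_t) ** gamma * math.log(p_t)
```

For a confidently correct positive (p = 0.9) the gamma = 2 factor cuts the loss by two orders of magnitude relative to a borderline one (p = 0.6), which is the mechanism that keeps abundant normal heart-sound segments from drowning out rare abnormal ones.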
Pub Date: 2025-10-25 | DOI: 10.7507/1001-5515.202504040
Lilin Jie, Yangmeng Zou, Zhengxiu Li, Baoliang Lyu, Weilong Zheng, Ming Li
Current studies on electroencephalogram (EEG) emotion recognition primarily concentrate on discrete stimulus paradigms under controlled laboratory settings, which cannot adequately represent the dynamic transition characteristics of emotional states during multi-context interactions. To address this issue, this paper proposes a novel method for emotion transition recognition that leverages a cross-modal feature fusion and global perception network (CFGPN). Firstly, an experimental paradigm encompassing six types of emotion transition scenarios was designed, and EEG and eye movement data were simultaneously collected from 20 participants, each annotated with dynamic continuous emotion labels. Subsequently, deep canonical correlation analysis integrated with a cross-modal attention mechanism was employed to fuse features from EEG and eye movement signals, resulting in multimodal feature vectors enriched with highly discriminative emotional information. These vectors are then input into a parallel hybrid architecture that combines convolutional neural networks (CNNs) and Transformers. The CNN is employed to capture local time-series features, whereas the Transformer leverages its robust global perception capabilities to effectively model long-range temporal dependencies, enabling accurate dynamic emotion transition recognition. The results demonstrate that the proposed method achieves the lowest mean square error in both valence and arousal recognition tasks on the dynamic emotion transition dataset and a classic multimodal emotion dataset. It exhibits superior recognition accuracy and stability when compared with five existing unimodal and six multimodal deep learning models. The approach enhances both adaptability and robustness in recognizing emotional state transitions in real-world scenarios, showing promising potential for applications in the field of biomedical engineering.
[A method for emotion transition recognition using cross-modal feature fusion and global perception]. 生物医学工程学杂志 (Journal of Biomedical Engineering), 42(5): 977-986. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568737/pdf/
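Deep canonical correlation analysis trains two projection networks (here, over EEG and eye-movement features) so their outputs are maximally correlated; for one-dimensional projections the objective reduces to the Pearson correlation. A minimal sketch of that quantity (illustrative only, not the paper's multi-dimensional DCCA):

```python
import math

def pearson(u, v):
    """Correlation between two 1-D projected feature sequences. Deep CCA
    optimizes the projections to maximize this (in the multi-dimensional
    case, the sum of canonical correlations between the two views)."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    su = math.sqrt(sum((a - mu) ** 2 for a in u))
    sv = math.sqrt(sum((b - mv) ** 2 for b in v))
    return cov / (su * sv)
```

Maximizing this coupling before fusion is what lets the concatenated EEG and eye-movement vectors carry mutually reinforcing, rather than redundant or conflicting, emotional information.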
Protein lysine β-hydroxybutyrylation (Kbhb) is a newly discovered post-translational modification associated with a wide range of biological processes. Identifying Kbhb sites is critical for a better understanding of its mechanism of action. However, biochemical experimental methods for probing Kbhb sites are costly and have a long cycle. Therefore, a feature embedding learning method based on the Transformer encoder was proposed to predict Kbhb sites. In this method, amino acid residues were mapped into numerical vectors according to their amino acid class and position in a learnable feature embedding method. Then the Transformer encoder was used to extract discriminating features, and the bidirectional long short-term memory network (BiLSTM) was used to capture the correlation between different features. In this paper, a benchmark dataset was constructed, and a Kbhb site predictor, AutoTF-Kbhb, was implemented based on the proposed method. Experimental results showed that the proposed feature embedding learning method could extract effective features. AutoTF-Kbhb achieved an area under curve (AUC) of 0.87 and a Matthews correlation coefficient (MCC) of 0.37 on the independent test set, significantly outperforming other methods in comparison. Therefore, AutoTF-Kbhb can be used as an auxiliary means to identify Kbhb sites.
{"title":"[Prediction of protein Kbhb sites based on learnable feature embedding].","authors":"Zhisen Wei, Zhiwei Wang, Jinyao Yu, Cheng Deng, Dongjun Yu","doi":"10.7507/1001-5515.202401005","DOIUrl":"10.7507/1001-5515.202401005","url":null,"abstract":"<p><p>Protein lysine β-hydroxybutyrylation (Kbhb) is a newly discovered post-translational modification associated with a wide range of biological processes. Identifying Kbhb sites is critical for a better understanding of its mechanism of action. However, biochemical experiments for probing Kbhb sites are costly and time-consuming. Therefore, a feature embedding learning method based on the Transformer encoder was proposed to predict Kbhb sites. In this method, amino acid residues were mapped to numerical vectors according to their amino acid class and sequence position through a learnable feature embedding. The Transformer encoder was then used to extract discriminative features, and a bidirectional long short-term memory network (BiLSTM) was used to capture the correlations among these features. In this paper, a benchmark dataset was constructed, and a Kbhb site predictor, AutoTF-Kbhb, was implemented based on the proposed method. Experimental results showed that the proposed feature embedding learning method extracted effective features. AutoTF-Kbhb achieved an area under the curve (AUC) of 0.87 and a Matthews correlation coefficient (MCC) of 0.37 on the independent test set, significantly outperforming the compared methods. Therefore, AutoTF-Kbhb can serve as an auxiliary tool for identifying Kbhb sites.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"42 5","pages":"1029-1035"},"PeriodicalIF":0.0,"publicationDate":"2025-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568741/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145393804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
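The AUC and MCC figures reported for AutoTF-Kbhb follow standard definitions that can be computed directly from a confusion matrix and ranked prediction scores. A minimal, self-contained sketch (pure Python; the function names and sample data are illustrative, not taken from the paper or its code):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews correlation coefficient from binary labels and binary predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic (ties averaged)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # group tied scores and assign each the average 1-based rank
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, t in zip(ranks, y_true) if t == 1]
    n_pos = len(pos_ranks)
    n_neg = len(y_true) - n_pos
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

MCC is a balanced single-number summary even on skewed datasets (Kbhb sites are rare relative to non-sites), which is why an MCC of 0.37 can be meaningful alongside an AUC of 0.87.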
Pub Date: 2025-10-25  DOI: 10.7507/1001-5515.202501026
Zidong An, Liqiang Wang, Yi Wu, Yongjie Pang, Keming Chen, Yuhai Gao
This study investigated the therapeutic efficacy of a 50 Hz, 0.6 mT low-frequency pulsed electromagnetic field (PEMF) on postmenopausal osteoporosis in ovariectomized rats. Thirty 3-month-old female SD rats were divided into a sham operation group (Sham), an ovariectomized model group (OVX), and a PEMF treatment group, with 10 rats in each group. Eight weeks after surgery, whole-body bone mineral density (BMD) was measured in each group; the treatment group then received PEMF stimulation for 90 minutes daily, while the OVX group received only a sham exposure with the device unpowered. After 6 weeks of intervention, all rats were sacrificed and assessed for in vitro BMD, micro-CT, biomechanics, serum biochemical indicators, and bone tissue-related proteins. The BMD of the OVX group was significantly lower than that of the Sham group 8 weeks after surgery, confirming successful modeling. After 6 weeks of treatment, compared with the OVX group, the PEMF group exhibited significantly increased BMD in the whole body, femur, and vertebral bodies; micro-CT analysis showed improved bone microstructure, significantly increased maximum load and bending strength of the femur, elevated serum bone formation markers, and increased expression of osteogenesis-related proteins. In conclusion, daily 90-minute exposure to 50 Hz, 0.6 mT PEMF effectively enhanced BMD, improved bone biomechanical properties, optimized bone microstructure, stimulated bone formation, and inhibited bone resorption in ovariectomized rats, highlighting its therapeutic potential for postmenopausal osteoporosis.
{"title":"[Experimental study on the treatment of postmenopausal osteoporosis with low-frequency pulsed electromagnetic fields].","authors":"Zidong An, Liqiang Wang, Yi Wu, Yongjie Pang, Keming Chen, Yuhai Gao","doi":"10.7507/1001-5515.202501026","DOIUrl":"10.7507/1001-5515.202501026","url":null,"abstract":"<p><p>This study investigated the therapeutic efficacy of a 50 Hz, 0.6 mT low-frequency pulsed electromagnetic field (PEMF) on postmenopausal osteoporosis in ovariectomized rats. Thirty 3-month-old female SD rats were divided into a sham operation group (Sham), an ovariectomized model group (OVX), and a PEMF treatment group, with 10 rats in each group. Eight weeks after surgery, whole-body bone mineral density (BMD) was measured in each group; the treatment group then received PEMF stimulation for 90 minutes daily, while the OVX group received only a sham exposure with the device unpowered. After 6 weeks of intervention, all rats were sacrificed and assessed for <i>in vitro</i> BMD, micro-CT, biomechanics, serum biochemical indicators, and bone tissue-related proteins. The BMD of the OVX group was significantly lower than that of the Sham group 8 weeks after surgery, confirming successful modeling. After 6 weeks of treatment, compared with the OVX group, the PEMF group exhibited significantly increased BMD in the whole body, femur, and vertebral bodies; micro-CT analysis showed improved bone microstructure, significantly increased maximum load and bending strength of the femur, elevated serum bone formation markers, and increased expression of osteogenesis-related proteins. In conclusion, daily 90-minute exposure to 50 Hz, 0.6 mT PEMF effectively enhanced BMD, improved bone biomechanical properties, optimized bone microstructure, stimulated bone formation, and inhibited bone resorption in ovariectomized rats, highlighting its therapeutic potential for postmenopausal osteoporosis.</p>","PeriodicalId":39324,"journal":{"name":"生物医学工程学杂志","volume":"42 5","pages":"1054-1061"},"PeriodicalIF":0.0,"publicationDate":"2025-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12568744/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145393845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}