A Heart Sound Classification Algorithm Based on Bispectral Analysis and Deep Learning
Chundong Xu, Zhengjie Yang, Cheng Zhu. DOI: 10.1145/3560071.3560072
Heart sound classification is an important research direction in biomedicine and is of great significance for reducing cardiovascular mortality. Working directly on unsegmented recordings, this paper proposes to use bispectral analysis, a higher-order spectral method, for feature extraction, and then to use a neural network with an attention block for classification, so that abnormal heart sound signals can be detected. The experiments use the Challenge 2016 dataset for training and testing, achieving a sensitivity of 0.9409, a specificity of 0.8450, and an overall score of 0.8930. Compared with ResNet, MobileNet, and other pre-trained networks used with transfer learning, the proposed CNN-Attention architecture has far fewer layers, and both its training time and the resources required to run the system are drastically reduced. The performance of the proposed algorithm is generally better than that of the reference algorithms.
Enriching Pre-Trained Language Model with Multi-Task Learning and Context for Medical Concept Normalization
Yiling Cao, Lu Fang, Zhongguang Zheng. DOI: 10.1145/3560071.3560084
We focus on the problem of automatic medical concept normalization in social media posts. Specifically, the task is to map medical mentions in social media texts to the appropriate concepts in a reference knowledge base. We propose a new medical concept normalization model based on multi-task learning. The model uses BioBERT to encode mentions and their contexts, and jointly classifies their concept IDs and mention types. We evaluate our approach on two datasets and achieve new state-of-the-art performance.
Purely Image-based Vault Prediction with Domain Prior Supervision for Intraocular Lens Implantation
Huihui Fang, Yifan Yang, Yu-lan Di, Zhen Qiu, Junde Wu, Mingkui Tan, Yan Luo, Yanwu Xu. DOI: 10.1145/3560071.3560079
Myopia is the most common eye disorder in the world, and posterior chamber phakic intraocular lens implantation, a myopia correction surgery, is widely used in clinics because of its reversibility, its wide range of correctable degrees, and its retention of the lens's accommodative ability. We address the problem of vault prediction, which helps ensure the safety of this surgery. Existing methods must first measure ocular parameters and then apply regression, which is time-consuming and prone to subjective error. We therefore aim to design an automatic deep learning-based method that predicts the vault from anterior segment optical coherence tomography (AS-OCT) images alone. Specifically, a deep neural network extracts image features, and a regression module predicts the vault. Furthermore, we introduce domain prior supervision into the deep learning framework: anterior chamber structure segmentation obtained by semi-supervised learning provides additional structural features, and the prediction of auxiliary measurements related to the vault deeply supervises the learning process. Experiments on our dataset (465 test samples) show that the proposed method reduces the mean absolute error by 39.36-57.34 and 7.39-9.20 compared with multiple regression methods and machine learning-based methods, respectively. These results show that it is promising to predict the vault from AS-OCT images without parameter measurement.
Named Entity Recognition in Electronic Medical Records Based on Transfer Learning
Kunli Zhang, Chenghao Zhang, Yajuan Ye, Hongying Zan, Xiaomei Liu. DOI: 10.1145/3560071.3560086
Named entity recognition is the first step in clinical electronic medical record text mining and is significant for clinical decision support and personalized medicine. However, the lack of annotated electronic medical record datasets limits the application of pre-trained language models and deep neural networks in this field. To alleviate the problem of data scarcity, we propose T-RoBERTa-BiLSTM-CRF, a transfer learning-based electronic medical record entity recognition model that aggregates the characteristics of medical data from different sources and uses a small amount of electronic medical record data as target data for further training. Compared with existing models, our approach models medical entities more effectively, and extensive comparative experiments on the CCKS 2019 and DEMRC datasets show its effectiveness.
ACSGRegNet: A Deep Learning-based Framework for Unsupervised Joint Affine and Diffeomorphic Registration of Lumbar Spine CT via Cross- and Self-Attention Fusion
Xiaoru Gao, Guoyan Zheng. DOI: 10.1145/3560071.3560081
Registration plays an important role in medical image analysis. Deep learning-based methods have been studied for medical image registration, leveraging convolutional neural networks (CNNs) to efficiently regress a dense deformation field from a pair of images. However, CNNs are limited in their ability to extract the semantically meaningful intra- and inter-image spatial correspondences that are important for accurate registration. This study proposes a novel end-to-end deep learning-based framework for unsupervised affine and diffeomorphic deformable registration, referred to as ACSGRegNet, which integrates a cross-attention module for establishing inter-image feature correspondences and a self-attention module for intra-image anatomical structure awareness. Both attention modules are built on transformer encoders, and the output of each is fed to a decoder that generates a velocity field. We further introduce a gated fusion module to fuse the two velocity fields, and the fused velocity field is then integrated to obtain a dense deformation field. Extensive experiments were conducted on lumbar spine CT images; once the model is trained, pairs of unseen lumbar vertebrae can be registered in one shot. Evaluated on 450 pairs of vertebral CT data, our method achieved an average Dice of 0.963 and an average distance error of 0.321 mm, outperforming the state-of-the-art (SOTA).
{"title":"Proceedings of the 2022 International Conference on Intelligent Medicine and Health","authors":"","doi":"10.1145/3560071","DOIUrl":"https://doi.org/10.1145/3560071","url":null,"abstract":"","PeriodicalId":249276,"journal":{"name":"Proceedings of the 2022 International Conference on Intelligent Medicine and Health","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126045502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}