Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00056
E. Santos, Francisco Santos, J. Almeida, K. Aires, J. M. R. Tavares, R. Veras
Diabetic Foot Ulcers (DFU) are lesions in the foot region caused by diabetes mellitus. It is essential to define the appropriate treatment in the early stages of the disease, since late treatment may result in amputation. This article proposes an ensemble approach composed of five modified convolutional neural networks (CNNs) - VGG-16, VGG-19, ResNet-50, InceptionV3, and DenseNet-201 - to classify DFU images. To define the parameters, we fine-tuned the CNNs, evaluated different configurations of fully connected layers, and used batch normalization and dropout operations. The modified CNNs were well suited to the problem; however, we observed that the union of the five CNNs significantly increased the success rates. We performed tests using 8,250 images with different resolution, contrast, color, and texture characteristics, and included data augmentation operations to expand the training dataset. Five-fold cross-validation led to an average accuracy of 95.04% and a Kappa index greater than 91.85%, considered “Excellent”.
Title: Diabetic Foot Ulcers Classification using a fine-tuned CNNs Ensemble
Published in: 2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)
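The "union of the five CNNs" described above amounts to combining per-model class probabilities. As a rough illustrative sketch only (not the authors' code), the soft-voting step can be written as below; the five probability vectors are invented stand-ins for the softmax outputs of VGG-16, VGG-19, ResNet-50, InceptionV3, and DenseNet-201 on one DFU image.

```python
def ensemble_predict(prob_vectors):
    """Average per-model class probabilities and return (winning class, averaged vector)."""
    n_models = len(prob_vectors)
    n_classes = len(prob_vectors[0])
    avg = [sum(p[c] for p in prob_vectors) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=avg.__getitem__), avg

# Hypothetical softmax outputs for one image (three illustrative classes).
outputs = [
    [0.70, 0.20, 0.10],   # stand-in for VGG-16
    [0.60, 0.30, 0.10],   # stand-in for VGG-19
    [0.40, 0.45, 0.15],   # stand-in for ResNet-50
    [0.80, 0.10, 0.10],   # stand-in for InceptionV3
    [0.55, 0.35, 0.10],   # stand-in for DenseNet-201
]
label, avg_probs = ensemble_predict(outputs)
```

Note how the third model alone would have voted for class 1, but the averaged ensemble settles on class 0, which is the kind of disagreement smoothing that can raise success rates.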
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00036
Che Wang, Seng Jia, Zhonghong Yan, Yijia Zheng, Shaonan Liu, Haifeng Wang, Dong Liang, Yanjie Zhu
This work tests the performance of online reconstruction with the deep low-rank plus sparse network (L+S-Net) for fast dynamic MR imaging. The L+S-Net was implemented on the Gadgetron platform for online reconstruction on the scanner. Although L+S-Net has good image reconstruction performance, it takes a long time to estimate the coil sensitivities using the ESPIRiT method. In this study, the SigPy signal processing package was adopted to accelerate the coil sensitivity calculation and thus speed up the online reconstruction. Experiments showed that, compared with the CPU-based method, the coil sensitivity estimation time could be shortened by more than 100 times using the SigPy GPU-based gridding reconstruction. The reconstruction performance is stable and realizes online fast dynamic MR imaging reconstruction within 10 seconds.
Title: Online reconstruction of fast dynamic MR imaging using deep low-rank plus sparse network
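To illustrate what a "coil sensitivity map" is without reproducing ESPIRiT or SigPy, here is a deliberately simplified sketch using the classic root-sum-of-squares (RSS) normalization: each coil's sensitivity at a pixel is that coil's image value divided by the RSS combination across coils. This is an assumption-laden toy, far simpler than the ESPIRiT estimation the paper accelerates.

```python
import math

def rss_sensitivities(coil_images):
    """Toy coil-sensitivity estimate via RSS normalization (NOT ESPIRiT).

    coil_images: one flat list of complex pixel values per coil.
    Returns one sensitivity map per coil.
    """
    n_pix = len(coil_images[0])
    # Root-sum-of-squares magnitude across coils, per pixel.
    rss = [math.sqrt(sum(abs(c[p]) ** 2 for c in coil_images)) for p in range(n_pix)]
    return [[c[p] / rss[p] if rss[p] else 0j for p in range(n_pix)]
            for c in coil_images]

# Two toy coils, four pixels each.
coils = [[3 + 0j, 0 + 1j, 2 + 2j, 1 + 0j],
         [4 + 0j, 0 + 2j, 1 - 1j, 0 + 0j]]
sens = rss_sensitivities(coils)
```

By construction, the squared sensitivity magnitudes at each pixel sum to 1 across coils, which is the normalization property real sensitivity maps are expected to satisfy.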
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00087
R. Sinnott, William Hu
Clinical trials depend upon secure and robust data collection. Often this data is longitudinal in nature and can include clinical information captured in clinical/hospital settings as well as patient-reported information. This paper describes a web application and associated mobile applications (iPhone/Android) developed to support a clinical trial of a commercially available tropical fruit-based ointment applied to a range of skin conditions including eczema and skin rashes, cracked/dry skin on heels, sunburn, and insect bites: the LPR Study. The LPR Study involved 138 participants in four cohorts, with treatments over different time periods (10-21 days) depending on their associated skin condition. The paper describes the solution that was developed, and the practical experiences and challenges that were faced and overcome in delivering the underpinning platform.
Title: Experiences in Development and Support of a Multi-technology Skin Conditions Clinical Trial Platform
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00045
Niccolò McConnell, A. Miron, Zidong Wang, Yongmin Li
The nnUNet is a fully automated and generalisable framework that automatically configures the full training pipeline for the segmentation task it is applied to, taking into account dataset properties and hardware constraints. It utilises a basic UNet-type architecture whose topology is self-configuring. In this work, we propose to extend the nnUNet by integrating mechanisms from more advanced UNet variations, namely residual, dense, and inception blocks, resulting in three new nnUNet variations: the Residual-nnUNet, Dense-nnUNet, and Inception-nnUNet. We evaluated segmentation performance on eight datasets comprising 20 target anatomical structures. Our results demonstrate that altering the network architecture may lead to performance gains, but the extent of the gains and the optimal nnUNet variation are dataset dependent.
Title: Integrating Residual, Dense, and Inception Blocks into the nnUNet
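The block types named above differ only in how features flow between layers. A framework-free conceptual sketch (not the paper's implementation; `f` is a toy stand-in for a conv-norm-activation subpath operating on flat lists rather than feature maps):

```python
def residual_block(x, f):
    """Residual block: output = f(x) + x (identity skip connection)."""
    return [a + b for a, b in zip(f(x), x)]

def dense_block(x, fs):
    """Dense block: each stage sees the concatenation of all previous outputs."""
    features = [x]
    for f in fs:
        concat = [v for feat in features for v in feat]
        features.append(f(concat))
    # Final output is the concatenation of the input and every stage's output.
    return [v for feat in features for v in feat]

# Toy "layer": elementwise doubling (inception blocks, not shown, would run
# several such subpaths in parallel and concatenate their outputs).
toy = lambda xs: [2 * v for v in xs]

res_out = residual_block([1.0, 2.0], toy)        # f(x) + x
dense_out = dense_block([1.0], [toy, toy])       # growing concatenation
```

The point of the sketch is the data-flow contrast: the residual block preserves the input's shape and adds to it, while the dense block grows the feature vector at every stage.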
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00008
Athanasios Lagopoulos, D. Hristu-Varsakelis
One of the crucial indicators of the heart's functioning is the so-called left ventricular ejection fraction (LVEF), which measures the heart's ability to pump blood and corresponds to the relative change in volume of the heart's left ventricle between its most expanded (end-diastole) and most contracted (end-systole) states during a cardiac cycle. A reduced LVEF is a key indicator of heart failure, and as such, its accurate measurement plays a prominent role in cardiology. This work proposes a machine learning approach for estimating the LVEF from short echocardiogram videos. Our model, based on gradient-boosted trees, is significantly simpler than the state of the art, but is competitive in terms of accuracy and has a higher degree of explainability. The proposed model operates on a set of geometric features of the heart's left ventricle, tracking their evolution during the cardiac cycle; some of these features are novel and are proposed here for the first time. We discuss the performance of our model on a dataset of over 10,000 samples, including the relative importance of our proposed features, and show that the model's estimation error is well within the margin of variation that occurs when the same LVEF is measured by different experts.
Title: Measuring the Left Ventricular Ejection Fraction using Geometric Features
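The LVEF definition given in the abstract (relative volume change between end-diastole and end-systole) translates directly into a one-line formula; this is the standard clinical definition, not code from the paper:

```python
def ejection_fraction(edv_ml, esv_ml):
    """LVEF (%) from end-diastolic and end-systolic left-ventricular volumes (ml):
    LVEF = (EDV - ESV) / EDV * 100."""
    return (edv_ml - esv_ml) / edv_ml * 100.0

# Illustrative volumes: EDV 120 ml, ESV 50 ml gives an LVEF of about 58.3%,
# within the commonly cited normal range; heart failure with reduced ejection
# fraction is typically associated with LVEF below roughly 40%.
lvef = ejection_fraction(120.0, 50.0)
```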
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00050
Filipa G. Carvalho, Maryam Abbasi, B. Ribeiro, Joel P. Arrais
Cancer is among the deadliest diseases, reinforcing the need for its detection and treatment. In the era of precision medicine, the main goal is to take individual variability into account in order to choose more accurately which treatment and prevention strategies suit each person. However, drug response prediction for cancer therapy remains a challenge. In this work, we propose a deep neural network model to predict the effect of anticancer drugs on tumors through the half-maximal inhibitory concentration (IC50). The model can be seen as two-fold: first, we pre-trained two autoencoders with high-dimensional gene expression and mutation data to capture the crucial features of tumors; then, this genetic background is translated to cancer cell lines to predict the impact of the genetic variants on a given drug. Moreover, SMILES structures were introduced so that the model can apprehend relevant features of the drug compound. Finally, we use drug sensitivity data correlated with the genomic and drug data to identify features that predict the IC50 value for each drug-cell line pair. The obtained results demonstrate the effectiveness of the extracted deep representations in predicting drug-target interactions, achieving a mean squared error of 1.07 and surpassing previous state-of-the-art models.
Title: Deep Model for Anticancer Drug Response through Genomic Profiles and Compound Structures
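The pairing step described above (a tumour's learned latent code joined with a drug's SMILES-derived features, fed to a regressor scored by mean squared error) can be sketched in miniature. Everything here is hypothetical scaffolding, not the paper's network: `predict_ic50` is a fixed toy linear readout standing in for the deep model's head.

```python
def make_pair_features(cell_latent, drug_features):
    """Concatenate a cell line's latent code with a drug's feature vector."""
    return cell_latent + drug_features

def predict_ic50(features, weights, bias):
    """Toy linear readout standing in for the network's regression head."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def mse(y_true, y_pred):
    """Mean squared error, the metric reported in the abstract (1.07)."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

# Invented numbers: a 1-d latent code and a 2-d drug fingerprint.
pair = make_pair_features([0.5], [1.0, 0.0])
pred = predict_ic50(pair, [1.0, 1.0, 1.0], 0.0)
err = mse([1.0], [pred])
```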
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00059
Aref Smiley, J. Finkelstein
The COVID-19 pandemic has impacted every aspect of health delivery and encouraged the replacement of in-person clinical visits with telecommunications. By providing wireless communication between embedded electronic devices and sensors, telerehabilitation enables constant monitoring of vital body functions and tracking of a person's physical activities, and aids physical therapy. In this paper, we designed and tested two remotely controlled versions of an interactive bike (iBikE) system, which communicate through either Wi-Fi or BLE and give the clinical team the capability to monitor exercise progress in real time using a simple graphical representation. We used the same hardware and user interface for both designs. The software uses either the Wi-Fi or the BLE protocol to connect the iBikE equipment and a PC tablet. The bike can be used for upper or lower limb rehabilitation. A customized tablet app was developed to provide the user interface between the app and the bike sensors. Both bikes were tested with a single group of nine individuals in two separate sessions. Each individual was asked to hand-cycle for three separate sub-sessions (1 minute each at slow, medium, and fast pace) with one minute of rest. During each sub-session, the speed of the bikes was measured continuously using a tachometer, in addition to reading speed values from the iBikE app, to compare the functionality and accuracy of the measured data. The measured RPMs in each sub-session from the iBikE and the tachometer were further divided into four categories: 10-second bins (6 bins), 20-second bins (3 bins), 30-second bins (2 bins), and the whole sub-session (1 minute, 1 bin). Then, the mean difference of each category (iBikE vs. tachometer) was calculated for each sub-session. Finally, the mean and standard deviation (SD) of the calculated mean differences were reported across all individuals. We observed a decreasing trend in both mean and SD from the 10-second to the 1-minute measurements. For the BLE iBikE system, the minimum mean RPM difference was $0.2 \pm 0.3$, in the one-minute sub-session at medium speed. For the Wi-Fi iBikE system, it was $0.21 \pm 0.21$, in the one-minute sub-session at slow speed. Thus, testing confirmed the high accuracy of our interfaces.
Title: Aerobic Exercise System for Home Telerehabilitation
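The binning analysis described above is easy to sketch. The following toy (assuming a 1 Hz sampling rate, which the abstract does not state) splits a 60-second stream of RPM readings into bins and compares iBikE and tachometer bin means; all values are synthetic.

```python
def bin_means(samples, bin_seconds, hz=1):
    """Split a sample stream into consecutive bins and return each bin's mean."""
    size = bin_seconds * hz
    return [sum(samples[i:i + size]) / size for i in range(0, len(samples), size)]

def mean_abs_bin_diff(bike, tach, bin_seconds):
    """Mean absolute difference between matching iBikE and tachometer bins."""
    b, t = bin_means(bike, bin_seconds), bin_means(tach, bin_seconds)
    return sum(abs(x - y) for x, y in zip(b, t)) / len(b)

# Synthetic 1-minute sub-session: the rider speeds up halfway through, and the
# two instruments disagree slightly in opposite directions in each half.
bike = [60.0] * 30 + [62.0] * 30      # iBikE readings, one per second
tach = [60.5] * 30 + [61.5] * 30      # tachometer readings

d10 = mean_abs_bin_diff(bike, tach, 10)   # six 10-second bins
d60 = mean_abs_bin_diff(bike, tach, 60)   # one 1-minute bin
```

Note the effect the paper reports: with these synthetic readings the 10-second bins show a 0.5 RPM mean difference, while over the full minute the short-term disagreements average out to zero, i.e. the difference shrinks as the bin grows.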
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00072
Yanling Liu, Yueying Zhou, Daoqiang Zhang
In recent years, electroencephalogram (EEG)-based emotion recognition has developed rapidly and gained increasing attention in the field of brain-computer interfaces. Studies in the neuroscience domain have shown that different emotional states may activate brain regions and time points differently. Although EEG signals have high temporal resolution and strong global correlation, their low signal-to-noise ratio and high redundancy pose challenges for fast emotion recognition. To cope with this problem, we propose a Temporal and channel Transformer (TcT) model for emotion recognition, applied directly to the raw preprocessed EEG data. In the model, we propose a TcT self-attention mechanism that simultaneously captures temporal and channel dependencies. A sliding-window weight-sharing strategy is designed to gradually refine features from coarse time granularity and to reduce the complexity of the attention calculation. The original signal is passed between layers through a residual structure to integrate the features of different layers. We conduct experiments on the DEAP database to verify the effectiveness of the proposed model. The results show that the model achieves better classification performance in less time and with fewer resources than state-of-the-art methods.
Title: TcT: Temporal and channel Transformer for EEG-based Emotion Recognition
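For readers unfamiliar with self-attention, here is the generic scaled dot-product mechanism that Transformer models like TcT build on, in pure Python. This is the textbook operation only; the paper's joint temporal-and-channel attention and its sliding-window weight sharing are not reproduced here.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(q, k, v):
    """Scaled dot-product attention; q, k, v are lists of equal-length vectors
    (e.g. one per EEG time step or channel). Returns one output vector per query."""
    d = len(q[0])
    out = []
    for qi in q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        w = softmax(scores)
        # Output = attention-weighted mix of the value vectors.
        out.append([sum(wi * vj[t] for wi, vj in zip(w, v)) for t in range(len(v[0]))])
    return out

# Two toy "time steps" attending over themselves.
x = [[1.0, 0.0], [0.0, 1.0]]
out = attention(x, x, x)
```

Each output row is a convex combination of the value vectors, with each position attending most strongly to itself here because the queries and keys coincide.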
Thresholding is a popular technique for image segmentation, particularly in the field of medical image processing. The main challenge in image thresholding is to determine the optimum threshold based on the intensity distributions of object and background in the image. In this paper, we propose a new image thresholding method that injects Bayesian probability estimation into the classical Tsallis entropy framework. The classical algorithm assumes that the intensity distribution of the object does not affect the background pixels, and vice versa. However, the intensity distributions of object and background essentially overlap. It is possible to estimate the probability of a pixel belonging to the object or the background by Bayes' rule, and to use it to update the classical form of the Tsallis entropy. The optimum threshold is finally determined by optimizing an information measure function defined with the new form of the Tsallis entropy. Extensive experiments conducted on two public datasets of medical brain images verify the significant superiority of the proposed method.
Title: Optimum Thresholding for Medical Brain Images Based on Tsallis Entropy and Bayesian Estimation
Authors: Sijin Luo, Zhehao Luo, Zhi-Qin Zhan, Guoyuan Liang
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00071
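The classical Tsallis-entropy thresholding that this paper extends can be sketched directly: for each candidate threshold $t$, compute the Tsallis entropies $S_q^A$ and $S_q^B$ of the sub-histograms below and above $t$, and pick the $t$ maximizing $S_q^A + S_q^B + (1-q)\,S_q^A S_q^B$. The sketch below implements only this classical baseline (the paper's Bayesian update is not reproduced); the histogram and entropic index $q$ are illustrative.

```python
def tsallis_threshold(hist, q=0.8):
    """Classical Tsallis-entropy threshold selection over a grey-level histogram."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_val = 1, float("-inf")
    for t in range(1, len(hist)):
        cum = sum(p[:t])
        pa = max(cum, 1e-12)        # class-A (background) probability mass
        pb = max(1.0 - cum, 1e-12)  # class-B (object) probability mass
        # Tsallis entropy of each class with entropic index q.
        sa = (1.0 - sum((pi / pa) ** q for pi in p[:t])) / (q - 1.0)
        sb = (1.0 - sum((pi / pb) ** q for pi in p[t:])) / (q - 1.0)
        # Pseudo-additive combination of the two class entropies.
        val = sa + sb + (1.0 - q) * sa * sb
        if val > best_val:
            best_t, best_val = t, val
    return best_t

# Synthetic bimodal histogram: a dark "background" peak around level 41
# and a bright "object" peak around level 124.
hist = [0] * 40 + [30, 60, 30] + [0] * 80 + [20, 40, 20] + [0] * 13
t = tsallis_threshold(hist)
```

On this toy histogram the criterion is maximized anywhere in the empty valley between the two peaks, so the returned threshold cleanly separates them.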
The heart is one of the body's essential organs, and segmentation of its left and right ventricles is essential in diagnosing various heart diseases. The most popular method for segmenting 3D MRI images is the nnUNet. However, a 3D MRI volume of the ventricles contains other organs that interfere with ventricular segmentation. Hence, we propose RegUNet, a novel region-aware U-Net segmentation method for ventricular segmentation. RegUNet improves segmentation performance by first capturing the region of interest (RoI) of the ventricles and then segmenting them using the captured RoI features; this reduces the segmentation module's difficulty by keeping the cardiac features and discarding the rest, so that RegUNet can focus on ventricular segmentation. Besides, since the model segments the ventricles from the captured RoI features, it saves computing resources that would otherwise be spent identifying the background of the volume. Because 3D cardiac MRI volumes scanned by different devices have diverse statistical characteristics, the model's performance on multi-source cardiac volumes can be unstable. We stabilize the model's performance with a multi-source feature normalization strategy, which normalizes features from different sources with different parameters. We validated the proposed method on the M&Ms dataset, a multi-source 3D MRI cardiac segmentation dataset. Experiments showed that RegUNet's segmentation ability reaches the state of the art.
Title: Left and Right Ventricular Segmentation Based on 3D Region-Aware U-Net
Authors: Xiao-jing Huang, Wenjie Chen, Xueting Liu, Huisi Wu, Zhenkun Wen, Linlin Shen
Pub Date: 2022-07-01, DOI: 10.1109/CBMS55023.2022.00031
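The region-aware idea above (locate the RoI first, then segment only inside it) reduces, at its simplest, to cropping the volume to the bounding box of a coarse RoI mask. A minimal sketch under that assumption, with 2D lists standing in for 3D MRI volumes and everything else invented:

```python
def mask_bbox(mask):
    """Bounding box (r0, r1, c0, c1) of the nonzero region of a 2D mask."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    return min(rows), max(rows) + 1, min(cols), max(cols) + 1

def crop(volume, bbox):
    """Crop a 2D volume to the bounding box; only this patch would be passed
    to the fine segmentation stage."""
    r0, r1, c0, c1 = bbox
    return [row[c0:c1] for row in volume[r0:r1]]

# Toy coarse RoI mask: 1s mark the (hypothetical) ventricular region.
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0]]
roi = crop(mask, mask_bbox(mask))
```

The crop shrinks the input from a 4x4 grid to a 2x2 patch, which is the source of the compute savings the abstract mentions: the fine stage never sees the background voxels.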