Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00077
Sami Puustinen, J. Hyttinen, Gemal Hisuin, Hana Vrzakova, Antti Huotarinen, P. Fält, M. Hauta-Kasari, A. Immonen, T. Koivisto, J. Jääskeläinen, A. Elomaa
Hyperspectral imaging (HSI) can enhance the recognition of normal and pathological tissues exposed during microscopic or endoscopic surgeries. However, robust HSI classification models would require meticulous documentation of tissue-specific optical properties to account for individual variation and intraoperative factors. Publicly available HSI databases remain scarce or lack relevant metadata, anatomical accuracy, and patient characteristics, which limits the clinical utility of the data. The essential problem is that clinical standards for HSI acquisition and archival do not exist. We collected a total of 52 microsurgical HSI images from 10 patients using our customized HSI system for operation microscopes. We annotated the relevant microanatomical structures and labeled the tissue areas intended for HSI analyses. Using the collected HSI data, we developed the initial design of a microneurosurgical HSI database. The database supports displaying and querying anatomical annotations, localizing magnetic resonance imaging (MRI) scans, operation videos, tissue labels, and HSI spectra per individual patient. Here we present the fundamental structures and functions of the HSI database under development. Our clinical HSI database will provide grounds for further development of HSI algorithms and machine-learning applications in microscopic and endoscopic surgery. Future collaborative research will establish clinical HSI standards with approved supporting technologies.
{"title":"Towards Clinical Hyperspectral Imaging (HSI) Standards: Initial Design for a Microneurosurgical HSI Database","authors":"Sami Puustinen, J. Hyttinen, Gemal Hisuin, Hana Vrzakova, Antti Huotarinen, P. Fält, M. Hauta-Kasari, A. Immonen, T. Koivisto, J. Jääskeläinen, A. Elomaa","doi":"10.1109/CBMS55023.2022.00077","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00077","url":null,"abstract":"Hyperspectral imaging (HSI) can enhance the recognition of normal and pathological tissues exposed during microscopic or endoscopic surgeries. However, robust HSI classification models would require meticulous documentation of the tissue-specific optical properties to account for individual variation and intraoperative factors. Publicly available HSI databases are yet scarce or lack relevant metadata, anatomical accuracy, and patients' characteristics which limits the clinical utility of the data. The essential problem is that clinical standards for HSI acquisition and archival do not exist. We collected a total of 52 microsurgical HSI images from 10 patients using our customized HSI system for the operation microscopes. We annotated the relevant microanatomical structures and labeled the tissue areas intended for HSI analyses. Using the collected HSI data, we developed the initial design of the microneurosurgical HSI database. The HSI database allows to display and query anatomical annotations, localizing magnetic resonance imaging (MRI) scans, operation videos, tissue labels, and HSI spectra per individual patient. Here we present the fundamental structures and functions of the HSI database in development. Our clinical HSI database will provide grounds for further development of HSI algorithms and machine-learning applications in microscopic and endoscopic surgery. 
Future collaborative research will establish clinical HSI standards with approved supporting technologies.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"85 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128209409","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
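The per-patient query capability described in the abstract above can be illustrated with a minimal relational sketch. The schema, table names, and sample rows below are hypothetical stand-ins, not the authors' actual database design:

```python
import sqlite3

# Hypothetical minimal schema linking patients, HSI captures, and annotations.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    id INTEGER PRIMARY KEY,
    age INTEGER,
    diagnosis TEXT
);
CREATE TABLE hsi_image (
    id INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patient(id),
    captured_at TEXT,          -- ISO timestamp of the intraoperative capture
    cube_path TEXT             -- path to the stored hyperspectral cube
);
CREATE TABLE annotation (
    id INTEGER PRIMARY KEY,
    image_id INTEGER REFERENCES hsi_image(id),
    structure TEXT,            -- e.g. 'dura', 'cortex', 'tumour'
    tissue_label TEXT          -- label used for spectral analysis
);
""")
conn.execute("INSERT INTO patient VALUES (1, 57, 'meningioma')")
conn.execute("INSERT INTO hsi_image VALUES "
             "(1, 1, '2022-01-10T09:30', '/data/p1/cube1.hdr')")
conn.execute("INSERT INTO annotation VALUES (1, 1, 'dura', 'normal')")

# Query all annotations for one patient, matching the per-patient view described.
rows = conn.execute("""
    SELECT a.structure, a.tissue_label
    FROM annotation a JOIN hsi_image i ON a.image_id = i.id
    WHERE i.patient_id = ?
""", (1,)).fetchall()
print(rows)  # [('dura', 'normal')]
```

A real deployment would add tables for MRI scans, operation videos, and raw spectra; the point here is only the per-patient join structure.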
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00034
Junhao Zhu, Yi Zeng, Jianheng Zhou, Xunde Dong
Automatic ECG beat classification plays an important role in detecting cardiac disease. In this paper, we propose an automatic recognition model for ECG signals based on discrete wavelet transform (DWT), principal component analysis (PCA), kernel principal component analysis (KPCA), and adaptive kernel principal component analysis (AKPCA). We extracted different ECG features using DWT, PCA, KPCA, and AKPCA, respectively. These features were combined and used as input to a support vector machine (SVM) to classify the ECG. ECG records from the MIT-BIH arrhythmia database were selected to test the proposed method. The following five heartbeat types were classified: normal beats (N), premature ventricular beats (V), right bundle branch block beats (R), left bundle branch block beats (L), and premature atrial beats (A). The sensitivity, accuracy, precision, and specificity reached 99.95%, 99.86%, 99.53%, and 99.70%, respectively. These results indicate that the proposed method is reliable and efficient for ECG beat classification.
{"title":"ECG heartbeat classification based on combined features extracted by PCA, KPCA, AKPCA and DWT","authors":"Junhao Zhu, Yi Zeng, Jianheng Zhou, Xunde Dong","doi":"10.1109/CBMS55023.2022.00034","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00034","url":null,"abstract":"Automatic ECG beat classification plays an important role in detecting cardiac disease. In this paper, we propose an automatic recognition model for ECG signals based on discrete wavelet transform (DWT), principal component analysis (PCA), kernel principal component analysis (KPCA), and adaptive kernel principal component analysis (AKPCA). We extracted different ECG features using DWT, PCA, KPCA, and AKPCA, respectively. These features were combined and used as support vector machine (SVM) input to classify the ECG. ECG records taken from the MIT-BIH arrhythmia database are selected to test the proposed method. The following five heartbeat types were classified using this method: normal beats (N), premature ventricular beats (V), right bundle branch block beats (R), left bundle branch block beats (L), and premature atrial beats (A). The sensitivity, accuracy, precision, and specificity reached 99.95%, 99.86%, 99.53%, and 99.70%, respectively. These results indicate the proposed method is reliable and efficient for ECG beat classification.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"283 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122960178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00080
Grigorii Shoroshov, O. Senyukova, Dmitry Semenov, D. Sharova
MRI quality control plays a significant role in ensuring the safety and quality of examinations. Most work in the area is devoted to the development of no-reference quality metrics; some recent works use 2D or 3D convolutional neural networks. For this study, we collected a dataset of 363 clinical MRI sequences with known quality-control results, as well as 1,295 clinical MRI sequences without known results. We propose a neural-network method that takes the three-dimensional context into account through a bidirectional LSTM, together with a pre-training method, based on predicting no-reference quality metrics with an EfficientNet convolutional neural network, that allows the use of the unlabeled data. The proposed method predicts the quality-control result with an ROC-AUC of almost 0.94.
{"title":"MRI Quality Control Algorithm Based on Image Analysis Using Convolutional and Recurrent Neural Networks","authors":"Grigorii Shoroshov, O. Senyukova, Dmitry Semenov, D. Sharova","doi":"10.1109/CBMS55023.2022.00080","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00080","url":null,"abstract":"MRI quality control plays a significant role in ensuring safety and quality of examinations. Most of the work in the area is devoted to the development of no-reference quality metrics. Some recent works use 2D or 3D convolutional neural networks. For this study, we collected a dataset of 363 clinical MRI sequences with known results of quality control as well as 1295 clinical MRI sequences without known results of quality control. We propose a method based on neural networks that takes into account the three-dimensional context through the use of bidirectional LSTM, as well as a pre-training method based on a prediction of no-reference quality metrics using EfficientNet convolutional neural network that allows the use of unlabeled data. The proposed method makes it possible to predict the result of quality control with ROC-AUC of almost 0.94.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125335178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00052
Mahbub Ul Alam, Jón R. Baldvinsson, Yuxia Wang
The area of interpretable deep neural networks has received increased attention in recent years due to the need for transparency in various fields, including medicine, healthcare, stock market analysis, compliance with legislation, and law. Layer-wise Relevance Propagation (LRP) and Gradient-weighted Class Activation Mapping (Grad-CAM) are two widely used algorithms for interpreting deep neural networks. In this work, we investigated the applicability of these two algorithms in the sensitive application area of interpreting chest radiography images. To obtain a more nuanced and balanced outcome, we used a multi-label classification dataset and analyzed the model predictions by visualizing the output of LRP and Grad-CAM on the chest radiography images. The results show that LRP provides more granular heatmaps than Grad-CAM when applied to the CheXpert dataset classification model. We posit that this is due to the inherent construction difference between these algorithms: LRP accumulates relevance layer by layer, whereas Grad-CAM focuses primarily on the final layers of the model's architecture. Both can be useful for understanding the classification at a micro or macro level, towards a superior and interpretable clinical decision support system.
{"title":"Exploring LRP and Grad-CAM visualization to interpret multi-label-multi-class pathology prediction using chest radiography","authors":"Mahbub Ul Alam, Jón R. Baldvinsson, Yuxia Wang","doi":"10.1109/CBMS55023.2022.00052","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00052","url":null,"abstract":"The area of interpretable deep neural networks has received increased attention in recent years due to the need for transparency in various fields, including medicine, healthcare, stock market analysis, compliance with legislation, and law. Layer-wise Relevance Propagation (LRP) and Gradient-weighted Class Activation Mapping (Grad-CAM) are two widely used algorithms to interpret deep neural networks. In this work, we investigated the applicability of these two algorithms in the sensitive application area of interpreting chest radiography images. In order to get a more nuanced and balanced outcome, we use a multi-label classification-based dataset and analyze the model prediction by visualizing the outcome of LRP and Grad-CAM on the chest radiography images. The results show that LRP provides more granular heatmaps than Grad-CAM when applied to the CheXpert dataset classification model. We posit that this is due to the inherent construction difference of these algorithms (LRP is layer-wise accumulation, whereas Grad-CAM focuses primarily on the final sections in the model's architecture). 
Both can be useful for understanding the classification from a micro or macro level to get a superior and interpretable clinical decision support system.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114662787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
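Grad-CAM's core computation is small enough to sketch directly: the gradients of the class score are global-average-pooled into per-channel weights, the weighted activation maps are summed, and negative evidence is clipped. The toy activation and gradient tensors below stand in for what a real chest-radiography model would supply:

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and the
    gradients of the target class score w.r.t. those activations (C, H, W)."""
    weights = gradients.mean(axis=(1, 2))           # global-average-pool the gradients
    cam = np.einsum("c,chw->hw", weights, activations)
    cam = np.maximum(cam, 0)                        # ReLU keeps positive evidence only
    return cam / cam.max() if cam.max() > 0 else cam

rng = np.random.default_rng(2)
acts = rng.random((8, 7, 7))    # toy feature maps from a final conv layer
grads = rng.random((8, 7, 7))   # toy gradients of one pathology's score
heatmap = grad_cam(acts, grads)
print(heatmap.shape)            # (7, 7)
```

The coarse 7x7 output, upsampled to image size, is exactly why Grad-CAM heatmaps look blockier than LRP's pixel-level relevance maps, as the abstract observes.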
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00046
Fellipe Paes Ferreira, Aengus Daly
Wearable sensors such as smartwatches are becoming increasingly popular as their functionality grows, and interest in their outputs has grown accordingly. This has led researchers to develop tools to analyse the resulting data. In this research, machine learning and deep learning algorithms are applied to classify the presence of schizophrenia using time-series activity data. The dataset was collected from a study of behavioural patterns in people with schizophrenia and contains per-minute motor activity measurements for an average of 12.7 days for 54 participants, 22 with schizophrenia and 32 without. New features were developed by first generating statistical measures in the time domain and second by subdividing the day into 3 separate time categories representing different portions of the circadian rhythm. Five machine learning models were trained using these features to classify participants into the condition group (with schizophrenia) and the control group (without schizophrenia). A deep learning convolutional neural network (ConvNet) that also utilizes the time-of-day categories was developed as well. The best machine learning model under 10-fold cross-validation achieved an average precision of 97.6%, compared to a baseline of 83.6% from the original paper that analysed this dataset. Using Leave One Patient Out (LOPO) validation, the machine learning model gives an accuracy of 86.7% and the deep learning model an average accuracy of 87.6%, which is comparable to the state of the art of 88%-92.5%. To the best of the researchers' knowledge, this is the first time a deep learning ConvNet model has been applied to this task.
{"title":"ConvNet and machine learning models with feature engineering using motor activity data for schizophrenia classification","authors":"Fellipe Paes Ferreira, Aengus Daly","doi":"10.1109/CBMS55023.2022.00046","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00046","url":null,"abstract":"The use of wearable sensors such as smartwatches is becoming increasingly popular allied with their increasing functionality and interest in their outputs. This has led to a corresponding interest and increase by researchers to develop tools to analyse the outputted data. In this research, machine learning and deep learning algorithms are applied to classify the presence of schizophrenia using time series activity data. The dataset was collected from a study about behavioural patterns in people with schizophrenia which contains per minute motor activity measurements for an average of 12.7 days for 54 participants, 22 with schizophrenia and 32 without. New features were developed by firstly generating statistical measures in the time domain and secondly by subdividing the day into 3 separate time categories, representing different portions of the circadian rhythm. Five machine learning models are trained using these features. These models classify participants into the condition group (with schizophrenia) and the control group (without schizophrenia). A deep learning convolutional neural network (ConvNet) was also developed which also utilized time of day categories. The best machine learning model using 10-fold cross-validation achieved an average precision of 97.6% compared to a baseline of 83.6% from the original paper that analysed this dataset. Using Leave One Patient Out (LOPO) as a validation technique the machine learning model gives an accuracy of 86.7%, with the deep learning model giving an average accuracy of 87.6% which is comparable to the state-of-the-art of 88%-92.5%. 
This is the first time to the best of the researchers' knowledge that a deep learning ConvNet model has been applied to this task.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"1949 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129121021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
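The time-of-day feature engineering described above can be sketched as follows. The three segment boundaries and the particular statistics are assumptions for illustration, not the authors' exact choices:

```python
import numpy as np

rng = np.random.default_rng(3)
activity = rng.poisson(lam=20, size=24 * 60)   # one motor-activity count per minute

# Assumed cut points for three circadian segments (illustration only).
segments = {"night": slice(0, 8 * 60),          # 00:00-08:00
            "day": slice(8 * 60, 16 * 60),      # 08:00-16:00
            "evening": slice(16 * 60, 24 * 60)} # 16:00-24:00

features = {}
for name, sl in segments.items():
    seg = activity[sl]
    features[f"{name}_mean"] = float(seg.mean())       # time-domain statistics
    features[f"{name}_std"] = float(seg.std())         # computed per segment
    features[f"{name}_zero_frac"] = float((seg == 0).mean())

print(len(features))  # 9 features: 3 statistics x 3 segments
```

Stacking one such feature row per participant-day yields the tabular input the five machine learning models would train on.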
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00028
Md. Kawsher Mahbub, Md. Zakir Hossain Zamil, Md. Abdul Mozid Miah, Partho Ghose, M. Biswas, K. Santosh
Illness due to infectious diseases has always been a global threat. Millions of people die each year from COVID-19, pneumonia, and tuberculosis (TB), all of which infect the lungs. In all cases, early screening/diagnosis can provide opportunities for better care. To this end, we developed an application, which we call MobApp4InfectiousDisease, that can identify abnormalities due to COVID-19, pneumonia, and TB from chest X-ray images. In MobApp4InfectiousDisease, we implemented a customized deep network with a single transfer learning technique. For validation, we offer an in-depth experimental study: for the COVID-19 / pneumonia / TB cases, we achieved accuracy of 97.72% / 96.62% / 99.75%, precision of 92.72% / 100.0% / 99.29%, recall of 98.89% / 88.54% / 99.65%, and F1-scores of 95.00% / 94.00% / 99.00%. Our results are compared with state-of-the-art techniques. To the best of our knowledge, this is the first deployment of our proof-of-concept MobApp4InfectiousDisease for multi-class infectious disease classification.
{"title":"MobApp4InfectiousDisease: Classify COVID-19, Pneumonia, and Tuberculosis","authors":"Md. Kawsher Mahbub, Md. Zakir Hossain Zamil, Md. Abdul Mozid Miah, Partho Ghose, M. Biswas, K. Santosh","doi":"10.1109/CBMS55023.2022.00028","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00028","url":null,"abstract":"Illness due to infectious diseases has been always a global threat. Millions of people die per year due to COVID-19, pneumonia, and Tuberculosis (TB) as all of them infect the lungs. For all cases, early screening/diagnosis can help provide opportunities for better care. To handle this, we develop an application, which we call MobApp4InfectiousDisease that can identify abnormalities due to COVID-19, pneumonia, and TB using Chest X-ray image. In our MobApp4InfectiousDisease, we implemented a customized deep network with a single transfer learning technique. For validation, we offered in-depth experimental study and we achieved, for COVID-19-pneumonia-TB cases, accuracy of 97.72%196.62%199.75%, precision of 92.72%1100.0%199.29%, recall of 98.89%188.54%199.65%, and F1-score of 95.00%194.00%199.00%. Our results are compared with state-of-the-art techniques. To the best of our knowl-edge, this is the first time we deployed our proof-of-the-concept MobApp4InfectiousDisease for a multi-class infec-tious disease classification.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125553015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00022
Caroline B. Gonçalves, Jefferson R. Souza, H. Fernandes
Convolutional neural networks (CNNs) have shown great potential in different real-world applications. Defining a suitable CNN architecture is vital for obtaining good performance. In this work we propose a random-forest surrogate combined with two bio-inspired optimization algorithms, genetic algorithms (GA) and particle swarm optimization (PSO), to find good fully connected layer architectures and hyperparameters for three state-of-the-art CNNs: VGG-16, ResNet-50 and DenseNet-201. The proposed model is used to classify breast thermography images from the DMR-IR database to determine whether or not the patient has cancer. The proposed model improved the F1-score of DenseNet from 0.92 to 1 using the GA, and that of ResNet from 0.85 to 0.92 using PSO. Moreover, the surrogate model also helped reduce training time.
{"title":"CNN optimization using surrogate evolutionary algorithm for breast cancer detection using infrared images","authors":"Caroline B. Gonçalves, Jefferson R. Souza, H. Fernandes","doi":"10.1109/CBMS55023.2022.00022","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00022","url":null,"abstract":"Convolutional neural networks (CNNs) have shown great potential in different real word application. Defining a suitable CNN architecture is vital for obtaining good performance. In this work we propose a random forest surrogate combined with two bio-inspired optimization algorithm, genetic algorithms (GA) and particle swarm optimization (PSO) used to find good CNN fully connected layer architecture and hyperparameters for three state of the art CNNs: VGG-16, Resnet-50 and Densenet-201. The proposed model is used to classify breast thermography images from the DMR-IR database in order to find whether or not the patient has cancer. The proposed model improved F1-score from 0.92 to 1 for the Densenet using the GA and also Resnet from 0.85 of F1-score to 0.92 using the PSO. Moreover, the surrogate model also helped reducing training time.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125723273","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-01 | DOI: 10.1109/cbms55023.2022.00005
{"title":"Preface to CBMS 2022","authors":"","doi":"10.1109/cbms55023.2022.00005","DOIUrl":"https://doi.org/10.1109/cbms55023.2022.00005","url":null,"abstract":"","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117242036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00044
E. Torre, Luisa Francini, E. Cordelli, R. Sicilia, S. Manfrini, V. Piemonte, P. Soda
Diabetes remains one of the leading causes of death worldwide and has serious consequences if not properly treated. The advent of hybrid closed-loop systems and connections with consumer electronics and cloud-based data systems have hastened the advancement of diabetes technology. In the wake of this progress, we exploit information technology to make insulin pens smart, so as to promote adherence to injection therapy and improve the socio-economic impact for the patient. This work focuses on two main open issues: injection site rotation and lipodystrophy detection while the patient is injecting insulin. The first is addressed by collecting data with an IMU sensor, which is processed by a machine learning classifier to detect the injection site. The second is tackled through a sensor equipped with two LEDs: features computed from these signals feed a one-class Support Vector Machine trained to recognise healthy tissue, so that samples different from those in the training set can be considered lipodystrophies. The results for injection site recognition show an average accuracy greater than 0.957, while for lipodystrophy detection we reach an accuracy greater than 0.95 using the IR LED.
{"title":"Exploiting AI to make insulin pens smart: injection site recognition and lipodystrophy detection","authors":"E. Torre, Luisa Francini, E. Cordelli, R. Sicilia, S. Manfrini, V. Piemonte, P. Soda","doi":"10.1109/CBMS55023.2022.00044","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00044","url":null,"abstract":"Nowadays diabetes still remains one of the leading causes of death worldwide and it has serious consequences if not properly treated. The advent of hybrid closed-loop systems, connection with consumer electronics and cloud-based data systems have hastened the advancement of diabetes technology. In the wake of this progress, we exploit information technology to make insulin pens smart so as to promote adherence to injection therapy and improve the socio-economic impact for the patient. In this respect, this work focuses on two main open issues, namely injection site rotation and lipodystrophies detection while the patient is taking the insulin. The first one is addressed collecting data with IMU sensor which are processed by a machine learning classifier to detect the injection site. The second one is tackled through a sensor equipped with two leds: features computed from such signals fed a one-class Support Vector Machine trained to recognise healthy tissue, so that samples different from those in the training set can be considered as lipodystrophies. 
The results obtained for the injection site recognition show an average accuracy larger than 0.957, whilst in the case of lipodystrophies detection we reach an accuracy greater than 0.95 using the IR led.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114226580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
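The one-class formulation for lipodystrophy detection described above maps directly onto scikit-learn's OneClassSVM: train on healthy-tissue features only, then flag anything outside that distribution. The two-dimensional features below are synthetic stand-ins for the two-LED sensor signals:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(5)

# Synthetic two-LED reflectance features: healthy tissue forms one tight cluster;
# lipodystrophic tissue (never seen in training) lies away from it.
healthy = rng.normal(loc=[1.0, 0.5], scale=0.05, size=(200, 2))
lipo = rng.normal(loc=[1.4, 0.9], scale=0.05, size=(20, 2))

# Train on healthy samples only: the one-class setting from the abstract.
ocsvm = OneClassSVM(kernel="rbf", gamma="scale", nu=0.05).fit(healthy)

healthy_inlier_rate = (ocsvm.predict(healthy) == 1).mean()   # +1 = inlier
lipo_outlier_rate = (ocsvm.predict(lipo) == -1).mean()       # -1 = flagged
print(healthy_inlier_rate, lipo_outlier_rate)
```

The nu parameter bounds the fraction of training (healthy) points allowed outside the boundary, which is the knob trading false alarms against missed lipodystrophies.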
Pub Date: 2022-07-01 | DOI: 10.1109/CBMS55023.2022.00021
Jonathan S. Ramos, Erikson Júlio De Aguiar, Ivar Vargas Belizario, Márcus V. L. Costa, J. G. Maciel, M. Cazzolato, C. Traina, M. Nogueira-Barbosa, A. J. Traina
Bone mineral density (BMD) is the international standard for evaluating osteoporosis/osteopenia. However, BMD alone estimates the risk of vertebral fragility fracture (VFF) with a success rate of only approximately 50%, making it far from ideal for predicting VFFs. In addition, a patient may suffer a VFF whether or not he or she has been diagnosed with osteoporosis or osteopenia. For this reason, we conducted an extensive empirical study to assess VFFs in postmenopausal women. We considered a representative dataset of 94 T1- and T2-weighted routine spine MRI exams (with osteopenia or osteoporosis), split into 2,400 samples (slices). Comparing the classification results of machine learning and deep learning (DL) techniques showed that DL generally achieved better results, at the cost of higher computational power and harder explainability. ResNet achieved the best results in discriminating patients with and without VFFs, with 83% accuracy and 90% AUC (at a 99% confidence level). Our results represent a significant step toward prospective and longitudinal studies investigating methods to achieve higher accuracy in predicting VFFs based on spine MRI features of vertebrae without fracture.
{"title":"Analysis of vertebrae without fracture on spine MRI to assess bone fragility: A Comparison of Traditional Machine Learning and Deep Learning","authors":"Jonathan S. Ramos, Erikson Júlio De Aguiar, Ivar Vargas Belizario, Márcus V. L. Costa, J. G. Maciel, M. Cazzolato, C. Traina, M. Nogueira-Barbosa, A. J. Traina","doi":"10.1109/CBMS55023.2022.00021","DOIUrl":"https://doi.org/10.1109/CBMS55023.2022.00021","url":null,"abstract":"Bone mineral density (BMD) is the international standard for evaluating osteoporosis/osteopenia. The success rate of BMD alone in estimating the risk of vertebral fragility fracture (VFF) is approximately 50%, making BMD far from ideal in predicting VFF. In addition, whether or not a patient has been diagnosed with osteoporosis or osteopenia, he or she may suffer a VFF. For this reason, we conducted an extensive empirical study to assess VFFs in postmenopausal women. We considered a representative dataset of 94 T1- and T2-weighted routine spine MRI (with osteopenia or osteoporosis), split into 2,400 samples (slices). Comparing the classification results of machine learning and deep learning (DL) techniques showed that DL generally achieved better results at the cost of higher computational power and hard explainability. ResNet achieved the best results in discriminating patients from groups with and without VFFs with 83% accuracy and 90% AUC (with a confidence interval of 99%). 
Our results represent a significant step toward prospective and longitudinal studies investigating methods to achieve higher accuracy in predicting VFFs based on spine MRI features of vertebrae without fracture.","PeriodicalId":218475,"journal":{"name":"2022 IEEE 35th International Symposium on Computer-Based Medical Systems (CBMS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115266846","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}