Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926817
N. Filipovic, Smiljana Tomasevic, Andjela Blagojević, Branko Arsić, Miloš Anić, T. Djukić
In this study, we present a new computational model of atheromatous plaque growth and progression in the carotid artery, using specialized mathematical models and computational simulations to enable accurate prediction of cardiovascular disease evolution. The model couples an Agent-Based Method (ABM) with the Finite Element Method (FEM). The ABM is driven by an initial wall shear stress (WSS) profile, which triggers pathologic vascular remodeling by perturbing baseline cellular activity and favoring lipid infiltration and accumulation within the arterial wall. The ABM takes the shear stress and initial LDL distribution from the lumen and iterates inside the wall, simulating lipid infiltration and accumulation with a random number generator at each time step. After the ABM iterations, both the lipid distribution and the geometry of the wall have changed. This directly alters the arterial wall geometry, which is also modeled with finite elements; the ABM agents are embedded inside these larger finite elements. The fluid-structure solver is then run and the lumen domain is recomputed. The change in the shape of the arterial wall cross-sections is shown at three specific moments in time (baseline, after 3 months, and after 6 months). One main advantage of this new approach is the use of a realistic 3D-reconstructed artery, providing a more realistic, patient-specific simulation of plaque progression.
Title: Modeling of Plaque Progression in the Carotid Artery Using Coupled Agent Based with Finite Element Method
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
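The ABM-FEM coupling loop described in the abstract can be sketched as a toy simulation. The infiltration rule, the thickness and WSS update formulas, and all rates below are illustrative assumptions, not the authors' model; only the overall structure (ABM lipid iterations, wall geometry update, fluid-structure recomputation, snapshots at baseline, 3 and 6 months) follows the text.

```python
import numpy as np

def abm_lipid_step(lipid, wss, ldl, rng, infil_rate=0.05):
    """One ABM iteration: low wall shear stress (WSS) favors LDL
    infiltration; a random draw decides whether each wall agent
    accumulates lipid (toy rule, not the authors' exact model)."""
    low_wss = wss < wss.mean()          # regions prone to remodeling
    draws = rng.random(lipid.shape) < infil_rate
    return lipid + np.where(low_wss & draws, ldl, 0.0)

def coupled_simulation(wss, ldl, months=6, steps_per_month=10, seed=0):
    """Sketch of the ABM-FEM coupling loop: the ABM grows lipid inside
    the wall, the wall geometry (here reduced to a thickness profile)
    is updated, and a stand-in for the fluid-structure solver
    recomputes WSS on the remodeled lumen."""
    rng = np.random.default_rng(seed)
    lipid = np.zeros_like(wss)
    thickness = np.ones_like(wss)       # baseline wall thickness
    snapshots = {0: thickness.copy()}   # baseline cross-section
    for month in range(1, months + 1):
        for _ in range(steps_per_month):
            lipid = abm_lipid_step(lipid, wss, ldl, rng)
        thickness = 1.0 + lipid         # plaque thickens the wall
        wss = wss * thickness           # toy fluid-structure update
        if month in (3, 6):             # moments reported in the paper
            snapshots[month] = thickness.copy()
    return snapshots
```

Running it on a small cross-section profile yields wall thickness snapshots that grow monotonically from baseline to 6 months, mirroring the reported progression in shape over time.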
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926929
Dimitris Zaridis, E. Mylona, N. Tachos, K. Marias, M. Tsiknakis, D. Fotiadis
Precise delineation of the prostate gland on MRI is the cornerstone of accurate prostate cancer diagnosis, detection, characterization and treatment. The present work proposes a meta-learner deep learning (DL) network that combines three well-established DL models and fine-tunes them in order to improve the segmentation of the prostate compared to the base learners. The backbone of the meta-learner consists of the original U-net, Dense2U-net and Bridged U-net models. On top of the three base networks, a model with four convolutions with different receptive fields was added. The meta-learner outperformed the base learners in 4 out of 5 performance metrics. The median Dice score for the meta-learner was 89%, while for the second-best model it was 83%. Except for the Hausdorff distance, where the meta-learner and Dense2U-net performed equally well, the improvements achieved in terms of average sensitivity, balanced accuracy, Dice score and Rand error, compared to the best-performing base learner, were 6%, 3%, 5% and 4%, respectively.
Title: Fine-tuned feature selection to improve prostate segmentation via a fully connected meta-learner architecture
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
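The headline metric above, the Dice score, is the standard overlap measure for binary segmentation masks and can be computed in a few lines (a minimal numpy sketch; the epsilon guard for empty masks is our addition):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    2*|A & B| / (|A| + |B|). `eps` guards against empty masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0 and disjoint masks score near 0, so the reported 89% vs. 83% median values quantify how much more of the radiologist's prostate contour each model recovers.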
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926950
Yilun Zhu, A. Mariakakis, E. de Lara, T. Falk
Recent work has shown the potential of using speech signals for remote detection of coronavirus disease 2019 (COVID-19). Due to the limited amount of available data, however, existing systems have typically been evaluated within a single dataset. Hence, it is not clear whether these systems generalize to unseen speech signals and whether they indeed capture COVID-19 acoustic biomarkers or only dataset-specific nuances. In this paper, we start by evaluating the robustness of systems proposed in the literature, including two based on hand-crafted features and two based on deep neural network architectures. In particular, these systems are tested across two international COVID-19 detection challenge datasets (COMPARE and DICOVA2). Experiments show that the performance of the explored systems degraded to chance levels when tested on unseen data, especially for those based on deep neural networks. To increase the generalizability of existing systems, we propose a new set of acoustic biomarkers based on speech modulation spectrograms. The new biomarkers, when used to train a simple linear classifier, showed substantial improvements in cross-dataset testing performance. Further interpretation of the biomarkers provides a better understanding of the acoustic properties of COVID-19 speech. The generalizability and interpretability of the selected biomarkers allow for the development of a more reliable and lower-cost COVID-19 detection system.
Title: How Generalizable and Interpretable are Speech-Based COVID-19 Detection Systems?: A Comparative Analysis and New System Proposal
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
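The modulation spectrogram underlying the proposed biomarkers applies a second spectral analysis across time to the conventional spectrogram, capturing how slowly or quickly each acoustic frequency band's energy fluctuates. A minimal sketch of the idea follows; the window, hop, and transform choices are our assumptions, not the paper's configuration:

```python
import numpy as np

def modulation_spectrogram(signal, win=256, hop=128):
    """Toy modulation spectrogram: (1) magnitude STFT of the signal,
    (2) FFT of each acoustic-frequency band's envelope across frames,
    giving energy vs. (acoustic frequency, modulation frequency)."""
    window = np.hanning(win)
    frames = [
        np.abs(np.fft.rfft(signal[start:start + win] * window))
        for start in range(0, len(signal) - win + 1, hop)
    ]
    spec = np.array(frames)                  # (n_frames, n_bins)
    # second transform: modulation content of each band's envelope
    mod = np.abs(np.fft.rfft(spec, axis=0))  # (n_mod, n_bins)
    return mod.T                             # (acoustic bin, mod bin)
```

The resulting 2-D representation separates articulation-rate energy from faster fluctuations, which is the kind of structure a simple linear classifier can exploit across datasets.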
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926959
Lulin Shi, Ivy H. M. Wong, Claudia T. K. Lo, T. T. Wong
Virtual histological staining, which takes a label-free autofluorescence image as input, is a challenging scientific pursuit to visualize complicated biological structures with distinct features. Most related methods follow a two-side image translation architecture to remove the paired-data restriction, which is necessary for virtual histological staining of unprocessed and thick tissue. However, the associated cycle-consistency loss inevitably incurs a large computational cost and cannot address the problem of incorrect translation between intracellular and extracellular components, which we term incorrect staining. In this paper, we propose a novel and computationally efficient one-side image translation framework to transform unstained autofluorescence images into virtual hematoxylin- and eosin-stained counterparts for both thin and thick human samples. To address the incorrect nuclear staining issue, we design a region-classification loss to incorporate partial supervision information. Experimental data on both thin and thick human lung samples demonstrate that our method is computationally efficient while achieving transformation performance comparable to the two-side framework.
Title: One-side Virtual Histological Staining Model for Complex Human Samples
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926967
Adria Mallol-Ragolta, Shuo Liu, B. Schuller
In this work, we focus on the automatic detection of COVID-19 patients from the analysis of cough, breath, and speech samples. Our goal is to investigate the suitability of Self-Supervised Learning (SSL) representations extracted using Wav2Vec 2.0 for the task at hand. For this, in addition to the SSL representations, the trained models exploit the Low-Level Descriptors (LLD) of the eGeMAPS feature set and Mel-spectrogram coefficients. The extracted representations are analysed using Convolutional Neural Networks (CNN) reinforced with contextual attention. Our experiments are performed using the data released as part of the Second Diagnosing COVID-19 using Acoustics (DiCOVA) Challenge, and we use the Area Under the Curve (AUC) as the evaluation metric. When using the CNNs without contextual attention, the multi-type model exploiting the SSL Wav2Vec 2.0 representations from the cough, breath, and speech sounds scores the highest AUC, 80.37%. When reinforcing the embedded representations learnt with contextual attention, the AUC obtained using this same model slightly decreases to 80.01%. The best performance on the test set is obtained with a multi-type model fusing the embedded representations extracted from the LLDs of the cough, breath, and speech samples and reinforced using contextual attention, scoring an AUC of 81.27%.
Title: COVID-19 Detection Exploiting Self-Supervised Learning Representations of Respiratory Sounds
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
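The AUC values quoted above can be computed without any plotting via the rank-sum (Mann-Whitney) identity: the AUC equals the probability that a randomly chosen positive sample outscores a randomly chosen negative one. A minimal numpy sketch:

```python
import numpy as np

def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney identity:
    the probability that a random positive outscores a random
    negative, counting ties as one half."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

A perfect ranking scores 1.0 and a random one about 0.5, which puts the reported 80-81% figures in context. The O(n_pos * n_neg) pairwise comparison is fine at challenge-dataset scale; rank-based implementations scale better.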
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926935
Jui-Fu Hong, Y. Tseng
Deep learning has been widely used in the medical field to support medical decision making. Simultaneously, with the rise of data privacy protection, accessing clinical records across different institutions has become challenging. Several approaches, such as federated and transfer learning, have been proposed to train models without accessing all the records from each institution, but the performance of these privacy-preserving models may not be as good as that of centralized approaches, which aggregate all records to build a single model. To explore the potential of privacy-preserving prediction of second primary cancer (SPC) in lung cancer survivors using real-world data, we evaluated the performance of federated learning, transfer learning, learning from a single institution, and traditional centralized learning. We trained machine learning models using data from four hospitals and compared the performance of the four learning strategies. The results show that federated learning outperformed the other learning strategies in three of the four sites (AUROC from 0.733 to 0.777). However, only at Site 6 did federated learning significantly outperform all the other learning strategies (P < 0.05). In summary, federated learning can produce a unified model for multiple institutions while maintaining data security.
Title: Performance vs. Privacy: Evaluating the Performance of Predicting Second Primary Cancer in Lung Cancer Survivors with Privacy-preserving Approaches
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
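The abstract does not state which aggregation rule its federated setup uses; the canonical choice is FedAvg, where the server averages locally trained parameters weighted by each site's record count, so raw records never leave a hospital. A minimal sketch under that assumption:

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """One FedAvg aggregation round: combine the model parameters
    trained locally at each site, weighted by that site's record
    count, without ever pooling the raw records. `site_weights` is a
    list (one entry per site) of lists of parameter arrays."""
    total = float(sum(site_sizes))
    agg = [np.zeros_like(w) for w in site_weights[0]]
    for weights, n in zip(site_weights, site_sizes):
        for a, w in zip(agg, weights):
            a += (n / total) * w
    return agg
```

In a full training loop, the averaged parameters would be broadcast back to the four hospitals for the next round of local training.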
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926870
Julio Marcos Gomes Junior, Fabricio M. Lopes
Absenteeism, the failure of employees to attend work, occurs for various reasons, such as vigorous physical activity, advanced age and high psychological demands of the work. Absenteeism affects the direct and indirect costs of companies and may reach 15% of the payroll. It is therefore fundamental to know its main causes and to contribute to control and mitigation strategies. Neural networks have been successfully applied to many classification problems, but they are black boxes, as they do not explain which aspects are considered in their decisions. This aspect is very important in health applications, in which it is necessary to explain and clearly interpret the results. In this context, this work presents an approach to classify absenteeism through neural networks and Layer-wise Relevance Propagation (LRP) aggregation, in order to identify the most relevant features and to assign relevance scores both per class and across all classes. The proposed approach was assessed on a dataset widely used as a benchmark and compared to existing methods from the literature. It achieved the highest accuracy among the compared methods, reaching an average of 0.83, while identifying the most relevant features for the classification of absenteeism through a relevance score. The results therefore allow interpretation of the causes of each class of absenteeism, which contributes to human resources management, occupational medicine and the development of mitigation strategies.
Title: Interpretability with Relevance Aggregation in Neural Networks for Absenteeism Prediction
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
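The core LRP step that the aggregation builds on redistributes a layer's output relevance back to its inputs in proportion to each input's contribution. A sketch of the standard epsilon rule for one linear layer follows; the paper's exact LRP variant and aggregation scheme are not specified in the abstract, so this is only the textbook building block:

```python
import numpy as np

def lrp_linear(a, w, b, relevance, eps=1e-6):
    """Epsilon-rule Layer-wise Relevance Propagation through one
    linear layer z = a @ w + b: each input unit receives a share of
    the output relevance proportional to its contribution a_i * w_ij.
    `eps` stabilizes the division for near-zero activations."""
    z = a @ w + b                      # forward pre-activations (n_out,)
    z = z + eps * np.sign(z)           # stabilizer
    s = relevance / z                  # per-output scaling (n_out,)
    return a * (w @ s)                 # input relevances (n_in,)
```

With a zero bias, the returned relevances approximately conserve the total relevance, which is what makes per-class and cross-class aggregation of the scores meaningful.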
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926892
S. Gurbuz, Mohammad Mahbubur Rahman, Emre Kurtoğlu, D. Martelli
Human activity recognition (HAR) and gait analysis are important functions that support aging-in-place and remote health monitoring. Although many works have investigated radar-based HAR using single-activity snapshots in time, few address recognition in continuous streams of radio frequency (RF) data, in which the many different activities of daily life are conducted. This work proposes a novel variable-window averaging method to segment RF data streams containing a mixture of large-scale gross motor activities and fine-grain hand gestures, a physics-aware generative adversarial network (PhGAN) to recognize daily activities, and a new technique to estimate step-time variability from RF data. Our results show that extracting motion-detected intervals and adding GAN-synthesized samples during training boosts HAR accuracy, while the step-time variability estimated from radar compares well with that obtained from a Vicon motion capture system.
Title: Continuous Human Activity Recognition and Step-Time Variability Analysis with FMCW Radar
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926904
James Gascoigne-Burns, Stamos Katsigiannis
Deep learning models have demonstrated superhuman performance in a multitude of image classification tasks, including the classification of chest X-ray images. Despite this, medical professionals are reluctant to embrace these models in clinical settings due to a lack of interpretability, citing the ability to visualise the image areas contributing most to a model's predictions as one of the best ways to establish trust. To aid the discussion of their suitability for real-world use, in this work we attempt to address this issue by conducting a localisation study of two state-of-the-art deep learning models for chest X-ray image classification, ResNet-38-large-meta and CheXNet, on a set of 984 radiologist-annotated X-ray images from the publicly available ChestX-ray14 dataset. We do this by applying and comparing several state-of-the-art visualisation methods, combined with a novel dynamic thresholding approach for generating bounding boxes, which we show to outperform the static thresholding method used by similar localisation studies in the literature. Results also indicate that localisation quality is more sensitive to the choice of thresholding scheme than to the visualisation method used, and that high discriminative ability, as measured by classification performance, is not necessarily sufficient for models to produce useful and accurate localisations.
Title: A Localisation Study of Deep Learning Models for Chest X-ray Image Classification
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
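The contrast between static and dynamic thresholding can be made concrete: a dynamic scheme binarizes each heatmap at a fraction of *its own* maximum rather than at one fixed global value, so weakly activated maps still yield a box. A minimal sketch, with the 0.6 fraction as an illustrative value rather than the paper's tuned one:

```python
import numpy as np

def cam_bounding_box(heatmap, fraction=0.6):
    """Dynamic thresholding: binarize a class activation heatmap at a
    fraction of its own maximum, then take the tight bounding box of
    the surviving pixels. Returns (row_min, row_max, col_min, col_max)
    or None if nothing survives the threshold."""
    mask = heatmap >= fraction * heatmap.max()
    rows, cols = np.where(mask)
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())
```

A static scheme would replace `fraction * heatmap.max()` with a constant, which discards boxes entirely on images where the model's activations happen to be globally weak.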
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926856
Laura Carolina Martínez Esmeral, A. Uhl
Deriving patients' identities from medical imagery threatens privacy, as these data are acquired to support diagnosis, not to reveal identity-related features. Still, such identity breaches have been reported for many medical imaging modalities. To cope with this, de-identification methods based on the generation of synthetic data have been explored in the literature. In this paper, we instead occlude the personal identifiers directly in the data by means of Class Activation Maps, in such a way that diagnosis-related features are not altered.
Title: Class Activation Maps for the disentanglement and occlusion of identity attributes in medical imagery
Published in: 2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)
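The occlusion step itself is simple once a Class Activation Map from an identity classifier is available: mask out only the pixels the map marks as most identity-revealing and leave the rest untouched. A minimal sketch; the threshold fraction and fill value are illustrative choices, and producing the CAM (and verifying diagnostic features survive) is the actual substance of the paper:

```python
import numpy as np

def occlude_identity(image, identity_cam, fraction=0.5, fill=0.0):
    """Occlude the regions an identity classifier's Class Activation
    Map marks as most identity-revealing, leaving the remainder of
    the image (and, ideally, its diagnostic content) untouched."""
    mask = identity_cam >= fraction * identity_cam.max()
    out = image.copy()
    out[mask] = fill
    return out
```

Because only CAM-flagged pixels change, the approach avoids the fidelity questions raised by fully synthetic de-identification.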