Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926810
Title: Benign and Malignant Breast Mass Detection and Classification in Digital Mammography: The Effect of Subtracting Temporally Consecutive Mammograms
Kosmia Loizidou, G. Skouroumouni, Gabriella Savvidou, A. Constantinidou, Christos Nikolaou, C. Pitris
Breast cancer remains one of the leading cancers worldwide and is the main cause of death in women with cancer. Effective early-stage diagnosis can reduce breast cancer mortality. Currently, mammography is the most reliable screening method and has significantly decreased the mortality of these malignancies. However, accurate classification of breast abnormalities from mammograms is especially challenging, driving the development of Computer-Aided Diagnosis (CAD) systems. In this work, subtraction of temporally consecutive digital mammograms and machine learning were combined to develop an algorithm for the automatic detection and classification of benign and malignant breast masses. A private dataset was collected specifically for this study: a total of 196 images from 49 patients (two time points and two views of each breast), with precisely annotated mass locations and biopsy-confirmed malignant cases. For the classification, ninety-six features were extracted and five feature selection techniques were combined. Ten classifiers were tested using leave-one-patient-out and 7-fold cross-validation. The classification performance reached 91.7% sensitivity, 89.7% specificity and 90.8% accuracy using Neural Networks, an improvement over state-of-the-art algorithms that utilize sequential mammograms for the classification of benign and malignant breast masses. This work demonstrates the effectiveness of combining the subtraction of temporally sequential digital mammograms with machine learning for the automatic classification of benign and malignant breast masses.
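The general recipe described above can be sketched in a few lines. The following is a minimal illustration, not the authors' pipeline: the temporal-difference features, the ROI convention and the synthetic data are placeholders, and only the leave-one-patient-out evaluation and the neural-network classifier mirror the abstract.

```python
# Minimal sketch (not the authors' exact pipeline): subtract a prior mammogram
# from the current one, extract a few simple intensity features from the mass
# ROI of the difference image, and classify with leave-one-patient-out CV.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

def roi_features(current, prior, roi):
    """Toy feature vector from the temporal difference image (illustrative only)."""
    r0, r1, c0, c1 = roi                      # ROI given as (row0, row1, col0, col1)
    diff = current.astype(float) - prior.astype(float)
    patch = diff[r0:r1, c0:c1]
    return np.array([patch.mean(), patch.std(),
                     np.percentile(patch, 90), np.abs(patch).sum()])

rng = np.random.default_rng(0)
# Synthetic stand-in data: 49 "patients", 4 images each (2 time points x 2 views).
X = np.vstack([roi_features(rng.random((64, 64)), rng.random((64, 64)),
                            (16, 48, 16, 48)) for _ in range(196)])
y = rng.integers(0, 2, size=196)              # 0 = benign, 1 = malignant (synthetic)
groups = np.repeat(np.arange(49), 4)          # keep each patient in one fold

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"leave-one-patient-out accuracy: {scores.mean():.3f}")
```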
{"title":"Benign and Malignant Breast Mass Detection and Classification in Digital Mammography: The Effect of Subtracting Temporally Consecutive Mammograms","authors":"Kosmia Loizidou, G. Skouroumouni, Gabriella Savvidou, A. Constantinidou, Christos Nikolaou, C. Pitris","doi":"10.1109/BHI56158.2022.9926810","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926810","url":null,"abstract":"Breast cancer remains one of the leading cancers worldwide and is the main cause of death in women with cancer. Effective early-stage diagnosis can reduce the mortality rates of breast cancer. Currently, mammography is the most reliable screening method and has significantly decreased the mortality rates of these malignancies. However, accurate classification of breast abnormalities using mammograms is especially challenging, driving the development of Computer-Aided Diagnosis (CAD) systems. In this work, subtraction of temporally consecutive digital mammograms and machine learning were combined, to develop an algorithm for the automatic detection and classification of benign and malignant breast masses. A private dataset was collected specifically for this study. A total of 196 images were gathered, from 49 patients (two time points and two views of each breast), with precisely annotated mass locations and biopsy confirmed malignant cases. For the classification, ninety-six features were extracted and five feature selection techniques were combined. Ten classifiers were tested, using leave-one-patient-out and 7-fold cross-validation. The classification performance reached 91.7% sensitivity, 89.7% specificity and 90.8% accuracy, using Neural Networks, an improvement, compared to the state-of-the-art algorithms that utilized sequential mammograms for the classification of benign and malignant breast masses. This work demonstrates the effectiveness of combining subtraction of temporally sequential digital mammograms, along with machine learning, for the automatic classification of benign and malignant breast masses.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"607 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116452266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926924
Title: Explainable Machine Learning for Vitamin A Deficiency Classification in Schoolchildren
Jayroop Ramesh, Donthi Sankalpa, A. Khamis, A. Sagahyroon, F. Aloul
Vitamin A deficiency is one of the leading causes of visual impairment globally. While blood tests are the common approach in developed countries, various socioeconomic factors and public perceptions render this a challenge in developing countries. In Africa and Southeast Asia, the alarming rise of preventable childhood blindness and delayed growth rates has been dubbed an “epidemic”. With the proliferation of machine learning in clinical support systems and the relative availability of electronic health records, there is potential for early detection and for curbing the progression of ocular complications. In this work, different machine learning methods are applied to a sparse dataset of ocular symptomatology and diagnoses acquired in Maradi, Nigeria, during routine eye examinations conducted in a school setting. The goal is to develop a screening system for Vitamin A deficiency in children that does not require retinol serum blood tests but instead utilizes existing health records. The SVC model achieved the best scores: 75.7% accuracy, 83.7% sensitivity, and 74.9% specificity. Additionally, Shapley values are employed to provide post-hoc clinical explainability (XAI) in terms of the relative feature contributions to each classification decision. This is a vital step towards augmenting domain expert reasoning and ensuring clinical consistency of shallow machine learning models.
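A hedged sketch of the described recipe on synthetic tabular data (the feature set, sample sizes and labels are invented for illustration; only the SVC-plus-Shapley-values pattern follows the abstract):

```python
# Fit an SVC on tabular symptom features, then use SHAP to attribute each
# prediction to the input features. Data here are synthetic placeholders.
import numpy as np
import shap                                   # pip install shap
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))                 # 6 placeholder symptom/exam features
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=1)
model = SVC(probability=True, random_state=1).fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))

# Model-agnostic Shapley values via KernelExplainer (slow but works for any model).
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_tr, 50))
shap_values = explainer.shap_values(X_te[:5])
print("per-feature contribution array shape:", np.asarray(shap_values).shape)
```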
{"title":"Explainable Machine Learning for Vitamin A Deficiency Classification in Schoolchildren","authors":"Jayroop Ramesh, Donthi Sankalpa, A. Khamis, A. Sagahyroon, F. Aloul","doi":"10.1109/BHI56158.2022.9926924","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926924","url":null,"abstract":"Vitamin A deficiency is one of the leading causes of visual impairment globally. While blood tests are common approaches in developed countries, various socioeconomic and public perspectives render this a challenge in developing countries. In Africa and Southeast Asia, the alarming rise of preventable childhood blindness and delayed growth rates has been dubbed as an “epidemic”. With the proliferation of machine learning in clinical support systems and the relative availability of electronic health records, there is the potential promise of early detection, and curbing ocular complication progression. In this work, different machine learning methods are applied to a sparse dataset of ocular symptomatology and diagnoses acquired from Maradi, Nigeria collected during routine eye examinations conducted within a school setting. The goal is to develop a screening system for Vitamin A deficiency in children without requiring retinol serum blood tests, but rather by utilizing existing health records. The SVC model achieved the best scores of accuracy: 75.7%, sensitivity:83.7%, and specificity: 74.9%. Additionally, Shapley values are employed to provide post-hoc clinical explainability (XAI) in terms of relative feature contributions with each classification decision. This is a vital step towards augmenting domain expert reasoning, and ensuring clinical consistency of shallow machine learning models.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123964261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926855
Title: Towards Long-Range Pixels Connection for Context-Aware Semantic Segmentation
Muhammad Zubair Khan, Yugyung Lee, M. Khan, Arslan Munir
Semantic segmentation is one of the challenging tasks in computer vision. Before the advent of deep learning, hand-crafted features were used to semantically extract the region-of-interest (ROI). Deep learning has recently achieved remarkable success in semantic image segmentation. However, previously developed U-Net-inspired architectures rely on repeated stride and pooling operations, leading to spatial data loss, and they lack long-range pixel connections that would preserve contextual knowledge and reduce spatial loss in prediction. This article develops an encoder-decoder architecture with a sequential block embedded in long skip-connections and densely connected convolution blocks. The network non-linearly combines feature maps across the encoder-decoder paths to find dependencies and correlations between image pixels. Additionally, densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applies batch normalization to reduce internal covariate shift in the data distributions. We have used the LUNA, ISIC2018, and DRIVE datasets, reflecting three different segmentation problems (lung nodules, skin lesions, and vessels), to demonstrate the effectiveness of the proposed architecture. The network is also compared with other techniques designed for similar problems. The empirical evidence shows that our method yields promising results compared with other segmentation techniques.
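A minimal PyTorch sketch of the architectural ingredients named above, a long skip-connection, a densely connected block and batch normalization; layer sizes are arbitrary and this is not the paper's network:

```python
# Tiny encoder-decoder with one long skip connection and a dense block at the
# bottleneck (illustrative only).
import torch
import torch.nn as nn

def conv_bn(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all previous feature maps."""
    def __init__(self, ch, growth=16, layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            [conv_bn(ch + i * growth, growth) for i in range(layers)])
        self.out_ch = ch + layers * growth

    def forward(self, x):
        feats = [x]
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

class TinySegNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = conv_bn(1, 32)
        self.down = nn.MaxPool2d(2)
        self.bottleneck = DenseBlock(32)
        self.up = nn.ConvTranspose2d(self.bottleneck.out_ch, 32, 2, stride=2)
        self.dec = conv_bn(64, 32)            # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, 1, 1)       # 1-channel segmentation logits

    def forward(self, x):
        skip = self.enc(x)                    # source of the long skip connection
        x = self.bottleneck(self.down(skip))
        x = self.up(x)
        x = self.dec(torch.cat([x, skip], dim=1))
        return self.head(x)

print(TinySegNet()(torch.randn(1, 1, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```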
{"title":"Towards Long - Range Pixels Connection for Context-Aware Semantic Segmentation","authors":"Muhammad Zubair Khan, Yugyung Lee, M. Khan, Arslan Munir","doi":"10.1109/BHI56158.2022.9926855","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926855","url":null,"abstract":"Semantic segmentation is one of the challenging tasks in computer vision. Before the advent of deep learning, hand-crafted features were used to semantically extract the region-of-interest (ROI). Deep learning has recently achieved enormous response in semantic image segmentation. The previously developed U-Net inspired architectures operate with continuous stride and pooling operations, leading to spatial data loss. Also, the methods lack establishing long-term pixels connection to preserve context knowledge and reduce spatial loss in prediction. This article developed encoder-decoder architecture with a sequential block embedded in long skip-connections and densely connected convolution blocks. The network non-linearly combines the feature maps across encoder-decoder paths for finding dependency and correlation between image pixels. Additionally, the densely connected convolutional blocks are kept in the final encoding layer to reuse features and prevent redundant data sharing. The method applied batch-normalization to reduce internal covariate shift in data distributions. We have used LUNA, ISIC2018, and DRIVE datasets to reflect three different segmentation problems (lung nodules, skin lesions, vessels) and claim the effectiveness of the proposed architecture. The network is also compared with other techniques designed to highlight similar problems. It is found through empirical evidence that our method shows promising results when compared with other segmentation techniques.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"2003 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132998081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926782
Title: Deep Learning based Automated Screening for Intracranial Hemorrhages and GRAD-CAM Visualizations on Non-Contrast Head Computed Tomography Volumes
Pon Deepika, Prasad Sistla, G. Subramaniam, M. Rao
Intracranial hemorrhage is a serious medical emergency which requires immediate medical attention. With most countries facing an acute shortage of radiologists, it is important to develop an automated system that analyses radiographic images and prioritises cases requiring urgent medical attention. In this context, there have been past attempts to apply deep learning (DL) techniques to head Computed Tomography (CT) slices to detect hemorrhage, where annotation effort is spent on individual slices of the CT volume to build a model. Our work aims to develop a robust model for annotated CT volume datasets that does not require slice-level information on the presence of hemorrhage, so that the annotation effort can be cut down substantially. A novel DL pipeline architecture based on the combination of a convolutional neural network (CNN) and a bi-directional long short-term memory (biLSTM) network is introduced to capture both intra- and inter-slice features for diagnosing hemorrhage from non-contrast head CT volumes. The proposed model achieved a high accuracy of 98.15%, specificity of 1, sensitivity of 0.96 and F1 score of 0.98, with a 95.3% reduction in the labelling effort of radiologists. The performance scores are nevertheless comparable to those achieved by state-of-the-art models trained on CT volumes with slice-wise annotations for intracranial hemorrhage detection. Additionally, a novel contribution is the integration of Gradient-weighted Class Activation Mapping (GRAD-CAM) visualization into the system, offering visual explanations for the decisions made and providing supplementary information that strongly supports radiologists in the clinical evaluation stage. The novel system is a first step towards building a robust autonomous assistive technology for radiologists, and it points toward similar pipelined DL architectures for extracting information on other neurological disorders from non-contrast head CT volumes.
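A hedged PyTorch sketch of the volume-level idea: a small CNN encodes each slice and a bidirectional LSTM aggregates the slice sequence under a single volume-level label. All layer sizes and the pooling choice are assumptions, not the authors' architecture:

```python
# Slice-level CNN features + bidirectional LSTM over the slice sequence,
# supervised only by a volume-level label (illustrative dimensions).
import torch
import torch.nn as nn

class SliceEncoder(nn.Module):
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):                     # x: (n_slices, 1, H, W)
        return self.fc(self.net(x).flatten(1))

class VolumeClassifier(nn.Module):
    def __init__(self, feat_dim=64, hidden=64):
        super().__init__()
        self.encoder = SliceEncoder(feat_dim)
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)  # single logit: hemorrhage vs. not

    def forward(self, volume):                # volume: (batch, n_slices, 1, H, W)
        b, s = volume.shape[:2]
        feats = self.encoder(volume.view(b * s, *volume.shape[2:])).view(b, s, -1)
        seq, _ = self.lstm(feats)             # inter-slice context in both directions
        return self.head(seq.mean(dim=1))     # pool over slices -> volume logit

logit = VolumeClassifier()(torch.randn(1, 30, 1, 128, 128))
print(torch.sigmoid(logit))                   # probability of hemorrhage (untrained)
```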
{"title":"Deep Learning based Automated Screening for Intracranial Hemorrhages and GRAD-CAM Visualizations on Non-Contrast Head Computed Tomography Volumes","authors":"Pon Deepika, Prasad Sistla, G. Subramaniam, M. Rao","doi":"10.1109/BHI56158.2022.9926782","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926782","url":null,"abstract":"Intracranial Hemorrhage is a serious medical emer-gency which requires immediate medical attention. With most of the countries facing acute shortage of radiologists, it is important to develop an automated system which analyses the radiographic images and prioritise cases that require urgent medical attention. In this context, there has been attempts to apply deep learning (DL) techniques to the Head Computed Tomography (CT) slices to detect hemorrhage adequately in the past, where annotation effort is spent for individual slices of the CT volume for building a model. Our work aims to develop a robust model for the annotated CT volume dataset, which does not require slice level information for the presence of hemorrhage so that the annotation effort could be cut down substantially. A novel DL pipeline architecture based on the combination of convolutional neural network (CNN) and bi-directional long-short-term-memory (biLSTM) to capture both intra and inter slice level features for diagnosing hemorrhage from the non-contrast head CT volumes is introduced. The proposed model achieved a high accuracy score of 98.15 %, specificity of 1, sensitivity of 0.96 and F1 score of 0.98 with 95.3 % mitigation in the labelling effort of radiologists. However the performance scores are very well comparable to the scores achieved by the state-of-the-art models trained over the CT Volumes with slice wise annotation pertaining to intracranial hemorrhage detection. Additionally, the novel contribution is in integrating Gradient-weighted Class Activation Mapping (GRAD-CAM) visualization to the system, to offer visual explanations for the decisions made and provide supplementary information forming a strong advocate to radiologists in the clinical evaluation stage. The novel system is a first step towards building a robust autonomous assistive technology for radiologists, and leads to develop similar pipelined DL architecture for extracting information pertaining to other neurological disorders from Non-Contrast Head CT volumes.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"209 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115642690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926952
Title: Explainable machine learning analysis of longitudinal mental health trajectories after breast cancer diagnosis
E. Mylona, Konstantina Kourou, Georgios C. Manikis, H. Kondylakis, E. Karademas, K. Marias, K. Mazzocco, P. Poikonen-Saksela, R. Pat-Horenczyk, B. Sousa, P. Simos, D. Fotiadis
Mental health impairment after breast cancer (BC) diagnosis may persist for months or years. The present work leverages novel machine learning techniques to identify distinct trajectories of mental health progression over an 18-month period following BC diagnosis and to develop an explainable predictive model of mental health progression using a large list of clinical, sociodemographic and psychological variables. The modelling process was conducted in two phases. The first modeling step applied unsupervised clustering, by means of a longitudinal K-means algorithm, to define the number of trajectory clusters. In the second modeling step, an explainable ML framework was developed on the basis of an Extreme Gradient Boosting (XGBoost) model and SHAP values, in order to identify the most prominent variables that can discriminate between good and unfavorable mental health progression and to explain how they contribute to the model's decisions. The trajectory analysis revealed 5 distinct trajectory groups, with the majority of patients following stable good (56%) or improving (21%) trends, while for the others mental health levels either deteriorated (12%) or remained at unsatisfactory levels (11%). The model's performance for classifying patient mental health into good and unfavorable progression achieved an AUC of $0.82 \pm 0.04$. The top-ranking predictors driving the classification task were a higher number of sick leave days, an aggressive cancer type (triple-negative), and higher levels of negative affect, anxious preoccupation, helplessness, and arm and breast symptoms, as well as lower values of optimism, social and emotional support, and lower age.
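A hedged sketch of the two-step recipe on synthetic data: plain K-means on the stacked time points stands in for the paper's longitudinal K-means, and an XGBoost classifier with a SHAP TreeExplainer illustrates the explainability step. The cluster-to-label rule and all variables are invented for illustration:

```python
# Step 1: cluster trajectories; Step 2: explain a good-vs-unfavorable classifier.
import numpy as np
import shap                                   # pip install shap xgboost
from sklearn.cluster import KMeans
from xgboost import XGBClassifier

rng = np.random.default_rng(2)
n, t = 400, 4                                 # patients x assessment time points
trajectories = np.cumsum(rng.normal(size=(n, t)), axis=1)   # synthetic scores
X_baseline = rng.normal(size=(n, 8))          # clinical/psychological predictors

# Step 1: group trajectories into a small number of progression patterns.
clusters = KMeans(n_clusters=5, n_init=10, random_state=2).fit_predict(trajectories)
final_mean = np.array([trajectories[clusters == c, -1].mean() for c in range(5)])
good_clusters = np.argsort(final_mean)[-2:]   # highest final score = "good" (toy rule)
good = np.isin(clusters, good_clusters).astype(int)

# Step 2: predict good vs. unfavorable progression from baseline variables and
# attribute each prediction to individual features.
model = XGBClassifier(n_estimators=100, max_depth=3, eval_metric="logloss")
model.fit(X_baseline, good)
shap_values = shap.TreeExplainer(model).shap_values(X_baseline)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0).round(3))
```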
{"title":"Explainable machine learning analysis of longitudinal mental health trajectories after breast cancer diagnosis","authors":"E. Mylona, Konstantina Kourou, Georgios C. Manikis, H. Kondylakis, E. Karademas, K. Marias, K. Mazzocco, P. Poikonen-Saksela, R. Pat-Horenczyk, B. Sousa, P. Simos, D. Fotiadis","doi":"10.1109/BHI56158.2022.9926952","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926952","url":null,"abstract":"Mental health impairment after breast cancer diagnosis may persist for months or years. The present work leverages on novel machine learning techniques to identify distinct trajectories of mental health progression in an 18-month period following BC diagnosis and develop an explainable predictive model of mental health progression using a large list of clinical, sociodemographic and psychological variables. The modelling process was conducted in two phases. The first modeling step included an unsupervised clustering to define the number of trajectory clusters, by means of a longitudinal K-means algorithm. In the second modeling step an explainable ML framework was developed, on the basis of Extreme Gradient Boosting (XGBoost) model and SHAP values, in order to identify the most prominent variables that can discriminate between good and unfavorable mental health progression and to explain how they contribute to model's decisions. The trajectory analysis revealed 5 distinct trajectory groups with the majority of patients following stable good (56%) or improving (21%) trends, while for others mental health levels either deteriorated (12%) or remained at unsatisfactory levels (11%). The model's performance for classifying patient mental health into good and unfavorable progression achieved an AUC of $0.82pm 0.04$. The top ranking predictors driving the classification task were the higher number of sick leave days, aggressive cancer type (triple-negative) and higher levels of negative affect, anxious preoccupation, helplessness, arm and breast symptoms, as well as lower values of optimism, social and emotional support and lower age.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121231062","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926854
Title: BEBOP: Bidirectional dEep Brain cOnnectivity maPping
Riccardo Asnaghi, L. Clementi, M. Santambrogio
Functional connectivity mapping provides information about correlated brain areas, which is useful for many applications such as the study of mental disorders. This work aims to improve this mapping by using deep metric learning that considers the directionality of information flow and time-domain features. To deal with the computational cost of a complete pairwise combination network, we trained a network able to recognize similar signals and, after training, fed it with all combinations of signals from each brain area. The labels of similarity or dissimilarity are determined by agglomerative clustering using the Jensen-Shannon distance as a metric. To validate our approach we employed a resting-state, eyes-open functional MRI dataset from ADHD and healthy subjects. Once registered, the signals are filtered and averaged by area with a functional trimmed mean. After obtaining the connectivity maps for each subject, we perform feature importance selection using logistic regression. The ten most promising areas, such as the frontal cortex and the limbic system, were extracted. These results are in complete agreement with previous literature: those areas are well known to be mainly involved in attention and impulsivity.
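The similarity-labeling step can be sketched as follows on synthetic signals (not the fMRI data): each regional time series is turned into a histogram, pairwise Jensen-Shannon distances are computed, and agglomerative clustering decides which pairs count as similar. The histogram binning and cluster count are assumptions; the `metric="precomputed"` argument requires scikit-learn >= 1.2:

```python
# Jensen-Shannon distances between signal distributions + agglomerative clustering
# to produce "similar / dissimilar" pair labels for a metric-learning network.
import numpy as np
from scipy.spatial.distance import jensenshannon
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(3)
signals = rng.normal(size=(20, 200))          # 20 toy brain-area signals, 200 samples

def to_distribution(sig, bins=32):
    hist, _ = np.histogram(sig, bins=bins, density=True)
    hist = hist + 1e-12                       # avoid zero bins
    return hist / hist.sum()

dists = np.array([to_distribution(s) for s in signals])
n = len(dists)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = jensenshannon(dists[i], dists[j])

labels = AgglomerativeClustering(n_clusters=4, metric="precomputed",
                                 linkage="average").fit_predict(D)
# Pairs within the same cluster become "similar" training pairs for the metric net.
similar_pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
                 if labels[i] == labels[j]]
print(len(similar_pairs), "similar pairs /", n * (n - 1) // 2, "total pairs")
```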
{"title":"BEBOP: Bidirectional dEep Brain cOnnectivity maPping","authors":"Riccardo Asnaghi, L. Clementi, M. Santambrogio","doi":"10.1109/BHI56158.2022.9926854","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926854","url":null,"abstract":"Functional connectivity mapping provides information about correlated brain areas, useful for many applications such as on mental disorders. This work aims to improve this mapping by using deep metric learning considering the directionality of information flow and time-domain features. To deal with the computational cost of a complete pairwise combination network, we trained a network able to recognize similar signals and, after training, feed it with all combinations of signals from each brain area. The labels of similarity or dissimilarity are determined by agglomerative clustering using the Jensen-Shannon Distance as a metric. To validate our approach we employed a resting-state eye-open functional MRI dataset from ADHD and healthy subjects. Once registered, the signals are filtered and averaged by area with a functional trimmed mean. After obtaining the connectivity maps from each subject, we perform a feature importance selection using logistic regression. The ten most promising areas were extracted, such as the frontal cortex and the limbic system. These results are in complete agreement with previous literature. It is well known those areas are mainly involved in attention and impulsivity.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"26 2","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113955569","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926825
Title: MRI vs. US 3D computational models of carotid arteries: a proof-of-concept study
P. Siogkas, V. Tsakanikas, A. Sakellarios, Vassiliki T. Potsika, G. Galyfos, F. Sigala, Smiljana Tomasevic, T. Djukić, Nenad D Filipović, I. Končar, D. Fotiadis
The progression of atherosclerotic carotid plaque causes a gradual stenosis of the arterial lumen, which might result in catastrophic plaque rupture leading to thromboembolism and stroke. Carotid artery disease is the main cause of ischemic stroke in the EU, intensifying the need for tools for risk stratification and patient management in carotid artery disease. In this work, we present a comparative study between ultrasound-based (US-based) and MRI-based 3D carotid artery models to investigate whether US-based models can be used to assess the hemodynamic status of the carotid vasculature, compared with the respective MRI-based models, which are considered the most realistic representation of the carotid vasculature. In-house developed algorithms were used to reconstruct the carotid vasculature in 3D. Our work revealed a promising similarity between the two reconstruction methods in terms of geometrical parameters, such as cross-sectional areas and centerline lengths, as well as simulated hemodynamic parameters, such as peak time-averaged wall shear stress (WSS) values and areas of low WSS, which are crucial for the hemodynamic status of the cerebral vasculature. The aforementioned findings therefore establish carotid US as a possible MRI surrogate for the initial carotid artery disease assessment in terms of plaque evolution and possible plaque destabilization.
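The comparison step, stripped of any CFD, can be illustrated with synthetic per-element values: peak time-averaged WSS and the fraction of surface area below a low-WSS threshold are compared between the two models. The 1 Pa threshold and all numbers are illustrative assumptions, not the paper's values:

```python
# Compare summary hemodynamic metrics between two reconstructions, given
# per-surface-element time-averaged WSS values and element areas (synthetic here).
import numpy as np

rng = np.random.default_rng(4)

def summarize(tawss, areas, low_thr=1.0):
    low = tawss < low_thr                     # "low WSS" region (illustrative threshold)
    return {"peak_tawss": float(tawss.max()),
            "low_wss_area_fraction": float(areas[low].sum() / areas.sum())}

mri_tawss, mri_area = rng.gamma(2.0, 1.5, 5000), rng.random(5000)
us_tawss,  us_area  = rng.gamma(2.0, 1.6, 5000), rng.random(5000)

print("MRI model:", summarize(mri_tawss, mri_area))
print("US model: ", summarize(us_tawss, us_area))
```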
{"title":"MRI vs. US 3D computational models of carotid arteries: a proof-of-concept study","authors":"P. Siogkas, V. Tsakanikas, A. Sakellarios, Vassiliki T. Potsika, G. Galyfos, F. Sigala, Smiljana Tomasevic, T. Djukić, Nenad D Filipović, I. Končar, D. Fotiadis","doi":"10.1109/BHI56158.2022.9926825","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926825","url":null,"abstract":"The progression of atherosclerotic carotid plaque causes a gradual stenosis in the arterial lumen which might result to catastrophic plaque rupture ending to thromboembolism and stroke. Carotid artery disease is the main cause for ischemic stroke in the EU, thus intensifying the need of the development of tools for risk stratification and patient management in carotid artery disease. In this work, we present a comparative study between ultrasound-based and MRI-based 3D carotid artery models to investigate if US-based models can be used to assess the hemodynamic status of the carotid vasculature compared with the respective MRI-based models which are considered as the most realistic representation of the carotid vasculature. In-house developed algorithms were used to reconstruct the carotid vasculature in 3D. Our work revealed a promising similarity between the two methods of reconstruction in terms of geometrical parameters such as cross-sectional areas and centerline lengths, as well as simulated hemodynamic parameters such as peak Time-Averaged WSS values and areas of low WSS values which are crucial for the hemodynamic status of the cerebral vasculature. The aforementioned findings, therefore, constitute carotid US a possible MRI surrogate for the initial carotid artery disease assessment in terms of plaque evolution and possible plaque destabilization.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122842249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926813
Title: What PLS can still do for Imaging Genetics in Alzheimer's disease
F. Cruciani, A. Altmann, Marco Lorenzi, G. Menegaz, I. Galazzo
In this work we exploited a Partial Least Squares (PLS) model for analyzing the genetic underpinnings of grey matter atrophy in Alzheimer's Disease (AD). To this end, 42 features derived from T1-weighted Magnetic Resonance Imaging, including cortical thicknesses and subcortical volumes, were considered to describe the imaging phenotype, while the genotype information consisted of 14 recently proposed AD-related Polygenic Risk Scores (PRS), calculated by including Single Nucleotide Polymorphisms passing different significance thresholds. The PLS model was applied to a large study cohort obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database, including both healthy individuals and AD patients, and validated on an independent ADNI Mild Cognitive Impairment (MCI) cohort, including Early (EMCI) and Late MCI (LMCI). The experimental results confirm the existence of a joint dynamic between brain atrophy and genotype data in AD, while providing important generalization results when tested on a clinically heterogeneous cohort. In particular, less AD-specific PRSs were negatively correlated with cortical thicknesses, while highly AD-specific PRSs showed a peculiar correlation pattern among specific subcortical volumes and cortical thicknesses. While the first outcome is in line with the well-known neurodegeneration process in AD, the second could reveal different AD subtypes.
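A minimal sketch of the PLS idea on synthetic data (not ADNI): two data blocks sharing a latent factor stand in for the imaging and PRS matrices, and scikit-learn's PLSCanonical recovers components whose imaging and genetic scores are correlated:

```python
# Couple an "imaging" block (42 measures) with a "genetic" block (14 scores)
# through latent PLS components; data are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(5)
n = 500
latent = rng.normal(size=(n, 1))              # shared atrophy-genetics factor
X_img = 0.8 * latent + rng.normal(size=(n, 42))   # imaging phenotype block
Y_prs = 0.5 * latent + rng.normal(size=(n, 14))   # polygenic risk score block

pls = PLSCanonical(n_components=2).fit(X_img, Y_prs)
x_scores, y_scores = pls.transform(X_img, Y_prs)
for k in range(2):
    r = np.corrcoef(x_scores[:, k], y_scores[:, k])[0, 1]
    print(f"component {k}: imaging-genetics score correlation = {r:.2f}")
# pls.x_loadings_ / pls.y_loadings_ indicate which measures drive each component.
```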
{"title":"What PLS can still do for Imaging Genetics in Alzheimer's disease","authors":"F. Cruciani, A. Altmann, Marco Lorenzi, G. Menegaz, I. Galazzo","doi":"10.1109/BHI56158.2022.9926813","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926813","url":null,"abstract":"In this work we exploited Partial Least Squares (PLS) model for analyzing the genetic underpinning of grey matter atrophy in Alzheimer's Disease (AD). To this end, 42 features derived from T1-weighted Magnetic Resonance Imaging, including cortical thicknesses and subcortical volumes were considered to describe the imaging phenotype, while the genotype information consisted of 14 recently proposed AD related Polygenic Risk Scores (PRS), calculated by including Single Nucleotide Polymorphism passing different significance thresholds. The PLS model was applied on a large study cohort obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database including both healthy individuals and AD patients, and validated on an independent ADNI Mild Cognitive Impairment (MCI) cohort, including Early (EMCI) and Late MCI (LMCI). The experimental results confirm the existence of a joint dynamics between brain atrophy and genotype data in AD, while providing important generalization results when tested on a clinically heterogeneous cohort. In particular, less AD specific PRS scores were negatively correlated with cortical thicknesses, while highly AD specific PRSs showed a peculiar correlation pattern among specific subcortical volumes and cortical thicknesses. While the first outcome is in line with the well known neurodegeneration process in AD, the second could be revealing of different AD subtypes.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131899808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926795
Title: Investigating Graph-based Features for Speech Emotion Recognition
A. Pentari, George P. Kafentzis, M. Tsiknakis
During the last decades, automatic speech emotion recognition (SER) has gained increasing interest from the research community. Specifically, SER aims to recognize the emotional state of a speaker directly from a speech recording. The most prominent approaches in the literature extract features from speech signals in the time and/or frequency domain, which are subsequently fed into a classification scheme. In this paper, we propose to exploit graph theory and graph structures as alternative forms of speech representation. We apply the so-called Visibility Graph (VG) theory to represent speech data as an adjacency matrix and extract well-known graph-based features from the latter. Finally, these features are fed into a Support Vector Machine (SVM) classifier in a leave-one-speaker-out, multi-class fashion. Our proposed feature set is compared with a well-known acoustic feature set, the Geneva Minimalistic Acoustic Parameter Set (GeMAPS). We test both approaches on two publicly available speech datasets, SAVEE and EMOVO. The experimental results show that the proposed graph-based features provide better results, namely classification accuracies of 70% and 98%, respectively, an increase of 29.2% and 60.6%, respectively, compared to GeMAPS.
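The visibility-graph idea is easy to sketch: map a 1-D series to its natural visibility graph, read off simple graph statistics, and feed them to an SVM. The signals, labels and particular feature choices below are toy placeholders, not the paper's feature set or evaluation protocol:

```python
# Natural visibility graph of a signal + simple graph features + SVC.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

def visibility_graph(x):
    """Natural visibility graph of a 1-D series (O(n^2) reference version)."""
    n = len(x)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for a in range(n):
        for b in range(a + 1, n):
            # a and b are connected if every intermediate sample lies below the
            # straight line joining (a, x[a]) and (b, x[b]).
            if all(x[c] < x[b] + (x[a] - x[b]) * (b - c) / (b - a)
                   for c in range(a + 1, b)):
                g.add_edge(a, b)
    return g

def graph_features(x):
    g = visibility_graph(x)
    degrees = np.array([d for _, d in g.degree()])
    return np.array([nx.density(g), nx.average_clustering(g),
                     degrees.mean(), degrees.max()])

rng = np.random.default_rng(6)
signals = rng.normal(size=(40, 120))          # 40 toy "utterance frames"
labels = rng.integers(0, 2, size=40)          # placeholder emotion labels
X = np.vstack([graph_features(s) for s in signals])
clf = SVC().fit(X, labels)                    # the paper uses leave-one-speaker-out
print("training accuracy on toy data:", clf.score(X, labels))
```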
{"title":"Investigating Graph-based Features for Speech Emotion Recognition","authors":"A. Pentari, George P. Kafentzis, M. Tsiknakis","doi":"10.1109/BHI56158.2022.9926795","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926795","url":null,"abstract":"During the last decades, automatic speech emotion recognition (SER) has gained an increased interest by the research community. Specifically, SER aims to recognize the emotional state of a speaker directly from a speech recording. The most prominent approaches in the literature include feature extraction of speech signals in time and/or frequency domain that are successively applied as input into a classification scheme. In this paper, we propose to exploit graph theory and structures as alternative forms of speech representations. We suggest applying the so-called Visibility Graph (VG) theory to represent speech data using an adjacency matrix and extract well-known graph-based features from the latter. Finally, these features are fed into a Support Vector Machine (SVM) classifier in a leave-one-speaker-out, multi-class fashion. Our proposed feature set is compared with a well-known acoustic feature set named the Geneva Minimalistic Acoustic Parameter Set (GeMAPS). We test both approaches on two publicly available speech datasets: SAVEE and EMOVO. The experimental results show that the proposed graph-based features provide better results, namely a classification accuracy of 70% and 98%, respectively, yielding an increase by 29.2% and 60.6%, respectively, when compared to GeMAPS.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133368678","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2022-09-27 | DOI: 10.1109/BHI56158.2022.9926849
Title: Path Generation with Reinforcement Learning for Surgical Robot Control
Junhong Chen, Zeyu Wang, Ruiqi Zhu, Rui Zhang, Weibang Bai, Benny P. L. Lo
In the field of robotic surgery, Robot-Assisted Minimally Invasive Surgery (RAMIS) has shown great potential to benefit both surgeons and patients over the past few decades of research and practice. The current trend in RAMIS is towards a higher level of autonomy in carrying out surgical tasks. However, most real RAMIS tasks still rely on manual control, so performance mostly depends on the dexterity of the surgeon; fatigue or small errors, especially for high-workload surgeons, could cause life-threatening harm to patients. Since corrections and errors are inevitable in manual control, the actual tool paths in real operations often deviate from the ideal trajectories. For robot Learning from Demonstrations (LfD), these sub-optimal paths would eventually affect the robot's learning performance. Therefore, much research explores enhancing the quality of robot-generated instrument tool paths while reducing the reliance on manual manipulation demonstrations in surgical robot learning. In this paper, both Reinforcement Learning and Learning from Demonstration are used to generate a smooth moving trajectory without the use of manual robotic control kinematics data. Two tasks, peg transfer and pattern cutting, were chosen to verify the performance. The method was trained and validated in simulation, namely the Asynchronous Multi-Body Framework (AMBF) and MoveIt. The da Vinci Research Kit was then used to validate real-world performance. The results have shown that this path generation framework can automate the given repetitive surgical tasks and could potentially be adapted to other surgical tasks.
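As a toy illustration of reinforcement-learning-based path generation only (a 2-D grid world, nothing like the AMBF or dVRK setup and not the paper's method), tabular Q-learning can learn a short path to a goal without any demonstration kinematics:

```python
# Tabular Q-learning on a small grid: the agent learns a path from start to goal
# using only reward feedback (step penalty keeps the generated path short).
import numpy as np

rng = np.random.default_rng(7)
size, goal = 8, (7, 7)
actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
Q = np.zeros((size, size, len(actions)))

def step(state, a):
    r, c = state
    dr, dc = actions[a]
    nr, nc = min(max(r + dr, 0), size - 1), min(max(c + dc, 0), size - 1)
    reward = 10.0 if (nr, nc) == goal else -0.1
    return (nr, nc), reward, (nr, nc) == goal

for episode in range(2000):
    state, eps = (0, 0), max(0.05, 1.0 - episode / 1500)   # decaying exploration
    for _ in range(200):
        a = rng.integers(4) if rng.random() < eps else int(np.argmax(Q[state]))
        nxt, reward, done = step(state, a)
        Q[state][a] += 0.1 * (reward + 0.95 * Q[nxt].max() - Q[state][a])
        state = nxt
        if done:
            break

# Greedy rollout of the learned policy yields the generated path.
state, path = (0, 0), [(0, 0)]
while state != goal and len(path) < 50:
    state, _, _ = step(state, int(np.argmax(Q[state])))
    path.append(state)
print(path)
```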
{"title":"Path Generation with Reinforcement Learning for Surgical Robot Control","authors":"Junhong Chen, Zeyu Wang, Ruiqi Zhu, Rui Zhang, Weibang Bai, Benny P. L. Lo","doi":"10.1109/BHI56158.2022.9926849","DOIUrl":"https://doi.org/10.1109/BHI56158.2022.9926849","url":null,"abstract":"In the field of robotic surgery, Robot-Assisted Minimally Invasive Surgery(RAMIS) has shown its great potential of benefiting both surgeons and patients in the past few decades of research and practice. The current trend of RAMIS targets towards a higher level of autonomy in carrying out surgical tasks. However, most real RAMIS tasks still rely on manual control, thus the performance mostly depends on the dexterity of the surgeon. Their fatigue or small errors could cause life-threatening damages to the patients, especially high-workload surgeons. Since corrections and errors are inevitable in manual control, the actual tool paths in real operations are often deviated from ideal trajectories. For robot Learning from Demonstrations(LfD), these sub-optimal paths would eventually affect the robot's learning performance. Therefore, much research is being explored in enhancing the performance of robot-generated instrument tool paths and at the same time reducing the reliance on manual manipulation demonstrations in surgical robot learning. In this paper, both Reinforcement Learning and Learning from Demonstration are used to generate a smooth moving trajectory without the use of manual robotic control kinematics data. Two tasks, peg transfer and pattern cutting, were chosen to verify the performance. The method was trained and validated in simulations, namely Asynchronous Multi-Body Framework (AMBF) and Moveit. Then da Vinci Research Kit is used to validate the real case performance. The results have shown that this path generation framework could automate given repetitive surgical tasks, and potentially adapted to other surgical tasks.","PeriodicalId":347210,"journal":{"name":"2022 IEEE-EMBS International Conference on Biomedical and Health Informatics (BHI)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116945959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}