Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00023
Ghada Alhussein, M. Alkhodari, Ahsan Khandokher, L. Hadjileontiadis
Emotions play a pivotal role in an individual's overall physical health; hence, interest in emotion recognition in conversation (ERC) has grown steadily. In this work, we propose bidirectional long short-term memory (Bi-LSTM), convolutional neural network (CNN), and CNN-BiLSTM based models to predict the emotional climate that peers establish during a conversation. The peers' speech signals are analyzed using Mel-frequency cepstral coefficients (MFCCs), which are then fed to the Bi-LSTM, CNN, and CNN-BiLSTM models to predict the valence and arousal emotional climate cues. The proposed approach was tested on a publicly available dataset, namely K-EmoCon, which includes emotion labels and the peers' speech signals during their conversation. The obtained results show that the Bi-LSTM, CNN, and CNN-BiLSTM models achieved classification accuracies (arousal/valence) of 67.5%/57.7%, 73.3%/66.9%, and 75.1%/68.3%, respectively. These encouraging results show that a combination of deep learning schemes can increase classification accuracy and provide efficient emotional climate recognition in naturalistic conversational environments.
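The front end described here (MFCC extraction ahead of the recurrent/convolutional models) can be sketched in plain numpy. The paper's exact frame, hop, and filterbank settings are not given, so the values below (16 kHz audio, 25 ms frames, 10 ms hop, 26 mel filters, 13 coefficients) are common defaults, not the authors':

```python
import numpy as np

def mfcc_like(signal, sr=16000, frame_len=400, hop=160, n_mels=26, n_coeffs=13):
    """MFCC-style features: framing -> power spectrum -> mel filterbank -> log -> DCT-II."""
    # Slice the signal into overlapping, Hamming-windowed frames
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hamming(frame_len)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # Triangular mel filterbank between 0 Hz and Nyquist
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel2hz(np.linspace(hz2mel(0.0), hz2mel(sr / 2), n_mels + 2))
    bins = np.floor(frame_len * mel_pts / sr).astype(int)
    fbank = np.zeros((n_mels, power.shape[1]))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fbank[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fbank[i, c:r] = (r - np.arange(c, r)) / (r - c)

    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II decorrelates the log-mel energies; keep the first n_coeffs coefficients
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_mels))
    return log_mel @ dct.T
```

One second of 16 kHz audio yields a (98, 13) feature matrix: a 2-D map a CNN can convolve over, or a 98-step sequence of 13-dimensional vectors for a Bi-LSTM.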
Title: Emotional Climate Recognition in Interactive Conversational Speech Using Deep Learning
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00014
Juan F. Arias
This paper presents an analysis of how data collected from wearables can lead to improvements in our health. Correlating data from different sources can identify the factors that have a negative or positive impact on our health, allowing us to make changes accordingly. The ultimate goal is to create personalized recommendations about actions to take to improve sleep.
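As a toy illustration of the correlation idea, with hypothetical daily logs (step counts, evening screen time, sleep duration) standing in for real wearable data, not any dataset from the paper:

```python
import numpy as np

# Hypothetical daily logs from two sources: step counts (activity tracker)
# and evening screen time (phone), correlated against sleep duration.
rng = np.random.default_rng(0)
days = 60
steps = rng.normal(8000, 2000, days)
screen_hours = rng.normal(2.0, 0.5, days)
# Synthetic ground truth: more steps help sleep, more screen time hurts it
sleep_hours = (6.5 + 0.0002 * (steps - 8000)
               - 0.8 * (screen_hours - 2.0)
               + rng.normal(0, 0.2, days))

# Pearson correlation of each factor against sleep duration
r_steps = np.corrcoef(steps, sleep_hours)[0, 1]
r_screen = np.corrcoef(screen_hours, sleep_hours)[0, 1]
```

The signs of the coefficients recover which factors helped and which hurt, which is exactly the information a recommendation engine would act on.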
Title: Using Data from Wearables for Better Sleep
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00026
Mirko Rossi, G. D'Avenio, G. Rosa, G. Ferraro, P. Mancini, C. Veneri, M. Iaconelli, L. Lucentini, L. Bonadonna, Mario Cerroni, F. Simonetti, D. Brandtner, E. Suffredini, M. Grigioni
The presence of SARS-CoV-2 RNA in wastewater was demonstrated early in the COVID-19 pandemic. Data on the presence of SARS-CoV-2 in urban wastewater can be exploited for different aims, including: i) describing outbreak trends, ii) providing an early warning system for new COVID-19 outbreaks or for the spread of the virus into new territories, iii) studying SARS-CoV-2 genetic diversity and detecting its variants, and iv) estimating the prevalence of COVID-19 infections. Therefore, wastewater surveillance (known as Wastewater-Based Epidemiology, WBE) can be a powerful tool to support decision-making on public health measures. Italy was among the first EU countries to investigate the occurrence and concentration of SARS-CoV-2 RNA in urban wastewater, with virus detection accomplished at an early phase of the epidemic, between February and May 2020, in northern and central Italy. The present study reports on the methodological issues, related to sample data collection and management, encountered in establishing systematic wastewater-based SARS-CoV-2 surveillance, and describes the results of the first six months of surveillance.
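One of the listed aims, the early warning system (aim ii), can be illustrated with a minimal threshold rule on a viral-load time series; the 7-day window and 2x factor below are illustrative choices, not the protocol actually used in the Italian surveillance:

```python
import numpy as np

def early_warning(viral_load, window=7, factor=2.0):
    """Flag days where the measured load exceeds `factor` times the trailing moving average."""
    alerts = []
    for t in range(window, len(viral_load)):
        baseline = viral_load[t - window:t].mean()
        if baseline > 0 and viral_load[t] > factor * baseline:
            alerts.append(t)
    return alerts

# Flat baseline followed by a simulated surge (gene copies, arbitrary units)
load = np.array([5.0] * 14 + [6, 8, 12, 20, 35], dtype=float)
alerts = early_warning(load)  # days 16-18 exceed twice their trailing average
```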
Title: Surveillance of SARS-CoV-2 in Urban Wastewater in Italy
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00037
Chunpeng Wu, Che-Lun Hung, Teng‐Yu Lee, Chun-Ying Wu, William C. Chu
Liver cancer is mainly caused by hepatitis B and C virus infection. In recent years, the prevalence of hepatitis B and C has been greatly reduced, while, with poor lifestyle and eating habits, the prevalence of fatty liver disease has increased. Fatty liver disease may gradually replace viral hepatitis as the leading cause of liver cancer. Ultrasound images are usually the primary checkpoint in the clinical examination of fatty liver. This study applied a deep learning image segmentation model and image texture feature analysis: texture features were first extracted from ultrasound images, and model training was then performed on these features to achieve an objective clinical diagnosis. The ultrasound (US) images used in this study were collected on an ultrasound machine at a public medical center; US images and FibroScan liver fibrosis measurements were collected from 235 patients. For the classification and diagnosis of fatty liver severity, this study is divided into two parts. First, the patients' ultrasound image data are used for segmentation model training and texture feature extraction. Second, the texture feature values are compared against the corresponding controlled attenuation parameter (CAP) results for training and validation of the fatty liver severity classification model. The experimental results show that the proposed model can predict fatty liver disease on a specific instrument and achieves an area under the curve above 0.8.
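The abstract does not specify which texture features were used; a common choice for ultrasound texture analysis is grey-level co-occurrence matrix (GLCM) statistics, sketched minimally below as an assumption, not the paper's feature set:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one pixel offset, normalized to probabilities."""
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum of (i - j)^2 weighted by co-occurrence probability."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

smooth = np.zeros((16, 16), dtype=int)                      # uniform texture
noisy = np.random.default_rng(1).integers(0, 8, (16, 16))   # speckle-like texture
```

A perfectly uniform patch has zero contrast, while speckled tissue scores higher; features like this, computed per region, feed the downstream severity classifier.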
Title: Fatty Liver Diagnosis Using Deep Learning in Ultrasound Image
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00021
Enrico Martini, Nicola Valè, Michele Boldo, Anna Righetti, N. Smania, N. Bombieri
Assessing upper limb (UL) movements post-stroke is crucial to monitor and understand sensorimotor recovery. Recently, several research works have focused on the relationship between reach-to-target kinematics and clinical outcomes. Since the assessment of sensorimotor impairments is conventionally based on clinical scales and observation, and hence likely to be subjective, one of the challenges is to quantify such kinematics through automated platforms like inertial measurement units or optical or electromagnetic motion capture systems. Even more challenging is to quantify UL kinematics through non-invasive systems, to avoid any influence or bias in the measurements. In this context, tools based on video cameras and deep learning software have been shown to achieve high accuracy in human pose estimation. Nevertheless, an analysis of their accuracy in measuring kinematic features for the Finger-to-Nose Test (FNT) is missing. We first present an extended quantitative evaluation of such inference software (i.e., OpenPose) for measuring a clinically meaningful set of UL movement features. Then, we propose an algorithm, and the corresponding software implementation, that automates the segmentation of the FNT movements. This allows us to automatically extract the whole set of measures from the videos with no manual intervention. We measured the software's accuracy against an infrared motion capture system on a total of 26 healthy and 26 stroke subjects.
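A minimal sketch of the kind of automatic FNT segmentation described (not the authors' algorithm): split the fingertip-to-nose distance trace derived from pose keypoints into repetitions at the local minima that correspond to nose touches:

```python
import numpy as np

def segment_fnt(dist, threshold=None):
    """Find nose-touch frames in a fingertip-to-nose distance trace:
    local minima that fall below a threshold near the trace's minimum."""
    if threshold is None:
        threshold = dist.min() + 0.25 * (dist.max() - dist.min())
    return [t for t in range(1, len(dist) - 1)
            if dist[t] < threshold
            and dist[t] <= dist[t - 1] and dist[t] < dist[t + 1]]

# Three simulated reach-and-touch cycles (distance in metres)
t = np.linspace(0, 3 * 2 * np.pi, 300)
dist = 0.25 + 0.2 * np.cos(t)
touches = segment_fnt(dist)  # one detected touch per cycle
```

Consecutive touch frames delimit one repetition each, from which per-repetition features (duration, peak velocity, smoothness) can then be computed.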
Title: On the Pose Estimation Software for Measuring Movement Features in the Finger-to-Nose Test
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00033
T. Chomutare, A. Budrionis, H. Dalianis
Computer-assisted coding (CAC) of clinical text into standardized classifications such as ICD-10 is an important challenge. For frequently used ICD-10 codes, deep learning approaches have been quite successful; for rare codes, however, the problem remains open. To improve performance on rare codes, a pipeline is proposed that takes advantage of the ICD-10 code hierarchy to combine the semantic capabilities of deep learning with the flexibility of fuzzy logic. The data used are discharge summaries in Swedish from the medical specialty of gastrointestinal diseases. Using our pipeline, fuzzy matching computation time is reduced and the accuracy of the top-10 hits for rare codes is improved. While the method is promising, further work is required before the pipeline can be part of a usable prototype. Code repository: https://github.com/icd-coding/zeroshot.
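A minimal sketch of hierarchy-restricted fuzzy matching using only the standard library; the toy code table and the use of `difflib` (rather than whichever fuzzy matcher the paper's pipeline employs) are assumptions:

```python
from difflib import SequenceMatcher

# Toy ICD-10 snippet; a real pipeline matches against the full code catalogue.
ICD10 = {
    "K50.0": "Crohn's disease of small intestine",
    "K50.1": "Crohn's disease of large intestine",
    "K51.0": "Ulcerative (chronic) pancolitis",
    "I21.9": "Acute myocardial infarction, unspecified",
}

def rank_codes(mention, chapter=None, top=3):
    """Rank codes by fuzzy string similarity to a clinical mention.
    Restricting candidates to an ICD-10 chapter prefix (e.g. 'K' for
    digestive diseases) shrinks the search space, mirroring how the
    code hierarchy cuts fuzzy-matching time."""
    cands = {c: d for c, d in ICD10.items()
             if chapter is None or c.startswith(chapter)}
    return sorted(cands,
                  key=lambda c: SequenceMatcher(None, mention.lower(),
                                                cands[c].lower()).ratio(),
                  reverse=True)[:top]

hits = rank_codes("crohn disease small intestine", chapter="K")
```

The chapter filter is where the hierarchy pays off: candidates outside the clinically plausible branch are never scored at all.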
Title: Combining deep learning and fuzzy logic to predict rare ICD-10 codes from clinical notes
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00039
Melissa J. Morine, C. Priami, Edith Coronado, Juliana Haber, J. Kaput
Health and the initiation, progression, and outcome of disease are the result of multiple environmental factors interacting with individual genetic makeups. Collectively, results from primary clinical research on health and disease represent the most compendious and reliable source of actionable knowledge on strategies to optimize health. However, the dispersal of this information as unstructured data, distributed across millions of documents, is a substantial challenge in bridging the gap between primary research and concrete recommendations for improving health. Described here is the development and implementation of a machine reading pipeline that builds a knowledge graph of causal relationships between a broad range of predictive/modifiable diet and lifestyle factors and health outcomes, extracted from the vast biomedical corpus in the National Library of Medicine.
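At its core, such a knowledge graph is a set of causal triples with provenance. A minimal sketch, with illustrative entities and placeholder source IDs rather than facts extracted from the actual database:

```python
# Causal knowledge graph as (subject, relation, object, source) triples.
# Entities and source IDs below are illustrative placeholders only.
triples = [
    ("dietary fiber", "decreases_risk_of", "type 2 diabetes", "source-1"),
    ("sedentary behaviour", "increases_risk_of", "type 2 diabetes", "source-2"),
    ("mediterranean diet", "decreases_risk_of", "cardiovascular disease", "source-3"),
]

def factors_for(outcome):
    """Return the modifiable factors linked to a health outcome,
    with the direction of the effect and the supporting source."""
    return [(s, r, src) for s, r, o, src in triples if o == outcome]

diabetes_factors = factors_for("type 2 diabetes")
```

Keeping the source alongside each edge is what lets a recommendation be traced back to the primary clinical literature it came from.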
Title: A Comprehensive and Holistic Health Database
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00036
Chahd Chabib, L. Hadjileontiadis, S. Jemimah, Aamna Al Shehhi
Alzheimer's Disease (AD) is one of the most common neurodegenerative diseases, with changes reflected in the related Magnetic Resonance Imaging (MRI) scans. Early identification of AD is essential for preventive treatment; thus, different machine/deep learning (ML/DL) approaches applied to MRI scans from patients at different AD stages have been proposed in recent years. Here, a new method for AD detection from MRI images using the Fast Curvelet Transform (FCT), namely CurvMRI, is proposed. The approach is realized via a sequence of steps, i.e., feature extraction, feature reduction, and classification. MRI images are obtained from a Kaggle dataset containing five AD stages, from which Cognitive Normal (CN) (493/87 (training/testing)) and AD (145/26) MRI images were selected for binary classification. The FCT with the wrapping method was implemented, and higher-order statistics, such as kurtosis and skewness, as well as energy and variance, were then used to extract features from the curvelet sub-bands. The features were then concatenated and fed to a Support Vector Machine (SVM) classifier, giving an accuracy of 77.6%, which outperforms the most common DL classification approaches applied to the same dataset. These results showcase the potential of the proposed CurvMRI to efficiently discriminate AD from CN in MRI images, and to provide a fast and easy-to-implement ML tool for assisting physicians in AD detection.
Title: CurvMRI: A Curvelet Transform-Based MRI Approach for Alzheimer's Disease Detection
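The hand-crafted feature stage (energy, variance, skewness, and kurtosis per curvelet sub-band, concatenated before the SVM) can be sketched as follows; random arrays stand in for actual FCT sub-bands:

```python
import numpy as np

def subband_stats(band):
    """Energy, variance, skewness and kurtosis of one sub-band's coefficients."""
    x = np.asarray(band, dtype=float).ravel()
    z = (x - x.mean()) / x.std()
    return np.array([
        np.sum(x ** 2),   # energy
        x.var(),          # variance
        np.mean(z ** 3),  # skewness
        np.mean(z ** 4),  # kurtosis (non-excess)
    ])

# Stand-ins for curvelet sub-bands; a real FCT yields sub-bands of varying shape
rng = np.random.default_rng(2)
bands = [rng.normal(size=(16, 16)) for _ in range(6)]

# Concatenate per-band statistics into one feature vector for the SVM
features = np.concatenate([subband_stats(b) for b in bands])
```

Six sub-bands times four statistics give a 24-dimensional vector per image, which is small enough for an SVM to train quickly even on modest datasets.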
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00035
S. Dias, L. Hadjileontiadis, H. F. Jelinek
Rehabilitation programs for recovery after a stroke or a heart attack are always stressful for patients, who have been spending time in hospital, an unaccustomed environment, experiencing the burden of surgery, irregular sleep, and general rehabilitation exercise programs. In the latter, the exercise intensity and difficulty are often more than a patient can manage, and decisions on the level of exercise intensity and difficulty are usually subjective. To address this issue in a more personalized way, the development of a new rehabilitation framework, namely MultiGRehab (multi-sensed biosignals combined with serious games), is proposed here. MultiGRehab captures multimodal biosignals in real time during a patient's rehabilitation session that includes serious gaming. Through biosignal swarm decomposition and deep learning, the emotional state of the patient is estimated and used as a controlling factor for adapting the serious game in terms of exercise type, duration, and intensity level. In this way, MultiGRehab is expected to increase a patient's motivation, adherence to the exercise protocol, and the personalization of rehabilitation targets and outcomes.
Title: MultiGRehab: Developing a Multimodal Biosignals Acquisition and Analysis Framework for Personalizing Stroke and Cardiac Rehabilitation based on Adaptive Serious Games
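A toy version of the emotion-driven adaptation loop (not the framework's trained controller): map the estimated arousal and valence, here assumed normalized to [0, 1], to a change in the game's exercise intensity level:

```python
def adapt_intensity(current_level, arousal, valence, low=0.3, high=0.7):
    """Toy adaptation rule: ease off when the patient seems stressed
    (high arousal, negative valence), push when under-stimulated.
    Levels are clamped to an assumed 1..10 scale."""
    if arousal > high and valence < low:
        return max(1, current_level - 1)   # frustrated/anxious -> make it easier
    if arousal < low:
        return min(10, current_level + 1)  # disengaged -> make it harder
    return current_level                   # comfortable -> hold steady
```

Called once per session segment with the deep-learning emotion estimates, a rule like this closes the loop between the biosignal pipeline and the serious game.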
Pub Date: 2022-07-01 | DOI: 10.1109/ICDH55609.2022.00012
Thong Vo, P. Dave, G. Bajpai, R. Kashef, N. Khan
Brain tumor segmentation is an essential process to diagnose and monitor the development of cancerous cells in the brain. Conventional segmentation methods rely on experts who manually label individual radiology images. Meanwhile, deep learning has shown tremendous progress in medical image segmentation, where minor details are difficult to differentiate. In this paper, we propose a deep learning architecture, named UVR-Net, to automatically segment such radiology images. The proposed architecture is based on the popular U-Net framework, which has demonstrated its robustness and capabilities in the medical imaging field. Experimental results show that the proposed UVR-Net achieves a Dice score of 0.76 and an IoU score of 0.89, improving on the traditional vanilla U-Net architecture by 11% in terms of Dice score. In addition, we also perform a sensitivity analysis for critical parameters and loss functions in the proposed model.
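The two reported metrics can be computed from binary masks as follows (a generic sketch, independent of the UVR-Net implementation):

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity: 2|P∩G| / (|P| + |G|)."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    """Intersection over union: |P∩G| / |P∪G|."""
    inter = np.logical_and(pred, gt).sum()
    return inter / np.logical_or(pred, gt).sum()

# Tiny example: two 4-pixel masks overlapping on a single foreground pixel
pred = np.array([[1, 1, 0, 0]], dtype=bool)
gt   = np.array([[0, 1, 1, 0]], dtype=bool)
```

Note that the two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so for any given pair of masks the Dice score is never lower than the IoU.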
Title: Brain Tumor Segmentation in MRI Images Using A Modified U-Net Model