mCardia: A Context-Aware ECG Collection System for Ambulatory Arrhythmia Screening
Devender Kumar, Raju Maharjan, Alban Maxhuni, Helena Domínguez, A. Frølich, J. Bardram
DOI: https://doi.org/10.1145/3494581
This article presents the design, technical implementation, and feasibility evaluation of mCardia—a context-aware, mobile electrocardiogram (ECG) collection system for longitudinal arrhythmia screening under free-living conditions. Along with ECG, mCardia also records active and passive contextual data, including patient-reported symptoms and physical activity. This contextual data can provide a more accurate understanding of what happens before, during, and after an arrhythmia event, thereby providing additional information for the diagnosis of arrhythmia. By using a plugin-based architecture for ECG and contextual sensing, mCardia is device-agnostic: it can integrate with various wireless ECG devices and supports cross-platform deployment. We deployed the mCardia system in a feasibility study involving 24 patients, who used the system over a two-week period. During the study, we observed high patient acceptance and compliance, with a satisfactory yield of collected ECG and contextual data. The results demonstrate the high usability and feasibility of mCardia for longitudinal ambulatory monitoring under free-living conditions. The article also reports on two clinical cases, which demonstrate how a cardiologist can utilize the collected contextual data to improve the accuracy of arrhythmia analysis. Finally, the article discusses the lessons learned and the challenges identified in the mCardia design and the feasibility study.
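
The plugin-based, device-agnostic design lends itself to a small interface contract between the collector and each sensing source. The following minimal Python sketch illustrates the idea; the class and method names (SamplingPlugin, Collector, MockEcgPlugin) are hypothetical and not mCardia's actual API.

```python
# Hypothetical sketch of a device-agnostic plugin interface in the spirit of
# mCardia's architecture; names are illustrative, not the authors' actual API.
from abc import ABC, abstractmethod
from typing import Callable, List


class SamplingPlugin(ABC):
    """One pluggable data source (ECG device, activity sensor, symptom survey)."""

    @abstractmethod
    def start(self, on_sample: Callable[[dict], None]) -> None:
        """Begin streaming samples to the collector's callback."""

    @abstractmethod
    def stop(self) -> None:
        """Stop streaming and release the device."""


class MockEcgPlugin(SamplingPlugin):
    """Stands in for a vendor-specific wireless ECG driver."""

    def start(self, on_sample):
        # A real plugin would subscribe to a BLE characteristic here.
        on_sample({"type": "ecg", "mV": [0.12, 0.15, 0.11], "ts": 0})

    def stop(self):
        pass


class Collector:
    """Fans samples from all registered plugins into one study record."""

    def __init__(self) -> None:
        self.plugins: List[SamplingPlugin] = []
        self.records: List[dict] = []

    def register(self, plugin: SamplingPlugin) -> None:
        self.plugins.append(plugin)

    def run(self) -> None:
        for p in self.plugins:
            p.start(self.records.append)


collector = Collector()
collector.register(MockEcgPlugin())  # a contextual-sensing plugin would register the same way
collector.run()
print(collector.records)
```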
{"title":"mCardia: A Context-Aware ECG Collection System for Ambulatory Arrhythmia Screening","authors":"Devender Kumar, Raju Maharjan, Alban Maxhuni, Helena Domínguez, A. Frølich, J. Bardram","doi":"10.1145/3494581","DOIUrl":"https://doi.org/10.1145/3494581","url":null,"abstract":"This article presents the design, technical implementation, and feasibility evaluation of mCardia—a context-aware, mobile electrocardiogram (ECG) collection system for longitudinal arrhythmia screening under free-living conditions. Along with ECG, mCardia also records active and passive contextual data, including patient-reported symptoms and physical activity. This contextual data can provide a more accurate understanding of what happens before, during, and after an arrhythmia event, thereby providing additional information in the diagnosis of arrhythmia. By using a plugin-based architecture for ECG and contextual sensing, mCardia is device-agnostic and can integrate with various wireless ECG devices and supports cross-platform deployment. We deployed the mCardia system in a feasibility study involving 24 patients who used the system over a two-week period. During the study, we observed high patient acceptance and compliance with a satisfactory yield of collected ECG and contextual data. The results demonstrate the high usability and feasibility of mCardia for longitudinal ambulatory monitoring under free-living conditions. The article also reports from two clinical cases, which demonstrate how a cardiologist can utilize the collected contextual data to improve the accuracy of arrhythmia analysis. Finally, the article discusses the lessons learned and the challenges found in the mCardia design and the feasibility study.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132404121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

MARS: Assisting Human with Information Processing Tasks Using Machine Learning
Cong Shen, Z. Qian, Alihan Hüyük, M. Schaar
DOI: https://doi.org/10.1145/3494582
This article studies the problem of automated information processing from large volumes of unstructured, heterogeneous, and sometimes untrustworthy data sources. The main contribution is a novel framework called Machine Assisted Record Selection (MARS). Instead of today’s standard practice of relying on human experts to manually decide the order in which records are processed, MARS learns the optimal record selection via an online learning algorithm. It further integrates algorithm-based record selection and processing with human-based error resolution to achieve a balanced task allocation between machine and human. Both fixed and adaptive MARS algorithms are proposed, leveraging different statistical knowledge about the existence, quality, and cost associated with the records. Experiments using semi-synthetic data, generated from real-world patient record processing in the UK national cancer registry, demonstrate a significant (3- to 4-fold) performance gain over fixed-order processing. MARS represents one of the few examples demonstrating that machine learning can assist humans with complex jobs by automating complex triaging tasks.
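
To make the online-learning idea concrete, the following toy Python sketch orders record sources with a standard UCB1 bandit, where the reward is whether a pulled record resolves the case. The source names, hidden yields, and reward model are invented for illustration; this is not the paper's actual MARS algorithm.

```python
# Illustrative record-selection sketch using a UCB1 bandit over record sources;
# the reward model and yields are invented, not taken from the MARS paper.
import math
import random

sources = ["pathology", "radiology", "gp_letters"]           # candidate record streams
pulls = {s: 0 for s in sources}                               # times each source was tried
value = {s: 0.0 for s in sources}                             # running mean reward
true_yield = {"pathology": 0.8, "radiology": 0.5, "gp_letters": 0.2}  # hidden quality

for t in range(1, 1001):
    def ucb(s):
        # UCB1: prefer sources with a high mean reward or little exploration.
        if pulls[s] == 0:
            return float("inf")
        return value[s] + math.sqrt(2 * math.log(t) / pulls[s])

    s = max(sources, key=ucb)
    reward = 1.0 if random.random() < true_yield[s] else 0.0  # did the record resolve the case?
    pulls[s] += 1
    value[s] += (reward - value[s]) / pulls[s]                # incremental mean update

print({s: round(value[s], 2) for s in sources})               # estimates approach true yields
```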
{"title":"MARS: Assisting Human with Information Processing Tasks Using Machine Learning","authors":"Cong Shen, Z. Qian, Alihan Hüyük, M. Schaar","doi":"10.1145/3494582","DOIUrl":"https://doi.org/10.1145/3494582","url":null,"abstract":"This article studies the problem of automated information processing from large volumes of unstructured, heterogeneous, and sometimes untrustworthy data sources. The main contribution is a novel framework called Machine Assisted Record Selection (MARS). Instead of today’s standard practice of relying on human experts to manually decide the order of records for processing, MARS learns the optimal record selection via an online learning algorithm. It further integrates algorithm-based record selection and processing with human-based error resolution to achieve a balanced task allocation between machine and human. Both fixed and adaptive MARS algorithms are proposed, leveraging different statistical knowledge about the existence, quality, and cost associated with the records. Experiments using semi-synthetic data that are generated from real-world patients record processing in the UK national cancer registry are carried out, which demonstrate significant (3 to 4 fold) performance gain over the fixed-order processing. MARS represents one of the few examples demonstrating that machine learning can assist humans with complex jobs by automating complex triaging tasks.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127719958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Survey on Healthy Food Decision Influences Through Technological Innovations
Jermaine Marshall, Priscilla Jiménez-Pazmino, Ronald Metoyer, N. Chawla
DOI: https://doi.org/10.1145/3494580
It is well known that unhealthy food consumption plays a significant role in dietary and lifestyle-related diseases. Therefore, it is important for researchers to examine methods that may encourage consumers to adopt healthier dietary and lifestyle habits, as diseases such as obesity, heart disease, and high blood pressure remain a worldwide issue. One promising approach to influencing healthy dietary and lifestyle habits is food recommendation models that recommend food to users based on various factors such as health effects, nutrition, preferences, and daily habits. Unfortunately, much of this work has focused on individual factors such as taste preferences and often neglects the other factors that influence our choices. Additionally, the evaluation of technological approaches often lacks user studies in the context of intended use. In this systematic review of food choice technology, we focus on the factors that may influence food choices and how technology can play a role in supporting those choices. We also describe existing work, approaches, trends, and issues in current food choice technology and offer directions for future work in this space.
{"title":"A Survey on Healthy Food Decision Influences Through Technological Innovations","authors":"Jermaine Marshall, Priscilla Jiménez-Pazmino, Ronald Metoyer, N. Chawla","doi":"10.1145/3494580","DOIUrl":"https://doi.org/10.1145/3494580","url":null,"abstract":"It is well known that unhealthy food consumption plays a significant role in dietary and lifestyle-related diseases. Therefore, it is important for researchers to examine methods that may encourage the consumer to consider healthier dietary and lifestyle habits as diseases such as obesity, heart disease, and high blood pressure remain a worldwide issue. One promising approach to influencing healthy dietary and lifestyle habits is food recommendation models that recommend food to users based on various factors such as health effects, nutrition, preferences, and daily habits. Unfortunately, much of this work has focused on individual factors such as taste preferences and often neglects to understand other factors that influence our choices. Additionally, the evaluation of technological approaches often lacks user studies in the context of intended use. In this systematic review of food choice technology, we focus on the factors that may influence food choices and how technology can play a role in supporting those choices. We also describe existing work, approaches, trends, and issues in current food choice technology and give advice for future work areas in this space.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133125443","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Embedding Temporal Convolutional Networks for Energy-efficient PPG-based Heart Rate Monitoring
A. Burrello, D. J. Pagliari, Pierangelo Maria Rapa, Matilde Semilia, Matteo Risso, T. Polonelli, M. Poncino, L. Benini, S. Benatti
DOI: https://doi.org/10.1145/3487910
Photoplethysmography (PPG) sensors allow for non-invasive and comfortable heart rate (HR) monitoring, suitable for compact wrist-worn devices. Unfortunately, motion artifacts (MAs) severely impact the monitoring accuracy, causing high variability in the skin-to-sensor interface. Several data fusion techniques have been introduced to cope with this problem, based on combining PPG signals with inertial sensor data. To date, both commercial and research solutions are either computationally efficient but not very robust, or strongly dependent on hand-tuned parameters, which leads to poor generalization performance. In this work, we tackle these limitations by proposing a computationally lightweight yet robust deep learning-based approach for PPG-based HR estimation. Specifically, we derive a diverse set of Temporal Convolutional Networks for HR estimation, leveraging Neural Architecture Search. Moreover, we also introduce ActPPG, an adaptive algorithm that selects among multiple HR estimators depending on the amount of MAs, to improve energy efficiency. We validate our approaches on two benchmark datasets, achieving a Mean Absolute Error as low as 3.84 beats per minute on PPG-Dalia, which outperforms the previous state of the art. Moreover, we deploy our models on a low-power commercial microcontroller (STM32L4), obtaining a rich set of Pareto-optimal solutions in the complexity vs. accuracy space.
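
As a rough illustration of the model family, the following PyTorch sketch builds a tiny dilated temporal convolutional regressor that maps a window of PPG and accelerometer channels to a scalar HR estimate. The layer sizes and input shape are assumptions, not the NAS-derived architectures from the paper.

```python
# Minimal dilated temporal convolutional regressor for PPG-based HR estimation;
# layer sizes are illustrative, not the paper's NAS-derived architectures.
import torch
import torch.nn as nn


class TCNBlock(nn.Module):
    def __init__(self, ch_in, ch_out, dilation):
        super().__init__()
        # Padding of `dilation` keeps the sequence length for a kernel of 3.
        self.conv = nn.Conv1d(ch_in, ch_out, kernel_size=3,
                              padding=dilation, dilation=dilation)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))


class TinyTCN(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 1 PPG channel + 3 accelerometer channels per time window.
        self.blocks = nn.Sequential(
            TCNBlock(4, 16, dilation=1),
            TCNBlock(16, 16, dilation=2),
            TCNBlock(16, 16, dilation=4),   # receptive field grows exponentially
        )
        self.head = nn.Linear(16, 1)        # scalar HR in beats per minute

    def forward(self, x):                   # x: (batch, 4, time)
        h = self.blocks(x).mean(dim=-1)     # global average pooling over time
        return self.head(h)


model = TinyTCN()
window = torch.randn(2, 4, 256)             # two dummy 8 s windows at 32 Hz
print(model(window).shape)                   # torch.Size([2, 1])
```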
{"title":"Embedding Temporal Convolutional Networks for Energy-efficient PPG-based Heart Rate Monitoring","authors":"A. Burrello, D. J. Pagliari, Pierangelo Maria Rapa, Matilde Semilia, Matteo Risso, T. Polonelli, M. Poncino, L. Benini, S. Benatti","doi":"10.1145/3487910","DOIUrl":"https://doi.org/10.1145/3487910","url":null,"abstract":"Photoplethysmography (PPG) sensors allow for non-invasive and comfortable heart rate (HR) monitoring, suitable for compact wrist-worn devices. Unfortunately, motion artifacts (MAs) severely impact the monitoring accuracy, causing high variability in the skin-to-sensor interface. Several data fusion techniques have been introduced to cope with this problem, based on combining PPG signals with inertial sensor data. Until now, both commercial and reasearch solutions are computationally efficient but not very robust, or strongly dependent on hand-tuned parameters, which leads to poor generalization performance. In this work, we tackle these limitations by proposing a computationally lightweight yet robust deep learning-based approach for PPG-based HR estimation. Specifically, we derive a diverse set of Temporal Convolutional Networks for HR estimation, leveraging Neural Architecture Search. Moreover, we also introduce ActPPG, an adaptive algorithm that selects among multiple HR estimators depending on the amount of MAs, to improve energy efficiency. We validate our approaches on two benchmark datasets, achieving as low as 3.84 beats per minute of Mean Absolute Error on PPG-Dalia, which outperforms the previous state of the art. Moreover, we deploy our models on a low-power commercial microcontroller (STM32L4), obtaining a rich set of Pareto optimal solutions in the complexity vs. accuracy space.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115315438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Introduction to the Special Issue on Computational Methods for Biomedical NLP
M. Devarakonda, E. Voorhees
DOI: https://doi.org/10.1145/3492302

It is now well established that biomedical text requires methods targeted to the domain. Developments in deep learning and a series of successful shared challenges have contributed to steady progress in techniques for natural language processing of biomedical text. Contributing to this ongoing progress, and focusing in particular on computational methods, this special issue was created to encourage research in novel approaches for analyzing biomedical text. The six papers selected for the issue offer a diversity of novel methods that leverage biomedical text for research and clinical uses.

A well-established practice in pretraining deep learning models for biomedical applications has been to adopt a promising model already pretrained on a general-domain natural language corpus and then “add” further pretraining with biomedical corpora. In “Domain-specific language model pretraining for biomedical natural language processing”, Gu et al. successfully challenge this approach. The authors conducted an experiment in which multiple standard benchmarks were used to compare a model pretrained entirely and only on a biomedical corpus with models pretrained using the “add-on” approach. Results showed an impressive improvement in favor of pretraining only with the biomedical corpus. The study provides an excellent data point in support of clarity in model training rather than accumulation.

Tariq et al. also find domain-aware tokenization and embeddings to be more effective in their paper “Bridging the Gap Between Structured and Free-form Radiology Reporting: A Case-study on Coronary CT Angiography”. They compare a variety of models constructed to predict the severity of cardiovascular disease from the language used within free-text radiology reports. Models that used medical-domain-aware tokenization and word embeddings of the reports were consistently more effective than raw word-based models. The better models are able to accurately predict disease severity under real-world conditions of diverse terminology from different radiologists and unbalanced class sizes.

Two papers address the problem of maintaining the privacy of clinical documents, though from widely different perspectives. De-identification is the most used approach to eliminate Protected Health Information (PHI) in clinical documents before making the data available to NLP researchers. In “A Context-enhanced De-identification System”, Kahyun et al. describe an improved de-identification technique for clinical records. Their context-enhanced de-identification system, called CEDI, uses attention mechanisms in a long short-term memory (LSTM) network to capture the appropriate context. This context allows the system to detect dependencies that cross sentence boundaries, an important feature since clinical reports often contain such dependencies. Nonetheless, accurate and broad-coverage de-identification of unstructured data remains challenging, and lack of trust in the pro…
{"title":"Introduction to the Special Issue on Computational Methods for Biomedical NLP","authors":"M. Devarakonda, E. Voorhees","doi":"10.1145/3492302","DOIUrl":"https://doi.org/10.1145/3492302","url":null,"abstract":"It is now well established that biomedical text requires methods targeted for the domain. Developments in deep learning and a series of successful shared challenges have contributed to a steady progress in techniques for natural language processing of biomedical text. Contributing to this on-going progress and particularly focusing on computational methods, this special issue was created to encourage research in novel approaches for analyzing biomedical text. The six papers selected for the issue offer a diversity of novel methods that leverage biomedical text for research and clinical uses. A well-established practice in pretraining deep learning models for biomedical applications has been to adopt a most promising model that was already pretrained on general domain natural language corpus and then “add” additional pre-training with biomedical corpora. In “Domain-specific language model pretraining for biomedical natural language processing”, Gu et al. successfully challenge this approach. The authors conducted an experiment where multiple standard benchmarks were used to compare a model that was pre-trained entirely and only on biomedical corpus with models that were pretrained using the “add” on approach. Results showed an impressive improvement in favor of pretraining only with biomedical corpus. The study provides an excellent data-point in support of clarity in model training rather than accumulation. Tariq et al. also find using domain-aware tokenization and embeddings to be more effective in their paper “Bridging the Gap Between Structured and Free-form Radiology Reporting: A Case-study on Coronary CT Angiography”. They compare a variety of models constructed to predict the severity of cardiovascular disease from the language used within free-text radiology reports. Models that used medical-domain-aware tokenization and word embeddings of the reports were consistently more effective than raw word-based. The better models are able to accurately predict disease severity under real-world conditions of diverse terminology from different radiologists and unbalanced class size. Two papers address the problem of maintaining the privacy of clinical documents, though from widely different perspectives. De-identification is the most used approach to eliminate PHI (Protected Health Information) in clinical documents before making the data available to NLP researchers. In “A Context-enhanced De-identification System”, Kahyun et al. describe an improved de-identification technique for clinical records. Their context-enhanced de-identification system called CEDI uses attention mechanisms in a long short-term memory (LSTM) network to capture the appropriate context. This context allows the system to detect dependencies that cross sentence boundaries, an important feature since clinical reports often contain such dependencies. 
Nonetheless, accurate and broad-coverage de-identification of unstructured data remains challenging, and lack of trust in the pro","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115356652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Emotion Recognition Robust to Indoor Environmental Distortions and Non-targeted Emotions Using Out-of-distribution Detection
Ye Gao, Asif Salekin, Kristin D. Gordon, Karen Rose, Hongning Wang, J. Stankovic
DOI: https://doi.org/10.1145/3492300
The rapid development of machine learning on acoustic signal processing has resulted in many solutions for detecting emotions from speech. Early works were developed for clean and acted speech and for a fixed set of emotions. Importantly, the datasets and solutions assumed that a person only exhibited one of these emotions. More recent work has continually been adding realism to emotion detection by considering issues such as reverberation, de-amplification, and background noise, but often considering one dataset at a time, and also assuming all emotions are accounted for in the model. We significantly improve realistic considerations for emotion detection by (i) more comprehensively assessing different situations by combining five common publicly available datasets into one and enhancing the new dataset with data augmentation that considers reverberation and de-amplification, (ii) incorporating 11 typical home noises into the acoustics, and (iii) considering that in real situations a person may exhibit emotions that are not currently of interest, which should not be forced into a pre-fixed category or improperly labeled. Our novel solution combines a CNN with out-of-distribution detection. Our solution broadens the range of situations in which emotions can be effectively detected and outperforms a state-of-the-art baseline.
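
One common way to realize such rejection of non-targeted inputs is maximum-softmax-probability thresholding, sketched below in Python; the threshold and logits are dummy values, and the paper's actual out-of-distribution detector may differ.

```python
# Sketch of rejecting non-targeted emotions via a maximum-softmax-probability
# threshold; the paper's actual detector may differ, and all values are dummies.
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "neutral"]
THRESHOLD = 0.7  # would be tuned on held-out data in practice; arbitrary here


def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()


def classify_or_reject(logits):
    p = softmax(np.asarray(logits, dtype=float))
    if p.max() < THRESHOLD:
        return "non-targeted"          # low confidence -> treat as out-of-distribution
    return EMOTIONS[int(p.argmax())]


print(classify_or_reject([4.0, 0.2, 0.1, 0.3]))  # confident -> 'happy'
print(classify_or_reject([0.9, 0.8, 0.7, 0.6]))  # flat distribution -> 'non-targeted'
```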
{"title":"Emotion Recognition Robust to Indoor Environmental Distortions and Non-targeted Emotions Using Out-of-distribution Detection","authors":"Ye Gao, Asif Salekin, Kristin D. Gordon, Karen Rose, Hongning Wang, J. Stankovic","doi":"10.1145/3492300","DOIUrl":"https://doi.org/10.1145/3492300","url":null,"abstract":"The rapid development of machine learning on acoustic signal processing has resulted in many solutions for detecting emotions from speech. Early works were developed for clean and acted speech and for a fixed set of emotions. Importantly, the datasets and solutions assumed that a person only exhibited one of these emotions. More recent work has continually been adding realism to emotion detection by considering issues such as reverberation, de-amplification, and background noise, but often considering one dataset at a time, and also assuming all emotions are accounted for in the model. We significantly improve realistic considerations for emotion detection by (i) more comprehensively assessing different situations by combining the five common publicly available datasets as one and enhancing the new dataset with data augmentation that considers reverberation and de-amplification, (ii) incorporating 11 typical home noises into the acoustics, and (iii) considering that in real situations a person may be exhibiting many emotions that are not currently of interest and they should not have to fit into a pre-fixed category nor be improperly labeled. Our novel solution combines CNN with out-of-data distribution detection. Our solution increases the situations where emotions can be effectively detected and outperforms a state-of-the-art baseline.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132561112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A Minimalist Method Toward Severity Assessment and Progression Monitoring of Obstructive Sleep Apnea on the Edge
Md Juber Rahman, B. Morshed
DOI: https://doi.org/10.1145/3479432
Artificial Intelligence-enabled applications on edge devices have the potential to revolutionize disease detection and monitoring in future smart health (sHealth) systems. In this study, we investigated a minimalist approach for the severity classification, severity estimation, and progression monitoring of obstructive sleep apnea (OSA) in a home environment using wearables. We used the recursive feature elimination technique to select the best set of 70 features from a total of 200 features extracted from polysomnograms. We used a multi-layer perceptron model to compare the performance of OSA severity classification using all the ranked features against subsets of features available from either electroencephalography or Heart Rate Variability (HRV) together with the time duration of SpO2 levels. The results indicate that using only computationally inexpensive features from HRV and SpO2, an area under the curve of 0.91 and an accuracy of 83.97% can be achieved for the severity classification of OSA. For estimation of the apnea-hypopnea index, an RMSE of 4.6 and an R-squared value of 0.71 were achieved on the test set using only the ranked HRV and SpO2 features. The Wilcoxon signed-rank test indicates a significant change (p < 0.05) in the selected feature values as the disease progresses over 2.5 years. The method has the potential for integration with edge computing for deployment on everyday wearables. This may facilitate the preliminary severity estimation, monitoring, and management of OSA patients and reduce associated healthcare costs as well as the prevalence of untreated OSA.
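
A minimal sketch of the select-then-classify pipeline on synthetic data is shown below. Since scikit-learn's RFE requires a coefficient-based estimator, a logistic regression does the feature ranking here, which may differ from the paper's exact setup; only the 200-to-70 feature counts mirror the abstract.

```python
# Select-then-classify sketch on synthetic data: RFE ranks features, then an
# MLP classifies on the retained subset. Setup details are assumptions.
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# 200 synthetic stand-ins for polysomnogram features; 70 are kept, as in the paper.
X, y = make_classification(n_samples=600, n_features=200, n_informative=30, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rfe = RFE(LogisticRegression(max_iter=2000), n_features_to_select=70).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
mlp.fit(rfe.transform(X_tr), y_tr)
print("held-out accuracy:", round(mlp.score(rfe.transform(X_te), y_te), 3))
```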
{"title":"A Minimalist Method Toward Severity Assessment and Progression Monitoring of Obstructive Sleep Apnea on the Edge","authors":"Md Juber Rahman, B. Morshed","doi":"10.1145/3479432","DOIUrl":"https://doi.org/10.1145/3479432","url":null,"abstract":"Artificial Intelligence-enabled applications on edge devices have the potential to revolutionize disease detection and monitoring in future smart health (sHealth) systems. In this study, we investigated a minimalist approach for the severity classification, severity estimation, and progression monitoring of obstructive sleep apnea (OSA) in a home environment using wearables. We used the recursive feature elimination technique to select the best feature set of 70 features from a total of 200 features extracted from polysomnogram. We used a multi-layer perceptron model to investigate the performance of OSA severity classification with all the ranked features to a subset of features available from either Electroencephalography or Heart Rate Variability (HRV) and time duration of SpO2 level. The results indicate that using only computationally inexpensive features from HRV and SpO2, an area under the curve of 0.91 and an accuracy of 83.97% can be achieved for the severity classification of OSA. For estimation of the apnea-hypopnea index, the accuracy of RMSE = 4.6 and R-squared value = 0.71 have been achieved in the test set using only ranked HRV and SpO2 features. The Wilcoxon-signed-rank test indicates a significant change (p < 0.05) in the selected feature values for a progression in the disease over 2.5 years. The method has the potential for integration with edge computing for deployment on everyday wearables. This may facilitate the preliminary severity estimation, monitoring, and management of OSA patients and reduce associated healthcare costs as well as the prevalence of untreated OSA.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116407318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Computer-Assisted Cohort Identification in Practice
Besat Kassaie, E. Irving, Frank Wm. Tompa
DOI: https://doi.org/10.1145/3483411
The standard approach to expert-in-the-loop machine learning is active learning, where an expert is repeatedly asked to annotate one or more records and the machine finds a classifier that respects all annotations made up to that point. We propose an alternative approach, IQRef, in which the expert iteratively designs a classifier and the machine helps him or her determine how well it is performing and, importantly, when to stop, by reporting statistics on a fixed, hold-out sample of annotated records. We justify our approach based on prior work giving a theoretical model of how to re-use hold-out data. We compare the two approaches in the context of identifying a cohort of electronic health records (EHRs) and examine their strengths and weaknesses through a case study arising from an optometric research problem. We conclude that the two approaches are complementary, and we recommend that they be employed in conjunction to address the problem of cohort identification in health research.
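
A toy sketch of the IQRef loop follows: the expert codes a candidate cohort rule, and the machine reports precision and recall on a fixed, pre-annotated hold-out sample. The record fields and the rule are invented for illustration.

```python
# Toy IQRef-style loop: an expert-written rule is scored against a fixed,
# pre-annotated hold-out sample; records and the rule are invented examples.
holdout = [  # fixed hold-out sample of (EHR snippet, in-cohort?) pairs
    ({"text": "diagnosed with glaucoma"}, True),
    ({"text": "routine eye exam, no findings"}, False),
    ({"text": "glaucoma suspect, monitor IOP"}, True),
    ({"text": "cataract surgery follow-up"}, False),
]


def expert_rule_v1(record):
    """Iteration 1 of the expert's hand-crafted cohort definition."""
    return "glaucoma" in record["text"]


def report(classifier):
    """The machine's role: summarize performance so the expert can decide to stop."""
    tp = sum(1 for r, y in holdout if classifier(r) and y)
    fp = sum(1 for r, y in holdout if classifier(r) and not y)
    fn = sum(1 for r, y in holdout if not classifier(r) and y)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall


print(report(expert_rule_v1))  # (1.0, 1.0) on this toy sample -> expert may stop
```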
{"title":"Computer-Assisted Cohort Identification in Practice","authors":"Besat Kassaie, E. Irving, Frank Wm. Tompa","doi":"10.1145/3483411","DOIUrl":"https://doi.org/10.1145/3483411","url":null,"abstract":"The standard approach to expert-in-the-loop machine learning is active learning, where, repeatedly, an expert is asked to annotate one or more records and the machine finds a classifier that respects all annotations made until that point. We propose an alternative approach, IQRef, in which the expert iteratively designs a classifier and the machine helps him or her to determine how well it is performing and, importantly, when to stop, by reporting statistics on a fixed, hold-out sample of annotated records. We justify our approach based on prior work giving a theoretical model of how to re-use hold-out data. We compare the two approaches in the context of identifying a cohort of EHRs and examine their strengths and weaknesses through a case study arising from an optometric research problem. We conclude that both approaches are complementary, and we recommend that they both be employed in conjunction to address the problem of cohort identification in health research.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125730496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Automatic Parotid Gland Segmentation in MVCT Using Deep Convolutional Neural Networks
Junqian Zhang, Ying-Zhi Sun, Hongen Liao, Jian Zhu, Yuan Zhang
DOI: https://doi.org/10.1145/3485278
Radiation-induced xerostomia, a major problem in radiation treatment of head and neck cancer, is mainly due to overdose irradiation injury to the parotid glands. Helical Tomotherapy-based megavoltage computed tomography (MVCT) imaging during Tomotherapy treatment can be applied to monitor successive variations in the parotid glands. While manual segmentation is time-consuming, laborious, and subjective, automatic segmentation is quite challenging due to the complicated anatomical environment of the head and neck as well as noise in MVCT images. In this article, we propose a localization-refinement scheme to segment the parotid gland in MVCT. After data pre-processing, we use a Mask Region Convolutional Neural Network (Mask R-CNN) in the localization stage and design a modified U-Net for the subsequent fine-segmentation stage. To the best of our knowledge, this study is a pioneering work of deep learning on MVCT segmentation. Comprehensive experiments based on different data distributions of head and neck MVCTs and different segmentation models have demonstrated the superiority of our approach in terms of accuracy, effectiveness, flexibility, and practicability. Our method can be adopted as a powerful tool for radiation-induced injury studies, where accurate organ segmentation is crucial.
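
Structurally, the localization-refinement scheme composes a detector with a fine segmenter, as in the following Python sketch; the two stubs stand in for the Mask R-CNN and modified U-Net stages, whose actual trained models the paper applies to MVCT volumes.

```python
# Structural sketch of a localize-then-refine segmentation pipeline; the stubs
# below stand in for Mask R-CNN and the paper's modified U-Net.
import numpy as np


def localize(slice_2d):
    """Stub detector: return a coarse bounding box (y0, y1, x0, x1) for the parotid."""
    return 40, 104, 60, 124  # a trained Mask R-CNN would predict this


def refine(patch):
    """Stub segmenter: return a binary mask for the cropped patch."""
    return (patch > patch.mean()).astype(np.uint8)  # the modified U-Net in the paper


def segment(slice_2d):
    y0, y1, x0, x1 = localize(slice_2d)
    mask = np.zeros_like(slice_2d, dtype=np.uint8)
    mask[y0:y1, x0:x1] = refine(slice_2d[y0:y1, x0:x1])  # paste the fine mask back
    return mask


mvct_slice = np.random.rand(256, 256).astype(np.float32)
print(segment(mvct_slice).sum(), "pixels labeled parotid in this slice")
```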
{"title":"Automatic Parotid Gland Segmentation in MVCT Using Deep Convolutional Neural Networks","authors":"Junqian Zhang, Ying-Zhi Sun, Hongen Liao, Jian Zhu, Yuan Zhang","doi":"10.1145/3485278","DOIUrl":"https://doi.org/10.1145/3485278","url":null,"abstract":"Radiation-induced xerostomia, as a major problem in radiation treatment of the head and neck cancer, is mainly due to the overdose irradiation injury to the parotid glands. Helical Tomotherapy-based megavoltage computed tomography (MVCT) imaging during the Tomotherapy treatment can be applied to monitor the successive variations in the parotid glands. While manual segmentation is time consuming, laborious, and subjective, automatic segmentation is quite challenging due to the complicated anatomical environment of head and neck as well as noises in MVCT images. In this article, we propose a localization-refinement scheme to segment the parotid gland in MVCT. After data pre-processing we use mask region convolutional neural network (Mask R-CNN) in the localization stage after data pre-processing, and design a modified U-Net in the following fine segmentation stage. To the best of our knowledge, this study is a pioneering work of deep learning on MVCT segmentation. Comprehensive experiments based on different data distribution of head and neck MVCTs and different segmentation models have demonstrated the superiority of our approach in terms of accuracy, effectiveness, flexibility, and practicability. Our method can be adopted as a powerful tool for radiation-induced injury studies, where accurate organ segmentation is crucial.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"325 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123308817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An Energy Efficient Health Monitoring Approach with Wireless Body Area Networks
Seemandhar Jain, Prarthi Jain, P. K. Upadhyay, J. M. Moualeu, Abhishek Srivastava
DOI: https://doi.org/10.1145/3501773
Wireless Body Area Networks (WBANs) comprise a network of sensors subcutaneously implanted or placed near the body surface and facilitate continuous monitoring of a patient's health parameters. Research endeavours involving WBANs are directed towards the effective transmission of sensed parameters to a Local Processing Unit (LPU, usually a mobile device) and the analysis of those parameters at the LPU or a back-end cloud. An important concern in WBANs is the lightweight nature of WBAN nodes and the need to conserve their energy. This is especially true for subcutaneously implanted nodes that cannot be recharged or regularly replaced. Work on energy conservation is mostly aimed at optimising the routing of signals to minimise the energy expended. In this article, a simple yet innovative approach to energy conservation and the detection of an alarming health status is proposed. Energy conservation is ensured through a two-tier approach wherein the first tier eliminates “uninteresting” health parameter readings at the site of a sensing node and prevents these from being transmitted across the WBAN to the LPU. The second tier of assessment includes a proposed anomaly detection model at the LPU that is capable of identifying anomalies in streaming health parameter readings that indicate an adverse medical condition. In addition to being able to handle streaming data, the model works within the resource-constrained environment of an LPU and eliminates the need to transmit the data to a back-end cloud, ensuring further energy savings. The anomaly detection capability of the model is validated using data available from the critical care units of hospitals and is shown to be superior to other anomaly detection techniques.
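
The two-tier idea can be sketched as a dead-band filter at the sensor node and a lightweight streaming detector at the LPU, as below; the thresholds, z-score detector, and heart-rate stream are illustrative assumptions rather than the paper's model.

```python
# Toy two-tier sketch: tier 1 suppresses "uninteresting" readings at the node,
# tier 2 flags anomalies at the LPU; all thresholds and data are illustrative.
class SensorNode:
    """Tier 1: transmit only readings that move beyond a dead band."""

    def __init__(self, dead_band=2.0):
        self.dead_band = dead_band
        self.last_sent = None

    def maybe_transmit(self, reading):
        if self.last_sent is None or abs(reading - self.last_sent) >= self.dead_band:
            self.last_sent = reading
            return reading       # costs radio energy
        return None              # suppressed: saves energy


class LPUDetector:
    """Tier 2: flag readings far from the running mean (Welford's update)."""

    def __init__(self, z=3.0):
        self.n, self.mean, self.m2, self.z = 0, 0.0, 0.0, z

    def is_anomaly(self, x):
        # Score against statistics from previous readings, then update them.
        std = (self.m2 / self.n) ** 0.5 if self.n > 1 else 0.0
        flagged = self.n > 1 and std > 0 and abs(x - self.mean) > self.z * std
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        return flagged


node, lpu = SensorNode(), LPUDetector()
for hr in [72, 73, 72, 71, 74, 72, 73, 140]:     # heart-rate stream with one spike
    sent = node.maybe_transmit(hr)
    if sent is not None and lpu.is_anomaly(sent):
        print("alert: anomalous reading", sent)   # fires on the 140 bpm spike
```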
{"title":"An Energy Efficient Health Monitoring Approach with Wireless Body Area Networks","authors":"Seemandhar Jain, Prarthi Jain, P. K. Upadhyay, J. M. Moualeu, Abhishek Srivastava","doi":"10.1145/3501773","DOIUrl":"https://doi.org/10.1145/3501773","url":null,"abstract":"Wireless Body Area Networks (WBANs) comprise a network of sensors subcutaneously implanted or placed near the body surface and facilitate continuous monitoring of health parameters of a patient. Research endeavours involving WBAN are directed towards effective transmission of detected parameters to a Local Processing Unit (LPU, usually a mobile device) and analysis of the parameters at the LPU or a back-end cloud. An important concern in WBAN is the lightweight nature of WBAN nodes and the need to conserve their energy. This is especially true for subcutaneously implanted nodes that cannot be recharged or regularly replaced. Work in energy conservation is mostly aimed at optimising the routing of signals to minimise energy expended. In this article, a simple yet innovative approach to energy conservation and detection of alarming health status is proposed. Energy conservation is ensured through a two-tier approach wherein the first tier eliminates “uninteresting” health parameter readings at the site of a sensing node and prevents these from being transmitted across the WBAN to the LPU. The second tier of assessment includes a proposed anomaly detection model at the LPU that is capable of identifying anomalies from streaming health parameter readings and indicates an adverse medical condition. In addition to being able to handle streaming data, the model works within the resource-constrained environments of an LPU and eliminates the need of transmitting the data to a back-end cloud, ensuring further energy savings. The anomaly detection capability of the model is validated using data available from the critical care units of hospitals and is shown to be superior to other anomaly detection techniques.","PeriodicalId":288903,"journal":{"name":"ACM Transactions on Computing for Healthcare (HEALTH)","volume":"80 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128080763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}