Cohort studies are among the most commonly used study designs in medical and public health research, and they yield longitudinal data. Conventional statistical models and machine learning methods cannot model the evolution trends of variables in longitudinal data. In this paper, we propose a Trend Analysis Neural Network (TANN), which models the evolution trends of variables through adaptive feature learning. TANN was tested on data from the Kailuan study, where the task was to predict the occurrence of cardiovascular events within 2 and 5 years from 3 repeated medical examinations conducted between 2008 and 2013. For 2-year prediction, the AUC of TANN is 0.7378, a significant improvement over conventional methods: TRNS, RNN, DNN, GBDT, RF, and LR achieve 0.7222, 0.7034, 0.7054, 0.7136, 0.7160, and 0.7024, respectively. TANN also shows improvement for 5-year prediction. The experimental results show that TANN achieves better performance on cardiovascular event prediction than conventional models. Furthermore, by analyzing the weights of TANN, we can identify important trends in the indicators that conventional machine learning models ignore. This trend discovery mechanism makes the model interpretable, so TANN strikes an appropriate balance between high performance and interpretability.
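TANN's exact architecture is not given in the abstract; as a rough illustration of the trend-modeling idea, the sketch below derives per-variable trend features (least-squares slope and net change across the three examinations) that a downstream classifier could consume. The feature choices, variable names, and shapes here are illustrative assumptions, not the paper's method.

```python
import numpy as np

def trend_features(visits: np.ndarray) -> np.ndarray:
    """Derive simple per-variable trend features from repeated exams.

    visits: array of shape (n_exams, n_vars), one row per medical exam
    in chronological order. Returns the least-squares slope and the
    net change (last exam minus first) for each variable.
    """
    t = np.arange(visits.shape[0])          # exam index as pseudo-time
    slope = np.polyfit(t, visits, 1)[0]     # per-column linear slope
    delta = visits[-1] - visits[0]          # net change across follow-up
    return np.concatenate([slope, delta])

# Three exams, two hypothetical variables (e.g., systolic BP and BMI)
exams = np.array([[120.0, 24.0],
                  [130.0, 25.0],
                  [140.0, 26.0]])
feats = trend_features(exams)   # slopes then deltas, one value per variable
```

A conventional model sees only the per-exam values; features like these make the evolution trend itself available to the classifier.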
{"title":"An Interpretable Trend Analysis Neural Networks for Longitudinal Data Analysis","authors":"Zhenjie Yao, Yixin Chen, Jinwei Wang, Junjuan Li, Shuohua Chen, Shouling Wu, Yanhui Tu, Ming-Hui Zhao, Luxia Zhang","doi":"10.1145/3648105","DOIUrl":"https://doi.org/10.1145/3648105","url":null,"abstract":"Cohort study is one of the most commonly used study methods in medical and public health researches, which result in longitudinal data. Conventional statistical models and machine learning methods are not capable of modeling the evolution trend of the variables in longitudinal data. In this paper, we propose a Trend Analysis Neural Networks (TANN), which models the evolution trend of the variables by adaptive feature learning. TANN was tested on dataset of Kaiuan research. The task was to predict occurrence of cardiovascular events within 2 and 5 years, with 3 repeated medical examinations during 2008 and 2013. For 2-year prediction, The AUC of the TANN is 0.7378, which is a significant improvement than that of conventional methods, while that of TRNS, RNN, DNN, GBDT, RF, and LR are 0.7222, 0.7034, 0.7054, 0.7136, 0.7160 and 0.7024, respectively. For 5-year prediction, TANN also shows improvement. The experimental results show that the proposed TANN achieves better prediction performance on cardiovascular events prediction than conventional models. Furthermore, by analyzing the weights of TANN, we could find out important trends of the indicators, which are ignored by conventional machine learning models. The trend discovery mechanism interprets the model well. 
TANN is an appropriate balance between high performance and interpretability.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"22 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139958360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Electroencephalography (EEG) provides an opportunity to gain insight into electrocortical activity without the need for invasive technology. While increasingly used in various application areas, EEG headsets tend to be suited only to laboratory environments because of the long preparation time needed to don the headset and the need for users to remain stationary. We present our design of a dry, dual-electrode flexible PCB assembly that achieves accurate sensing in the face of practical motion artifacts. Using it, we present WalkingWizard, our prototype dry-electrode EEG baseball cap that can be used under motion in everyday scenarios. We first evaluated its hardware performance by comparing its electrode-scalp impedance and ability to capture the alpha rhythm against both wet EEG and commercially available dry EEG headsets. We then tested WalkingWizard in SSVEP experiments, achieving a high classification accuracy of 87% for walking speeds up to 5.0 km/h, surpassing the state of the art. Expanding on WalkingWizard, we integrated all necessary electronic components into a flexible PCB assembly, realizing WalkingWizard Integrated in a truly wearable form factor. Using WalkingWizard Integrated, we demonstrated several proof-of-concept applications: classification of SSVEP in a VR environment while walking, real-time acquisition of users' emotional state while moving around the neighbourhood, and understanding the effect of guided meditation for relaxation.
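The abstract does not specify the SSVEP classifier; one common baseline simply picks the stimulus frequency whose FFT bin carries the most power, sketched below on synthetic data. The sampling rate, candidate frequencies, and overall approach are illustrative assumptions (CCA-based detection is another standard choice).

```python
import numpy as np

def ssvep_classify(signal: np.ndarray, fs: float, stim_freqs) -> float:
    """Return the candidate stimulus frequency with the most power.

    signal: 1-D EEG trace; fs: sampling rate in Hz;
    stim_freqs: candidate flicker frequencies in Hz.
    """
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    # Power at the FFT bin nearest each candidate frequency
    scores = [power[np.argmin(np.abs(freqs - f))] for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]

fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)              # 2 s of synthetic "EEG"
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.1 * rng.standard_normal(t.size)
detected = ssvep_classify(eeg, fs, [8.0, 10.0, 12.0])
```

Under walking-induced motion artifacts the spectrum is far noisier, which is exactly why hardware-level artifact suppression matters for this kind of detector.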
{"title":"WalkingWizard - A truly wearable EEG headset for everyday use","authors":"Teck Lun Goh, L. Peh","doi":"10.1145/3648106","DOIUrl":"https://doi.org/10.1145/3648106","url":null,"abstract":"\u0000 Electroencephalography (EEG) provides an opportunity to gain insights to electrocortical activity without the need for invasive technology. While increasingly used in various application areas, EEG headsets tend to be suited only to a laboratory environment due to the long preparation time to don the headset and the need for users to remain stationary. We present our design of a dry, dual-electrodes flexible PCB assembly that realizes accurate sensing in face of practical motion artifacts. Using it, we present WalkingWizard, our prototype dry-electrode EEG baseball cap that can be used under motion in everyday scenarios. We first evaluated its hardware performance by comparing its electrode-scalp impedance and ability to capture alpha rhythm against both wet EEG, and commercially available dry EEG headsets. We then tested WalkingWizard using SSVEP experiments, achieving high classification accuracy of 87% for walking speeds up to 5.0km/hr, beating state-of-the-art. Expanding on WalkingWizard, we integrated all necessary electronic components into a flexible PCB assembly - realizing\u0000 WalkingWizard Integrated\u0000 , in a truly wearable form-factor. 
Utilizing WalkingWizard Integrated, we demonstrated several applications as proof-of-concept: Classification of SSVEP in VR environment while walking, Real-time acquisition of emotional state of users while moving around the neighbourhood, and Understanding the effect of guided meditation for relaxation.\u0000","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"61 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139836174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pei-Xuan Li, Hsun-Ping Hsieh, Chiang Fan Yang, Ding-You Wu, Ching-Chung Ko
This paper explores the application of self-supervised contrastive learning in the medical domain, focusing on the classification of multi-modality Magnetic Resonance (MR) images. To address the challenges of limited and hard-to-annotate medical data, we introduce multi-modality data augmentation (MDA) and cross-modality group convolution (CGC). In the pre-training phase, we leverage Simple Siamese networks to maximize the similarity between two augmented MR images from a patient, without a handcrafted pretext task. Our approach also combines 3D and 2D group convolution with a channel shuffle operation to efficiently incorporate different modalities of image features. Evaluation on liver MR images from a well-known hospital in Taiwan demonstrates a significant improvement over previous methods. This work contributes to advancing multi-modality contrastive learning, particularly in the context of medical imaging, offering enhanced tools for analyzing complex image data.
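The channel shuffle that lets grouped convolutions mix information across modalities can be sketched compactly. This mirrors the ShuffleNet-style shuffle rather than the paper's exact implementation, and the tensor layout (batch, channels, H, W) is an assumption.

```python
import numpy as np

def channel_shuffle(x: np.ndarray, groups: int) -> np.ndarray:
    """Interleave channels across groups (ShuffleNet-style).

    x: feature map of shape (batch, channels, H, W); channels must be
    divisible by `groups`. After a grouped convolution, shuffling lets
    the next grouped layer see channels from every group (here, from
    every MR modality).
    """
    b, c, h, w = x.shape
    assert c % groups == 0
    # (b, groups, c//groups, h, w) -> swap group/channel axes -> flatten
    return x.reshape(b, groups, c // groups, h, w).swapaxes(1, 2).reshape(b, c, h, w)

x = np.arange(6).reshape(1, 6, 1, 1)     # channels 0..5 in 2 groups
shuffled = channel_shuffle(x, groups=2)  # channels become 0,3,1,4,2,5
```

Without the shuffle, each group of a stacked grouped convolution only ever sees its own modality's channels.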
{"title":"Enhancing Robust Liver Cancer Diagnosis: A Contrastive Multi-Modality Learner with Lightweight Fusion and Effective Data Augmentation","authors":"Pei-Xuan Li, Hsun-Ping Hsieh, Chiang Fan Yang, Ding-You Wu, Ching-Chung Ko","doi":"10.1145/3639414","DOIUrl":"https://doi.org/10.1145/3639414","url":null,"abstract":"This paper explores the application of self-supervised contrastive learning in the medical domain, focusing on classification of multi-modality Magnetic Resonance (MR) images. To address the challenges of limited and hard-to-annotate medical data, we introduce multi-modality data augmentation (MDA) and cross-modality group convolution (CGC). In the pre-training phase, we leverage Simple Siamese networks to maximize the similarity between two augmented MR images from a patient, without a handcrafted pretext task. Our approach also combines 3D and 2D group convolution with a channel shuffle operation to efficiently incorporate different modalities of image features. Evaluation on liver MR images from a well-known hospital in Taiwan demonstrates a significant improvement over previous methods. This work contributes to advancing multi-modality contrastive learning, particularly in the context of medical imaging, offering enhanced tools for analyzing complex image data.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":" 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139140053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Extreme Learning Machine (ELM) is becoming a popular learning algorithm due to its diverse applications, including Human Activity Recognition (HAR). In an ELM, the hidden node parameters are generated at random and the output weights are computed analytically. However, even with a large number of hidden nodes, feature learning with an ELM may not be efficient for natural signals because of its shallow architecture. Because smartphone sensor signals are noisy and high-dimensional, substantial feature engineering is required to obtain discriminative features and address the “curse of dimensionality”. In traditional machine learning approaches, dimensionality reduction and classification are two separate, independent tasks, which increases the system’s computational complexity. To overcome this problem, this research proposes a new ELM-based ensemble learning framework for human activity recognition. The proposed architecture consists of two key parts: 1) self-taught dimensionality reduction followed by classification, and 2) a bridge between them provided by the “Subsampled Randomized Hadamard Transformation” (SRHT). Two different HAR datasets are used to establish the feasibility of the proposed framework, and the experimental results clearly demonstrate the superiority of our method over current state-of-the-art methods.
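The SRHT that bridges the two parts has a standard form: scale by sqrt(n/k), randomly flip signs (D), apply the orthonormal Walsh-Hadamard matrix (H), and keep k random columns. The sketch below follows that textbook definition; how the paper parameterizes it is not stated here, so the dimensions and seed are illustrative.

```python
import numpy as np

def srht(X: np.ndarray, k: int, seed: int = 0) -> np.ndarray:
    """Subsampled Randomized Hadamard Transform of the rows of X.

    Reduces n-dimensional rows (n a power of two) to k dimensions:
    X -> sqrt(n/k) * (X D H)[:, k random columns], where D randomly
    flips signs and H is the orthonormal Walsh-Hadamard matrix.
    """
    n = X.shape[1]
    H = np.array([[1.0]])
    while H.shape[0] < n:                   # build the Hadamard matrix
        H = np.block([[H, H], [H, -H]])
    H /= np.sqrt(n)                         # make it orthonormal
    rng = np.random.default_rng(seed)
    D = rng.choice([-1.0, 1.0], size=n)     # random sign flips
    cols = rng.choice(n, size=k, replace=False)
    return np.sqrt(n / k) * (X * D) @ H[:, cols]

X = np.random.default_rng(1).standard_normal((100, 64))
Z = srht(X, k=16)    # 64-D feature rows compressed to 16-D
```

Because H has a fast O(n log n) transform and the projection approximately preserves distances, SRHT gives a cheap, data-independent dimensionality reduction to feed the ensemble.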
{"title":"Subsampled Randomized Hadamard Transformation based Ensemble Extreme Learning Machine for Human Activity Recognition","authors":"Dipanwita Thakur, Arindam Pal","doi":"10.1145/3634813","DOIUrl":"https://doi.org/10.1145/3634813","url":null,"abstract":"Extreme Learning Machine (ELM) is becoming a popular learning algorithm due to its diverse applications, including Human Activity Recognition (HAR). In ELM, the hidden node parameters are generated at random, and the output weights are computed analytically. However, even with a large number of hidden nodes, feature learning using ELM may not be efficient for natural signals due to its shallow architecture. Due to noisy signals of the smartphone sensors and high dimensional data, substantial feature engineering is required to obtain discriminant features and address the “curse-of-dimensionality”. In traditional ML approaches, dimensionality reduction and classification are two separate and independent tasks, increasing the system’s computational complexity. This research proposes a new ELM-based ensemble learning framework for human activity recognition to overcome this problem. The proposed architecture consists of two key parts: 1) Self-taught dimensionality reduction followed by classification. 2) they are bridged by “Subsampled Randomized Hadamard Transformation” (SRHT). Two different HAR datasets are used to establish the feasibility of the proposed framework. 
The experimental results clearly demonstrate the superiority of our method over the current state-of-the-art methods.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139229816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Luigi D’Arco, Graham McCalmont, Haiying Wang, Huiru Zheng
Recent years have witnessed a growing literature on the use of smart insoles in health and well-being, yet their capability for recognising activities of daily living has not been reviewed. This paper addresses that need by providing a systematic review of smart insole-based systems for the recognition of Activities of Daily Living (ADLs). The review followed the PRISMA guidelines, assessing the sensing elements used, the participants involved, the activities recognised, and the algorithms employed. The findings demonstrate the feasibility of using smart insoles for recognising ADLs, with high performance on ambulation and physical activities involving the lower body: accuracy ranged from 70% to 99.8%, and 13 studies reported over 95%. Solutions incorporating machine learning were preferred. A lack of publicly available datasets was identified, and the majority of studies were conducted in controlled environments. Furthermore, no studies assessed the impact of different sampling frequencies during data collection, and a trade-off between comfort and performance was identified among the solutions. In conclusion, real-life applications were investigated, showing the benefits of smart insoles over other solutions and placing more emphasis on their capabilities.
{"title":"Application of Smart Insoles for Recognition of Activities of Daily Living: A Systematic Review","authors":"Luigi D’Arco, Graham McCalmont, Haiying Wang, Huiru Zheng","doi":"10.1145/3633785","DOIUrl":"https://doi.org/10.1145/3633785","url":null,"abstract":"Recent years have witnessed the increasing literature on using smart insoles in health and well-being, and yet, their capability of daily living activity recognition has not been reviewed. This paper addressed this need and provided a systematic review of smart insole-based systems in the recognition of Activities of Daily Living (ADLs). The review followed the PRISMA guidelines, assessing the sensing elements used, the participants involved, the activities recognised, and the algorithms employed. The findings demonstrate the feasibility of using smart insoles for recognising ADLs, showing their high performance in recognising ambulation and physical activities involving the lower body, ranging from 70% to 99.8% of Accuracy, with 13 studies over 95%. The preferred solutions have been those including machine learning. A lack of existing publicly available datasets has been identified, and the majority of the studies were conducted in controlled environments. Furthermore, no studies assessed the impact of different sampling frequencies during data collection, and a trade-off between comfort and performance has been identified between the solutions. 
In conclusion, real-life applications were investigated showing the benefits of smart insoles over other solutions and placing more emphasis on the capabilities of smart insoles.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"49 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2023-11-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139239470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The quantification of emotional states is an important step toward understanding wellbeing. Time series data from multiple modalities, such as physiological and motion sensor data, have proven integral for measuring and quantifying emotions. However, monitoring emotional trajectories over long periods is subject to critical limitations related to the size of the training data, which may hinder the development of reliable and accurate machine learning models. To address this problem, this paper proposes a framework for emotional state recognition that tackles this limitation: 1) encoding time series data into coloured images; 2) leveraging pre-trained object recognition models to apply a Transfer Learning (TL) approach using the images from step 1; 3) utilising a 1D Convolutional Neural Network (CNN) to perform emotion classification from physiological data; and 4) concatenating the pre-trained TL model with the 1D CNN. We demonstrate that model performance when inferring real-world wellbeing rated on a 5-point Likert scale can be enhanced using our framework, resulting in up to 98.5% accuracy and outperforming a conventional CNN by 4.5%. Subject-independent models using the same approach achieved an average accuracy of 72.3% (SD 0.038). The proposed methodology helps improve performance and overcome the problems of small training datasets.
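The abstract does not name the specific signal-to-image encoding used in step 1; a Gramian Angular Field is one common choice for turning a 1-D physiological series into an image a pre-trained vision model can ingest, and is sketched here purely as an assumed stand-in.

```python
import numpy as np

def gramian_angular_field(x: np.ndarray) -> np.ndarray:
    """Encode a 1-D series as a Gramian Angular Summation Field image.

    Rescales x to [-1, 1], maps each value to an angle phi = arccos(x),
    and forms G[i, j] = cos(phi_i + phi_j), yielding a 2-D "image" that
    preserves temporal correlations along its diagonals.
    """
    x = np.asarray(x, dtype=float)
    x = 2 * (x - x.min()) / (x.max() - x.min()) - 1   # rescale to [-1, 1]
    phi = np.arccos(x)
    return np.cos(phi[:, None] + phi[None, :])

# A 64-sample synthetic physiological trace becomes a 64x64 image
img = gramian_angular_field(np.sin(np.linspace(0, 4 * np.pi, 64)))
```

Once encoded this way, the series can be stacked into colour channels and passed to an ImageNet-pre-trained backbone for transfer learning, as the framework's step 2 describes.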
{"title":"Combining Deep Learning with Signal-image Encoding for Multi-Modal Mental Wellbeing Classification","authors":"Kieran Woodward, Eiman Kanjo, Athanasios Tsanas","doi":"10.1145/3631618","DOIUrl":"https://doi.org/10.1145/3631618","url":null,"abstract":"The quantification of emotional states is an important step to understanding wellbeing. Time series data from multiple modalities such as physiological and motion sensor data have proven to be integral for measuring and quantifying emotions. Monitoring emotional trajectories over long periods of time inherits some critical limitations in relation to the size of the training data. This shortcoming may hinder the development of reliable and accurate machine learning models. To address this problem, this paper proposes a framework to tackle the limitation in performing emotional state recognition: 1) encoding time series data into coloured images; 2) leveraging pre-trained object recognition models to apply a Transfer Learning (TL) approach using the images from step 1; 3) utilising a 1D Convolutional Neural Network (CNN) to perform emotion classification from physiological data; 4) concatenating the pre-trained TL model with the 1D CNN. We demonstrate that model performance when inferring real-world wellbeing rated on a 5-point Likert scale can be enhanced using our framework, resulting in up to 98.5% accuracy, outperforming a conventional CNN by 4.5%. Subject-independent models using the same approach resulted in an average of 72.3% accuracy (SD 0.038). 
The proposed methodology helps improve performance and overcome problems with small training datasets.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"41 17","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135818871","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Shaojing Fan, Ramesh C. Jain, Mohan S. Kankanhalli
Mobile health (mHealth) applications have become increasingly valuable in preventive healthcare and in reducing the burden on healthcare organizations. The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps and to identify the underlying structure that shapes users’ behavioral intention. An online study employing a factorial survey design with vignettes was conducted, with a total of 1,669 participants from eight countries across four continents. Structural equation modeling was used to quantitatively assess how various factors collectively contribute to users’ willingness to use mHealth apps. The results indicate that users’ digital literacy has the strongest impact on their willingness to use such apps, followed by their online habit of sharing personal information; users’ concerns about personal privacy had only a weak impact. Furthermore, users’ demographic background, such as country of residence, age, ethnicity, and education, has a significant moderating effect. Our findings have implications for app designers, healthcare practitioners, and policymakers. Efforts are needed to regulate data collection and sharing and to promote digital literacy among the general population to facilitate the widespread adoption of mHealth apps.
{"title":"A Comprehensive Picture of Factors Affecting User Willingness to Use Mobile Health Applications","authors":"Shaojing Fan, Ramesh C. Jain, Mohan S. Kankanhalli","doi":"10.1145/3626962","DOIUrl":"https://doi.org/10.1145/3626962","url":null,"abstract":"Mobile health (mHealth) applications have become increasingly valuable in preventive healthcare and in reducing the burden on healthcare organizations. The aim of this paper is to investigate the factors that influence user acceptance of mHealth apps and identify the underlying structure that shapes users’ behavioral intention. An online study that employed factorial survey design with vignettes was conducted, and a total of 1,669 participants from eight countries across four continents were included in the study. Structural equation modeling was employed to quantitatively assess how various factors collectively contribute to users’ willingness to use mHealth apps. The results indicate that users’ digital literacy has the strongest impact on their willingness to use them, followed by their online habit of sharing personal information. Users’ concerns about personal privacy only had a weak impact. Furthermore, users’ demographic background, such as their country of residence, age, ethnicity, and education, has a significant moderating effect. Our findings have implications for app designers, healthcare practitioners, and policymakers. 
Efforts are needed to regulate data collection and sharing and promote digital literacy among the general population to facilitate the widespread adoption of mHealth apps.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"146 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136295534","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Asiful Arefeen, Ali Akbari, Seyed Iman Mirzadeh, Roozbeh Jafari, Behrooz A. Shirazi, Hassan Ghasemzadeh
Inter-beat interval (IBI) measurement enables estimation of heart-rate variability (HRV), which, in turn, can provide early indication of potential cardiovascular diseases (CVDs). However, extracting IBIs from noisy signals is challenging because the morphology of the signal becomes distorted in the presence of noise. The electrocardiogram (ECG) of a person in heavy motion is highly corrupted by noise, known as motion artifact, and IBIs extracted from it are inaccurate. As part of remote health monitoring and wearable system development, denoising ECG signals and estimating IBIs correctly from them have become an emerging topic among signal-processing researchers. Beyond conventional methods, deep-learning techniques have recently been used successfully for signal denoising, making diagnosis easier and reaching accuracy levels that were previously unachievable. We propose a deep-learning approach leveraging a Tiramisu autoencoder model to suppress motion-artifact noise and make the R-peaks of the ECG signal prominent even in the presence of high-intensity motion. After denoising, IBIs are estimated more accurately, expediting diagnosis. Results show that our method enables IBI estimation from noisy ECG signals with SNR down to -30 dB, with an average root mean square error (RMSE) of 13 milliseconds for the estimated IBIs.
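The reported IBI RMSE is a direct computation once R-peak locations are available from the denoised signal: difference consecutive peak indices, convert to milliseconds, and compare against the reference. The sampling rate and peak indices below are illustrative, and the sketch assumes both peak trains contain the same beats.

```python
import numpy as np

def ibi_rmse_ms(peaks_est, peaks_ref, fs: float) -> float:
    """RMSE (in ms) between inter-beat intervals of two R-peak trains.

    peaks_*: R-peak sample indices in ascending order; fs: sampling
    rate in Hz. Assumes a one-to-one beat correspondence (no missed
    or spurious detections).
    """
    ibi_est = np.diff(peaks_est) / fs * 1000.0   # intervals in ms
    ibi_ref = np.diff(peaks_ref) / fs * 1000.0
    return float(np.sqrt(np.mean((ibi_est - ibi_ref) ** 2)))

fs = 360.0                                       # common ECG sampling rate
ref = np.array([100, 460, 820, 1180])            # perfectly regular beats
est = np.array([100, 464, 818, 1180])            # slightly jittered peaks
err = ibi_rmse_ms(est, ref, fs)                  # RMSE of ~12 ms
```

Small per-peak localization errors of a few samples translate directly into IBI errors on the order of 10 ms, which is why a 13 ms RMSE at -30 dB SNR is a strong result.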
{"title":"Inter-Beat Interval Estimation with Tiramisu Model: A Novel Approach with Reduced Error","authors":"Asiful Arefeen, Ali Akbari, Seyed Iman Mirzadeh, Roozbeh Jafari, Behrooz A. Shirazi, Hassan Ghasemzadeh","doi":"10.1145/3616020","DOIUrl":"https://doi.org/10.1145/3616020","url":null,"abstract":"Inter-beat interval (IBI) measurement enables estimation of heart-tare variability (HRV) which, in turns, can provide early indication of potential cardiovascular diseases (CVDs). However, extracting IBIs from noisy signals is challenging since the morphology of the signal gets distorted in the presence of noise. Electrocardiogram (ECG) of a person in heavy motion is highly corrupted with noise, known as motion-artifact, and IBI extracted from it is inaccurate. As a part of remote health monitoring and wearable system development, denoising ECG signals and estimating IBIs correctly from them have become an emerging topic among signal-processing researchers. Apart from conventional methods, deep-learning techniques have been successfully used in signal denoising recently, and diagnosis process has become easier, leading to accuracy levels that were previously unachievable. We propose a deep-learning approach leveraging tiramisu autoencoder model to suppress motion-artifact noise and make the R-peaks of the ECG signal prominent even in the presence of high-intensity motion. After denoising, IBIs are estimated more accurately expediting diagnosis tasks. Results illustrate that our method enables IBI estimation from noisy ECG signals with SNR up to -30dB with average root mean square error (RMSE) of 13 milliseconds for estimated IBIs. 
At this noise level, our error percentage remains below 8% and outperforms other state of the art techniques.","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":"298 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135302162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-10-01 | Epub Date: 2023-09-13 | DOI: 10.1145/3616021
Nathan C Hurley, Sanket S Dhruva, Nihar R Desai, Joseph R Ross, Che G Ngufor, Frederick Masoudi, Harlan M Krumholz, Bobak J Mortazavi
Observational medical data present unique opportunities for analysis of medical outcomes and treatment decision making. However, because these datasets do not contain the strict pairing of randomized controlled trials, matching techniques are used to draw comparisons among patients. A key limitation of such techniques is verifying that the variables used to model treatment decision making are also relevant in identifying the risk of major adverse events. This article explores a deep mixture-of-experts approach to jointly learn how to match patients and model their risk of major adverse events. Although trained with information regarding treatment and outcomes, after training, the proposed model is decomposable into a network that clusters patients into phenotypes from information available before treatment. This model is validated on a dataset of patients with acute myocardial infarction complicated by cardiogenic shock. The mixture-of-experts approach can predict the outcome of mortality with an area under the receiver operating characteristic curve of 0.85 ± 0.01 while jointly discovering five potential phenotypes of interest. The technique and interpretation allow for identifying clinically relevant phenotypes that may be used both for outcomes modeling and, potentially, for evaluating individualized treatment effects.
{"title":"Clinical Phenotyping with an Outcomes-driven Mixture of Experts for Patient Matching and Risk Estimation.","authors":"Nathan C Hurley, Sanket S Dhruva, Nihar R Desai, Joseph R Ross, Che G Ngufor, Frederick Masoudi, Harlan M Krumholz, Bobak J Mortazavi","doi":"10.1145/3616021","DOIUrl":"10.1145/3616021","url":null,"abstract":"<p><p>Observational medical data present unique opportunities for analysis of medical outcomes and treatment decision making. However, because these datasets do not contain the strict pairing of randomized controlled trials, matching techniques are used to draw comparisons among patients. A key limitation of such techniques is verifying that the variables used to model treatment decision making are also relevant in identifying the risk of major adverse events. This article explores a deep mixture-of-experts approach to jointly learn how to match patients and model their risk of major adverse events. Although trained with information regarding treatment and outcomes, after training, the proposed model is decomposable into a network that clusters patients into phenotypes from information available before treatment. This model is validated on a dataset of patients with acute myocardial infarction complicated by cardiogenic shock. The mixture-of-experts approach can predict the outcome of mortality with an area under the receiver operating characteristic curve of 0.85 ± 0.01 while jointly discovering five potential phenotypes of interest. 
The technique and interpretation allow for identifying clinically relevant phenotypes that may be used both for outcomes modeling and, potentially, for evaluating individualized treatment effects.</p>","PeriodicalId":72043,"journal":{"name":"ACM transactions on computing for healthcare","volume":" ","pages":"1-18"},"PeriodicalIF":0.0,"publicationDate":"2023-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10613929/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46461728","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
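The decomposable structure described in the abstract — a gating network that softly assigns each patient to a phenotype from pre-treatment features, and per-phenotype experts whose risk estimates are mixed by those gate weights — can be sketched in a few lines. This is an illustrative NumPy construction under assumed dimensions and random weights, not the authors' trained architecture:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_patients, n_features, n_phenotypes = 4, 6, 5   # five phenotypes, as in the paper

X = rng.normal(size=(n_patients, n_features))        # pre-treatment features
W_gate = rng.normal(size=(n_features, n_phenotypes)) # gating network weights
W_exp = rng.normal(size=(n_phenotypes, n_features))  # one linear expert per phenotype

gates = softmax(X @ W_gate)              # soft phenotype assignment; rows sum to 1
expert_risk = sigmoid(X @ W_exp.T)       # each expert's mortality-risk estimate
risk = (gates * expert_risk).sum(axis=1) # gate-weighted mixture prediction

# After training, the gating network alone clusters patients into phenotypes
# using only information available before treatment.
phenotype = gates.argmax(axis=1)
```

The key property the abstract highlights falls out of this factorization: the gate depends only on pre-treatment features, so phenotype assignment remains usable even when outcomes are unknown.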