
Latest publications in Smart Health

Smart health practices: Strategies to improve healthcare efficiency through digital twin technology
Q2 Health Professions Pub Date : 2025-02-01 DOI: 10.1016/j.smhl.2025.100541
Md. Armanul Hasan , Ridwan Mustofa , Niamat Ullah Ibne Hossain , Md. Saiful Islam
A digital twin (DT) is a virtual representation of a real-world object that has dynamic, bidirectional connections between the real-world object and its digital domain. With the advent of Industry 4.0, DT technology was initially applied in the engineering and manufacturing sectors, but recent research indicates DT may also be useful within the healthcare sector. The purpose of this study was to determine the potential applications of DT technology in the healthcare sector and offer suggestions for its effective implementation by healthcare institutions to increase service efficiency. Based on a review of the literature, we developed a model to demonstrate the applications of DTs to public and personal health. A questionnaire using a five-point Likert scale was then designed based on this model. Data were collected through an online survey of 306 participants. To verify our hypothesized correlations among the constructs, structural equation modeling was used. The findings suggested that explainable artificial intelligence-based early diagnosis, simulation model-based vaccination, artificial intelligence location technology, sensor-based real-time health monitoring, and in silico personalized medicine are potential applications of DT that can increase healthcare efficiency. We also considered the moderating influence of (a) security and privacy and (b) certification and regulatory issues, acknowledging their pivotal roles in ensuring the successful implementation and widespread acceptance of DT technology in the field of healthcare. This study contributes to the body of knowledge in academia and offers useful insights for technologists, policymakers, and healthcare professionals who want to fully utilize DT technology to build an effective healthcare system that can adapt to the changing needs of communities and individuals.
Citations: 0
Human knowledge-based artificial intelligence methods for skin cancer management: Accuracy and interpretability study
Q2 Health Professions Pub Date : 2025-01-23 DOI: 10.1016/j.smhl.2025.100540
Eman Rezk , Mohamed Eltorki , Wael El-Dakhakhni
Skin cancer management, including monitoring and excision, involves sophisticated decisions reliant on several interdependent factors. This complexity leads to a scarcity of data useful for skin cancer management. Deep learning has achieved massive success in computer vision due to its ability to extract representative features from images. However, deep learning methods require large amounts of data to develop accurate models, whereas machine learning methods perform well with small datasets. In this work, we aim to compare the accuracy and interpretability of skin cancer management prediction 1) using deep learning and machine learning methods and 2) utilizing various inputs, including clinical images, dermoscopic images, and lesion clinical tabular features created by experts to represent lesion characteristics. We implemented two approaches. The first is a deep learning pipeline for feature extraction and classification trained on different input modalities, including images and lesion clinical features. The second uses lesion clinical features to train machine learning classifiers. The results show that the machine learning approach trained on clinical features achieves higher accuracy (0.80) and a higher area under the curve (0.92) than the deep learning pipeline trained on skin images and lesion clinical features, which achieves an accuracy of 0.66 and an area under the curve of 0.74. Additionally, the machine learning approach provides more informative and understandable interpretations of the results. This work emphasizes the significance of utilizing human knowledge in developing precise and transparent predictive models. In addition, our findings highlight the potential of machine learning methods for predicting lesion management in situations where the data size is insufficient to leverage deep learning capabilities.
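The comparison between the two approaches hinges on the area under the ROC curve. As an illustrative numpy sketch — the scores and labels below are hypothetical, not the study's data — AUC can be computed rank-wise as the probability that a positive case outscores a negative one:

```python
import numpy as np

def roc_auc(y_true, scores):
    """Rank-based AUC: the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outscores negative
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (wins + 0.5 * ties) / (pos.size * neg.size)

# Hypothetical management scores from a classifier on lesion clinical features.
y = [1, 1, 1, 0, 0]            # 1 = excision recommended, 0 = monitor
scores = [0.9, 0.8, 0.4, 0.5, 0.2]
print(roc_auc(y, scores))      # 5 of 6 positive/negative pairs correctly ranked
```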
Citations: 0
SAFE: Sound Analysis for Fall Event detection using machine learning
Q2 Health Professions Pub Date : 2025-01-06 DOI: 10.1016/j.smhl.2024.100539
Antony Garcia , Xinming Huang
This study evaluates the application of machine learning (ML) and deep learning (DL) algorithms for fall detection using sound signals. The work is supported by the Sound Analysis for Fall Events (SAFE) dataset, comprising 950 audio samples, including 475 fall events recorded with a grappling dummy to simulate realistic scenarios. Decision tree-based ML algorithms achieved a classification accuracy of 93% at lower sampling rates, indicating that critical features are preserved despite reduced resolution. DL models, using spectrogram-based feature extraction, reached accuracies of up to 99%, surpassing traditional ML methods in performance. Linear models also achieved high accuracy (up to 97%) across various spectrogram techniques, underscoring the separability of the audio features. These results establish the viability of sound-based fall detection systems as efficient and accurate solutions.
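A minimal sketch of the spectrogram-based feature-extraction step, using a numpy-only short-time FFT and an invented test signal (the SAFE pipeline's actual frame sizes, sampling rates, and window choices are not given in the abstract):

```python
import numpy as np

def log_spectrogram(x, frame_len=256, hop=128):
    """Log-magnitude spectrogram from a Hann-windowed short-time FFT."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

# Hypothetical "impact" sound: a decaying 50 Hz burst sampled at 2048 Hz.
fs = 2048
t = np.arange(fs) / fs
impact = np.sin(2 * np.pi * 50 * t) * np.exp(-5 * t)
spec = log_spectrogram(impact)
print(spec.shape)  # (time frames, frequency bins) -- the classifier's input
```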
Citations: 0
Latent Space Representation of Adversarial AutoEncoder for Human Activity Recognition: Application to a low-cost commercial force plate and inertial measurement units
Q2 Health Professions Pub Date : 2025-01-04 DOI: 10.1016/j.smhl.2024.100537
Kenta Kamikokuryo , Gentiane Venture , Vincent Hernandez
Human Activity Recognition (HAR) is a key component of a home rehabilitation system that provides real-time monitoring and personalized feedback. This research explores the application of Adversarial AutoEncoder (AAE) models for data dimensionality reduction in the context of HAR. Visualizing data in a lower-dimensional space is important to understand changes in motor control due to medical conditions or aging, to aid personalized interventions, and to ensure continuous benefits in remote rehabilitation settings. This makes patient assessment effective, easier, and faster.
In this study, the classification performance of the latent space created by the AAE is evaluated using the Wii Balance Board (WiiBB) and/or three Inertial Measurement Units (IMUs) placed on the forearms and hip. Various sensor configurations are considered, including only WiiBB, only IMUs, combinations of WiiBB with the IMU at the hip, and combinations of WiiBB with the 3 IMUs.
The accuracy of the latent space representation is compared with two common supervised classification models: a Convolutional Neural Network (CNN) and CNNLSTM, a network composed of convolutional layers followed by recurrent layers. The approach was demonstrated on two sets of exercises, covering upper and lower body movements, collected from 19 participants.
The results show that the latent space representation of the AAE achieves a strong classification accuracy performance while also serving as a visualization tool. This study is an initial demonstration of the potential of integrating WiiBB and IMU sensors for comprehensive activity recognition for upper and lower body movement analysis.
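The paper's encoder is an AAE; as a rough stand-in for the core idea — classifying in a low-dimensional latent space that doubles as a visualization — the sketch below projects hypothetical windowed sensor features with PCA and classifies by nearest centroid. All data, dimensions, and the PCA substitution are illustrative, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical windowed sensor features for two activities (e.g. squat vs. reach).
class_a = rng.normal(loc=0.0, scale=0.3, size=(40, 12))
class_b = rng.normal(loc=1.0, scale=0.3, size=(40, 12))
X = np.vstack([class_a, class_b])
y = np.array([0] * 40 + [1] * 40)

# 2-D "latent space" via PCA (SVD on centered data) -- a linear stand-in
# for the AAE encoder's learned projection.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                       # each row is a 2-D latent point

# Nearest-centroid classification directly in the latent space.
centroids = np.stack([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((Z[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

Because the latent points are 2-D, the same `Z` used for classification can be scattered directly for visual inspection — the dual use the abstract describes.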
Citations: 0
Novel EEG feature selection based on Hellinger distance for epileptic seizure detection
Q2 Health Professions Pub Date : 2025-01-01 DOI: 10.1016/j.smhl.2024.100536
Muhammed Sadiq , Mustafa Noaman Kadhim , Dhiah Al-Shammary , Mariofanna Milanova
This study introduces a novel feature selection method based on Hellinger distance and particle swarm optimization (PSO) for reducing the dimensionality of features in electroencephalogram (EEG) signals and improving epileptic seizure detection accuracy. In the first phase, the Hellinger distance is used as a filter to remove redundant and irrelevant features by calculating the similarity between blocks within the feature, thus reducing the search space for the subsequent second phase. In the second phase, PSO searches the reduced feature space to select the best subset. Recognizing that both classification accuracy and dimensionality play crucial roles in the performance of feature subsets, PSO searches various sets of features (ranging from 410 to 2867 in EEG signals) derived from the first stage using Hellinger distance, rather than searching through the full set of 4047 features, to select the optimal subset. The proposed Hellinger-PSO approach demonstrates significant improvements in classification accuracy across multiple models. Specifically, Logistic Regression (LR) improved from 91% to 95% (4% improvement), Decision Tree (DT) from 95% to 97% (2% improvement), Naive Bayes (NB) from 94% to 99% (5% improvement), and Random Forest (RF) from 96% to 98% (2% improvement) on the Bonn dataset. Additionally, the method reduces dimensionality while maintaining high classification performance. The results validate the efficacy of the Hellinger-PSO technique, which enhances both the accuracy and efficiency of epileptic seizure detection. This approach has the potential to improve diagnostic accuracy in medical settings, aiding in better patient care and more effective clinical decision-making.
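For discrete distributions P and Q, the Hellinger distance is H(P, Q) = (1/√2)·‖√P − √Q‖₂, ranging from 0 (identical) to 1 (disjoint support). A minimal numpy sketch of the block-similarity filtering idea — the block histograms below are illustrative, not derived from the Bonn data:

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete distributions:
    0 for identical distributions, 1 for disjoint support."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.linalg.norm(np.sqrt(p) - np.sqrt(q)) / np.sqrt(2)

# Normalized histograms of two blocks of the same EEG feature. Near-identical
# blocks (small distance) suggest the feature varies little across the signal
# and is a candidate for removal before the PSO search.
block_a = [0.1, 0.4, 0.4, 0.1]
block_b = [0.1, 0.5, 0.3, 0.1]   # similar to block_a
block_c = [0.7, 0.1, 0.1, 0.1]   # dissimilar to block_a
print(hellinger(block_a, block_b), hellinger(block_a, block_c))
```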
Citations: 0
Explainable screening of oral cancer via deep learning and case-based reasoning
Q2 Health Professions Pub Date : 2025-01-01 DOI: 10.1016/j.smhl.2024.100538
Mario G.C.A. Cimino , Giuseppina Campisi , Federico A. Galatolo , Paolo Neri , Pietro Tozzo , Marco Parola , Gaetano La Mantia , Olga Di Fede
Oral Squamous Cell Carcinoma is characterized by significant mortality and morbidity. Dental professionals can play an important role in its early detection, thanks to the availability of embedded smart cameras for oral photos and remote screening supported by Deep Learning (DL). Despite the promising results of DL for automated detection and classification of oral lesions, its effectiveness depends on a clearly defined protocol, on the explainability of results, and on periodic case collection. This paper proposes a novel method, combining DL and Case-Based Reasoning (CBR), to allow post-hoc explanation of the system's answers. The method uses explainability tools organized in a protocol defined in the Business Process Model and Notation (BPMN) to allow its experimental validation. A redesign of the Faster R-CNN Feature Pyramid Network (FPN) + DL architecture is also proposed for lesion detection and classification, fine-tuned on 160 cases belonging to three classes of oral ulcers. The DL system achieves state-of-the-art performance, i.e., an 83% detection rate and a 92% classification rate (98% for neoplastic vs. non-neoplastic binary classification). A preliminary evaluation of the protocol involved both resident and specialist doctors on selected difficult cases. The system and cases have been publicly released to foster collaboration between research centers.
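Case-based reasoning explains a prediction by retrieving similar archived cases. A minimal sketch of such retrieval by cosine similarity — the embedding vectors, case descriptions, and the idea of taking them from the detector's backbone are all hypothetical, not the released system:

```python
import numpy as np

def retrieve_similar_cases(query, case_bank, k=2):
    """Indices of the k stored cases most similar to the query (cosine similarity)."""
    q = query / np.linalg.norm(query)
    C = case_bank / np.linalg.norm(case_bank, axis=1, keepdims=True)
    sims = C @ q                      # cosine similarity to every archived case
    return np.argsort(-sims)[:k]      # best matches first

# Hypothetical feature embeddings for archived, expert-annotated lesion cases.
case_bank = np.array([
    [0.9, 0.1, 0.0],   # case 0: neoplastic-like presentation
    [0.8, 0.2, 0.1],   # case 1: neoplastic-like presentation
    [0.1, 0.9, 0.3],   # case 2: benign ulcer
])
query = np.array([0.85, 0.15, 0.05])  # embedding of the new lesion photo
print(retrieve_similar_cases(query, case_bank))
```

Showing the retrieved cases (with their known diagnoses) alongside the network's class score is what makes the explanation post hoc: the prediction is justified by precedent rather than by the network's internals.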
Citations: 0
A novel convolutional interpretability model for pixel-level interpretation of medical image classification through fusion of machine learning and fuzzy logic
Q2 Health Professions Pub Date : 2024-12-21 DOI: 10.1016/j.smhl.2024.100535
Mohammad Ennab, Hamid Mcheick
Artificial intelligence (AI) models for medical image analysis have achieved high diagnostic performance, but they often lack interpretability, limiting their clinical adoption. Existing methods can explain predictions at the image level, but they cannot provide pixel-level insights. This study proposes a novel fusion of machine learning and fuzzy logic to develop an interpretable model that can precisely identify the discriminative image regions driving diagnostic decisions and generate heatmap visualizations. The model is trained and evaluated on a dataset of CT scans containing healthy and diseased organ images. Quantitative features are extracted across pixels and normalized into representation matrices using a machine learning model. Subsequently, the contribution of each detected lesion to the overall prediction is quantified using fuzzy logic. Organ-segment weighted averages are computed to identify significant lesions. The model explains the application of AI in medical imaging at an unprecedented level of detail. It can identify the fine-grained image areas that have the greatest influence on diagnostic outcomes by mapping raw image pixels to fuzzy membership concepts. Detected lesions are reported with effect sizes and statistical significance (p < 0.05).
Our model outperforms three existing methods in terms of interpretability and diagnostic accuracy by 10–15%, while maintaining computational efficiency. By disclosing crucial image evidence that supports AI decisions, this interpretable model improves transparency and clinician trust. Ethical implications of integrating AI in clinical settings are discussed, and future research directions are outlined. This study significantly advances the development of safe and interpretable AI for enhancing patient care through imaging analytics.
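Fuzzy membership functions are what map raw pixel values to interpretable concepts. A minimal sketch with triangular memberships over normalized intensity — the particular fuzzy sets and their breakpoints below are invented for illustration, not taken from the paper:

```python
import numpy as np

def triangular(x, a, b, c):
    """Triangular membership function: 0 at a, peaks at 1 at b, back to 0 at c."""
    x = np.asarray(x, dtype=float)
    left = (x - a) / (b - a)     # rising edge
    right = (c - x) / (c - b)    # falling edge
    return np.clip(np.minimum(left, right), 0.0, 1.0)

# Hypothetical fuzzy sets over normalized pixel intensity in [0, 1].
pixels = np.array([0.1, 0.5, 0.9])
dark   = triangular(pixels, -0.5, 0.0, 0.5)
lesion = triangular(pixels,  0.2, 0.5, 0.8)
bright = triangular(pixels,  0.5, 1.0, 1.5)
print(lesion)  # degree to which each pixel belongs to the "lesion-intensity" set
```

A heatmap then follows directly: rendering each pixel's membership degree in the diagnostically relevant set gives the pixel-level explanation the abstract describes.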
Citations: 0
A novel guidance framework for nasal rapid antigen tests with improved swab keypoint detection
Q2 Health Professions Pub Date : 2024-12-06 DOI: 10.1016/j.smhl.2024.100534
Matthias Tschöpe, Dennis Schneider, Sungho Suh, Paul Lukowicz
The global impact of the COVID-19 pandemic has placed an unprecedented burden on healthcare systems. In this paper, we present a novel deep learning-based framework to guide individuals in performing nasal antigen rapid tests, with a particular focus on improving swab keypoint detection. Our system provides real-time feedback to participants on the correct execution of the test and may issue a certificate upon successful completion. While initially developed for COVID-19 antigen rapid tests, our versatile framework extends its applicability to various nasal screening tests, eliminating the need for specific information about the liquid solvent. To implement and evaluate our framework, we curated a comprehensive dataset with rapid test components and trained an object detection model to identify the position and size of all objects in each video frame. Addressing the challenge of swab depth classification, we propose a novel approach to locate and classify crucial swab points by a self-defined decision tree for depth assessment within the nasal cavity. The robustness of the proposed framework is validated with COVID-19 antigen rapid tests from various manufacturers. Experimental results demonstrate the remarkable performance of the framework in classifying the nasal placement of the swab, achieving an F1-Score of 89.78%. Additionally, our framework attains an F1-Score of 99.37% in classifying final test results on the test device.
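The paper's self-defined decision tree for depth assessment is not reproduced in the abstract; the sketch below only shows the general shape of such a rule, inferring insertion depth from the visible swab length between two detected keypoints. The calibration constant and both thresholds are hypothetical:

```python
import math

SWAB_LEN_PX = 150.0  # assumed full swab length in image pixels (would need per-frame calibration)

def depth_class(nostril, handle_end, shallow=0.1, deep=0.35):
    """Decision-tree-style depth check: as the swab is inserted, the visible
    length (nostril keypoint to handle-end keypoint) shrinks, so the inserted
    fraction is 1 - visible / full."""
    visible = math.dist(nostril, handle_end)
    inserted = max(0.0, 1.0 - visible / SWAB_LEN_PX)
    if inserted < shallow:
        return "too_shallow"
    if inserted > deep:
        return "too_deep"
    return "ok"

print(depth_class((100, 100), (100, 220)))  # 120 px visible -> 20% inserted
```

In the real framework this check would run per frame, feeding the real-time feedback loop that tells the participant to insert further or stop.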
Smart Health, Volume 35, Article 100534. Citations: 0
Data-driven assessment of the effectiveness of non-pharmaceutical interventions on Covid spread mitigation in Italy
Q2 Health Professions Pub Date: 2024-12-05 DOI: 10.1016/j.smhl.2024.100524
Divya Pragna Mulla, Mario Alessandro Bochicchio, Antonella Longo
To mitigate the impact of pandemics such as COVID-19, governments can implement various Non-Pharmaceutical Interventions (NPIs), ranging from the use of personal protective equipment to social distancing measures. While it has been demonstrated that NPIs can be effective over time, the assessment of their efficacy and the estimation of their cost-benefit ratio are still debated issues. For COVID-19, several authors have used case confirmation as a key parameter to assess the efficacy of NPIs. In this paper, we compare the efficacy of this parameter to that of the death rate, hospitalizations, and intensive care unit cases, in conjunction with human mobility indicators, in evaluating the effectiveness of NPIs. Our research uses data on daily COVID-19 cases and deaths, intensive care unit cases, hospitalizations, Google Mobility Reports, and NPI data from all Italian regions from March 2020 to May 2022. The evaluation method is based on the approach proposed by Wang et al. in 2020 to assess the impact of NPI efficacy and understand the effect of other parameters. Our results indicate that, when combined with human mobility indicators, the mortality rate and the number of intensive care unit cases perform better than the number of confirmed cases in determining the efficacy of NPIs. These findings can assist policymakers in developing the best data-driven methods for dealing with confinement problems and planning for future outbreaks.
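A minimal sketch of the kind of comparison described above: measuring how strongly an NPI-related series tracks different outcome indicators (cases, deaths, ICU occupancy) via a correlation coefficient. All series, names, and values here are illustrative, not data from the study:

```python
# Illustrative sketch: Pearson correlation between an NPI stringency series
# and the change in an outcome indicator. Values are made up for the example.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / sqrt(vx * vy)

stringency = [20, 40, 60, 80]      # hypothetical NPI stringency index
icu_change = [5, 2, -1, -4]        # hypothetical daily change in ICU cases
print(pearson(stringency, icu_change))  # prints -1.0
```

Repeating such a comparison per indicator (cases, deaths, ICU) and per region, alongside mobility data, is one simple way to rank which parameter best reflects NPI efficacy.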
Smart Health, Volume 35, Article 100524. Citations: 0
A novel rule-based expert system for early diagnosis of bipolar and Major Depressive Disorder
Q2 Health Professions Pub Date: 2024-12-04 DOI: 10.1016/j.smhl.2024.100525
Mohammad Hossein Zolfagharnasab, Siavash Damari, Madjid Soltani, Artie Ng, Hengameh Karbalaeipour, Amin Haghdadi, Masood Hamed Saghayan, Farzam Matinfar
A confident and timely diagnosis of mental illnesses is one of the primary challenges practitioners repeatedly encounter when they start treating new patients. Diagnosis can quickly become problematic because subjects present overlapping symptoms across mental illnesses. Because these disorders affect a broad population, a reliable differentiation between Major Depressive Disorder, Mania Bipolar Disorder, Depressive Bipolar Disorder, and ordinary individuals with mild symptoms is a critical issue for community health. This study responds to this problem by proposing a novel rule-based Expert System that evaluates the impact of disorder symptoms on the Certainty Factor associated with each mental status. The semantic rules are developed based on the recommendations of experts, and the implementation is carried out in the Prolog and C# languages. Furthermore, an easy-to-use user interface is provided to facilitate the system workflow. The consistency of the developed framework is established through rigorous tests by expert psychiatrists as well as 120 clinical samples collected from private sources. Based on the results, the current model classifies mental disorder cases with a success rate of 93.33% using only the 17 symptoms specified in the ontology model. Furthermore, a questionnaire measuring user satisfaction after the test achieves a mean score of 3.56 out of 4, indicating a high degree of user acceptance. As a result, it is concluded that the current framework is a reliable tool for achieving a solid diagnosis in a shorter period.
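The abstract does not give the exact Certainty Factor arithmetic. A common choice in rule-based expert systems is the MYCIN-style combination of CFs from multiple fired rules, sketched here under that assumption; the symptom evidence values are illustrative, not from the paper:

```python
# Illustrative sketch, assuming MYCIN-style Certainty Factor combination
# (CF values in [-1, 1]; positive = evidence for, negative = against).
def combine_cf(cf1, cf2):
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 < 0 and cf2 < 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

# Two hypothetical symptom rules supporting one diagnosis:
cf = combine_cf(0.6, 0.5)   # 0.6 + 0.5 * (1 - 0.6) = 0.8
```

Folding each observed symptom's CF into a running total per diagnosis, then reporting the status with the highest combined CF, is one standard way such a rule-based differentiation can be realized.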
Smart Health, Volume 35, Article 100525. Citations: 0