
Latest publications in Frontiers in Artificial Intelligence

Real-time temperature anomaly detection in vaccine refrigeration systems using deep learning on a resource-constrained microcontroller.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-08-01 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1429602
Mokhtar Harrabi, Abdelaziz Hamdi, Bouraoui Ouni, Jamel Bel Hadj Tahar

Maintaining consistent and accurate temperature is critical for the safe and effective storage of vaccines. Traditional monitoring methods often lack real-time capabilities and may not be sensitive enough to detect subtle anomalies. This paper presents a novel deep learning-based system for real-time temperature fault detection in refrigeration systems used for vaccine storage. Our system utilizes a semi-supervised Convolutional Autoencoder (CAE) model deployed on a resource-constrained ESP32 microcontroller. The CAE is trained on real-world temperature sensor data to capture temporal patterns and reconstruct normal temperature profiles. Deviations from the reconstructed profiles are flagged as potential anomalies, enabling real-time fault detection. Evaluation using real-time data demonstrates an impressive 92% accuracy in identifying temperature faults. The system's low energy consumption (0.05 watts) and memory usage (1.2 MB) make it suitable for deployment in resource-constrained environments. This work paves the way for improved monitoring and fault detection in refrigeration systems, ultimately contributing to the reliable storage of life-saving vaccines.
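
To make the approach concrete, below is a minimal sketch (in PyTorch) of reconstruction-error anomaly detection with a 1D convolutional autoencoder: train on normal temperature windows only, then flag windows whose reconstruction error exceeds a threshold. The layer sizes, window length, and threshold rule are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of reconstruction-error anomaly detection with a 1D
# convolutional autoencoder (PyTorch). Layer sizes, window length, and the
# thresholding rule are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

WINDOW = 64  # assumed number of temperature samples per window

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 8, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv1d(8, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(16, 8, kernel_size=5, stride=2, padding=2,
                               output_padding=1), nn.ReLU(),
            nn.ConvTranspose1d(8, 1, kernel_size=5, stride=2, padding=2,
                               output_padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def fit(model, normal_windows, epochs=20, lr=1e-3):
    """Train the autoencoder to reconstruct normal temperature windows."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(normal_windows), normal_windows)
        loss.backward()
        opt.step()
    return model

@torch.no_grad()
def anomaly_scores(model, windows):
    """Per-window mean squared reconstruction error; high error => anomaly."""
    recon = model(windows)
    return ((recon - windows) ** 2).mean(dim=(1, 2))

# Usage: train on normal data only, then threshold at e.g. mean + 3*std
# of the training-set error (one common rule; the paper's may differ).
normal = torch.randn(256, 1, WINDOW) * 0.1 + 4.0   # stand-in for ~4 °C logs
model = fit(ConvAutoencoder(), normal)
scores = anomaly_scores(model, normal)
threshold = scores.mean() + 3 * scores.std()
is_anomaly = anomaly_scores(model, torch.randn(8, 1, WINDOW) + 10.0) > threshold
```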

Citations: 0
A unified Foot and Mouth Disease dataset for Uganda: evaluating machine learning predictive performance degradation under varying distributions.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-31 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1446368
Geofrey Kapalaga, Florence N Kivunike, Susan Kerfua, Daudi Jjingo, Savino Biryomumaisho, Justus Rutaisire, Paul Ssajjakambwe, Swidiq Mugerwa, Yusuf Kiwala

In Uganda, the absence of a unified dataset for constructing machine learning models to predict Foot and Mouth Disease outbreaks hinders preparedness. Although machine learning models exhibit excellent predictive performance for Foot and Mouth Disease outbreaks under stationary conditions, they are susceptible to performance degradation in non-stationary environments. Rainfall and temperature are key factors influencing these outbreaks, and their variability due to climate change can significantly impact predictive performance. This study created a unified Foot and Mouth Disease dataset by integrating disparate sources and pre-processing data using mean imputation, duplicate removal, visualization, and merging techniques. To evaluate performance degradation, seven machine learning models were trained and assessed using metrics including accuracy, area under the receiver operating characteristic curve, recall, precision and F1-score. The dataset showed a significant class imbalance with more non-outbreaks than outbreaks, requiring data augmentation methods. Variability in rainfall and temperature impacted predictive performance, causing notable degradation. Random Forest with borderline SMOTE was the top-performing model in a stationary environment, achieving 92% accuracy, 0.97 area under the receiver operating characteristic curve, 0.94 recall, 0.90 precision, and 0.92 F1-score. However, under varying distributions, all models exhibited significant performance degradation, with random forest accuracy dropping to 46%, area under the receiver operating characteristic curve to 0.58, recall to 0.03, precision to 0.24, and F1-score to 0.06. This study underscores the creation of a unified Foot and Mouth Disease dataset for Uganda and reveals significant performance degradation in seven machine learning models under varying distributions. These findings highlight the need for new methods to address the impact of distribution variability on predictive performance.
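
A minimal sketch of the imbalance-handling pipeline the study reports as best in the stationary setting: borderline SMOTE oversampling of the training split, followed by a random forest scored with the same five metrics. The synthetic features below (stand-ins for rainfall and temperature) and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch: borderline SMOTE oversampling + random forest, evaluated
# with the study's metrics. Data and hyperparameters are placeholders.
import numpy as np
from imblearn.over_sampling import BorderlineSMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))             # stand-in for rainfall, temperature
y = (rng.random(1000) < 0.1).astype(int)   # ~10% outbreaks: imbalanced classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample only the training split so the test set stays untouched.
X_res, y_res = BorderlineSMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_res, y_res)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]

print("accuracy ", accuracy_score(y_te, pred))
print("AUC      ", roc_auc_score(y_te, proba))
print("recall   ", recall_score(y_te, pred))
print("precision", precision_score(y_te, pred, zero_division=0))
print("F1       ", f1_score(y_te, pred))
```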

Citations: 0
Multimodal data integration for oncology in the era of deep neural networks: a review.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-25 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1408843
Asim Waqas, Aakash Tripathi, Ravi P Ramachandran, Paul A Stewart, Ghulam Rasool

Cancer research encompasses data across various scales, modalities, and resolutions, from screening and diagnostic imaging to digitized histopathology slides to various types of molecular data and clinical records. The integration of these diverse data types for personalized cancer care and predictive modeling holds the promise of enhancing the accuracy and reliability of cancer screening, diagnosis, and treatment. Traditional analytical methods, which often focus on isolated or unimodal information, fall short of capturing the complex and heterogeneous nature of cancer data. The advent of deep neural networks has spurred the development of sophisticated multimodal data fusion techniques capable of extracting and synthesizing information from disparate sources. Among these, Graph Neural Networks (GNNs) and Transformers have emerged as powerful tools for multimodal learning, demonstrating significant success. This review presents the foundational principles of multimodal learning including oncology data modalities, taxonomy of multimodal learning, and fusion strategies. We delve into the recent advancements in GNNs and Transformers for the fusion of multimodal data in oncology, spotlighting key studies and their pivotal findings. We discuss the unique challenges of multimodal learning, such as data heterogeneity and integration complexities, alongside the opportunities it presents for a more nuanced and comprehensive understanding of cancer. Finally, we present some of the latest comprehensive multimodal pan-cancer data sources. By surveying the landscape of multimodal data integration in oncology, our goal is to underline the transformative potential of multimodal GNNs and Transformers. Through technological advancements and the methodological innovations presented in this review, we aim to chart a course for future research in this promising field. This review may be the first that highlights the current state of multimodal modeling applications in cancer using GNNs and transformers, presents comprehensive multimodal oncology data sources, and sets the stage for multimodal evolution, encouraging further exploration and development in personalized cancer care.
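
As an illustration of one fusion strategy of the kind surveyed here, the sketch below treats per-modality embeddings (e.g., imaging, histopathology, molecular) as tokens and fuses them with a small Transformer encoder; all dimensions and the pooling choice are illustrative assumptions, not any specific reviewed architecture.

```python
# Minimal sketch of token-based multimodal fusion with a small Transformer
# encoder (PyTorch). Dimensions and pooling are illustrative assumptions.
import torch
import torch.nn as nn

class TokenFusion(nn.Module):
    def __init__(self, dims=(512, 256, 128), d_model=64, n_classes=2):
        super().__init__()
        # Project each modality's embedding into a shared token space.
        self.proj = nn.ModuleList(nn.Linear(d, d_model) for d in dims)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           batch_first=True)
        self.fuse = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, modalities):
        # modalities: list of (batch, dim_i) tensors, one per modality.
        tokens = torch.stack([p(m) for p, m in zip(self.proj, modalities)],
                             dim=1)              # (batch, n_modalities, d_model)
        fused = self.fuse(tokens).mean(dim=1)    # pool over modality tokens
        return self.head(fused)

model = TokenFusion()
imaging, histo, omics = (torch.randn(8, d) for d in (512, 256, 128))
logits = model([imaging, histo, omics])  # (8, 2)
```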

Citations: 0
Generating rhythm game music with jukebox.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-05 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1296034
Nicholas Yan

Music has always been thought of as a "human" endeavor: when praising a piece of music, we emphasize the composer's creativity and the emotions the music evokes. Because music also heavily relies on patterns and repetition, in the form of recurring melodic themes and chord progressions, artificial intelligence has increasingly been able to replicate music in a human-like fashion. This research investigated the capabilities of Jukebox, an open-source, commercially available neural network, to accurately replicate two genres of music often found in rhythm games: artcore and orchestral. A Google Colab notebook provided the computational resources necessary to sample and extend a total of 16 piano arrangements of both genres. A survey containing selected samples was distributed to a local youth orchestra to gauge people's perceptions of the musicality of AI- and human-generated music. Even though listeners preferred human-generated music, Jukebox's relatively high rating showed that it was somewhat capable of mimicking the styles of both genres. Despite the limitations of Jukebox using only raw audio and of the relatively small sample size, the study shows promise for the future of AI as a collaborative tool in music production.

Citations: 0
The privacy-explainability trade-off: unraveling the impacts of differential privacy and federated learning on attribution methods.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-03 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1236947
Saifullah Saifullah, Dominique Mercier, Adriano Lucieri, Andreas Dengel, Sheraz Ahmed

Since the advent of deep learning (DL), the field has witnessed a continuous stream of innovations. However, the translation of these advancements into practical applications has not kept pace, particularly in safety-critical domains where artificial intelligence (AI) must meet stringent regulatory and ethical standards. This is underscored by the ongoing research in eXplainable AI (XAI) and privacy-preserving machine learning (PPML), which seek to address some limitations associated with these opaque and data-intensive models. Despite brisk research activity in both fields, little attention has been paid to their interaction. This work is the first to thoroughly investigate the effects of privacy-preserving techniques on explanations generated by common XAI methods for DL models. A detailed experimental analysis is conducted to quantify the impact of private training on the explanations provided by DL models, applied to six image datasets and five time series datasets across various domains. The analysis comprises three privacy techniques, nine XAI methods, and seven model architectures. The findings suggest non-negligible changes in explanations through the implementation of privacy measures. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this research marks an initial step toward resolving the challenges that hinder the deployment of AI in safety-critical settings.
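
To illustrate the interaction being measured, the sketch below trains with a DP-SGD-style update (per-sample gradient clipping plus Gaussian noise) and then computes a plain gradient-saliency attribution; comparing saliency maps from private and non-private runs mimics, in miniature, the paper's experimental question. The toy model, clip norm, and noise multiplier are assumptions, not the study's setup.

```python
# Minimal sketch: a DP-SGD-style private update followed by gradient-saliency
# attribution (PyTorch). Model, clip norm, and noise scale are illustrative.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

def dp_sgd_step(x, y, lr=0.1, clip=1.0, noise_mult=1.0):
    """One private update: clip each sample's gradient, add Gaussian noise."""
    grads = [torch.zeros_like(p) for p in model.parameters()]
    for xi, yi in zip(x, y):                       # per-sample gradients
        model.zero_grad()
        loss_fn(model(xi[None]), yi[None]).backward()
        norm = torch.sqrt(sum(p.grad.norm() ** 2 for p in model.parameters()))
        scale = min(1.0, (clip / (norm + 1e-12)).item())
        for g, p in zip(grads, model.parameters()):
            g += p.grad * scale
    with torch.no_grad():
        for g, p in zip(grads, model.parameters()):
            g += torch.randn_like(g) * noise_mult * clip  # calibrated noise
            p -= lr * g / len(x)

def saliency(x):
    """Gradient attribution: |d max-logit / d input| for one sample."""
    x = x.clone().requires_grad_(True)
    model(x[None]).max().backward()
    return x.grad.abs()

x, y = torch.randn(16, 28, 28), torch.randint(0, 10, (16,))
dp_sgd_step(x, y)
attr = saliency(x[0])  # compare against a non-private run to see the shift
```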

Citations: 0
Survival prediction landscape: an in-depth systematic literature review on activities, methods, tools, diseases, and databases.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-03 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1428501
Ahtisham Fazeel Abbasi, Muhammad Nabeel Asim, Sheraz Ahmed, Sebastian Vollmer, Andreas Dengel

Survival prediction integrates patient-specific molecular information and clinical signatures to forecast the anticipated time of an event, such as recurrence, death, or disease progression. Survival prediction proves valuable in guiding treatment decisions, optimizing resource allocation, and tailoring precision-medicine interventions. The wide range of diseases, the existence of various variants within the same disease, and the reliance on available data necessitate disease-specific computational survival predictors. The widespread adoption of artificial intelligence (AI) methods in crafting survival predictors has undoubtedly revolutionized this field. However, the ever-increasing demand for more sophisticated and effective prediction models necessitates continued innovation. To catalyze these advancements, it is crucial to bring the knowledge and insights from existing survival predictors into a centralized platform. The paper at hand thoroughly examines 23 existing review studies and provides a concise overview of their scope and limitations. Focusing on a comprehensive set of the 90 most recent survival predictors across 44 diverse diseases, it examines the diverse types of methods used in the development of disease-specific predictors. This exhaustive analysis encompasses the utilized data modalities, along with a detailed analysis of subsets of clinical features, feature engineering methods, and the specific statistical, machine, or deep learning approaches employed. It also provides insights about survival prediction data sources, open-source predictors, and survival prediction frameworks.
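
For readers new to the area, the snippet below shows the classical statistical baseline that many surveyed predictors extend: a Cox proportional-hazards model relating covariates to time-to-event, fit with the lifelines library. The covariates and data are toy values for illustration only.

```python
# Minimal Cox proportional-hazards example with lifelines; toy data only.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time":   [5, 8, 12, 3, 9, 15, 7, 11],   # months to event or censoring
    "event":  [1, 0, 1, 1, 0, 1, 1, 0],      # 1 = event observed, 0 = censored
    "age":    [62, 55, 70, 48, 66, 59, 73, 51],
    "marker": [1.2, 0.4, 2.1, 0.9, 1.7, 0.3, 2.5, 0.6],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                       # hazard ratios per covariate
risk = cph.predict_partial_hazard(df)     # relative risk scores
```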

Citations: 0
ArabBert-LSTM: improving Arabic sentiment analysis based on transformer model and Long Short-Term Memory.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-02 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1408845
Wael Alosaimi, Hager Saleh, Ali A Hamzah, Nora El-Rashidy, Abdullah Alharb, Ahmed Elaraby, Sherif Mostafa

Sentiment analysis, also referred to as opinion mining, plays a significant role in automating the identification of negative, positive, or neutral sentiments expressed in textual data. The proliferation of social networks, review sites, and blogs has rendered these platforms valuable resources for mining opinions. Sentiment analysis finds applications in various domains and languages, including English and Arabic. However, Arabic presents unique challenges due to its complex morphology, characterized by inflectional and derivational patterns. To effectively analyze sentiment in Arabic text, sentiment analysis techniques must account for this intricacy. This paper proposes a model designed using the transformer model and deep learning (DL) techniques. The word embedding is represented by the Transformer-based Model for Arabic Language Understanding (ArabBert) and then passed to the AraBERT model. The output of AraBERT is subsequently fed into a Long Short-Term Memory (LSTM) model, followed by feedforward neural networks and an output layer. AraBERT is used to capture rich contextual information, and LSTM to enhance sequence modeling and retain long-term dependencies within the text data. We compared the proposed model with machine learning (ML) algorithms and DL algorithms, as well as different vectorization techniques: term frequency-inverse document frequency (TF-IDF), ArabBert, Continuous Bag-of-Words (CBOW), and skip-grams, using four Arabic benchmark datasets. Through extensive experimentation and evaluation on Arabic sentiment analysis datasets, we showcase the effectiveness of our approach. The results underscore significant improvements in sentiment analysis accuracy, highlighting the potential of leveraging transformer models for Arabic sentiment analysis. The outcomes of this research contribute to advancing Arabic sentiment analysis, enabling more accurate and reliable sentiment analysis of Arabic text. The findings reveal that the proposed framework exhibits exceptional performance in sentiment classification, achieving an impressive accuracy rate of over 97%.
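
A minimal sketch of the described pipeline, assuming a Hugging Face AraBERT checkpoint (the exact checkpoint name, layer sizes, and class count are illustrative assumptions): contextual token embeddings from the pretrained model are fed to an LSTM, whose final state passes through a feedforward head.

```python
# Minimal sketch: pretrained Arabic BERT embeddings -> LSTM -> feedforward
# head. Checkpoint name and sizes are assumptions for illustration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

CHECKPOINT = "aubmindlab/bert-base-arabert"  # assumed AraBERT checkpoint

class BertLstmClassifier(nn.Module):
    def __init__(self, n_classes=3, hidden=128):
        super().__init__()
        self.bert = AutoModel.from_pretrained(CHECKPOINT)
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True)
        self.head = nn.Sequential(nn.Linear(hidden, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, input_ids, attention_mask):
        # Token-level contextual embeddings from the pretrained encoder.
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        _, (h_n, _) = self.lstm(hidden)
        return self.head(h_n[-1])   # final LSTM state -> feedforward head

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
batch = tokenizer(["خدمة ممتازة"], return_tensors="pt", padding=True)
model = BertLstmClassifier()
logits = model(batch["input_ids"], batch["attention_mask"])
```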

Citations: 0
Editorial: Artificial intelligence education & governance - human enhancive, culturally sensitive and personally adaptive HAI.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-07-01 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1443386
Rajiv Kashyap, Yana Samuel, Linda Weiser Friedman, Jim Samuel
{"title":"Editorial: Artificial intelligence education & governance -human enhancive, culturally sensitive and personally adaptive HAI.","authors":"Rajiv Kashyap, Yana Samuel, Linda Weiser Friedman, Jim Samuel","doi":"10.3389/frai.2024.1443386","DOIUrl":"10.3389/frai.2024.1443386","url":null,"abstract":"","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1443386"},"PeriodicalIF":3.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11247171/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141621038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
HAWKFOG-an enhanced deep learning framework for the Fog-IoT environment.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-28 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1354742
R Abirami, Poovammal E

Cardiac disease is considered one of the deadliest diseases and a constant driver of global mortality. Since considerable expertise is required for an accurate prediction of heart disease, designing an intelligent predictive system for cardiac diseases remains complex and tricky. Internet of Things-based health regulation systems are a relatively recent technology. In addition, novel Edge and Fog device concepts have been introduced to improve prediction results. However, the main problem with current systems is that they cannot meet the demands of effective diagnosis systems due to their poor prediction capabilities. To overcome this problem, this research proposes a novel framework called HAWKFOGS, which integrates deep learning for practical diagnosis of cardiac problems using edge and fog computing devices. The datasets were gathered from different subjects using IoT devices interfaced with electrocardiography and blood pressure sensors. The data are then classified as normal or abnormal using Logistic Chaos based Harris Hawk Optimized Enhanced Gated Recurrent Neural Networks. The ablation experiments were carried out using IoT nodes interfaced with medical sensors and fog gateways based on embedded Jetson Nano devices. The suggested algorithm's performance was measured, and Model Building Time was computed to validate the suggested model's responsiveness. Compared to the other algorithms, the suggested model yielded the best results in terms of accuracy (99.7%), precision (99.65%), recall (99.7%), specificity (99.7%), and F1-score (99.69%), and used the least Model Building Time (1.16 s) to predict cardiac diseases.
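
The sketch below shows the recurrent classifier at the core of such a pipeline: a gated recurrent network over windows of ECG and blood-pressure readings, predicting normal vs. abnormal. The logistic-chaos Harris Hawk hyperparameter optimization is out of scope here; the fixed sizes below stand in for values such a search would choose.

```python
# Minimal sketch of a GRU classifier over vital-sign windows (PyTorch).
# The Harris Hawk hyperparameter search is omitted; sizes are assumptions.
import torch
import torch.nn as nn

class GruVitalsClassifier(nn.Module):
    def __init__(self, n_features=2, hidden=32):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)  # normal vs. abnormal

    def forward(self, x):
        # x: (batch, time, features) windows of sensor readings
        _, h_n = self.gru(x)
        return self.head(h_n[-1])

model = GruVitalsClassifier()
window = torch.randn(4, 100, 2)          # 4 windows, 100 steps, ECG + BP
probs = model(window).softmax(dim=-1)    # class probabilities per window
```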

Citations: 0
COVLIAS 3.0: cloud-based quantized hybrid UNet3+ deep learning for COVID-19 lesion detection in lung computed tomography.
IF 3 · Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2024-06-28 · eCollection Date: 2024-01-01 · DOI: 10.3389/frai.2024.1304483
Sushant Agarwal, Sanjay Saxena, Alessandro Carriero, Gian Luca Chabert, Gobinath Ravindran, Sudip Paul, John R Laird, Deepak Garg, Mostafa Fatemi, Lopamudra Mohanty, Arun K Dubey, Rajesh Singh, Mostafa M Fouda, Narpinder Singh, Subbaram Naidu, Klaudija Viskovic, Melita Kukuljan, Manudeep K Kalra, Luca Saba, Jasjit S Suri

Background and novelty: When RT-PCR is ineffective for early diagnosis and for understanding COVID-19 severity, Computed Tomography (CT) scans are needed for COVID diagnosis, especially in patients with extensive ground-glass opacities, consolidations, and crazy paving. Radiologists find manual lesion detection in CT very challenging and tedious. Previously, solo deep learning (SDL) models were tried, but they had low- to moderate-level performance. This study presents two new cloud-based quantized deep learning UNet3+ hybrid (HDL) models, which incorporate full-scale skip connections to enhance and improve the detections.

Methodology: Annotations from expert radiologists were used to train one SDL model (UNet3+) and two HDL models, namely VGG-UNet3+ and ResNet-UNet3+. For accuracy, 5-fold cross-validation protocols, training on 3,500 CT scans, and testing on 500 unseen CT scans were adopted in the cloud framework. Two kinds of loss functions were used: Dice Similarity (DS) and binary cross-entropy (BCE). Performance was evaluated using (i) area error, (ii) DS, (iii) Jaccard Index, (iv) Bland-Altman plots, and (v) correlation plots.

Results: Of the two HDL models, ResNet-UNet3+ was superior to UNet3+ by 17 and 10% for the Dice and BCE losses, respectively. The models were further compressed using quantization, showing size reductions of 66.76, 36.64, and 46.23% for UNet3+, VGG-UNet3+, and ResNet-UNet3+, respectively. Stability and reliability were confirmed by statistical tests, including the Mann-Whitney, paired t-test, Wilcoxon, and Friedman tests, all of which had p < 0.001.
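
As an aside on the compression step, PyTorch's post-training dynamic quantization illustrates how int8 weights shrink a model without retraining; the toy network below stands in for the UNet variants, whose exact compression pipeline is not detailed in the abstract.

```python
# Minimal sketch of post-training dynamic quantization (PyTorch). The toy
# model stands in for the UNet variants; this is not the paper's pipeline.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 2))
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)  # int8 weights for Linear layers

x = torch.randn(1, 256)
print(quantized(x).shape)  # same interface, smaller serialized size
```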

Conclusion: Full-scale skip connections of UNet3+ with VGG and ResNet in the HDL framework confirmed the hypothesis, delivering strong results that improved the accuracy of COVID-19 lesion detection.
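
To make the training objectives concrete, here is a minimal sketch of the two loss functions the methodology names, Dice similarity loss and binary cross-entropy, computed over predicted lesion masks; tensor shapes and the smoothing constant are illustrative assumptions.

```python
# Minimal sketch of Dice loss and BCE over predicted segmentation masks
# (PyTorch). Shapes and the smoothing constant are illustrative.
import torch
import torch.nn as nn

def dice_loss(logits, target, eps=1.0):
    """1 - Dice similarity between a predicted mask and the annotation."""
    prob = torch.sigmoid(logits)
    inter = (prob * target).sum(dim=(1, 2, 3))
    denom = prob.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    return (1 - (2 * inter + eps) / (denom + eps)).mean()

bce_loss = nn.BCEWithLogitsLoss()

logits = torch.randn(2, 1, 128, 128)                  # model output, 2 slices
target = (torch.rand(2, 1, 128, 128) > 0.7).float()   # ground-truth masks
print(dice_loss(logits, target).item(), bce_loss(logits, target).item())
```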

Citations: 0