
Frontiers in Artificial Intelligence: Latest Publications

Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-27 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1460065
Luca Mariotti, Veronica Guidetti, Federica Mandreoli, Andrea Belli, Paolo Lombardi

Knowledge Graphs (KGs) have revolutionized knowledge representation, enabling a graph-structured framework where entities and their interrelations are systematically organized. Since their inception, KGs have significantly enhanced various knowledge-aware applications, including recommendation systems and question-answering systems. Sensigrafo, an enterprise KG developed by Expert.AI, exemplifies this advancement by focusing on Natural Language Understanding through a machine-oriented lexicon representation. Despite the progress, maintaining and enriching KGs remains a challenge, often requiring manual efforts. Recent developments in Large Language Models (LLMs) offer promising solutions for KG enrichment (KGE) by leveraging their ability to understand natural language. In this article, we discuss the state-of-the-art LLM-based techniques for KGE and show the challenges associated with automating and deploying these processes in an industrial setup. We then propose our perspective on overcoming problems associated with data quality and scarcity, economic viability, privacy issues, language evolution, and the need to automate the KGE process while maintaining high accuracy.
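A minimal sketch of the kind of LLM-driven enrichment step discussed here: prompting a language model to propose candidate triples from text and keeping only those that connect entities already in the KG. The prompt, the function names, and the `llm_complete` callable are illustrative assumptions, not part of the article or of Sensigrafo.

```python
# Minimal sketch of one LLM-based KG-enrichment step: prompt an LLM to
# propose candidate (subject, relation, object) triples from raw text,
# then keep only triples whose entities already exist in the KG.
# `llm_complete` stands in for any chat/completion API and is an assumption,
# not part of the paper.
from typing import Callable, Iterable

PROMPT = (
    "Extract factual triples from the text below.\n"
    "Return one triple per line as: subject | relation | object\n\n"
    "Text: {text}"
)

def propose_triples(text: str, llm_complete: Callable[[str], str]) -> list[tuple[str, str, str]]:
    """Ask the LLM for candidate triples and parse its line-oriented reply."""
    reply = llm_complete(PROMPT.format(text=text))
    triples = []
    for line in reply.splitlines():
        parts = [p.strip() for p in line.split("|")]
        if len(parts) == 3 and all(parts):
            triples.append(tuple(parts))
    return triples

def filter_known_entities(triples, known_entities: Iterable[str]):
    """Keep only triples whose subject and object are already KG entities,
    leaving genuinely new entities for human review (the manual effort the
    paper aims to reduce, not remove)."""
    known = set(known_entities)
    return [t for t in triples if t[0] in known and t[2] in known]
```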

Citations: 0
Exploring artificial intelligence techniques to research low energy nuclear reactions.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1401782
Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel

The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to help research LENR and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study explores and investigates the effectiveness of modern AI capabilities leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.
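As an illustration of the topic-modeling side of this work, the sketch below runs scikit-learn's LDA on a four-document toy corpus; BERTopic and Top2Vec follow a similar fit-then-inspect workflow but build on document embeddings. The corpus and parameters are invented, not the study's LENR dataset.

```python
# Illustrative sketch of the kind of topic modeling the study applies to a
# LENR corpus, using scikit-learn's LDA on a toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "palladium deuterium loading and excess heat measurement",
    "electrolysis cell calorimetry with palladium cathodes",
    "nickel hydrogen gas loading experiments and transmutation claims",
    "machine learning survey of nuclear reaction literature",
]

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the five most heavily weighted terms per topic.
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {', '.join(top)}")
```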

Citations: 0
Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1423535
Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang

Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.
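The Dice coefficient used to compare the models measures overlap between predicted and ground-truth tumor masks; a minimal NumPy version (not the authors' evaluation code) is:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Example: two 4x4 masks that partially overlap.
a = np.zeros((4, 4)); a[1:3, 1:3] = 1
b = np.zeros((4, 4)); b[2:4, 1:3] = 1
print(round(dice_coefficient(a, b), 3))  # 0.5
```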

Citations: 0
AttentionTTE: a deep learning model for estimated time of arrival.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1258086
Mu Li, Yijun Feng, Xiangdong Wu

Estimating travel time (ETA) for arbitrary paths is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, which fail to effectively model the influence of each road segment on others. To address this issue, we propose an end-to-end model, AttentionTTE. It utilizes a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate our model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared to other methods.
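The sketch below is a toy combination of the two ingredients the abstract names: self-attention across road-segment features and a recurrent layer, feeding a travel-time regression head. It is an assumption-laden simplification, not the published AttentionTTE architecture.

```python
# Toy sketch (not the authors' AttentionTTE): self-attention over per-segment
# features for global spatial correlation, a GRU for temporal dependencies,
# and a regression head for the path's travel time.
import torch
import torch.nn as nn

class ToyTTE(nn.Module):
    def __init__(self, feat_dim: int = 16, hidden: int = 32, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # predicted travel time for the path

    def forward(self, segments: torch.Tensor) -> torch.Tensor:
        # segments: (batch, n_segments, feat_dim), one feature vector per road segment
        attended, _ = self.attn(segments, segments, segments)  # global correlations
        _, h_n = self.rnn(attended)                            # temporal dependencies
        return self.head(h_n[-1]).squeeze(-1)                  # (batch,)

x = torch.randn(2, 10, 16)   # 2 paths, 10 segments each
print(ToyTTE()(x).shape)     # torch.Size([2])
```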

Citations: 0
Implications of causality in artificial intelligence.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-21 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1439702
Luís Cavique

Over the last decade, investment in artificial intelligence (AI) has grown significantly, driven by technology companies and the demand for PhDs in AI. However, new challenges have emerged, such as the 'black box' and bias in AI models. Several approaches have been developed to reduce these problems. Responsible AI focuses on the ethical development of AI systems, considering social impact. Fair AI seeks to identify and correct algorithm biases, promoting equitable decisions. Explainable AI aims to create transparent models that allow users to interpret results. Finally, Causal AI emphasizes identifying cause-and-effect relationships and plays a crucial role in creating more robust and reliable systems, thereby promoting fairness and transparency in AI development. Responsible, Fair, and Explainable AI has several weaknesses. However, Causal AI is the approach with the slightest criticism, offering reassurance about the ethical development of AI.

Citations: 0
Analyzing classification and feature selection strategies for diabetes prediction across diverse diabetes datasets.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-21 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1421751
Jayakumar Kaliappan, I J Saravana Kumar, S Sundaravelan, T Anesh, R R Rithik, Yashbir Singh, Diana V Vera-Garcia, Yassine Himeur, Wathiq Mansoor, Shadi Atalla, Kathiravan Srinivasan

Introduction: In the evolving landscape of healthcare and medicine, the merging of extensive medical datasets with the powerful capabilities of machine learning (ML) models presents a significant opportunity for transforming diagnostics, treatments, and patient care.

Methods: This research paper delves into the realm of data-driven healthcare, placing a special focus on identifying the most effective ML models for diabetes prediction and uncovering the critical features that aid in this prediction. The prediction performance is analyzed using a variety of ML models, such as Random Forest (RF), XGBoost (XGB), Linear Regression (LR), Gradient Boosting (GB), and Support Vector Machine (SVM), across numerous medical datasets. The study of feature importance is conducted using Filter-based and Wrapper-based techniques as well as Explainable Artificial Intelligence (Explainable AI). By utilizing Explainable AI techniques, specifically Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), the decision-making process of the models is made transparent, thereby bolstering trust in AI-driven decisions.

Results: Features identified by RF in the Wrapper-based techniques and by Chi-square in the Filter-based techniques have been shown to enhance prediction performance. Notable precision and recall values, reaching up to 0.9, are achieved in predicting diabetes.

Discussion: Both approaches are found to assign considerable importance to features like age, family history of diabetes, polyuria, polydipsia, and high blood pressure, which are strongly associated with diabetes. In this age of data-driven healthcare, the research presented here aspires to substantially improve healthcare outcomes.
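As a hedged illustration of the Filter-based route described in the Methods, the following sketch chains chi-square feature selection into a Random Forest classifier and reports precision and recall; the synthetic data stands in for the study's diabetes datasets and the parameters are arbitrary.

```python
# Sketch of a Filter-based pipeline: chi-square feature selection feeding a
# Random Forest classifier, evaluated with precision and recall.
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

X, y = make_classification(n_samples=500, n_features=20, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

pipe = Pipeline([
    ("scale", MinMaxScaler()),           # chi2 requires non-negative features
    ("select", SelectKBest(chi2, k=8)),  # Filter-based feature selection
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
])
pipe.fit(X_tr, y_tr)
pred = pipe.predict(X_te)
print(f"precision={precision_score(y_te, pred):.2f} recall={recall_score(y_te, pred):.2f}")
```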

Citations: 0
Refinement of machine learning arterial waveform models for predicting blood loss in canines.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-21 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1408029
Jose M Gonzalez, Thomas H Edwards, Guillaume L Hoareau, Eric J Snider

Introduction: Hemorrhage remains a leading cause of death in civilian and military trauma. Hemorrhages also extend to military working dogs, who can experience injuries similar to those of the humans they work alongside. Unfortunately, current physiological monitoring is often inadequate for early detection of hemorrhage. Here, we evaluate if features extracted from the arterial waveform can allow for early hemorrhage prediction and improved intervention in canines.

Methods: In this effort, we extracted more than 1,900 features from an arterial waveform in canine hemorrhage datasets prior to hemorrhage, during hemorrhage, and during a shock hold period. Different features were used as input to decision tree machine learning (ML) model architectures to track three model predictors: total blood loss volume, estimated percent blood loss, and area under the time versus hemorrhaged blood volume curve.

Results: ML models were successfully developed for total and estimated percent blood loss, with total blood loss having the higher correlation coefficient. The area predictor could not be predicted directly by the decision tree ML models but could be calculated indirectly from the ML blood-loss prediction models. Overall, the area under the hemorrhage curve had the highest sensitivity for detecting hemorrhage, at approximately 4 min after hemorrhage onset, compared to more than 45 min before detection based on mean arterial pressure.

Conclusion: ML methods successfully tracked hemorrhage and provided earlier prediction in canines, potentially improving hemorrhage detection and objectifying triage for veterinary medicine. Further, its use can potentially be extended to human use with proper training datasets.
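The sketch below illustrates the general feature-to-predictor idea on synthetic waveforms: a handful of hand-made window features feeding a decision-tree regressor for blood-loss volume. The features, signal model, and values are invented; the study's roughly 1,900 features and canine data are not reproduced here.

```python
# Illustrative only: toy waveform features per time window and a decision-tree
# regressor predicting blood-loss volume.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

def window_features(wave: np.ndarray) -> list[float]:
    """Toy arterial-waveform features: mean, spread, and pulse amplitude."""
    return [wave.mean(), wave.std(), wave.max() - wave.min()]

# Synthetic windows whose pulse amplitude shrinks as simulated blood loss grows.
blood_loss = np.linspace(0, 500, 60)  # mL
windows = [80 - 0.05 * v + (12 - 0.015 * v) * np.sin(np.linspace(0, 20, 200))
           + rng.normal(0, 1, 200) for v in blood_loss]
X = np.array([window_features(w) for w in windows])

model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, blood_loss)
print(model.predict(X[:3]).round(1))  # predictions for the first windows
```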

Citations: 0
Diagnostic performance of AI-based models versus physicians among patients with hepatocellular carcinoma: a systematic review and meta-analysis.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-19 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1398205
Feras Al-Obeidat, Wael Hafez, Muneir Gador, Nesma Ahmed, Marwa Muhammed Abdeljawad, Antesh Yadav, Asrar Rashed

Background: Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance is still uncertain.

Aim: This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma.

Methods: We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. The R package was used to synthesize the results. The outcomes of various studies were aggregated using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I²) and chi-square statistics.

Results: We included seven studies in our meta-analysis. Both physicians and AI-based models scored an average sensitivity of 93%. Great variation in sensitivity, accuracy, and specificity was observed depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, models based on convolutional neural networks achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared to physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, which represented true differences among the studies.

Conclusion: Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with artificial intelligence (AI) had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.
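The pooling arithmetic behind such a meta-analysis can be shown in a few lines: inverse-variance weighting for a fixed-effect estimate, Cochran's Q, and I². The per-study sensitivities and variances below are hypothetical; the study itself performed the synthesis in R.

```python
# Fixed-effect inverse-variance pooling with Q and I² heterogeneity,
# on made-up per-study sensitivities.
import numpy as np

est = np.array([0.93, 0.96, 0.90, 0.88])      # hypothetical per-study sensitivities
var = np.array([0.001, 0.002, 0.003, 0.002])  # hypothetical variances

w = 1.0 / var                        # inverse-variance weights
pooled = (w * est).sum() / w.sum()   # fixed-effect pooled estimate
Q = (w * (est - pooled) ** 2).sum()  # Cochran's Q
df = len(est) - 1
I2 = max(0.0, (Q - df) / Q) * 100    # heterogeneity as a percentage

print(f"pooled sensitivity={pooled:.3f}, Q={Q:.2f}, I²={I2:.1f}%")
```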

Citations: 0
Person-based design and evaluation of MIA, a digital medical interview assistant for radiology.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-16 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1431156
Kerstin Denecke, Daniel Reichenpfader, Dominic Willi, Karin Kennel, Harald Bonel, Knud Nairz, Nikola Cihoric, Damien Papaux, Hendrik von Tengg-Kobligk

Introduction: Radiologists frequently lack direct patient contact due to time constraints. Digital medical interview assistants aim to facilitate the collection of health information. In this paper, we propose leveraging conversational agents to realize a medical interview assistant to facilitate medical history taking, while at the same time offering patients the opportunity to ask questions on the examination.

Methods: MIA, the digital medical interview assistant, was developed using a person-based design approach, involving patient opinions and expert knowledge during the design and development with a specific use case in collecting information before a mammography examination. MIA consists of two modules: the interview module and the question answering module (Q&A). To ensure interoperability with clinical information systems, we use HL7 FHIR to store and exchange the results collected by MIA during the patient interaction. The system was evaluated according to an existing evaluation framework that covers a broad range of aspects related to the technical quality of a conversational agent including usability, but also accessibility and security.

Results: Thirty-six patients recruited from two Swiss hospitals (Lindenhof group and Inselspital, Bern) and two patient organizations conducted the usability test. MIA was favorably received by the participants, who particularly noted the clarity of communication. However, there is room for improvement in the perceived quality of the conversation, the information provided, and the protection of privacy. The Q&A module achieved a precision of 0.51, a recall of 0.87 and an F-Score of 0.64 based on 114 questions asked by the participants. Security and accessibility also require improvements.

Conclusion: The applied person-based process described in this paper can provide best practices for future development of medical interview assistants. The application of a standardized evaluation framework helped in saving time and ensures comparability of results.
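To illustrate the interoperability choice, the sketch below assembles a minimal HL7 FHIR QuestionnaireResponse resource of the kind MIA's interview results could be exchanged as; the resource fields follow standard FHIR R4, but the questions and answers are invented examples rather than MIA's actual content.

```python
# Minimal sketch: packaging interview answers as an HL7 FHIR
# QuestionnaireResponse. Structure is standard FHIR R4; the questions,
# answers, and patient reference are hypothetical.
import json

questionnaire_response = {
    "resourceType": "QuestionnaireResponse",
    "status": "completed",
    "subject": {"reference": "Patient/example"},  # placeholder patient reference
    "item": [
        {
            "linkId": "prior-mammography",
            "text": "Have you had a mammography before?",
            "answer": [{"valueBoolean": True}],
        },
        {
            "linkId": "breast-symptoms",
            "text": "Current breast symptoms, if any",
            "answer": [{"valueString": "none reported"}],
        },
    ],
}

# JSON like this can be exchanged with a FHIR-capable clinical information system.
print(json.dumps(questionnaire_response, indent=2))
```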

Citations: 0
Deep learning models for the early detection of maize streak virus and maize lethal necrosis diseases in Tanzania.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-16 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1384709
Flavia Mayo, Ciira Maina, Mvurya Mgala, Neema Mduma

Agriculture is considered the backbone of Tanzania's economy, with more than 60% of the residents depending on it for survival. Maize is the country's dominant and primary food crop, accounting for 45% of all farmland production. However, its productivity is challenged by the limitation to detect maize diseases early enough. Maize streak virus (MSV) and maize lethal necrosis virus (MLN) are common diseases often detected too late by farmers. This has led to the need to develop a method for the early detection of these diseases so that they can be treated on time. This study investigated the potential of developing deep-learning models for the early detection of maize diseases in Tanzania. The regions where data was collected are Arusha, Kilimanjaro, and Manyara. Data was collected through observation by a plant. The study proposed convolutional neural network (CNN) and vision transformer (ViT) models. Four classes of imagery data were used to train both models: MLN, Healthy, MSV, and WRONG. The results revealed that the ViT model surpassed the CNN model, with 93.1 and 90.96% accuracies, respectively. Further studies should focus on mobile app development and deployment of the model with greater precision for early detection of the diseases mentioned above in real life.
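For orientation, the toy model below shows what a small four-class CNN baseline for the MLN, Healthy, MSV, and WRONG labels could look like in PyTorch; it is far shallower than the models compared in the study, and the ViT variant would swap in a transformer backbone.

```python
# Toy 4-class leaf-image classifier sketching a CNN baseline (not the study's model).
import torch
import torch.nn as nn

class TinyMaizeCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = TinyMaizeCNN()(torch.randn(2, 3, 224, 224))  # 2 leaf images
print(logits.shape)  # torch.Size([2, 4])
```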

Citations: 0