Pub Date: 2024-08-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1460065
Luca Mariotti, Veronica Guidetti, Federica Mandreoli, Andrea Belli, Paolo Lombardi
Knowledge Graphs (KGs) have revolutionized knowledge representation, enabling a graph-structured framework where entities and their interrelations are systematically organized. Since their inception, KGs have significantly enhanced various knowledge-aware applications, including recommendation systems and question-answering systems. Sensigrafo, an enterprise KG developed by Expert.AI, exemplifies this advancement by focusing on Natural Language Understanding through a machine-oriented lexicon representation. Despite the progress, maintaining and enriching KGs remains a challenge, often requiring manual efforts. Recent developments in Large Language Models (LLMs) offer promising solutions for KG enrichment (KGE) by leveraging their ability to understand natural language. In this article, we discuss the state-of-the-art LLM-based techniques for KGE and show the challenges associated with automating and deploying these processes in an industrial setup. We then propose our perspective on overcoming problems associated with data quality and scarcity, economic viability, privacy issues, language evolution, and the need to automate the KGE process while maintaining high accuracy.
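The triple-based representation at the heart of such KGs, and the enrichment step the article discusses, can be sketched in a few lines. The sketch below is illustrative only (it is not Sensigrafo or Expert.AI code), and the entities, relations, and the `enrich` helper are invented for the example:

```python
# Minimal sketch of a knowledge graph as a set of (subject, relation, object)
# triples, with a naive enrichment step that adds an LLM-proposed triple only
# if it is not already present. All names here are invented for illustration.
KG = {
    ("python", "is_a", "programming_language"),
    ("programming_language", "is_a", "formal_language"),
}

def enrich(kg, candidate_triples):
    """Add candidate triples (e.g., proposed by an LLM) that are new to the KG."""
    added = []
    for triple in candidate_triples:
        if triple not in kg:
            kg.add(triple)
            added.append(triple)
    return added

new = enrich(KG, [("python", "created_by", "guido_van_rossum"),
                  ("python", "is_a", "programming_language")])
print(new)  # only the triple not already in the KG is reported as added
```

In practice the hard part, as the article argues, is validating such candidate triples before committing them, not the insertion itself.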
"Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding." Frontiers in Artificial Intelligence 7:1460065. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385612/pdf/
The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to help research LENR and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study explores and investigates the effectiveness of modern AI capabilities leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.
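Of the topic-modeling techniques named above, LDA is the most classical. A minimal, hedged sketch of fitting it with scikit-learn on a toy corpus might look like the following; the documents and parameters are invented, and this is not the study's pipeline:

```python
# Illustrative LDA fit on a tiny made-up corpus, standing in for LENR abstracts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "palladium deuterium loading excess heat",
    "excess heat calorimetry palladium cathode",
    "topic modeling embeddings research corpus",
    "document embeddings clustering research corpus",
]
X = CountVectorizer().fit_transform(docs)  # document-term count matrix
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
# Each row of lda.components_ is an (unnormalized) topic-word distribution;
# transform() gives per-document topic proportions.
doc_topics = lda.transform(X)
print(doc_topics.shape)  # (4, 2)
```

BERTopic and Top2Vec follow the same corpus-in, topics-out shape but cluster document embeddings instead of fitting a generative word model.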
"Exploring artificial intelligence techniques to research low energy nuclear reactions." Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel. Frontiers in Artificial Intelligence 7:1401782. Pub Date: 2024-08-23 | DOI: 10.3389/frai.2024.1401782. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377257/pdf/
Pub Date: 2024-08-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1423535
Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang
Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.
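The Dice coefficient reported above measures overlap between predicted and ground-truth masks. A small illustrative implementation (not the authors' evaluation code) is:

```python
# Dice = 2|A∩B| / (|A| + |B|) for binary segmentation masks; eps guards
# against division by zero when both masks are empty.
import numpy as np

def dice(pred, target, eps=1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])  # toy predicted mask
b = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground-truth mask
print(round(dice(a, b), 3))  # 2*2/(3+3) ≈ 0.667
```

A mean Dice of 0.56 across datasets, as reported, thus corresponds to roughly this level of voxel overlap on average.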
"Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance." Frontiers in Artificial Intelligence 7:1423535. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377414/pdf/
Pub Date: 2024-08-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1258086
Mu Li, Yijun Feng, Xiangdong Wu
Estimating travel time (ETA) for arbitrary paths is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, which fail to effectively model the influence of each road segment on others. To address this issue, we propose an end-to-end model, AttentionTTE. It utilizes a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate our model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared to other methods.
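The self-attention mechanism AttentionTTE uses to capture global spatial correlations can be sketched with plain NumPy. This is a generic scaled dot-product attention toy, not the paper's implementation, and the feature matrix is random:

```python
# Scaled dot-product self-attention: every road segment attends to every
# other, producing a globally contextualized feature per segment.
import numpy as np

def self_attention(X):
    """X: (seq_len, d) per-segment features; returns (seq_len, d)."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)  # pairwise similarity between segments
    # numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ X  # weighted mix of all segments' features

X = np.random.default_rng(0).normal(size=(5, 4))  # 5 segments, 4 features
out = self_attention(X)
print(out.shape)  # (5, 4)
```

A recurrent network, by contrast, would process the same five segments strictly in path order, which is why the paper pairs the two mechanisms.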
"AttentionTTE: a deep learning model for estimated time of arrival." Frontiers in Artificial Intelligence 7:1258086. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378341/pdf/
Pub Date: 2024-08-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1439702
Luís Cavique
Over the last decade, investment in artificial intelligence (AI) has grown significantly, driven by technology companies and the demand for PhDs in AI. However, new challenges have emerged, such as the 'black box' problem and bias in AI models. Several approaches have been developed to mitigate these problems. Responsible AI focuses on the ethical development of AI systems, considering social impact. Fair AI seeks to identify and correct algorithmic biases, promoting equitable decisions. Explainable AI aims to create transparent models that allow users to interpret results. Finally, Causal AI emphasizes identifying cause-and-effect relationships and plays a crucial role in creating more robust and reliable systems, thereby promoting fairness and transparency in AI development. Responsible, Fair, and Explainable AI each have notable weaknesses; Causal AI is the approach that has attracted the least criticism, offering reassurance about the ethical development of AI.
"Implications of causality in artificial intelligence." Frontiers in Artificial Intelligence 7:1439702. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371780/pdf/
Pub Date: 2024-08-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1421751
Jayakumar Kaliappan, I J Saravana Kumar, S Sundaravelan, T Anesh, R R Rithik, Yashbir Singh, Diana V Vera-Garcia, Yassine Himeur, Wathiq Mansoor, Shadi Atalla, Kathiravan Srinivasan
Introduction: In the evolving landscape of healthcare and medicine, the merging of extensive medical datasets with the powerful capabilities of machine learning (ML) models presents a significant opportunity for transforming diagnostics, treatments, and patient care.
Methods: This research paper delves into the realm of data-driven healthcare, placing a special focus on identifying the most effective ML models for diabetes prediction and uncovering the critical features that aid in this prediction. The prediction performance is analyzed using a variety of ML models, such as Random Forest (RF), XG Boost (XGB), Linear Regression (LR), Gradient Boosting (GB), and Support Vector Machine (SVM), across numerous medical datasets. The study of feature importance is conducted using methods including Filter-based, Wrapper-based techniques, and Explainable Artificial Intelligence (Explainable AI). By utilizing Explainable AI techniques, specifically Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), the decision-making process of the models is ensured to be transparent, thereby bolstering trust in AI-driven decisions.
Results: Features identified by RF in Wrapper-based techniques and the Chi-square in Filter-based techniques have been shown to enhance prediction performance. Notable precision and recall values, reaching up to 0.9, are achieved in predicting diabetes.
Discussion: Both approaches are found to assign considerable importance to features like age, family history of diabetes, polyuria, polydipsia, and high blood pressure, which are strongly associated with diabetes. In this age of data-driven healthcare, the research presented here aspires to substantially improve healthcare outcomes.
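The Filter-based chi-square selection followed by a Random Forest described above can be sketched as follows. The data are synthetic and the parameters (`k=5`, tree count, etc.) are illustrative defaults, not the paper's settings:

```python
# Chi-square filter feature selection + Random Forest on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
X = MinMaxScaler().fit_transform(X)            # chi2 requires non-negative inputs
X_sel = SelectKBest(chi2, k=5).fit_transform(X, y)  # keep 5 highest-scoring features
clf = RandomForestClassifier(random_state=0).fit(X_sel, y)
acc = clf.score(X_sel, y)                      # training accuracy, for illustration only
print(X_sel.shape, round(acc, 2))
```

A Wrapper-based alternative would instead search feature subsets by repeatedly refitting the model (e.g., recursive feature elimination), which is costlier but model-aware.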
"Analyzing classification and feature selection strategies for diabetes prediction across diverse diabetes datasets." Frontiers in Artificial Intelligence 7:1421751. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371799/pdf/
Pub Date: 2024-08-21 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1408029
Jose M Gonzalez, Thomas H Edwards, Guillaume L Hoareau, Eric J Snider
Introduction: Hemorrhage remains a leading cause of death in civilian and military trauma. Hemorrhages also extend to military working dogs, who can experience injuries similar to those of the humans they work alongside. Unfortunately, current physiological monitoring is often inadequate for early detection of hemorrhage. Here, we evaluate if features extracted from the arterial waveform can allow for early hemorrhage prediction and improved intervention in canines.
Methods: In this effort, we extracted more than 1,900 features from an arterial waveform in canine hemorrhage datasets prior to hemorrhage, during hemorrhage, and during a shock hold period. Different features were used as input to decision tree machine learning (ML) model architectures to track three model predictors: total blood loss volume, estimated percent blood loss, and area under the time versus hemorrhaged blood volume curve.
Results: ML models were successfully developed for total and estimated percent blood loss, with the total blood loss having a higher correlation coefficient. The area predictors were unsuccessful at being directly predicted by decision tree ML models but could be calculated indirectly from the ML prediction models for blood loss. Overall, the area under the hemorrhage curve had the highest sensitivity for detecting hemorrhage at approximately 4 min after hemorrhage onset, compared to more than 45 min before detection based on mean arterial pressure.
Conclusion: ML methods successfully tracked hemorrhage and provided earlier prediction in canines, potentially improving hemorrhage detection and objectifying triage for veterinary medicine. Further, its use can potentially be extended to human use with proper training datasets.
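The decision-tree regression setup can be sketched on synthetic stand-in data (not the canine dataset), scoring by correlation coefficient as the authors do; the feature count, depth, and target relationship are all invented:

```python
# Decision-tree regressor mapping stand-in waveform features to a stand-in
# blood-loss target, scored by the correlation of predictions with truth.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))                        # stand-in waveform features
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=300)   # stand-in blood-loss volume
model = DecisionTreeRegressor(max_depth=4, random_state=0).fit(X, y)
r = np.corrcoef(y, model.predict(X))[0, 1]            # correlation coefficient
print(round(r, 2))
```

In the paper this kind of model is fit per predictor; the "area under the curve" predictors were then derived from the blood-loss predictions rather than modeled directly.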
"Refinement of machine learning arterial waveform models for predicting blood loss in canines." Frontiers in Artificial Intelligence 7:1408029. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11371769/pdf/
Background: Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance is still uncertain.
Aim: This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma.
Methods: We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. The R package was used to synthesize the results. The outcomes of various studies were aggregated using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I2) and chi-square statistics.
Results: We included seven studies in our meta-analysis. Both physicians and AI-based models scored an average sensitivity of 93%. Great variation in sensitivity, accuracy, and specificity was observed depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, models based on convolutional neural networks achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared to physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, reflecting true differences among them.
Conclusion: Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with artificial intelligence (AI) had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.
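The I² statistic used above to quantify heterogeneity is derived from Cochran's Q under a fixed-effect pooling. The sketch below uses invented effect sizes and variances, not the review's data:

```python
# I² = max(0, (Q - df) / Q): the percent of between-study variability
# attributable to heterogeneity rather than chance.
import numpy as np

def i_squared(effects, variances):
    effects = np.asarray(effects, dtype=float)
    w = 1.0 / np.asarray(variances, dtype=float)   # inverse-variance weights
    pooled = np.sum(w * effects) / np.sum(w)       # fixed-effect pooled estimate
    Q = np.sum(w * (effects - pooled) ** 2)        # Cochran's Q
    df = len(effects) - 1
    return 100.0 * max(0.0, (Q - df) / Q) if Q > 0 else 0.0

# Four hypothetical sensitivities with equal variances; the 0.70 outlier
# drives the heterogeneity.
print(round(i_squared([0.90, 0.95, 0.93, 0.70], [0.01] * 4), 1))  # → 24.6
```

Values above roughly 50% are conventionally read as substantial heterogeneity, which is why the review pairs I² with leave-one-out sensitivity analysis.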
{"title":"Diagnostic performance of AI-based models versus physicians among patients with hepatocellular carcinoma: a systematic review and meta-analysis.","authors":"Feras Al-Obeidat, Wael Hafez, Muneir Gador, Nesma Ahmed, Marwa Muhammed Abdeljawad, Antesh Yadav, Asrar Rashed","doi":"10.3389/frai.2024.1398205","DOIUrl":"10.3389/frai.2024.1398205","url":null,"abstract":"<p><strong>Background: </strong>Hepatocellular carcinoma (HCC) is a common primary liver cancer that requires early diagnosis due to its poor prognosis. Recent advances in artificial intelligence (AI) have facilitated hepatocellular carcinoma detection using multiple AI models; however, their performance remains uncertain.</p><p><strong>Aim: </strong>This meta-analysis aimed to compare the diagnostic performance of different AI models with that of clinicians in the detection of hepatocellular carcinoma.</p><p><strong>Methods: </strong>We searched the PubMed, Scopus, Cochrane Library, and Web of Science databases for eligible studies. The R package was used to synthesize the results. The outcomes of the included studies were pooled using fixed-effect and random-effects models. Statistical heterogeneity was evaluated using I-squared (I<sup>2</sup>) and chi-square statistics.</p><p><strong>Results: </strong>We included seven studies in the meta-analysis. Both physicians and AI-based models achieved an average sensitivity of 93%. Sensitivity, accuracy, and specificity varied considerably depending on the model and diagnostic technique used. The region-based convolutional neural network (RCNN) model showed high sensitivity (96%). Physicians had the highest specificity in diagnosing hepatocellular carcinoma (100%); furthermore, CNN-based models also achieved high sensitivity. Models based on AI-assisted contrast-enhanced ultrasound (CEUS) showed poor accuracy (69.9%) compared with physicians and other models. The leave-one-out sensitivity analysis revealed high heterogeneity among studies, which represented true differences among the studies.</p><p><strong>Conclusion: </strong>Models based on Faster R-CNN excel in image classification and data extraction, while both CNN-based models and models combining contrast-enhanced ultrasound (CEUS) with artificial intelligence (AI) had good sensitivity. Although AI models outperform physicians in diagnosing HCC, they should be utilized as supportive tools to help make more accurate and timely decisions.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1398205"},"PeriodicalIF":3.0,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11368160/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142120780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
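The inverse-variance fixed-effect pooling and the I² heterogeneity assessment described in the methods can be sketched as follows. The per-study sensitivities and variances below are hypothetical illustrations, not the data of the seven included studies.

```python
# Sketch of fixed-effect pooling and I^2 heterogeneity assessment.
# The per-study sensitivities and variances are hypothetical, not the
# seven included studies' data.

def pool_fixed(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, weights

def heterogeneity(effects, variances):
    """Cochran's Q and the I^2 statistic (share of variability beyond chance)."""
    pooled, weights = pool_fixed(effects, variances)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2

sensitivities = [0.93, 0.99, 0.85, 0.96]   # hypothetical per-study sensitivities
variances = [0.001, 0.002, 0.0015, 0.001]  # hypothetical within-study variances

pooled, _ = pool_fixed(sensitivities, variances)
q, i2 = heterogeneity(sensitivities, variances)
# A Q well above its degrees of freedom (and I^2 above roughly 50%) signals
# substantial heterogeneity, motivating the random-effects model the authors
# also applied.
```

When I² is high, the random-effects pooled estimate simply widens each study's variance by an estimated between-study component before reweighting.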
Pub Date : 2024-08-16eCollection Date: 2024-01-01DOI: 10.3389/frai.2024.1431156
Kerstin Denecke, Daniel Reichenpfader, Dominic Willi, Karin Kennel, Harald Bonel, Knud Nairz, Nikola Cihoric, Damien Papaux, Hendrik von Tengg-Kobligk
Introduction: Radiologists frequently lack direct patient contact due to time constraints. Digital medical interview assistants aim to facilitate the collection of health information. In this paper, we propose leveraging conversational agents to realize a medical interview assistant that facilitates medical history taking while offering patients the opportunity to ask questions about the examination.
Methods: MIA, the digital medical interview assistant, was developed using a person-based design approach, incorporating patient opinions and expert knowledge during design and development, with a specific use case of collecting information before a mammography examination. MIA consists of two modules: the interview module and the question answering (Q&A) module. To ensure interoperability with clinical information systems, we use HL7 FHIR to store and exchange the results collected by MIA during the patient interaction. The system was evaluated according to an existing evaluation framework that covers a broad range of aspects related to the technical quality of a conversational agent, including usability, accessibility, and security.
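A minimal sketch of how one interview answer might be stored as an HL7 FHIR R4 QuestionnaireResponse resource before exchange with a clinical information system. The `linkId`, question text, and answer value are hypothetical; the paper does not publish MIA's actual resource profiles.

```python
import json

# Hypothetical sketch of a FHIR R4 QuestionnaireResponse as MIA might store
# one interview answer; linkId, text, and answer value are invented here.
response = {
    "resourceType": "QuestionnaireResponse",
    "status": "completed",
    "item": [
        {
            "linkId": "prior-mammography",  # hypothetical question identifier
            "text": "Have you had a mammography examination before?",
            "answer": [{"valueBoolean": True}],
        }
    ],
}

# Serialize for exchange with a FHIR-capable clinical information system.
payload = json.dumps(response)
```

Storing answers as QuestionnaireResponse items rather than free text is what makes the interview results machine-readable by downstream systems.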
Results: Thirty-six patients recruited from two Swiss hospitals (Lindenhof group and Inselspital, Bern) and two patient organizations completed the usability test. MIA was favorably received by the participants, who particularly noted the clarity of communication. However, there is room for improvement in the perceived quality of the conversation, the information provided, and the protection of privacy. The Q&A module achieved a precision of 0.51, a recall of 0.87, and an F-score of 0.64 on the 114 questions asked by the participants. Security and accessibility also require improvement.
Conclusion: The person-based process described in this paper can provide best practices for the future development of medical interview assistants. Applying a standardized evaluation framework helped save time and ensured the comparability of results.
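The reported Q&A metrics are internally consistent: the F-score is the harmonic mean of precision and recall, which can be checked directly.

```python
def f_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# Values reported for MIA's Q&A module over the 114 participant questions.
f1 = f_score(precision=0.51, recall=0.87)
print(round(f1, 2))  # → 0.64
```

The gap between recall (0.87) and precision (0.51) suggests the module rarely misses an answerable question but often returns imprecise answers.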
{"title":"Person-based design and evaluation of MIA, a digital medical interview assistant for radiology.","authors":"Kerstin Denecke, Daniel Reichenpfader, Dominic Willi, Karin Kennel, Harald Bonel, Knud Nairz, Nikola Cihoric, Damien Papaux, Hendrik von Tengg-Kobligk","doi":"10.3389/frai.2024.1431156","DOIUrl":"10.3389/frai.2024.1431156","url":null,"abstract":"<p><strong>Introduction: </strong>Radiologists frequently lack direct patient contact due to time constraints. Digital medical interview assistants aim to facilitate the collection of health information. In this paper, we propose leveraging conversational agents to realize a medical interview assistant that facilitates medical history taking while offering patients the opportunity to ask questions about the examination.</p><p><strong>Methods: </strong>MIA, the digital medical interview assistant, was developed using a person-based design approach, incorporating patient opinions and expert knowledge during design and development, with a specific use case of collecting information before a mammography examination. MIA consists of two modules: the interview module and the question answering (Q&A) module. To ensure interoperability with clinical information systems, we use HL7 FHIR to store and exchange the results collected by MIA during the patient interaction. The system was evaluated according to an existing evaluation framework that covers a broad range of aspects related to the technical quality of a conversational agent, including usability, accessibility, and security.</p><p><strong>Results: </strong>Thirty-six patients recruited from two Swiss hospitals (Lindenhof group and Inselspital, Bern) and two patient organizations completed the usability test. MIA was favorably received by the participants, who particularly noted the clarity of communication. 
However, there is room for improvement in the perceived quality of the conversation, the information provided, and the protection of privacy. The Q&A module achieved a precision of 0.51, a recall of 0.87, and an F-score of 0.64 on the 114 questions asked by the participants. Security and accessibility also require improvement.</p><p><strong>Conclusion: </strong>The person-based process described in this paper can provide best practices for the future development of medical interview assistants. Applying a standardized evaluation framework helped save time and ensured the comparability of results.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1431156"},"PeriodicalIF":3.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11363708/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agriculture is considered the backbone of Tanzania's economy, with more than 60% of residents depending on it for survival. Maize is the country's dominant and primary food crop, accounting for 45% of all farmland production. However, its productivity is challenged by the limited ability to detect maize diseases early enough. Maize streak virus (MSV) and maize lethal necrosis virus (MLN) are common diseases often detected too late by farmers. This has led to the need for a method for the early detection of these diseases so that they can be treated on time. This study investigated the potential of developing deep-learning models for the early detection of maize diseases in Tanzania. Data were collected through direct observation of plants in the Arusha, Kilimanjaro, and Manyara regions. The study proposed convolutional neural network (CNN) and vision transformer (ViT) models. Four classes of imagery data were used to train both models: MLN, Healthy, MSV, and WRONG. The results revealed that the ViT model surpassed the CNN model, with accuracies of 93.1% and 90.96%, respectively. Further studies should focus on developing a mobile application and deploying the model with greater precision for real-world early detection of these diseases.
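The four-class supervised setup the study uses can be illustrated with a toy classifier. The nearest-centroid model and synthetic feature vectors below are stand-ins to show the train/evaluate structure only; they are not the CNN or ViT architectures the study actually trained on maize imagery.

```python
import random

# The study's four image classes; the nearest-centroid classifier below is a
# toy stand-in for the CNN/ViT models, illustrating the 4-class setup only.
CLASSES = ["MLN", "Healthy", "MSV", "WRONG"]

def make_samples(n_per_class, dim=8, spread=0.3, seed=0):
    """Synthetic feature vectors: one well-separated cluster per class."""
    rng = random.Random(seed)
    centers = {c: [i * 2.0] * dim for i, c in enumerate(CLASSES)}
    data = []
    for c in CLASSES:
        for _ in range(n_per_class):
            data.append((c, [v + rng.gauss(0, spread) for v in centers[c]]))
    return data

def fit_centroids(data):
    """Mean feature vector per class ('training')."""
    sums, counts = {}, {}
    for label, x in data:
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(x, centroids):
    """Assign the class whose centroid is closest in squared distance."""
    def sqdist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: sqdist(x, centroids[c]))

train = make_samples(30, seed=0)
held_out = make_samples(10, seed=1)
centroids = fit_centroids(train)
accuracy = sum(predict(x, centroids) == y for y, x in held_out) / len(held_out)
```

Swapping the centroid model for a CNN or ViT changes only the fit/predict steps; the class labels, held-out split, and accuracy comparison are the same skeleton the study reports results against.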
{"title":"Deep learning models for the early detection of maize streak virus and maize lethal necrosis diseases in Tanzania.","authors":"Flavia Mayo, Ciira Maina, Mvurya Mgala, Neema Mduma","doi":"10.3389/frai.2024.1384709","DOIUrl":"10.3389/frai.2024.1384709","url":null,"abstract":"<p><p>Agriculture is considered the backbone of Tanzania's economy, with more than 60% of residents depending on it for survival. Maize is the country's dominant and primary food crop, accounting for 45% of all farmland production. However, its productivity is challenged by the limited ability to detect maize diseases early enough. Maize streak virus (MSV) and maize lethal necrosis virus (MLN) are common diseases often detected too late by farmers. This has led to the need for a method for the early detection of these diseases so that they can be treated on time. This study investigated the potential of developing deep-learning models for the early detection of maize diseases in Tanzania. Data were collected through direct observation of plants in the Arusha, Kilimanjaro, and Manyara regions. The study proposed convolutional neural network (CNN) and vision transformer (ViT) models. Four classes of imagery data were used to train both models: MLN, Healthy, MSV, and WRONG. The results revealed that the ViT model surpassed the CNN model, with accuracies of 93.1% and 90.96%, respectively. Further studies should focus on developing a mobile application and deploying the model with greater precision for real-world early detection of these diseases.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1384709"},"PeriodicalIF":3.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11362060/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142112753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}