
Latest publications in Frontiers in Artificial Intelligence

Whale-optimized LSTM networks for enhanced automatic text summarization.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1399168
Bharathi Mohan Gurusamy, Prasanna Kumar Rangarajan, Ali Altalbe

Automatic text summarization is a cornerstone of natural language processing, yet existing methods often struggle to maintain contextual integrity and capture nuanced sentence relationships. The Optimized Auto Encoded Long Short-Term Memory Network (OAELSTM), enhanced by the Whale Optimization Algorithm (WOA), offers a novel approach to this challenge. Existing summarization models frequently produce summaries that are either too generic or disjointed, failing to preserve the essential content. The OAELSTM model, integrating deep LSTM layers and autoencoder mechanisms, focuses on extracting key phrases and concepts, ensuring that summaries are both informative and coherent. WOA fine-tunes the model's parameters, enhancing its precision and efficiency. Evaluation on datasets like CNN/Daily Mail and Gigaword demonstrates the model's superiority over existing approaches. It achieves a ROUGE score of 0.456, an accuracy rate of 84.47%, and a specificity score of 0.3244, all within an efficient processing time of 4,341.95 s.
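The abstract does not show how WOA tunes the network's parameters. As a rough illustration of the underlying optimizer only, the following is a minimal, generic WOA loop minimizing a toy one-dimensional objective; the objective, bounds, population size, and iteration count are all illustrative assumptions, not the authors' configuration.

```python
import math
import random

def woa_minimize(f, lo, hi, n_whales=30, n_iter=200, seed=42):
    """Minimal Whale Optimization Algorithm for a 1-D objective.

    Uses the standard encircling / random-search / spiral updates;
    constants and the toy objective are illustrative only.
    """
    rng = random.Random(seed)
    whales = [rng.uniform(lo, hi) for _ in range(n_whales)]
    best = min(whales, key=f)
    b = 1.0  # spiral shape constant
    for t in range(n_iter):
        a = 2.0 - 2.0 * t / n_iter  # decreases linearly from 2 to 0
        for i, x in enumerate(whales):
            A = 2.0 * a * rng.random() - a
            C = 2.0 * rng.random()
            if rng.random() < 0.5:
                if abs(A) < 1:        # exploit: encircle the current best
                    x_new = best - A * abs(C * best - x)
                else:                  # explore: move relative to a random whale
                    x_rand = whales[rng.randrange(n_whales)]
                    x_new = x_rand - A * abs(C * x_rand - x)
            else:                      # spiral update around the best
                l = rng.uniform(-1.0, 1.0)
                x_new = abs(best - x) * math.exp(b * l) * math.cos(2 * math.pi * l) + best
            whales[i] = min(max(x_new, lo), hi)  # clamp to bounds
        candidate = min(whales, key=f)
        if f(candidate) < f(best):
            best = candidate
    return best

# Toy objective standing in for a model's validation loss.
obj = lambda x: (x - 3.0) ** 2
best = woa_minimize(obj, -10.0, 10.0)
```

In the paper's setting, the objective would instead score an OAELSTM configuration (e.g., by validation loss), with each whale encoding a vector of hyperparameters rather than a single scalar.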

Citations: 0
Automating parasite egg detection: insights from the first AI-KFM challenge.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1325219
Salvatore Capuozzo, Stefano Marrone, Michela Gravina, Giuseppe Cringoli, Laura Rinaldi, Maria Paola Maurelli, Antonio Bosco, Giulia Orrù, Gian Luca Marcialis, Luca Ghiani, Stefano Bini, Alessia Saggese, Mario Vento, Carlo Sansone

In the field of veterinary medicine, the detection of parasite eggs in the fecal samples of livestock animals represents one of the most challenging tasks, since their spread and diffusion may lead to severe clinical disease. Nowadays, the scanning procedure is typically performed by physicians with professional microscopes and requires a significant amount of time, domain knowledge, and resources. The Kubic FLOTAC Microscope (KFM) is a compact, low-cost, portable digital microscope that can autonomously analyze fecal specimens for parasites and hosts in both field and laboratory settings. It has been shown to acquire images that are comparable to those obtained with traditional optical microscopes, and it can complete the scanning and imaging process in just a few minutes, freeing up the operator's time for other tasks. To promote research in this area, the first AI-KFM challenge was organized, which focused on the detection of gastrointestinal nematodes (GINs) in cattle using RGB images. The challenge aimed to provide a standardized experimental protocol with a large number of samples collected in a well-known environment and a set of scores for the approaches submitted by the competitors. This paper describes the process of generating and structuring the challenge dataset and the approaches submitted by the competitors, as well as the lessons learned throughout this journey.

Citations: 0
Software engineering education in the era of conversational AI: current trends and future directions.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1436350
Cigdem Sengul, Rumyana Neykova, Giuseppe Destefanis

The developments in conversational AI raised urgent questions about the future direction of many aspects of society, including computing education. The first reactions to the fast-paced evolution of conversational agents were varied: Some announced "the end of programming," while others considered this "premature obituary of programming." Some adopted a defensive approach to detecting the use of conversational AI and avoiding an increase in plagiarism, while others questioned, "So what if ChatGPT wrote it?" Nevertheless, questions arise about whether computing education in its current form will still be relevant and fit for purpose in the era of conversational AI. Recognizing these diverse reactions to the advent of conversational AI, this paper aims to contribute to the ongoing discourse by exploring the current state through three perspectives in a dedicated literature review: adoption of conversational AI in (1) software engineering education specifically and (2) computing education in general, and (3) a comparison with software engineering practice. Our results show a gap between software engineering practice and higher education in the pace of adoption and the areas of use and generally identify preliminary research on student experience, teaching, and learning tools for software engineering.

Citations: 0
Fall risk prediction using temporal gait features and machine learning approaches.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-28 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1425713
Zhe Khae Lim, Tee Connie, Michael Kah Ong Goh, Nor 'Izzati Binti Saedon

Introduction: Falls have been acknowledged as a major public health issue around the world. Early detection of fall risk is pivotal for preventive measures. Traditional clinical assessments, although reliable, are resource-intensive and may not always be feasible.

Methods: This study explores the efficacy of artificial intelligence (AI) in predicting fall risk, leveraging gait analysis through computer vision and machine learning techniques. Data was collected using the Timed Up and Go (TUG) test and JHFRAT assessment from MMU collaborators and augmented with a public dataset from Mendeley involving older adults. The study introduces a robust approach for extracting and analyzing gait features, such as stride time, step time, cadence, and stance time, to distinguish between fallers and non-fallers.
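The temporal features named above can be derived from foot-contact timestamps. As a hedged sketch of the feature-extraction step only (the authors' pipeline obtains these from video via computer vision), assuming lists of left- and right-foot heel-strike times in seconds:

```python
def gait_features(left_strikes, right_strikes):
    """Simple temporal gait features from heel-strike times (seconds).

    stride time: interval between consecutive strikes of the same foot
    step time:   interval between alternating left/right strikes
    cadence:     steps per minute over the recording
    """
    def mean_diff(ts):
        return sum(b - a for a, b in zip(ts, ts[1:])) / (len(ts) - 1)

    all_strikes = sorted(left_strikes + right_strikes)
    duration = all_strikes[-1] - all_strikes[0]
    return {
        "stride_time_left": mean_diff(left_strikes),
        "stride_time_right": mean_diff(right_strikes),
        "step_time": mean_diff(all_strikes),
        "cadence": 60.0 * (len(all_strikes) - 1) / duration,  # steps/min
    }

# Regular gait: left foot strikes every 1.0 s, right foot offset by 0.5 s.
feats = gait_features([0.0, 1.0, 2.0, 3.0], [0.5, 1.5, 2.5, 3.5])
```

Feature vectors like this (per foot or averaged across feet, matching the two experimental setups below) would then be fed to the classifiers.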

Results: Two experimental setups were investigated: one considering separate gait features for each foot and another analyzing averaged features across both feet. Both setups produced promising outcomes; in particular, LightGBM achieved the highest accuracy, 96%, on the prediction task.

Discussion: The findings demonstrate that simple machine learning models can successfully identify individuals at higher fall risk based on gait characteristics, with promising results that could potentially streamline fall risk assessment processes. However, several limitations were discovered throughout the experiment, including an insufficient dataset and data variation, limiting the model's generalizability. These issues are raised for future work consideration. Overall, this research contributes to the growing body of knowledge on fall risk prediction and underscores the potential of AI in enhancing public health strategies through the early identification of at-risk individuals.

Citations: 0
Adolescents' use and perceived usefulness of generative AI for schoolwork: exploring their relationships with executive functioning and academic achievement.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-28 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1415782
Johan Klarin, Eva Hoff, Adam Larsson, Daiva Daukantaitė

In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: N = 385, 46% girls, mean age 14 years) and older (Study 2: N = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration into how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given the early stage of generative AI chatbots during the survey, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.

Citations: 0
Bayesian model of tilling wheat confronting climatic and sustainability challenges.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1402098
Qaisar Ali

Conventional farming poses threats to sustainable agriculture in growing food demands and increasing flooding risks. This research introduces a Bayesian Belief Network (BBN) to address these concerns. The model explores tillage adaptation for flood management in soils with varying organic carbon (OC) contents for winter wheat production. Three real soils, emphasizing texture and soil water properties, were sourced from the NETMAP soilscape of the Pang catchment area in Berkshire, United Kingdom. Modified with OC content at four levels (1, 3, 5, 7%), they were modeled alongside relevant variables in a BBN. The Decision Support System for Agrotechnology Transfer (DSSAT) simulated datasets across 48 cropping seasons to parameterize the BBN. The study compared tillage effects on wheat yield, surface runoff, and GHG-CO2 emissions, categorizing model parameters (from lower to higher bands) based on statistical data distribution. Results revealed that no-tillage (NT) outperformed conventional tillage (CT) in the highest parametric category, comparing probabilistic estimates with reduced GHG-CO2 emissions from "7.34 to 7.31%" and cumulative runoff from "8.52 to 8.50%," while yield increased from "7.46 to 7.56%." Conversely, CT exhibited increased emissions from "7.34 to 7.36%" and cumulative runoff from "8.52 to 8.55%," along with reduced yield from "7.46 to 7.35%." The BBN model effectively captured uncertainties, offering posterior probability distributions reflecting conditional relationships across variables and offered decision choice for NT favoring soil carbon stocks in winter wheat (highest among soils "NT.OC-7%PDPG8," e.g., 286,634 kg/ha) over CT (lowest in "CT.OC-3.9%PDPG8," e.g., 5,894 kg/ha). On average, NT released minimum GHG-CO2 emissions to "3,985 kgCO2eqv/ha," while CT emitted "7,415 kgCO2eqv/ha." Conversely, NT emitted "8,747 kgCO2eqv/ha" for maximum emissions, while CT emitted "15,356 kgCO2eqv/ha." 
NT resulted in lower surface runoff against CT in all soils and limits runoff generations naturally for flood alleviation with the potential for customized improvement. The study recommends the model for extensive assessments of various spatiotemporal conditions. The research findings align with sustainable development goals, e.g., SDG12 and SDG13 for responsible production and climate actions, respectively, as defined by the Agriculture and Food Organization of the United Nations.
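The probabilistic comparisons in the abstract come from inference over the fitted network. The mechanics can be shown on a toy discrete network; every probability below is a made-up illustration value, not a parameter of the paper's BBN: the no-tillage versus conventional-tillage comparison is computed by marginalizing the soil organic-carbon level out of a conditional probability table.

```python
# Toy two-variable Bayesian network: OC level -> yield band, conditioned on tillage.
# All probabilities are hypothetical illustration values.

p_oc = {"high": 0.5, "low": 0.5}  # prior over soil organic-carbon level

# Conditional probability table: P(yield = high | tillage, oc)
p_high_yield = {
    ("NT", "high"): 0.8,
    ("NT", "low"): 0.6,
    ("CT", "high"): 0.7,
    ("CT", "low"): 0.5,
}

def prob_high_yield(tillage):
    """Marginalize OC out: P(Y=high | T) = sum_oc P(Y=high | T, oc) * P(oc)."""
    return sum(p_high_yield[(tillage, oc)] * p for oc, p in p_oc.items())

p_nt = prob_high_yield("NT")  # 0.8*0.5 + 0.6*0.5 = 0.70
p_ct = prob_high_yield("CT")  # 0.7*0.5 + 0.5*0.5 = 0.60
```

The paper's network does the same kind of marginalization over many more nodes (runoff, emissions, soil properties), with CPTs parameterized from the DSSAT simulations rather than assumed.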

Citations: 0
Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding.
IF 3.0 | Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-08-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1460065
Luca Mariotti, Veronica Guidetti, Federica Mandreoli, Andrea Belli, Paolo Lombardi

Knowledge Graphs (KGs) have revolutionized knowledge representation, enabling a graph-structured framework where entities and their interrelations are systematically organized. Since their inception, KGs have significantly enhanced various knowledge-aware applications, including recommendation systems and question-answering systems. Sensigrafo, an enterprise KG developed by Expert.AI, exemplifies this advancement by focusing on Natural Language Understanding through a machine-oriented lexicon representation. Despite the progress, maintaining and enriching KGs remains a challenge, often requiring manual efforts. Recent developments in Large Language Models (LLMs) offer promising solutions for KG enrichment (KGE) by leveraging their ability to understand natural language. In this article, we discuss the state-of-the-art LLM-based techniques for KGE and show the challenges associated with automating and deploying these processes in an industrial setup. We then propose our perspective on overcoming problems associated with data quality and scarcity, economic viability, privacy issues, language evolution, and the need to automate the KGE process while maintaining high accuracy.
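One recurring step in the LLM-driven KGE loop the authors discuss is merging machine-proposed facts into the existing graph without corrupting it. The fragment below is a hedged sketch of that step only: candidate triples (as an LLM extractor might emit them, here stubbed as literals) are filtered by a confidence threshold and deduplicated against the graph. All names, relations, and thresholds are hypothetical, not part of Sensigrafo's actual API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Triple:
    subject: str
    relation: str
    obj: str

def enrich(kg: set, candidates, min_conf=0.8):
    """Merge LLM-proposed (triple, confidence) pairs into the KG.

    Low-confidence or already-known triples are skipped; in a real
    pipeline the rejected ones would be routed to human curators.
    """
    added = []
    for triple, conf in candidates:
        if conf >= min_conf and triple not in kg:
            kg.add(triple)
            added.append(triple)
    return added

kg = {Triple("Sensigrafo", "developed_by", "Expert.AI")}
candidates = [
    (Triple("Sensigrafo", "is_a", "knowledge graph"), 0.95),
    (Triple("Sensigrafo", "developed_by", "Expert.AI"), 0.99),  # duplicate
    (Triple("Sensigrafo", "written_in", "COBOL"), 0.30),        # low confidence
]
added = enrich(kg, candidates)
```

The thresholding stands in for the accuracy/automation trade-off the article raises: the higher the bar, the less manual curation but the less new knowledge enters the graph.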

Citations: 0
Exploring artificial intelligence techniques to research low energy nuclear reactions.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1401782
Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel

The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to support LENR research and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study investigates the effectiveness of modern AI capabilities, leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.
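The abstract does not detail how LENRsim scores study similarity beyond its use of embedding models; as an illustration of the underlying idea, pairwise document similarity can be computed with a simple bag-of-words cosine (a toy sketch, not the actual LENRsim method):

```python
import math
from collections import Counter

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between two texts under a bag-of-words model."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

s = cosine_similarity("palladium deuterium excess heat",
                      "excess heat in palladium")
print(round(s, 3))  # 0.75: three of four terms overlap
```

An embedding-based system replaces the word-count vectors with dense vectors from a trained model, which lets it match studies that use different vocabulary for the same concept — the capability a bag-of-words score lacks.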

Citations: 0
Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1423535
Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang

Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.
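The Dice coefficient reported above (0.56 on average) measures the overlap between predicted and ground-truth tumor masks; for binary masks it follows the standard definition 2|A∩B| / (|A| + |B|) (a generic illustration, not code from the paper):

```python
def dice(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) for flat binary masks."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks agree perfectly.
    return 2.0 * inter / total if total else 1.0

pred  = [1, 1, 0, 0, 1, 0]  # predicted tumor voxels (flattened)
truth = [1, 0, 0, 1, 1, 0]  # ground-truth tumor voxels
print(dice(pred, truth))  # 2*2 / (3+3) = 0.666...
```

A score of 1.0 means perfect overlap and 0.0 means none, so the reported gap between 0.56 (OAELSTM-style baselines aside, this model) and 0.43-0.51 for the compared methods is a direct overlap improvement.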

Citations: 0
AttentionTTE: a deep learning model for estimated time of arrival.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-23 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1258086
Mu Li, Yijun Feng, Xiangdong Wu

Estimating travel time (ETA) for arbitrary paths is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, which fail to effectively model the influence of each road segment on others. To address this issue, we propose an end-to-end model, AttentionTTE. It utilizes a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate our model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared to other methods.
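The self-attention mechanism mentioned above lets every road segment attend to every other segment, capturing the global spatial correlations that per-segment feature systems miss. A minimal scaled dot-product self-attention over a tiny matrix illustrates the computation (a generic sketch with Q = K = V = X and no learned projections, not the AttentionTTE architecture itself):

```python
import math

def softmax(row):
    """Numerically stable softmax over one list of scores."""
    m = max(row)
    e = [math.exp(x - m) for x in row]
    s = sum(e)
    return [x / s for x in e]

def self_attention(X):
    """Scaled dot-product self-attention with Q = K = V = X."""
    d = len(X[0])
    # scores[i][j] = <x_i, x_j> / sqrt(d)
    scores = [[sum(a * b for a, b in zip(xi, xj)) / math.sqrt(d) for xj in X]
              for xi in X]
    weights = [softmax(row) for row in scores]  # each row sums to 1
    # output_i = sum_j weights[i][j] * x_j
    return [[sum(w * xj[k] for w, xj in zip(row, X)) for k in range(d)]
            for row in weights]

X = [[1.0, 0.0],   # e.g., features of three road segments
     [0.0, 1.0],
     [1.0, 1.0]]
out = self_attention(X)
print([[round(v, 3) for v in row] for row in out])
```

Each output row is a weighted mix of all input rows, so a segment's representation reflects conditions on every other segment; the paper then feeds such globally-mixed features to a recurrent network for temporal dependencies.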

Citations: 0