Whale-optimized LSTM networks for enhanced automatic text summarization
Bharathi Mohan Gurusamy, Prasanna Kumar Rangarajan, Ali Altalbe
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1399168
Automatic text summarization is a cornerstone of natural language processing, yet existing methods often struggle to maintain contextual integrity and capture nuanced sentence relationships. This work introduces the Optimized Auto Encoded Long Short-Term Memory Network (OAELSTM), enhanced by the Whale Optimization Algorithm (WOA), as a novel approach to this challenge. Existing summarization models frequently produce summaries that are either too generic or disjointed and fail to preserve essential content. The OAELSTM model integrates deep LSTM layers and autoencoder mechanisms to extract key phrases and concepts, ensuring that summaries are both informative and coherent, while WOA fine-tunes the model's parameters to improve precision and efficiency. Evaluation on the CNN/Daily Mail and Gigaword datasets demonstrates the model's superiority over existing approaches: it achieves a ROUGE score of 0.456, an accuracy of 84.47%, and a specificity of 0.3244, with a processing time of 4,341.95 s.
{"title":"Whale-optimized LSTM networks for enhanced automatic text summarization.","authors":"Bharathi Mohan Gurusamy, Prasanna Kumar Rangarajan, Ali Altalbe","doi":"10.3389/frai.2024.1399168","DOIUrl":"https://doi.org/10.3389/frai.2024.1399168","url":null,"abstract":"<p><p>Automatic text summarization is a cornerstone of natural language processing, yet existing methods often struggle to maintain contextual integrity and capture nuanced sentence relationships. Introducing the Optimized Auto Encoded Long Short-Term Memory Network (OAELSTM), enhanced by the Whale Optimization Algorithm (WOA), offers a novel approach to this challenge. Existing summarization models frequently produce summaries that are either too generic or disjointed, failing to preserve the essential content. The OAELSTM model, integrating deep LSTM layers and autoencoder mechanisms, focuses on extracting key phrases and concepts, ensuring that summaries are both informative and coherent. WOA fine-tunes the model's parameters, enhancing its precision and efficiency. Evaluation on datasets like CNN/Daily Mail and Gigaword demonstrates the model's superiority over existing approaches. It achieves a ROUGE Score of 0.456, an accuracy rate of 84.47%, and a specificity score of 0.3244, all within an efficient processing time of 4,341.95 s.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391734/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automating parasite egg detection: insights from the first AI-KFM challenge
Salvatore Capuozzo, Stefano Marrone, Michela Gravina, Giuseppe Cringoli, Laura Rinaldi, Maria Paola Maurelli, Antonio Bosco, Giulia Orrù, Gian Luca Marcialis, Luca Ghiani, Stefano Bini, Alessia Saggese, Mario Vento, Carlo Sansone
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1325219
In the field of veterinary medicine, the detection of parasite eggs in the fecal samples of livestock animals represents one of the most challenging tasks, since their spread and diffusion may lead to severe clinical disease. Nowadays, the scanning procedure is typically performed by physicians with professional microscopes and requires a significant amount of time, domain knowledge, and resources. The Kubic FLOTAC Microscope (KFM) is a compact, low-cost, portable digital microscope that can autonomously analyze fecal specimens for parasites and hosts in both field and laboratory settings. It has been shown to acquire images that are comparable to those obtained with traditional optical microscopes, and it can complete the scanning and imaging process in just a few minutes, freeing up the operator's time for other tasks. To promote research in this area, the first AI-KFM challenge was organized, which focused on the detection of gastrointestinal nematodes (GINs) in cattle using RGB images. The challenge aimed to provide a standardized experimental protocol with a large number of samples collected in a well-known environment and a set of scores for the approaches submitted by the competitors. This paper describes the process of generating and structuring the challenge dataset and the approaches submitted by the competitors, as well as the lessons learned throughout this journey.
{"title":"Automating parasite egg detection: insights from the first AI-KFM challenge.","authors":"Salvatore Capuozzo, Stefano Marrone, Michela Gravina, Giuseppe Cringoli, Laura Rinaldi, Maria Paola Maurelli, Antonio Bosco, Giulia Orrù, Gian Luca Marcialis, Luca Ghiani, Stefano Bini, Alessia Saggese, Mario Vento, Carlo Sansone","doi":"10.3389/frai.2024.1325219","DOIUrl":"https://doi.org/10.3389/frai.2024.1325219","url":null,"abstract":"<p><p>In the field of veterinary medicine, the detection of parasite eggs in the fecal samples of livestock animals represents one of the most challenging tasks, since their spread and diffusion may lead to severe clinical disease. Nowadays, the scanning procedure is typically performed by physicians with professional microscopes and requires a significant amount of time, domain knowledge, and resources. The Kubic FLOTAC Microscope (KFM) is a compact, low-cost, portable digital microscope that can autonomously analyze fecal specimens for parasites and hosts in both field and laboratory settings. It has been shown to acquire images that are comparable to those obtained with traditional optical microscopes, and it can complete the scanning and imaging process in just a few minutes, freeing up the operator's time for other tasks. To promote research in this area, the first AI-KFM challenge was organized, which focused on the detection of gastrointestinal nematodes (GINs) in cattle using RGB images. The challenge aimed to provide a standardized experimental protocol with a large number of samples collected in a well-known environment and a set of scores for the approaches submitted by the competitors. This paper describes the process of generating and structuring the challenge dataset and the approaches submitted by the competitors, as well as the lessons learned throughout this journey.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390596/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software engineering education in the era of conversational AI: current trends and future directions
Cigdem Sengul, Rumyana Neykova, Giuseppe Destefanis
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1436350
The developments in conversational AI have raised urgent questions about the future direction of many aspects of society, including computing education. The first reactions to the fast-paced evolution of conversational agents were varied: some announced "the end of programming," while others considered this a "premature obituary of programming." Some adopted a defensive approach, focusing on detecting the use of conversational AI to avoid an increase in plagiarism, while others asked, "So what if ChatGPT wrote it?" Nevertheless, questions arise about whether computing education in its current form will still be relevant and fit for purpose in the era of conversational AI. Recognizing these diverse reactions, this paper contributes to the ongoing discourse by exploring the current state through three perspectives in a dedicated literature review: adoption of conversational AI in (1) software engineering education specifically and (2) computing education in general, and (3) a comparison with software engineering practice. Our results show a gap between software engineering practice and higher education in both the pace of adoption and the areas of use, and identify mostly preliminary research on student experience, teaching, and learning tools for software engineering.
{"title":"Software engineering education in the era of conversational AI: current trends and future directions.","authors":"Cigdem Sengul, Rumyana Neykova, Giuseppe Destefanis","doi":"10.3389/frai.2024.1436350","DOIUrl":"https://doi.org/10.3389/frai.2024.1436350","url":null,"abstract":"<p><p>The developments in conversational AI raised urgent questions about the future direction of many aspects of society, including computing education. The first reactions to the fast-paced evolution of conversational agents were varied: Some announced \"the end of programming,\" while others considered this \"premature obituary of programming.\" Some adopted a defensive approach to detecting the use of conversational AI and avoiding an increase in plagiarism, while others questioned, \"So what if ChatGPT wrote it?\" Nevertheless, questions arise about whether computing education in its current form will still be relevant and fit for purpose in the era of conversational AI. Recognizing these diverse reactions to the advent of conversational AI, this paper aims to contribute to the ongoing discourse by exploring the current state through three perspectives in a dedicated literature review: adoption of conversational AI in (1) software engineering education specifically and (2) computing education in general, and (3) a comparison with software engineering practice. Our results show a gap between software engineering practice and higher education in the pace of adoption and the areas of use and generally identify preliminary research on student experience, teaching, and learning tools for software engineering.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391529/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fall risk prediction using temporal gait features and machine learning approaches
Zhe Khae Lim, Tee Connie, Michael Kah Ong Goh, Nor 'Izzati Binti Saedon
Pub Date: 2024-08-28 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1425713
Introduction: Falls have been acknowledged as a major public health issue around the world. Early detection of fall risk is pivotal for preventive measures. Traditional clinical assessments, although reliable, are resource-intensive and may not always be feasible.
Methods: This study explores the efficacy of artificial intelligence (AI) in predicting fall risk, leveraging gait analysis through computer vision and machine learning techniques. Data were collected using the Timed Up and Go (TUG) test and the JHFRAT assessment from MMU collaborators and augmented with a public dataset from Mendeley involving older adults. The study introduces a robust approach for extracting and analyzing gait features, such as stride time, step time, cadence, and stance time, to distinguish between fallers and non-fallers.
Results: Two experimental setups were investigated: one considering separate gait features for each foot and another analyzing features averaged over both feet. Both setups produced promising outcomes and allowed the models to reach high accuracy; in particular, the LightGBM model achieved the best accuracy of 96% on the prediction task.
Discussion: The findings demonstrate that simple machine learning models can identify individuals at higher fall risk from gait characteristics, with promising results that could streamline fall-risk assessment. However, several limitations emerged during the experiments, including the limited size and variability of the dataset, which constrain the model's generalizability; these issues are left for future work. Overall, this research contributes to the growing body of knowledge on fall risk prediction and underscores the potential of AI in enhancing public health strategies through the early identification of at-risk individuals.
{"title":"Fall risk prediction using temporal gait features and machine learning approaches.","authors":"Zhe Khae Lim, Tee Connie, Michael Kah Ong Goh, Nor 'Izzati Binti Saedon","doi":"10.3389/frai.2024.1425713","DOIUrl":"https://doi.org/10.3389/frai.2024.1425713","url":null,"abstract":"<p><strong>Introduction: </strong>Falls have been acknowledged as a major public health issue around the world. Early detection of fall risk is pivotal for preventive measures. Traditional clinical assessments, although reliable, are resource-intensive and may not always be feasible.</p><p><strong>Methods: </strong>This study explores the efficacy of artificial intelligence (AI) in predicting fall risk, leveraging gait analysis through computer vision and machine learning techniques. Data was collected using the Timed Up and Go (TUG) test and JHFRAT assessment from MMU collaborators and augmented with a public dataset from Mendeley involving older adults. The study introduces a robust approach for extracting and analyzing gait features, such as stride time, step time, cadence, and stance time, to distinguish between fallers and non-fallers.</p><p><strong>Results: </strong>Two experimental setups were investigated: one considering separate gait features for each foot and another analyzing averaged features for both feet. Ultimately, the proposed solutions produce promising outcomes, greatly enhancing the model's ability to achieve high levels of accuracy. In particular, the LightGBM demonstrates a superior accuracy of 96% in the prediction task.</p><p><strong>Discussion: </strong>The findings demonstrate that simple machine learning models can successfully identify individuals at higher fall risk based on gait characteristics, with promising results that could potentially streamline fall risk assessment processes. However, several limitations were discovered throughout the experiment, including an insufficient dataset and data variation, limiting the model's generalizability. These issues are raised for future work consideration. Overall, this research contributes to the growing body of knowledge on fall risk prediction and underscores the potential of AI in enhancing public health strategies through the early identification of at-risk individuals.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11389313/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adolescents' use and perceived usefulness of generative AI for schoolwork: exploring their relationships with executive functioning and academic achievement
Johan Klarin, Eva Hoff, Adam Larsson, Daiva Daukantaitė
Pub Date: 2024-08-28 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1415782
In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: N = 385, 46% girls, mean age 14 years) and older (Study 2: N = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration into how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given the early stage of generative AI chatbots during the survey, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.
{"title":"Adolescents' use and perceived usefulness of generative AI for schoolwork: exploring their relationships with executive functioning and academic achievement.","authors":"Johan Klarin, Eva Hoff, Adam Larsson, Daiva Daukantaitė","doi":"10.3389/frai.2024.1415782","DOIUrl":"https://doi.org/10.3389/frai.2024.1415782","url":null,"abstract":"<p><p>In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: <i>N</i> = 385, 46% girls, mean age 14 years) and older (Study 2: <i>N</i> = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration into how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given the early stage of generative AI chatbots during the survey, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11387220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bayesian model of tilling wheat confronting climatic and sustainability challenges
Qaisar Ali
Pub Date: 2024-08-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1402098
Conventional farming poses threats to sustainable agriculture amid growing food demands and increasing flooding risks. This research introduces a Bayesian Belief Network (BBN) to address these concerns. The model explores tillage adaptation for flood management in soils with varying organic carbon (OC) contents for winter wheat production. Three real soils, characterized by texture and soil water properties, were sourced from the NETMAP soilscape of the Pang catchment area in Berkshire, United Kingdom. Modified with OC content at four levels (1, 3, 5, 7%), they were modeled alongside relevant variables in a BBN. The Decision Support System for Agrotechnology Transfer (DSSAT) simulated datasets across 48 cropping seasons to parameterize the BBN. The study compared tillage effects on wheat yield, surface runoff, and GHG-CO2 emissions, categorizing model parameters (from lower to higher bands) based on the statistical distribution of the data. Results revealed that no-tillage (NT) outperformed conventional tillage (CT) in the highest parametric category: probabilistic estimates showed GHG-CO2 emissions reduced from 7.34 to 7.31% and cumulative runoff from 8.52 to 8.50%, while yield increased from 7.46 to 7.56%. Conversely, CT exhibited increased emissions from 7.34 to 7.36% and cumulative runoff from 8.52 to 8.55%, along with reduced yield from 7.46 to 7.35%. The BBN model effectively captured uncertainties, offering posterior probability distributions that reflect conditional relationships across variables, and indicated a decision preference for NT, which favored soil carbon stocks in winter wheat (highest among soils in "NT.OC-7%PDPG8," e.g., 286,634 kg/ha) over CT (lowest in "CT.OC-3.9%PDPG8," e.g., 5,894 kg/ha). On average, NT released minimum GHG-CO2 emissions of 3,985 kgCO2eqv/ha, while CT emitted 7,415 kgCO2eqv/ha; for maximum emissions, NT emitted 8,747 kgCO2eqv/ha against 15,356 kgCO2eqv/ha for CT. NT also produced lower surface runoff than CT in all soils, naturally limiting runoff generation for flood alleviation, with potential for customized improvement. The study recommends the model for extensive assessments of various spatiotemporal conditions. The research findings align with sustainable development goals, e.g., SDG12 and SDG13 for responsible production and climate action, respectively, as defined by the Food and Agriculture Organization of the United Nations.
{"title":"Bayesian model of tilling wheat confronting climatic and sustainability challenges.","authors":"Qaisar Ali","doi":"10.3389/frai.2024.1402098","DOIUrl":"https://doi.org/10.3389/frai.2024.1402098","url":null,"abstract":"<p><p>Conventional farming poses threats to sustainable agriculture in growing food demands and increasing flooding risks. This research introduces a Bayesian Belief Network (BBN) to address these concerns. The model explores tillage adaptation for flood management in soils with varying organic carbon (OC) contents for winter wheat production. Three real soils, emphasizing texture and soil water properties, were sourced from the NETMAP soilscape of the Pang catchment area in Berkshire, United Kingdom. Modified with OC content at four levels (1, 3, 5, 7%), they were modeled alongside relevant variables in a BBN. The Decision Support System for Agrotechnology Transfer (DSSAT) simulated datasets across 48 cropping seasons to parameterize the BBN. The study compared tillage effects on wheat yield, surface runoff, and GHG-CO<sub>2</sub> emissions, categorizing model parameters (from lower to higher bands) based on statistical data distribution. Results revealed that NT outperformed CT in the highest parametric category, comparing probabilistic estimates with reduced GHG-CO<sub>2</sub> emissions from \"7.34 to 7.31%\" and cumulative runoff from \"8.52 to 8.50%,\" while yield increased from \"7.46 to 7.56%.\" Conversely, CT exhibited increased emissions from \"7.34 to 7.36%\" and cumulative runoff from \"8.52 to 8.55%,\" along with reduced yield from \"7.46 to 7.35%.\" The BBN model effectively captured uncertainties, offering posterior probability distributions reflecting conditional relationships across variables and offered decision choice for NT favoring soil carbon stocks in winter wheat (highest among soils \"NT.OC-7%PDPG8,\" e.g., 286,634 kg/ha) over CT (lowest in \"CT.OC-3.9%PDPG8,\" e.g., 5,894 kg/ha). On average, NT released minimum GHG- CO<sub>2</sub> emissions to \"3,985 kgCO<sub>2</sub>eqv/ha,\" while CT emitted \"7,415 kgCO<sub>2</sub>eqv/ha.\" Conversely, NT emitted \"8,747 kgCO<sub>2</sub>eqv/ha\" for maximum emissions, while CT emitted \"15,356 kgCO<sub>2</sub>eqv/ha.\" NT resulted in lower surface runoff against CT in all soils and limits runoff generations naturally for flood alleviation with the potential for customized improvement. The study recommends the model for extensive assessments of various spatiotemporal conditions. The research findings align with sustainable development goals, e.g., SDG12 and SDG13 for responsible production and climate actions, respectively, as defined by the Agriculture and Food Organization of the United Nations.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385300/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding
Luca Mariotti, Veronica Guidetti, Federica Mandreoli, Andrea Belli, Paolo Lombardi
Pub Date: 2024-08-27 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1460065
Knowledge Graphs (KGs) have revolutionized knowledge representation, enabling a graph-structured framework where entities and their interrelations are systematically organized. Since their inception, KGs have significantly enhanced various knowledge-aware applications, including recommendation systems and question-answering systems. Sensigrafo, an enterprise KG developed by Expert.AI, exemplifies this advancement by focusing on Natural Language Understanding through a machine-oriented lexicon representation. Despite the progress, maintaining and enriching KGs remains a challenge, often requiring manual efforts. Recent developments in Large Language Models (LLMs) offer promising solutions for KG enrichment (KGE) by leveraging their ability to understand natural language. In this article, we discuss the state-of-the-art LLM-based techniques for KGE and show the challenges associated with automating and deploying these processes in an industrial setup. We then propose our perspective on overcoming problems associated with data quality and scarcity, economic viability, privacy issues, language evolution, and the need to automate the KGE process while maintaining high accuracy.
{"title":"Combining large language models with enterprise knowledge graphs: a perspective on enhanced natural language understanding.","authors":"Luca Mariotti, Veronica Guidetti, Federica Mandreoli, Andrea Belli, Paolo Lombardi","doi":"10.3389/frai.2024.1460065","DOIUrl":"https://doi.org/10.3389/frai.2024.1460065","url":null,"abstract":"<p><p>Knowledge Graphs (KGs) have revolutionized knowledge representation, enabling a graph-structured framework where entities and their interrelations are systematically organized. Since their inception, KGs have significantly enhanced various knowledge-aware applications, including recommendation systems and question-answering systems. Sensigrafo, an enterprise KG developed by Expert.AI, exemplifies this advancement by focusing on Natural Language Understanding through a machine-oriented lexicon representation. Despite the progress, maintaining and enriching KGs remains a challenge, often requiring manual efforts. Recent developments in Large Language Models (LLMs) offer promising solutions for KG enrichment (KGE) by leveraging their ability to understand natural language. In this article, we discuss the state-of-the-art LLM-based techniques for KGE and show the challenges associated with automating and deploying these processes in an industrial setup. We then propose our perspective on overcoming problems associated with data quality and scarcity, economic viability, privacy issues, language evolution, and the need to automate the KGE process while maintaining high accuracy.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385612/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring artificial intelligence techniques to research low energy nuclear reactions
Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel
Pub Date: 2024-08-23 | DOI: 10.3389/frai.2024.1401782
The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to help research LENR and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study investigates the effectiveness of modern AI capabilities leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.
{"title":"Exploring artificial intelligence techniques to research low energy nuclear reactions.","authors":"Anasse Bari, Tanya Pushkin Garg, Yvonne Wu, Sneha Singh, David Nagel","doi":"10.3389/frai.2024.1401782","DOIUrl":"10.3389/frai.2024.1401782","url":null,"abstract":"<p><p>The world urgently needs new sources of clean energy due to a growing global population, rising energy use, and the effects of climate change. Nuclear energy is one of the most promising solutions for meeting the world's energy needs now and in the future. One type of nuclear energy, Low Energy Nuclear Reactions (LENR), has gained interest as a potential clean energy source. Recent AI advancements create new ways to help research LENR and to comprehensively analyze the relationships between experimental parameters, materials, and outcomes across diverse LENR research endeavors worldwide. This study explores and investigates the effectiveness of modern AI capabilities leveraging embedding models and topic modeling techniques, including Latent Dirichlet Allocation (LDA), BERTopic, and Top2Vec, in elucidating the underlying structure and prevalent themes within a large LENR research corpus. These methodologies offer unique perspectives on understanding relationships and trends within the LENR research landscape, thereby facilitating advancements in this crucial energy research area. Furthermore, the study presents LENRsim, an experimental machine learning tool to identify similar LENR studies, along with a user-friendly web interface for widespread adoption and utilization. The findings contribute to the understanding and progression of LENR research through data-driven analysis and tool development, enabling more informed decision-making and strategic planning for future research in this field. The insights derived from this study, along with the experimental tools we developed and deployed, hold the potential to significantly aid researchers in advancing their studies of LENR.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377257/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance
Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang
Pub Date: 2024-08-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1423535
Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.
{"title":"Multitask connected U-Net: automatic lung cancer segmentation from CT images using PET knowledge guidance.","authors":"Lu Zhou, Chaoyong Wu, Yiheng Chen, Zhicheng Zhang","doi":"10.3389/frai.2024.1423535","DOIUrl":"10.3389/frai.2024.1423535","url":null,"abstract":"<p><p>Lung cancer is a predominant cause of cancer-related mortality worldwide, necessitating precise tumor segmentation of medical images for accurate diagnosis and treatment. However, the intrinsic complexity and variability of tumor morphology pose substantial challenges to segmentation tasks. To address this issue, we propose a multitask connected U-Net model with a teacher-student framework to enhance the effectiveness of lung tumor segmentation. The proposed model and framework integrate PET knowledge into the segmentation process, leveraging complementary information from both CT and PET modalities to improve segmentation performance. Additionally, we implemented a tumor area detection method to enhance tumor segmentation performance. In extensive experiments on four datasets, the average Dice coefficient of 0.56, obtained using our model, surpassed those of existing methods such as Segformer (0.51), Transformer (0.50), and UctransNet (0.43). These findings validate the efficacy of the proposed method in lung tumor segmentation tasks.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11377414/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156170","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AttentionTTE: a deep learning model for estimated time of arrival
Mu Li, Yijun Feng, Xiangdong Wu
Pub Date: 2024-08-23 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1258086
Estimating the travel time of arbitrary paths (estimated time of arrival, ETA) is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, and fail to effectively model the influence of each road segment on the others. To address this issue, we propose an end-to-end model, AttentionTTE, which uses a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates the global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate the model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared with other methods.
{"title":"AttentionTTE: a deep learning model for estimated time of arrival.","authors":"Mu Li, Yijun Feng, Xiangdong Wu","doi":"10.3389/frai.2024.1258086","DOIUrl":"10.3389/frai.2024.1258086","url":null,"abstract":"<p><p>Estimating travel time (ETA) for arbitrary paths is crucial in urban intelligent transportation systems. Previous studies primarily focus on constructing complex feature systems for individual road segments or sub-segments, which fail to effectively model the influence of each road segment on others. To address this issue, we propose an end-to-end model, AttentionTTE. It utilizes a self-attention mechanism to capture global spatial correlations and a recurrent neural network to capture temporal dependencies from local spatial correlations. Additionally, a multi-task learning module integrates global spatial correlations and temporal dependencies to estimate the travel time for both the entire path and each local path. We evaluate our model on a large trajectory dataset, and extensive experimental results demonstrate that AttentionTTE achieves state-of-the-art performance compared to other methods.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11378341/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}