Pub Date: 2024-09-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1457586
Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn
Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI implementation, like ChatGPT, could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.
Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.
Results: In the first round, the accuracy of ChatGPT in assigning correct diagnosis codes was 91% for version 3.5 and 99% for version 4.0. In the second round, the accuracy was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (p = 0.02 and p = 0.002 for the first and second rounds, respectively). Accuracy did not differ significantly between the two rounds (p > 0.05).
Conclusion: ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.
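The reported p-values can be sanity-checked from the accuracy counts alone. The abstract does not name the statistical test used; a two-sided Fisher's exact test on the 100-case counts (91/100 vs. 99/100, and 87/100 vs. 99/100) is one plausible choice and yields values of the same order as the reported p-values. A stdlib-only sketch:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Enumerates every table with the same margins and sums the
    probabilities of those no more likely than the observed table.
    """
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2
    denom = comb(n, row1)

    def prob(x):  # hypergeometric probability of top-left cell = x
        return comb(col1, x) * comb(n - col1, row1 - x) / denom

    p_obs = prob(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Round 1: ChatGPT 3.5 scored 91/100 correct, 4.0 scored 99/100.
p_round1 = fisher_exact_two_sided(91, 9, 99, 1)
# Round 2: 87/100 vs. 99/100.
p_round2 = fisher_exact_two_sided(87, 13, 99, 1)
print(round(p_round1, 3), round(p_round2, 4))
```

Under this assumption the round-1 value comes out just under 0.02 and the round-2 value well under 0.01, consistent with the significance pattern reported above.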
AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding. (Frontiers in Artificial Intelligence, vol. 7, article 1457586; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402808/pdf/)
Pub Date: 2024-09-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1456069
Nabil M AbdelAziz, Wael Said, Mohamed M AbdelHafeez, Asmaa H Ali
Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection using MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to focus on crucial features and uses Dropout layers to prevent overfitting. It then employs a Random Forest classifier to accurately categorize the extracted features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainable analysis, enhancing the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms, including Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. All of these classifiers performed well, but Random Forest achieved the highest accuracy, followed closely by XGBoost, Gradient Boosting, and Support Vector Machine, with Decision Tree achieving the lowest accuracy.
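The Squeeze-and-Excitation step at the heart of the SECNN can be sketched in a few lines of NumPy. This is a generic SE-block forward pass, not the authors' implementation; the weight shapes and reduction ratio are illustrative assumptions:

```python
import numpy as np

def se_block(x, w1, w2):
    """Squeeze-and-Excitation forward pass for one feature-map stack.

    x:  (C, H, W) convolutional feature maps
    w1: (C // r, C) bottleneck weights (squeeze -> hidden)
    w2: (C, C // r) expansion weights (hidden -> channel gates)
    """
    z = x.mean(axis=(1, 2))               # squeeze: global average pool, (C,)
    h = np.maximum(0.0, w1 @ z)           # excitation: FC layer + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid gates in (0, 1), (C,)
    return x * s[:, None, None]           # channel-wise recalibration

rng = np.random.default_rng(0)
C, H, W, r = 16, 8, 8, 4                  # r is the reduction ratio
x = rng.normal(size=(C, H, W))
out = se_block(x, rng.normal(size=(C // r, C)), rng.normal(size=(C, C // r)))
```

Because the gates lie in (0, 1), the block can only attenuate channels, which is the "focus on crucial features" behavior described above.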
Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI. (Frontiers in Artificial Intelligence, vol. 7, article 1456069; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402894/pdf/)
Introduction: This study introduces the Supervised Magnitude-Altitude Scoring (SMAS) methodology, a novel machine learning-based approach for analyzing gene expression data from non-human primates (NHPs) infected with Ebola virus (EBOV). By focusing on host-pathogen interactions, this research aims to enhance the understanding and identification of critical biomarkers for Ebola infection.
Methods: We utilized a comprehensive dataset of NanoString gene expression profiles from Ebola-infected NHPs. The SMAS system combines gene selection based on both statistical significance and expression changes. Employing linear classifiers such as logistic regression, the method facilitates precise differentiation between RT-qPCR positive and negative NHP samples.
Results: The application of SMAS led to the identification of IFI6 and IFI27 as key biomarkers, which demonstrated perfect predictive performance with 100% accuracy and optimal Area Under the Curve (AUC) metrics in classifying various stages of Ebola infection. Additionally, genes including MX1, OAS1, and ISG15 were significantly upregulated, underscoring their vital roles in the immune response to EBOV.
Discussion: Gene Ontology (GO) analysis further elucidated the involvement of these genes in critical biological processes and immune response pathways, reinforcing their significance in Ebola pathogenesis. Our findings highlight the efficacy of the SMAS methodology in revealing complex genetic interactions and response mechanisms, which are essential for advancing the development of diagnostic tools and therapeutic strategies.
Conclusion: This study provides valuable insights into EBOV pathogenesis, demonstrating the potential of SMAS to enhance the precision of diagnostics and interventions for Ebola and other viral infections.
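The two-stage recipe described in the Methods (select genes by both statistical significance and magnitude of expression change, then fit a linear classifier) can be sketched as follows. This is a generic reconstruction, not the authors' SMAS code: the t-test, the thresholds, and the synthetic data are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LogisticRegression

def select_genes(pos, neg, p_thresh=0.05, lfc_thresh=0.5):
    """Keep genes that are both statistically significant (t-test)
    and strongly changed (|log2 fold change|) between groups."""
    _, pvals = ttest_ind(pos, neg, axis=0)
    lfc = np.log2(pos.mean(axis=0) + 1) - np.log2(neg.mean(axis=0) + 1)
    return np.flatnonzero((pvals < p_thresh) & (np.abs(lfc) > lfc_thresh))

# Synthetic NanoString-like profiles: 20 "RT-qPCR positive" and 20
# "negative" samples over 50 genes; genes 0-4 are upregulated in positives.
rng = np.random.default_rng(0)
neg = rng.normal(5.0, 1.0, size=(20, 50))
pos = rng.normal(5.0, 1.0, size=(20, 50))
pos[:, :5] += 4.0

kept = select_genes(pos, neg)
X = np.vstack([pos, neg])[:, kept]
y = np.array([1] * 20 + [0] * 20)
clf = LogisticRegression().fit(X, y)
```

On real expression profiles, the genes surviving this filter play the role that IFI6 and IFI27 played in the study: a compact feature set on which a simple linear classifier can separate positive from negative samples.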
Machine learning-based analysis of Ebola virus' impact on gene expression in nonhuman primates.
Mostafa Rezapour, Muhammad Khalid Khan Niazi, Hao Lu, Aarthi Narayanan, Metin Nafi Gurcan
Pub Date: 2024-08-30 | DOI: 10.3389/frai.2024.1405332 (Frontiers in Artificial Intelligence, vol. 7, article 1405332; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392916/pdf/)
Pub Date: 2024-08-30 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1411838
Bama Andika Putra
Despite the rapid development of AI, ASEAN has not been able to devise a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP growth among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia over the past 5 years and what regulatory policies ASEAN can explore to better modulate its use among its member states. It considers the unique political landscape of the region, defined by the adoption of distinctive norms such as non-interference and a preference for dialog, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) elevation of the topic's importance in ASEAN's intra- and inter-regional forums to formulate collective regional agreements on AI; (2) adoption of AI governance measures in the field of education, specifically reskilling and upskilling strategies to respond to future transformation of the working landscape; and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states caused by the disparity in AI readiness across the region.
Governing AI in Southeast Asia: ASEAN's way forward. (Frontiers in Artificial Intelligence, vol. 7, article 1411838; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392876/pdf/)
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1399168
Bharathi Mohan Gurusamy, Prasanna Kumar Rangarajan, Ali Altalbe
Automatic text summarization is a cornerstone of natural language processing, yet existing methods often struggle to maintain contextual integrity and capture nuanced sentence relationships. The Optimized Auto Encoded Long Short-Term Memory Network (OAELSTM), enhanced by the Whale Optimization Algorithm (WOA), offers a novel approach to this challenge. Existing summarization models frequently produce summaries that are either too generic or disjointed, failing to preserve the essential content. The OAELSTM model, integrating deep LSTM layers and autoencoder mechanisms, focuses on extracting key phrases and concepts, ensuring that summaries are both informative and coherent. WOA fine-tunes the model's parameters, enhancing its precision and efficiency. Evaluation on datasets such as CNN/Daily Mail and Gigaword demonstrates the model's superiority over existing approaches: it achieves a ROUGE score of 0.456, an accuracy rate of 84.47%, and a specificity score of 0.3244, all within a processing time of 4,341.95 s.
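For reference, the ROUGE score reported above measures n-gram overlap between a generated summary and a reference summary. The abstract does not specify which ROUGE variant was used; the simplest unigram form (ROUGE-1 F1) can be sketched as:

```python
from collections import Counter

def rouge1_f(candidate, reference):
    """ROUGE-1 F1: unigram overlap between candidate and reference texts."""
    c = Counter(candidate.split())
    r = Counter(reference.split())
    overlap = sum((c & r).values())      # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(c.values())
    recall = overlap / sum(r.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the cat sat on the mat", "a cat sat on a mat")
```

Production evaluations typically use a tokenizer, stemming, and the ROUGE-2/ROUGE-L variants as well, but the precision/recall structure is the same.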
Whale-optimized LSTM networks for enhanced automatic text summarization. (Frontiers in Artificial Intelligence, vol. 7, article 1399168; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391734/pdf/)
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1325219
Salvatore Capuozzo, Stefano Marrone, Michela Gravina, Giuseppe Cringoli, Laura Rinaldi, Maria Paola Maurelli, Antonio Bosco, Giulia Orrù, Gian Luca Marcialis, Luca Ghiani, Stefano Bini, Alessia Saggese, Mario Vento, Carlo Sansone
In the field of veterinary medicine, the detection of parasite eggs in the fecal samples of livestock animals represents one of the most challenging tasks, since their spread and diffusion may lead to severe clinical disease. Nowadays, the scanning procedure is typically performed by physicians with professional microscopes and requires a significant amount of time, domain knowledge, and resources. The Kubic FLOTAC Microscope (KFM) is a compact, low-cost, portable digital microscope that can autonomously analyze fecal specimens for parasites and hosts in both field and laboratory settings. It has been shown to acquire images that are comparable to those obtained with traditional optical microscopes, and it can complete the scanning and imaging process in just a few minutes, freeing up the operator's time for other tasks. To promote research in this area, the first AI-KFM challenge was organized, which focused on the detection of gastrointestinal nematodes (GINs) in cattle using RGB images. The challenge aimed to provide a standardized experimental protocol with a large number of samples collected in a well-known environment and a set of scores for the approaches submitted by the competitors. This paper describes the process of generating and structuring the challenge dataset and the approaches submitted by the competitors, as well as the lessons learned throughout this journey.
Automating parasite egg detection: insights from the first AI-KFM challenge. (Frontiers in Artificial Intelligence, vol. 7, article 1325219; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390596/pdf/)
Pub Date: 2024-08-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1436350
Cigdem Sengul, Rumyana Neykova, Giuseppe Destefanis
The developments in conversational AI raised urgent questions about the future direction of many aspects of society, including computing education. The first reactions to the fast-paced evolution of conversational agents were varied: Some announced "the end of programming," while others considered this "premature obituary of programming." Some adopted a defensive approach to detecting the use of conversational AI and avoiding an increase in plagiarism, while others questioned, "So what if ChatGPT wrote it?" Nevertheless, questions arise about whether computing education in its current form will still be relevant and fit for purpose in the era of conversational AI. Recognizing these diverse reactions to the advent of conversational AI, this paper aims to contribute to the ongoing discourse by exploring the current state through three perspectives in a dedicated literature review: adoption of conversational AI in (1) software engineering education specifically and (2) computing education in general, and (3) a comparison with software engineering practice. Our results show a gap between software engineering practice and higher education in the pace of adoption and the areas of use and generally identify preliminary research on student experience, teaching, and learning tools for software engineering.
Software engineering education in the era of conversational AI: current trends and future directions. (Frontiers in Artificial Intelligence, vol. 7, article 1436350; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11391529/pdf/)
Pub Date: 2024-08-28 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1425713
Zhe Khae Lim, Tee Connie, Michael Kah Ong Goh, Nor 'Izzati Binti Saedon
Introduction: Falls have been acknowledged as a major public health issue around the world. Early detection of fall risk is pivotal for preventive measures. Traditional clinical assessments, although reliable, are resource-intensive and may not always be feasible.
Methods: This study explores the efficacy of artificial intelligence (AI) in predicting fall risk, leveraging gait analysis through computer vision and machine learning techniques. Data was collected using the Timed Up and Go (TUG) test and JHFRAT assessment from MMU collaborators and augmented with a public dataset from Mendeley involving older adults. The study introduces a robust approach for extracting and analyzing gait features, such as stride time, step time, cadence, and stance time, to distinguish between fallers and non-fallers.
Results: Two experimental setups were investigated: one considering separate gait features for each foot and another analyzing features averaged over both feet. The proposed solutions produce promising outcomes, with the LightGBM classifier achieving the highest accuracy of 96% on the prediction task.
Discussion: The findings demonstrate that simple machine learning models can successfully identify individuals at higher fall risk based on gait characteristics, with promising results that could potentially streamline fall risk assessment processes. However, several limitations were discovered throughout the experiment, including an insufficient dataset and data variation, limiting the model's generalizability. These issues are raised for future work consideration. Overall, this research contributes to the growing body of knowledge on fall risk prediction and underscores the potential of AI in enhancing public health strategies through the early identification of at-risk individuals.
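The temporal features named in the Methods can be computed directly from footfall timestamps. A minimal sketch (the heel-strike times and the two-steps-per-stride convention are illustrative assumptions, not the paper's exact pipeline):

```python
import numpy as np

def gait_features(heel_strike_times):
    """Temporal gait features from one foot's heel-strike timestamps (s)."""
    strides = np.diff(heel_strike_times)  # stride time: successive strikes of the same foot
    return {
        "mean_stride_time": strides.mean(),
        "stride_time_std": strides.std(),               # variability, a known fall-risk marker
        "cadence_steps_per_min": 2 * 60 / strides.mean(),  # two steps per stride
    }

feats = gait_features([0.0, 1.1, 2.2, 3.2, 4.3])
```

Feature vectors like this, one per subject (or one per foot, as in the first experimental setup), are what a gradient-boosted classifier such as LightGBM would consume to separate fallers from non-fallers.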
Fall risk prediction using temporal gait features and machine learning approaches. (Frontiers in Artificial Intelligence, vol. 7, article 1425713; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11389313/pdf/)
Pub Date : 2024-08-28eCollection Date: 2024-01-01DOI: 10.3389/frai.2024.1415782
Johan Klarin, Eva Hoff, Adam Larsson, Daiva Daukantaitė
In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: N = 385, 46% girls, mean age 14 years) and older (Study 2: N = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration of how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given that generative AI chatbots were still at an early stage when the surveys were conducted, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.
{"title":"Adolescents' use and perceived usefulness of generative AI for schoolwork: exploring their relationships with executive functioning and academic achievement.","authors":"Johan Klarin, Eva Hoff, Adam Larsson, Daiva Daukantaitė","doi":"10.3389/frai.2024.1415782","DOIUrl":"https://doi.org/10.3389/frai.2024.1415782","url":null,"abstract":"<p><p>In this study, we aimed to explore the frequency of use and perceived usefulness of LLM generative AI chatbots (e.g., ChatGPT) for schoolwork, particularly in relation to adolescents' executive functioning (EF), which includes critical cognitive processes like planning, inhibition, and cognitive flexibility essential for academic success. Two studies were conducted, encompassing both younger (Study 1: <i>N</i> = 385, 46% girls, mean age 14 years) and older (Study 2: <i>N</i> = 359, 67% girls, mean age 17 years) adolescents, to comprehensively examine these associations across different age groups. In Study 1, approximately 14.8% of participants reported using generative AI, while in Study 2, the adoption rate among older students was 52.6%, with ChatGPT emerging as the preferred tool among adolescents in both studies. Consistently across both studies, we found that adolescents facing more EF challenges perceived generative AI as more useful for schoolwork, particularly in completing assignments. Notably, academic achievement showed no significant associations with AI usage or usefulness, as revealed in Study 1. This study represents the first exploration into how individual characteristics, such as EF, relate to the frequency and perceived usefulness of LLM generative AI chatbots for schoolwork among adolescents. Given the early stage of generative AI chatbots during the survey, future research should validate these findings and delve deeper into the utilization and integration of generative AI into educational settings. 
It is crucial to adopt a proactive approach to address the potential challenges and opportunities associated with these emerging technologies in education.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1415782"},"PeriodicalIF":3.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11387220/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297009","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2024-08-27eCollection Date: 2024-01-01DOI: 10.3389/frai.2024.1402098
Qaisar Ali
Conventional farming poses threats to sustainable agriculture amid growing food demands and increasing flooding risks. This research introduces a Bayesian Belief Network (BBN) to address these concerns. The model explores tillage adaptation for flood management in soils with varying organic carbon (OC) contents for winter wheat production. Three real soils, emphasizing texture and soil water properties, were sourced from the NETMAP soilscape of the Pang catchment area in Berkshire, United Kingdom. Modified with OC content at four levels (1, 3, 5, 7%), they were modeled alongside relevant variables in a BBN. The Decision Support System for Agrotechnology Transfer (DSSAT) simulated datasets across 48 cropping seasons to parameterize the BBN. The study compared tillage effects on wheat yield, surface runoff, and GHG-CO2 emissions, categorizing model parameters (from lower to higher bands) based on statistical data distribution. Results revealed that no-tillage (NT) outperformed conventional tillage (CT) in the highest parametric category, comparing probabilistic estimates with reduced GHG-CO2 emissions from "7.34 to 7.31%" and cumulative runoff from "8.52 to 8.50%," while yield increased from "7.46 to 7.56%." Conversely, CT exhibited increased emissions from "7.34 to 7.36%" and cumulative runoff from "8.52 to 8.55%," along with reduced yield from "7.46 to 7.35%." The BBN model effectively captured uncertainties, offering posterior probability distributions that reflect conditional relationships across variables, and offered a decision choice for NT favoring soil carbon stocks in winter wheat (highest among soils "NT.OC-7%PDPG8," e.g., 286,634 kg/ha) over CT (lowest in "CT.OC-3.9%PDPG8," e.g., 5,894 kg/ha). On average, NT released minimum GHG-CO2 emissions of "3,985 kgCO2eqv/ha," while CT emitted "7,415 kgCO2eqv/ha." Conversely, NT emitted "8,747 kgCO2eqv/ha" for maximum emissions, while CT emitted "15,356 kgCO2eqv/ha." 
NT resulted in lower surface runoff than CT in all soils and naturally limits runoff generation for flood alleviation, with the potential for customized improvement. The study recommends the model for extensive assessments across various spatiotemporal conditions. The research findings align with the Sustainable Development Goals, e.g., SDG12 and SDG13 for responsible production and climate action, respectively, as defined by the Food and Agriculture Organization of the United Nations.
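The belief updating a BBN performs over variables like tillage choice and yield can be illustrated with a two-node toy network and Bayes' rule. This is a hedged sketch only: the probabilities below are invented for demonstration and are not the study's DSSAT-derived parameter estimates.

```python
# Toy two-node Bayesian network: tillage choice -> yield outcome.
# All numbers are illustrative, not the study's estimates.

prior = {"NT": 0.5, "CT": 0.5}              # uniform prior over tillage
p_high_yield = {"NT": 0.756, "CT": 0.735}   # assumed P(high yield | tillage)

# Bayes' rule: P(tillage | high yield) is proportional to
# P(high yield | tillage) * P(tillage), then normalized.
joint = {t: p_high_yield[t] * prior[t] for t in prior}
z = sum(joint.values())
posterior = {t: joint[t] / z for t in joint}

print(posterior)  # belief shifts slightly toward NT after observing high yield
```

The study's BBN works the same way at larger scale: DSSAT simulations supply the conditional probability tables, and observing evidence (e.g., a soil/OC configuration) yields posterior distributions over yield, runoff, and emission bands.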
{"title":"Bayesian model of tilling wheat confronting climatic and sustainability challenges.","authors":"Qaisar Ali","doi":"10.3389/frai.2024.1402098","DOIUrl":"https://doi.org/10.3389/frai.2024.1402098","url":null,"abstract":"<p><p>Conventional farming poses threats to sustainable agriculture amid growing food demands and increasing flooding risks. This research introduces a Bayesian Belief Network (BBN) to address these concerns. The model explores tillage adaptation for flood management in soils with varying organic carbon (OC) contents for winter wheat production. Three real soils, emphasizing texture and soil water properties, were sourced from the NETMAP soilscape of the Pang catchment area in Berkshire, United Kingdom. Modified with OC content at four levels (1, 3, 5, 7%), they were modeled alongside relevant variables in a BBN. The Decision Support System for Agrotechnology Transfer (DSSAT) simulated datasets across 48 cropping seasons to parameterize the BBN. The study compared tillage effects on wheat yield, surface runoff, and GHG-CO<sub>2</sub> emissions, categorizing model parameters (from lower to higher bands) based on statistical data distribution. Results revealed that no-tillage (NT) outperformed conventional tillage (CT) in the highest parametric category, comparing probabilistic estimates with reduced GHG-CO<sub>2</sub> emissions from \"7.34 to 7.31%\" and cumulative runoff from \"8.52 to 8.50%,\" while yield increased from \"7.46 to 7.56%.\" Conversely, CT exhibited increased emissions from \"7.34 to 7.36%\" and cumulative runoff from \"8.52 to 8.55%,\" along with reduced yield from \"7.46 to 7.35%.\" The BBN model effectively captured uncertainties, offering posterior probability distributions that reflect conditional relationships across variables, and offered a decision choice for NT favoring soil carbon stocks in winter wheat (highest among soils \"NT.OC-7%PDPG8,\" e.g., 286,634 kg/ha) over CT (lowest in \"CT.OC-3.9%PDPG8,\" e.g., 5,894 kg/ha). 
On average, NT released minimum GHG-CO<sub>2</sub> emissions of \"3,985 kgCO<sub>2</sub>eqv/ha,\" while CT emitted \"7,415 kgCO<sub>2</sub>eqv/ha.\" Conversely, NT emitted \"8,747 kgCO<sub>2</sub>eqv/ha\" for maximum emissions, while CT emitted \"15,356 kgCO<sub>2</sub>eqv/ha.\" NT resulted in lower surface runoff than CT in all soils and naturally limits runoff generation for flood alleviation, with the potential for customized improvement. The study recommends the model for extensive assessments across various spatiotemporal conditions. The research findings align with the Sustainable Development Goals, e.g., SDG12 and SDG13 for responsible production and climate action, respectively, as defined by the Food and Agriculture Organization of the United Nations.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1402098"},"PeriodicalIF":3.0,"publicationDate":"2024-08-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11385300/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}