Pub Date: 2024-09-09 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1376546
Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki
Background: This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation.
Methods: The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance.
Results: Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.
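For orientation, the top-performing configuration named above can be instantiated in a few lines with the segmentation_models_pytorch library; this is a minimal sketch under assumed choices (library, Dice loss, input size, ImageNet initialization), not the authors' published training setup.

```python
# Minimal sketch: UNet++ with a ResNet34 encoder for binary sperm segmentation.
# Library choice, loss, and tensor sizes are illustrative assumptions.
import torch
import segmentation_models_pytorch as smp

model = smp.UnetPlusPlus(
    encoder_name="resnet34",      # encoder highlighted in the results
    encoder_weights="imagenet",   # a common initialization choice (assumed)
    in_channels=3,                # RGB video frames
    classes=1,                    # sperm vs. non-sperm mask
)

loss_fn = smp.losses.DiceLoss(mode="binary")   # a typical segmentation loss
x = torch.randn(4, 3, 256, 256)                # batch of frames (assumed size)
y = (torch.rand(4, 1, 256, 256) > 0.5).float() # placeholder ground-truth masks

logits = model(x)
loss = loss_fn(logits, y)
loss.backward()
```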
Discussion: The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells.
Conclusion: This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
{"title":"A modified U-Net to detect real sperms in videos of human sperm cell.","authors":"Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki","doi":"10.3389/frai.2024.1376546","DOIUrl":"10.3389/frai.2024.1376546","url":null,"abstract":"<p><strong>Background: </strong>This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation.</p><p><strong>Methods: </strong>The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance.</p><p><strong>Results: </strong>Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.</p><p><strong>Discussion: </strong>The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells.</p><p><strong>Conclusion: </strong>This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-05 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1410790
Jaime Govea, Rommel Gutierrez, William Villegas-Ch
In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.
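The abstract does not include the models or pipelines themselves; the sketch below only illustrates the general pattern of attaching SHAP explanations to a learned rating predictor. The features, data, and model choice are all invented for illustration.

```python
# Illustrative sketch: explaining a rating predictor with SHAP.
# Features, data, and model are placeholders, not the study's own pipeline.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Hypothetical features: user activity, item popularity, genre match.
X = rng.random((500, 3))
y = 3.0 + 1.5 * X[:, 2] + rng.normal(0, 0.3, 500)  # ratings driven by genre match

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])
# Per-feature contributions to each predicted recommendation score:
print(shap_values)
```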
{"title":"Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.","authors":"Jaime Govea, Rommel Gutierrez, William Villegas-Ch","doi":"10.3389/frai.2024.1410790","DOIUrl":"https://doi.org/10.3389/frai.2024.1410790","url":null,"abstract":"<p><p>In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410769/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction: Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets like ImageNet. Medical images, however, possess unique visual characteristics that such general models may not adequately capture.
Methods: This study examines the effectiveness of modality-specific pretext learning strengthened by image denoising and deblurring in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings, i.e., normal lungs, or with cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, viz., the VGG-16 model pretrained only on ImageNet. Measures used for performance evaluation are balanced accuracy, sensitivity, specificity, F-score, Matthews Correlation Coefficient (MCC), Kappa statistic, and Youden's index.
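For reference, the sketch below shows how each of these measures can be computed from binary predictions with scikit-learn; Youden's index is sensitivity + specificity - 1. The labels are placeholders, not study data.

```python
# Sketch of the evaluation metrics named above, computed with scikit-learn.
# y_true / y_pred are placeholder labels (1 = abnormal CXR, 0 = normal).
from sklearn.metrics import (balanced_accuracy_score, cohen_kappa_score,
                             confusion_matrix, f1_score, matthews_corrcoef)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 0, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

print("Balanced accuracy:", balanced_accuracy_score(y_true, y_pred))
print("Sensitivity:", sensitivity)
print("Specificity:", specificity)
print("F-score:", f1_score(y_true, y_pred))
print("MCC:", matthews_corrcoef(y_true, y_pred))
print("Kappa:", cohen_kappa_score(y_true, y_pred))
print("Youden's index:", sensitivity + specificity - 1)  # J = sens + spec - 1
```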
Results: Our findings reveal that models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model, viz., Baseline, and achieve significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (Balanced accuracy: 0.6376; Sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782; Youden's index: 0.2751), compared to Baseline (Balanced accuracy: 0.5654; Sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599; Youden's index: 0.1327).
Discussion: The superior results of CXR modality-specific pretext learning and their ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification. Results from this study promote further exploration of medical modality-specific TL techniques in the development of DL models for various medical imaging applications.
{"title":"Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification.","authors":"Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani","doi":"10.3389/frai.2024.1419638","DOIUrl":"https://doi.org/10.3389/frai.2024.1419638","url":null,"abstract":"<p><strong>Introduction: </strong>Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets like ImageNet. Conversely, medical images possess unique visual characteristics that such general models may not adequately capture.</p><p><strong>Methods: </strong>This study examines the effectiveness of modality-specific pretext learning strengthened by image denoising and deblurring in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings, i.e., normal lungs, or with cardiopulmonary disease manifestations. Specifically, we use a <i>VGG-16-Sharp-U-Net</i> architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, <i>viz.</i>, the VGG-16 model pretrained only on ImageNet. Measures used for performance evaluation are balanced accuracy, sensitivity, specificity, F-score, Matthew's Correlation Coefficient (MCC), Kappa statistic, and Youden's index.</p><p><strong>Results: </strong>Our findings reveal that models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model, <i>viz.</i>, Baseline, and achieve significantly higher sensitivity (<i>p</i> < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (Balanced accuracy: 0.6376; Sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782, and Youden's index:0.2751), compared to Baseline (Balanced accuracy: 0.5654; Sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599, and Youden's index:0.1327).</p><p><strong>Discussion: </strong>The superior results of CXR modality-specific pretext learning and their ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification. Results from this study promote further exploration of medical modality-specific TL techniques in the development of DL models for various medical imaging applications.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410760/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1387936
Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan
Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs that it represents. This results in fewer inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2, and two Vision Transformer networks, we obtain up to 1.6× and 1.8× speedups in training for the ImageNet and CIFAR-10 datasets, respectively, on an NVIDIA RTX 2080 Ti GPU, with negligible loss in classification accuracy.
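As a point of reference, the naive input mixing that the paper starts from can be sketched as a mixup-style blend of two labeled examples; MixTrain's interference-reducing strategies and adaptive mixing ratios are not reproduced here, and the fixed ratio below is an illustrative assumption.

```python
# Minimal sketch of naive input mixing (mixup-style): two inputs are blended
# into one composite input with a blended label, halving the batches per epoch.
# This illustrates the baseline the paper improves on, not MixTrain itself.
import torch

def mix_pair(x1, y1, x2, y2, ratio=0.5):
    """Blend two labeled inputs into a single composite training example."""
    x_mix = ratio * x1 + (1 - ratio) * x2
    y_mix = ratio * y1 + (1 - ratio) * y2   # composite (soft) label
    return x_mix, y_mix

# Two CIFAR-sized images with one-hot labels (10 classes).
x1, x2 = torch.randn(3, 32, 32), torch.randn(3, 32, 32)
y1 = torch.nn.functional.one_hot(torch.tensor(3), 10).float()
y2 = torch.nn.functional.one_hot(torch.tensor(7), 10).float()

# The paper adapts the ratio per input based on its loss; 0.6 is arbitrary here.
x_mix, y_mix = mix_pair(x1, y1, x2, y2, ratio=0.6)
```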
{"title":"MixTrain: accelerating DNN training via input mixing.","authors":"Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan","doi":"10.3389/frai.2024.1387936","DOIUrl":"https://doi.org/10.3389/frai.2024.1387936","url":null,"abstract":"<p><p>Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each the constituent inputs that it represents. This results in a lower number of inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2 and two Vision Transformer networks, we obtain upto 1.6 × and 1.8 × speedups in training for the ImageNet and Cifar10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11443600/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142362211","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-03 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1451963
Jithin K Sreedharan, Asma Alharbi, Amal Alsomali, Gokul Krishna Gopalakrishnan, Abdullah Almojaibel, Rawan Alajmi, Ibrahim Albalawi, Musallam Alnasser, Meshal Alenezi, Abdullah Alqahtani, Mohammed Alahmari, Eidan Alzahrani, Manjush Karthika
Background: Artificial intelligence (AI) is reforming healthcare, particularly in respiratory medicine and critical care, by utilizing big and synthetic data to improve diagnostic accuracy and therapeutic benefits. This survey aimed to evaluate the knowledge, perceptions, and practices of respiratory therapists (RTs) regarding AI to effectively incorporate these technologies into the clinical practice.
Methods: The study, approved by the institutional review board, targeted RTs working in the Kingdom of Saudi Arabia. The validated questionnaire collected reflective insights from 448 RTs in Saudi Arabia. Descriptive statistics, thematic analysis, Fisher's exact test, and the chi-square test were used to evaluate the significance of the data.
Results: The survey revealed a nearly equal distribution of genders (51% female, 49% male). Most respondents were in the 20-25 age group (54%), held bachelor's degrees (69%), and had 0-5 years of experience (73%). While 28% had some knowledge of AI, only 8.5% had practical experience. Significant gender disparities in AI knowledge were noted (p < 0.001). Key findings included 59% advocating for basics of AI in the curriculum, 51% believing AI would play a vital role in respiratory care, and 41% calling for specialized AI personnel. Major challenges identified included knowledge deficiencies (23%), skill enhancement (23%), and limited access to training (17%).
Conclusion: This study highlights differences in the levels of knowledge and perceptions regarding AI among respiratory care professionals, underlining AI's recognized significance and the need for awareness of its future role in the field. Tailored education and strategic planning are crucial for enhancing the quality of respiratory care with the integration of AI. Addressing these gaps is essential for realizing the full potential of AI in advancing respiratory care practices.
{"title":"Artificial intelligence in respiratory care: knowledge, perceptions, and practices-a cross-sectional study.","authors":"Jithin K Sreedharan, Asma Alharbi, Amal Alsomali, Gokul Krishna Gopalakrishnan, Abdullah Almojaibel, Rawan Alajmi, Ibrahim Albalawi, Musallam Alnasser, Meshal Alenezi, Abdullah Alqahtani, Mohammed Alahmari, Eidan Alzahrani, Manjush Karthika","doi":"10.3389/frai.2024.1451963","DOIUrl":"https://doi.org/10.3389/frai.2024.1451963","url":null,"abstract":"<p><strong>Background: </strong>Artificial intelligence (AI) is reforming healthcare, particularly in respiratory medicine and critical care, by utilizing big and synthetic data to improve diagnostic accuracy and therapeutic benefits. This survey aimed to evaluate the knowledge, perceptions, and practices of respiratory therapists (RTs) regarding AI to effectively incorporate these technologies into the clinical practice.</p><p><strong>Methods: </strong>The study approved by the institutional review board, aimed at the RTs working in the Kingdom of Saudi Arabia. The validated questionnaire collected reflective insights from 448 RTs in Saudi Arabia. Descriptive statistics, thematic analysis, Fisher's exact test, and chi-square test were used to evaluate the significance of the data.</p><p><strong>Results: </strong>The survey revealed a nearly equal distribution of genders (51% female, 49% male). Most respondents were in the 20-25 age group (54%), held bachelor's degrees (69%), and had 0-5 years of experience (73%). While 28% had some knowledge of AI, only 8.5% had practical experience. Significant gender disparities in AI knowledge were noted (<i>p</i> < 0.001). Key findings included 59% advocating for basics of AI in the curriculum, 51% believing AI would play a vital role in respiratory care, and 41% calling for specialized AI personnel. Major challenges identified included knowledge deficiencies (23%), skill enhancement (23%), and limited access to training (17%).</p><p><strong>Conclusion: </strong>In conclusion, this study highlights differences in the levels of knowledge and perceptions regarding AI among respiratory care professionals, underlining its recognized significance and futuristic awareness in the field. Tailored education and strategic planning are crucial for enhancing the quality of respiratory care, with the integration of AI. Addressing these gaps is essential for utilizing the full potential of AI in advancing respiratory care practices.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11405306/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
[This corrects the article DOI: 10.3389/frai.2024.1386753.].
{"title":"Corrigendum: Contextual emotion detection in images using deep learning.","authors":"Fatiha Limami, Boutaina Hdioud, Rachid Oulad Haj Thami","doi":"10.3389/frai.2024.1476791","DOIUrl":"https://doi.org/10.3389/frai.2024.1476791","url":null,"abstract":"<p><p>[This corrects the article DOI: 10.3389/frai.2024.1386753.].</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11405858/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297016","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1457586
Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn
Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI tools such as ChatGPT could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.
Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.
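The abstract does not include the evaluation code; a minimal sketch of the general pattern, assuming the OpenAI chat-completions API, an illustrative model name, and an invented one-case vignette with its gold code, might look like this:

```python
# Illustrative evaluation loop in the spirit of the study: ask a GPT model for
# the ICD-10 code of a simulated case and score it against the gold code.
# Model name, prompt wording, and the case are assumptions, not the study's own.
from openai import OpenAI

client = OpenAI()  # requires OPENAI_API_KEY in the environment

cases = [  # (case vignette, predetermined correct ICD-10 code)
    ("58-year-old with eGFR 25 mL/min/1.73m2 for 6 months, stable", "N18.4"),
]

correct = 0
for vignette, gold in cases:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; the study used ChatGPT 3.5 and 4.0
        messages=[{"role": "user",
                   "content": f"Give only the single best ICD-10 code for: {vignette}"}],
    )
    predicted = response.choices[0].message.content.strip()
    correct += int(predicted == gold)

print(f"Accuracy: {correct / len(cases):.0%}")
```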
Results: In the first round, the accuracy of ChatGPT for assigning correct diagnosis codes was 91% and 99% for versions 3.5 and 4.0, respectively. In the second round, the accuracy of ChatGPT for assigning the correct diagnosis code was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (p = 0.02 and p = 0.002 for the first and second rounds, respectively). The accuracy did not differ significantly between the two rounds (p > 0.05).
Conclusion: ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.
{"title":"AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding.","authors":"Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn","doi":"10.3389/frai.2024.1457586","DOIUrl":"https://doi.org/10.3389/frai.2024.1457586","url":null,"abstract":"<p><strong>Background: </strong>Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI implementation, like ChatGPT, could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.</p><p><strong>Methods: </strong>Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.</p><p><strong>Results: </strong>In the first round, the accuracy of ChatGPT for assigning correct diagnosis codes was 91 and 99% for version 3.5 and 4.0, respectively. In the second round, the accuracy of ChatGPT for assigning the correct diagnosis code was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 had higher accuracy than ChatGPT 3.5 (<i>p</i> = 0.02 and 0.002 for the first and second round respectively). The accuracy did not significantly differ between the two rounds (<i>p</i> > 0.05).</p><p><strong>Conclusion: </strong>ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402808/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-02 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1456069
Nabil M AbdelAziz, Wael Said, Mohamed M AbdelHafeez, Asmaa H Ali
Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection using MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to focus on crucial features and uses Dropout layers to prevent overfitting. It then employs a Random Forest classifier to accurately categorize the extracted features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainable analysis, enhancing the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms, including Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. While all of these classifiers improved model performance, Random Forest achieved the highest accuracy, followed closely by XGBoost, Gradient Boosting, and Support Vector Machine, with Decision Tree achieving the lowest accuracy.
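As background, the Squeeze-and-Excitation block that SECNN-RF inserts into its CNN can be sketched in a few lines of PyTorch; the channel sizes here are illustrative, and the full SECNN-RF pipeline (Dropout layers, Random Forest head) is not reproduced.

```python
# Minimal PyTorch sketch of a standard Squeeze-and-Excitation (SE) block of
# the kind SECNN-RF integrates into its CNN; sizes are illustrative.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global context
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excite: reweight channels

features = torch.randn(2, 64, 56, 56)   # feature maps from a CNN stage
recalibrated = SEBlock(64)(features)    # emphasizes informative channels
```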
{"title":"Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI.","authors":"Nabil M AbdelAziz, Wael Said, Mohamed M AbdelHafeez, Asmaa H Ali","doi":"10.3389/frai.2024.1456069","DOIUrl":"https://doi.org/10.3389/frai.2024.1456069","url":null,"abstract":"<p><p>Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection using MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to focus on crucial features and uses Dropout layers to prevent overfitting. It then employs a Random Forest classifier to accurately categorize the extracted features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainable analysis, enhancing the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms like Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. While all these classifiers improved model performance, Random Forest achieved the highest accuracy, followed closely by XGBoost, Gradient Boosting, Support Vector Machine, and Decision Tree which achieved lower accuracy.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11402894/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Introduction: This study introduces the Supervised Magnitude-Altitude Scoring (SMAS) methodology, a novel machine learning-based approach for analyzing gene expression data from non-human primates (NHPs) infected with Ebola virus (EBOV). By focusing on host-pathogen interactions, this research aims to enhance the understanding and identification of critical biomarkers for Ebola infection.
Methods: We utilized a comprehensive dataset of NanoString gene expression profiles from Ebola-infected NHPs. The SMAS system combines gene selection based on both statistical significance and expression changes. Employing linear classifiers such as logistic regression, the method facilitates precise differentiation between RT-qPCR positive and negative NHP samples.
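The exact SMAS scoring rule is not given in this abstract; the sketch below only illustrates the described pattern, selecting genes jointly by statistical significance and expression change and then fitting a logistic-regression classifier, with thresholds and synthetic data invented for illustration.

```python
# Hedged sketch of the described analysis pattern: filter genes by both
# significance and expression change, then classify samples with logistic
# regression. Data and thresholds are invented; this is not the SMAS code.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_pos, n_neg, n_genes = 20, 20, 100
X = rng.normal(0.0, 1.0, (n_pos + n_neg, n_genes))
X[:n_pos, :5] += 2.0                      # five genes upregulated in positives
y = np.array([1] * n_pos + [0] * n_neg)   # RT-qPCR positive vs. negative

# Joint filter: statistical significance (t-test) and expression change.
_, p = stats.ttest_ind(X[y == 1], X[y == 0])
delta = X[y == 1].mean(axis=0) - X[y == 0].mean(axis=0)
selected = np.where((p < 0.01) & (np.abs(delta) > 1.0))[0]

clf = LogisticRegression(max_iter=1000)
print("Selected genes:", selected)
print("CV accuracy:", cross_val_score(clf, X[:, selected], y, cv=5).mean())
```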
Results: The application of SMAS led to the identification of IFI6 and IFI27 as key biomarkers, which demonstrated perfect predictive performance with 100% accuracy and optimal Area Under the Curve (AUC) metrics in classifying various stages of Ebola infection. Additionally, genes including MX1, OAS1, and ISG15 were significantly upregulated, underscoring their vital roles in the immune response to EBOV.
Discussion: Gene Ontology (GO) analysis further elucidated the involvement of these genes in critical biological processes and immune response pathways, reinforcing their significance in Ebola pathogenesis. Our findings highlight the efficacy of the SMAS methodology in revealing complex genetic interactions and response mechanisms, which are essential for advancing the development of diagnostic tools and therapeutic strategies.
Conclusion: This study provides valuable insights into EBOV pathogenesis, demonstrating the potential of SMAS to enhance the precision of diagnostics and interventions for Ebola and other viral infections.
{"title":"Machine learning-based analysis of Ebola virus' impact on gene expression in nonhuman primates.","authors":"Mostafa Rezapour, Muhammad Khalid Khan Niazi, Hao Lu, Aarthi Narayanan, Metin Nafi Gurcan","doi":"10.3389/frai.2024.1405332","DOIUrl":"https://doi.org/10.3389/frai.2024.1405332","url":null,"abstract":"<p><strong>Introduction: </strong>This study introduces the Supervised Magnitude-Altitude Scoring (SMAS) methodology, a novel machine learning-based approach for analyzing gene expression data from non-human primates (NHPs) infected with Ebola virus (EBOV). By focusing on host-pathogen interactions, this research aims to enhance the understanding and identification of critical biomarkers for Ebola infection.</p><p><strong>Methods: </strong>We utilized a comprehensive dataset of NanoString gene expression profiles from Ebola-infected NHPs. The SMAS system combines gene selection based on both statistical significance and expression changes. Employing linear classifiers such as logistic regression, the method facilitates precise differentiation between RT-qPCR positive and negative NHP samples.</p><p><strong>Results: </strong>The application of SMAS led to the identification of IFI6 and IFI27 as key biomarkers, which demonstrated perfect predictive performance with 100% accuracy and optimal Area Under the Curve (AUC) metrics in classifying various stages of Ebola infection. Additionally, genes including MX1, OAS1, and ISG15 were significantly upregulated, underscoring their vital roles in the immune response to EBOV.</p><p><strong>Discussion: </strong>Gene Ontology (GO) analysis further elucidated the involvement of these genes in critical biological processes and immune response pathways, reinforcing their significance in Ebola pathogenesis. Our findings highlight the efficacy of the SMAS methodology in revealing complex genetic interactions and response mechanisms, which are essential for advancing the development of diagnostic tools and therapeutic strategies.</p><p><strong>Conclusion: </strong>This study provides valuable insights into EBOV pathogenesis, demonstrating the potential of SMAS to enhance the precision of diagnostics and interventions for Ebola and other viral infections.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392916/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | eCollection Date: 2024-01-01 | DOI: 10.3389/frai.2024.1411838
Bama Andika Putra
Despite the rapid development of AI, ASEAN has not been able to devise a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP growth among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia over the past 5 years and the regulatory policies ASEAN can explore to better modulate AI use among its member states. It considers the unique political landscape of the region, defined by distinctive norms such as non-interference and an emphasis on dialogue, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) elevation of the topic's importance in ASEAN's intra- and inter-regional forums to formulate collective regional agreements on AI, (2) adoption of AI governance measures in the field of education, specifically reskilling and upskilling strategies to respond to the future transformation of the working landscape, and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states caused by the disparity of AI readiness in the region.
{"title":"Governing AI in Southeast Asia: ASEAN's way forward.","authors":"Bama Andika Putra","doi":"10.3389/frai.2024.1411838","DOIUrl":"https://doi.org/10.3389/frai.2024.1411838","url":null,"abstract":"<p><p>Despite the rapid development of AI, ASEAN has not been able to devise a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia in the past 5 years and what regulatory policies ASEAN can explore to better modulate its use among its member states. It considers the unique political landscape of the region, defined by the adoption of unique norms such as non-interference and priority over dialog, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) Elevation of the topic's importance in ASEAN's intra and inter-regional forums to formulate collective regional agreements on AI, (2) adoption of AI governance measures in the field of education, specifically, reskilling and upskilling strategies to respond to future transformation of the working landscape, and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states, caused by the disparity of AI-readiness in the region.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":3.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11392876/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}