
Latest publications in Frontiers in Artificial Intelligence

OLTW-TEC: online learning with sliding windows for text classifier ensembles.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1401126
Khrystyna Lipianina-Honcharenko, Yevgeniy Bodyanskiy, Nataliia Kustra, Andrii Ivasechko

In the digital age, rapid dissemination of information has elevated the challenge of distinguishing between authentic news and disinformation. This challenge is particularly acute in regions experiencing geopolitical tensions, where information plays a pivotal role in shaping public perception and policy. The prevalence of disinformation in the Ukrainian-language information space, intensified by the hybrid war with russia, necessitates the development of sophisticated tools for its detection and mitigation. Our study introduces the "Online Learning with Sliding Windows for Text Classifier Ensembles" (OLTW-TEC) method, designed to address this urgent need. This research aims to develop and validate an advanced machine learning method capable of dynamically adapting to evolving disinformation tactics. The focus is on creating a highly accurate, flexible, and efficient system for detecting disinformation in Ukrainian-language texts. The OLTW-TEC method leverages an ensemble of classifiers combined with a sliding window technique to continuously update the model with the most recent data, enhancing its adaptability and accuracy over time. A unique dataset comprising both authentic and fake news items was used to evaluate the method's performance. Advanced metrics, including precision, recall, and F1-score, facilitated a comprehensive analysis of its effectiveness. The OLTW-TEC method demonstrated exceptional performance, achieving a classification accuracy of 93%. The integration of the sliding window technique with a classifier ensemble significantly contributed to the system's ability to accurately identify disinformation, making it a robust tool in the ongoing battle against fake news in the Ukrainian context. The application of the OLTW-TEC method highlights its potential as a versatile and effective solution for disinformation detection. 
Its adaptability to the specifics of the Ukrainian language and the dynamic nature of information warfare offers valuable insights into the development of similar tools for other languages and regions. OLTW-TEC represents a significant advancement in the detection of disinformation within the Ukrainian-language information space. Its development and successful implementation underscore the importance of innovative machine learning techniques in combating fake news, paving the way for further research and application in the field of digital information integrity.
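The abstract does not include an implementation, but the core idea — an ensemble of text classifiers that is continually retrained on a sliding window of the most recent labeled examples — can be sketched roughly as follows. The window size, the toy token-count base learner, and the majority vote below are illustrative assumptions, not the paper's actual configuration:

```python
from collections import Counter, deque

class TokenCountClassifier:
    """Toy base learner: scores a text by per-class token frequencies."""
    def fit(self, texts, labels):
        self.counts = {}  # label -> Counter of tokens seen under that label
        for text, label in zip(texts, labels):
            self.counts.setdefault(label, Counter()).update(text.lower().split())
        return self

    def predict(self, text):
        tokens = text.lower().split()
        # Pick the label whose token profile overlaps the text the most.
        return max(self.counts, key=lambda lab: sum(self.counts[lab][t] for t in tokens))

class SlidingWindowEnsemble:
    """Keep only the most recent `window` examples; refit every member on them."""
    def __init__(self, members, window=100):
        self.members = members
        self.buffer = deque(maxlen=window)  # old examples fall out automatically

    def update(self, text, label):
        self.buffer.append((text, label))
        texts, labels = zip(*self.buffer)
        for m in self.members:
            m.fit(texts, labels)

    def predict(self, text):
        # Majority vote across ensemble members.
        votes = Counter(m.predict(text) for m in self.members)
        return votes.most_common(1)[0][0]

ensemble = SlidingWindowEnsemble([TokenCountClassifier(), TokenCountClassifier()], window=4)
stream = [
    ("official report confirms aid delivery", "authentic"),
    ("shocking secret plot exposed", "fake"),
    ("ministry publishes verified statistics", "authentic"),
    ("anonymous sources reveal shocking conspiracy", "fake"),
]
for text, label in stream:
    ensemble.update(text, label)
print(ensemble.predict("shocking conspiracy exposed"))  # → fake
```

Because the deque has a fixed `maxlen`, the oldest examples are discarded as new ones arrive, which is what lets this kind of model track evolving disinformation tactics over time.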

{"title":"OLTW-TEC: online learning with sliding windows for text classifier ensembles.","authors":"Khrystyna Lipianina-Honcharenko, Yevgeniy Bodyanskiy, Nataliia Kustra, Andrii Ivasechkо","doi":"10.3389/frai.2024.1401126","DOIUrl":"https://doi.org/10.3389/frai.2024.1401126","url":null,"abstract":"<p><p>In the digital age, rapid dissemination of information has elevated the challenge of distinguishing between authentic news and disinformation. This challenge is particularly acute in regions experiencing geopolitical tensions, where information plays a pivotal role in shaping public perception and policy. The prevalence of disinformation in the Ukrainian-language information space, intensified by the hybrid war with russia, necessitates the development of sophisticated tools for its detection and mitigation. Our study introduces the \"Online Learning with Sliding Windows for Text Classifier Ensembles\" (OLTW-TEC) method, designed to address this urgent need. This research aims to develop and validate an advanced machine learning method capable of dynamically adapting to evolving disinformation tactics. The focus is on creating a highly accurate, flexible, and efficient system for detecting disinformation in Ukrainian-language texts. The OLTW-TEC method leverages an ensemble of classifiers combined with a sliding window technique to continuously update the model with the most recent data, enhancing its adaptability and accuracy over time. A unique dataset comprising both authentic and fake news items was used to evaluate the method's performance. Advanced metrics, including precision, recall, and F1-score, facilitated a comprehensive analysis of its effectiveness. The OLTW-TEC method demonstrated exceptional performance, achieving a classification accuracy of 93%. 
The integration of the sliding window technique with a classifier ensemble significantly contributed to the system's ability to accurately identify disinformation, making it a robust tool in the ongoing battle against fake news in the Ukrainian context. The application of the OLTW-TEC method highlights its potential as a versatile and effective solution for disinformation detection. Its adaptability to the specifics of the Ukrainian language and the dynamic nature of information warfare offers valuable insights into the development of similar tools for other languages and regions. OLTW-TEC represents a significant advancement in the detection of disinformation within the Ukrainian-language information space. Its development and successful implementation underscore the importance of innovative machine learning techniques in combating fake news, paving the way for further research and application in the field of digital information integrity.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1401126"},"PeriodicalIF":3.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422347/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dimensions of artificial intelligence on family communication.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1398960
Nada Mohammed Alfeir

Introduction: Artificial intelligence (AI) has created a plethora of prospects for communication. The study aims to examine the impacts of AI dimensions on family communication. By investigating the multifaceted effects of AI on family communication, this research aims to provide valuable insights, uncover potential concerns, and offer recommendations for both families and society at large in this digital era.

Method: A convenience sampling technique was adopted to recruit 300 participants.

Results: A linear regression model was measured to examine the impact of AI dimensions which showed a statistically significant effect on accessibility (p = 0.001), personalization (p = 0.001), and language translation (p = 0.016).

Discussion: The findings showed differences between males and females for accessibility (p = 0.006) and language translation (p = 0.010), but not for personalization (p = 0.126). However, using multiple AI tools was statistically associated with parents' heightened concerns about bias and privacy (p = 0.015) and about safety and dependence (p = 0.049).

Conclusion: The results showed a lack of knowledge and transparency about the data storage and privacy policy of AI-enabled communication systems. Overall, there was a positive impact of AI dimensions on family communication.
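The study's central analysis is a linear regression of family-communication outcomes on AI-dimension scores. The least-squares fit behind such a model can be illustrated with a minimal single-predictor sketch; the data below are made up for illustration and are unrelated to the study's sample of 300 participants:

```python
def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x with one predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx                       # slope
    a = my - b * mx                     # intercept
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot            # proportion of variance explained
    return a, b, r2

# Hypothetical accessibility scores (x) vs. family-communication scores (y).
x = [1, 2, 3, 4, 5]
y = [2.1, 2.9, 4.2, 4.8, 6.0]
a, b, r2 = ols_fit(x, y)
print(round(b, 2), round(r2, 3))
```

A significance test on the slope (the source of the reported p-values) would additionally require the slope's standard error and a t-distribution, which statistical packages compute alongside this fit.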

{"title":"Dimensions of artificial intelligence on family communication.","authors":"Nada Mohammed Alfeir","doi":"10.3389/frai.2024.1398960","DOIUrl":"https://doi.org/10.3389/frai.2024.1398960","url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence (AI) has created a plethora of prospects for communication. The study aims to examine the impacts of AI dimensions on family communication. By investigating the multifaceted effects of AI on family communication, this research aims to provide valuable insights, uncover potential concerns, and offer recommendations for both families and society at large in this digital era.</p><p><strong>Method: </strong>A convenience sampling technique was adopted to recruit 300 participants.</p><p><strong>Results: </strong>A linear regression model was measured to examine the impact of AI dimensions which showed a statistically significant effect on accessibility (<i>p</i> = 0.001), personalization (<i>p</i> = 0.001), and language translation (<i>p</i> = 0.016).</p><p><strong>Discussion: </strong>The findings showed that in terms of accessibility (<i>p</i> = 0.006), and language translation (<i>p</i> = 0.010), except personalization (<i>p</i> = 0.126), there were differences between males and females. However, using multiple AI tools was statistically associated with raising concerns about bias and privacy (<i>p</i> = 0.015), safety, and dependence (<i>p</i> = 0.049) of parents.</p><p><strong>Conclusion: </strong>The results showed a lack of knowledge and transparency about the data storage and privacy policy of AI-enabled communication systems. 
Overall, there was a positive impact of AI dimensions on family communication.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1398960"},"PeriodicalIF":3.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422382/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep-learning pipeline for the diagnosis and grading of common blinding ophthalmic diseases based on lesion-focused classification model.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-11 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1444136
Zhihuan Li, Junxiong Huang, Jingfang Chen, Jin Zeng, Hong Jiang, Lin Ding, TianZi Zhang, Wen Sun, Rong Lu, Qiuli Zhang, Lizhong Liang

Background: Glaucoma (GLAU), Age-related Macular Degeneration (AMD), Retinal Vein Occlusion (RVO), and Diabetic Retinopathy (DR) are common blinding ophthalmic diseases worldwide.

Purpose: This approach is expected to enhance the early detection and treatment of common blinding ophthalmic diseases, contributing to the reduction of individual and economic burdens associated with these conditions.

Methods: We propose an effective deep-learning pipeline that combines a segmentation model and a classification model for the diagnosis and grading of four common blinding ophthalmic diseases and normal retinal fundus images.

Results: In total, 102,786 fundus images of 75,682 individuals were used for training, internal validation, and external validation. On the internal validation dataset, the model reached a micro Area Under the Receiver Operating Characteristic curve (AUROC) of 0.995. We then fine-tuned the diagnosis model to classify each of the four diseases into early and late stages, achieving AUROCs of 0.597 (GLAU), 0.877 (AMD), 0.972 (RVO), and 0.961 (DR), respectively. To test the generalization of our model, we conducted two external validation experiments on the Neimeng and Guangxi cohorts, both of which maintained high accuracy.

Conclusion: Our algorithm provides an accurate artificial-intelligence diagnosis pipeline for common blinding ophthalmic diseases based on lesion-focused fundus images. It overcomes the low accuracy of traditional classification methods built on raw retinal images and generalizes well to diverse cases from different regions.
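The two-stage, lesion-focused design — segment first, then classify only the lesion regions — can be sketched with stand-in functions. The thresholding "segmenter" and signal-sum "classifier" below are placeholders for the paper's deep networks, and the grading cutoffs are invented for illustration:

```python
def segment_lesions(image, threshold=0.5):
    """Stage 1 (stand-in): mark pixels above a brightness threshold as lesion."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def lesion_focused(image, mask):
    """Zero out non-lesion pixels so the classifier sees only lesion regions."""
    return [[px * m for px, m in zip(row, mrow)] for row, mrow in zip(image, mask)]

def classify(image):
    """Stage 2 (stand-in): grade severity by total lesion signal."""
    total = sum(sum(row) for row in image)
    if total == 0:
        return "normal"
    return "early" if total < 2.0 else "late"

fundus = [
    [0.1, 0.2, 0.1],
    [0.2, 0.9, 0.8],
    [0.1, 0.7, 0.2],
]
mask = segment_lesions(fundus)
print(classify(lesion_focused(fundus, mask)))  # lesion signal 0.9+0.8+0.7 = 2.4 → late
```

The point of the masking step is that the downstream classifier never sees background retina, which is what distinguishes this pipeline from classification on raw fundus images.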

{"title":"A deep-learning pipeline for the diagnosis and grading of common blinding ophthalmic diseases based on lesion-focused classification model.","authors":"Zhihuan Li, Junxiong Huang, Jingfang Chen, Jin Zeng, Hong Jiang, Lin Ding, TianZi Zhang, Wen Sun, Rong Lu, Qiuli Zhang, Lizhong Liang","doi":"10.3389/frai.2024.1444136","DOIUrl":"https://doi.org/10.3389/frai.2024.1444136","url":null,"abstract":"<p><strong>Background: </strong>Glaucoma (GLAU), Age-related Macular Degeneration (AMD), Retinal Vein Occlusion (RVO), and Diabetic Retinopathy (DR) are common blinding ophthalmic diseases worldwide.</p><p><strong>Purpose: </strong>This approach is expected to enhance the early detection and treatment of common blinding ophthalmic diseases, contributing to the reduction of individual and economic burdens associated with these conditions.</p><p><strong>Methods: </strong>We propose an effective deep-learning pipeline that combine both segmentation model and classification model for diagnosis and grading of four common blinding ophthalmic diseases and normal retinal fundus.</p><p><strong>Results: </strong>In total, 102,786 fundus images of 75,682 individuals were used for training validation and external validation purposes. We test our model on internal validation data set, the micro Area Under the Receiver Operating Characteristic curve (AUROC) of which reached 0.995. Then, we fine-tuned the diagnosis model to classify each of the four disease into early and late stage, respectively, which achieved AUROCs of 0.597 (GL), 0.877 (AMD), 0.972 (RVO), and 0.961 (DR) respectively. 
To test the generalization of our model, we conducted two external validation experiments on Neimeng and Guangxi cohort, all of which maintained high accuracy.</p><p><strong>Conclusion: </strong>Our algorithm demonstrates accurate artificial intelligence diagnosis pipeline for common blinding ophthalmic diseases based on Lesion-Focused fundus that overcomes the low-accuracy of the traditional classification method that based on raw retinal images, which has good generalization ability on diverse cases in different regions.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1444136"},"PeriodicalIF":3.0,"publicationDate":"2024-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11422385/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142355497","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Large language model triaging of simulated nephrology patient inbox messages.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1452469
Justin H Pham, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Priscilla Koirala, Iasmina M Craici, Wisit Cheungpasitporn

Background: Efficient triage of patient communications is crucial for timely medical attention and improved care. This study evaluates ChatGPT's accuracy in categorizing nephrology patient inbox messages, assessing its potential in outpatient settings.

Methods: One hundred and fifty simulated patient inbox messages were created based on cases typically encountered in everyday practice at a nephrology outpatient clinic. These messages were triaged as non-urgent, urgent, and emergent by two nephrologists. The messages were then submitted to ChatGPT-4 for independent triage into the same categories. The inquiry process was performed twice with a two-week period in between. ChatGPT responses were graded as correct (agreement with physicians), overestimation (higher priority), or underestimation (lower priority).

Results: In the first trial, ChatGPT correctly triaged 140 (93%) messages, overestimated the priority of 4 messages (3%), and underestimated the priority of 6 messages (4%). In the second trial, it correctly triaged 140 (93%) messages, overestimated the priority of 9 (6%), and underestimated the priority of 1 (1%). The accuracy did not depend on the urgency level of the message (p = 0.19). The internal agreement of ChatGPT responses was 92% with an intra-rater Kappa score of 0.88.

Conclusion: ChatGPT-4 demonstrated high accuracy in triaging nephrology patient messages, highlighting the potential for AI-driven triage systems to enhance operational efficiency and improve patient care in outpatient clinics.
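The abstract reports an intra-rater Kappa of 0.88 for ChatGPT's two trials. Cohen's kappa corrects raw agreement for the agreement expected by chance; a minimal sketch of the statistic follows (the example ratings are made up and are far smaller than the study's 150 messages):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two sets of categorical ratings."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    # Expected agreement if both raters labeled independently at their base rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

trial_1 = ["non-urgent"] * 6 + ["urgent"] * 3 + ["emergent"]
trial_2 = ["non-urgent"] * 5 + ["urgent"] * 4 + ["emergent"]
print(round(cohens_kappa(trial_1, trial_2), 2))  # → 0.82
```

A kappa of 0.88, as reported, is conventionally read as near-perfect consistency between the two triage passes.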

{"title":"Large language model triaging of simulated nephrology patient inbox messages.","authors":"Justin H Pham, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Priscilla Koirala, Iasmina M Craici, Wisit Cheungpasitporn","doi":"10.3389/frai.2024.1452469","DOIUrl":"10.3389/frai.2024.1452469","url":null,"abstract":"<p><strong>Background: </strong>Efficient triage of patient communications is crucial for timely medical attention and improved care. This study evaluates ChatGPT's accuracy in categorizing nephrology patient inbox messages, assessing its potential in outpatient settings.</p><p><strong>Methods: </strong>One hundred and fifty simulated patient inbox messages were created based on cases typically encountered in everyday practice at a nephrology outpatient clinic. These messages were triaged as non-urgent, urgent, and emergent by two nephrologists. The messages were then submitted to ChatGPT-4 for independent triage into the same categories. The inquiry process was performed twice with a two-week period in between. ChatGPT responses were graded as correct (agreement with physicians), overestimation (higher priority), or underestimation (lower priority).</p><p><strong>Results: </strong>In the first trial, ChatGPT correctly triaged 140 (93%) messages, overestimated the priority of 4 messages (3%), and underestimated the priority of 6 messages (4%). In the second trial, it correctly triaged 140 (93%) messages, overestimated the priority of 9 (6%), and underestimated the priority of 1 (1%). The accuracy did not depend on the urgency level of the message (<i>p</i> = 0.19). 
The internal agreement of ChatGPT responses was 92% with an intra-rater Kappa score of 0.88.</p><p><strong>Conclusion: </strong>ChatGPT-4 demonstrated high accuracy in triaging nephrology patient messages, highlighting the potential for AI-driven triage systems to enhance operational efficiency and improve patient care in outpatient clinics.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1452469"},"PeriodicalIF":3.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11417033/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308684","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A modified U-Net to detect real sperms in videos of human sperm cell.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1376546
Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki

Background: This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation.

Methods: The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance.

Results: Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.

Discussion: The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells.

Conclusion: This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
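The abstract names the architectures (UNet++ with a ResNet34 encoder) but not the evaluation code. Segmentation models like these are conventionally scored with mask-overlap metrics such as Dice and IoU; a minimal sketch on toy binary masks, not the VISEM evaluation itself:

```python
def dice_and_iou(pred, truth):
    """Overlap metrics between a predicted and a ground-truth binary mask."""
    p = [px for row in pred for px in row]    # flatten predicted mask
    t = [px for row in truth for px in row]   # flatten ground-truth mask
    inter = sum(a & b for a, b in zip(p, t))
    union = sum(a | b for a, b in zip(p, t))
    dice = 2 * inter / (sum(p) + sum(t))
    iou = inter / union
    return dice, iou

pred  = [[1, 1, 0],
         [0, 1, 0]]
truth = [[1, 1, 0],
         [0, 0, 1]]
dice, iou = dice_and_iou(pred, truth)
print(round(dice, 2), round(iou, 2))  # → 0.67 0.5
```

Closely adjacent sperm cells, the failure case the abstract highlights, are exactly where these overlap metrics drop: touching cells tend to merge into one predicted blob, inflating false-positive pixels at the boundary.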

{"title":"A modified U-Net to detect real sperms in videos of human sperm cell.","authors":"Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki","doi":"10.3389/frai.2024.1376546","DOIUrl":"10.3389/frai.2024.1376546","url":null,"abstract":"<p><strong>Background: </strong>This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation.</p><p><strong>Methods: </strong>The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance.</p><p><strong>Results: </strong>Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.</p><p><strong>Discussion: </strong>The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells.</p><p><strong>Conclusion: </strong>This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. 
Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1376546"},"PeriodicalIF":3.0,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11418809/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142308683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1410790
Jaime Govea, Rommel Gutierrez, William Villegas-Ch

In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.
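The study applies LIME and SHAP to trained recommendation models; both rest on the same perturbation idea, which can be shown in miniature. The linear scorer, feature names, and weights below are toy assumptions, not the study's models:

```python
def score(features, weights):
    """Stand-in recommender score: weighted sum of item features."""
    return sum(weights[k] * v for k, v in features.items())

def explain_by_occlusion(features, weights):
    """Attribute the score to each feature by removing it and measuring the
    drop, in the spirit of perturbation-based explainers like LIME/SHAP."""
    base = score(features, weights)
    contributions = {}
    for k in features:
        reduced = {f: v for f, v in features.items() if f != k}
        contributions[k] = base - score(reduced, weights)
    return contributions

weights = {"genre_match": 2.0, "recency": 0.5, "popularity": 1.0}
item = {"genre_match": 1.0, "recency": 0.8, "popularity": 0.3}
contrib = explain_by_occlusion(item, weights)
print(max(contrib, key=contrib.get))  # → genre_match
```

Surfacing per-feature contributions like these alongside each recommendation is what lets users see why an item was suggested, which is the transparency gain the study measures.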

{"title":"Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.","authors":"Jaime Govea, Rommel Gutierrez, William Villegas-Ch","doi":"10.3389/frai.2024.1410790","DOIUrl":"https://doi.org/10.3389/frai.2024.1410790","url":null,"abstract":"<p><p>In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. 
For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1410790"},"PeriodicalIF":3.0,"publicationDate":"2024-09-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11410769/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142297097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1419638
Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani

Introduction: Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets like ImageNet. Conversely, medical images possess unique visual characteristics that such general models may not adequately capture.

Methods: This study examines the effectiveness of modality-specific pretext learning strengthened by image denoising and deblurring in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings, i.e., normal lungs, or with cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, viz., the VGG-16 model pretrained only on ImageNet. Measures used for performance evaluation are balanced accuracy, sensitivity, specificity, F-score, Matthew's Correlation Coefficient (MCC), Kappa statistic, and Youden's index.

Results: Our findings reveal that models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model, viz., Baseline, and achieve significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (Balanced accuracy: 0.6376; Sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782; Youden's index: 0.2751), compared to Baseline (Balanced accuracy: 0.5654; Sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599; Youden's index: 0.1327).

Discussion: The superior results of CXR modality-specific pretext learning and their ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification. Results from this study promote further exploration of medical modality-specific TL techniques in the development of DL models for various medical imaging applications.
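The metrics reported above are all derived from the entries of a 2x2 confusion matrix. As a minimal illustrative sketch (not the study's code, and using made-up counts rather than the study's data), the relationships are:

```python
# Sketch: how balanced accuracy, F-score, MCC, Kappa, and Youden's index
# all follow from a 2x2 confusion matrix. Counts here are illustrative only.
import math

def binary_metrics(tp, fp, tn, fn):
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    bal_acc = (sens + spec) / 2           # balanced accuracy
    prec = tp / (tp + fp)                 # precision
    f_score = 2 * prec * sens / (prec + sens)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )                                     # Matthews Correlation Coefficient
    youden = sens + spec - 1              # Youden's index
    n = tp + fp + tn + fn
    po = (tp + tn) / n                    # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (po - pe) / (1 - pe)          # Cohen's Kappa
    return dict(sensitivity=sens, specificity=spec,
                balanced_accuracy=bal_acc, f_score=f_score,
                mcc=mcc, youden=youden, kappa=kappa)

m = binary_metrics(tp=40, fp=10, tn=80, fn=20)
```

Note that Youden's index is simply sensitivity + specificity - 1, which is why the two quantities move together in the results above.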

Citations: 0
MixTrain: accelerating DNN training via input mixing.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-04 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1387936
Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan

Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs that it represents. This results in a lower number of inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2 and two Vision Transformer networks, we obtain up to 1.6× and 1.8× speedups in training for the ImageNet and Cifar10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.
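The core idea of input mixing can be sketched as a mixup-style blend: two inputs are combined into one composite input with a matching composite label, so each epoch processes half as many samples. This is an illustrative sketch of the general technique under that assumption, not the authors' exact MixTrain implementation (which additionally exploits spatial separation in intermediate representations and loss-adaptive ratios):

```python
# Illustrative input-mixing sketch (mixup-style), not the paper's exact method:
# blend two inputs and their one-hot labels with a mixing ratio.
import numpy as np

def mix_pair(x1, y1, x2, y2, ratio=0.5):
    """Blend two inputs and their one-hot labels into one composite sample."""
    x_mix = ratio * x1 + (1 - ratio) * x2
    y_mix = ratio * y1 + (1 - ratio) * y2   # composite label keeps both classes
    return x_mix, y_mix

rng = np.random.default_rng(0)
x1, x2 = rng.random((3, 32, 32)), rng.random((3, 32, 32))  # two CIFAR-like images
y1, y2 = np.eye(10)[3], np.eye(10)[7]       # one-hot labels for classes 3 and 7
x_mix, y_mix = mix_pair(x1, y1, x2, y2, ratio=0.6)
# y_mix assigns weight 0.6 to class 3 and 0.4 to class 7, matching the blend.
```

A loss-adaptive variant, as the abstract describes, would set `ratio` per pair from the constituent inputs' losses in previous epochs.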

Citations: 0
Artificial intelligence in respiratory care: knowledge, perceptions, and practices-a cross-sectional study.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-03 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1451963
Jithin K Sreedharan, Asma Alharbi, Amal Alsomali, Gokul Krishna Gopalakrishnan, Abdullah Almojaibel, Rawan Alajmi, Ibrahim Albalawi, Musallam Alnasser, Meshal Alenezi, Abdullah Alqahtani, Mohammed Alahmari, Eidan Alzahrani, Manjush Karthika

Background: Artificial intelligence (AI) is reforming healthcare, particularly in respiratory medicine and critical care, by utilizing big and synthetic data to improve diagnostic accuracy and therapeutic benefits. This survey aimed to evaluate the knowledge, perceptions, and practices of respiratory therapists (RTs) regarding AI to effectively incorporate these technologies into the clinical practice.

Methods: The study, approved by the institutional review board, targeted RTs working in the Kingdom of Saudi Arabia. The validated questionnaire collected reflective insights from 448 RTs in Saudi Arabia. Descriptive statistics, thematic analysis, Fisher's exact test, and chi-square test were used to evaluate the significance of the data.

Results: The survey revealed a nearly equal distribution of genders (51% female, 49% male). Most respondents were in the 20-25 age group (54%), held bachelor's degrees (69%), and had 0-5 years of experience (73%). While 28% had some knowledge of AI, only 8.5% had practical experience. Significant gender disparities in AI knowledge were noted (p < 0.001). Key findings included 59% advocating for basics of AI in the curriculum, 51% believing AI would play a vital role in respiratory care, and 41% calling for specialized AI personnel. Major challenges identified included knowledge deficiencies (23%), skill enhancement (23%), and limited access to training (17%).

Conclusion: This study highlights differences in the levels of knowledge and perceptions regarding AI among respiratory care professionals, underlining its recognized significance and growing awareness of its future role in the field. Tailored education and strategic planning are crucial for enhancing the quality of respiratory care with the integration of AI. Addressing these gaps is essential for realizing the full potential of AI in advancing respiratory care practices.
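The gender-disparity result above rests on a 2x2 test of independence. As a minimal sketch of the chi-square statistic such a comparison uses, with made-up counts rather than the study's data:

```python
# Hedged sketch (illustrative counts, not the study's data): Pearson
# chi-square statistic for a 2x2 contingency table, as used to test
# whether AI knowledge is independent of a grouping such as gender.
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

stat = chi_square_2x2(60, 160, 30, 198)  # hypothetical knowledge-by-gender counts
# Compare stat against the critical value 3.841 (p = 0.05, df = 1);
# larger values indicate a significant association.
```

For small cell counts, Fisher's exact test (also used in the study) is the usual substitute for this approximation.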

Citations: 0
Corrigendum: Contextual emotion detection in images using deep learning.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-03 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1476791
Fatiha Limami, Boutaina Hdioud, Rachid Oulad Haj Thami

[This corrects the article DOI: 10.3389/frai.2024.1386753.].

Citations: 0
Journal: Frontiers in Artificial Intelligence