
Latest articles in Frontiers in Artificial Intelligence

A modified U-Net to detect real sperms in videos of human sperm cell.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1376546
Hanan Saadat, Mohammad Mehdi Sepehri, Mahdi-Reza Borna, Behnam Maleki

Background: This study delves into the crucial domain of sperm segmentation, a pivotal component of male infertility diagnosis. It explores the efficacy of diverse architectural configurations coupled with various encoders, leveraging frames from the VISEM dataset for evaluation.

Methods: The pursuit of automated sperm segmentation led to the examination of multiple deep learning architectures, each paired with distinct encoders. Extensive experimentation was conducted on the VISEM dataset to assess their performance.

Results: Our study evaluated various deep learning architectures with different encoders for sperm segmentation using the VISEM dataset. While each model configuration exhibited distinct strengths and weaknesses, UNet++ with ResNet34 emerged as a top-performing model, demonstrating exceptional accuracy in distinguishing sperm cells from non-sperm cells. However, challenges persist in accurately identifying closely adjacent sperm cells. These findings provide valuable insights for improving automated sperm segmentation in male infertility diagnosis.

Discussion: The study underscores the significance of selecting appropriate model combinations based on specific diagnostic requirements. It also highlights the challenges related to distinguishing closely adjacent sperm cells.

Conclusion: This research advances the field of automated sperm segmentation for male infertility diagnosis, showcasing the potential of deep learning techniques. Future work should aim to enhance accuracy in scenarios involving close proximity between sperm cells, ultimately improving clinical sperm analysis.
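Segmentation quality in studies like this is typically scored by mask overlap against expert annotations; the abstract does not name the exact metric, so the Dice coefficient below is an illustrative assumption, sketched in plain Python over flattened binary masks.

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap between two binary masks, given as flat 0/1 sequences.

    Returns 1.0 for identical masks, 0.0 for disjoint ones. The small
    eps keeps the ratio defined when both masks are empty.
    """
    inter = sum(p * t for p, t in zip(pred, target))  # intersection size
    return (2 * inter + eps) / (sum(pred) + sum(target) + eps)
```

For example, masks `[1, 1, 0, 0]` and `[1, 0, 1, 0]` overlap in one pixel out of two apiece, giving a Dice score of 0.5.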

Citations: 0
Transparency and precision in the age of AI: evaluation of explainability-enhanced recommendation systems.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1410790
Jaime Govea, Rommel Gutierrez, William Villegas-Ch

In today's information age, recommender systems have become an essential tool to filter and personalize the massive data flow to users. However, these systems' increasing complexity and opaque nature have raised concerns about transparency and user trust. Lack of explainability in recommendations can lead to ill-informed decisions and decreased confidence in these advanced systems. Our study addresses this problem by integrating explainability techniques into recommendation systems to improve both the precision of the recommendations and their transparency. We implemented and evaluated recommendation models on the MovieLens and Amazon datasets, applying explainability methods like LIME and SHAP to disentangle the model decisions. The results indicated significant improvements in the precision of the recommendations, with a notable increase in the user's ability to understand and trust the suggestions provided by the system. For example, we saw a 3% increase in recommendation precision when incorporating these explainability techniques, demonstrating their added value in performance and improving the user experience.
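The study applies the actual LIME and SHAP libraries; as a toy illustration of the perturbation idea those methods share, the sketch below scores each feature by how much a black-box model's output drops when that feature is replaced with a baseline value. The scoring function `model` and the zero baseline are assumptions made only for this example.

```python
def occlusion_importance(model, x, baseline=0.0):
    """Per-feature importance: output drop when a feature is occluded.

    model: callable taking a feature list and returning a score.
    x: list of feature values.
    Returns one importance value per feature (larger = bigger score drop).
    """
    base_score = model(x)
    importances = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] = baseline          # occlude one feature at a time
        importances.append(base_score - model(perturbed))
    return importances
```

For a linear model `lambda v: 2 * v[0] + v[1]` at `x = [1.0, 3.0]`, the attributions recover the contributions 2.0 and 3.0 exactly; real LIME/SHAP generalize this with sampled perturbations and weighted surrogate fits.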

Citations: 0
Noise-induced modality-specific pretext learning for pediatric chest X-ray image classification.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-05 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1419638
Sivaramakrishnan Rajaraman, Zhaohui Liang, Zhiyun Xue, Sameer Antani

Introduction: Deep learning (DL) has significantly advanced medical image classification. However, it often relies on transfer learning (TL) from models pretrained on large, generic non-medical image datasets like ImageNet. Conversely, medical images possess unique visual characteristics that such general models may not adequately capture.

Methods: This study examines the effectiveness of modality-specific pretext learning strengthened by image denoising and deblurring in enhancing the classification of pediatric chest X-ray (CXR) images into those exhibiting no findings, i.e., normal lungs, or with cardiopulmonary disease manifestations. Specifically, we use a VGG-16-Sharp-U-Net architecture and leverage its encoder in conjunction with a classification head to distinguish normal from abnormal pediatric CXR findings. We benchmark this performance against the traditional TL approach, viz., the VGG-16 model pretrained only on ImageNet. Measures used for performance evaluation are balanced accuracy, sensitivity, specificity, F-score, Matthews Correlation Coefficient (MCC), Kappa statistic, and Youden's index.

Results: Our findings reveal that models developed from CXR modality-specific pretext encoders substantially outperform the ImageNet-only pretrained model, viz., Baseline, and achieve significantly higher sensitivity (p < 0.05) with marked improvements in balanced accuracy, F-score, MCC, Kappa statistic, and Youden's index. A novel attention-based fuzzy ensemble of the pretext-learned models further improves performance across these metrics (Balanced accuracy: 0.6376; Sensitivity: 0.4991; F-score: 0.5102; MCC: 0.2783; Kappa: 0.2782, and Youden's index: 0.2751), compared to Baseline (Balanced accuracy: 0.5654; Sensitivity: 0.1983; F-score: 0.2977; MCC: 0.1998; Kappa: 0.1599, and Youden's index: 0.1327).
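All of the reported measures derive from the binary confusion matrix. A minimal plain-Python sketch (the counts `tp`, `fp`, `tn`, `fn` in the usage example are hypothetical inputs, not the study's data):

```python
import math

def classification_scores(tp, fp, tn, fn):
    """Binary-classification scores from confusion-matrix counts."""
    n = tp + fp + tn + fn
    sens = tp / (tp + fn)                 # sensitivity (recall)
    spec = tn / (tn + fp)                 # specificity
    prec = tp / (tp + fp)                 # precision
    p_obs = (tp + tn) / n                 # observed agreement (for kappa)
    p_exp = ((tp + fp) * (tp + fn) + (fp + tn) * (fn + tn)) / n ** 2
    return {
        "balanced_accuracy": (sens + spec) / 2,
        "sensitivity": sens,
        "specificity": spec,
        "f_score": 2 * prec * sens / (prec + sens),
        "mcc": (tp * tn - fp * fn)
               / math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)),
        "kappa": (p_obs - p_exp) / (1 - p_exp),
        "youdens_index": sens + spec - 1,
    }
```

With, say, `classification_scores(50, 10, 90, 50)`, sensitivity is 0.5 and specificity 0.9, giving a balanced accuracy of 0.7 and a Youden's index of 0.4.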

Discussion: The superior results of CXR modality-specific pretext learning and their ensemble underscore its potential as a viable alternative to conventional ImageNet pretraining for medical image classification. Results from this study promote further exploration of medical modality-specific TL techniques in the development of DL models for various medical imaging applications.

Citations: 0
MixTrain: accelerating DNN training via input mixing.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-04 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1387936
Sarada Krithivasan, Sanchari Sen, Swagath Venkataramani, Anand Raghunathan

Training Deep Neural Networks (DNNs) places immense compute requirements on the underlying hardware platforms, expending large amounts of time and energy. An important factor contributing to the long training times is the increasing dataset complexity required to reach state-of-the-art performance in real-world applications. To address this challenge, we explore the use of input mixing, where multiple inputs are combined into a single composite input with an associated composite label for training. The goal is for training on the mixed input to achieve a similar effect as training separately on each of the constituent inputs it represents. This results in fewer inputs (or mini-batches) to be processed in each epoch, proportionally reducing training time. We find that naive input mixing leads to a considerable drop in learning performance and model accuracy due to interference between the forward/backward propagation of the mixed inputs. We propose two strategies to address this challenge and realize training speedups from input mixing with minimal impact on accuracy. First, we reduce the impact of inter-input interference by exploiting the spatial separation between the features of the constituent inputs in the network's intermediate representations. We also adaptively vary the mixing ratio of constituent inputs based on their loss in previous epochs. Second, we propose heuristics to automatically identify the subset of the training dataset that is subject to mixing in each epoch. Across ResNets of varying depth, MobileNetV2, and two Vision Transformer networks, we obtain up to 1.6× and 1.8× speedups in training for the ImageNet and Cifar10 datasets, respectively, on an Nvidia RTX 2080Ti GPU, with negligible loss in classification accuracy.
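The paper's full scheme (spatially separated features, adaptive per-input ratios, mixing-subset heuristics) is more involved than can be shown here; the sketch below illustrates only the basic composite-input idea, with a Beta-sampled mixing ratio and hypothetical flat feature vectors and one-hot labels.

```python
import random

def mix_inputs(x1, y1, x2, y2, alpha=0.5, rng=None):
    """Combine two training examples into one composite input/label pair.

    x1, x2: flat feature lists; y1, y2: one-hot label lists.
    lam weights the two constituents; the paper varies the ratio
    adaptively based on past loss, whereas here it is simply drawn
    from a Beta(alpha, alpha) distribution for brevity.
    """
    rng = rng or random.Random(0)
    lam = rng.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

Because the composite label is a convex combination of one-hot labels, it still sums to 1 and can be used directly with a soft cross-entropy loss; training then processes one mixed example where it previously processed two.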

Citations: 0
Artificial intelligence in respiratory care: knowledge, perceptions, and practices-a cross-sectional study.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-03 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1451963
Jithin K Sreedharan, Asma Alharbi, Amal Alsomali, Gokul Krishna Gopalakrishnan, Abdullah Almojaibel, Rawan Alajmi, Ibrahim Albalawi, Musallam Alnasser, Meshal Alenezi, Abdullah Alqahtani, Mohammed Alahmari, Eidan Alzahrani, Manjush Karthika

Background: Artificial intelligence (AI) is transforming healthcare, particularly respiratory medicine and critical care, by utilizing big and synthetic data to improve diagnostic accuracy and therapeutic benefit. This survey aimed to evaluate the knowledge, perceptions, and practices of respiratory therapists (RTs) regarding AI, with the goal of effectively incorporating these technologies into clinical practice.

Methods: The study, approved by the institutional review board, targeted RTs working in the Kingdom of Saudi Arabia. A validated questionnaire collected reflective insights from 448 RTs across Saudi Arabia. Descriptive statistics, thematic analysis, Fisher's exact test, and the chi-square test were used to evaluate the significance of the data.

Results: The survey revealed a nearly equal distribution of genders (51% female, 49% male). Most respondents were in the 20-25 age group (54%), held bachelor's degrees (69%), and had 0-5 years of experience (73%). While 28% had some knowledge of AI, only 8.5% had practical experience. Significant gender disparities in AI knowledge were noted (p < 0.001). Key findings included 59% advocating for basics of AI in the curriculum, 51% believing AI would play a vital role in respiratory care, and 41% calling for specialized AI personnel. Major challenges identified included knowledge deficiencies (23%), skill enhancement (23%), and limited access to training (17%).
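The gender-disparity result comes from a 2×2 contingency test. The study's p-value was presumably computed with standard statistical software, but a self-contained two-sided Fisher exact test can be sketched with stdlib combinatorics (the table counts in the usage note are made up for illustration):

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact test p-value for a 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins that are at least as extreme (no more probable) than the
    observed table.
    """
    row1, row2, col1 = a + b, c + d, a + c
    n = row1 + row2

    def p_table(x):  # probability of a table with x in the top-left cell
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = p_table(a)
    lo = max(0, col1 - row2)             # smallest feasible top-left count
    hi = min(row1, col1)                 # largest feasible top-left count
    return sum(p_table(x) for x in range(lo, hi + 1)
               if p_table(x) <= p_obs * (1 + 1e-9))
```

A perfectly balanced table such as `[[5, 5], [5, 5]]` yields p = 1.0, while a maximally separated table such as `[[10, 0], [0, 10]]` yields a p-value far below 0.05.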

Conclusion: This study highlights differences in the levels of knowledge and perceptions regarding AI among respiratory care professionals, underlining its recognized significance and forward-looking awareness in the field. Tailored education and strategic planning are crucial for enhancing the quality of respiratory care as AI is integrated. Addressing these gaps is essential for realizing the full potential of AI in advancing respiratory care practice.

Citations: 0
Corrigendum: Contextual emotion detection in images using deep learning.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-03 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1476791
Fatiha Limami, Boutaina Hdioud, Rachid Oulad Haj Thami

[This corrects the article DOI: 10.3389/frai.2024.1386753.].

Citations: 0
AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-02 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1457586
Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn

Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI tools such as ChatGPT could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios designed for pre-visit testing.

Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.

Results: In the first round, the accuracy of ChatGPT in assigning correct diagnosis codes was 91% and 99% for versions 3.5 and 4.0, respectively. In the second round, accuracy was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 was more accurate than ChatGPT 3.5 (p = 0.02 and p = 0.002 for the first and second rounds, respectively). Accuracy did not differ significantly between the two rounds (p > 0.05).
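Round accuracy here is exact-match scoring of generated codes against the predetermined correct codes. A minimal sketch (the example codes are hypothetical, and matching on case-normalized strings is an assumption; the abstract does not detail the study's matching rules):

```python
def coding_accuracy(predicted, gold):
    """Fraction of cases where the generated ICD-10 code exactly matches
    the predetermined correct code (after trimming and case-folding)."""
    if len(predicted) != len(gold):
        raise ValueError("predicted and gold must have the same length")
    hits = sum(p.strip().upper() == g.strip().upper()
               for p, g in zip(predicted, gold))
    return hits / len(gold)
```

For instance, `coding_accuracy(["N18.3", "I10", "E11.9"], ["N18.3", "I10", "N17.9"])` scores two of three cases correct, or about 0.67.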

Conclusion: ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.

AI integration in nephrology: evaluating ChatGPT for accurate ICD-10 documentation and coding.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-02 DOI: 10.3389/frai.2024.1457586
Yasir Abdelgadir, Charat Thongprayoon, Jing Miao, Supawadee Suppadungsuk, Justin H Pham, Michael A Mao, Iasmina M Craici, Wisit Cheungpasitporn

Background: Accurate ICD-10 coding is crucial for healthcare reimbursement, patient care, and research. AI implementation, like ChatGPT, could improve coding accuracy and reduce physician burden. This study assessed ChatGPT's performance in identifying ICD-10 codes for nephrology conditions through case scenarios for pre-visit testing.

Methods: Two nephrologists created 100 simulated nephrology cases. ChatGPT versions 3.5 and 4.0 were evaluated by comparing AI-generated ICD-10 codes against predetermined correct codes. Assessments were conducted in two rounds, 2 weeks apart, in April 2024.

Results: In the first round, the accuracy of ChatGPT in assigning correct diagnosis codes was 91% for version 3.5 and 99% for version 4.0. In the second round, accuracy was 87% for version 3.5 and 99% for version 4.0. ChatGPT 4.0 was more accurate than ChatGPT 3.5 (p = 0.02 and p = 0.002 for the first and second rounds, respectively). Accuracy did not differ significantly between the two rounds (p > 0.05).

Conclusion: ChatGPT 4.0 can significantly improve ICD-10 coding accuracy in nephrology through case scenarios for pre-visit testing, potentially reducing healthcare professionals' workload. However, the small error percentage underscores the need for ongoing review and improvement of AI systems to ensure accurate reimbursement, optimal patient care, and reliable research data.
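The grading step described in the Methods (comparing AI-generated ICD-10 codes against predetermined correct codes and computing per-round accuracy) can be sketched as follows. The case data, codes, and function names here are illustrative assumptions, not material from the study.

```python
# Minimal sketch of the evaluation described above: compare model-assigned
# ICD-10 codes against the predetermined correct codes and report accuracy
# for a round. Case data and helper names are hypothetical.

def round_accuracy(gold_codes, predicted_codes):
    """Fraction of cases where the predicted ICD-10 code matches the gold code."""
    if len(gold_codes) != len(predicted_codes):
        raise ValueError("each case needs exactly one prediction")
    correct = sum(g == p for g, p in zip(gold_codes, predicted_codes))
    return correct / len(gold_codes)

# Toy stand-ins for the 100 simulated nephrology cases.
gold = ["N18.3", "N17.9", "I12.9", "N04.9"]
round1_preds = ["N18.3", "N17.9", "I12.9", "N04.1"]  # one miscoded case
round2_preds = ["N18.3", "N17.9", "I12.9", "N04.9"]  # all correct

print(round_accuracy(gold, round1_preds))  # 0.75
print(round_accuracy(gold, round2_preds))  # 1.0
```

Exact-match scoring is the strictest reasonable criterion; a real audit might also track partial matches at the category level (e.g., N18.x), which this sketch deliberately omits.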
Citations: 0
Advanced interpretable diagnosis of Alzheimer's disease using SECNN-RF framework with explainable AI.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-09-02 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1456069
Nabil M AbdelAziz, Wael Said, Mohamed M AbdelHafeez, Asmaa H Ali

Early detection of Alzheimer's disease (AD) is vital for effective treatment, as interventions are most successful in the disease's early stages. Combining Magnetic Resonance Imaging (MRI) with artificial intelligence (AI) offers significant potential for enhancing AD diagnosis. However, traditional AI models often lack transparency in their decision-making processes. Explainable Artificial Intelligence (XAI) is an evolving field that aims to make AI decisions understandable to humans, providing transparency and insight into AI systems. This research introduces the Squeeze-and-Excitation Convolutional Neural Network with Random Forest (SECNN-RF) framework for early AD detection using MRI scans. The SECNN-RF integrates Squeeze-and-Excitation (SE) blocks into a Convolutional Neural Network (CNN) to focus on crucial features and uses Dropout layers to prevent overfitting. It then employs a Random Forest classifier to accurately categorize the extracted features. The SECNN-RF demonstrates high accuracy (99.89%) and offers an explainable analysis, enhancing the model's interpretability. Further exploration of the SECNN framework involved substituting the Random Forest classifier with other machine learning algorithms such as Decision Tree, XGBoost, Support Vector Machine, and Gradient Boosting. While all of these classifiers improved model performance, Random Forest achieved the highest accuracy, followed closely by XGBoost, Gradient Boosting, and Support Vector Machine, with Decision Tree achieving the lowest accuracy.

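The channel-recalibration idea behind the SE blocks mentioned in the abstract can be illustrated with a tiny, dependency-free sketch: squeeze each channel to its global average, pass that vector through two small fully connected layers, and rescale every channel by the resulting sigmoid gate. The dimensions and weights below are illustrative assumptions, not the trained SECNN-RF model.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_block(feature_maps, w1, w2):
    """Toy squeeze-and-excitation recalibration.

    feature_maps: list of C channels, each a 2D list (H x W).
    w1: reduction weights, shape (C_reduced x C).
    w2: expansion weights, shape (C x C_reduced).
    """
    # Squeeze: global average pooling per channel.
    squeezed = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                for ch in feature_maps]
    # Excitation: FC -> ReLU -> FC -> sigmoid.
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w2]
    # Rescale: weight every channel by its gate.
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_maps, gates)]

# Two 2x2 channels, reduced to a single hidden unit (illustrative weights).
fmap = [[[1.0, 2.0], [3.0, 4.0]], [[0.0, 0.0], [0.0, 4.0]]]
w1 = [[0.5, 0.5]]      # 1 x 2 reduction
w2 = [[1.0], [-1.0]]   # 2 x 1 expansion
out = se_block(fmap, w1, w2)
```

In a real network these weights are learned, so channels carrying diagnostically useful features receive gates near 1 while uninformative channels are suppressed, which is the "focus on crucial features" behavior the abstract attributes to the SE blocks.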
Citations: 0
Machine learning-based analysis of Ebola virus' impact on gene expression in nonhuman primates.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-30 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1405332
Mostafa Rezapour, Muhammad Khalid Khan Niazi, Hao Lu, Aarthi Narayanan, Metin Nafi Gurcan

Introduction: This study introduces the Supervised Magnitude-Altitude Scoring (SMAS) methodology, a novel machine learning-based approach for analyzing gene expression data from non-human primates (NHPs) infected with Ebola virus (EBOV). By focusing on host-pathogen interactions, this research aims to enhance the understanding and identification of critical biomarkers for Ebola infection.

Methods: We utilized a comprehensive dataset of NanoString gene expression profiles from Ebola-infected NHPs. The SMAS system combines gene selection based on both statistical significance and expression changes. Employing linear classifiers such as logistic regression, the method facilitates precise differentiation between RT-qPCR positive and negative NHP samples.

Results: The application of SMAS led to the identification of IFI6 and IFI27 as key biomarkers, which demonstrated perfect predictive performance with 100% accuracy and optimal Area Under the Curve (AUC) metrics in classifying various stages of Ebola infection. Additionally, genes including MX1, OAS1, and ISG15 were significantly upregulated, underscoring their vital roles in the immune response to EBOV.

Discussion: Gene Ontology (GO) analysis further elucidated the involvement of these genes in critical biological processes and immune response pathways, reinforcing their significance in Ebola pathogenesis. Our findings highlight the efficacy of the SMAS methodology in revealing complex genetic interactions and response mechanisms, which are essential for advancing the development of diagnostic tools and therapeutic strategies.

Conclusion: This study provides valuable insights into EBOV pathogenesis, demonstrating the potential of SMAS to enhance the precision of diagnostics and interventions for Ebola and other viral infections.
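The combined selection criterion in the Methods (genes chosen on both statistical significance and expression change) can be illustrated with a small volcano-style filter. The cutoffs, the combined ranking score, and the toy expression values below are assumptions for illustration, not the published SMAS implementation.

```python
import math

def select_genes(stats, p_cutoff=0.05, lfc_cutoff=1.0):
    """Keep genes passing BOTH a significance and a fold-change cutoff.

    stats: {gene: (log2_fold_change, p_value)}.
    Returns selected gene names ranked by a combined magnitude score.
    """
    selected = {g: (lfc, p) for g, (lfc, p) in stats.items()
                if p < p_cutoff and abs(lfc) >= lfc_cutoff}
    # Score: larger fold changes and smaller p-values rank higher.
    return sorted(selected,
                  key=lambda g: abs(selected[g][0]) * -math.log10(selected[g][1]),
                  reverse=True)

# Toy per-gene statistics (log2 fold change, p-value).
stats = {
    "IFI6":  (4.2, 1e-6),
    "IFI27": (3.8, 1e-5),
    "MX1":   (2.5, 1e-3),
    "ACTB":  (0.1, 0.9),   # housekeeping gene: unchanged, filtered out
}
print(select_genes(stats))  # ['IFI6', 'IFI27', 'MX1']
```

Requiring both criteria avoids the two classic failure modes of single-criterion filters: highly significant but biologically negligible changes, and large but noisy fold changes. The genes surviving such a filter would then feed a linear classifier (e.g., logistic regression), as the Methods describe.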

Citations: 0
Governing AI in Southeast Asia: ASEAN's way forward.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-08-30 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1411838
Bama Andika Putra

Despite the rapid development of AI, ASEAN has not yet devised a regional governance framework to address relevant existing and future challenges. This is concerning, considering the potential of AI to accelerate GDP growth among ASEAN member states in the coming years. This qualitative inquiry discusses AI governance in Southeast Asia over the past 5 years and the regulatory policies ASEAN can explore to better modulate AI use among its member states. It considers the unique political landscape of the region, defined by the adoption of norms such as non-interference and the prioritization of dialogue, commonly termed the ASEAN Way. The following measures are concluded as potential regional governance frameworks: (1) elevation of the topic's importance in ASEAN's intra- and inter-regional forums to formulate collective regional agreements on AI; (2) adoption of AI governance measures in the field of education, specifically reskilling and upskilling strategies to respond to future transformation of the working landscape; and (3) establishment of an ASEAN working group to bridge knowledge gaps among member states caused by the disparity of AI-readiness in the region.

Citations: 0