
Latest publications in Frontiers in Artificial Intelligence

Corrigendum: Person-based design and evaluation of MIA, a digital medical interview assistant for radiology.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1546421
Kerstin Denecke, Daniel Reichenpfader, Dominic Willi, Karin Kennel, Harald Bonel, Knud Nairz, Nikola Cihoric, Damien Papaux, Hendrik von Tengg-Kobligk

[This corrects the article DOI: 10.3389/frai.2024.1431156.].

Citations: 0
A bird's-eye view of the biological mechanism and machine learning prediction approaches for cell-penetrating peptides.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1497307
Maduravani Ramasundaram, Honglae Sohn, Thirumurthy Madhavan

Cell-penetrating peptides (CPPs) are highly effective at carrying various cargo molecules, such as drugs, proteins, nucleic acids, and nanoparticles, across eukaryotic membranes without causing significant harm. Owing to their unique chemical properties, CPP-based drug delivery systems are being developed for diseases including cancer, genetic disorders, and diabetes. Wet-lab experiments in drug discovery are time-consuming and expensive. Machine learning (ML) techniques, given accurate and detailed data, can enhance and accelerate the drug discovery process. ML classifiers, such as support vector machines (SVM), random forests (RF), gradient-boosted decision trees (GBDT), and various artificial neural networks (ANN), are commonly used for CPP prediction, with performance evaluated by cross-validation. These ML strategies improve functional CPP prediction by drawing on CPP datasets produced by high-throughput sequencing and computational methods. This review focuses on several ML-based CPP prediction tools. We discuss the CPP mechanism to clarify how CPPs pass through cell membranes. We then compare diverse CPP prediction methods in terms of their algorithms, dataset sizes, feature encodings, software utilities, assessment metrics, and prediction scores. Prediction performance is evaluated by accuracy, sensitivity, specificity, and the Matthews correlation coefficient (MCC) on independent datasets. In conclusion, this review encourages the use of ML algorithms for finding effective CPPs, which will benefit future research on drug delivery and therapeutics.
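The evaluation metrics named in the abstract (accuracy, sensitivity, specificity, MCC) can be computed directly from a binary confusion matrix. The sketch below is a minimal illustration with hypothetical counts, not code from any of the reviewed tools:

```python
# Hypothetical illustration: the metrics used to evaluate CPP classifiers
# (accuracy, sensitivity, specificity, MCC) computed from confusion counts
# of CPP vs. non-CPP predictions on an independent test set.
import math

def binary_metrics(tp: int, tn: int, fp: int, fn: int) -> dict:
    """Compute standard binary-classification metrics from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)          # true-positive rate (recall)
    specificity = tn / (tn + fp)          # true-negative rate
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "mcc": mcc}

# Hypothetical counts, for illustration only
m = binary_metrics(tp=80, tn=90, fp=10, fn=20)
print({k: round(v, 3) for k, v in m.items()})
```

MCC is the metric of choice here because, unlike accuracy, it stays informative when the CPP and non-CPP classes are imbalanced.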

Citations: 0
Accelerating computational fluid dynamics simulation of post-combustion carbon capture modeling with MeshGraphNets.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1441985
Bo Lei, Yucheng Fu, Jose Cadena, Amar Saini, Yeping Hu, Jie Bao, Zhijie Xu, Brenda Ng, Phan Nguyen

Packed columns are commonly used in post-combustion processes to capture CO2 emissions by providing enhanced contact area between a CO2-laden gas and a CO2-absorbing solvent. To study and optimize solvent-based post-combustion carbon capture systems (CCSs), computational fluid dynamics (CFD) can be used to model the liquid-gas countercurrent flow hydrodynamics in these columns and derive key determinants of CO2-capture efficiency. However, the large design space of these systems hinders the application of CFD for design optimization due to its high computational cost. In contrast, data-driven modeling approaches can produce fast surrogates for large-scale physics problems. We build our surrogates using MeshGraphNets (MGN), a graph neural network framework that efficiently learns and produces mesh-based simulations. We apply MGN to a random packed column modeled with over 160K graph nodes and a design space consisting of three key input parameters: solvent surface tension, inlet velocity, and contact angle. Our models adapt to a wide range of these parameters and accurately predict the complex interactions within the system at rates over 1,700 times faster than CFD, affirming their practicality in downstream design optimization tasks. This underscores the robustness and versatility of MGN in modeling complex fluid dynamics for large-scale CCS analyses.
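At the core of a mesh-based graph network like MGN is message passing over mesh edges. The following is a minimal numpy sketch of one such step, not the authors' implementation: node features stand in for local flow quantities, edges connect neighboring mesh nodes, and the weight matrices are placeholders for learned parameters.

```python
# Minimal sketch (not the paper's code): one round of graph message passing
# of the kind MeshGraphNets performs on mesh nodes. Node features could hold
# e.g. local velocity; edges connect neighboring mesh nodes.
import numpy as np

def message_passing_step(node_feats, edges, W_msg, W_upd):
    """Aggregate neighbor messages along directed edges, then update nodes."""
    n, d = node_feats.shape
    agg = np.zeros((n, d))
    for src, dst in edges:                      # sum messages from neighbors
        agg[dst] += node_feats[src] @ W_msg
    return np.tanh(node_feats @ W_upd + agg)    # simple nonlinear update

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 3))                 # 4 mesh nodes, 3 features
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]        # a tiny cycle of mesh edges
W = np.eye(3)                                   # placeholder "learned" weights
out = message_passing_step(feats, edges, W, W)
print(out.shape)
```

A trained surrogate stacks many such steps (with learned encoders and decoders) and, once trained on CFD outputs, evaluates in milliseconds rather than the hours a full CFD solve requires.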

Citations: 0
Application progress of artificial intelligence in tumor diagnosis and treatment.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1487207
Fan Sun, Li Zhang, Zhongsheng Tong

The rapid advancement of artificial intelligence (AI) has introduced transformative opportunities in oncology, enhancing the precision and efficiency of tumor diagnosis and treatment. This review examines recent advancements in AI applications across tumor imaging diagnostics, pathological analysis, and treatment optimization, with a particular focus on breast cancer, lung cancer, and liver cancer. By synthesizing findings from peer-reviewed studies published over the past decade, this paper analyzes the role of AI in enhancing diagnostic accuracy, streamlining therapeutic decision-making, and personalizing treatment strategies. Additionally, this paper addresses challenges related to AI integration into clinical workflows and regulatory compliance. As AI continues to evolve, its applications in oncology promise further improvements in patient outcomes, though additional research is needed to address its limitations and ensure ethical and effective deployment.

Citations: 0
Visceral condition assessment through digital tongue image analysis.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1501184
Siu Cheong Ho, Yiliang Chen, Yao Jie Xie, Wing-Fai Yeung, Shu-Cheng Chen, Jing Qin

Traditional Chinese medicine (TCM) has long used tongue diagnosis as a crucial method for assessing internal visceral condition. This study aims to modernize this ancient practice by developing an automated system for analyzing tongue images in relation to the five organs, the heart, liver, spleen, lung, and kidney, collectively known as the "five viscera" in TCM. We propose a novel tongue image partitioning algorithm that divides the tongue into four regions associated with these organs, according to TCM principles. The partitioned regions are then processed by our newly developed OrganNet, a specialized neural network designed to focus on organ-specific features. Our method simulates the TCM diagnostic process while leveraging modern machine learning techniques. To support this research, we created a comprehensive tongue image dataset tailored for this five-viscera pattern assessment. Results demonstrate the effectiveness of our approach in accurately identifying correlations between tongue regions and visceral conditions. This study bridges TCM practices with contemporary technology, potentially enhancing diagnostic accuracy and efficiency in both TCM and modern medical contexts.
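The paper's partitioning algorithm is not reproduced in the abstract; as a purely hypothetical sketch, a four-region split of a tongue bounding box can be expressed as a label map over pixel coordinates (the zone geometry below is an assumption for illustration, not the authors' method):

```python
# Hypothetical sketch: divide a tongue bounding box of size h x w into four
# coarse zones as an integer label map. The zone layout here is illustrative
# only and does not reproduce the paper's partitioning algorithm.
import numpy as np

def partition_tongue(h: int, w: int) -> np.ndarray:
    """Return a label map: 0=root, 1=center, 2=sides, 3=tip."""
    labels = np.full((h, w), 2, dtype=int)                   # default: sides
    third_w = w // 3
    labels[: h // 4, :] = 0                                  # top band: root
    labels[h // 4 : 3 * h // 4, third_w : 2 * third_w] = 1   # middle: center
    labels[3 * h // 4 :, :] = 3                              # bottom band: tip
    return labels

lab = partition_tongue(8, 9)
print(sorted(set(lab.ravel().tolist())))  # [0, 1, 2, 3]
```

Each labeled region can then be cropped out and fed to an organ-specific branch of a network such as the OrganNet described above.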

Citations: 0
Evaluating the role of generative AI and color patterns in the dissemination of war imagery and disinformation on social media.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1457247
Estibaliz García-Huete, Sara Ignacio-Cerrato, David Pacios, José Luis Vázquez-Poletti, María José Pérez-Serrano, Andrea Donofrio, Clemente Cesarano, Nikolaos Schetakis, Alessio Di Iorio

This study explores the evolving role of social media in the spread of misinformation during the Ukraine-Russia conflict, with a focus on how artificial intelligence (AI) contributes to the creation of deceptive war imagery. Specifically, the research examines the relationship between color patterns (LUTs) in war-related visuals and their perceived authenticity, highlighting the economic, political, and social ramifications of such manipulative practices. AI technologies have significantly advanced the production of highly convincing, yet artificial, war imagery, blurring the line between fact and fiction. An experimental project is proposed to train a generative AI model capable of creating war imagery that mimics real-life footage. By analyzing the success of this experiment, the study aims to establish a link between specific color patterns and the likelihood of images being perceived as authentic. This could shed light on the mechanics of visual misinformation and manipulation. Additionally, the research investigates the potential of a serverless AI framework to advance both the generation and detection of fake news, marking a pivotal step in the fight against digital misinformation. Ultimately, the study seeks to contribute to ongoing debates on the ethical implications of AI in information manipulation and to propose strategies to combat these challenges in the digital era.
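The color patterns the study examines are lookup tables (LUTs), which remap pixel values to shift an image's tonal character. As a minimal sketch under stated assumptions (a 1D per-channel 8-bit LUT; not the authors' pipeline), applying one is a single indexing operation:

```python
# Minimal sketch (assumption: 1D, 8-bit LUT; not the study's pipeline) of
# applying a color lookup table to an image channel, the kind of color-
# pattern transform the study relates to perceived authenticity.
import numpy as np

def apply_lut(channel: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each 8-bit pixel value through a 256-entry lookup table."""
    return lut[channel]

# A simple brightening LUT: scale values by 1.2, clipped to the 8-bit range
lut = np.clip(np.round(np.arange(256) * 1.2), 0, 255).astype(np.uint8)
img = np.array([[0, 100, 200], [50, 150, 250]], dtype=np.uint8)
print(apply_lut(img, lut))
```

Because a LUT is a deterministic global transform, its statistical fingerprint in an image is one plausible signal for the authenticity analysis the study proposes.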

Citations: 0
Ocular Biometry OCR: a machine learning algorithm leveraging optical character recognition to extract intra ocular lens biometry measurements.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-06 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1428716
Anish Salvi, Leo Arnal, Kevin Ly, Gabriel Ferreira, Sophia Y Wang, Curtis Langlotz, Vinit Mahajan, Chase A Ludwig

Given the close relationships between ocular structure and ophthalmic disease, ocular biometry measurements (including axial length, lens thickness, anterior chamber depth, and keratometry values) may be leveraged as features in the prediction of eye diseases. However, ocular biometry measurements are often stored as PDFs rather than as structured data in electronic health records. Thus, time-consuming and laborious manual data entry is required to use biometry data as a disease predictor. Herein, we used two separate models, PaddleOCR and Gemini, to extract eye-specific biometric measurements from 2,965 Lenstar, 104 IOL Master 500, and 3,616 IOL Master 700 optical biometry reports. For each patient eye, our text extraction pipeline, referred to as Ocular Biometry OCR, involves (1) cropping the report to the biometric data, (2) extracting the text via the optical character recognition model, (3) post-processing the metrics and values into key-value pairs, (4) correcting erroneous angles within the pairs, (5) computing the number of errors or missing values, and (6) selecting the window-specific results with the fewest errors or missing values. To ensure the models' predictions could be put into a machine learning-ready format, artifacts were removed from categorical text data through manual modification where necessary. Performance was evaluated by scoring PaddleOCR and Gemini results. In the absence of ground truth, higher scores indicated greater inter-model reliability, assuming that an equal value between models indicated an accurate result. The detection scores, measuring the number of valid values (i.e., not missing or erroneous), were Lenstar: 0.990, IOLM 500: 1.000, and IOLM 700: 0.998. The similarity scores, measuring the number of equal values, were Lenstar: 0.995, IOLM 500: 0.999, and IOLM 700: 0.999. The agreement scores, combining detection and similarity scores, were Lenstar: 0.985, IOLM 500: 0.999, and IOLM 700: 0.998. IOLM 500 was annotated for ground truths; in this case, higher scores indicated greater model-to-annotator accuracy. PaddleOCR-to-Annotator achieved scores of detection: 1.000, similarity: 0.999, and agreement: 0.999. Gemini-to-Annotator achieved scores of detection: 1.000, similarity: 1.000, and agreement: 1.000. Scores range from 0 to 1. While PaddleOCR and Gemini demonstrated high agreement, PaddleOCR offered slightly better performance upon review of quantitative and qualitative results.
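The detection/similarity/agreement scoring in the abstract can be sketched as follows. The exact formulas are the authors'; this is one plausible reading (detection as the fraction of non-missing values, similarity as the fraction of matching valid values, agreement as their product), with hypothetical field names and values:

```python
# Hedged sketch of inter-model scoring between two OCR extractions, under the
# assumption that detection = fraction of non-missing values, similarity =
# fraction of matching valid values, and agreement = detection * similarity.
# Field names and values below are hypothetical.
def score_extractions(a: dict, b: dict) -> dict:
    keys = a.keys() & b.keys()
    valid = [k for k in keys if a[k] is not None and b[k] is not None]
    equal = [k for k in valid if a[k] == b[k]]
    detection = len(valid) / len(keys)
    similarity = len(equal) / len(valid) if valid else 0.0
    return {"detection": detection, "similarity": similarity,
            "agreement": detection * similarity}

paddle = {"axial_length": 23.5, "lens_thickness": 4.1, "acd": None}
gemini = {"axial_length": 23.5, "lens_thickness": 4.2, "acd": 3.1}
print(score_extractions(paddle, gemini))
```

The same function applies unchanged when one of the two dictionaries holds human-annotated ground truth, yielding the model-to-annotator scores reported for the IOLM 500 subset.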

Citations: 0
Reader's digest version of scientific writing: comparative evaluation of summarization capacity between large language models and medical students in analyzing scientific writing in sleep medicine.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-24 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1477535
Jacob Matalon, August Spurzem, Sana Ahsan, Elizabeth White, Ronik Kothari, Madhu Varma

Introduction: As artificial intelligence systems like large language models (LLMs) and natural language processing advance, the need to evaluate their utility within medicine and medical education grows. As medical research publications continue to grow exponentially, AI systems offer valuable opportunities to condense and synthesize information, especially in underrepresented areas such as sleep medicine. The present study compares the summarization capacity of LLM-generated summaries of sleep medicine research article abstracts with summaries generated by medical students (humans), and evaluates whether the research content and readability of the summarized material are retained comparably.

Methods: A collection of three AI-generated and human-generated summaries of sleep medicine research article abstracts were shared with 19 study participants (medical students) attending a sleep medicine conference. Participants were blind as to which summary was human or LLM generated. After reading both human and AI-generated research summaries participants completed a 1-5 Likert scale survey on the readability of the extracted writings. Participants also answered article-specific multiple-choice questions evaluating their comprehension of the summaries, as a representation of the quality of content retained by the AI-generated summaries.

Results: An independent-sample t-test comparing participants' ratings of the AI-generated and human-generated summaries revealed no significant difference in Likert readability ratings (p = 0.702). A chi-squared test of proportions revealed no significant association (χ² = 1.485, p = 0.223), and a McNemar test revealed no significant association between summary type and the proportion of correct responses to the comprehension multiple-choice questions (p = 0.289).

Discussion: Limitations of this study include the small number of participants and potential user bias. Participants attended a sleep medicine conference, and the study summaries were all drawn from sleep medicine journals. Lastly, the summaries did not include graphs, numbers, or pictures, and were thus limited in material extraction. While the present analysis did not demonstrate a significant difference in readability or content quality between the AI- and human-generated summaries, the limitations of the present study indicate that more research is needed to objectively measure, and further define, the strengths and weaknesses of AI models in condensing medical literature into efficient and accurate summaries.
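The statistical comparisons reported in the Results can be sketched with stdlib-only Python. The ratings and answer counts below are synthetic placeholders (the study data are not reproduced here), and the 1-df chi-square p-value uses the identity P(χ² > x) = erfc(√(x/2)):

```python
import math

def t_statistic(a, b):
    """Independent two-sample t statistic with pooled variance."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def chi2_1df_pvalue(x):
    """P(chi-square with 1 df > x), via the error-function identity."""
    return math.erfc(math.sqrt(x / 2))

def mcnemar_statistic(b, c):
    """McNemar chi-square (no continuity correction) from the two
    discordant cell counts of a paired 2x2 table."""
    return (b - c) ** 2 / (b + c)

# Synthetic Likert readability ratings (1-5) for the two summary types.
ai_ratings    = [4, 4, 5, 3, 4, 5, 4, 3, 4, 4]
human_ratings = [4, 5, 4, 4, 3, 4, 5, 4, 4, 3]
t = t_statistic(ai_ratings, human_ratings)

# Synthetic paired comprehension outcomes: b = correct on the AI summary
# only, c = correct on the human summary only.
chi2 = mcnemar_statistic(b=6, c=4)
p = chi2_1df_pvalue(chi2)
```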

Citations: 0
Prediction of PD-L1 tumor positive score in lung squamous cell carcinoma with H&E staining images and deep learning.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1452563
Qiushi Wang, Xixiang Deng, Pan Huang, Qiang Ma, Lianhua Zhao, Yangyang Feng, Yiying Wang, Yuan Zhao, Yan Chen, Peng Zhong, Peng He, Mingrui Ma, Peng Feng, Hualiang Xiao

Background: Detecting programmed death ligand 1 (PD-L1) expression based on immunohistochemical (IHC) staining is an important guide for the treatment of lung cancer with immune checkpoint inhibitors. However, this method has problems such as high staining costs, tumor heterogeneity, and subjective differences among pathologists. Therefore, applying deep learning models to segment and quantitatively predict PD-L1 expression in digital sections of hematoxylin and eosin (H&E)-stained lung squamous cell carcinoma is of great significance.

Methods: We constructed a dataset comprising H&E-stained digital sections of lung squamous cell carcinoma and used a Transformer Unet (TransUnet) deep learning network with an encoder-decoder design to segment PD-L1 negative and positive regions and quantitatively predict the tumor cell positive score (TPS).

Results: The results showed that the Dice similarity coefficient (DSC) and intersection over union (IoU) of the deep learning model for PD-L1 expression segmentation on H&E-stained digital slides of lung squamous cell carcinoma were 80% and 72%, respectively, which were better than those of seven other cutting-edge segmentation models. The root mean square error (RMSE) of the quantitatively predicted TPS was 26.8, and the intra-group correlation coefficient with the gold standard was 0.92 (95% CI: 0.90-0.93), which was better than the consistency between the results of five pathologists and the gold standard.

Conclusion: The deep learning model is capable of segmenting and quantitatively predicting PD-L1 expression in H&E-stained digital sections of lung squamous cell carcinoma, which has significant implications for the application and guidance of immune checkpoint inhibitor treatments. The code is available at https://github.com/Baron-Huang/PD-L1-prediction-via-HE-image.
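The evaluation metrics named in the Results (DSC, IoU, RMSE) can be sketched on toy data; the masks and TPS values below are illustrative only, not the study's data:

```python
import math

def dice_and_iou(pred, truth):
    """Dice similarity coefficient and intersection over union for two
    binary masks given as flat 0/1 sequences of equal length."""
    inter = sum(p and t for p, t in zip(pred, truth))
    psum, tsum = sum(pred), sum(truth)
    dice = 2 * inter / (psum + tsum)
    iou = inter / (psum + tsum - inter)
    return dice, iou

def rmse(pred, truth):
    """Root mean square error between predicted and reference TPS values."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, truth)) / len(pred))

# Toy flattened masks (1 = PD-L1 positive pixel) and TPS values.
pred_mask  = [1, 1, 1, 0, 0, 1, 0, 0]
truth_mask = [1, 1, 0, 0, 1, 1, 0, 0]
dice, iou = dice_and_iou(pred_mask, truth_mask)  # 0.75, 0.6

tps_rmse = rmse([30.0, 55.0, 10.0], [25.0, 60.0, 20.0])
```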

Citations: 0
A graph neural architecture search approach for identifying bots in social media.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2024-12-20 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1509179
Georgios Tzoumanekas, Michail Chatzianastasis, Loukas Ilias, George Kiokes, John Psarras, Dimitris Askounis

Social media platforms, including X, Facebook, and Instagram, host millions of daily users, giving rise to bots: automated programs that disseminate misinformation and ideologies with tangible real-world consequences. While bot detection on platform X has been the focus of many deep learning models with adequate results, most approaches neglect the graph structure of social media relationships and often rely on hand-engineered architectures. Our work introduces the implementation of a Neural Architecture Search (NAS) technique, namely Deep and Flexible Graph Neural Architecture Search (DFG-NAS), tailored to Relational Graph Convolutional Neural Networks (RGCNs) for the task of bot detection on platform X. Our model constructs a graph that incorporates both the user relationships and their metadata. Then, DFG-NAS is adapted to automatically search for the optimal configuration of propagation and transformation functions in the RGCNs. Our experiments are conducted on the TwiBot-20 dataset, constructing a graph with 229,580 nodes and 227,979 edges. We study the five architectures with the highest performance during the search and achieve an accuracy of 85.7%, surpassing state-of-the-art models. Our approach not only addresses the bot detection challenge but also advocates for the broader implementation of NAS models in neural network design automation.
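As a rough illustration of the relational message passing an RGCN layer performs, here is a toy propagation step on scalar node features; real RGCN layers use learned per-relation weight matrices, and the graph, relation names, and weights below are invented for the sketch:

```python
import math

def rgcn_step(features, edges, rel_weight, self_weight=1.0):
    """One toy relational-GCN propagation step on scalar node features.

    features:   {node: float}
    edges:      list of (src, relation, dst) triples
    rel_weight: {relation: float} -- stands in for the learned
                per-relation weight matrices of a real RGCN layer.
    """
    # Group the incoming neighbours of each node by relation.
    incoming = {}
    for src, rel, dst in edges:
        incoming.setdefault((dst, rel), []).append(src)

    updated = {}
    for node, h in features.items():
        total = self_weight * h  # self-loop term
        for (dst, rel), sources in incoming.items():
            if dst != node:
                continue
            norm = 1.0 / len(sources)  # mean aggregation per relation
            total += rel_weight[rel] * norm * sum(features[s] for s in sources)
        updated[node] = math.tanh(total)  # nonlinearity
    return updated

# Toy follower/mention graph with two relation types.
feats = {"a": 0.5, "b": -0.2, "c": 0.1}
edges = [("b", "follows", "a"), ("c", "follows", "a"), ("a", "mentions", "c")]
weights = {"follows": 0.8, "mentions": 0.3}
new_feats = rgcn_step(feats, edges, weights)
```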

Citations: 0