
Frontiers in Artificial Intelligence — Latest Publications

Evaluating accuracy and reproducibility of large language model performance on critical care assessments in pharmacy education.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1514896
Huibo Yang, Mengxuan Hu, Amoreena Most, W Anthony Hawkins, Brian Murray, Susan E Smith, Sheng Li, Andrea Sikora

Background: Large language models (LLMs) have demonstrated impressive performance on medical licensing and diagnosis-related exams. However, comparative evaluations to optimize LLM performance and ability in the domain of comprehensive medication management (CMM) are lacking. The purpose of this evaluation was to test various LLM performance-optimization strategies and to assess LLM performance on critical care pharmacotherapy questions used in the assessment of Doctor of Pharmacy students.

Methods: In a comparative analysis using 219 multiple-choice pharmacotherapy questions, five LLMs (GPT-3.5, GPT-4, Claude 2, Llama2-7b, and Llama2-13b) were evaluated. Each LLM was queried five times to evaluate the primary outcome of accuracy (i.e., correctness). Secondary outcomes included variance, the impact on performance of prompt engineering techniques (e.g., chain-of-thought, CoT) and of training a customized GPT, and comparison to third-year Doctor of Pharmacy students on knowledge recall vs. knowledge application questions. Accuracy and variance were compared across model settings using Student's t-test.
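A minimal sketch of this repeated-query evaluation loop, assuming a hypothetical `ask_llm` wrapper around whichever model API is under test and a question list with `prompt`/`answer` fields; the grading rule and run count are illustrative, not the study's exact protocol.

```python
from statistics import mean
from scipy.stats import ttest_ind

N_RUNS = 5  # each model/setting is queried five times, per the Methods

def run_accuracy(ask_llm, questions):
    """Return one accuracy value per run (fraction of questions answered correctly)."""
    accuracies = []
    for _ in range(N_RUNS):
        correct = sum(1 for q in questions if ask_llm(q["prompt"]).strip() == q["answer"])
        accuracies.append(correct / len(questions))
    return accuracies

def compare_settings(acc_baseline, acc_variant):
    """Two-sample Student's t-test on per-run accuracies of two model settings."""
    t_stat, p_value = ttest_ind(acc_baseline, acc_variant)
    return mean(acc_baseline), mean(acc_variant), p_value
```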

Results: ChatGPT-4 exhibited the highest accuracy (71.6%), while Llama2-13b had the lowest variance (0.070). All LLMs performed more accurately on knowledge recall vs. knowledge application questions (e.g., ChatGPT-4: 87% vs. 67%). When applied to ChatGPT-4, few-shot CoT across five runs improved accuracy (77.4% vs. 71.5%) with no effect on variance. Self-consistency and the custom-trained GPT demonstrated similar accuracy to ChatGPT-4 with few-shot CoT. Overall pharmacy student accuracy was 81%, compared to an optimal overall LLM accuracy of 73%. Comparing question types, six of the LLMs demonstrated equivalent or higher accuracy than pharmacy students on knowledge recall questions (e.g., self-consistency vs. students: 93% vs. 84%), but pharmacy students achieved higher accuracy than all LLMs on knowledge application questions (e.g., self-consistency vs. students: 68% vs. 80%).
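The self-consistency strategy mentioned above can be illustrated with a short sketch: sample several chain-of-thought completions and take a majority vote over the extracted answers. `sample_llm` is a hypothetical sampling wrapper, and the answer-extraction rule is an assumption, not the authors' implementation.

```python
from collections import Counter

def extract_choice(completion: str) -> str:
    """Pull the final multiple-choice letter (A-D) out of a chain-of-thought completion."""
    letters = [c for c in completion.upper() if c in "ABCD"]
    return letters[-1] if letters else ""

def self_consistent_answer(sample_llm, prompt, n_samples=5):
    """Query the model several times with sampling enabled and return the modal answer."""
    votes = [extract_choice(sample_llm(prompt, temperature=0.7)) for _ in range(n_samples)]
    return Counter(votes).most_common(1)[0][0]
```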

Conclusion: ChatGPT-4 was the most accurate LLM on critical care pharmacy questions, and few-shot CoT improved accuracy the most. Average student accuracy was similar to the LLMs overall and higher on knowledge application questions. These findings support the need for future assessment of customized training tailored to the type of output required. Reliance on LLMs is supported only for recall-based questions.

{"title":"Evaluating accuracy and reproducibility of large language model performance on critical care assessments in pharmacy education.","authors":"Huibo Yang, Mengxuan Hu, Amoreena Most, W Anthony Hawkins, Brian Murray, Susan E Smith, Sheng Li, Andrea Sikora","doi":"10.3389/frai.2024.1514896","DOIUrl":"10.3389/frai.2024.1514896","url":null,"abstract":"<p><strong>Background: </strong>Large language models (LLMs) have demonstrated impressive performance on medical licensing and diagnosis-related exams. However, comparative evaluations to optimize LLM performance and ability in the domain of comprehensive medication management (CMM) are lacking. The purpose of this evaluation was to test various LLMs performance optimization strategies and performance on critical care pharmacotherapy questions used in the assessment of Doctor of Pharmacy students.</p><p><strong>Methods: </strong>In a comparative analysis using 219 multiple-choice pharmacotherapy questions, five LLMs (GPT-3.5, GPT-4, Claude 2, Llama2-7b and 2-13b) were evaluated. Each LLM was queried five times to evaluate the primary outcome of accuracy (i.e., correctness). Secondary outcomes included variance, the impact of prompt engineering techniques (e.g., chain-of-thought, CoT) and training of a customized GPT on performance, and comparison to third year doctor of pharmacy students on knowledge recall vs. knowledge application questions. Accuracy and variance were compared with student's t-test to compare performance under different model settings.</p><p><strong>Results: </strong>ChatGPT-4 exhibited the highest accuracy (71.6%), while Llama2-13b had the lowest variance (0.070). All LLMs performed more accurately on knowledge recall vs. knowledge application questions (e.g., ChatGPT-4: 87% vs. 67%). When applied to ChatGPT-4, few-shot CoT across five runs improved accuracy (77.4% vs. 71.5%) with no effect on variance. Self-consistency and the custom-trained GPT demonstrated similar accuracy to ChatGPT-4 with few-shot CoT. Overall pharmacy student accuracy was 81%, compared to an optimal overall LLM accuracy of 73%. Comparing question types, six of the LLMs demonstrated equivalent or higher accuracy than pharmacy students on knowledge recall questions (e.g., self-consistency vs. students: 93% vs. 84%), but pharmacy students achieved higher accuracy than all LLMs on knowledge application questions (e.g., self-consistency vs. students: 68% vs. 80%).</p><p><strong>Conclusion: </strong>ChatGPT-4 was the most accurate LLM on critical care pharmacy questions and few-shot CoT improved accuracy the most. Average student accuracy was similar to LLMs overall, and higher on knowledge application questions. These findings support the need for future assessment of customized training for the type of output needed. Reliance on LLMs is only supported with recall-based questions.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1514896"},"PeriodicalIF":3.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754395/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Protecting digital assets using an ontology based cyber situational awareness system.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1394363
Tariq Ammar Almoabady, Yasser Mohammad Alblawi, Ahmad Emad Albalawi, Majed M Aborokbah, S Manimurugan, Ahmed Aljuhani, Hussain Aldawood, P Karthikeyan

Introduction: Cyber situational awareness is critical for detecting and mitigating cybersecurity threats in real-time. This study introduces a comprehensive methodology that integrates the Isolation Forest and autoencoder algorithms, Structured Threat Information Expression (STIX) implementation, and ontology development to enhance cybersecurity threat detection and intelligence. The Isolation Forest algorithm excels in anomaly detection in high-dimensional datasets, while autoencoders provide nonlinear detection capabilities and adaptive feature learning. Together, they form a robust framework for proactive anomaly detection.

Methods: The proposed methodology leverages the Isolation Forest for efficient anomaly identification and autoencoders for feature learning and nonlinear anomaly detection. Threat information was standardized using the STIX framework, facilitating structured and dynamic assessment of threat intelligence. Ontology development was employed to represent knowledge systematically and enable semantic correlation of threats. Feature mapping enriched datasets with contextual threat information.
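A minimal sketch of the dual-algorithm idea under the assumption of a purely numeric feature matrix; the thresholds, the tiny MLP standing in for the autoencoder, and the score-combination rule are illustrative choices, not the paper's architecture.

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.neural_network import MLPRegressor

def dual_anomaly_scores(X: np.ndarray) -> np.ndarray:
    """Combine Isolation Forest scores with an autoencoder-style reconstruction error."""
    # Isolation Forest: score_samples is higher for normal points, so negate it.
    iforest = IsolationForest(n_estimators=200, random_state=0).fit(X)
    if_scores = -iforest.score_samples(X)

    # Autoencoder stand-in: an MLP trained to reconstruct its own input;
    # large reconstruction error flags nonlinear anomalies.
    ae = MLPRegressor(hidden_layer_sizes=(16, 4, 16), max_iter=500, random_state=0)
    ae.fit(X, X)
    recon_error = np.mean((X - ae.predict(X)) ** 2, axis=1)

    # Normalize both signals to [0, 1] and average them.
    def norm(s):
        return (s - s.min()) / (np.ptp(s) + 1e-9)

    return 0.5 * norm(if_scores) + 0.5 * norm(recon_error)
```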

Results: The proposed dual-algorithm framework demonstrated superior performance, achieving 95% accuracy, a 99% F1 score, and a 94.60% recall rate. These results outperformed the benchmarks, highlighting the model's effectiveness in proactive anomaly detection and cyber situational awareness enhancement.

Discussion: The integration of STIX and ontology development within the proposed methodology significantly enhanced threat information standardization and semantic analysis. The dual-algorithm approach provided improved detection capabilities compared to traditional methods, underscoring its potential for scalable and effective cybersecurity applications. Future research could explore further optimization and real-world deployments to refine and validate the approach.

{"title":"Protecting digital assets using an ontology based cyber situational awareness system.","authors":"Tariq Ammar Almoabady, Yasser Mohammad Alblawi, Ahmad Emad Albalawi, Majed M Aborokbah, S Manimurugan, Ahmed Aljuhani, Hussain Aldawood, P Karthikeyan","doi":"10.3389/frai.2024.1394363","DOIUrl":"10.3389/frai.2024.1394363","url":null,"abstract":"<p><strong>Introduction: </strong>Cyber situational awareness is critical for detecting and mitigating cybersecurity threats in real-time. This study introduces a comprehensive methodology that integrates the Isolation Forest and autoencoder algorithms, Structured Threat Information Expression (STIX) implementation, and ontology development to enhance cybersecurity threat detection and intelligence. The Isolation Forest algorithm excels in anomaly detection in high-dimensional datasets, while autoencoders provide nonlinear detection capabilities and adaptive feature learning. Together, they form a robust framework for proactive anomaly detection.</p><p><strong>Methods: </strong>The proposed methodology leverages the Isolation Forest for efficient anomaly identification and autoencoders for feature learning and nonlinear anomaly detection. Threat information was standardized using the STIX framework, facilitating structured and dynamic assessment of threat intelligence. Ontology development was employed to represent knowledge systematically and enable semantic correlation of threats. Feature mapping enriched datasets with contextual threat information.</p><p><strong>Results: </strong>The proposed dual-algorithm framework demonstrated superior performance, achieving 95% accuracy, a 99% F1 score, and a 94.60% recall rate. These results outperformed the benchmarks, highlighting the model's effectiveness in proactive anomaly detection and cyber situational awareness enhancement.</p><p><strong>Discussion: </strong>The integration of STIX and ontology development within the proposed methodology significantly enhanced threat information standardization and semantic analysis. The dual-algorithm approach provided improved detection capabilities compared to traditional methods, underscoring its potential for scalable and effective cybersecurity applications. Future research could explore further optimization and real-world deployments to refine and validate the approach.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1394363"},"PeriodicalIF":3.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11755673/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029464","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Dynamic-budget superpixel active learning for semantic segmentation.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-09 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1498956
Yuemin Wang, Ian Stavness

Introduction: Active learning can significantly decrease the labeling cost of deep learning workflows by prioritizing the limited labeling budget to high-impact data points that have the highest positive impact on model accuracy. Active learning is especially useful for semantic segmentation tasks where we can selectively label only a few high-impact regions within these high-impact images. Most established regional active learning algorithms deploy a static-budget querying strategy where a fixed percentage of regions are queried in each image. A static budget could result in over- or under-labeling images as the number of high-impact regions in each image can vary.

Methods: In this paper, we present a novel dynamic-budget superpixel querying strategy that can query the optimal numbers of high-uncertainty superpixels in an image to improve the querying efficiency of regional active learning algorithms designed for semantic segmentation.
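A minimal sketch of the dynamic-budget selection rule, assuming per-superpixel uncertainty scores (e.g., mean pixel entropy) have already been computed; pooling all superpixels under one global budget is one simple way to let per-image labeling effort vary, and is an assumption rather than the authors' exact querying strategy.

```python
def dynamic_budget_query(uncertainty_per_image, total_budget):
    """Select superpixels to label under one global budget.

    uncertainty_per_image: list of sequences, one per image, giving an
    uncertainty score for each superpixel in that image. Returns, per image,
    the indices of the selected superpixels; images with many uncertain
    superpixels naturally receive a larger share of the budget.
    """
    # Pool every superpixel and keep the globally most uncertain ones, rather
    # than a fixed percentage per image (the static-budget baseline).
    flat = [(img_idx, sp_idx, u)
            for img_idx, scores in enumerate(uncertainty_per_image)
            for sp_idx, u in enumerate(scores)]
    flat.sort(key=lambda t: t[2], reverse=True)

    selected = [[] for _ in uncertainty_per_image]
    for img_idx, sp_idx, _ in flat[:total_budget]:
        selected[img_idx].append(sp_idx)
    return selected
```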

Results: For two distinct datasets, we show that by allowing a dynamic budget for each image, the active learning algorithm is more effective compared to static-budget querying at the same low total labeling budget. We investigate both low- and high-budget scenarios and the impact of superpixel size on our dynamic active learning scheme. In a low-budget scenario, our dynamic-budget querying outperforms static-budget querying by 5.6% mIoU on a specialized agriculture field image dataset and 2.4% mIoU on Cityscapes.

Discussion: The presented dynamic-budget querying strategy is simple, effective, and can be easily adapted to other regional active learning algorithms to further improve the data efficiency of semantic segmentation tasks.

{"title":"Dynamic-budget superpixel active learning for semantic segmentation.","authors":"Yuemin Wang, Ian Stavness","doi":"10.3389/frai.2024.1498956","DOIUrl":"10.3389/frai.2024.1498956","url":null,"abstract":"<p><strong>Introduction: </strong>Active learning can significantly decrease the labeling cost of deep learning workflows by prioritizing the limited labeling budget to high-impact data points that have the highest positive impact on model accuracy. Active learning is especially useful for semantic segmentation tasks where we can selectively label only a few high-impact regions within these high-impact images. Most established regional active learning algorithms deploy a static-budget querying strategy where a fixed percentage of regions are queried in each image. A static budget could result in over- or under-labeling images as the number of high-impact regions in each image can vary.</p><p><strong>Methods: </strong>In this paper, we present a novel dynamic-budget superpixel querying strategy that can query the optimal numbers of high-uncertainty superpixels in an image to improve the querying efficiency of regional active learning algorithms designed for semantic segmentation.</p><p><strong>Results: </strong>For two distinct datasets, we show that by allowing a dynamic budget for each image, the active learning algorithm is more effective compared to static-budget querying at the same low total labeling budget. We investigate both low- and high-budget scenarios and the impact of superpixel size on our dynamic active learning scheme. In a low-budget scenario, our dynamic-budget querying outperforms static-budget querying by 5.6% mIoU on a specialized agriculture field image dataset and 2.4% mIoU on Cityscapes.</p><p><strong>Discussion: </strong>The presented dynamic-budget querying strategy is simple, effective, and can be easily adapted to other regional active learning algorithms to further improve the data efficiency of semantic segmentation tasks.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1498956"},"PeriodicalIF":3.0,"publicationDate":"2025-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11754207/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Fostering effective hybrid human-LLM reasoning and decision making.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1464690
Andrea Passerini, Aryo Gema, Pasquale Minervini, Burcu Sayin, Katya Tentori

The impressive performance of modern Large Language Models (LLMs) across a wide range of tasks, along with their often non-trivial errors, has garnered unprecedented attention regarding the potential of AI and its impact on everyday life. While considerable effort has been and continues to be dedicated to overcoming the limitations of current models, the potentials and risks of human-LLM collaboration remain largely underexplored. In this perspective, we argue that enhancing the focus on human-LLM interaction should be a primary target for future LLM research. Specifically, we will briefly examine some of the biases that may hinder effective collaboration between humans and machines, explore potential solutions, and discuss two broader goals (mutual understanding and complementary team performance) that, in our view, future research should address to enhance effective human-LLM reasoning and decision-making.

{"title":"Fostering effective hybrid human-LLM reasoning and decision making.","authors":"Andrea Passerini, Aryo Gema, Pasquale Minervini, Burcu Sayin, Katya Tentori","doi":"10.3389/frai.2024.1464690","DOIUrl":"10.3389/frai.2024.1464690","url":null,"abstract":"<p><p>The impressive performance of modern Large Language Models (LLMs) across a wide range of tasks, along with their often non-trivial errors, has garnered unprecedented attention regarding the potential of AI and its impact on everyday life. While considerable effort has been and continues to be dedicated to overcoming the limitations of current models, the potentials and risks of human-LLM collaboration remain largely underexplored. In this perspective, we argue that enhancing the focus on human-LLM interaction should be a primary target for future LLM research. Specifically, we will briefly examine some of the biases that may hinder effective collaboration between humans and machines, explore potential solutions, and discuss two broader goals-mutual understanding and complementary team performance-that, in our view, future research should address to enhance effective human-LLM reasoning and decision-making.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1464690"},"PeriodicalIF":3.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751230/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143024969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
SPEMix: a lightweight method via superclass pseudo-label and efficient mixup for echocardiogram view classification.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1467218
Shizhou Ma, Yifeng Zhang, Delong Li, Yixin Sun, Zhaowen Qiu, Lei Wei, Suyu Dong

Introduction: In clinical practice, echocardiography is the most widely used modality for diagnosing heart disease. Different heart diseases are diagnosed from different views of the echocardiogram images, so efficient echocardiogram view classification can help cardiologists diagnose heart disease rapidly. Echocardiogram view classification methods are mainly divided into supervised and semi-supervised approaches. Supervised methods show worse generalization performance because labeling echocardiographic images is difficult, whereas semi-supervised methods can achieve acceptable results with only a small amount of labeled data. However, current semi-supervised echocardiogram view classification faces declining accuracy due to out-of-distribution data and is constrained in clinical application by complex model structures.

Methods: To address these challenges, we propose SPEMix, a novel open-set semi-supervised method for echocardiogram view classification that improves performance and generalization by leveraging out-of-distribution unlabeled data. SPEMix consists of two core blocks, the DAMix Block and the SP Block. The DAMix Block generates a mixed mask that focuses on the valuable regions of echocardiograms at the pixel level, producing high-quality augmented echocardiograms for unlabeled data and improving classification accuracy. The SP Block generates a superclass pseudo-label for unlabeled data from the superclass probability distribution, improving classification generalization by leveraging this pseudo-label.
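A minimal sketch of the two ideas named above, written with NumPy for clarity; the saliency-threshold mask and the class-to-superclass mapping are illustrative assumptions, not the exact DAMix/SP formulations.

```python
import numpy as np

def damix_style_mix(img_a, img_b, saliency_a):
    """Pixel-level mix of two frames guided by a saliency map of img_a's valuable regions."""
    mask = (saliency_a > np.median(saliency_a)).astype(img_a.dtype)
    if img_a.ndim == 3:              # broadcast a 2-D mask over the channel axis
        mask = mask[..., None]
    return mask * img_a + (1 - mask) * img_b

def superclass_pseudo_label(class_probs, class_to_super):
    """Aggregate fine-grained view probabilities into superclass probabilities
    and return the argmax superclass as the pseudo-label."""
    n_super = max(class_to_super) + 1
    super_probs = np.zeros(n_super)
    for cls, p in enumerate(class_probs):
        super_probs[class_to_super[cls]] += p
    return int(np.argmax(super_probs)), super_probs
```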

Results: The lightweight model trained with SPEMix achieves the best classification performance on the publicly available TMED2 dataset. We also evaluate the generalization of our method on the Unity and CAMUS datasets.

Discussion: For the first time, we apply a lightweight model to echocardiogram view classification, which removes the clinical-application constraints imposed by complex model architectures and helps cardiologists diagnose heart disease more efficiently.

{"title":"SPEMix: a lightweight method via superclass pseudo-label and efficient mixup for echocardiogram view classification.","authors":"Shizhou Ma, Yifeng Zhang, Delong Li, Yixin Sun, Zhaowen Qiu, Lei Wei, Suyu Dong","doi":"10.3389/frai.2024.1467218","DOIUrl":"10.3389/frai.2024.1467218","url":null,"abstract":"<p><strong>Introduction: </strong>In clinical, the echocardiogram is the most widely used for diagnosing heart diseases. Different heart diseases are diagnosed based on different views of the echocardiogram images, so efficient echocardiogram view classification can help cardiologists diagnose heart disease rapidly. Echocardiogram view classification is mainly divided into supervised and semi-supervised methods. The supervised echocardiogram view classification methods have worse generalization performance due to the difficulty of labeling echocardiographic images, while the semi-supervised echocardiogram view classification can achieve acceptable results via a little labeled data. However, the current semi-supervised echocardiogram view classification faces challenges of declining accuracy due to out-of-distribution data and is constrained by complex model structures in clinical application.</p><p><strong>Methods: </strong>To deal with the above challenges, we proposed a novel open-set semi-supervised method for echocardiogram view classification, SPEMix, which can improve performance and generalization by leveraging out-of-distribution unlabeled data. Our SPEMix consists of two core blocks, DAMix Block and SP Block. DAMix Block can generate a mixed mask that focuses on the valuable regions of echocardiograms at the pixel level to generate high-quality augmented echocardiograms for unlabeled data, improving classification accuracy. SP Block can generate a superclass pseudo-label of unlabeled data from the perspective of the superclass probability distribution, improving the classification generalization by leveraging the superclass pseudolabel.</p><p><strong>Results: </strong>We also evaluate the generalization of our method on the Unity dataset and the CAMUS dataset. The lightweight model trained with SPEMix can achieve the best classification performance on the publicly available TMED2 dataset.</p><p><strong>Discussion: </strong>For the first time, we applied the lightweight model to the echocardiogram view classification, which can solve the limits of the clinical application due to the complex model architecture and help cardiologists diagnose heart diseases more efficiently.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1467218"},"PeriodicalIF":3.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11751229/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143024979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Developing and validating a drug recommendation system based on tumor microenvironment and drug fingerprint.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-08 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1444127
Yan Wang, Xiaoye Jin, Rui Qiu, Bo Ma, Sheng Zhang, Xuyang Song, Jinxi He

Introduction: Tumor heterogeneity significantly complicates the selection of effective cancer treatments, as patient responses to drugs can vary widely. Personalized cancer therapy has emerged as a promising strategy to enhance treatment effectiveness and precision. This study aimed to develop a personalized drug recommendation model leveraging genomic profiles to optimize therapeutic outcomes.

Methods: A content-based filtering algorithm was implemented to predict drug sensitivity. Patient features were characterized by the tumor microenvironment (TME), and drug features were represented by drug fingerprints. The model was trained and validated using the Genomics of Drug Sensitivity in Cancer (GDSC) database, followed by independent validation with the Cancer Cell Line Encyclopedia (CCLE) dataset. Clinical application was assessed using The Cancer Genome Atlas (TCGA) dataset, with Best Overall Response (BOR) serving as the clinical efficacy measure. Two multilayer perceptron (MLP) models were built to predict IC50 values for 542 tumor cell lines across 18 drugs.
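A minimal sketch of this content-based formulation: each training row concatenates a tumor-microenvironment vector with a drug fingerprint, and an MLP regresses the IC50 value. The feature dictionaries, layer sizes, and commented training call are illustrative assumptions, not the study's exact models.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def build_pairs(tme_features, drug_fingerprints, ic50_table):
    """tme_features: dict cell_line -> TME vector; drug_fingerprints: dict drug -> bit vector;
    ic50_table: iterable of (cell_line, drug, ic50) measurements."""
    X, y = [], []
    for cell_line, drug, ic50 in ic50_table:
        X.append(np.concatenate([tme_features[cell_line], drug_fingerprints[drug]]))
        y.append(ic50)
    return np.asarray(X), np.asarray(y)

# A small MLP stands in for the two models trained on GDSC in the study.
model = MLPRegressor(hidden_layer_sizes=(256, 64), max_iter=300, random_state=0)
# X_train, y_train = build_pairs(tme_features, drug_fingerprints, gdsc_ic50_table)
# model.fit(X_train, y_train); predicted_ic50 = model.predict(X_new_pairs)
```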

Results: The model exhibited high predictive accuracy, with correlation coefficients (R) of 0.914 in the training set and 0.902 in the test set. Predictions for cytotoxic drugs, including Docetaxel (R = 0.72) and Cisplatin (R = 0.71), were particularly robust, whereas predictions for targeted therapies were less accurate (R < 0.3). Validation with CCLE (MFI as the endpoint) showed strong correlations (R = 0.67). Application to TCGA data successfully predicted clinical outcomes, including a significant association with 6-month progression-free survival (PFS, P = 0.007, AUC = 0.793).

Discussion: The model demonstrates strong performance across preclinical datasets, showing its potential for real-world application in personalized cancer therapy. By bridging preclinical IC50 and clinical BOR endpoints, this approach provides a promising tool for optimizing patient-specific treatments.

{"title":"Developing and validating a drug recommendation system based on tumor microenvironment and drug fingerprint.","authors":"Yan Wang, Xiaoye Jin, Rui Qiu, Bo Ma, Sheng Zhang, Xuyang Song, Jinxi He","doi":"10.3389/frai.2024.1444127","DOIUrl":"10.3389/frai.2024.1444127","url":null,"abstract":"<p><strong>Introduction: </strong>Tumor heterogeneity significantly complicates the selection of effective cancer treatments, as patient responses to drugs can vary widely. Personalized cancer therapy has emerged as a promising strategy to enhance treatment effectiveness and precision. This study aimed to develop a personalized drug recommendation model leveraging genomic profiles to optimize therapeutic outcomes.</p><p><strong>Methods: </strong>A content-based filtering algorithm was implemented to predict drug sensitivity. Patient features were characterized by the tumor microenvironment (TME), and drug features were represented by drug fingerprints. The model was trained and validated using the Genomics of Drug Sensitivity in Cancer (GDSC) database, followed by independent validation with the Cancer Cell Line Encyclopedia (CCLE) dataset. Clinical application was assessed using The Cancer Genome Atlas (TCGA) dataset, with Best Overall Response (BOR) serving as the clinical efficacy measure. Two multilayer perceptron (MLP) models were built to predict IC<sub>50</sub> values for 542 tumor cell lines across 18 drugs.</p><p><strong>Results: </strong>The model exhibited high predictive accuracy, with correlation coefficients (<i>R</i>) of 0.914 in the training set and 0.902 in the test set. Predictions for cytotoxic drugs, including Docetaxel (<i>R</i> = 0.72) and Cisplatin (<i>R</i> = 0.71), were particularly robust, whereas predictions for targeted therapies were less accurate (<i>R</i> < 0.3). Validation with CCLE (MFI as the endpoint) showed strong correlations (<i>R</i> = 0.67). Application to TCGA data successfully predicted clinical outcomes, including a significant association with 6-month progression-free survival (PFS, <i>P</i> = 0.007, AUC = 0.793).</p><p><strong>Discussion: </strong>The model demonstrates strong performance across preclinical datasets, showing its potential for real-world application in personalized cancer therapy. By bridging preclinical IC<sub>50</sub> and clinical BOR endpoints, this approach provides a promising tool for optimizing patient-specific treatments.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1444127"},"PeriodicalIF":3.0,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11755346/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143029381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A systematic review of Machine Learning and Deep Learning approaches in Mexico: challenges and opportunities.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1479855
José Luis Uc Castillo, Ana Elizabeth Marín Celestino, Diego Armando Martínez Cruz, José Tuxpan Vargas, José Alfredo Ramos Leal, Janete Morán Ramírez

This systematic review provides a state-of-the-art overview of the development of Artificial Intelligence (AI) models, such as Machine Learning (ML) and Deep Learning (DL), and their applications across diverse fields in Mexico. These models are recognized as powerful tools in many fields due to their capability to carry out tasks such as forecasting, image classification and recognition, natural language processing, and machine translation. This review article aimed to provide comprehensive information on the Machine Learning and Deep Learning algorithms applied in Mexico. A total of 120 original research papers were included, and details such as publication trends, spatial location, institutions, publication venues, subject areas, algorithms applied, and performance metrics were discussed. Furthermore, future directions and opportunities are presented. A total of 15 subject areas were identified, with Social Sciences and Medicine being the main application areas. Artificial Neural Network (ANN) models were observed to be preferred, probably due to their capability to learn and model non-linear and complex relationships, in addition to other popular models such as Random Forest (RF) and Support Vector Machines (SVM). The selection and application of the algorithms were found to depend on the study objective and the data patterns. Regarding the performance metrics applied, accuracy and recall were the most frequently employed. This paper can assist readers in understanding the various Machine Learning and Deep Learning techniques used in the country and their subject areas of application within the Artificial Intelligence field. Moreover, the study can provide significant knowledge for the development and implementation of a national AI strategy, according to the country's needs.

{"title":"A systematic review of Machine Learning and Deep Learning approaches in Mexico: challenges and opportunities.","authors":"José Luis Uc Castillo, Ana Elizabeth Marín Celestino, Diego Armando Martínez Cruz, José Tuxpan Vargas, José Alfredo Ramos Leal, Janete Morán Ramírez","doi":"10.3389/frai.2024.1479855","DOIUrl":"10.3389/frai.2024.1479855","url":null,"abstract":"<p><p>This systematic review provides a state-of-art of Artificial Intelligence (AI) models such as Machine Learning (ML) and Deep Learning (DL) development and its applications in Mexico in diverse fields. These models are recognized as powerful tools in many fields due to their capability to carry out several tasks such as forecasting, image classification, recognition, natural language processing, machine translation, etc. This review article aimed to provide comprehensive information on the Machine Learning and Deep Learning algorithms applied in Mexico. A total of 120 original research papers were included and details such as trends in publication, spatial location, institutions, publishing issues, subject areas, algorithms applied, and performance metrics were discussed. Furthermore, future directions and opportunities are presented. A total of 15 subject areas were identified, where Social Sciences and Medicine were the main application areas. It observed that Artificial Neural Networks (ANN) models were preferred, probably due to their capability to learn and model non-linear and complex relationships in addition to other popular models such as Random Forest (RF) and Support Vector Machines (SVM). It identified that the selection and application of the algorithms rely on the study objective and the data patterns. Regarding the performance metrics applied, accuracy and recall were the most employed. This paper could assist the readers in understanding the several Machine Learning and Deep Learning techniques used and their subject area of application in the Artificial Intelligence field in the country. Moreover, the study could provide significant knowledge in the development and implementation of a national AI strategy, according to country needs.</p>","PeriodicalId":33315,"journal":{"name":"Frontiers in Artificial Intelligence","volume":"7 ","pages":"1479855"},"PeriodicalIF":3.0,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11753225/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143024962","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Robust predictive framework for diabetes classification using optimized machine learning on imbalanced datasets.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1499530
Inam Abousaber, Haitham F Abdallah, Hany El-Ghaish

Introduction: Diabetes prediction using clinical datasets is crucial for medical data analysis. However, class imbalances, where non-diabetic cases dominate, can significantly affect machine learning model performance, leading to biased predictions and reduced generalization.

Methods: A novel predictive framework employing cutting-edge machine learning algorithms and advanced imbalance handling techniques was developed. The framework integrates feature engineering and resampling strategies to enhance predictive accuracy.
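A minimal sketch of one resampling-plus-classifier pipeline of the kind such a framework typically combines, using SMOTE oversampling from imbalanced-learn inside a pipeline so resampling touches only the training folds; the choice of SMOTE and of a random forest here is an assumption, not the paper's specific configuration.

```python
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Resampling happens inside the pipeline, so it is applied only to each
# training fold and never to the evaluation fold.
pipeline = Pipeline(steps=[
    ("oversample", SMOTE(random_state=0)),
    ("clf", RandomForestClassifier(random_state=0)),
])
# scores = cross_val_score(pipeline, X, y, cv=5, scoring="f1")  # X, y: a prepared dataset
```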

Results: Rigorous testing was conducted on three datasets (PIMA, Diabetes Dataset 2019, and BIT_2019), demonstrating the robustness and adaptability of the methodology across varying data environments.

Discussion: The experimental results highlight the critical role of model selection and imbalance mitigation in achieving reliable and generalizable diabetes predictions. This study offers significant contributions to medical informatics by proposing a robust data-driven framework that addresses class imbalance challenges, thereby advancing diabetes prediction accuracy.

Citations: 0
Grammar-constrained decoding for structured information extraction with fine-tuned generative models applied to clinical trial abstracts.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1406857
David M Schmidt, Philipp Cimiano
Background: In the field of structured information extraction, there are typically semantic and syntactic constraints on the output of information extraction (IE) systems. These constraints, however, can typically not be guaranteed using standard (fine-tuned) encoder-decoder architectures. This has led to the development of constrained decoding approaches which allow, e.g., specifying constraints in the form of context-free grammars. An open question is how far an IE system can be effectively guided by a domain-specific grammar to ensure that the output structures follow the requirements of a certain domain data model.

Methods: In this work we experimentally investigate the influence of grammar-constrained decoding as well as pointer generators on the performance of a domain-specific information extraction system. For this, we consider fine-tuned encoder-decoder models, Longformer and Flan-T5 in particular, and experimentally investigate whether the addition of grammar-constrained decoding and pointer generators improves information extraction results. Toward this goal, we consider the task of inducing structured representations from abstracts describing clinical trials, relying on the C-TrO ontology to semantically describe the clinical trials and their results. We frame the task as a slot-filling problem where certain slots of templates need to be filled with token sequences occurring in the input text. We use a dataset comprising 211 annotated clinical trial abstracts about type 2 diabetes and glaucoma for training and evaluation. Our focus is on settings in which the available training data is on the order of a few hundred training examples, which we consider a low-resource setting.

Results: In all our experiments we could demonstrate the positive impact of grammar-constrained decoding, with an increase in F1 score of 0.351 pp (absolute score 0.413) and 0.425 pp (absolute score 0.47) for the best-performing models on the type 2 diabetes and glaucoma datasets, respectively. The addition of the pointer generators had a detrimental impact on the results, decreasing F1 scores by 0.15 pp (absolute score 0.263) and 0.198 pp (absolute score 0.272) for the best-performing pointer generator models on the type 2 diabetes and glaucoma datasets, respectively.

Conclusion: The experimental results indicate that encoder-decoder models used for structure prediction in information extraction tasks in low-resource settings clearly benefit from grammar-constrained decoding guiding the output generation. In contrast, the evaluated pointer generator models decreased performance drastically in some cases. Moreover, the performance of the pointer models appears to depend both on the base model used and on the function used for aggregating the attention values. How the size of large language models affects the performance benefit of grammar-constrained decoding requires further systematic investigation in future work.
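A toy, self-contained sketch of the core idea of grammar-constrained decoding: at each step the decoder may only emit tokens that the grammar (here a small finite-state template for a single result slot) allows in the current state. The template, vocabulary, and stand-in scoring function are illustrative assumptions, not the C-TrO data model or the fine-tuned models evaluated in the paper.

```python
# state -> {allowed_token: next_state}; a tiny finite-state "grammar" for one result slot
TEMPLATE = {
    "start": {"<outcome>": "name"},
    "name":  {"hba1c": "op", "iop": "op"},
    "op":    {"reduced_by": "value"},
    "value": {"0.5%": "end", "1.0%": "end", "2mmHg": "end"},
    "end":   {},
}

def score(prefix, token):
    """Stand-in for the decoder's next-token score; a real system would read
    the fine-tuned model's logits here."""
    return -len(token)  # arbitrary but deterministic, just to make the sketch runnable

def constrained_greedy_decode():
    state, output = "start", []
    while TEMPLATE[state]:
        allowed = TEMPLATE[state]                      # the grammar masks the candidate tokens
        token = max(allowed, key=lambda t: score(output, t))
        output.append(token)
        state = allowed[token]
    return output

print(constrained_greedy_decode())  # always yields a sequence the template grammar accepts
```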
Citations: 0
Countering AI-powered disinformation through national regulation: learning from the case of Ukraine.
IF 3 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2025-01-07 eCollection Date: 2024-01-01 DOI: 10.3389/frai.2024.1474034
Anatolii Marushchak, Stanislav Petrov, Anayit Khoperiya

Advances in the use of AI have led to the emergence of a greater variety of forms disinformation can take and channels for its proliferation. In this context, the future of legal mechanisms to address AI-powered disinformation remains to be determined. Additional complexity for legislators working in the field arises from the need to harmonize national legal frameworks of democratic states with the need for regulation of potentially dangerous digital content. In this paper, we review and analyze some of the recent discussions concerning the use of legal regulation in addressing AI-powered disinformation and present the national case of Ukraine as an example of developments in the field. We develop the discussion through an analysis of the existing counter-disinformation ecosystems, the EU and US legislation, and the emerging regulations of AI systems. We show how the Ukrainian Law on Counter Disinformation, developed as an emergency response to internationally recognized Russian military aggression and hybrid warfare tactics, underscores the crucial need to align even emergency measures with international law and principles of free speech. Exemplifying the Ukrainian case, we argue that the effective actions necessary for countering AI-powered disinformation are prevention, detection, and implementation of a set of response actions. The latter are identified and listed in this review. The paper argues that there is still a need for scaling legal mechanisms that might enhance top-level challenges in countering AI-powered disinformation.

Citations: 0