
Latest publications in Frontiers in Artificial Intelligence

Transformer-based deep learning approach for obstructive sleep apnea detection using single-lead ECG.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-11 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1727091
Malak Abdullah Almarshad, Saad Al-Ahmadi, Saiful Islam, Adel Soudani, Ahmed S BaHammam

Obstructive sleep apnea (OSA) results from repeated collapses of the upper airway during sleep, which can lead to serious health complications. Although polysomnography (PSG) is the diagnostic gold standard, it is costly, labor-intensive, and associated with long waiting times. With the rapid evolution of automated scoring solutions and the emergence of machine learning (ML) and deep learning (DL) in many disciplines, there is a need for tools that use fewer signals and can provide accurate diagnoses. DL models can process large amounts of data and often generalize effectively to new instances. This makes them a suitable choice for classifying continuous time series data. This study introduces a transformer-based deep learning approach using a single-lead electrocardiogram (ECG) for OSA detection. The proposed architecture, designed to handle raw signals with high sampling rates, preserves temporal continuity over unlimited durations. Without any preprocessing, the model tolerates high-noise raw data. The model is tested with different positional embedding techniques. Additionally, a novel positional encoding technique using an autoencoder is introduced. The proposed approach achieves a high F1 score, outperforming other published work by an average margin of more than 13%. In addition, the model classifies apnea episodes at one-second intervals, providing clinicians with nuanced insights.
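The abstract compares several positional embedding techniques but does not detail the autoencoder-based one. For reference, a minimal sketch of the standard fixed sinusoidal positional encoding (the usual transformer baseline such schemes are compared against) could look like the following; the function name and dimensions are illustrative, not taken from the paper:

```python
import math

def sinusoidal_positions(seq_len, d_model):
    # Standard fixed sinusoidal positional encoding (Vaswani et al. style):
    # even dimensions carry sin, odd dimensions carry cos, with frequencies
    # decaying geometrically across the embedding dimension.
    pe = [[0.0] * d_model for _ in range(seq_len)]
    for pos in range(seq_len):
        for i in range(0, d_model, 2):
            angle = pos / (10000 ** (i / d_model))
            pe[pos][i] = math.sin(angle)
            if i + 1 < d_model:
                pe[pos][i + 1] = math.cos(angle)
    return pe

# One encoding vector per sample of a (here very short) ECG window.
pe = sinusoidal_positions(seq_len=4, d_model=8)
```

The encoding is simply added elementwise to each time step's embedding before the attention layers, which is what lets an order-agnostic attention mechanism see where in the ECG window each sample sits.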

Citations: 0
A framework for causal concept-based model explanations.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-11 eCollection Date: 2025-01-01 DOI: 10.3389/frai.2025.1759000
Anna Rodum Bjøru, Jacob Lysnæs-Larsen, Oskar Jørgensen, Inga Strümke, Helge Langseth

This work presents a conceptual framework for causal concept-based post-hoc explainable artificial intelligence (XAI), based on the requirements that explanations for non-interpretable models must be both understandable and faithful to the model being explained. Local and global explanations are generated by calculating the probability of sufficiency of concept interventions. Example explanations are presented, generated with a proof-of-concept model made to explain classifiers trained on the CelebA dataset. Understandability is demonstrated through a clear concept-based vocabulary, subject to an implicit causal interpretation. Fidelity is addressed by highlighting important framework assumptions, stressing that the context of explanation interpretation must align with the context of explanation generation.
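The paper's estimator is not reproduced here, but the quantity it centers on, the probability of sufficiency of a concept intervention, can be sketched with a toy interventional estimate. This assumes, purely for illustration, that a concept corresponds to a single input feature that can be forced on; all names are hypothetical:

```python
def probability_of_sufficiency(model, samples, concept_idx, concept_value, target):
    # Among samples that currently lack the concept and do not receive the
    # target prediction, estimate how often forcing the concept on
    # (an intervention) flips the model's prediction to the target class.
    relevant = [x for x in samples
                if x[concept_idx] != concept_value and model(x) != target]
    if not relevant:
        return 0.0
    flipped = 0
    for x in relevant:
        intervened = list(x)
        intervened[concept_idx] = concept_value  # do(C = c)
        if model(intervened) == target:
            flipped += 1
    return flipped / len(relevant)

# Toy classifier: predicts class 1 iff feature 0 (the "concept") is set.
model = lambda x: 1 if x[0] == 1 else 0
samples = [[0, 0], [0, 1], [1, 0]]
ps = probability_of_sufficiency(model, samples,
                                concept_idx=0, concept_value=1, target=1)
```

For this toy model the concept is fully sufficient, so the estimate is 1.0; on a real classifier the score falls between 0 and 1 and supports the local/global explanations the abstract describes.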

Citations: 0
The future of fundamental science led by generative closed-loop artificial intelligence.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-11 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1678539
Hector Zenil, Jesper Tegnér, Felipe S Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G Frey, Adrian Weller, Larisa Soldatova, Alan R Bundy, Nicholas R Jennings, Koichi Takahashi, Lawrence Hunter, Saso Dzeroski, Andrew Briggs, Frederick D Gregory, Carla P Gomes, Jon Rowe, James Evans, Hiroaki Kitano, Ross King

Artificial intelligence is approaching the point at which it can complete the scientific cycle, from hypothesis generation to experimental design and validation, within a closed loop that requires little human intervention. Yet, the loop is not fully autonomous: humans still curate data, set hyperparameters, adjudicate interpretability, and decide what counts as a satisfactory explanation. As models scale, they begin to explore regions of hypothesis and solution space that are inaccessible to human reasoning because they are too intricate or alien to our intuitions. Scientists may soon rely on AI strategies they do not fully understand, trusting goals and empirical payoffs rather than derivations. This prospect forces a choice about how much control to relinquish to accelerate discovery while keeping outputs human-relevant. The answer cannot be a blanket policy to deploy LLMs or any single paradigm everywhere. It demands principled matching of methods to domains, hybrid causal and neurosymbolic scaffolds around generative models, and governance that preserves plurality and counters recursive bias. Otherwise, recursive training and uncritical reuse risk model collapse in AI and an epistemic collapse in science, as statistical inertia amplifies flaws and narrows the investigation. We argue for graded autonomy in AI-conducted science: systems that can close the loop at machine speed, while remaining anchored to human priorities, verifiable mechanisms, and domain-appropriate forms of understanding.

Citations: 0
Digital twin simulations of theory-driven crisis messaging during hurricane evacuations in synthetic populations: a Miami-Dade County case study.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-11 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1715883
Brandon Walling, Linda Desens, Vanessa Howard, Rhys O'Neill, Denise Scannell, Mary Giammarino, Sara Beth Elson, Scott Rosen

Background: Digital twin and agentic artificial intelligence technology provide innovative systems for testing behavioral science theory, which can improve emergency communication in crisis situations. More advanced and effective evidence-based messaging is needed for better safety preparation for extreme weather and more trusted evacuation communication.

Methods: This study developed a digital twin of Miami-Dade County populated with a synthetic population embedded with behavioral theory (Extended Parallel Process Model, Theory of Planned Behavior), along with a Message Assessment Framework (MAF) to systematically test theory-based crisis messages. Agents were exposed to fear-only, efficacy-only, norm-only, combined fear+efficacy, combined fear+efficacy+norm, and a neutral control message.

Results: Messages grounded in behavioral theory were more effective than the control message at encouraging evacuation. Messages that combined fear and efficacy provided the best results in the synthetic population's decision to evacuate (OR = 15.45, p < 0.001), while adding social cues did not produce a statistically distinguishable added benefit.
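The OR = 15.45 reported above is an odds ratio. For readers unfamiliar with the statistic, it compares the odds of evacuating under one message against another and can be computed from a 2x2 outcome table; the counts below are hypothetical and are not the study's agent-level data:

```python
def odds_ratio(exposed_yes, exposed_no, control_yes, control_no):
    # OR = (a/b) / (c/d): odds of the outcome in the exposed group
    # divided by the odds of the outcome in the control group.
    return (exposed_yes / exposed_no) / (control_yes / control_no)

# Hypothetical counts: 80/100 agents evacuate after a fear+efficacy
# message vs. 40/100 after a control message.
or_val = odds_ratio(80, 20, 40, 60)
```

An OR above 1 means the message increases the odds of evacuation; an OR of 15.45 indicates a very large effect relative to the control condition.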

Discussion: This research demonstrates a proof-of-concept approach for using agentic AI and digital twins to pre-test communication strategies, offering a scalable method for optimizing emergency messaging prior to real-world implementation.

Citations: 0
The association between national culture and AI readiness: a cross-national study.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-11 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1727606
Kumiko Komatsu, Nina Ždanovič, Masaki Yamabe, Hiroyoshi Iwata, Misa Iwamoto, Shutaro Takeda

While the adoption of Artificial Intelligence (AI) is advancing globally, its pace varies significantly across nations. This study statistically examines the associations between Hofstede's cultural dimensions and national-level AI readiness. A correlation analysis was conducted using data from the Oxford Insights' "Government AI Readiness Index 2024" and Hofstede's cultural dimension scores. The findings reveal that Individualism and Long-Term Orientation have a significant positive correlation with AI readiness, whereas Power Distance and Uncertainty Avoidance show a significant negative correlation. Conversely, Masculinity and Indulgence did not have a statistically significant relationship. These results suggest that national cultural characteristics are associated with differences in the adoption of advanced technologies such as AI. To contextualize the statistics, we include an illustrative, non-causal comparison of Japan, the United States, and Singapore.
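The correlation analysis described above presumably pairs each country's Hofstede dimension scores with its AI readiness index. A minimal Pearson correlation sketch is below; the numbers are illustrative placeholders, not Hofstede or Oxford Insights data:

```python
import math

def pearson_r(xs, ys):
    # Pearson correlation: covariance normalized by the product of
    # the two standard deviations; ranges from -1 to 1.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-country values only (five hypothetical countries).
individualism = [91, 46, 20, 80, 67]
ai_readiness  = [85, 70, 55, 82, 75]
r = pearson_r(individualism, ai_readiness)
```

A positive r, as with Individualism and Long-Term Orientation in the study, means countries scoring higher on the dimension tend to score higher on AI readiness; a negative r, as with Power Distance and Uncertainty Avoidance, means the opposite.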

Citations: 0
Explainable AI-driven customer churn prediction: a multi-model ensemble approach with SHAP-based feature analysis.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-10 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1748799
Ali El Attar, Mohammed El-Hajj

Customer churn prediction is critical for telecommunications companies to maintain profitability and inform retention strategies. This study builds upon existing work by implementing a comprehensive machine learning framework using the Telco Customer Churn dataset (n = 7,043). Our methodology integrated comprehensive feature engineering, SMOTE oversampling, and training of seven machine learning models including XGBoost, Random Forest, and a Multi-layer Perceptron. Model interpretation was conducted via SHAP analysis and customer segmentation. Key results demonstrated that gradient boosting algorithms (XGBoost, LightGBM, Gradient Boosting) achieved the highest balanced performance with accuracy, precision, recall, and F1-scores of 0.84, with XGBoost attaining the best discriminative ability (AUC-ROC: 0.932). A soft-voting ensemble of the top models matched this performance (F1-score: 0.84, AUC-ROC: 0.918). SHAP analysis revealed that contract type, tenure, and technical support were the features contributing most to the model's churn predictions. Threshold optimization at 0.528 balanced precision (0.90) and recall (0.91) while reducing false negatives by 15%. The findings provide actionable insights for prioritizing high-risk customers and designing targeted retention strategies in the telecom sector.
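The threshold optimization mentioned above (0.528 chosen to balance precision and recall) amounts to searching candidate decision thresholds for the one that maximizes a metric such as F1. A generic sketch of that search, not the authors' code, follows:

```python
def f1_at_threshold(probs, labels, threshold):
    # Binarize predicted churn probabilities at the threshold,
    # then compute F1 against the true labels.
    tp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= threshold and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < threshold and y == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def best_threshold(probs, labels):
    # Grid search over thresholds 0.01 .. 0.99.
    grid = [i / 100 for i in range(1, 100)]
    return max(grid, key=lambda t: f1_at_threshold(probs, labels, t))
```

Lowering the threshold below the default 0.5 trades some precision for recall, which is how the reported 15% reduction in false negatives (missed churners) is obtained.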

Citations: 0
Optimal hyperdimensional representation for learning and cognitive computation.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-10 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1690492
Prathyush P Poduval, Hamza Errahmouni Barkam, Xiangjian Liu, Sanggeon Yun, Yang Ni, Zhuowen Zou, Nathaniel D Bastian, Mohsen Imani

Hyperdimensional Computing (HDC) is a neurally inspired computing paradigm that leverages lightweight, high-dimensional operations to emulate key brain functions. Recent advances in HDC have primarily targeted two domains: learning, where the goal is to extract and generalize patterns for tasks such as classification, and cognitive computation, which requires accurate information retrieval for human-like reasoning. Although state-of-the-art HDC methods achieve strong performance in both areas, they lack a principled understanding of the fundamentally different requirements imposed by learning vs. cognition. In particular, existing works provide limited guidance on designing encoding methods that generate optimal hyperdimensional representations for these distinct tasks. In this study, we proposed the first universal hyperdimensional encoding method that dynamically adapts to the needs of both learning and cognitive computation. Our approach is based on neural-symbolic techniques that assign random complex hypervectors to atomic bases (e.g., alphabet definitions) and then apply algebraic operations in the high-dimensional hyperspace to control the correlation structure among encoded data points. Through theoretical analysis, we show that learning tasks benefit from correlated representations to maximize memorization and generalization capacity, whereas cognitive tasks require orthogonal, highly separable representations to enable accurate decoding and reasoning. We further derived a separation metric that quantifies this trade-off and validated it empirically across image classification and decoding tasks. Our results demonstrate that tuning the encoder to increase correlation improves classification accuracy from 65% to 95%, while maximizing separation enhances decoding accuracy from 85% to 100%. 
These findings provide the first systematic framework for designing hyperdimensional encoders that unify learning and cognition under a single, theoretically grounded representation model.
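The orthogonality the abstract links to accurate decoding falls out of high dimensionality itself: independent random hypervectors are nearly orthogonal. A minimal bipolar HDC sketch (real-valued rather than the paper's complex hypervectors, and purely illustrative) shows this, along with the elementwise binding operation that is its own inverse:

```python
import math
import random

def random_hv(dim, rng):
    # Random bipolar hypervector; independent draws in high dimension
    # are quasi-orthogonal (cosine similarity concentrates near zero).
    return [rng.choice((-1, 1)) for _ in range(dim)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def bind(a, b):
    # Elementwise multiplication binds two hypervectors; binding with
    # the same vector again recovers the original (b*b = 1 per element).
    return [x * y for x, y in zip(a, b)]

rng = random.Random(0)
a, b = random_hv(10_000, rng), random_hv(10_000, rng)
```

In the paper's terms, separable (near-orthogonal) codes like these favor decoding and reasoning, while deliberately correlated codes favor learning; their encoder tunes where a representation sits on that spectrum.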

Citations: 0
Design of an AI-driven secure 5G-SDN framework with federated reinforcement learning for anomaly detection, mitigation, and attack forensics.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-10 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1701944
R Shameli, Sujatha Rajkumar

Introduction: The increasing adoption of Software-Defined Networking (SDN) in 5G networks has revolutionized network management. However, this paradigm shift has introduced critical security vulnerabilities, including data-plane anomalies, control-layer intrusions, and Distributed Denial-of-Service (DDoS) attacks. Existing intrusion detection approaches based on Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks suffer from high computational overhead, long detection latency, and limited scalability, making them unsuitable for real-time 5G-SDN environments.

Methods: This article proposes a novel multi-layered security framework for 5G-SDN that integrates EfficientNet with Knowledge Distillation (KD), Transformer Networks, Spiking Neural Networks (SNNs), Federated Reinforcement Learning (FRL), and blockchain technology. EfficientNet-KD enables lightweight and accurate anomaly detection at the data-plane layer. Transformer networks capture long-range temporal dependencies to enhance control-layer attack detection. SNNs are employed for ultra-low-latency attack classification by mimicking human brain neural processing. FRL supports decentralized and privacy-preserving mitigation across SDN controllers, improving scalability, while blockchain technology ensures the integrity and immutability of attack logs for forensic reliability.

Results: The proposed framework was evaluated using multiple benchmark datasets, including CICIDS2017, UNSW-NB15, IoT-23, and InSDN. Experimental results demonstrate an average detection accuracy of 97.75%, detection latency of 15 ms, and less than 5% throughput degradation. Each detection consumes only 0.25 J of energy, achieving a 40% reduction in energy usage compared to traditional CNN- and LSTM-based approaches.

Discussion: The results verify that the proposed framework provides a scalable, energy-efficient, and low-latency intrusion detection and mitigation solution for 5G-SDN environments. By integrating lightweight deep learning, neuromorphic computing, decentralized learning, and blockchain-based security, the framework effectively addresses the limitations of existing methods and offers a robust approach for securing next-generation 5G-SDN networks.
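Of the components listed in the Methods, knowledge distillation is the most self-contained to illustrate. The sketch below shows the standard Hinton-style KD objective in plain numpy (a temperature-softened cross-entropy against the teacher plus a hard-label term), which is the generic mechanism behind "EfficientNet with Knowledge Distillation"; the temperature T, weight alpha, and two-class logits are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hinton-style distillation: alpha * (soft term) + (1 - alpha) * (hard term).

    The soft term is cross-entropy against the teacher's temperature-softened
    distribution (equal to the KL divergence up to the teacher's entropy),
    scaled by T^2 so its gradient magnitude matches the hard term.
    """
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    soft = -np.mean(np.sum(p_teacher * log_p_student, axis=-1)) * T * T

    p_hard = softmax(student_logits)
    idx = np.arange(len(labels))
    hard = -np.mean(np.log(p_hard[idx, labels] + 1e-12))
    return alpha * soft + (1.0 - alpha) * hard

# A lightweight student that matches the teacher scores a much lower loss
# than one that contradicts it.
teacher = np.array([[4.0, -4.0], [-4.0, 4.0]])  # confident anomaly/benign logits
labels = np.array([0, 1])
loss_good = kd_loss(teacher.copy(), teacher, labels)
loss_bad = kd_loss(-teacher, teacher, labels)
```

Training a small data-plane model against a large teacher's softened outputs is what buys the "lightweight and accurate" trade-off the abstract claims: the student approximates the teacher's decision boundaries at a fraction of the inference cost.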

Correction: Optimized ensemble machine learning model for cyberattack classification in industrial IoT.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-10 eCollection Date: 2026-01-01 DOI: 10.3389/frai.2026.1786635
Batool Alabdullah, Suresh Sankaranarayanan

[This corrects the article DOI: 10.3389/frai.2025.1685376.].

HasLoss: a novel Hassanat distance-based loss functions for binary classification.
IF 4.7 Q2 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-02-10 eCollection Date: 2025-01-01 DOI: 10.3389/frai.2025.1690830
Ahmad S Tarawneh

Introduction: Loss functions play a critical role in machine learning, particularly in training neural networks for classification tasks. In this work, we establish a theoretical framework for distance-based loss functions by adapting the Hassanat distance for binary classification.

Methods: Through gradient analysis, we prove that Hassanat losses exhibit bounded gradients with finite Lipschitz constants, providing convergence guarantees and robustness to outliers. We formulate six variants with different error sensitivities and validate these theoretical properties empirically. Their effectiveness is evaluated on synthetic datasets and nine real-world datasets, ranging from a few hundred to nearly 48,000 samples, under controlled experimental conditions. A comprehensive comparison is conducted against widely used loss functions, including Binary Cross-Entropy (BCE), Focal Loss, Mean Squared Error (MSE), and L1 Loss.

Results: Results show that the proposed Hassanat-based losses achieve competitive performance across evaluation metrics, with comparable or slightly improved results in calibration, convergence speed (in terms of epochs), precision, recall, F1-score, and AUC on several datasets, while exhibiting notable robustness to outliers and noise. The estimated floating-point operation (FLOP) counts show that the wall-clock time difference is due to an implementation gap, not an algorithmic one. Importantly, Cohen's d effect-size and confidence-interval analyses show that some of the proposed variants introduce a larger practical effect size than popular loss functions such as BCE.

Discussion: This work establishes both theoretical foundations and empirical validation for distance-based loss functions. The bounded gradient framework with finite Lipschitz constants provides principled optimization guarantees while explaining observed robustness and convergence behavior. This foundation enables systematic development of robust loss functions tailored to specific application requirements.
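The Hassanat distance underlying these losses has a simple closed form: per dimension it is 1 - (1 + min)/(1 + max), with both values shifted up by |min| when the smaller one is negative, so each component lies in [0, 1). The numpy sketch below implements that distance and one plausible way to turn it into a binary-classification loss (mean distance between predicted probabilities and 0/1 targets); the abstract names six variants whose exact forms are not given here, so this is a generic illustration, not the paper's HasLoss definitions.

```python
import numpy as np

def hassanat_component(a, b):
    """Per-dimension Hassanat distance: 1 - (1 + min)/(1 + max), with both
    values shifted by |min| when the smaller one is negative. Each component
    lies in [0, 1), and its derivative is bounded because the denominator is
    always >= 1 -- the bounded-gradient property the paper proves for its losses."""
    lo = np.minimum(a, b)
    hi = np.maximum(a, b)
    shift = np.where(lo < 0, -lo, 0.0)
    return 1.0 - (1.0 + lo + shift) / (1.0 + hi + shift)

def hassanat_loss(p, y):
    """Mean Hassanat distance between predicted probabilities p in [0, 1] and
    binary targets y in {0, 1}. Because every component is bounded, a single
    outlier cannot dominate the batch gradient, unlike BCE, whose per-sample
    loss is unbounded as p -> 0 on a positive target."""
    p = np.asarray(p, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.mean(hassanat_component(p, y)))
```

With probabilities and hard targets the worst per-sample value is 1 - 1/2 = 0.5 (at p = 0, y = 1), versus BCE's unbounded -log p, which is consistent with the robustness to outliers and noise reported in the Results.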
