
Artificial Intelligence Review: Latest Publications

A review of neural architecture search methods for super-resolution imaging
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-10 · DOI: 10.1007/s10462-025-11488-0
Jingwen Guo, Xingyu Wang, Yuting Guo

Super-resolution (SR) imaging is a key task in computer vision, and recent progress has been driven by deep learning. However, manually designed SR networks often suffer from poor generalization, inefficiency, and long development cycles. Neural Architecture Search (NAS) offers an automated paradigm to overcome these limitations, yet its application to SR remains at a nascent stage, with significant research gaps such as prohibitive computational costs and the limited generalization of searched architectures. This review summarizes advances of NAS in SR, analyzing its essential components (search space, search strategy, and performance evaluation) and discussing applications in single-image SR, remote sensing, and video SR. Studies show that NAS-based models can achieve competitive or superior performance at lower computational cost than handcrafted designs. Specifically, we emphasize the following contributions: (1) a comprehensive analysis of NAS components tailored to SR tasks; (2) a review of NAS applications across SR domains, with demonstrated improvements in performance and efficiency; and (3) identification of unresolved challenges that outline actionable future directions, including reducing search costs, enhancing the cross-domain robustness of lightweight models, and expanding NAS to SR-related tasks. This work aims to provide theoretical and methodological insights to support research on and practical deployment of NAS in SR imaging.
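The three NAS components the abstract names can be illustrated with a deliberately tiny sketch: a dictionary search space, random search as the search strategy, and a cheap proxy in place of full training for performance evaluation. All names, values, and the scoring rule below are invented for illustration and do not come from the paper.

```python
import random

# Toy search space: each architecture is a choice of block type, depth, and
# channel width. All names and values are illustrative, not from the paper.
SEARCH_SPACE = {
    "block": ["residual", "dense", "plain"],
    "depth": [4, 8, 16],
    "width": [16, 32, 64],
}

def sample_architecture(rng):
    """Search strategy (here: random search) draws one point of the space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def proxy_score(arch):
    """Performance evaluation stand-in: a cheap proxy instead of full
    training, rewarding capacity while penalizing compute cost."""
    capacity = arch["depth"] * arch["width"]
    cost = capacity / (16 * 64)                    # normalized "FLOPs"
    bonus = {"residual": 0.1, "dense": 0.05, "plain": 0.0}[arch["block"]]
    return bonus + capacity / 1024 - 0.5 * cost

def random_search(trials=50, seed=0):
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)),
               key=proxy_score)

best_arch = random_search()
```

Real NAS for SR replaces the proxy with PSNR/SSIM on a validation set and the random strategy with evolutionary, gradient-based, or reinforcement-learning search.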

Citations: 0
Structured sentiment analysis as transition-based dependency graph parsing
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-10 · DOI: 10.1007/s10462-025-11463-9
Daniel Fernández-González

Structured sentiment analysis (SSA) aims to automatically extract people’s opinions from natural-language text and represent that information adequately in a graph structure. One of the most accurate approaches to SSA, proposed recently, casts it as a dependency graph parsing task. Although the literature shows that transition-based algorithms excel in both accuracy and efficiency across dependency graph parsing tasks, all prior attempts to tackle SSA under this formulation relied on graph-based models. In this article, we present the first transition-based method that addresses SSA as dependency graph parsing. Specifically, we design a transition system that processes the input text in a single left-to-right pass, incrementally generating the graph structure containing all identified opinions. To implement the final transition-based model effectively, we adopt a Pointer Network architecture as the backbone. An extensive evaluation demonstrates that our model offers the best performance to date among dependency-based methods in practically all cases, and surpasses recent task-specific techniques on the most challenging datasets. We additionally include an in-depth analysis and empirically show that the average-case time complexity of our approach is quadratic in sentence length, making it more efficient than top-performing graph-based parsers.
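The incremental left-to-right construction the abstract describes can be sketched with a toy transition system over token indices; the action inventory and the example action sequence below are simplified inventions for exposition, not the paper's actual transitions or oracle.

```python
# Toy transition system: a single left-to-right pass over token indices that
# incrementally adds (head, dependent) arcs to a graph. The action set and
# the example action sequence are simplified inventions for exposition.
def parse(n_tokens, actions):
    stack, buf, arcs = [], list(range(n_tokens)), set()
    for act in actions:
        if act == "SHIFT":              # move next token onto the stack
            stack.append(buf.pop(0))
        elif act == "REDUCE":           # pop a finished token
            stack.pop()
        elif act == "LEFT":             # front of buffer heads stack top
            arcs.add((buf[0], stack[-1]))
        elif act == "RIGHT":            # stack top heads front of buffer
            arcs.add((stack[-1], buf.pop(0)))
    return arcs

# "She eats fish": token 1 (eats) heads tokens 0 (She) and 2 (fish).
arcs = parse(3, ["SHIFT", "LEFT", "SHIFT", "RIGHT"])
```

In the real system the action sequence is predicted by a neural model (here, a Pointer Network) rather than supplied as a fixed script.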

Citations: 0
Evaluating large language models effectiveness for flow-based intrusion detection: a comparative study with ML and DL baselines
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-09 · DOI: 10.1007/s10462-025-11432-2
Lorena Mehavilla, María Rodríguez, José García, Álvaro Alesanco

This paper presents the first systematic benchmark evaluating Large Language Models (LLMs), specifically GPT-2, GPT-Neo-125M, and LLaMA-3.2-1B, as standalone classifiers for intrusion detection, covering both binary and multiclass classification tasks on structured Zeek logs derived from the CIC IoT 2023 dataset. We compare their performance against established, widely used Machine Learning models (XGBoost, Random Forest, Decision Tree) and Deep Learning models (MLP, GRU, LeNet-5) across key evaluation metrics: detection effectiveness (precision, recall, and F1-score), inference speed, and resource consumption. All models are trained consistently and evaluated rigorously on the CIC IoT 2023 dataset, ensuring fair, reproducible, and transparent comparisons. Our findings indicate that while the LLMs achieve strong F1-scores exceeding 95% without fully utilizing available GPU resources, they still do not outperform the top ML models. Notably, XGBoost achieves a higher F1-score of 96.96% while using only 4% of the available CPU. These results emphasize the practical trade-offs between detection capability, inference efficiency, and hardware requirements when applying LLMs in flow-based IDS contexts, particularly in resource-constrained environments such as IoT or edge deployments.
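The detection-effectiveness metrics used in the comparison (precision, recall, F1-score) reduce to simple counts over binary labels, as sketched below; the threshold rule and the flow records are synthetic stand-ins, not features or results from the CIC IoT 2023 study.

```python
# Precision, recall, and F1 computed from binary labels, plus a threshold
# rule standing in for a classifier. The flow records are synthetic, not
# drawn from the CIC IoT 2023 dataset.
def f1_metrics(y_true, y_pred):
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

flows = [  # (packets_per_second, is_attack) -- invented values
    (900, 1), (850, 1), (120, 0), (40, 0), (700, 1), (60, 0), (800, 0),
]
y_true = [label for _, label in flows]
y_pred = [1 if pps > 500 else 0 for pps, _ in flows]   # naive rule
precision, recall, f1 = f1_metrics(y_true, y_pred)
```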

Citations: 0
Multi-objective hyper-heuristics: a survey
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-08 · DOI: 10.1007/s10462-025-11486-2
Julio Juárez, Hugo Terashima-Marín, Carlos A. Coello Coello

In recent years, research on the integration of evolutionary multi-objective optimization and (hyper-) heuristics (MOHHs) has significantly grown. This paper presents a comprehensive survey of MOHH research, categorizing existing approaches into four main classes: selection, generation, portfolio, and configuration MOHHs. Each category is analyzed in terms of methodology, key contributions, and open challenges. The analysis reveals an imbalance in research focus, with selection and portfolio MOHHs receiving the most attention, followed by configuration MOHHs, while generation MOHHs remain largely unaddressed. Selection MOHHs are further divided by the hierarchy of components they control: low-level approaches (which typically manage evolutionary operators) require further study on move acceptance methods, whereas mid-level approaches (which typically manage multi-objective evolutionary algorithms) need deeper exploration of selection strategies. Generation MOHHs, primarily based on genetic programming and grammatical evolution, lack investigation into alternative methodologies. Portfolio MOHHs, which produce a set of non-dominated constructive (hyper-) heuristics based on performance trade-offs, have been predominantly applied to combinatorial problems and exhibit limited diversity in the use of MOEAs as underlying optimizers. Configuration MOHHs, which focus on configuring algorithmic components for multi-objective optimizers, have largely relied on a single performance indicator, leaving room for multi-criteria performance approaches. Beyond this, the paper also reviews the test problems and practical applications that have been addressed by MOHHs, and outlines potential avenues for future research in the field.
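A selection hyper-heuristic of the kind the survey classifies can be sketched on a toy single-objective problem: a high-level controller picks among low-level heuristics based on their observed improvement. The credit and acceptance rules below are generic illustrations, not methods from the survey (which targets the multi-objective setting).

```python
import random

# Toy selection hyper-heuristic on OneMax (maximize the number of 1-bits):
# the controller keeps a running credit per low-level heuristic and mostly
# picks the best-credited one, with occasional exploration. Credit and
# acceptance rules are generic illustrations, not methods from the survey.
def flip_one(s, rng):
    i = rng.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def flip_two(s, rng):
    return flip_one(flip_one(s, rng), rng)

def hyper_heuristic(n=20, steps=200, seed=1):
    rng = random.Random(seed)
    heuristics = [flip_one, flip_two]
    credit = [1.0, 1.0]                     # optimistic initial credit
    sol, score = [0] * n, 0
    for _ in range(steps):
        if rng.random() < 0.1:              # explore a random heuristic
            h = rng.randrange(len(heuristics))
        else:                               # exploit the best credit
            h = max(range(len(heuristics)), key=lambda i: credit[i])
        cand = heuristics[h](sol, rng)
        gain = sum(cand) - score
        credit[h] = 0.9 * credit[h] + 0.1 * gain
        if gain >= 0:                       # accept non-worsening moves
            sol, score = cand, score + gain
    return score

final = hyper_heuristic()
```

A multi-objective variant would replace the scalar `score` with a Pareto archive and an indicator-based acceptance rule.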

Citations: 0
A survey on group fairness in federated learning: challenges, taxonomy of solutions and directions for future research
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-08 · DOI: 10.1007/s10462-025-11475-5
Teresa Salazar, Helder Araujo, Alberto Cano, Pedro Henriques Abreu

Group fairness in machine learning is an important area of research focused on achieving equitable outcomes across different groups defined by sensitive attributes such as race or gender. Federated learning, a decentralized approach to training machine learning models across multiple clients, amplifies the need for fairness methodologies due to its inherent heterogeneous data distributions that can exacerbate biases. The intersection of federated learning and group fairness has attracted significant interest, with 48 research works specifically dedicated to addressing this issue. However, no comprehensive survey has specifically focused on group fairness in Federated Learning. In this work, we analyze the key challenges of this topic, propose practices for its identification and benchmarking, and create a novel taxonomy based on criteria such as data partitioning, location, and strategy. Furthermore, we analyze broader concerns, review how different approaches handle the complexities of various sensitive attributes, examine common datasets and applications, and discuss the ethical, legal, and policy implications of group fairness in FL. We conclude by highlighting key areas for future research, emphasizing the need for more methods to address the complexities of achieving group fairness in federated systems.
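One concrete quantity in this literature is the demographic parity gap: the difference in positive-prediction rates between sensitive groups, which in a federated setting can be measured per client and over pooled predictions. The client data below is synthetic; the metric itself is standard.

```python
# Demographic parity gap: absolute difference in positive-prediction rates
# between sensitive groups "A" and "B", per federated client and pooled.
# Client data is synthetic; the metric itself is standard.
def positive_rate(samples, group):
    preds = [pred for g, pred in samples if g == group]
    return sum(preds) / len(preds)

def dp_gap(samples):
    return abs(positive_rate(samples, "A") - positive_rate(samples, "B"))

clients = {  # (sensitive_group, model_prediction) pairs, invented
    "client1": [("A", 1), ("A", 1), ("B", 0), ("B", 1)],
    "client2": [("A", 0), ("A", 1), ("B", 0), ("B", 0)],
}
per_client = {name: dp_gap(data) for name, data in clients.items()}
pooled = dp_gap([s for data in clients.values() for s in data])
```

A key complication surveyed in this area is that per-client gaps and the global gap can disagree under heterogeneous data, which is why methods differ in where (client or server) fairness is enforced.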

Citations: 0
On the role of AI in building generative urban intelligence
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-07 · DOI: 10.1007/s10462-025-11469-3
João Carlos N. Bittencourt, Thommas K. S. Flores, Thiago C. Jesus, Daniel G. Costa

The rapid urbanization process has presented complex challenges that require innovative strategies to enhance urban living and promote sustainable growth. In this context, the concept of smart cities has quickly evolved, illustrating urban environments that utilize advanced technology to achieve greater efficiency, sustainability, and an improved quality of life for residents. The development of these smart environments relies on technologies like the Internet of Things (IoT), which collects extensive data through sensors, and Artificial Intelligence (AI), for advanced data processing and decision-making. For the latter, while traditional AI solutions have improved urban systems in multiple ways, emerging Generative Artificial Intelligence (GenAI) models signify a new era for smart cities, offering breakthroughs in urban design, simulation, and personalized, context-aware solutions. This article explores the applications, impacts, challenges, and promising future trends of GenAI within the context of smart cities, discussing generative urban intelligence perspectives for simulating alternative urban scenarios, co-designing infrastructure prototypes, and improving service delivery. It provides a pioneering perspective on an underexplored field that is expected to transform urban design, planning, and management.

Citations: 0
Synergizing blockchain and AI to fortify IoT security: a comprehensive review
IF 13.9 · CAS Zone 2, Computer Science · Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2026-01-07 · DOI: 10.1007/s10462-025-11434-0
Deepak Kaushik, Preeti Gulia, Nasib Singh Gill, Mohammad Yahya, Piyush Kumar Shukla, J. Shreyas

The relentless growth of connected devices is transforming industrial, urban, and domestic environments, yet it also expands the attack surface for distributed denial of service (DDoS), unauthorized access, and data manipulation. Centralized security architectures struggle to cope with the scale and heterogeneity of the Internet of Things, creating single points of failure and privacy risks. This review takes a close look at how blockchain and artificial intelligence (AI) can work together to solve these problems. Blockchain plays an important role in decentralizing trust, maintaining data integrity, and enabling transparent audit trails. AI subfields such as machine learning (ML), deep learning (DL), reinforcement learning (RL), and multi-agent systems (MAS) enhance these benefits, enabling real-time anomaly detection, predictive analytics, and adaptive policy control. A seven-axis Blockchain–AI Security Integration Schema (BASIS) is proposed to classify solutions by security objectives, intelligence modalities, trust primitives, deployment choices, scalability techniques, privacy controls, and interoperability mechanisms. The study also reviews Layer-2 consensus protocols, federated learning, and lightweight deep learning models that address energy and computational constraints. Case studies from supply chains, healthcare, and smart grids illustrate the benefits and limitations of current deployments. The evidence suggests that while AI improves the accuracy and responsiveness of threat detection, blockchain offers tamper-proof data provenance. However, challenges remain in achieving scalability, reducing computational overhead, and striking a balance between auditability and privacy. Hybrid on-chain/off-chain architectures, quantum-safe cryptography, and standardized frameworks to guarantee adoption and interoperability are promising future research avenues.
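The tamper-evident audit-trail role the review attributes to blockchain can be shown with a minimal hash chain: each record commits to its predecessor's hash, so altering an earlier IoT event invalidates every later link. This is a toy chain with no consensus protocol, not a sketch of any surveyed system.

```python
import hashlib
import json

# Toy hash chain: each record stores the SHA-256 of its predecessor, so a
# verifier can detect tampering with any earlier IoT event. No consensus
# protocol is modeled; this only illustrates tamper-evident provenance.
def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append(chain, event):
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev": prev, "event": event})

def verify(chain):
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
for evt in ["sensor-42 online", "reading 21.5C", "firmware update"]:
    append(chain, evt)
ok_before = verify(chain)
chain[1]["event"] = "reading 99.9C"    # tamper with a middle record
ok_after = verify(chain)
```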

引用次数: 0
Finer monocular depth estimation with long range in various driving lighting environments 更精细的单目深度估计与远距离在各种驾驶照明环境
IF 13.9 2区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-07 DOI: 10.1007/s10462-025-11436-y
Yan Liu, Mingyu Yan, Yanqiu Xiao, Guangzhen Cui, Li Han

Depth estimation methods for autonomous driving applications face numerous challenges, such as capturing fine details and handling varying lighting conditions. To address these challenges, LRDepth is proposed to improve the depth estimation task, which includes a simple high frequency enhancement module (HFEM) and a progressive residual denoising diffusion (PRDD) module. HFEM aids in extracting high-frequency components and amplifying the features, such as object edge details, generating more precise depth predictions. Inspired by the strong performance of diffusion models in various vision tasks, PRDD is designed to refine the depth predictions by reducing noise and enhancing edge details, which ensures the accurate representation of distant objects and subtle features. Extensive experiments on the KITTI and DIODE datasets demonstrated that the proposed network boosts the performance of monocular depth estimation, achieving more accurate long range depth predictions and improving model robustness in various lighting environments. The experimental results verified the method's adaptability, and the model has potential for real-world applications, which is beneficial for the optimization of the visual perception module in intelligent driving systems.
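The abstract does not specify HFEM's internals, but the underlying idea of high-frequency enhancement can be illustrated in one dimension: take the high-frequency part of a signal as its residual against a moving average, then amplify it. The `alpha` gain and window size below are arbitrary choices for illustration, not values from the paper:

```python
def moving_average(signal, window=3):
    """Simple box filter; the window is clamped at the signal boundaries."""
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - window // 2), min(n, i + window // 2 + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def enhance_high_freq(signal, alpha=1.5, window=3):
    """Amplify the high-frequency residual: out = low + alpha * (signal - low)."""
    low = moving_average(signal, window)
    return [l + alpha * (s - l) for s, l in zip(signal, low)]

# A flat signal has no high-frequency content and passes through unchanged;
# a step edge is sharpened (with overshoot on both sides of the edge).
flat = enhance_high_freq([1.0, 1.0, 1.0, 1.0])
sharp = enhance_high_freq([0.0, 0.0, 1.0, 1.0])
```

The same unsharp-masking principle generalizes to 2-D feature maps, where the amplified residual highlights object edges of the kind the abstract describes.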

自动驾驶应用的深度估计方法面临许多挑战,例如捕捉精细细节和处理不同的照明条件。基于这些挑战,提出了LRDepth方法来改进深度估计任务,该方法包括一个简单的高频增强模块(HFEM)和一个渐进残差去噪扩散模块(PRDD)。HFEM有助于提取高频成分并放大特征,例如物体边缘细节,从而产生更精确的深度预测。受扩散模型在各种视觉任务中的强大性能的启发,PRDD旨在通过降低噪声和增强边缘细节来改进深度预测,从而确保对远处物体和细微特征的准确表示。在KITTI和DIODE数据集上进行的大量实验表明,所提出的网络提高了单目深度估计的性能,实现了更准确的远程深度预测,并提高了模型在各种照明环境下的鲁棒性。实验结果验证了该方法的适应性,该模型具有实际应用的潜力,有利于智能驾驶系统中视觉感知模块的优化。
{"title":"Finer monocular depth estimation with long range in various driving lighting environments","authors":"Yan Liu,&nbsp;Mingyu Yan,&nbsp;Yanqiu Xiao,&nbsp;Guangzhen Cui,&nbsp;Li Han","doi":"10.1007/s10462-025-11436-y","DOIUrl":"10.1007/s10462-025-11436-y","url":null,"abstract":"<div><p>Depth estimation methods for autonomous driving application face numerous challenges, such as capturing fine details and handling varying lighting conditions. Based on these challenges, LRDepth is proposed to improve the depth estimation task, which includes a simple high frequency enhancement module (HFEM) and a progressive residual denoising diffusion (PRDD) module. HFEM aids in extracting high-frequency components and amplifying the features, such as object edge details, generating more precise depth predictions. Inspired by the strong performance of diffusion models in various vision tasks, PRDD is designed to refine the depth predictions by reducing noise and enhancing edge details, which ensures the accurate representation of distant objects and subtle features. Extensive experiments on the KITTI and DIODE datasets demonstrated that the proposed network boosts the performance of monocular depth estimation, achieving more accurate long range depth predictions and improving model robustness in various lighting environments. 
The experimental results verified the method's adaptability, and the model has potential for real-world applications, which is beneficial for the optimization of the visual perception module in intelligent driving systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11436-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
The intersection of artificial intelligence and assistive technologies in the diagnosis and intervention of mental health conditions 人工智能和辅助技术在心理健康状况诊断和干预中的交叉
IF 13.9 2区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-07 DOI: 10.1007/s10462-025-11447-9
Muhammad Abrar, Mujeeb ur Rehman, Sohail Khalid, Rahmat Ullah

Mental health disorders are becoming a major global health concern and pose a significant burden on global healthcare systems. Nearly one billion people suffer from mental disorders, accounting for 13% of the global disease burden and $1 trillion in annual productivity loss. Depression is the leading cause of disability, and suicide is the second leading cause of death among young individuals. Economic uncertainty, social isolation, climate change, shifting societal norms, political conflict, and increasing violence are key factors contributing to the high prevalence of mental health issues. In the future, increasing poverty and inequality are likely to worsen this trend, resulting in a greater incidence and burden of mental illness. Therefore, timely diagnosis and intervention are a high priority. Traditional diagnostic and intervention methods, such as self-report questionnaires, clinical interviews, psychotherapy, medication, electroconvulsive therapy, and occupational therapy, have drawbacks including subjectivity, time commitment, and the potential for prolonged treatment. Due to these limitations, advanced approaches are needed to improve diagnostic accuracy and precision and to develop more effective interventions. This review aims to explore and evaluate the applications of Artificial Intelligence in the diagnosis and treatment of mental health conditions. This study provides a thorough analysis of various artificial intelligence-driven techniques and their advancements in the diagnosis of mental health conditions. Artificial intelligence has the potential to greatly improve the accuracy and effectiveness of mental health diagnosis and intervention. Moreover, this work consolidates the research gaps in current techniques and provides research hypotheses on how to overcome the gaps using a proposed 3-tier solution.

精神健康障碍正在成为一个主要的全球卫生问题,并对全球卫生保健系统构成重大负担。近10亿人患有精神障碍,占全球疾病负担的13%,每年造成1万亿美元的生产力损失。抑郁症是导致残疾的主要原因,自杀是导致年轻人死亡的第二大原因。经济不确定性、社会孤立、气候变化、社会规范转变、政治冲突和暴力增加是导致精神卫生问题高比例流行的关键因素。在未来,日益增加的贫困和不平等可能会使这一趋势恶化,导致精神疾病的发病率和负担增加。因此,及时诊断和干预是重中之重。传统的诊断和干预方法,如自我报告问卷、临床访谈、心理治疗、药物治疗、电休克治疗和职业治疗等,存在主观性、时间投入和延长治疗时间等缺点。由于这些限制,需要先进的方法来提高诊断的准确性和精确性,并制定更有效的干预措施。本文旨在探讨和评价人工智能在精神疾病诊断和治疗中的应用。本研究提供了各种人工智能驱动的技术及其在心理健康状况诊断方面的进展的全面分析。人工智能有可能大大提高心理健康状况的准确性和有效性。此外,这项工作巩固了当前技术的研究差距,并提供了如何使用提议的三层解决方案来克服差距的研究假设。
{"title":"The intersection of artificial intelligence and assistive technologies in the diagnosis and intervention of mental health conditions","authors":"Muhammad Abrar,&nbsp;Mujeeb ur Rehman,&nbsp;Sohail Khalid,&nbsp;Rahmat Ullah","doi":"10.1007/s10462-025-11447-9","DOIUrl":"10.1007/s10462-025-11447-9","url":null,"abstract":"<div><p>Mental health disorders are becoming a major global health concern and pose a significant burden on global healthcare systems. Nearly one billion people suffer from mental disorders, accounting for 13% of the global disease burden and $1 trillion in annual productivity loss. Depression is the leading cause of disability and suicide is the second leading cause of death among young individuals. Economic uncertainty, social isolation, climate change, shifting societal norms, political conflict, and increasing violence are key factors contributing to the high prevalence of mental health issues. In the future, increasing poverty and inequality are likely to worsen this trend, resulting in a greater incidence and burden of mental illness. Therefore, timely diagnosis and intervention are a high priority. Traditional diagnostic and intervention methods, such as self-report questionnaires, clinical interviews, psychotherapy, medication, electroconvulsive therapy, and occupational therapy, have drawbacks including subjectivity, time commitment, and the potential for prolonged treatment. Due to these limitations, advanced approaches are needed to improve diagnostic accuracy and precision and to develop more effective interventions. This review aims to explore and evaluate the applications of Artificial Intelligence in the diagnosis and treatment of mental health conditions. This study provides a thorough analysis of various artificial intelligence-driven techniques and their advancements in the diagnosis of mental health conditions. 
Artificial intelligence has the potential to greatly improve the accuracy and effectiveness of mental health diagnosis and intervention. Moreover, this work consolidates the research gaps in current techniques and provides research hypotheses on how to overcome the gaps using a proposed 3-tier solution.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11447-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Multi-objective genetic programming-based algorithmic trading, using directional changes and a modified sharpe ratio score for identifying optimal trading strategies 基于多目标遗传规划的交易算法,利用方向变化和改进的夏普比率分数来确定最优交易策略
IF 13.9 2区 计算机科学 Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date : 2026-01-07 DOI: 10.1007/s10462-025-11390-9
Xinpeng Long, Michael Kampouridis, Tasos Papastylianou

This study explores the integration of directional changes (DC), genetic programming (GP), and multi-objective optimisation (MOO) to develop advanced algorithmic trading strategies. Directional changes offer a dynamic, event-based approach to market analysis, identifying significant price movements and trends. Genetic programming evolves trading rules to discover effective and profitable strategies. However, financial trading presents a multi-objective challenge, balancing conflicting objectives such as returns and risk. We propose a novel algorithmic trading framework, termed MOO3, which integrates genetic programming with the NSGA-II multi-objective optimisation algorithm to optimise three fitness functions: total return, expected rate of return, and risk. While the use of NSGA-II itself is well-established, our contribution lies in how we apply it within a trading context that combines (i) directional changes, (ii) genetic programming with both DC-based and physical-time indicators, and (iii) a modified Sharpe Ratio for post-optimisation strategy selection based on trader preferences. Utilising indicators from both paradigms allows the GP algorithm to create profitable trading strategies, while the multi-objective fitness function allows it to simultaneously optimise for risk. A definitive strategy is chosen from Pareto-optimal solutions using the modified Sharpe Ratio, allowing traders to prioritise multiple objectives. Our methodology is tested on 110 stock datasets from 10 international markets, aiming to demonstrate that the multi-objective framework can yield superior trading strategies with lower risk. Results indicate that the MOO3 algorithm consistently and significantly outperforms single-objective optimisation (SOO) methods, even when the same SOO criterion is employed for choosing a single, definitive investment strategy from the Pareto front.
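The post-optimisation selection step described above (choose one strategy from the Pareto front via a Sharpe-style score) can be sketched as follows. The abstract does not give the exact form of the paper's modified Sharpe Ratio, so the plain return-over-risk ratio below is a stand-in, and the candidate (return, risk) tuples are invented:

```python
def pareto_front(candidates):
    """Keep the non-dominated strategies.
    Each candidate is (total_return, risk); higher return and lower risk are better."""
    front = []
    for i, (ri, ki) in enumerate(candidates):
        dominated = any(
            rj >= ri and kj <= ki and (rj > ri or kj < ki)
            for j, (rj, kj) in enumerate(candidates) if j != i
        )
        if not dominated:
            front.append((ri, ki))
    return front

def select_by_sharpe(front, risk_free=0.0):
    """Stand-in for the modified Sharpe Ratio: (return - risk_free) / risk."""
    return max(front, key=lambda c: (c[0] - risk_free) / c[1])

# Hypothetical strategies found by the multi-objective search: (total return, risk)
candidates = [(0.10, 0.05), (0.18, 0.12), (0.25, 0.30), (0.08, 0.06)]
front = pareto_front(candidates)   # (0.08, 0.06) is dominated by (0.10, 0.05)
best = select_by_sharpe(front)     # → (0.10, 0.05), the best return-per-risk
```

Varying `risk_free` (or weighting the numerator and denominator) is one way a trader's preferences can tilt the selection along the front, which is the role the paper assigns to its modified Sharpe Ratio.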

本研究探讨了定向变化(DC)、遗传规划(GP)和多目标优化(MOO)的整合,以开发先进的算法交易策略。方向性变化为市场分析提供了一种动态的、基于事件的方法,可以识别重要的价格变动和趋势。遗传编程进化交易规则,以发现有效和有利可图的策略。然而,金融交易提出了一个多目标的挑战,平衡冲突的目标,如回报和风险。我们提出了一种新的算法交易框架,称为MOO3,它将遗传规划与NSGA-II多目标优化算法相结合,以优化三个适应度函数:总收益、预期收益率和风险。虽然NSGA-II本身的使用是完善的,但我们的贡献在于我们如何将其应用于交易环境中,该环境结合了(i)方向变化,(ii)基于dc和物理时间指标的遗传规划,以及(iii)基于交易者偏好的优化后策略选择的修改夏普比率。利用两种范式的指标,GP算法可以创建有利可图的交易策略,而多目标适应度函数可以同时优化风险。使用修改的夏普比率从帕累托最优解决方案中选择确定的策略,允许交易者优先考虑多个目标。我们的方法在来自10个国际市场的110个股票数据集上进行了测试,旨在证明多目标框架可以产生具有较低风险的卓越交易策略。结果表明,MOO3算法持续且显著优于单目标优化(SOO)方法,即使采用相同的SOO标准从Pareto前沿选择单一,确定的投资策略。
{"title":"Multi-objective genetic programming-based algorithmic trading, using directional changes and a modified sharpe ratio score for identifying optimal trading strategies","authors":"Xinpeng Long,&nbsp;Michael Kampouridis,&nbsp;Tasos Papastylianou","doi":"10.1007/s10462-025-11390-9","DOIUrl":"10.1007/s10462-025-11390-9","url":null,"abstract":"<div><p>This study explores the integration of directional changes (DC), genetic programming (GP), and multi-objective optimisation (MOO) to develop advanced algorithmic trading strategies. Directional changes offer a dynamic, event-based approach to market analysis, identifying significant price movements and trends. Genetic programming evolves trading rules to discover effective and profitable strategies. However, financial trading presents a multi-objective challenge, balancing conflicting objectives such as returns and risk. We propose a novel algorithmic trading framework, termed MOO3, which integrates genetic programming with the NSGA-II multi-objective optimisation algorithm to optimise three fitness functions: total return, expected rate of return, and risk. While the use of NSGA-II itself is well-established, our contribution lies in how we apply it within a trading context that combines (i) directional changes, (ii) genetic programming with both DC-based and physical-time indicators, and (iii) a modified Sharpe Ratio for post-optimisation strategy selection based on trader preferences. Utilising indicators from both paradigms allows the GP algorithm to create profitable trading strategies, while the multi-objective fitness function allows it to simultaneously optimise for risk. A definitive strategy is chosen from Pareto-optimal solutions using the modified Sharpe Ratio, allowing traders to prioritise multiple objectives. Our methodology is tested on 110 stock datasets from 10 international markets, aiming to demonstrate that the multi-objective framework can yield superior trading strategies with lower risk. 
Results indicate that the MOO3 algorithm consistently and significantly outperforms single-objective optimisation (SOO) methods, even when the same SOO criterion is employed for choosing a single, definitive investment strategy from the Pareto front.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11390-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
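The directional-changes formalism this article builds on is event-based: rather than sampling prices at fixed time intervals, it records an event whenever the price reverses by at least a threshold theta from its last extreme. A minimal generic detector (an illustrative textbook version with invented prices; the paper's own implementation may differ):

```python
def directional_changes(prices, theta=0.05):
    """Record ("down", i) or ("up", i) whenever the price at index i has
    reversed by at least theta (relative) from the last extreme."""
    events = []
    ext = prices[0]   # last extreme (high in an uptrend, low in a downtrend)
    mode = "up"       # assume an uptrend until the first confirmed reversal
    for i in range(1, len(prices)):
        p = prices[i]
        if mode == "up":
            if p > ext:
                ext = p                     # new high extends the uptrend
            elif (ext - p) / ext >= theta:
                events.append(("down", i))  # downturn event confirmed
                mode, ext = "down", p
        else:
            if p < ext:
                ext = p                     # new low extends the downtrend
            elif (p - ext) / ext >= theta:
                events.append(("up", i))    # upturn event confirmed
                mode, ext = "up", p
    return events

# Hypothetical price series with a 10% threshold: the fall from 105 to 94
# confirms a downturn, and the recovery to 104 confirms an upturn.
events = directional_changes([100, 102, 105, 98, 94, 101, 104], theta=0.1)
```

Indicators derived from such events (e.g. time between events, overshoot size) are what the paper combines with physical-time indicators inside its genetic programming search space.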