
Latest articles from Artificial Intelligence Review

Large language models for mental health diagnosis and treatment: a survey
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-14 | DOI: 10.1007/s10462-025-11418-0
Mohsen Ghorbian, Mostafa Ghobaei-Arani

Mental health (MeHE) is a fundamental dimension of human well-being that encompasses emotional, psychological, and social aspects. Effective MeHE management enables individuals to cope with stress, maintain healthy relationships, and achieve their personal and social goals. However, traditional approaches are often inadequate in addressing the multidimensional challenges of early detection, personalized interventions, and comprehensive MeHE education. Large Language Models offer a transformative approach to the field of MeHE. With the ability to process large and complex textual data, these models can identify behavioral patterns in patients’ responses, suggest personalized interventions, and improve access to MeHE resources. Despite these advances, significant challenges remain. Applying reinforcement learning techniques to MeHE applications necessitates addressing challenges such as model-driven bias, protecting sensitive information, and providing robust evidence of clinical performance. This review systematically examines the applications of large language models in MeHE, providing a comprehensive analysis of their capabilities and limitations. It further examines how large language models address existing challenges, including early diagnosis, personalized treatments, and effective public education. Findings show that large language models increased the accuracy of early diagnosis of mental disorders by 33%, the effectiveness of personalized treatment plans by 27%, and participation in MeHE education and awareness by 24%. Ultimately, this research underscores the pivotal role of large language models in promoting MeHE. By providing practical insights and suggesting strategies to overcome implementation challenges, this review lays the groundwork for developing innovative, effective, and equitable solutions in the field of MeHE.

Citations: 0
Automating synthetic dataset generation for image-based 3D detection: a literature review
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-13 | DOI: 10.1007/s10462-025-11431-3
Paul Schulz, Thorsten Hempel, Magnus Jung, Ayoub Al-Hamadi

Reliable 3D detection is fundamental to autonomous systems such as mobile robots, self-driving cars, and unmanned aerial vehicles (UAVs). To achieve this capability, researchers have developed and trained supervised networks, which require large amounts of diverse and precisely annotated data. Due to the complex, expensive, and time-consuming capturing and annotation process, synthetic dataset generation approaches have gained popularity over the last decade. With increasing computational resources and advances in simulation technologies, a variety of dataset generators have emerged. These methods rely on either traditional 3D modeling or neural image synthesis to generate data for specific scenarios or general-purpose 3D detection tasks. Their primary goal is to produce high-quality, annotated 3D datasets in an automated and scalable manner. In this review, we evaluate the extent to which state-of-the-art approaches fulfill this goal by introducing a categorization scheme and conducting a comprehensive analysis of both 3D modeling and neural synthesis methods. Our analysis includes techniques used to address the Sim-to-Real domain gap. Furthermore, we assess each method’s level of automation, prerequisites, and practical adoption. This review aims to guide the reader in selecting automated dataset generation workflows for specific detection problems. By considering dataset quality, prerequisites, and application scenarios, we offer practical insights into identifying suitable methods for diverse downstream tasks.
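The practical appeal of simulator-based generation described above is that 3D labels come straight from the simulator's ground-truth state rather than from manual annotation. The Python sketch below illustrates only that idea with a hypothetical toy scene of randomly placed objects and a JSON label export; the class names, field layout, and value ranges are illustrative assumptions, not taken from any generator surveyed in the paper, and a real pipeline would also render images and handle the Sim-to-Real gap.

```python
import json
import random
from dataclasses import dataclass, asdict

@dataclass
class Box3D:
    # Ground-truth 3D box in camera coordinates (illustrative fields only).
    cls: str
    x: float
    y: float
    z: float       # center position in metres
    l: float
    w: float
    h: float       # box dimensions in metres
    yaw: float     # heading angle in radians

def sample_scene(num_objects: int = 5) -> list:
    """Place random objects in front of a virtual camera; a real generator
    would also render the corresponding image from the same scene state."""
    classes = ["car", "pedestrian", "cyclist"]
    scene = []
    for _ in range(num_objects):
        cls = random.choice(classes)
        scene.append(Box3D(
            cls=cls,
            x=random.uniform(-10, 10), y=0.0, z=random.uniform(5, 60),
            l=random.uniform(3.5, 5.0) if cls == "car" else random.uniform(0.5, 2.0),
            w=random.uniform(1.6, 2.0) if cls == "car" else random.uniform(0.4, 0.8),
            h=random.uniform(1.4, 1.8),
            yaw=random.uniform(-3.14, 3.14),
        ))
    return scene

def export_annotations(scene, frame_id: int) -> str:
    """Annotations are exact because they come from the simulator state."""
    return json.dumps({"frame": frame_id, "objects": [asdict(b) for b in scene]}, indent=2)

if __name__ == "__main__":
    print(export_annotations(sample_scene(), frame_id=0))
```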

Citations: 0
A systematic review of generative AI: importance of industry and startup-centered perspectives, agentic AI, ethical considerations & challenges, and future directions
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-12 | DOI: 10.1007/s10462-025-11435-z
Kinjal Patel, Milind Shah, Karishma M. Qureshi, Mohamed Rafik N. Qureshi

Generative Artificial Intelligence (GenAI) is rapidly redefining the landscape of work, organizations, and society at large. GenAI has rapidly evolved from rule-based symbolic systems of the 1940s to advanced deep learning architectures capable of producing human-like content across modalities, such as text, images, audio, and video. This review focuses on current emerging trends, such as large concept models and critical comparisons of tools, including ChatGPT, Gemini, and Claude. This study synthesizes evidence of GenAI’s essential role across major industries, revealing transformative applications in the finance, cloud and IT, healthcare, education, and energy sectors. The paper also highlights the unique opportunities GenAI offers for start-ups, enabling agile projects to leverage cutting-edge technology for competitive advantage. However, the deployment of GenAI systems through edge devices also raises critical challenges related to ethics, transparency, bias, accountability, computational issues, and many more. To address these complexities, this paper examines emerging approaches such as AI agents, agentic AI, and multi-agent systems that aim to extend the functionality of GenAI through autonomy, goal-directed behavior, and collaborative intelligence. It highlights novel integrations with agentic AI architectures, such as BabyAGI, and discusses emerging issues of coordination, hallucination, and security risks. The findings reveal persistent challenges related to scalability, interpretability, and regulatory compliance while identifying future research directions toward developing more sophisticated, ethical, and accessible GenAI systems that will continue to reshape technological landscapes and societal interactions. This systematic review informs researchers, academicians, data scientists, and developers about the latest advancements in GenAI and highlights its applications and role across various industries, as well as supporting practitioners and scholars in staying current with the rapidly evolving landscape of generative technologies.

Citations: 0
A genetic algorithm for the optimization of multi-threshold trading strategies in the directional changes paradigm
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-11 | DOI: 10.1007/s10462-025-11419-z
Ozgur Salman, Themistoklis Melissourgos, Michael Kampouridis

This paper proposes a novel genetic algorithm to optimize recommendations from multiple trading strategies derived from the Directional Changes (DC) paradigm. DC is an event-based approach that differs from the traditional physical-time representation of data, which samples prices at fixed time intervals. The DC method records price movements when specific events occur instead of using fixed intervals. The determination of these events relies on a threshold, which captures significant changes in the price of a given asset. This work employs eight trading strategies that are developed based on directional changes. These strategies were profiled using varying threshold values to provide a comprehensive analysis of their effectiveness. In order to optimize and prioritize the conflicting recommendations given by the different trading strategies under different DC thresholds, we propose a novel genetic algorithm (GA). To analyze the GA’s trading performance, we utilize 200 stocks listed on the New York Stock Exchange. Our findings show that it can generate highly profitable trading strategies at very low risk levels. The GA also outperforms, by a statistically significant margin, other DC-based trading strategies as well as 8 financial trading strategies based on technical indicators such as Aroon, the exponential moving average, and the relative strength index, and also a buy-and-hold strategy. The proposed GA is also able to outperform the trading performance of 7 market indices, such as the Dow Jones Industrial Average and the Standard & Poor’s (S&P) 500.
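For readers unfamiliar with the DC paradigm, the event definition can be made concrete in a few lines: an event is confirmed whenever the price retraces from its most recent extreme by more than the chosen threshold. The sketch below is a minimal, generic implementation of that detection step, not the authors' code; the GA described in the paper would then combine the signals produced by several strategies over several such thresholds.

```python
def directional_changes(prices, theta):
    """Detect directional-change (DC) events for a relative threshold theta
    (e.g. theta=0.01 for 1%). Returns a list of (index, 'up'/'down') events."""
    events = []
    mode = "up"            # current trend being tracked
    extreme = prices[0]    # last local extreme (high in an uptrend, low in a downtrend)
    for i, p in enumerate(prices[1:], start=1):
        if mode == "up":
            if p > extreme:
                extreme = p                      # new high extends the uptrend
            elif p <= extreme * (1 - theta):
                events.append((i, "down"))       # confirmed downward DC event
                mode, extreme = "down", p
        else:
            if p < extreme:
                extreme = p                      # new low extends the downtrend
            elif p >= extreme * (1 + theta):
                events.append((i, "up"))         # confirmed upward DC event
                mode, extreme = "up", p
    return events

if __name__ == "__main__":
    series = [100, 101, 103, 102.5, 99.8, 99, 100.5, 102, 101, 104]
    print(directional_changes(series, theta=0.02))   # [(4, 'down'), (7, 'up')]
```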

Citations: 0
Designing a robust extreme gradient boosting model with SHAP-based interpretation for predicting carbonation depth in recycled aggregate concrete
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-11 | DOI: 10.1007/s10462-025-11411-7
Meysam Alizamir, Aliakbar Gholampour, Sungwon Kim, Salim Heddam, Jaehwan Kim

The degradation of concrete structures is significantly influenced by carbonation, where atmospheric carbon dioxide (CO2) penetrates the concrete matrix. Measuring how far carbonation penetrates into concrete plays a vital role in maintaining structural integrity and construction safety standards. Precisely forecasting the extent of carbonation penetration in recycled aggregate concrete (RAC) remains fundamental for understanding long-term performance and durability. This research is the first to introduce an innovative approach that leverages eight machine learning algorithms to estimate carbonation penetration depth. The selected techniques include NGBoost, GBRT, AdaBoost, CatBoost, XGBoost, LightGBM, HistGBRT, and MLR. Moreover, to evaluate model accuracy, four key performance indicators were employed. Additionally, SHapley Additive exPlanations (SHAP) was incorporated for enhanced model interpretability. Furthermore, the investigation examined six distinct input parameter configurations during training and testing to thoroughly assess model performance. Among the evaluated algorithms, XGBoost delivered the highest accuracy, with an RMSE of 1.389 mm, MAE of 1.005 mm, and R of 0.984. CatBoost followed closely, with RMSE of 1.772 mm, MAE of 1.344 mm, and R of 0.976. Then, the LightGBM ranked third in effectiveness, exhibiting an RMSE of 1.797 mm, MAE of 1.296 mm, and R of 0.975. These results demonstrate the reliability and interpretability of advanced machine learning models for carbonation depth estimation in RAC. The developed models offer practical tools for engineers seeking to evaluate how carbonation penetration affects structural integrity. These findings establish a strong foundation for understanding and predicting carbonation-related deterioration in concrete infrastructure.
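For orientation, the modeling-plus-attribution workflow the study relies on, a gradient-boosted regressor explained with SHAP, can be sketched with the public xgboost and shap packages. The synthetic features, target, and hyperparameters below are placeholders, not the paper's dataset or tuned configuration.

```python
import numpy as np
import xgboost as xgb
import shap

rng = np.random.default_rng(0)

# Placeholder feature matrix standing in for mix-design / exposure variables
# (e.g. water-cement ratio, RCA replacement %, CO2 concentration, exposure time).
X = rng.uniform(0.0, 1.0, size=(500, 4))
y = 12 * X[:, 0] + 6 * X[:, 1] ** 2 + 3 * X[:, 2] + rng.normal(0, 0.5, 500)  # toy carbonation depth (mm)

model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X, y)

# SHAP decomposes each prediction into additive per-feature contributions,
# which is what enables the interpretation step described in the abstract.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```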

Citations: 0
A literature review on automated machine learning
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-11 | DOI: 10.1007/s10462-025-11397-2
Edesio Alcobaça, André C. P. L. F. de Carvalho

AutoML represents a pivotal advancement in machine learning by simplifying and speeding model development. This paper provides a comprehensive survey of AutoML, tracing its evolution from early metalearning, hyperparameter optimization, and transfer learning techniques to the latest advancements in neural architecture search, automated pipeline design, and few-shot learning. It covers historical context, classical approaches, and modern applications while also addressing emerging topics. Key research directions are highlighted, focusing on enhancing model interpretability, improving generalization and robustness, expanding automated pipeline design, and ethical implications of AutoML technologies. This paper aims to provide a holistic view of the current state of AutoML, serving as a valuable resource for researchers, practitioners, and stakeholders seeking to understand and advance the capabilities of AutoML in both theoretical and practical contexts.
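As a concrete example of one classical building block covered by the survey, hyperparameter optimization, the snippet below runs a small randomized search with scikit-learn. The estimator, search space, and dataset are arbitrary placeholders rather than a recommended AutoML setup; full AutoML systems automate this loop together with pipeline and architecture choices.

```python
from scipy.stats import randint
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_breast_cancer(return_X_y=True)

# Search space: a small, illustrative slice of what AutoML systems explore automatically.
param_distributions = {
    "n_estimators": randint(50, 400),
    "max_depth": randint(2, 16),
    "min_samples_leaf": randint(1, 10),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions=param_distributions,
    n_iter=20, cv=5, scoring="roc_auc", random_state=0,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```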

Citations: 0
Heuristics for the direct aperture optimisation in intensity modulated radiation therapy: a systematic literature review
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-11 | DOI: 10.1007/s10462-025-11378-5
Mauricio Moyano, Vinicius Cabrera Jameli, Keiny Meza-Vasquez, Maximiliano Beltran-Villarroel, Sebastian Muñoz-Bustos, Gonzalo Tello-Valenzuela, Nicolle Ojeda-Ortega, Guillermo Cabrera-Guerrero

Intensity-modulated radiation therapy (IMRT) is an advanced technique for cancer treatment that uses a computer-controlled linear accelerator to customise beams’ radiation intensities for patients, optimising the treatment effectiveness. The complexity of IMRT planning requires sophisticated algorithms to solve the different optimisation problems that arise in the context of IMRT treatment planning. One of those optimisation problems is the Direct Aperture Optimisation (DAO). The DAO problem aims to find a set of aperture shapes for each beam angle to enhance precision and improve clinical outcomes. However, this process is computationally intensive and thus, heuristic approaches have been proposed to balance computational efficiency and solution quality, offering nearly optimal solutions within clinically acceptable times. This systematic literature review aims to trace the development and application of heuristic algorithms for the DAO problem in the context of IMRT over the past two decades. We synthesised 41 studies published between 2002 and 2023, sourced from seven major databases (ACM, IEEE Xplore, PubMed, ScienceDirect, Springer, Scopus, and Web of Science). The review highlights key trends, innovations, and future directions in using heuristic methods for DAO, providing valuable insights for researchers and practitioners in radiotherapy optimisation.
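To make the DAO problem tangible, the toy sketch below treats a single beam's aperture as a binary vector of open beamlets and improves it with a simple flip-one-beamlet local search against a uniform dose prescription. This is a deliberately simplified illustration on assumed toy data: it ignores multi-leaf-collimator constraints, multiple apertures and beam angles, and the clinical objective functions used by the heuristics the review surveys.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy geometry: 20 beamlets for a single beam angle, 30 voxels.
n_beamlets, n_voxels = 20, 30
D = rng.uniform(0.0, 1.0, size=(n_beamlets, n_voxels))   # dose per unit intensity from each beamlet
prescription = np.full(n_voxels, 5.0)                     # target dose per voxel

def cost(aperture, intensity):
    dose = intensity * (aperture @ D)      # dose delivered by this single aperture
    return float(np.sum((dose - prescription) ** 2))

# Local-search heuristic: flip one beamlet at a time, keep only improving moves.
aperture = rng.integers(0, 2, size=n_beamlets).astype(float)
intensity = 1.0
best = cost(aperture, intensity)
for _ in range(2000):
    j = rng.integers(n_beamlets)
    aperture[j] = 1.0 - aperture[j]                        # propose flipping beamlet j
    d = aperture @ D
    # Re-fit the aperture intensity in closed form for the proposed shape.
    w = float(d @ prescription / (d @ d)) if d.any() else 0.0
    c = cost(aperture, w)
    if c < best:
        best, intensity = c, w                             # accept improving move
    else:
        aperture[j] = 1.0 - aperture[j]                    # revert

print("final objective:", round(best, 3), "open beamlets:", int(aperture.sum()))
```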

Citations: 0
LIFWCM: local information-based fuzzy weighted C-means algorithm for image segmentation
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-11 | DOI: 10.1007/s10462-025-11420-6
Hanshuai Cui, Wenyi Zeng, Rong Ma, Dong Cheng, Qianpeng Chong, Zeshui Xu

Image segmentation aims to partition an image into non-overlapping regions that are coherent in appearance. Although the fuzzy C-means (FCM) algorithm is widely used for its simplicity and efficiency, it treats each pixel independently and is therefore sensitive to noise. We propose LIFWCM, a local information-based fuzzy weighted C-means algorithm that assigns a single-pass, data-driven weight to each pixel by aggregating neighborhood intensity variation and positional overlap, and then integrates these weights into the standard FCM objective and a spatially aware membership refinement. This design suppresses the influence of noisy and boundary pixels while preserving details with low computational overhead. Across six experiments on synthetic images and natural images from the Image Processing Toolbox and BSDS500, LIFWCM consistently improves segmentation quality under heavy noise. On the BSDS500 image with 30% salt-and-pepper noise, LIFWCM attains 98.96% segmentation accuracy, exceeding the best baseline, and surpassing classical FCM variants. LIFWCM also achieves higher MPA (0.94) and MIoU (0.82) than competing methods, while converging in a few iterations. These results demonstrate that LIFWCM is robust to high-intensity noise, preserves fine structures, and remains efficient due to one-time weight computation, making it suitable for real-world noisy images with complex structures.
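As background, fuzzy C-means alternates membership and centroid updates; LIFWCM's key ingredient is a precomputed per-pixel weight that down-weights noisy pixels in those updates. The sketch below shows a generic weighted FCM on 1-D pixel intensities with a user-supplied weight vector; LIFWCM's neighborhood-based weighting scheme and membership refinement are not reproduced here, so treat this as an assumed baseline rather than the authors' algorithm.

```python
import numpy as np

def weighted_fcm(x, weights, n_clusters=2, m=2.0, n_iter=50, eps=1e-9):
    """Generic weighted fuzzy C-means on 1-D pixel intensities.
    `weights` (one value per pixel) down-weights unreliable pixels in the
    centroid update, in the spirit of local-information weighting."""
    rng = np.random.default_rng(0)
    u = rng.dirichlet(np.ones(n_clusters), size=x.size)          # fuzzy memberships, rows sum to 1
    for _ in range(n_iter):
        um = u ** m
        # Weighted centroid update: noisy pixels (small weight) pull centroids less.
        centers = (um * (weights * x)[:, None]).sum(0) / ((um * weights[:, None]).sum(0) + eps)
        # Standard FCM membership update from distances to centroids.
        dist = np.abs(x[:, None] - centers[None, :]) + eps
        u = 1.0 / (dist ** (2 / (m - 1)) * (1.0 / dist ** (2 / (m - 1))).sum(1, keepdims=True))
    return u.argmax(1), centers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pixels = np.concatenate([np.full(50, 0.2), np.full(50, 0.8)]) + rng.normal(0, 0.05, 100)
    w = np.ones_like(pixels)           # uniform weights reduce this to plain FCM
    labels, centers = weighted_fcm(pixels, w)
    print("centroids:", np.round(centers, 3))
```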

Citations: 0
Imbalanced data oversampling through subspace optimization with Bayesian reinforcement
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-10 | DOI: 10.1007/s10462-025-11417-1
Mahesh Kumbhar, Sunith Bandaru, Alexander Karlsson

Many real-world machine learning classification problems suffer from imbalanced training data, where the least frequent label has high relevance and significance for the end user, such as equipment breakdowns or various types of process anomalies. This imbalance can negatively impact the learning algorithm and lead to misclassification of minority labels, resulting in erroneous actions and potentially high unexpected costs. Most previous oversampling methods rely only on the minority samples, often ignoring their overall density and distribution in relation to the other classes. In addition, most of them offer little explainability of the oversampling process. In contrast, this paper proposes a novel oversampling method that considers a subspace of the feature set for the creation of synthetic minority samples using nonlinear optimization of a class-sensitive objective function. Suitable subspaces for oversampling are identified through a Bayesian reinforcement strategy based on Dirichlet smoothing, which may be useful for explainable AI. An empirical comparison of the proposed method is performed with 10 existing techniques on 18 real-world datasets using two traditional machine learning classifiers and four evaluation metrics. Statistical analysis of cross-validated runs over the 18 datasets and four metrics (i.e. 72 experiments) reveals that the proposed approach is among the best-performing methods in 6 and 2 instances when using the random forest and support vector machine classifiers, respectively, placing it at the top overall. The study also reveals that some feature combinations are more important than others for minority oversampling, and the proposed approach offers a way to identify such features.
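The abstract does not spell out the sampling operator, so the sketch below illustrates only the general subspace idea: synthetic minority points are interpolated toward a minority neighbor along a hand-picked subset of features, with the remaining features copied from the seed sample. The fixed subspace, neighbor count, and interpolation rule are assumptions for illustration; the paper instead selects promising subspaces via nonlinear optimization with a Dirichlet-smoothed Bayesian reinforcement strategy.

```python
import numpy as np

def subspace_oversample(X_min, subspace, n_new, k=5, seed=0):
    """Create synthetic minority samples by interpolating toward a random
    minority neighbor, but only along the columns listed in `subspace`;
    other feature values are copied from the seed sample. Generic sketch,
    not the paper's optimization-based subspace selection."""
    rng = np.random.default_rng(seed)
    synth = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # k nearest minority neighbors of the seed (brute force for clarity).
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbors)
        lam = rng.uniform()
        new = X_min[i].copy()
        new[subspace] = X_min[i, subspace] + lam * (X_min[j, subspace] - X_min[i, subspace])
        synth.append(new)
    return np.vstack(synth)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X_minority = rng.normal(size=(20, 4))
    print(subspace_oversample(X_minority, subspace=[0, 2], n_new=5).shape)  # (5, 4)
```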

Citations: 0
Individual variable priority: a model-independent local gradient method for variable importance
IF 13.9 | Tier 2, Computer Science | Q1 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2025-11-05 | DOI: 10.1007/s10462-025-11339-y
Min Lu, Hemant Ishwaran

Traditional variable importance measures quantify overall feature contributions but often overlook individual-level heterogeneity. Several new procedures attempt to address this limitation but remain model dependent and may introduce biases. We propose individual variable priority (iVarPro), an extension of the Variable Priority (VarPro) framework, which uses rule-based, data-driven partitioning to estimate the gradient of the conditional mean function. By focusing on gradients, iVarPro captures how small perturbations in a variable influence an individual’s outcome, providing a more precise and interpretable measure of importance. To demonstrate its advantages, we conducted simulations and analyzed a real-world survival dataset. Our results show that iVarPro more accurately captures the true functional relationship by flexibly leveraging local samples.
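A model-agnostic way to see what a "local gradient" importance measures is to finite-difference any fitted predictor around one observation, which is conceptually what iVarPro estimates via rule-based, data-driven partitioning. The snippet below is that conceptual stand-in, using an assumed smooth scikit-learn model and toy data; it is not the VarPro-based estimator developed in the paper.

```python
import numpy as np
from sklearn.datasets import make_friedman1
from sklearn.kernel_ridge import KernelRidge

def local_gradient(model, x, h=1e-3):
    """Central finite-difference gradient of model.predict at one point x:
    how much a small perturbation of each feature moves this individual's prediction."""
    x = np.asarray(x, dtype=float)
    grad = np.zeros_like(x)
    for j in range(x.size):
        up, down = x.copy(), x.copy()
        up[j] += h
        down[j] -= h
        grad[j] = (model.predict(up[None, :])[0] - model.predict(down[None, :])[0]) / (2 * h)
    return grad

# A smooth fitted model so finite differences are informative
# (tree ensembles are piecewise constant and would give zero almost everywhere).
X, y = make_friedman1(n_samples=500, random_state=0)
model = KernelRidge(kernel="rbf", alpha=0.1).fit(X, y)
print(np.round(local_gradient(model, X[0]), 3))   # per-feature local sensitivity for one individual
```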

Citations: 0