Pub Date: 2025-11-20 | DOI: 10.1007/s10462-025-11425-1
Muhammad Umer Zia, Wei Xiang, Tao Huang, Jameel Ahmad, Jawwad Nasar Chattha, Ijaz Haider Naqvi, Faran Awais Butt
The tremendous advancements in artificial intelligence (AI) techniques, particularly those pertinent to computer vision and image recognition, are revolutionizing the automotive industry towards the development of intelligent transportation systems for smart cities. Integrating AI techniques into connected autonomous vehicles (CAVs) and unmanned aerial vehicles (UAVs), together with the fusion of their data, enables a new paradigm of unparalleled real-time awareness of the surrounding environment. The potential of emerging wireless technologies can be fully exploited by establishing communication and cooperation among AI-augmented CAVs and UAVs. However, configuring appropriate deep learning (DL) models for connected vehicles is a complex task. Any errors can result in severe consequences, including loss of vehicles, infrastructure, and human lives. These systems are also susceptible to cyber attacks, necessitating a thorough and timely threat analysis and countermeasures to prevent catastrophic events. Our findings highlight the effectiveness of AI-driven data fusion in enhancing cooperative perception between CAVs and UAVs, identify security vulnerabilities in DL-based systems, and demonstrate how V2X-enabled UAVs can significantly improve situational awareness in corner cases.
{"title":"Unifying ground and air: a comprehensive review of deep learning-enabled CAVs and UAVs","authors":"Muhammad Umer Zia, Wei Xiang, Tao Huang, Jameel Ahmad, Jawwad Nasar Chattha, Ijaz Haider Naqvi, Faran Awais Butt","doi":"10.1007/s10462-025-11425-1","DOIUrl":"10.1007/s10462-025-11425-1","url":null,"abstract":"<div><p>The tremendous advancements in artificial intelligence (AI) techniques, particularly those pertinent to computer vision and image recognition, are revolutionizing the automotive industry towards the development of intelligent transportation systems for smart cities. Integrating AI techniques into connected autonomous vehicles (CAVs) and unmanned aerial vehicles (UAVs) and their data fusion, enables a new paradigm that allows for unparalleled real-time awareness of the surrounding environment. The potential of emerging wireless technologies can be fully exploited by establishing communication and cooperation among AI-augmented CAVs and UAVs. However, configuring appropriate deep learning (DL) models for connected vehicles is a complex task. Any errors can result in severe consequences, including loss of vehicles, infrastructure, and human lives. These systems are also susceptible to cyber attacks, necessitating a thorough and timely threat analysis and countermeasures to prevent catastrophic events. Our findings highlight the effectiveness of AI-driven data fusion in enhancing cooperative perception between CAVs and UAVs, identify security vulnerabilities in DL-based systems, and demonstrate how V2X-enabled UAVs can significantly improve situational awareness in corner cases.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11425-1.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Infrared–visible image fusion (IVIF) integrates complementary thermal and photometric cues for surveillance, remote sensing, and autonomous perception. Existing surveys, while comprehensive, provide limited guidance for design-to-deployment and seldom relate fusion quality to task outcomes or device constraints. This work provides a unified perspective that organizes IVIF methods along an interface-attention-alignment coordinate system covering classical spatial/transform pipelines and contemporary deep paradigms (generative, discriminative, multi-task, hybrid/Transformer, dynamic). Building on literature through 2025, we synthesize fidelity-robustness-efficiency trade-offs and introduce a comparison-to-deployment protocol that couples fusion metrics with task accuracy (AP/mIoU), latency, memory footprint, and condition-performance characterization (misregistration, noise, illumination/weather). We consolidate Transformer/hybrid coverage with practical recipes and focused guidance on temporal consistency, robustness auditing, and physics-grounded interpretability. Compared with previous reviews, our survey concurrently addresses four under-covered dimensions (video temporal consistency, robustness auditing, task-aware evaluation, and deployment reporting) and distills a practical checklist linking architectural choices to operating conditions and hardware budgets, enabling reproducible, task-relevant IVIF practice.
{"title":"Advances and challenges in infrared-visible image fusion: a comprehensive review of techniques and applications","authors":"Rongchao Wang, Zhaofa Zhou, Shuhui Li, Zhili Zhang","doi":"10.1007/s10462-025-11426-0","DOIUrl":"10.1007/s10462-025-11426-0","url":null,"abstract":"<div><p>Infrared–visible image fusion (IVIF) integrates complementary thermal and photometric cues for surveillance, remote sensing, and autonomous perception. Existing surveys, while comprehensive, provide limited guidance for <i>design-to-deployment</i> and seldom relate fusion quality to task outcomes or device constraints. This work provides a unified perspective that organizes IVIF methods along an interface-attention-alignment coordinate system covering classical spatial/transform pipelines and contemporary deep paradigms (generative, discriminative, multi-task, hybrid/Transformer, dynamic). Building on literature through 2025, we synthesize fidelity-robustness-efficiency trade-offs and introduce a comparison-to-deployment protocol that couples fusion metrics with task accuracy (AP/mIoU), latency, memory footprint, and condition-performance characterization (misregistration, noise, illumination/weather). We consolidate Transformer/hybrid coverage with practical recipes and focused guidance on temporal consistency, robustness auditing, and physics-grounded interpretability. Compared with previous reviews, our survey concurrently addresses four under-covered dimensions-video temporal consistency, robustness auditing, task-aware evaluation, and deployment reporting-and distills a practical checklist linking architectural choices to operating conditions and hardware budgets, enabling reproducible, task-relevant IVIF practice.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11426-0.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The exponential growth of Large Language Models (LLMs) continues to highlight the need for efficient strategies to meet ever-expanding computational and data demands. This survey provides a comprehensive analysis of two complementary paradigms: Knowledge Distillation (KD) and Dataset Distillation (DD), both aimed at compressing LLMs while preserving their advanced reasoning capabilities and linguistic diversity. We first examine key methodologies in KD, such as task-specific alignment, rationale-based training, and multi-teacher frameworks, alongside DD techniques that synthesize compact, high-impact datasets through optimization-based gradient matching, latent space regularization, and generative synthesis. Building on these foundations, we explore how integrating KD and DD can produce more effective and scalable compression strategies. Together, these approaches address persistent challenges in model scalability, architectural heterogeneity, and the preservation of emergent LLM abilities. We further highlight applications across domains such as healthcare and education, where distillation enables efficient deployment without sacrificing performance. Despite substantial progress, open challenges remain in preserving emergent reasoning and linguistic diversity, enabling efficient adaptation to continually evolving teacher models and datasets, and establishing comprehensive evaluation protocols. By synthesizing methodological innovations, theoretical foundations, and practical insights, our survey charts a path toward sustainable, resource-efficient LLMs through the tighter integration of KD and DD principles.
{"title":"Knowledge distillation and dataset distillation of large language models: emerging trends, challenges, and future directions","authors":"Luyang Fang, Xiaowei Yu, Jiazhang Cai, Yongkai Chen, Shushan Wu, Zhengliang Liu, Zhenyuan Yang, Haoran Lu, Xilin Gong, Yufang Liu, Terry Ma, Wei Ruan, Ali Abbasi, Jing Zhang, Tao Wang, Ehsan Latif, Wei Liu, Wei Zhang, Soheil Kolouri, Xiaoming Zhai, Dajiang Zhu, Wenxuan Zhong, Tianming Liu, Ping Ma","doi":"10.1007/s10462-025-11423-3","DOIUrl":"10.1007/s10462-025-11423-3","url":null,"abstract":"<div><p>The exponential growth of Large Language Models (LLMs) continues to highlight the need for efficient strategies to meet ever-expanding computational and data demands. This survey provides a comprehensive analysis of two complementary paradigms: Knowledge Distillation (KD) and Dataset Distillation (DD), both aimed at compressing LLMs while preserving their advanced reasoning capabilities and linguistic diversity. We first examine key methodologies in KD, such as task-specific alignment, rationale-based training, and multi-teacher frameworks, alongside DD techniques that synthesize compact, high-impact datasets through optimization-based gradient matching, latent space regularization, and generative synthesis. Building on these foundations, we explore how integrating KD and DD can produce more effective and scalable compression strategies. Together, these approaches address persistent challenges in model scalability, architectural heterogeneity, and the preservation of emergent LLM abilities. We further highlight applications across domains such as healthcare and education, where distillation enables efficient deployment without sacrificing performance. Despite substantial progress, open challenges remain in preserving emergent reasoning and linguistic diversity, enabling efficient adaptation to continually evolving teacher models and datasets, and establishing comprehensive evaluation protocols. By synthesizing methodological innovations, theoretical foundations, and practical insights, our survey charts a path toward sustainable, resource-efficient LLMs through the tighter integration of KD and DD principles.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11423-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1007/s10462-025-11407-3
T. J. Mateo Sanguino
Advances in generative artificial intelligence (AI), such as recent developments in text, audio, and video production, have amplified societal concerns, with threat probabilities estimated between 5% and 50%. This manuscript undertakes a comprehensive study to understand the factors influencing AI development, focusing on the interplay between AI research, cinematographic representations, and regulatory policies. The study reveals a strong interaction between scientific advances and cultural representations, indicating shared concerns and themes across both domains. It also highlights broad support for ethical and responsible AI development, with temporal analyses showing the significant influence of films on public perception and slower growth in policy implementation relative to cultural diffusion. The findings point to a Pygmalion effect, where cultural representations shape perceptions of AI, and a potential Golem effect, where increased regulation may limit the dangerous development of AI and its societal impact. The study underscores the importance of balanced and ethical AI development, requiring continued monitoring and careful management of the relationship between research, cultural representations, and regulatory frameworks.
{"title":"The Pygmalion effect in AI: influence of cultural narratives and policies on technological development","authors":"T. J. Mateo Sanguino","doi":"10.1007/s10462-025-11407-3","DOIUrl":"10.1007/s10462-025-11407-3","url":null,"abstract":"<div><p>Advances in generative artificial intelligence (AI), such as recent developments in text, audio, and video production, have amplified societal concerns, with threat probabilities estimated between 5 and 50%. This manuscript undertakes a comprehensive study to understand the factors influencing AI development, focusing on the interplay between AI research, cinematographic representations, and regulatory policies. The study reveals a strong interaction between scientific advances and cultural representations, indicating shared concerns and themes across both domains. It also highlights broad support for ethical and responsible AI development, with temporal analyses showing the significant influence of films on public perception and slower growth in policy implementation relative to cultural diffusion. The findings discuss the presence of a Pygmalion effect, where cultural representations shape perceptions of AI, and a potential Golem effect, where increased regulation may limit the dangerous development of AI and its societal impact. The study underscores the importance of balanced and ethical AI development, requiring continued monitoring and careful management of the relationship between research, cultural representations, and regulatory frameworks.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11407-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560954","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1007/s10462-025-11410-8
Lincy Annet Abraham, Gopinath Palanisamy, Goutham Veerapu, J. S. Nisha
Brain tumors are among the most difficult medical conditions to analyze and treat. They must be detected accurately and promptly to improve patient outcomes and plan effective treatments. Recent advances in artificial intelligence (AI) and machine learning (ML) have increased interest in applying AI to detect brain tumors. However, concerns have emerged regarding the reliability and transparency of AI models in medical settings, as their decision-making processes are often opaque and difficult to interpret. This research is unique in its focus on explainability in AI-based brain tumor detection, prioritizing confidence, safety, and clinical adoption over mere accuracy. It gives a thorough overview of explainable AI (XAI) methodologies, problems, and uses, linking scientific advances to the needs of real-world healthcare. XAI is a subfield of artificial intelligence that seeks to solve this problem by offering understandable, transparent explanations for the decisions made by AI models. XAI-based procedures have been introduced in applications such as healthcare, where the interpretability of AI models is essential for guaranteeing patient safety and fostering confidence between medical professionals and AI systems. This paper reviews recent advancements in XAI-based brain tumor detection, focusing on methods that provide justifications for AI model predictions. The study highlights the advantages of XAI in improving patient outcomes and supporting medical decision-making. The findings reveal that ResNet-18 performed well, with 94% training accuracy, 96.86% testing accuracy, low loss (0.012), and a short runtime (~6 s). ResNet-50 was a little slower (~13 s) but stable, with 92.86% test accuracy. DenseNet121 (AdamW) achieved the highest accuracy at 97.71%, but it was not consistent across all optimizers. ViT-GRU also achieved 97% accuracy with very little loss (0.008), although it took a long time to compute (around 49 s). On the other hand, VGG models (around 94% test accuracy) and MobileNetV2 (loss up to 6.024) were less reliable, even though they trained faster. Additionally, the review explores various opportunities, challenges, and clinical applications. Based on these findings, this research offers a comprehensive analysis of XAI-based brain tumor detection and encourages further investigation in specific areas.
{"title":"Exploring the potential of explainable AI in brain tumor detection and classification: a systematic review","authors":"Lincy Annet Abraham, Gopinath Palanisamy, Goutham Veerapu, J. S. Nisha","doi":"10.1007/s10462-025-11410-8","DOIUrl":"10.1007/s10462-025-11410-8","url":null,"abstract":"<div><p>The analysis and treatment of brain tumors are among the most difficult medical conditions. Brain tumors must be detected accurately and promptly to improve patient outcomes and plan effective treatments. Recently used advanced technologies such as artificial intelligence (AI) and machine learning (ML) have increased interest in applying AI to detect brain tumors. However, concerns have emerged regarding the reliability and transparency of AI models in medical settings, as their decision-making processes are often opaque and difficult to interpret. This research is unique in its focus on explainability in AI-based brain tumor detection, prioritizing confidence, safety, and clinical adoption over mere accuracy. It gives a thorough overview of XAI methodologies, problems, and uses, linking scientific advances to the needs of real-world healthcare. XAI is a sub-section of artificial intelligence that seeks to solve this problem by offering understandable and straightforward and providing explanations for the choices made by AI representations. Applications such as healthcare, where the interpretability of AI models is essential for guaranteeing patient safety and fostering confidence between medical professionals and AI systems, have seen the introduction of XAI-based procedures. This paper reviews recent advancements in XAI-based brain tumor detection, focusing on methods that provide justifications for AI model predictions. The study highlights the advantages of XAI in improving patient outcomes and supporting medical decision-making. The findings reveal that ResNet 18 performed better, with 94% training accuracy, 96.86% testing accuracy, low loss (0.012), and a rapid time <span>((sim 6text {s}))</span>. ResNet 50 was a little slower <span>((sim 13text {s}))</span> but stable, with 92.86% test accuracy. DenseNet121 (Adam W) achieved the highest accuracy at 97.71%, but it was not consistent across all optimizers. ViT-GRU also got 97% accuracy with very little loss (0.008), although it took a long time to compute (around 49 s). On the other hand, VGG models (around 94% test accuracy) and MobileNetV2 (loss up to 6.024) were less reliable, even though they trained faster. Additionally, it explores various opportunities, challenges, and clinical applications. Based on these findings, this research offers a comprehensive analysis of XAI-based brain tumor detection and encourages further investigation in specific areas.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11410-8.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145560956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1007/s10462-025-11412-6
Maik Larooij, Petter Törnberg
Recent advances in Large Language Models (LLMs) have revitalized interest in Agent-Based Models (ABMs) by enabling “generative” simulations, with agents that can plan, reason, and interact through natural language. These developments promise greater realism and expressive power, but also revive long-standing concerns over empirical grounding, calibration, and validation—issues that have historically limited the uptake of ABMs in the social sciences. This paper systematically reviews the emerging literature on generative ABMs to assess how these long-standing challenges are being addressed. We map domains of application, categorize reported validation practices, and assess their alignment with the stated modeling goals. Our review suggests that the use of LLMs may exacerbate rather than alleviate the challenge of validating ABMs, given their black-box structure, cultural biases, and stochastic outputs. While the need for validation is increasingly acknowledged, studies often rely on face-validity or outcome measures that are only loosely tied to underlying mechanisms. Generative ABMs thus occupy an ambiguous methodological space—lacking both the parsimony of formal models and the empirical validity of data-driven approaches—and their contribution to cumulative social-scientific knowledge hinges on resolving this tension.
{"title":"Validation is the central challenge for generative social simulation: a critical review of LLMs in agent-based modeling","authors":"Maik Larooij, Petter Törnberg","doi":"10.1007/s10462-025-11412-6","DOIUrl":"10.1007/s10462-025-11412-6","url":null,"abstract":"<div><p>Recent advances in Large Language Models (LLMs) have revitalized interest in Agent-Based Models (ABMs) by enabling “generative” simulations, with agents that can plan, reason, and interact through natural language. These developments promise greater realism and expressive power, but also revive long-standing concerns over empirical grounding, calibration, and validation—issues that have historically limited the uptake of ABMs in the social sciences. This paper systematically reviews the emerging literature on generative ABMs to assess how these long-standing challenges are being addressed. We map domains of application, categorize reported validation practices, and assess their alignment with the stated modeling goals. Our review suggests that the use of LLMs may exacerbate rather than alleviate the challenge of validating ABMs, given their black-box structure, cultural biases, and stochastic outputs. While the need for validation is increasingly acknowledged, studies often rely on face-validity or outcome measures that are only loosely tied to underlying mechanisms. Generative ABMs thus occupy an ambiguous methodological space—lacking both the parsimony of formal models and the empirical validity of data-driven approaches—and their contribution to cumulative social-scientific knowledge hinges on resolving this tension.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11412-6.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1007/s10462-025-11416-2
Helmi Ayari, Pr. Ramzi Guetari, Pr. Naoufel Kraïem
Over the past few decades, credit scoring has become an important tool in the financial sector. It enables banks and financial institutions to assess the creditworthiness of individuals and reduce the risk of default. As a result of significant advances in artificial intelligence techniques, machine learning (ML) has made it possible to improve credit scoring by distinguishing between people with good creditworthiness and those with poorer creditworthiness. In this article, we propose a systematic literature review of ML-based financial credit scoring methods published between 2018 and 2024. A total of 330 research papers were extracted from four different online databases and digital libraries. After the study selection procedure, 63 research papers were selected for this systematic review. This paper aims to identify the major ML methods used in credit scoring, assess their strengths and limitations, and highlight notable trends and advancements. In addition, the review addresses the critical challenges faced in the adoption of ML models for credit scoring. This study not only contributes to the understanding of effective ML techniques used for credit scoring but also guides future research by highlighting the promising avenues in ML-based credit scoring efforts.
{"title":"Machine learning powered financial credit scoring: a systematic literature review","authors":"Helmi Ayari, Pr. Ramzi Guetari, Pr. Naoufel Kraïem","doi":"10.1007/s10462-025-11416-2","DOIUrl":"10.1007/s10462-025-11416-2","url":null,"abstract":"<div><p>Over the past few decades, credit scoring has become an important tool in the financial sector. It enables banks and financial institutions to assess the creditworthiness of individuals and reduce the risk of default. As a result of significant advances in artificial intelligence techniques. Machine learning (ML) has made it possible to improve credit scoring by distinguishing between people with good creditworthiness and those with poorer creditworthiness. In this article, we propose a systematic literature review of ML-based financial credit scoring methods published between 2018 and 2024. A total of 330 research papers were extracted from four different online databases and digital libraries. After the study selection procedure, 63 research papers were selected for this systematic review. This paper aims to identify the major ML methods used in credit scoring, assess their strengths and limitations, and highlight notable trends and advancements. In addition, the review addresses the critical challenges faced in the adoption of ML models for credit scoring. This study not only contributes to the understanding of effective ML techniques used for credit scoring but also guides future research by highlighting the promising avenues in ML-based credit scoring efforts.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11416-2.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-18 | DOI: 10.1007/s10462-025-11415-3
Sufyan Danish, Md. Jalil Piran, Samee Ullah Khan, Muhammad Attique Khan, L. Minh Dang, Yahya Zweiri, Hyoung-Kyu Song, Hyeonjoon Moon
In recent years, the intensity and frequency of fires have increased significantly, resulting in considerable damage to properties and the environment through wildfires, oil pipeline fires, hazardous gas emissions, and building fires. Effective fire management systems are essential for early detection, rapid response, and mitigation of fire impacts. To address this challenge, unmanned aerial vehicles (UAVs) integrated with state-of-the-art deep learning techniques offer a transformative solution for real-time fire detection, monitoring, and response. UAVs play an essential role in the detection, classification, and segmentation of fire-affected regions, enhancing vision-based fire management through advanced computer vision and deep learning technologies. This comprehensive survey critically examines recent advancements in vision-based fire management systems enabled by autonomous UAVs. It explores how baseline deep learning models, including convolutional neural networks, attention mechanisms, YOLO variants, generative adversarial networks, and transformers, enhance UAV capabilities for fire-related tasks. Unlike previous reviews that focus on conventional machine learning and general AI approaches, this survey emphasizes the unique advantages and applications of deep learning-driven UAV platforms in fire scenarios. It details the architectures, performance, and applications used in UAV-based fire management. Additionally, the paper provides detailed insights into the available fire datasets along with their download links and outlines critical challenges, including data imbalance, privacy concerns, and real-time processing limitations. Finally, the survey identifies promising future directions, including multimodal sensor fusion, lightweight neural network architectures optimized for UAV deployment, and vision-language models. By synthesizing current research and identifying future directions, this survey aims to support the development of robust, intelligent UAV-based solutions for next-generation fire management. Researchers and professionals can access the GitHub repository.
{"title":"Vision-based fire management system using autonomous unmanned aerial vehicles: a comprehensive survey","authors":"Sufyan Danish, Md. Jalil Piran, Samee Ullah Khan, Muhammad Attique Khan, L. Minh Dang, Yahya Zweiri, Hyoung-Kyu Song, Hyeonjoon Moon","doi":"10.1007/s10462-025-11415-3","DOIUrl":"10.1007/s10462-025-11415-3","url":null,"abstract":"<div><p>In recent years, the intensity and frequency of fires have increased significantly, resulting in considerable damage to properties and the environment through wildfires, oil pipeline fires, hazardous gas emissions, and building fires. Effective fire management systems are essential for early detection, rapid response, and mitigation of fire impacts. To address this challenge, unmanned aerial vehicles (UAVs) integrated with advanced state-of-the-art deep learning techniques offer a transformative solution for real-time fire detection, monitoring, and response. As UAVs play an essential role in the detection, classification and segmentation of fire-affected regions, enhancing vision-based fire management through advanced computer vision and deep learning technologies. This comprehensive survey critically examines recent advancements in vision-based fire management systems enabled by autonomous UAVs. It explores how baseline deep learning models, including convolutional neural networks, attention mechanisms, YOLO variants, generative adversarial networks and transformers, enhance UAV capabilities for fire-related tasks. Unlike previous reviews that focus on conventional machine learning and general AI approaches, this survey emphasizes the unique advantages and applications of deep learning-driven UAV platforms in fire scenarios. It provides detailed insights into various architectures, performance and applications used in UAV-based fire management. Additionally, the paper provides detailed insights into the available fire datasets along with their download links and outlines critical challenges, including data imbalance, privacy concerns, and real-time processing limitations. Finally, the survey identifies promising future directions, including multimodal sensor fusion, lightweight neural network architectures optimized for UAV deployment, and vision-language models. By synthesizing current research and identifying future directions, this survey aims to support the development of robust, intelligent UAV-based solutions for next-generation fire management. Researchers and professionals can access the GitHub repository.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11415-3.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145561092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Agentic AI represents a transformative shift in artificial intelligence, but its rapid advancement has led to a fragmented understanding, often conflating modern neural systems with outdated symbolic models—a practice known as conceptual retrofitting. This survey cuts through this confusion by introducing a novel dual-paradigm framework that categorizes agentic systems into two distinct lineages: the symbolic/classical (relying on algorithmic planning and persistent state) and the neural/generative (leveraging stochastic generation and prompt-driven orchestration). Through a systematic PRISMA-based review of 90 studies (2018–2025), we provide a comprehensive analysis structured around this framework across three dimensions: (1) the theoretical foundations and architectural principles defining each paradigm; (2) domain-specific implementations in healthcare, finance, and robotics, demonstrating how application constraints dictate paradigm selection; and (3) paradigm-specific ethical and governance challenges, revealing divergent risks and mitigation strategies. Our analysis reveals that the choice of paradigm is strategic: symbolic systems dominate safety-critical domains (e.g., healthcare), while neural systems prevail in adaptive, data-rich environments (e.g., finance). Furthermore, we identify critical research gaps, including a significant deficit in governance models for symbolic systems and a pressing need for hybrid neuro-symbolic architectures. The findings culminate in a strategic roadmap arguing that the future of Agentic AI lies not in the dominance of one paradigm, but in their intentional integration to create systems that are both adaptable and reliable. This work provides the essential conceptual toolkit to guide future research, development, and policy toward robust and trustworthy hybrid intelligent systems.
{"title":"Agentic AI: a comprehensive survey of architectures, applications, and future directions","authors":"Mohamad Abou Ali, Fadi Dornaika, Jinan Charafeddine","doi":"10.1007/s10462-025-11422-4","DOIUrl":"10.1007/s10462-025-11422-4","url":null,"abstract":"<div><p>Agentic AI represents a transformative shift in artificial intelligence, but its rapid advancement has led to a fragmented understanding, often conflating modern neural systems with outdated symbolic models—a practice known as <i>conceptual retrofitting</i>. This survey cuts through this confusion by introducing a novel dual-paradigm framework that categorizes agentic systems into two distinct lineages: the symbolic/classical (relying on algorithmic planning and persistent state) and the neural/generative (leveraging stochastic generation and prompt-driven orchestration). Through a systematic PRISMA-based review of 90 studies (2018–2025), we provide a comprehensive analysis structured around this framework across three dimensions: (1) the theoretical foundations and architectural principles defining each paradigm; (2) domain-specific implementations in healthcare, finance, and robotics, demonstrating how application constraints dictate paradigm selection; and (3) paradigm-specific ethical and governance challenges, revealing divergent risks and mitigation strategies. Our analysis reveals that the choice of paradigm is strategic: symbolic systems dominate safety-critical domains (e.g., healthcare), while neural systems prevail in adaptive, data-rich environments (e.g., finance). Furthermore, we identify critical research gaps, including a significant deficit in governance models for symbolic systems and a pressing need for hybrid neuro-symbolic architectures. The findings culminate in a strategic roadmap arguing that the future of Agentic AI lies not in the dominance of one paradigm, but in their intentional integration to create systems that are both <i>adaptable</i> and <i>reliable</i>. This work provides the essential conceptual toolkit to guide future research, development, and policy toward robust and trustworthy hybrid intelligent systems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11422-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145511072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-14 | DOI: 10.1007/s10462-025-11414-4
Jiangjian Xie, Shanshan Xie, Yang Liu, Xin Jing, Mengkun Zhu, Linlin Xie, Junguo Zhang, Kun Qian, Björn W. Schuller
The broad application of passive acoustic monitoring provides a critical data foundation for studying soundscape ecology, necessitating automated analysis methods to accurately extract ecological information from vast soundscape data. This review comprehensively and cohesively examines two predominant approaches in soundscape analysis: soundscape component recognition and acoustic indices methods. Focusing on machine learning (ML)-based analysis methods for bird diversity assessment over the past five years, this review surveys representative research within each category, outlining their respective strengths and limitations. This not only addresses the growing interest in this field but also identifies research gaps and poses key questions for future studies. The insights from this review are anticipated to significantly enhance the understanding of ML applications in soundscape analysis, guiding subsequent investigative efforts in this rapidly evolving discipline, and thereby better supporting long-term biodiversity monitoring and conservation initiatives.
{"title":"Decoding nature’s melody: significance and challenges of machine learning in assessing bird diversity via soundscape analysis","authors":"Jiangjian Xie, Shanshan Xie, Yang Liu, Xin Jing, Mengkun Zhu, Linlin Xie, Junguo Zhang, Kun Qian, Björn W. Schuller","doi":"10.1007/s10462-025-11414-4","DOIUrl":"10.1007/s10462-025-11414-4","url":null,"abstract":"<div><p>The broad application of passive acoustic monitoring provides a critical data foundation for studying soundscape ecology, necessitating automated analysis methods to accurately extract ecological information from vast soundscape data. This review comprehensively and cohesively examines two predominant approaches in soundscape analysis: soundscape component recognition and acoustic indices methods. Focusing on machine learning (ML)-based analysis methods for bird diversity assessment over the past five years, this review surveys representative research within each category, outlining their respective strengths and limitations. This not only addresses the growing interest in this field but also identifies research gaps and poses key questions for future studies. The insights from this review are anticipated to significantly enhance the understanding of ML applications in soundscape analysis, guiding subsequent investigative efforts in this rapidly evolving discipline, and thereby better supporting long-term biodiversity monitoring and conservation initiatives.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 1","pages":""},"PeriodicalIF":13.9,"publicationDate":"2025-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11414-4.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145511071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}