Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11436-y
Yan Liu, Mingyu Yan, Yanqiu Xiao, Guangzhen Cui, Li Han
Depth estimation methods for autonomous driving applications face numerous challenges, such as capturing fine details and handling varying lighting conditions. To address these challenges, LRDepth is proposed to improve the depth estimation task; it includes a simple high-frequency enhancement module (HFEM) and a progressive residual denoising diffusion (PRDD) module. HFEM extracts high-frequency components and amplifies features such as object edge details, yielding more precise depth predictions. Inspired by the strong performance of diffusion models in various vision tasks, PRDD is designed to refine the depth predictions by reducing noise and enhancing edge details, ensuring accurate representation of distant objects and subtle features. Extensive experiments on the KITTI and DIODE datasets demonstrate that the proposed network boosts monocular depth estimation performance, achieving more accurate long-range depth predictions and improving robustness across lighting environments. The results verify the method's adaptability; the model shows potential for real-world applications and can benefit the optimization of the visual perception module in intelligent driving systems.
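The abstract does not specify how HFEM works internally; as a hypothetical sketch of the general idea only (not the authors' module), the snippet below splits a feature map into low- and high-frequency parts with a simple box blur and amplifies the high-frequency residual, which is one common way to emphasize edge detail before prediction:

```python
import numpy as np

def high_frequency_enhance(feat, alpha=0.5):
    """Hypothetical HFEM-style step: separate the high-frequency residual
    (edges, fine detail) of a 2-D feature map and amplify it by alpha."""
    # Low-pass: 3x3 box blur via edge-padded neighbourhood averaging.
    h, w = feat.shape
    padded = np.pad(feat, 1, mode="edge")
    low = sum(padded[i:i + h, j:j + w]
              for i in range(3) for j in range(3)) / 9.0
    high = feat - low            # high-frequency residual (edges)
    return feat + alpha * high   # amplify fine detail, keep base signal
```

A constant map passes through unchanged (its high-frequency residual is zero), while responses near edges are boosted.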
{"title":"Finer monocular depth estimation with long range in various driving lighting environments","authors":"Yan Liu, Mingyu Yan, Yanqiu Xiao, Guangzhen Cui, Li Han","doi":"10.1007/s10462-025-11436-y","DOIUrl":"10.1007/s10462-025-11436-y","url":null,"abstract":"<div><p>Depth estimation methods for autonomous driving application face numerous challenges, such as capturing fine details and handling varying lighting conditions. Based on these challenges, LRDepth is proposed to improve the depth estimation task, which includes a simple high frequency enhancement module (HFEM) and a progressive residual denoising diffusion (PRDD) module. HFEM aids in extracting high-frequency components and amplifying the features, such as object edge details, generating more precise depth predictions. Inspired by the strong performance of diffusion models in various vision tasks, PRDD is designed to refine the depth predictions by reducing noise and enhancing edge details, which ensures the accurate representation of distant objects and subtle features. Extensive experiments on the KITTI and DIODE datasets demonstrated that the proposed network boosts the performance of monocular depth estimation, achieving more accurate long range depth predictions and improving model robustness in various lighting environments. 
The experiment results verified the method's adaptability, and the model is potential for real-world applications, which is beneficial for the optimization of visual perception module in intelligent driving system.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11436-y.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11447-9
Muhammad Abrar, Mujeeb ur Rehman, Sohail Khalid, Rahmat Ullah
Mental health disorders are a major global health concern and pose a significant burden on healthcare systems worldwide. Nearly one billion people suffer from mental disorders, accounting for 13% of the global disease burden and $1 trillion in annual productivity loss. Depression is the leading cause of disability, and suicide is the second leading cause of death among young individuals. Economic uncertainty, social isolation, climate change, shifting societal norms, political conflict, and increasing violence are key factors contributing to the high prevalence of mental health issues. Increasing poverty and inequality are likely to worsen this trend, resulting in a greater incidence and burden of mental illness, so timely diagnosis and intervention are a high priority. Traditional diagnostic and intervention methods, such as self-report questionnaires, clinical interviews, psychotherapy, medication, electroconvulsive therapy, and occupational therapy, have drawbacks including subjectivity, time commitment, and the potential for prolonged treatment. Given these limitations, advanced approaches are needed to improve diagnostic accuracy and precision and to develop more effective interventions. This review explores and evaluates the applications of artificial intelligence in the diagnosis and treatment of mental health conditions, providing a thorough analysis of various AI-driven techniques and their advancements. Artificial intelligence has the potential to greatly improve the accuracy and effectiveness of diagnosing and treating mental health conditions. Moreover, this work consolidates the research gaps in current techniques and provides research hypotheses on how to overcome them using a proposed 3-tier solution.
{"title":"The intersection of artificial intelligence and assistive technologies in the diagnosis and intervention of mental health conditions","authors":"Muhammad Abrar, Mujeeb ur Rehman, Sohail Khalid, Rahmat Ullah","doi":"10.1007/s10462-025-11447-9","DOIUrl":"10.1007/s10462-025-11447-9","url":null,"abstract":"<div><p>Mental health disorders are becoming a major global health concern and pose a significant burden on global healthcare systems. Nearly one billion people suffer from mental disorders, accounting for 13% of the global disease burden and $1 trillion in annual productivity loss. Depression is the leading cause of disability and suicide is the second leading cause of death among young individuals. Economic uncertainty, social isolation, climate change, shifting societal norms, political conflict, and increasing violence are key factors contributing to the high prevalence of mental health issues. In the future, increasing poverty and inequality are likely to worsen this trend, resulting in a greater incidence and burden of mental illness. Therefore, timely diagnosis and intervention are a high priority. Traditional diagnostic and intervention methods, such as self-report questionnaires, clinical interviews, psychotherapy, medication, electroconvulsive therapy, and occupational therapy, have drawbacks including subjectivity, time commitment, and the potential for prolonged treatment. Due to these limitations, advanced approaches are needed to improve diagnostic accuracy and precision and to develop more effective interventions. This review aims to explore and evaluate the applications of Artificial Intelligence in the diagnosis and treatment of mental health conditions. This study provides a thorough analysis of various artificial intelligence-driven techniques and their advancements in the diagnosis of mental health conditions. Artificial intelligence has the potential to greatly improve the accuracy and effectiveness of mental health conditions. 
Moreover, this work consolidates the research gaps in current techniques and provides research hypotheses on how to overcome the gaps using a proposed 3-tier solution.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11447-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909044","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11390-9
Xinpeng Long, Michael Kampouridis, Tasos Papastylianou
This study explores the integration of directional changes (DC), genetic programming (GP), and multi-objective optimisation (MOO) to develop advanced algorithmic trading strategies. Directional changes offer a dynamic, event-based approach to market analysis, identifying significant price movements and trends. Genetic programming evolves trading rules to discover effective and profitable strategies. However, financial trading presents a multi-objective challenge, balancing conflicting objectives such as returns and risk. We propose a novel algorithmic trading framework, termed MOO3, which integrates genetic programming with the NSGA-II multi-objective optimisation algorithm to optimise three fitness functions: total return, expected rate of return, and risk. While the use of NSGA-II itself is well-established, our contribution lies in how we apply it within a trading context that combines (i) directional changes, (ii) genetic programming with both DC-based and physical-time indicators, and (iii) a modified Sharpe Ratio for post-optimisation strategy selection based on trader preferences. Utilising indicators from both paradigms allows the GP algorithm to create profitable trading strategies, while the multi-objective fitness function allows it to simultaneously optimise for risk. A definitive strategy is chosen from Pareto-optimal solutions using the modified Sharpe Ratio, allowing traders to prioritise multiple objectives. Our methodology is tested on 110 stock datasets from 10 international markets, aiming to demonstrate that the multi-objective framework can yield superior trading strategies with lower risk. Results indicate that the MOO3 algorithm consistently and significantly outperforms single-objective optimisation (SOO) methods, even when the same SOO criterion is employed for choosing a single, definitive investment strategy from the Pareto front.
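The abstract does not give the formula for the modified Sharpe Ratio, so the following is only a hypothetical sketch of the post-optimisation selection step: score each Pareto-front strategy with a preference-weighted return-to-risk ratio and pick the best. The `risk_free` rate and `risk_pref` exponent are illustrative assumptions, not the paper's definition:

```python
def select_from_pareto(front, risk_free=0.0, risk_pref=1.0):
    """Pick one strategy from a Pareto front of (expected_return, risk)
    tuples using a Sharpe-like score. risk_pref > 1 penalises risk more
    heavily; risk_pref = 0 ignores risk entirely (pure return chase)."""
    def score(strategy):
        ret, risk = strategy
        return (ret - risk_free) / max(risk, 1e-12) ** risk_pref
    return max(front, key=score)
```

Varying `risk_pref` lets a trader slide between risk-averse and return-seeking choices on the same front, which mirrors the preference-based selection the abstract describes.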
{"title":"Multi-objective genetic programming-based algorithmic trading, using directional changes and a modified sharpe ratio score for identifying optimal trading strategies","authors":"Xinpeng Long, Michael Kampouridis, Tasos Papastylianou","doi":"10.1007/s10462-025-11390-9","DOIUrl":"10.1007/s10462-025-11390-9","url":null,"abstract":"<div><p>This study explores the integration of directional changes (DC), genetic programming (GP), and multi-objective optimisation (MOO) to develop advanced algorithmic trading strategies. Directional changes offer a dynamic, event-based approach to market analysis, identifying significant price movements and trends. Genetic programming evolves trading rules to discover effective and profitable strategies. However, financial trading presents a multi-objective challenge, balancing conflicting objectives such as returns and risk. We propose a novel algorithmic trading framework, termed MOO3, which integrates genetic programming with the NSGA-II multi-objective optimisation algorithm to optimise three fitness functions: total return, expected rate of return, and risk. While the use of NSGA-II itself is well-established, our contribution lies in how we apply it within a trading context that combines (i) directional changes, (ii) genetic programming with both DC-based and physical-time indicators, and (iii) a modified Sharpe Ratio for post-optimisation strategy selection based on trader preferences. Utilising indicators from both paradigms allows the GP algorithm to create profitable trading strategies, while the multi-objective fitness function allows it to simultaneously optimise for risk. A definitive strategy is chosen from Pareto-optimal solutions using the modified Sharpe Ratio, allowing traders to prioritise multiple objectives. Our methodology is tested on 110 stock datasets from 10 international markets, aiming to demonstrate that the multi-objective framework can yield superior trading strategies with lower risk. 
Results indicate that the MOO3 algorithm consistently and significantly outperforms single-objective optimisation (SOO) methods, even when the same SOO criterion is employed for choosing a single, definitive investment strategy from the Pareto front.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11390-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909002","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11374-9
Pankaj Kumari, Lavika Goel
Worldwide, lung cancer is a leading cause of cancer-related death, and early diagnosis is crucial for improving patient outcomes. This comprehensive survey examines the most recent developments in methods for detecting lung cancer from chest CT scan images. The study describes a broad variety of approaches, including machine learning methods such as random forests, support vector machines, logistic regression, and k-nearest neighbors, in addition to deep learning frameworks such as variational autoencoders, recurrent neural networks, convolutional neural networks, and generative adversarial networks. Additionally, the survey explores hybrid models that combine deep learning and machine learning with nature-inspired optimization techniques to enhance performance. The techniques discussed in this paper mainly focus on the diagnosis of non-small cell lung cancer (NSCLC), as it is the more prevalent form. The paper also reviews multiple advanced techniques used in the diagnosis of lung cancer, including 3D convolutional neural networks (3D-CNNs), multimodal logistic regression models, and cyclic variational autoencoders. It highlights key publicly available datasets frequently used in this research area, such as LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative), LUNA16 (Lung Nodule Analysis 2016), the Kaggle lung cancer dataset, NSCLC Radiogenomics, and the NIH (National Institutes of Health) chest X-ray database. The survey provides a detailed comparison of each technique, describing its advantages, limitations, and reported performance metrics, especially classification accuracy. Transfer learning with a Vision Transformer achieves the highest accuracy of 94.6%, while a 3D-CNN achieves 93.7%, both representing top performance on the applicable datasets. Furthermore, the survey demonstrates the potential of emerging techniques such as federated learning and explainable AI in addressing challenges of data privacy and model interpretability. Among the reviewed techniques, deep learning is the most extensively researched area in lung cancer diagnosis; it is not only widely used but also notably successful in identifying and categorizing lung cancer with a high degree of accuracy.
{"title":"Emerging computational intelligence based techniques for lung cancer diagnosis and classification on chest CT scan images: a comprehensive survey","authors":"Pankaj Kumari, Lavika Goel","doi":"10.1007/s10462-025-11374-9","DOIUrl":"10.1007/s10462-025-11374-9","url":null,"abstract":"<div><p>Worldwide lung cancer is a significant reason for death resulting from cancer with early diagnosis crucial for enhancing patient results. This comprehensive survey looks at the most recent developments in methods for detecting lung cancer by using chest CT scan images. The study describes a broad variety of approaches includes methods for machine learning such random forests support vector machines logistic regression and k-nearest neighbors in addition to deep learning frameworks such as variational autoencoders recurrent neural networks convolutional neural networks and generative adversarial networks. Additionally the survey explores hybrid models that combine deep learning and machine learning with nature-inspired optimization techniques to enhance performance. All the techniques discussed in this paper mainly focus on the diagnosis of NSCLC i.e. non-small cell lung cancer as it is more prevalent. The paper also reviews multiple advanced techniques used in diagnosis of lung cancer, including 3D-CNN i.e. Convolutional Neural Networks, multimodal logistic regression models and Cyclic Variational Autoencoders. It highlights key publicly available datasets frequently used in this research area such as LIDC-IDRI (Lung Image Database Consortium and Image Database Resource Initiative), LUNA16 (Lung Nodule Analysis 2016), the Kaggle lung cancer dataset, NSCLC Radiogenomics and the NIH (National Institutes of Health) chest X-ray database. This survey provides a detailed comparison of each technique, describing their advantages, limitations, and reported performance metrics, especially in terms of classification accuracy. 
Transfer learning with Vision Transformer achieves the highest accuracy of 94.6%, while 3D Convolutional Neural Network (3D -CNN) achieves an accuracy of 93.7%, both of which are showcasing highest performance on applicable datasets. Furthermore, the research demonstrates the potential of emerging techniques like federated learning and explainable AI in addressing challenges pertaining to data privacy and model interpretability. This survey paper reviews several techniques and finds that deep learning is the most extensively researched area in lung cancer diagnosis. This approach is not only widely used but also exhibits notable success in identifying and categorizing lung cancer with a high degree of accuracy.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11374-9.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11443-z
Ahmad Taheri, Keyvan RahimiZadeh, Jan Baumbach, Amin Beheshti, Olga Zolotareva, Mohammed Azmi Al-Betar, Seyedali Mirjalili, Amir H. Gandomi
A novel metaheuristic optimization algorithm, the Farthest better or Nearest worse Optimizer (FNO), is proposed in this paper. The idea behind FNO is derived from the qualities of, and distances between, agents' positions in a search space. The search process in FNO consists of two phases. In the first phase, the algorithm jumps over the nearest regions of lower potential to avoid local optima. In the second phase, it explores the farthest positions of higher potential to reach the global optimum. These operations enhance population diversity and give FNO opportunities to discover high-quality regions while avoiding low-quality ones. A structural component within FNO, called the Dynamic Focus Strategy (DFS), is also presented to control the exploration ratio: it applies a random vector as a coefficient to shrink the area around the farthest better positions throughout the search process. Several experimental studies were conducted on well-known benchmark suites comprising 45 benchmarks to assess the efficacy of FNO, and five engineering problems were used to evaluate its practical applicability. The Wilcoxon test, a well-known non-parametric statistical test, is used to compare results fairly. The findings indicate that FNO performs competitively against other state-of-the-art population-based metaheuristic algorithms on the tested problems.
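The paper's exact update equations are not reproduced in this abstract; purely to illustrate the two-phase idea (step toward the farthest better agent, jump away from the nearest worse agent, with a DFS-like shrink coefficient), here is a hypothetical one-dimensional sketch for a minimisation problem, not the authors' implementation:

```python
import random

def fno_step(pop, fitness, shrink=1.0):
    """Hypothetical one-generation sketch of the FNO idea (minimisation):
    each agent moves toward its farthest better neighbour and away from
    its nearest worse neighbour. `shrink` stands in for the Dynamic Focus
    Strategy coefficient that would narrow the search over time."""
    fits = [fitness(x) for x in pop]
    new_pop = []
    for i, x in enumerate(pop):
        better = [p for p, f in zip(pop, fits) if f < fits[i]]
        worse = [p for p, f in zip(pop, fits) if f > fits[i]]
        y = x
        if better:   # exploit: step toward the farthest better position
            fb = max(better, key=lambda p: abs(p - x))
            y = y + shrink * random.random() * (fb - x)
        if worse:    # explore: jump away from the nearest worse position
            nw = min(worse, key=lambda p: abs(p - x))
            y = y + random.random() * (x - nw)
        new_pop.append(y)
    return new_pop
```

In the real algorithm the positions are vectors and DFS shrinks the coefficient over iterations; this scalar version only shows how the "farthest better / nearest worse" attraction and repulsion combine in one update.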
{"title":"Farthest better or nearest worse optimizer: a novel metaheuristic algorithm","authors":"Ahmad Taheri, Keyvan RahimiZadeh, Jan Baumbach, Amin Beheshti, Olga Zolotareva, Mohammed Azmi Al-Betar, Seyedali Mirjalili, Amir H. Gandomi","doi":"10.1007/s10462-025-11443-z","DOIUrl":"10.1007/s10462-025-11443-z","url":null,"abstract":"<div><p>A novel metaheuristic optimization algorithm, namely the Farthest better or Nearest worse Optimizer (FNO) algorithm, is proposed in this paper. The idea behind the FNO algorithm is derived from the qualities and distances between agents’ positions in a search space. The process of searching in the FNO includes two phases. During the first phase of the FNO, it jumps over the nearest regions with lower potential to avoid local optima. In the second phase, the algorithm tries to explore the farthest positions with higher potential to reach or explore the global optimum. These operations aim to enhance population diversity and provide the FNO with opportunities to discover high-quality regions while avoiding low-quality regions. A structural component within FNO, called Dynamic Focus Strategy (DFS), is also presented for controlling the exploration ratio. The DFS applies a random vector as a coefficient to shrink the area around the farthest better positions throughout the search process. Several experimental studies have been conducted on well-known benchmark suites, comprising 45 benchmarks, to assess the efficacy of the FNO algorithm. Additionally, five engineering problems were used to evaluate the practical applicability of the proposed FNO algorithm. The Wilcoxon test, as a well-known non-parametric statistical test, is conducted to fairly compare results. 
The findings indicate that the FNO algorithm performs competitively against other state-of-the-art population-based metaheuristic algorithms on the tested problems.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11443-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146027026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2026-01-07. DOI: 10.1007/s10462-025-11354-z
Na Xu, Chaoxu Mu, Ke Wang, Liang Ma, Zhaoyang Liu
In this paper, we study a collaborative optimization scheduling approach for smart microgrids with a high proportion of renewable energy, achieving multi-energy management under centralized training with distributed execution. First, we construct a multi-agent distributed microgrid optimization model based on different types of renewable energy sources, energy storage, power exchange with the upper grid, and time-of-use electricity prices. Then, multiple long-term optimization objectives are designed to transform the cooperative scheduling problem into a multi-agent multi-objective optimization problem, addressing the challenges of dynamic optimization. To enhance the correlation of policy sampling, we propose a novel multi-objective generalized normal distribution optimization (MGNDO) algorithm. By updating the covariance matrix, the policy correlations between different agents are better captured, resulting in more cooperative action sequences. Compared with traditional action sampling methods, this approach better accommodates complex dynamic constraints and multi-objective requirements. Finally, a smart distribution network connected to three microgrids is taken as an example, and the cooperative optimal scheduling problem is solved using the proposed algorithm, the MADDPG algorithm, and the PSO algorithm, respectively. Operational cost and new-energy consumption are compared to further illustrate the effectiveness of the proposed approach.
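As a toy illustration of the cost terms such a scheduler might trade off (this is not the paper's model; the curtailment penalty and its weight are invented for the example), the operational cost of a candidate schedule under time-of-use prices could be computed as:

```python
def operational_cost(grid_exchange, tou_price, renewable_curtail, penalty=0.1):
    """Toy microgrid cost: energy bought from (positive) or sold to
    (negative) the upper grid, priced hour by hour at time-of-use rates,
    plus a hypothetical penalty on curtailed renewable energy, which
    stands in for the 'new-energy consumption' objective."""
    energy_cost = sum(e * p for e, p in zip(grid_exchange, tou_price))
    curtail_cost = penalty * sum(renewable_curtail)
    return energy_cost + curtail_cost
```

A multi-objective scheduler would treat the two terms as separate objectives rather than summing them; the scalar form here is only meant to show what each term measures.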
{"title":"Multi-agent generalized cooperative optimization scheduling for multi-energy complementarity in microgrids","authors":"Na Xu, Chaoxu Mu, Ke Wang, Liang Ma, Zhaoyang Liu","doi":"10.1007/s10462-025-11354-z","DOIUrl":"10.1007/s10462-025-11354-z","url":null,"abstract":"<div><p>In this paper, we study a collaborative optimization scheduling approach for high-proportion renewable energy smart microgrids to achieve multi-energy management in a distributed execution framework with centralized training. First, we construct a multi-agent distributed microgrid optimization model for this optimization problem based on different types of renewable energy sources, energy storage, power exchange with the upper grid, and time-of-use electricity prices. Then, multiple long-term optimization objectives are designed to transform the cooperative optimization scheduling problem into a multi-agent multi-objective optimization problem, addressing the challenges of dynamic optimization. To enhance the correlation of policy sampling, we propose a novel multi-objective generalized normal distribution optimization (MGNDO) algorithm. By updating the covariance matrix, the policy correlations between different agents are better captured, resulting in more cooperative action sequences. Compared to traditional action sampling methods, this approach can better accommodate complex dynamic constraints and multi-objective requirements. Finally, a smart distribution network connected to three microgrids is taken as an example to realize the cooperative optimal scheduling problem by using the proposed algorithm, MADDPG algorithm and PSO algorithm, respectively. 
Operational cost and new energy consumption are compared separately to further illustrate the effectiveness of the proposed approach.</p></div>","PeriodicalId":8449,"journal":{"name":"Artificial Intelligence Review","volume":"59 2","pages":""},"PeriodicalIF":13.9,"publicationDate":"2026-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1007/s10462-025-11354-z.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145909003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}