Pub Date: 2025-02-04 | DOI: 10.1016/j.asoc.2025.112840
Shuo Wang , Ziheng Li , Tianzuo Zhang , Mengqing Li , Liyao Wang , Jinglan Hong
An in-depth grasp of the complex relationship between climate and disease is crucial for fostering public health security. However, certain limitations, such as overlooking spatial heterogeneity and lag effects, persist when revealing this complex relationship. Taking tuberculosis (TB) as a case study, a pioneering space-distributed machine learning (SDML) framework is introduced to enhance TB prediction accuracy and unveil TB's complex relationship with climatic factors. Results demonstrate the nonlinear, intricate relationship between TB and climate variables, emphasizing the significant lag effect of climate variables on TB. Model comparisons show that SDML achieves a significant improvement in prediction, particularly in identifying lag effects. The average coefficient of determination of SDML (0.786) surpasses that of traditional machine learning (ML, 0.719). Using an interpretable ML method to identify the impact of climate variables on TB, this study reveals evident spatial heterogeneity in the response of TB to climate. This spatial heterogeneity in the effects of extreme climate on TB suggests regionalized prevention and control strategies for different regions. The study provides a novel perspective on comprehending the intricate relationship between TB and climate, showcasing the feasibility of artificial intelligence-assisted scientific discovery.
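The reported gain is measured with the coefficient of determination (R², averaging 0.786 for SDML versus 0.719 for traditional ML). As a minimal sketch of that metric, using hypothetical incidence values rather than the paper's data:

```python
def r_squared(y_true, y_pred):
    """Coefficient of determination: R^2 = 1 - SS_res / SS_tot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical quarterly TB incidence and two sets of predictions:
obs = [12.0, 15.0, 9.0, 11.0]
pred_regional = [11.5, 14.8, 9.6, 11.3]   # e.g. a model fitted per region
pred_pooled = [12.0, 12.0, 12.0, 12.0]    # e.g. one global mean predictor
```

A space-distributed scheme would compute such scores per spatial unit and average them; the pooled predictor above scores far lower because it ignores local variation.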
{"title":"Space-distributed machine learning based on climate lag effect: Dynamic prediction of tuberculosis","authors":"Shuo Wang , Ziheng Li , Tianzuo Zhang , Mengqing Li , Liyao Wang , Jinglan Hong","doi":"10.1016/j.asoc.2025.112840","DOIUrl":"10.1016/j.asoc.2025.112840","url":null,"abstract":"<div><div>An in-depth grasp of the complex relationship between climate and disease is crucial for fostering the development of public health security. However, certain limitations, such as overlooking spatial heterogeneity and lag effects, persist when revealing this complex relationship. With tuberculosis (TB) as a case, a pioneering space-distributed machine learning (SDML) framework is introduced to enhance TB prediction accuracy and unveil its complex relationship with climatic factors. Results demonstrate the nonlinear, intricate relationship between TB and climate variables, emphasizing the significant lag effect of climate variables on TB. Model comparisons demonstrate that SDML has a significant improvement in prediction, particularly in lag effect identification. The determination coefficient average of SDML (0.786) surpasses that of traditional machine learning (ML, 0.719). Utilizing an interpretable ML method to identify the impact of climate variables on TB, this study reveals evident spatial heterogeneity in the response of TB to climate. The spatial heterogeneity of the effects of extreme climate on TB suggests regionalized prevention and control strategies for diverse regions. 
This study provides a novel perspective on comprehending the intricate relationship between TB and climate, showcasing the feasibility of artificial intelligence-assisted scientific discovery.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"171 ","pages":"Article 112840"},"PeriodicalIF":7.2,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143316946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-04 | DOI: 10.1016/j.asoc.2025.112835
Huankun Sheng , Ying Li
As an efficient representation of objects, the 3D point cloud is increasingly prevalent in various application fields. However, raw point clouds captured from scanning devices often contain noise, which significantly impairs the performance of downstream tasks such as surface reconstruction and object recognition. Consequently, point cloud denoising has emerged as a crucial task in geometry modeling and processing. Although deep learning has proven effective in this domain, existing learning-based methods predominantly focus on local information and tend to neglect the non-local features inherent in 3D point clouds. In this paper, we propose a deep non-local point cloud denoising network, DnPCD-Net, to address this issue. DnPCD-Net consists of three key components: 1) a feature extraction module that extracts local features for each point; 2) a densely-connected Transformer module that captures long-range dependencies across the input point set and feature channels; and 3) a feature fusion module that adaptively combines local and non-local features. Extensive experiments on both synthetic and real-scanned datasets demonstrate that DnPCD-Net achieves superior denoising performance, with statistically significant improvements in Chamfer Distance and Earth Mover’s Distance, as well as better visual quality, confirming its effectiveness and robustness in practical applications.
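Chamfer Distance, one of the two evaluation metrics named above, can be sketched in plain Python. This is a naive O(n·m) illustration, not the paper's implementation; Earth Mover's Distance requires an optimal-transport solver and is omitted:

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer Distance between two point sets.

    a, b: lists of equal-dimension coordinate tuples. For each point in one
    set, find the squared distance to its nearest neighbour in the other set,
    average per set, and sum the two directions.
    """
    def sq_dist(p, q):
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

    def one_way(src, dst):
        return sum(min(sq_dist(p, q) for q in dst) for p in src) / len(src)

    return one_way(a, b) + one_way(b, a)
```

A denoising network is scored by the Chamfer Distance between its output and the clean ground-truth cloud; identical clouds score 0.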
{"title":"Deep non-local point cloud denoising network","authors":"Huankun Sheng , Ying Li","doi":"10.1016/j.asoc.2025.112835","DOIUrl":"10.1016/j.asoc.2025.112835","url":null,"abstract":"<div><div>As an efficient representation of objects, the 3D point cloud is increasingly prevalent in various application fields. However, raw point clouds captured from scanning devices often contain noise, which significantly impairs the performance of downstream tasks such as surface reconstruction and object recognition. Consequently, point cloud denoising has emerged as a crucial task in geometry modeling and processing. Although deep learning has proven effective in this domain, existing learning-based methods predominantly focus on local information and tend to neglect the non-local features inherent in 3D point clouds. In this paper, we propose a deep non-local point cloud denoising network, DnPCD-Net, to address this issue. DnPCD-Net consists of three key components: 1) a feature extraction module that extracts local features for each point; 2) a densely-connected Transformer module that captures long-range dependencies across the input point set and feature channels; and 3) a feature fusion module that adaptively combines local and non-local features. 
Extensive experiments on both synthetic and real-scanned datasets demonstrate that DnPCD-Net achieves superior denoising performance, with statistically significant improvements in Chamfer Distance and Earth Mover’s Distance, as well as better visual quality, confirming its effectiveness and robustness in practical applications.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"171 ","pages":"Article 112835"},"PeriodicalIF":7.2,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143316944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-04 | DOI: 10.1016/j.asoc.2025.112824
Necip Fazıl Karakurt , Selcuk Cebi
Companies aim to maximize profits by effectively designing mobile applications to promote their services in a competitive market. However, identifying the design features that significantly impact mobile applications is challenging due to their subjective nature. Traditional Kano approaches face limitations, such as information loss caused by considering only the most frequent values. To address these limitations, this study proposes a novel fuzzy Kano approach to better manage the subjectivity in human judgments and the uncertainty in user preferences. This approach uncovers hidden preference levels, accounts for uncertainties, resolves dual classification issues, compares membership degrees, and emphasizes subtle details that may otherwise be overlooked. The fuzzy Kano approach was applied to survey data from 100 participants, covering 33 mobile application features. By classifying these features, the fuzzy Kano model examined their influence on user satisfaction and quality perception. The results demonstrated the feasibility and effectiveness of the proposed method, identifying key features—such as Product Details, Order Management and Returns, and Product Opinions and Reviews—that, if absent, could lead to customer dissatisfaction. Additionally, the findings revealed significant differences between the fuzzy and traditional Kano models and highlighted variations in mobile application characteristics across different demographic groups, providing valuable insights for mobile application design.
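For context, the crisp Kano step that the fuzzy variant replaces can be sketched as follows: each respondent answers a functional and a dysfunctional question, the answer pair is mapped through the Kano evaluation table, and the feature takes the most frequent category — exactly the information-losing step the abstract criticizes. The table encoding below follows the common textbook layout, which may differ in detail from the paper's:

```python
from collections import Counter

# Rows: answer to the functional question; columns: answer to the
# dysfunctional question, in the order L(ike), M(ust-be), N(eutral),
# W(live-With), D(islike).
CATS = {"A": "Attractive", "O": "One-dimensional", "M": "Must-be",
        "I": "Indifferent", "R": "Reverse", "Q": "Questionable"}
ROWS = {"L": "QAAAO", "M": "RIIIM", "N": "RIIIM", "W": "RIIIM", "D": "RRRRQ"}
COLS = "LMNWD"

def classify(pair):
    """Map one respondent's (functional, dysfunctional) answers to a category."""
    functional, dysfunctional = pair
    return CATS[ROWS[functional][COLS.index(dysfunctional)]]

def classify_feature(responses):
    """Crisp Kano: keep only the most frequent category across respondents --
    the step the fuzzy model replaces with graded membership degrees."""
    return Counter(classify(r) for r in responses).most_common(1)[0][0]
```

For example, `classify_feature([("L", "D"), ("L", "D"), ("L", "N")])` discards the minority "Attractive" vote entirely, whereas a fuzzy Kano model would retain it as a membership degree.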
{"title":"A fuzzy Kano model proposal for sustainable product design: Mobile application feature analysis","authors":"Necip Fazıl Karakurt , Selcuk Cebi","doi":"10.1016/j.asoc.2025.112824","DOIUrl":"10.1016/j.asoc.2025.112824","url":null,"abstract":"<div><div>Companies aim to maximize profits by effectively designing mobile applications to promote their services in a competitive market. However, identifying the design features that significantly impact mobile applications is challenging due to their subjective nature. Traditional Kano approaches face limitations, such as information loss caused by considering only the most frequent values. To address these limitations, this study proposes a novel fuzzy Kano approach to better manage the subjectivity in human judgments and the uncertainty in user preferences. This approach uncovers hidden preference levels, accounts for uncertainties, resolves dual classification issues, compares membership degrees, and emphasizes subtle details that may otherwise be overlooked. The fuzzy Kano approach was applied to survey data from 100 participants, covering 33 mobile application features. By classifying these features, the fuzzy Kano model examined their influence on user satisfaction and quality perception. The results demonstrated the feasibility and effectiveness of the proposed method, identifying key features—such as Product Details, Order Management and Returns, and Product Opinions and Reviews—that, if absent, could lead to customer dissatisfaction. 
Additionally, the findings revealed significant differences between the fuzzy and traditional Kano models and highlighted variations in mobile application characteristics across different demographic groups, providing valuable insights for mobile application design.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"172 ","pages":"Article 112824"},"PeriodicalIF":7.2,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143437256","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-03 | DOI: 10.1016/j.asoc.2025.112809
Xin Wang , Zuoming Zhang , Luchen Li
In fields such as unmanned aerial vehicles (UAVs) and autonomous driving, monocular dense Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) allow devices to estimate their position and orientation in real time while constructing dense maps, relying solely on a single camera sensor. However, existing dense SLAM/VO systems often come with high computational costs and suffer from issues such as scale drift and reduced localization accuracy, making them less practical than their sparse counterparts. We present MVS-VIO, a novel dense monocular visual-inertial odometry system composed of two main components: real-time pose estimation and global Truncated Signed Distance Function (TSDF) reconstruction. The first component is LW-MVSNET, a lightweight multi-view depth estimation network that uses only three views and 68 depth hypotheses. Its adaptive view aggregation (AVA) and adaptive depth hypotheses (ADH) modules effectively reject inaccurate depth estimates, preventing significant error accumulation at runtime through an uncertainty mask. The second is a tightly-coupled optimization method leveraging a deep photometric error. To address the underutilization of information caused by delayed depth estimates, we incorporate a delayed marginalization strategy to optimize all variables. LW-MVSNET is trained on the Replica dataset and generalizes well to the TUM-RGBD and EuRoC datasets, and an ablation study further validates the effectiveness of our modules. Notably, on all real-world sequences of the EuRoC dataset, the proposed MVS-VIO system outperforms comparable dense monocular systems. It operates stably on all eleven sequences at 10.08 frames per second (FPS) and achieves an average absolute trajectory error (ATE) of 0.066 meters, representing state-of-the-art performance. 
This demonstrates that our method can reconstruct dense maps in real-time while maintaining a level of accuracy comparable to that of sparse systems.
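The reported ATE of 0.066 m is, in its simplest form, the RMSE of position differences between estimated and ground-truth trajectories. A minimal sketch, assuming the trajectories are already time-associated and rigidly aligned (a full evaluation would first apply an Umeyama-style alignment):

```python
import math

def ate_rmse(est, gt):
    """Absolute trajectory error as the RMSE of per-pose position error.

    est, gt: lists of 3D position tuples, already time-associated and
    expressed in a common, aligned frame.
    """
    assert len(est) == len(gt) and est, "trajectories must match in length"
    sq_errors = [sum((e - g) ** 2 for e, g in zip(pe, pg))
                 for pe, pg in zip(est, gt)]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

With this convention, a perfect trajectory scores 0, and the 0.066 m figure is the average of such RMSE values over the EuRoC sequences.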
{"title":"A tightly-coupled dense monocular Visual-Inertial Odometry system with lightweight depth estimation network","authors":"Xin Wang , Zuoming Zhang , Luchen Li","doi":"10.1016/j.asoc.2025.112809","DOIUrl":"10.1016/j.asoc.2025.112809","url":null,"abstract":"<div><div>In various fields such as unmanned aerial vehicles (UAVs) and autonomous driving, monocular dense Simultaneous Localization and Mapping (SLAM) and Visual Odometry (VO) allow devices of above mentioned fields to estimate their position and orientation in real-time while constructing dense maps, relying solely on a single camera sensor. However, existing solutions for dense SLAM/VO systems often come with high computational costs and lead to issues, such as scale drift and reduced localization accuracy, making them less practical than their sparse counterparts. We present MVS-VIO, a novel dense monocular visual inertial odometry system composed of two main components: real-time pose estimation and global Truncated Signed Distance Function (TSDF) reconstruction. The first component is LW-MVSNET, a lightweight multi-view depth estimation network that utilizes only three views and 68 depth hypotheses. The adaptive view aggregation (AVA) and adaptive depth hypotheses (ADH) modules can effectively reject inaccurate depth estimation results, preventing significant error accumulation during runtime by adopting an uncertainty mask. The second is a tightly-coupled optimization method leveraging a deep photometric error. To address the problem of underutilization of information due to a delayed generation of depth estimation, we incorporate a delayed marginalization strategy to optimize all the variables. LW-MVSNET is trained on the Replica dataset and performs good generalization on the TUM-RGBD and the EuRoC datasets, and the ablation study further validates the effectiveness of our modules. 
Notably, in all real-world sequences of the EuRoC dataset, our proposed MVS-VIO system outperforms comparable dense monocular systems. It operates stably in all eleven sequences at a rate of 10.08 frames per second (FPS), and achieves an average absolute trajectory error (ATE) of 0.066 meters, which represents state-of-the-art performance. This demonstrates that our method can reconstruct dense maps in real-time while maintaining a level of accuracy comparable to that of sparse systems.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"171 ","pages":"Article 112809"},"PeriodicalIF":7.2,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-03 | DOI: 10.1016/j.asoc.2025.112812
Nankai Lin , Hongyan Wu , Aimin Yang , Lianxi Wang
Emotion analysis for COVID-19 is a domain-specific task that helps scientific research institutions and governments track the emotional changes and trends of society during the epidemic. When introducing general-domain textual information, current techniques concentrate only on learning domain-invariant information to reduce domain discrepancy, but ignore making maximal use of domain-general information to address the scarcity of domain-specific data. Motivated by this, we develop a domain-adapted contrastive learning-based emotion classification model consisting of three modules: text representation, emotion identification, and domain adaptation. In this model, the text representation module obtains sentence representations, and the domain adaptation module then pulls together the representation spaces of domain-specific and domain-general data to overcome domain discrepancy, ultimately improving performance in the emotion identification module. To strengthen our model, we propose two different contrastive learning strategies in the domain adaptation module. Experimental results on SMP2020-EWECT show that our two strategies achieve F-values of 66.28% and 67.39%, respectively, significantly outperforming the baselines despite the scarcity of domain-specific data.
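The abstract does not give the form of the two contrastive strategies; a common starting point is an InfoNCE-style loss that pulls an anchor representation toward a positive sample and away from negatives. A hedged pure-Python sketch on fixed-length vectors:

```python
import math

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style contrastive loss: -log softmax of the positive pair.

    anchor, positive: representation vectors (ideally unit-normalised);
    negatives: list of vectors the anchor should be pushed away from.
    Lower loss = anchor closer to positive than to negatives.
    """
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))

    logits = [dot(anchor, positive) / temperature] + \
             [dot(anchor, n) / temperature for n in negatives]
    m = max(logits)  # log-sum-exp with max subtraction for stability
    log_norm = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_norm - logits[0]
```

In a domain-adaptation setting, positives could pair domain-specific with semantically related domain-general sentences, so minimizing the loss pulls the two representation spaces together; this pairing scheme is an assumption, not the paper's stated design.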
{"title":"Emotional classification in COVID-19: Analyzing Chinese microblogs with domain-adapted contrastive learning","authors":"Nankai Lin , Hongyan Wu , Aimin Yang , Lianxi Wang","doi":"10.1016/j.asoc.2025.112812","DOIUrl":"10.1016/j.asoc.2025.112812","url":null,"abstract":"<div><div>Emotion analysis for COVID-19 is a domain-specific task, such as the epidemic, which plays a significant part in scientific research institutions and governments to track the emotional changes and trends of society. When introducing general domain textual information, currently used techniques just concentrate on learning the domain-invariant information to reduce domain discrepancy but ignore the maximum use of domain-general information to solve the problem of domain-specific data scarcity. As a result of this inspiration, we develop a domain-adapted contrastive learning-based emotion classification model, which consists of three modules: text representation, emotion identification, and domain adaptation. In this model, the text representation module is used to obtain a representation of sentences, and then the domain adaptation module is employed to pull the representation space of domain-specific data and domain-general data to overcome domain discrepancy and ultimately achieve better performance in the emotion identification module. To fortify our model, we propose two different contrastive learning strategies in the domain adaptation module. Experimental results on the SMP2020-EWECT show that our two strategies achieve F-values of 66.28% and 67.39% respectively, which significantly outperform the baselines despite the scarcity of domain-specific data. 
Interpretability analysis further demonstrates that the model employing domain-adapted contrastive learning can better understand domain text emotions.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"171 ","pages":"Article 112812"},"PeriodicalIF":7.2,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143316945","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | DOI: 10.1016/j.asoc.2025.112737
Weixiong Jiang , Kaiwei Yu , Jun Wu , Tianjiao Dai , Haiping Zhu
Rotating machinery fault diagnosis plays a crucial role in industrial applications. However, existing methods face tremendous challenges in dealing with nonlinear noisy signals and intricate simultaneous-fault scenarios. To address this issue, a novel compound fault diagnosis method is proposed using a redefined signal quality indicator (RSQI) and a parallel ensemble network. The RSQI is devised to eliminate noise components while balancing noise reduction and signal fidelity. By further exploiting light gradient boosting machines (LGBM), a parallel ensemble network containing two heterogeneous LGBMs is constructed: one identifies the number of faults, and the other recognizes the single- or simultaneous-fault scenario. The proposed network adapts to the nature of the problem without user intervention for empirical threshold selection, and the two heterogeneous LGBMs execute concurrently to respond to diagnostic tasks in real time. Finally, two experimental studies are conducted to validate the proposed method. Experimental results evaluated with five multi-criteria decision-making (MCDM) methods indicate that the proposed method is competitive in classification performance and algorithm robustness.
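The abstract does not specify the RSQI formula. As a purely illustrative stand-in for the idea of scoring a denoiser by balancing noise reduction against signal fidelity (assuming, only for scoring, that a clean reference signal is available):

```python
import math

def quality_indicator(raw, denoised, reference, alpha=0.5):
    """Illustrative quality score -- NOT the paper's RSQI, whose formula
    the abstract does not give. Lower is better: a weighted sum of the
    residual error to the clean reference (noise left in) and the
    deviation from the raw measurement (over-smoothing guard)."""
    def rmse(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)) / len(a))

    return alpha * rmse(denoised, reference) + (1 - alpha) * rmse(denoised, raw)
```

The `alpha` weight is the tuning knob for the noise-reduction/fidelity trade-off the abstract describes; `alpha`, `raw`, `denoised`, and `reference` are all names introduced here for illustration.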
{"title":"Self-adaptive single and simultaneous fault diagnosis for rotating machinery via redefined signal quality indicator and parallel ensemble network","authors":"Weixiong Jiang , Kaiwei Yu , Jun Wu , Tianjiao Dai , Haiping Zhu","doi":"10.1016/j.asoc.2025.112737","DOIUrl":"10.1016/j.asoc.2025.112737","url":null,"abstract":"<div><div>Rotating machinery fault diagnosis plays a crucial role in industrial applications. However, existing methods face tremendous challenges in dealing with nonlinear noisy signals and intricate simultaneous-fault scenario. Dedicated to address this issue, a neoteric compound fault diagnosis method is proposed by using redefined signal quality indicator (RSQI) and parallel ensemble network. In this paper, RSQI is devised to eliminate noise components, and it can balance the noise reduction and signal fidelity. By further exploring the functionality of light gradient boosting machines (LGBM), parallel ensemble network containing two heterogeneous LGBMs is constructed. One is used to identify fault numbers, and the other is used for the single or simultaneous-fault scenario recognition. The proposed network is self-adaptive to the precious nature of the issue without user intervention for empirical threshold decision, and the two heterogeneous LGBMs can concurrently execute for responding to the diagnostic task in real time. Finally, two experimental studies are conducted to validate the proposed method. 
The experimental results of five multi-criteria decision-making (MCDM) methods indicate that the proposed method is competitive in the classification performance and algorithm robustness.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"170 ","pages":"Article 112737"},"PeriodicalIF":7.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143213277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | DOI: 10.1016/j.asoc.2025.112813
Eilen García Rodríguez , Enrique Reyes Archundia , José A. Gutiérrez Gnecchi , Arturo Méndez Patiño , Marco V. Chávez Báez , Oscar I. Coronado Reyes , Néstor F. Guerrero Rodríguez
The transition from conventional energy systems to decentralized generation based on renewable energy sources presents significant challenges. Sophisticated devices are required to monitor and manage the real-time flow and quality of energy. These tools require efficient algorithms that minimize computational complexity, particularly for real-time applications. This work proposes a novel, computationally efficient methodology for the real-time detection and classification of seven types of power quality disturbances (PQDs) based on Multiresolution Analysis of the Discrete Wavelet Transform (MRA-DWT) and feature extraction methods such as RMS and Logarithmic Energy Entropy. The extracted distinctive feature vector, consisting of seven elements, serves as input to a classifier based on a Feed Forward Neural Network (FFNN). The classifier identifies the type of disturbance in 8.30 microseconds, achieving classification accuracies of 97.7% with synthetic data and 98.57% with real data obtained from an arbitrary waveform generator. The proposed algorithm was implemented on the Pynq-Z1 board from Xilinx using Vitis IDE and enables online acquisition and feature extraction from approximation and detail coefficients across five levels of DWT decomposition. The system processes data within times shorter than the sampling period, remaining within 10% of the maximum processing speed required for a 10 kHz sampling rate. Its fully sequential operation avoids storing input signals or DWT coefficients. A detailed system performance analysis was also conducted, evaluating each input sample’s acquisition and processing times. The study considered 2000 samples obtained from the laboratory, demonstrating the system’s effectiveness for online and real-time applications.
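The two feature extractors named above have standard definitions: the RMS of a coefficient band, and the "log energy" entropy commonly defined for wavelet coefficients as the sum of log squared magnitudes. A minimal sketch, applied per decomposition band (the exact seven-element vector layout is the paper's, not reproduced here):

```python
import math

def rms(band):
    """Root-mean-square of one band of DWT coefficients."""
    return math.sqrt(sum(c * c for c in band) / len(band))

def log_energy_entropy(band, eps=1e-12):
    """'Log energy' entropy: sum of log(c^2), with eps guarding log(0).

    This is the common wavelet log-energy definition; the paper may use a
    variant (e.g. normalised energies).
    """
    return sum(math.log(c * c + eps) for c in band)

def feature_vector(bands):
    """Example: one RMS and one entropy value per coefficient band."""
    return [f(b) for b in bands for f in (rms, log_energy_entropy)]
```

Each band here would be one set of approximation or detail coefficients from the five-level MRA-DWT decomposition, and the resulting vector feeds the FFNN classifier.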
{"title":"Methodology for online detection and classification of power quality disturbances based on FPGA","authors":"Eilen García Rodríguez , Enrique Reyes Archundia , José A. Gutiérrez Gnecchi , Arturo Méndez Patiño , Marco V. Chávez Báez , Oscar I. Coronado Reyes , Néstor F. Guerrero Rodríguez","doi":"10.1016/j.asoc.2025.112813","DOIUrl":"10.1016/j.asoc.2025.112813","url":null,"abstract":"<div><div>The transition from conventional energy systems to decentralized generation based on renewable energy sources presents significant challenges. Sophisticated devices are required to monitor and manage the real-time flow and quality of energy. These tools require efficient algorithms that minimize computational complexity, particularly for real-time applications. This work proposes a novel, computationally efficient methodology for the real-time detection and classification of seven types of power quality disturbances (PQDs) based on Multiresolution Analysis of the Discrete Wavelet Transform (MRA-DWT) and feature extraction methods such as RMS and Logarithmic Energy Entropy. The extracted distinctive feature vector, consisting of seven elements, serves as input to a classifier based on a Feed Forward Neural Network (FFNN). The classifier identifies the type of disturbance in 8.30 microseconds, achieving classification accuracies of 97.7% with synthetic data and 98.57% with real data obtained from an arbitrary waveform generator. The proposed algorithm was implemented on the Pynq-Z1 board from Xilinx using Vitis IDE and enables online acquisition and feature extraction from approximation and detail coefficients across five levels of DWT decomposition. The system processes data within times shorter than the sampling period, remaining within 10% of the maximum processing speed required for a 10 kHz sampling rate. Its fully sequential operation avoids storing input signals or DWT coefficients. 
A detailed system performance analysis was also conducted, evaluating each input sample’s acquisition and processing times. The study considered 2000 samples obtained from the laboratory, demonstrating the system’s effectiveness for online and real-time applications.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"171 ","pages":"Article 112813"},"PeriodicalIF":7.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143316637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-02-01 | DOI: 10.1016/j.asoc.2024.112662
Seyedeh Asra Ahmadi , Peiman Ghasemi
In the field of economics and financial markets, optimal asset allocation strategies are essential for investor satisfaction and success. This paper delves into the complex landscape of multi-period portfolio selection, where the objective is to maximize wealth while minimizing investment risk. The core challenge of this research lies in addressing the complexity and uncertainty inherent in multi-period portfolio selection under stochastic conditions. The study introduces a framework for multi-period portfolio selection, considering N risky assets over T time periods. Stochastic return rates are modeled using a stochastic distribution, with the objective of maximizing wealth under risk constraints. The study presents an empirical case study involving the S&P500 market index, demonstrating the applicability of the proposed approach. Utilizing a random forest model, the paper predicts future returns, incorporating these predictions into a deterministic model via chance constraints. The contributions of the paper are substantial and multifaceted. Firstly, it introduces bankruptcy constraints, providing a more realistic approach to portfolio optimization and addressing an often-overlooked aspect of financial modeling. Secondly, transaction costs, a critical consideration in real-world scenarios, are integrated into the model, significantly enhancing the accuracy and practical relevance of portfolio optimization strategies. Thirdly, uncertainty management is rigorously tackled through stochastic approaches, ensuring the development of robust strategies that can accommodate varying market conditions. The paper also introduces risk-adjusted performance measures, enabling more informed decision-making by considering both risk and returns. Innovatively, this paper employs the Random Forest technique to predict return rates, thereby substantially enhancing the precision of investment predictions. 
Additionally, the Root System Growth Algorithm adds a heuristic dimension to problem-solving, effectively bridging the gap between computational and solution efficiency. The findings highlight the pivotal role of optimal allocation strategies in mitigating investment risks. The proposed approach yields impressive final wealth values and consistently performs well across different risk levels.
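The core wealth recursion of a multi-period portfolio with proportional transaction costs can be sketched as follows. This is an illustrative form only, not the paper's exact chance-constrained model, and the weights, returns, and `cost_rate` below are hypothetical:

```python
def wealth_path(w0, weights_per_period, returns_per_period, cost_rate=0.001):
    """Multi-period wealth recursion with proportional transaction costs.

    weights_per_period[t][i]: fraction of wealth held in asset i in period t.
    returns_per_period[t][i]: simple return of asset i over period t.
    cost_rate: proportional cost charged on turnover when rebalancing
    (the initial move out of cash also counts as turnover here).
    """
    w = w0
    prev = [0.0] * len(weights_per_period[0])  # start fully in cash
    path = [w]
    for wts, rets in zip(weights_per_period, returns_per_period):
        turnover = sum(abs(a - b) for a, b in zip(wts, prev))
        w *= 1.0 - cost_rate * turnover                    # pay rebalancing cost
        w *= 1.0 + sum(x * r for x, r in zip(wts, rets))   # earn portfolio return
        prev = wts
        path.append(w)
    return path
```

A chance-constrained model would restrict each period's weights so that the probability of wealth falling below a bankruptcy threshold stays under a chosen level, with the return forecasts supplied by the random forest.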
{"title":"A multi period portfolio optimization: Incorporating stochastic predictions and heuristic algorithms","authors":"Seyedeh Asra Ahmadi , Peiman Ghasemi","doi":"10.1016/j.asoc.2024.112662","DOIUrl":"10.1016/j.asoc.2024.112662","url":null,"abstract":"<div><div>In the field of economics and financial markets, optimal asset allocation strategies are essential for investor satisfaction and success. This paper delves into the complex landscape of multi-period portfolio selection, where the objective is to maximize wealth while minimizing investment risk. The core challenge of this research lies in addressing the complexity and uncertainty inherent in multi-period portfolio selection under stochastic conditions. The study introduces a framework for multi-period portfolio selection, considering <span><math><mi>N</mi></math></span> risky assets over <span><math><mi>T</mi></math></span> time periods. Stochastic return rates are modeled using a stochastic distribution, with the objective of maximizing wealth under risk constraints. The study presents an empirical case study involving the S&P500 market index, demonstrating the applicability of the proposed approach. Utilizing a random forest model, the paper predicts future returns, incorporating these predictions into a deterministic model via chance constraints. The contributions of the paper are substantial and multifaceted. Firstly, it introduces bankruptcy constraints, providing a more realistic approach to portfolio optimization and addressing an often-overlooked aspect of financial modeling. Secondly, transaction costs, a critical consideration in real-world scenarios, are integrated into the model, significantly enhancing the accuracy and practical relevance of portfolio optimization strategies. Thirdly, uncertainty management is rigorously tackled through stochastic approaches, ensuring the development of robust strategies that can accommodate varying market conditions. 
The paper also introduces risk-adjusted performance measures, enabling more informed decision-making by considering both risk and returns. Innovatively, this paper employs the Random Forest technique to predict return rates, thereby substantially enhancing the precision of investment predictions. Additionally, the Root System Growth Algorithm adds a heuristic dimension to problem-solving, effectively bridging the gap between computational and solution efficiency. The findings highlight the pivotal role of optimal allocation strategies in mitigating investment risks. The proposed approach yields impressive final wealth values and consistently performs well across different risk levels.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"170 ","pages":"Article 112662"},"PeriodicalIF":7.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143212925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
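The abstract above describes a multi-period wealth recursion with transaction costs. Below is a minimal sketch of that recursion under simple assumptions (proportional cost on rebalancing turnover, fixed target weights per period); the function name and cost model are illustrative, not the authors' formulation, and the stochastic prediction and chance-constraint machinery are omitted.

```python
def simulate_wealth(returns, weights, w0=1.0, cost=0.001):
    """Multi-period wealth recursion with proportional transaction costs.

    returns : T x N list of per-period asset returns
    weights : T x N list of per-period target weights (each row sums to 1)
    cost    : proportional cost charged on rebalancing turnover (assumed model)
    """
    wealth = w0
    prev_w = [0.0] * len(weights[0])  # start fully in cash
    path = []
    for r_t, w_t in zip(returns, weights):
        # charge a proportional cost on the turnover needed to hit the targets
        turnover = sum(abs(a - b) for a, b in zip(w_t, prev_w))
        wealth -= cost * turnover * wealth
        # apply the portfolio return for the period
        port_ret = sum(w * r for w, r in zip(w_t, r_t))
        wealth *= 1.0 + port_ret
        prev_w = w_t
        path.append(wealth)
    return path
```

In the full model, predicted returns (e.g. from a random forest) would replace the realized `returns`, and the weight choice each period would come from the chance-constrained optimizer rather than being given.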
Pub Date : 2025-02-01 DOI: 10.1016/j.asoc.2024.112681
Ruxuan Ding, Shengbo Hu, Zehua Xing, Tingting Yan
The rapid evolution of UAV swarm technology has made it increasingly important in modern and future warfare. The low-small-slow characteristics of UAV swarms pose significant challenges to ground and air defense systems, especially in complex terrain. A critical issue in this context is the defense coverage provided by multi-type radar systems, which is influenced by factors such as the deployment environment and the deployment algorithm. Despite its importance, this topic has received limited attention in existing research. To address this gap, we first model radar coverage under terrain constraints and define two metrics: the space coverage ratio and the height-level coverage ratio. These metrics are used to evaluate the effectiveness of multi-type radar coverage under complex terrain. We also formulate the optimization problem for multi-type radar deployment by dividing the defense region into the alert area (AA) and the priority detection area (PDA). Then, we propose a novel Fireworks Algorithm (FWA), named DPP-FWA, which incorporates the Determinantal Point Process (DPP) into its selection strategy for fireworks. This approach balances the quality and diversity of the fireworks, enhancing the effectiveness of the selection process. Finally, simulation results based on ten benchmark functions and multi-type radar deployment scenarios indicate that the proposed DPP-FWA outperforms the Enhanced Fireworks Algorithm (EFWA), Particle Swarm Optimization, and the Genetic Algorithm in terms of stability, convergence speed, and accuracy when using SRTM (Shuttle Radar Topography Mission) terrain data. Notably, the results demonstrate that DPP-FWA achieves a high defense coverage ratio (>90%). Furthermore, the analysis reveals that the algorithm’s complexity is acceptable. 
In conclusion, the proposed DPP-FWA effectively meets the UAV swarm defense coverage requirements for multi-type radar deployment under complex terrain, providing a valuable foundation for such defense strategies.
{"title":"Multi-type radar deployment for UAV swarms defense coverage using Firework Algorithm with Determinantal Point Processes under complex terrain","authors":"Ruxuan Ding, Shengbo Hu, Zehua Xing, Tingting Yan","doi":"10.1016/j.asoc.2024.112681","DOIUrl":"10.1016/j.asoc.2024.112681","url":null,"abstract":"<div><div>The rapid evolution of UAV swarm technology has been becoming increasingly important in modern and future warfare. The low-small-slow characteristics of UAV swarms pose significant challenges to ground and air defense systems, especially in complex terrain conditions. A critical issue in this context is the defense coverage provided by multi-type radar systems, which is influenced by factors such as deployment environments, deployment algorithms, etc. Despite its importance, this topic has received limited attention in existing research. To address this gap, we first model radar coverage based on terrain constraints and define two metrics, the space coverage ratio and the height-level coverage ratio. These metrics are used to evaluate the effectiveness of multi-type radar coverage under complex terrain. Also, we present the optimization problem for multi-type radar deployment by dividing the defense region into the alert area (AA) and the priority detection area (PDA). Then, we propose a novel Fireworks Algorithm (FWA), named DPP-FWA, which incorporates the Determinantal Point Process (DPP) in its selection strategy for fireworks. This approach balances the quality and diversity of the fireworks, enhancing the effectiveness of the selection process. Finally, simulation results based on ten benchmark functions and multi-type radar deployment scenarios indicate that the proposed DPP-FWA outperforms the Enhanced Fireworks Algorithm (EFWA), Particle Swarm Optimization, and Genetic Algorithm in terms of stability, convergence speed, and accuracy when using SRTM (Shuttle Radar Topography Mission) terrain data. 
Notably, the results demonstrate that DPP-FWA achieves a high defense coverage ratio (<span><math><mo>></mo></math></span>90%). Furthermore, the analysis reveals that the algorithm’s complexity is acceptable. In conclusion, the proposed DPP-FWA effectively meets the UAV swarm defense coverage requirements for multi-type radar deployment under complex terrain, providing a valuable foundation for such defense strategies.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"170 ","pages":"Article 112681"},"PeriodicalIF":7.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143212931","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
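The core idea in DPP-FWA's selection step is balancing quality against diversity. A generic way to sketch that is greedy MAP selection under a quality-diversity DPP kernel, L[i][j] = q_i * q_j * sim(i, j): the determinant of a selected submatrix grows with high fitness and shrinks when picks are near-duplicates. The kernel form and greedy scheme below are standard DPP illustrations, not the paper's exact selection operator.

```python
import math

def det(m):
    # determinant by Gaussian elimination with partial pivoting
    a = [row[:] for row in m]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def greedy_dpp_select(quality, positions, k, scale=1.0):
    """Greedily pick k items maximizing det(L_S) for the kernel
    L[i][j] = q_i * q_j * exp(-||x_i - x_j||^2 / scale)."""
    n = len(quality)
    L = [[quality[i] * quality[j] *
          math.exp(-sum((a - b) ** 2
                        for a, b in zip(positions[i], positions[j])) / scale)
          for j in range(n)] for i in range(n)]
    chosen = []
    for _ in range(k):
        best, best_gain = None, -1.0
        for i in range(n):
            if i in chosen:
                continue
            idx = chosen + [i]
            sub = [[L[r][c] for c in idx] for r in idx]
            g = det(sub)
            if g > best_gain:
                best, best_gain = i, g
        chosen.append(best)
    return chosen
```

For example, given two equally fit fireworks sitting almost on top of each other and a weaker one far away, the determinant criterion passes over the near-duplicate in favor of the distant point, which is exactly the quality-diversity trade-off the abstract attributes to the DPP selection strategy.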
Pub Date : 2025-02-01 DOI: 10.1016/j.asoc.2024.112655
Ha-Bang Ban, Dang-Hai Pham
The Clustered Traveling Repairman Problem (cTRP) is an extended variant of the Traveling Repairman Problem (TRP) in which customers are grouped into clusters that must be visited contiguously. However, the problem has not yet been studied in post-disaster contexts, where two additional constraints arise. First, the repairman requires additional time to remove debris, which adds debris removal time to the travel cost. Second, vertices in each cluster have varying priorities depending on their importance, with higher-priority vertices offering greater benefits when reached. This paper addresses these challenges by first defining the problem in post-disaster scenarios and then introducing a novel metaheuristic, TS-MMP, based on Multitasking Multipopulation Optimization (MMPO). This approach enables concurrent and independent task execution by integrating Randomized Neighborhood Search (RNVS), Tabu Search (TS), and dynamic knowledge sharing to improve problem-solving efficiency. In TS-MMP, the dynamic knowledge transfer mechanism ensures diversification, while TS and RNVS enhance intensification. Tabu lists prevent the search from revisiting previously explored regions of the solution space. As a result, TS-MMP achieves superior solutions compared to other algorithms. Empirical results demonstrate that instances with up to 30 vertices can be solved to optimality using both the proposed formulation and TS-VNS-MMP. Moreover, TS-VNS-MMP provides high-quality solutions within a reasonable time for larger instances, confirming its efficiency.
{"title":"A multi-population multi-tasking Tabu Search with Variable Neighborhood Search algorithm to solve post-disaster clustered repairman problem with priorities","authors":"Ha-Bang Ban, Dang-Hai Pham","doi":"10.1016/j.asoc.2024.112655","DOIUrl":"10.1016/j.asoc.2024.112655","url":null,"abstract":"<div><div>The Clustered Traveling Repairman Problem (cTRP) is an extended variant of the Traveling Repairman Problem (TRP), where customers are grouped into clusters that must be visited contiguously. However, the problem in post-disaster contexts has not yet been considered under the following constraints. First, the repairman requires additional time to remove debris, which adds debris removal time to the travel cost. Second, vertices in each cluster have varying priorities depending on their importance, with higher-priority vertices offering greater benefits when reached. This paper addresses these challenges by first defining the problem in post-disaster scenarios and then introducing a novel metaheuristic, TS-MMP, based on Multitasking Multipopulation Optimization (MMPO). This approach enables concurrent and independent task execution by integrating Randomized Neighborhood Search (RNVS), Tabu Search (TS), and dynamic knowledge sharing to improve problem-solving efficiency. In TS-MMP, the dynamic knowledge transfer mechanism ensures diversification, while TS and RNVS enhance intensification capabilities. Tabu lists prevent the search process from revisiting previously explored solution spaces. As a result, TS-MMP achieves superior solutions compared to other algorithms. Empirical results demonstrate that optimal solutions for instances with up to 30 vertices can be solved exactly using both the proposed formulation and TS-VNS-MMP. 
Moreover, TS-VNS-MMP provides high-quality solutions within a reasonable time for larger instances, confirming its impressive efficiency.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"170 ","pages":"Article 112655"},"PeriodicalIF":7.2,"publicationDate":"2025-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143213065","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
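Two ingredients of the abstract above can be sketched generically: the repairman (latency) objective, where earlier-visited customers weigh more, and a tabu list that blocks recently applied moves from being undone. This is a bare-bones tabu search over swap moves on a single route; it omits the clusters, priorities, debris removal times, and the multitasking multipopulation layer of TS-MMP, and the names and tenure scheme are illustrative.

```python
import itertools

def total_latency(route, dist):
    """Sum of arrival times (repairman objective); node 0 is the depot."""
    t, total, prev = 0.0, 0.0, 0
    for v in route:
        t += dist[prev][v]
        total += t
        prev = v
    return total

def tabu_search(dist, iters=200, tenure=5):
    n = len(dist) - 1
    route = list(range(1, n + 1))           # start from the identity order
    best, best_cost = route[:], total_latency(route, dist)
    tabu = {}                               # move -> last iteration it is forbidden
    for it in range(iters):
        candidates = []
        for i, j in itertools.combinations(range(n), 2):
            if tabu.get((i, j), -1) >= it:
                continue                    # skip tabu moves (no aspiration, for brevity)
            cand = route[:]
            cand[i], cand[j] = cand[j], cand[i]
            candidates.append((total_latency(cand, dist), (i, j), cand))
        if not candidates:
            break                           # whole neighborhood is tabu
        cost, move, cand = min(candidates)
        route = cand
        tabu[move] = it + tenure            # forbid reversing this swap for `tenure` iters
        if cost < best_cost:
            best, best_cost = cand[:], cost
    return best, best_cost
```

Note that the best-improvement step accepts the least-bad non-tabu move even when it worsens the current route; the tabu list is what lets the search climb out of local optima without cycling straight back.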