Pub Date: 2026-01-21. DOI: 10.1016/j.asoc.2026.114668
Sanjai Pathak, Ashish Mani, Amlan Chatterjee
Many real-world instances of the dynamic multi-criteria traveling salesman problem (DMC-TSP) present unpredictable challenges, making them complex dynamic optimization problems. These challenges arise from factors such as the addition of new locations, fluctuating travel times and costs due to changing traffic or weather conditions, or cancellations of pre-scheduled stops, requiring real-time adaptive optimization strategies. While ant colony optimization (ACO) has proven effective for static optimization problems, existing ACO variants have limited adaptability for dynamic multi-criteria environments. To address this significant gap, we introduce a generic test problem generator explicitly for DMC-TSP, capable of creating benchmark problems with known global-optimal solutions. We then propose a novel self-adaptive ant optimization (SAAO) algorithm tailored for DMC-TSP, integrating two new edge assembly crossover operators and a self-adaptive local search operator, explicitly designed to mitigate local-optima stagnation observed in traditional swarm algorithms. Our method demonstrates an effective balance between exploration and exploitation. Comprehensive comparative experiments against four state-of-the-art algorithms, supported by rigorous statistical validation, confirm that the proposed method significantly outperforms existing techniques—achieving a 15–22% improvement in offline performance over baseline algorithms—in terms of adaptability, robustness, and overall efficiency in solving DMC-TSPs.
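The ACO machinery this abstract builds on can be sketched as a construct/evaporate/deposit loop. The sketch below is a minimal classical ant-colony optimizer for a static symmetric TSP, not the proposed SAAO algorithm: the paper's edge assembly crossover and self-adaptive local search operators are not reproduced here, and all parameter values and the toy distance matrix are illustrative.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour over a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def aco_tsp(dist, n_ants=10, n_iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Minimal classical ACO for a symmetric TSP distance matrix."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]              # pheromone trails
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            start = rng.randrange(n)
            tour, unvisited = [start], set(range(n)) - {start}
            while unvisited:
                i = tour[-1]
                cand = sorted(unvisited)
                # transition weights: pheromone^alpha * (1/distance)^beta
                weights = [(tau[i][j] ** alpha) * ((1.0 / dist[i][j]) ** beta)
                           for j in cand]
                j = rng.choices(cand, weights=weights)[0]
                tour.append(j)
                unvisited.remove(j)
            tours.append((tour_length(tour, dist), tour))
        for i in range(n):                            # evaporation
            for j in range(n):
                tau[i][j] *= 1.0 - rho
        for length, tour in tours:                    # deposit: shorter tour, more pheromone
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a][b] += 1.0 / length
                tau[b][a] += 1.0 / length
        it_len, it_tour = min(tours)
        if it_len < best_len:
            best_len, best_tour = it_len, it_tour
    return best_tour, best_len
```

On a four-city unit square this loop recovers the perimeter tour of length 4; a dynamic variant would additionally reset or smooth `tau` whenever the instance changes.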
Title: Adaptive ant colony optimization with crossover-guided search for dynamic multi-criteria traveling salesman problem. Applied Soft Computing, Vol. 191, Article 114668.
Acoustic surveillance of oceans is of great interest for monitoring both wildlife and anthropogenic activities. In this paper, we focus on passive, non-communicating sonar systems that are suitable for efficiently monitoring the marine environment. With such a network, no live tracking is possible; instead, we aim to record activities in the most discreet manner for surveillance purposes. General guidelines can be established to assist in designing a network of underwater acoustic sensors. These guidelines serve as both objectives and constraints, such as sensor detection capabilities, deployment area alternatives, acoustic propagation properties, and potential noise source areas.
The optimization of network design, in terms of the distribution and locations of sensors (or groups of sensors physically stacked on a mooring line, for example, and referred to as anchors), involves trade-offs between network cost, coverage of the targeted marine area, network redundancy, and localization capabilities. In this paper, we propose several approaches to address these various constraints and optimization objectives. These approaches rely on numerical simulations of underwater acoustic propagation, which allow for accounting for environmental characteristics such as bathymetry, sea floor properties, and sound speed profiles.
A discretized model of the network design problem is presented. It relies on a grid representation of anchor locations, and a complete Integer Programming formulation of the problem is defined. The model is used to identify sets of solutions that represent trade-offs among the different optimization criteria. Both exact and heuristic multi-objective approaches are proposed to exploit the model. The former are applicable to smaller sets of instances and optimization objectives than the latter. However, the exact approaches enable the derivation of precise bounds on the optimized criteria, as well as initial solutions that can enhance heuristic multi-objective methods based on evolutionary multi-objective optimization frameworks. This supports the designer’s choice of the final network configuration.
Numerical experiments conducted on a set of sixteen semi-synthetic test cases, as well as on an actual network deployed at sea, demonstrate that an exact ϵ-constraint method is applicable to industrial-scale instances involving up to 2000 monitoring points. It provides the complete Pareto set for the coverage vs. cost problem in a few minutes. One of the proposed heuristic methods yields, in less than a second, sets of solutions with a loss of quality below 1% for the same design problem. When the triangulation capabilities of the network are optimized as a complementary objective for noise source localization, hybrid heuristic approaches are able to improve the results of the basic algorithm by up to 156% on average.
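The ϵ-constraint idea mentioned above, specialized to coverage vs. cost, can be sketched in a few lines: repeatedly maximize coverage under a cost budget, then tighten the budget past the cost of the optimum just found. The brute-force subset enumeration below is an illustrative stand-in for the paper's integer-programming subproblem and assumes integer costs; the function and argument names are this sketch's own.

```python
from itertools import combinations

def pareto_by_eps_constraint(candidates, coverage_of, cost_of):
    """Enumerate the coverage-vs-cost Pareto front with an epsilon-constraint sweep.

    candidates  : selectable items (e.g. candidate anchor positions)
    coverage_of : subset -> number of covered points (maximized)
    cost_of     : subset -> integer deployment cost (minimized)
    """
    subsets = [frozenset(c) for r in range(len(candidates) + 1)
               for c in combinations(candidates, r)]
    front = []
    eps = max(cost_of(s) for s in subsets)        # start from the loosest budget
    while True:
        feasible = [s for s in subsets if cost_of(s) <= eps]
        if not feasible:
            break
        # subproblem: maximize coverage, break ties toward lower cost
        best = max(feasible, key=lambda s: (coverage_of(s), -cost_of(s)))
        front.append((coverage_of(best), cost_of(best)))
        eps = cost_of(best) - 1                   # tighten the budget and repeat
    return front
```

With three toy sensors covering {1,2}, {2,3}, {4} at unit cost, the sweep yields the four nondominated (coverage, cost) pairs (4,3), (3,2), (2,1), (0,0).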
Title: Multi-objective evolutionary algorithms and integer programming for the optimization of underwater acoustic sensor network design. Authors: Laurent Lemarchand, Ronan Serré, Bilal Latrach, Mathis Hamelotte, Catherine Dezan, David Dellong, Myriam Lajaunie. Pub Date: 2026-01-21. DOI: 10.1016/j.asoc.2026.114685. Applied Soft Computing, Vol. 191, Article 114685.
After natural disasters such as floods and earthquakes occur, Earth observation satellites (EOSs) often need to revisit affected areas multiple times to acquire multitemporal images. Such observation tasks are referred to as multitemporal revisit tasks. In this paper, we study the Earth observation satellite scheduling problem for multitemporal revisit tasks, which is a combinatorial optimization problem involving multiple practical constraints. This problem exhibits more complex constraints and is more challenging to solve than the traditional Earth observation satellite scheduling problem. Firstly, we formally define the problem and formulate it with a mixed integer nonlinear programming model. Secondly, we develop a variable neighborhood search algorithm to search for the optimal solution. This algorithm embeds a dynamic greedy heuristic, which can efficiently generate a schedule for EOSs. Thirdly, computational experiments demonstrate the efficiency and stability of the algorithm. Moreover, the CPU time of the algorithm increases linearly with both task count and revisit number, indicating good scalability.
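The variable neighborhood search scheme named above follows a standard shake/local-search/move-or-grow skeleton. The sketch below is that generic skeleton over permutations (shaking with k random swaps, best-improvement swap descent as local search), not the paper's satellite-specific algorithm or its dynamic greedy heuristic; the toy objective in the usage example is illustrative.

```python
import random

def vns(objective, x0, k_max=3, n_iters=30, seed=0):
    """Generic variable neighborhood search over permutations (minimization)."""
    rng = random.Random(seed)

    def local_search(x):
        # best-improvement descent over single swaps
        improved = True
        while improved:
            improved = False
            best, best_val = x, objective(x)
            for i in range(len(x)):
                for j in range(i + 1, len(x)):
                    y = x[:]
                    y[i], y[j] = y[j], y[i]
                    v = objective(y)
                    if v < best_val:
                        best, best_val, improved = y, v, True
            x = best
        return x

    x = local_search(list(x0))
    for _ in range(n_iters):
        k = 1
        while k <= k_max:
            y = x[:]
            for _ in range(k):                    # shake: k random swaps
                i, j = rng.sample(range(len(y)), 2)
                y[i], y[j] = y[j], y[i]
            y = local_search(y)
            if objective(y) < objective(x):
                x, k = y, 1                       # move and restart neighborhoods
            else:
                k += 1                            # escalate to a larger neighborhood
    return x
```

For instance, minimizing total displacement `sum(|x[i] - i|)` from a reversed start recovers the identity permutation.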
Title: Earth observation satellite scheduling problem for multitemporal revisit tasks: A variable neighborhood search algorithm. Authors: Ligang Xing, Xiaoxuan Hu, Waiming Zhu, Xutong Zhu, Wei Xia. Pub Date: 2026-01-21. DOI: 10.1016/j.asoc.2026.114688. Applied Soft Computing, Vol. 191, Article 114688.
Pub Date: 2026-01-21. DOI: 10.1016/j.asoc.2026.114677
Xiaofeng Wen, Yaoyao Fan, Fuchun Sun, Xiaohong Zhang, Kai Sun, Hang Xiao, Pengkun Liu
Addressing the issue that existing fuzzy rough set models largely rely on t-norm/t-conorm operators satisfying the associative law, thereby limiting their adaptability and expressive power, this paper introduces two types of non-associative logic operators, 1-micanorm and 0-micanorm, to propose a novel variable precision (W, M)-fuzzy rough set (abbreviated as VWMFRS) model. On one hand, this model breaks through the limitation of relying on associative logical connectives when constructing upper and lower approximation operators. On the other hand, it flexibly adjusts the granularity size during the fuzzy approximation process through variable precision parameters, significantly enhancing the flexibility and adaptability of the new fuzzy rough set model. Based on the VWMFRS model, a new attribute reduction algorithm is designed to achieve more efficient feature space dimensionality reduction. Classification experiments on the UCI dataset show that VWMFRS achieves an average accuracy improvement of 14.69% compared to various typical fuzzy rough set models (including recently proposed improved fuzzy rough set models). Additionally, this paper applies the VWMFRS model to deep learning-based image segmentation tasks. By leveraging the fuzzy lower approximation operator in the VWMFRS model, a new loss function called VFRSLoss is designed. Through segmentation experiments on multiple typical image datasets using the UNet++ architecture, the results show that using UNet++ with VFRSLoss for image segmentation further improves metrics such as IoU and F1-score. The model’s unique strength in uncertainty quantification endows it with excellent performance in these two tasks, verifying its versatility and effectiveness for uncertainty-aware learning scenarios.
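For contrast with the non-associative micanorm operators introduced above (whose definitions the abstract does not give), the classical fuzzy rough approximations that VWMFRS generalizes can be written down directly: the lower approximation uses an implicator (here Kleene–Dienes, `max(1 - r, a)`) and the upper approximation a t-norm (here `min`). This is only the textbook construction, shown to make the "associative connectives" being replaced concrete.

```python
def fuzzy_lower(R, A):
    """Classical fuzzy rough lower approximation:
    lower(A)(x) = min_y max(1 - R(x, y), A(y)),
    with R a fuzzy similarity matrix and A a fuzzy membership vector."""
    n = len(A)
    return [min(max(1.0 - R[x][y], A[y]) for y in range(n)) for x in range(n)]

def fuzzy_upper(R, A):
    """Classical fuzzy rough upper approximation:
    upper(A)(x) = max_y min(R(x, y), A(y))."""
    n = len(A)
    return [max(min(R[x][y], A[y]) for y in range(n)) for x in range(n)]
```

On a two-element universe with similarity 0.5 between the elements and A = [1, 0], the lower approximation shrinks membership to [0.5, 0] and the upper grows it to [1, 0.5], the usual lower ≤ A ≤ upper sandwich.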
Title: A novel model of fuzzy rough set with applications in data classification and image segmentation. Applied Soft Computing, Vol. 191, Article 114677.
Pub Date: 2026-01-20. DOI: 10.1016/j.asoc.2026.114659
Jianming Wang, Yang Xu, Zhouwang Yang
This paper explores the two-dimensional bin packing problem involving both rectangular and irregular shapes, with a focus on part priority, a critical factor in the furniture manufacturing and woodworking industries. Part priority is essential due to specific processing requirements and customer urgency. We propose a two-stage methodology aimed at minimizing both the number of bins containing priority parts and the total number of bins utilized. In the first stage, multiple parts are iteratively paired and approximated as rectangles to maximize the overall benefit of all pairings, diverging from traditional methods that focus on pairing two specific parts. The second stage introduces a rectangular packing module that incorporates two large-neighborhood search (LNS) algorithms. This module employs efficient operators that respect the priority objective, addressing the difficulty of large-scale priority problems. We evaluate the strengths and limitations of the rectangularization approach through experiments on irregular bin packing benchmarks and assess its applicability. Experiments on rectangular benchmark instances demonstrate the superiority of our approach in large-scale scenarios. Furthermore, tests on industrial data reveal that our method increases material utilization by 0.86% and reduces the number of priority bins by 1.71%, surpassing leading commercial software in both objectives. These results suggest that the proposed approach can be integrated into cutting software to provide practical and efficient solutions, thereby advancing intelligent manufacturing.
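The large-neighborhood search idea at the heart of the second stage is a destroy-and-repair loop. The sketch below shows that loop for plain 1D bin packing (random-removal destroy, first-fit repair, accept on fewer bins); it is a deliberate simplification of the paper's 2D rectangular module with priority objectives, and all names and parameters are this sketch's own.

```python
import random

def first_fit(items, capacity, bins=None):
    """Greedy repair: put each item in the first open bin with room, else a new bin."""
    bins = [list(b) for b in (bins or [])]
    for it in items:
        for b in bins:
            if sum(b) + it <= capacity:
                b.append(it)
                break
        else:
            bins.append([it])
    return bins

def lns_bin_packing(items, capacity, n_iters=200, destroy_frac=0.3, seed=0):
    """LNS destroy-and-repair loop for 1D bin packing; fewest bins wins."""
    rng = random.Random(seed)
    best = first_fit(sorted(items, reverse=True), capacity)   # first-fit-decreasing start
    for _ in range(n_iters):
        flat = [(bi, it) for bi, b in enumerate(best) for it in b]
        k = max(1, int(destroy_frac * len(flat)))
        removed_idx = set(rng.sample(range(len(flat)), k))
        kept = [[] for _ in best]
        removed = []
        for idx, (bi, it) in enumerate(flat):     # destroy: pull k random items out
            if idx in removed_idx:
                removed.append(it)
            else:
                kept[bi].append(it)
        kept = [b for b in kept if b]             # drop bins emptied by the removal
        # repair: reinsert removed items, largest first
        cand = first_fit(sorted(removed, reverse=True), capacity, bins=kept)
        if len(cand) < len(best):                 # accept strict improvements only
            best = cand
    return best
```

A priority-aware variant would additionally score each candidate by the number of bins containing priority parts before accepting it.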
Title: A two-stage solution incorporating large neighborhood search for the priority-aware 2D bin packing problem in furniture manufacturing. Applied Soft Computing, Vol. 191, Article 114659.
Pub Date: 2026-01-20. DOI: 10.1016/j.asoc.2026.114669
Zihao Li, Gang Liu, Yijing Chen, Quan Wang, Xvlong Zhao, Yuanze Zhang
Continuous authentication is crucial for ensuring security and privacy on smartphones amidst the widespread use of mobile internet and smartphones. Utilizing built-in sensors for continuous authentication has garnered interest, and deep learning demonstrates potential in extracting sensor information. However, existing methods require ample training data, which is often limited in the case of legitimate user samples. Additionally, some deep learning methods represented by convolutional neural network (CNN) struggle to capture long-range dependencies in behavioral sequences. To tackle these limitations, we present TCTAuth, a continuous authentication system underpinned by a triple convolutional transformer architecture. It utilizes data from motion sensors on smartphones to monitor user behavior patterns. TCTAuth can easily accommodate new users at any time, independent of the training data, without the need for retraining. We design a network called CoTNet that combines CNN and transformer for feature extraction. Convolutional layers and transformer encoders are stacked vertically in the network. CoTNet demonstrates advantages in learning local features from behavioral data and global features with long-range dependencies. To improve robustness, the model is trained by combining mini-batch hard mining (MBHM) triplet loss and binary cross-entropy (BCE) loss. We conduct extensive experiments on two publicly available datasets and a dataset collected by ourselves. TCTAuth achieves reliable authentication using only a single legitimate user sample, i.e., user interaction of 1 s. The experimental results demonstrate that TCTAuth achieves a maximum of 1.81% Equal Error Rate (EER) and 98.13% F1-Score, outperforming other representative methods.
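The mini-batch hard mining (MBHM) triplet loss mentioned above has a standard "batch-hard" form: for each anchor, take its hardest (farthest) positive and hardest (nearest) negative in the batch and hinge the margin between them. The sketch below computes that loss on raw embedding vectors; it illustrates the loss form only, not TCTAuth's network or training pipeline.

```python
import math

def batch_hard_triplet_loss(embeddings, labels, margin=0.2):
    """Batch-hard triplet loss on L2 distances: for each anchor, average
    max(0, d(anchor, hardest positive) - d(anchor, hardest negative) + margin)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    losses = []
    for i, (e, y) in enumerate(zip(embeddings, labels)):
        # hardest positive: farthest same-label sample; hardest negative: nearest other-label
        pos = [dist(e, embeddings[j]) for j in range(len(labels))
               if labels[j] == y and j != i]
        neg = [dist(e, embeddings[j]) for j in range(len(labels)) if labels[j] != y]
        if pos and neg:
            losses.append(max(0.0, max(pos) - min(neg) + margin))
    return sum(losses) / len(losses) if losses else 0.0
```

When classes are well separated the loss collapses to zero, which is the training signal's intended fixed point; in practice this would be combined with the BCE term the abstract describes.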
Title: TCTAuth: Triple convolutional transformer-based continuous authentication on smartphones. Applied Soft Computing, Vol. 191, Article 114669.
Pub Date: 2026-01-19. DOI: 10.1016/j.asoc.2026.114648
Sangkeum Lee
Satellite communication systems (SCSs) deployed in tactical environments must maintain reliable links under severe interference, jamming attempts, and large path loss. From a soft computing perspective, frequency-hopping (FH) synchronization in such uncertain and highly dynamic channels is a sequential decision problem that benefits from adaptive, data-driven control. In this paper, we propose a reinforcement learning (RL)–driven FH synchronization framework for dehop–rehop SCSs, where a conventional serial search performs coarse acquisition and a proximal policy optimization (PPO) agent with a GCN–Bi-LSTM network refines the uplink hop timing. The RL agent interacts with the stochastic channel, observing signal-energy patterns and learning to minimize both mean acquisition time (MAT) and mean-squared error (MSE) of the timing estimate without requiring an explicit channel model. Mathematical analysis and Monte Carlo simulations show that the proposed hybrid method reduces the average number of hops required for synchronization by 58.17 % and the MSE of uplink hop-timing estimation by 76.95 % compared with a conventional serial-search scheme. Relative to an early–late-gate synchronization method that combines serial search with an LSTM network, the average number of hops is further reduced by 12.24 % and the MSE by 18.5 %. These results demonstrate that the PPO-based GCN–Bi-LSTM agent provides a flexible soft-computing solution that can adapt to rapidly varying SCS operating conditions while significantly improving FH synchronization performance.
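The "conventional serial search" used for coarse acquisition can be pictured as sliding the known hop pattern across the received per-slot energy trace and stopping at the first offset whose correlation clears a threshold. The sketch below is only that generic correlator, with an illustrative normalized-correlation score and threshold; the paper's RL-based timing refinement, channel model, and signal processing are not represented.

```python
def serial_search_offset(rx_energy, hop_pattern, threshold=0.9):
    """Coarse frequency-hopping acquisition by serial search.

    Slides hop_pattern over rx_energy one offset at a time and returns the first
    (offset, score) whose normalized correlation reaches the threshold, else None.
    """
    n, m = len(rx_energy), len(hop_pattern)
    ref_norm = sum(h * h for h in hop_pattern) ** 0.5
    for off in range(n - m + 1):
        window = rx_energy[off:off + m]
        w_norm = sum(w * w for w in window) ** 0.5 or 1.0   # guard all-zero windows
        score = sum(w * h for w, h in zip(window, hop_pattern)) / (w_norm * ref_norm)
        if score >= threshold:
            return off, score
    return None
```

Because the search tests offsets one hop at a time, its mean acquisition time grows with the timing uncertainty, which is exactly the quantity the paper's learned refinement stage reduces.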
Title: Frequency hopping synchronization for satellite communication system using reinforcement learning. Applied Soft Computing, Vol. 191, Article 114648.
Pub Date: 2026-01-19. DOI: 10.1016/j.asoc.2026.114653
Tiantian Jiang, Guolin Yu, Jun Ma
A novel ϵ-zone-insensitive two-parameter pinball loss function tailored for large-scale binary classification is introduced in this study. By integrating this loss with a capped L2,p-metric, a robust sparse classification framework termed CL2,p-TPSP-TSVM is proposed, which is designed to jointly optimize computational efficiency and outlier robustness. Support vector cardinality of the model is dynamically regulated via parametric adaptation of S, s, and ϵ, which allows for scalable processing of high-dimensional data. To suppress outlier interference, a mechanism for minimizing intra-class distance dispersion under the capped L2,p-norm is incorporated into the model. To address the inherent non-convexity and non-smoothness of the optimization problem, a convergent iterative algorithm is devised, with the property of monotonic descent guaranteed. Each iteration is decomposed into sequential convex subproblems with closed-form solutions, which ensures computational tractability. Empirical evaluations conducted on 10 large-scale benchmark datasets show statistically significant improvements in classification accuracy and computational efficiency, while the model retains robustness in comparison with state-of-the-art methods. This framework provides support for the advancement of scalable, high-performance machine learning in noisy, high-dimensional regimes.
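An ϵ-zone-insensitive two-parameter pinball loss, in its generic form, is zero inside the insensitive band |u| ≤ ϵ and linear with a different slope on each side outside it. The sketch below shows that generic shape only; the slope names `tau_pos`/`tau_neg` are hypothetical and do not correspond to the paper's S and s parameters, whose exact roles the abstract does not define.

```python
def eps_pinball(u, tau_pos=1.0, tau_neg=0.5, eps=0.1):
    """Generic epsilon-insensitive two-parameter pinball loss:
    0 inside |u| <= eps, slope tau_pos above the band, slope tau_neg below it."""
    if u > eps:
        return tau_pos * (u - eps)
    if u < -eps:
        return tau_neg * (-u - eps)
    return 0.0
```

The flat zone is what suppresses support vectors for points near the decision surface, while the asymmetric slopes penalize the two sides of the margin unequally, which is the usual route to noise insensitivity in pinball-loss SVMs.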
{"title":"A robust sparse twin SVM with ϵ-zone insensitive two-parameter pinball loss for large-scale binary classification","authors":"Tiantian Jiang, Guolin Yu, Jun Ma","doi":"10.1016/j.asoc.2026.114653","DOIUrl":"10.1016/j.asoc.2026.114653","url":null,"abstract":"<div><div>A novel <span><math><mi>ϵ</mi></math></span>-zone-insensitive two-parameter pinball loss function tailored for large-scale binary classification is introduced in this study. By integrating this loss with a capped <span><math><msub><mi>L</mi><mrow><mn>2</mn><mo>,</mo><mi>p</mi></mrow></msub></math></span>-metric, a robust sparse classification framework termed <span><math><mi>C</mi><msub><mrow><mi>L</mi></mrow><mrow><mn>2</mn><mo>,</mo><mi>p</mi></mrow></msub></math></span>-TPSP-TSVM is proposed, which is designed to jointly optimize computational efficiency and outlier robustness. Support vector cardinality of the model is dynamically regulated via parametric adaptation of <span><math><mi>S</mi></math></span>, <span><math><mi>s</mi></math></span>, and <span><math><mi>ϵ</mi></math></span>, which allows for scalable processing of high-dimensional data. To suppress outlier interference, a mechanism for minimizing intra-class distance dispersion under the capped <span><math><msub><mi>L</mi><mrow><mn>2</mn><mo>,</mo><mi>p</mi></mrow></msub></math></span>-norm is incorporated into the model. To address the inherent non-convexity and non-smoothness of the optimization problem, a convergent iterative algorithm is devised, with the property of monotonic descent guaranteed. Each iteration is decomposed into sequential convex subproblems with closed-form solutions, which ensures computational tractability. Empirical evaluations conducted on 10 large-scale benchmark datasets show statistically significant improvements in classification accuracy and computational efficiency, while the model retains robustness in comparison with state-of-the-art methods. 
This framework provides support for the advancement of scalable, high-performance machine learning in noisy, high-dimensional regimes.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"191 ","pages":"Article 114653"},"PeriodicalIF":6.6,"publicationDate":"2026-01-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-18DOI: 10.1016/j.asoc.2026.114666
Wulfran Fendzi Mbasso , Hassan M. Hussein Farh , Ambe Harrison , Abdullrahman A. Al-Shamma
Solving high-dimensional nonlinear equation systems (NES) requires search procedures that preserve diversity while converging reliably and, where applicable, discovering multiple roots. We propose the Adaptive Multi-stage Clustering Differential Evolution (AMCDE) algorithm, which couples fitness-aware clustering with entropy-triggered dynamic niching and feeds cluster-level convergence signals into mutation/crossover control in Differential Evolution (DE). We evaluate AMCDE on 110 NES benchmarks under a matched computation budget (population size of 100 and 10,000 function evaluations) and a uniform success threshold ‖F(x)‖₂ ≤ 10⁻⁸ (relaxed to 10⁻⁶ for flagged ill-conditioned subsets). Distinct-root discovery is quantified using Euclidean (ℓ₂) distance–based deduplication at 10⁻³ (optional pre-cluster at 10⁻⁴). We compare against recent NES-oriented competitors and adaptive DE baselines, including JADE (Adaptive Differential Evolution with Optional External Archive) and SHADE (Success-History Adaptive Differential Evolution), re-implemented on the same platform with grid-tuned settings for fairness. AMCDE attains the best median success rate and residuals, with higher distinct-root coverage at comparable runtimes; nonparametric Wilcoxon tests and Friedman tests with Holm correction confirm significance. A critical-difference diagram places AMCDE in the top clique. These findings indicate that coupling adaptive clustering with dynamic niching yields a robust, scalable NES solver that preserves exploration without sacrificing convergence efficiency. All comparative results are validated using non-parametric statistics (Wilcoxon signed-rank and Friedman tests with Holm post-hoc, α=0.05), with our method showing statistically significant gains over competing DE variants across the benchmark suite.
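The success and deduplication criteria stated in the abstract (‖F(x)‖₂ ≤ 10⁻⁸ for a root, ℓ₂ deduplication at 10⁻³) can be sketched directly; the function below mirrors those thresholds but is an illustrative sketch, not the authors' implementation.

```python
import numpy as np

def distinct_roots(candidates, F, success_tol=1e-8, dedup_tol=1e-3):
    """Filter candidate solutions of F(x) = 0 down to distinct roots.

    A candidate counts as a root when ||F(x)||_2 <= success_tol; roots
    closer than dedup_tol in Euclidean distance are treated as the
    same root. Thresholds follow the paper's stated protocol.
    """
    roots = []
    for x in candidates:
        x = np.asarray(x, dtype=float)
        if np.linalg.norm(F(x)) > success_tol:
            continue  # residual too large: not a root
        if all(np.linalg.norm(x - r) >= dedup_tol for r in roots):
            roots.append(x)  # sufficiently far from all accepted roots
    return roots
```

For example, for F(x) = x² − 1 the candidates {1.0, 1.0 + 10⁻⁹, −1.0, 0.5} reduce to the two distinct roots ±1: the near-duplicate is merged and 0.5 fails the residual test.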
{"title":"An adaptive multi-stage clustering-based differential evolution algorithm with dynamic niching for robust solving of high-dimensional nonlinear equation systems","authors":"Wulfran Fendzi Mbasso , Hassan M. Hussein Farh , Ambe Harrison , Abdullrahman A. Al-Shamma","doi":"10.1016/j.asoc.2026.114666","DOIUrl":"10.1016/j.asoc.2026.114666","url":null,"abstract":"<div><div>Solving high-dimensional systems of Nonlinear Equation Systems (NES) requires search procedures that preserve diversity while converging reliably and, where applicable, discovering multiple roots. We propose the Adaptive Multi-stage Clustering Differential Evolution (AMCDE) algorithm, which couples fitness-aware clustering with entropy-triggered dynamic niching and feeds cluster-level convergence signals into mutation/crossover control in Differential Evolution (DE). We evaluate AMCDE on 110 NES benchmarks under a matched computation budget (population size of 100 and 10,000 function evaluations) and a uniform success threshold ‖F(x)‖₂ ≤ 10⁻⁸ (relaxed to 10⁻⁶ for flagged ill-conditioned subsets). Distinct-root discovery is quantified using Euclidean (ℓ₂) distance–based deduplication at 10⁻³ (optional pre-cluster at 10⁻⁴). We compare against recent NES-oriented competitors and adaptive DE baselines, including JADE (Adaptive Differential Evolution with Optional External Archive) and SHADE (Success-History Adaptive Differential Evolution), re-implemented on the same platform with grid-tuned settings for fairness. AMCDE attains the best median success rate and residuals, with higher distinct-root coverage at comparable runtimes; nonparametric Wilcoxon tests and Friedman tests with Holm correction confirm significance. A critical-difference diagram places AMCDE in the top clique. These findings indicate that coupling adaptive clustering with dynamic niching yields a robust, scalable NES solver that preserves exploration without sacrificing convergence efficiency. 
All comparative results are validated using non-parametric statistics (Wilcoxon signed-rank and Friedman tests with Holm post-hoc, α=0.05), with our method showing statistically significant gains over competing DE variants across the benchmark suite.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"191 ","pages":"Article 114666"},"PeriodicalIF":6.6,"publicationDate":"2026-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039725","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2026-01-17DOI: 10.1016/j.asoc.2026.114642
Freeha Qamar , Iqra Zareef , Muhammad Riaz , Muhammad Aslam , Vladimir Simic , Dragan Pamucar
For the purpose of optimizing their routes, last mile delivery (LMD) organizations use vehicle routing software (VRS). VRS selection is therefore of central importance for corporations that manage last-mile deliveries. This study presents an innovative VRS selection methodology specifically designed for LMD businesses. We formulate the problem within the framework of multi-criteria decision-making (MCDM). Assessment criteria are grounded in the literature, mathematical formulation, and expert judgment. The use of circular intuitionistic fuzzy sets (CIFS) in this model offers a more flexible and expressive means of representing uncertain and conflicting data. A novel aggregation operator, termed the CIFDBM operator, is presented to improve aggregation effectiveness; it is inspired by the classical Bonferroni mean (BM) operator and built on the Dombi t-norm and t-conorm. Our research sets out a novel hybrid structure that integrates CIFS-based decision-making with MEREC (method based on the removal effects of criteria) as a weighting technique and RAFSI (ranking of alternatives through functional mapping of criterion subintervals into a single interval) as a ranking approach. The resulting hybrid MCDM approach, named CIFS-MEREC-RAFSI, delivers reliable, competent, and high-quality decisions for VRS selection problems containing inconsistent, uncertain, and vague data. The system outperforms state-of-the-art CIF-based MCDM approaches by 17%–22% in ranking stability, yielding more consistent and dependable rankings across all options.
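For readers unfamiliar with the Bonferroni mean that underlies the CIFDBM operator, the classical crisp form for positive values a₁,…,aₙ is BM^{p,q}(a) = (1/(n(n−1)) · Σ_{i≠j} aᵢᵖ aⱼᵠ)^{1/(p+q)}. The sketch below implements only this crisp operator; the circular intuitionistic fuzzy and Dombi-norm extensions from the paper are not reproduced.

```python
import numpy as np

def bonferroni_mean(a, p=1.0, q=1.0):
    """Classical Bonferroni mean BM^{p,q} of positive values.

        BM = ( 1/(n(n-1)) * sum_{i != j} a_i^p * a_j^q )^(1/(p+q))

    Captures pairwise interrelationships between criteria: each term
    couples one value raised to p with a different value raised to q.
    """
    a = np.asarray(a, dtype=float)
    n = len(a)
    total = sum(a[i] ** p * a[j] ** q
                for i in range(n) for j in range(n) if i != j)
    return (total / (n * (n - 1))) ** (1.0 / (p + q))
```

As a sanity check, the Bonferroni mean of a constant vector equals that constant, and for p = q = 1 it reduces to a symmetric pairwise-product mean.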
{"title":"Circular intuitionistic fuzzy Dombi Bonferroni mean aggregation operators and MEREC-RAFSI approach for optimizing vehicle routing software","authors":"Freeha Qamar , Iqra Zareef , Muhammad Riaz , Muhammad Aslam , Vladimir Simic , Dragan Pamucar","doi":"10.1016/j.asoc.2026.114642","DOIUrl":"10.1016/j.asoc.2026.114642","url":null,"abstract":"<div><div>For the purpose of optimizing their routes, last mile delivery (LMD) organizations use vehicle routing software (VRS). The topic of VRS selection is of the utmost significance for corporations that deal with managing deliveries at the last mile. This study presents an innovative VRS selection methodology specifically designed for LMD businesses. We regulate the issue within the confines of multi-criteria decision-making (MCDM). Criteria for assessment are based on solid literature, mathematical formulation, and professional judgments. The utilization of circular intuitionistic fuzzy set (CIFS) for this model offers a more adaptable and evocative method for expressing unclear and conflicting data. A novel operator termed as a CIFDBM operator is presented to serve as an aggregation operator to boost the effectiveness of aggregation, influenced by the fundamental Bonferroni mean (BM) operator and centered on Dombi t-norm and t-conorm. Our research sets out a novel hybrid structure that integrates CIFS-based decision-making along with MEREC (method based on the removal effects of criteria) as a weighting technique and RAFSI (ranking of alternatives through functional mapping of criterion subintervals into a single interval) as a ranking approach. A robust MCDM hybrid approach named CIFS-MEREC-RAFSI is designed, which provides a reliable, competent, and quality decision for VRS selection problems containing inconsistent, uncertain, and vague data. 
The system outperforms state-of-the-art CIF-based MCDM approaches by 17%–22% in terms of ranking stability, resulting in more consistent and dependable rankings for all options.</div></div>","PeriodicalId":50737,"journal":{"name":"Applied Soft Computing","volume":"191 ","pages":"Article 114642"},"PeriodicalIF":6.6,"publicationDate":"2026-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146039854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}