Pub Date: 2025-01-31 | DOI: 10.1016/j.ins.2025.121927
Mustapha Maimouni, Badr Abou El Majd, Mohsine Bouya
Radio Frequency Identification (RFID) technology is recognised as an effective solution for Internet of Things (IoT) applications across various domains. However, planning an RFID system poses an NP-hard combinatorial optimisation problem known as the RFID network planning (RNP) problem. This problem requires satisfying several criteria, including a minimum number of antennas for full coverage and a balanced load, while avoiding interference, for optimal deployment. Previous studies have often relied on arbitrary initial parameters, particularly the number of antennas, leading to suboptimal solutions. This study addresses the RNP problem with a hybrid metaheuristic that incorporates the advantages of artificial neural networks, using machine learning methods to automatically determine the initial number of antennas. The resulting approach, named ‘I-RAENNA’, is evaluated on well-known instances and compared to established methods such as VNPSO, HPSO, CSP, RAE-NNA, and SLIWMBBO. The experimental results demonstrate that I-RAENNA significantly outperforms state-of-the-art solutions, confirming its effectiveness in improving RFID system deployment.
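A data-driven starting point for the antenna count, as opposed to an arbitrary guess, can be illustrated with a plain greedy set-cover heuristic. This is a generic sketch, not the paper's I-RAENNA method; the coverage sets and antenna IDs are hypothetical.

```python
# Hypothetical sketch: derive an initial antenna count for an RNP-style
# instance via greedy set cover (NOT the I-RAENNA method from the paper).
# Each candidate antenna covers a set of tag positions; we greedily pick
# the antenna covering the most still-uncovered tags until all are covered.

def greedy_antenna_count(coverage):
    """coverage: dict antenna_id -> set of tag ids it can read."""
    uncovered = set().union(*coverage.values())
    chosen = []
    while uncovered:
        # pick the antenna covering the most still-uncovered tags
        best = max(coverage, key=lambda a: len(coverage[a] & uncovered))
        if not coverage[best] & uncovered:
            break  # remaining tags are unreachable by any antenna
        chosen.append(best)
        uncovered -= coverage[best]
    return chosen

coverage = {
    "A1": {1, 2, 3},
    "A2": {3, 4},
    "A3": {4, 5, 6},
    "A4": {2, 6},
}
print(greedy_antenna_count(coverage))  # ['A1', 'A3'] covers all six tags
```

The length of the returned list (here 2) would serve as the initial antenna count handed to a downstream metaheuristic.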
{"title":"Optimising RFID network planning problem using an improved automated approach inspired by artificial neural networks","authors":"Mustapha Maimouni , Badr Abou El Majd , Mohsine Bouya","doi":"10.1016/j.ins.2025.121927","DOIUrl":"10.1016/j.ins.2025.121927","url":null,"abstract":"<div><div>Radio Frequency Identification (RFID) technology is recognised as an effective solution for Internet of Things (IoT) applications across various domains. However, scheduling an RFID system presents an NP-hard combinatorial optimisation problem known as the RFID network planning problem (RNP). This problem requires satisfying several criteria, including a minimum number of antennas for full coverage and a balanced load, while avoiding interference for optimal deployment. Previous studies have often relied on arbitrary initial parameters, particularly concerning the number of antennas, leading to suboptimal solutions. This study investigates a novel approach to address the RNP problem by employing a hybrid metaheuristic-based approach incorporating the advantages of artificial neural networks, whereby machine learning methods are implemented to automatically initiate the initial number of antennas. This study introduces a novel approach named ‘I-RAENNA’, which is evaluated against well-known instances and compared to established methods such as VNPSO, HPSO, CSP, RAE-NNA, and SLIWMBBO. 
The experimental results demonstrate that I-RAENNA significantly outperforms state-of-the-art solutions, proving its effectiveness in improving RFID system deployment.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"703 ","pages":"Article 121927"},"PeriodicalIF":8.1,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143103851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-31 | DOI: 10.1016/j.ins.2025.121923
Victoria Erofeeva, Sergei Parsegov
The Shapley value, a concept from cooperative game theory, plays a crucial role in the fair distribution of payoffs among participants based on their individual contributions. However, exact computation of Shapley values is often impractical due to its exponential complexity. The currently available approximation methods offer some benefits but come with significant drawbacks, such as high computational overhead, variability in accuracy, and reliance on heuristics that may compromise fairness. Given these limitations, there is a pressing need for approaches that ensure consistent and reliable results. A deterministic method could not only improve computational efficiency but also ensure reproducibility and fairness. Leveraging principles from compressed sensing, a family of techniques that exploit data sparsity, together with elementary results from matrix theory, this paper introduces a novel algorithm for approximating Shapley values, emphasizing deterministic computations that ensure reproducible data valuation and lessen computational demands. We illustrate the efficiency of this algorithm within the framework of data valuation in the two-settlement electricity market. The simulations indicate clear advantages of the proposed method over existing ones. In particular, our method achieved an average increase of 33.8% in approximation accuracy, as measured by relative error, while maintaining consistent performance across multiple trials.
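The exponential cost that motivates approximation can be seen directly in the textbook definition: exact Shapley values average each player's marginal contribution over all n! orderings. The sketch below is the standard exact computation (not the paper's sparsity-based algorithm), with a classic glove-game example.

```python
from itertools import permutations

# Exact Shapley values by averaging marginal contributions over all n!
# player orderings — illustrating the factorial cost that motivates
# approximation methods. Textbook Shapley, not the paper's algorithm.

def shapley(players, value):
    """value: callable on a frozenset coalition -> payoff."""
    n = len(players)
    phi = {p: 0.0 for p in players}
    for order in permutations(players):       # n! orderings
        coalition = frozenset()
        for p in order:
            phi[p] += value(coalition | {p}) - value(coalition)
            coalition = coalition | {p}
    fact = 1
    for k in range(2, n + 1):
        fact *= k
    return {p: v / fact for p, v in phi.items()}

# 3-player glove game: player 1 holds a left glove, players 2 and 3 hold
# right gloves; a coalition is worth 1 if it can form at least one pair.
v = lambda s: 1.0 if 1 in s and len(s) >= 2 else 0.0
print(shapley([1, 2, 3], v))  # player 1 gets 2/3; players 2 and 3 get 1/6 each
```

Already at 25 players this loop would require 25! ≈ 1.6 × 10^25 orderings, which is why deterministic approximation schemes matter.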
{"title":"A novel sparsity-based deterministic method for Shapley value approximation, with applications","authors":"Victoria Erofeeva , Sergei Parsegov","doi":"10.1016/j.ins.2025.121923","DOIUrl":"10.1016/j.ins.2025.121923","url":null,"abstract":"<div><div>The Shapley value, a concept from cooperative game theory, plays a crucial role in fair distribution of payoffs among participants based on their individual contributions. However, the exact computation of the Shapley values is often impractical due to the exponential complexity. The currently available approximation methods offer some benefits but come with significant drawbacks, such as high computational overhead, variability in accuracy, and reliance on heuristics that may compromise fairness. Given these limitations, there is a pressing need for approaches that ensure consistent and reliable results. A deterministic method could not only improve computational efficiency but also ensure reproducibility and fairness. Leveraging principles from the so-called compressed sensing, techniques which exploit data sparsity, and elementary results from the matrix theory, this paper introduces a novel algorithm for approximating Shapley values, emphasizing deterministic computations that ensure reproducible data valuation and lessen computational demands. We illustrate the efficiency of this algorithm within the framework of data valuation in the two-settlement electricity market. The simulations convincingly indicate essential advantages of the proposed method over the existing ones. 
In particular, our method achieved an average increase of 33.8% in approximation accuracy, as measured by relative error, while maintaining consistent performance across multiple trials.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"704 ","pages":"Article 121923"},"PeriodicalIF":8.1,"publicationDate":"2025-01-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143378823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-30 | DOI: 10.1016/j.ins.2025.121924
Shonal Chaudhry, Anuraganand Sharma
Curriculum learning has proven effective in enhancing the performance of a classifier by gradually training models on samples that range from simple to difficult based on prior information. We have previously explored the curriculum learning approach known as Data Distribution-based Curriculum Learning (DDCL). In this study, we propose a novel extension to DDCL termed Dynamic DDCL, leveraging self-paced learning to create a more informed learner. Its dynamic curriculum adapts to the needs of the model as it evolves during training. We further introduce DDCL Ensemble, an ensemble learner that aggregates the enhancements of the distinct scoring methods present in DDCL and Dynamic DDCL. We assess the effectiveness of Dynamic DDCL using classifiers based on neural networks. The performance of DDCL Ensemble is evaluated against a counterpart ensemble learner devoid of any curriculum learning. Experimental findings highlight the superior performance and generalisation capabilities achieved by Dynamic DDCL and DDCL Ensemble, with performance increases ranging from 1% to 34% and 1% to 11% respectively, when compared to other self-paced learning methodologies and standard ensembles. In addition, they show potential in advancing the state-of-the-art in classifier optimisation for domains where training data is limited.
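The two mechanisms combined here, a fixed easy-to-hard curriculum and a self-paced admission threshold, can be sketched generically. The difficulty scores below are placeholders; DDCL's actual data-distribution-based scoring is not reproduced.

```python
# Generic curriculum-learning sketch (NOT the DDCL scoring itself):
# a static curriculum presents samples easy-to-hard, while a self-paced
# variant admits only samples below a difficulty threshold that grows
# as the model improves. Difficulty scores here are toy placeholders.

def curriculum_order(samples, difficulty):
    """Static curriculum: sort samples from easiest to hardest."""
    return sorted(samples, key=difficulty)

def self_paced_batch(samples, difficulty, threshold):
    """Self-paced step: admit only samples at or below the threshold."""
    return [s for s in samples if difficulty(s) <= threshold]

samples = [(0.9, "hard"), (0.1, "easy"), (0.5, "medium")]
diff = lambda s: s[0]  # first element acts as the difficulty score

print(curriculum_order(samples, diff))       # easy -> medium -> hard
print(self_paced_batch(samples, diff, 0.6))  # admits easy and medium only
```

In a dynamic scheme, the threshold passed to `self_paced_batch` would be raised each epoch based on training progress, which is the kind of adaptivity the abstract describes.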
{"title":"Dynamic Data Distribution-based Curriculum Learning","authors":"Shonal Chaudhry, Anuraganand Sharma","doi":"10.1016/j.ins.2025.121924","DOIUrl":"10.1016/j.ins.2025.121924","url":null,"abstract":"<div><div>Curriculum learning has proven effective in enhancing the performance of a classifier by gradually training models on samples that range from simple to difficult based on prior information. We have previously explored the innovative curriculum learning approach known as Data Distribution-based Curriculum Learning (DDCL). In this study, we propose a novel extension to DDCL termed Dynamic DDCL, leveraging self-paced learning to create a more informed learner. Its dynamic curriculum promotes adaptive learning capabilities by adapting to the needs of the model as it evolves during training. We further introduce DDCL Ensemble, an ensemble learner that aggregates the enhancements of the distinct scoring methods present in DDCL and Dynamic DDCL. We assess the effectiveness of Dynamic DDCL using classifiers based on neural networks. The performance of DDCL Ensemble is evaluated against a counterpart ensemble learner which is devoid of any curriculum learning. Experimental findings highlight the superior performance and generalisation capabilities achieved by Dynamic DDCL and DDCL Ensemble, with performance increases ranging from 1% to 34% and 1% to 11% respectively, when compared to other self-paced learning methodologies and standard ensembles. 
In addition, they show potential in advancing the state-of-the-art in classifier optimisation for domains where training data is limited.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121924"},"PeriodicalIF":8.1,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-30 | DOI: 10.1016/j.ins.2025.121905
Antonio Emanuele Cinà, Ambra Demontis, Battista Biggio, Fabio Roli, Marcello Pelillo
Sponge examples are test-time inputs optimized to increase energy consumption and prediction latency of deep networks deployed on hardware accelerators. By increasing the fraction of neurons activated during classification, these attacks reduce sparsity in network activation patterns, worsening the performance of hardware accelerators. In this work, we present a novel training-time attack, named sponge poisoning, which aims to worsen energy consumption and prediction latency of neural networks on any test input without affecting classification accuracy. To stage this attack, we assume that the attacker can control only a few model updates during training — a likely scenario, e.g., when model training is outsourced to an untrusted third party or distributed via federated learning. Our extensive experiments on image classification tasks show that sponge poisoning is effective, and that fine-tuning poisoned models to repair them poses prohibitive costs for most users, highlighting that tackling sponge poisoning remains an open issue.
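The quantity these attacks manipulate, the fraction of nonzero post-activation values, is easy to make concrete. A minimal sketch, with toy pre-activation values standing in for a real network:

```python
# Sketch of the quantity sponge attacks target: activation density, the
# fraction of nonzero post-ReLU activations. Hardware accelerators skip
# zero activations, so pushing density up raises energy use and latency.
# The two "networks" below are toy pre-activation values, not real models.

def relu(x):
    return [max(0.0, v) for v in x]

def activation_density(layers_pre_activation):
    """Fraction of nonzero activations across all layers."""
    total = nonzero = 0
    for pre in layers_pre_activation:
        post = relu(pre)
        total += len(post)
        nonzero += sum(1 for v in post if v != 0.0)
    return nonzero / total

sparse_net = [[-1.0, 2.0, -3.0, 0.5], [-0.2, -0.1, 4.0, -5.0]]
dense_net  = [[1.0, 2.0, 3.0, 0.5], [0.2, 0.1, 4.0, 5.0]]
print(activation_density(sparse_net))  # 0.375
print(activation_density(dense_net))   # 1.0
```

A sponge-poisoned model would behave like `dense_net`: classification output unchanged, but nearly every neuron firing on every input.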
{"title":"Energy-latency attacks via sponge poisoning","authors":"Antonio Emanuele Cinà , Ambra Demontis , Battista Biggio , Fabio Roli , Marcello Pelillo","doi":"10.1016/j.ins.2025.121905","DOIUrl":"10.1016/j.ins.2025.121905","url":null,"abstract":"<div><div>Sponge examples are test-time inputs optimized to increase energy consumption and prediction latency of deep networks deployed on hardware accelerators. By increasing the fraction of neurons activated during classification, these attacks reduce sparsity in network activation patterns, worsening the performance of hardware accelerators. In this work, we present a novel training-time attack, named <em>sponge poisoning</em>, which aims to worsen energy consumption and prediction latency of neural networks on <em>any</em> test input without affecting classification accuracy. To stage this attack, we assume that the attacker can control only a few model updates during training — a likely scenario, e.g., when model training is outsourced to an untrusted third party or distributed via federated learning. Our extensive experiments on image classification tasks show that sponge poisoning is effective, and that fine-tuning poisoned models to repair them poses prohibitive costs for most users, highlighting that tackling sponge poisoning remains an open issue.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121905"},"PeriodicalIF":8.1,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100203","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-30 | DOI: 10.1016/j.ins.2025.121916
Hao Shu
Density-based clustering is among the most popular clustering paradigms, since it can identify clusters of arbitrary shape as long as they are separated by low-density regions. However, a high-density region that is not separated by low-density ones might itself contain different structures belonging to multiple clusters. To the best of our knowledge, all previous density-based clustering algorithms fail to detect such structures. In this paper, we provide a novel density-based clustering scheme to address this problem. It is the first clustering algorithm that can detect fine-grained structures within a high-density region that is not separated by low-density ones, and it thus extends the range of applications of clustering. The algorithm employs secondary directed differential, hierarchy, normalized density, and a self-adaption coefficient; it is called Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption, dubbed SDC-HSDD-NDSA. Experiments on synthetic and real datasets verify the effectiveness, robustness, and granularity independence of the algorithm, and the scheme is compared to unsupervised schemes in the Python package Scikit-learn. Results demonstrate that our algorithm outperforms previous ones in many situations, most markedly when clusters have regular internal structures. For example, averaged over eight noiseless synthetic datasets with internal structures, previous algorithms obtain ARI and NMI scores below 0.6 and 0.7, while the presented algorithm obtains scores above 0.9 and 0.95, respectively.
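The baseline behaviour the abstract improves on, separating clusters only by low-density gaps, is what classical DBSCAN does. A compact pure-Python DBSCAN sketch (the baseline family, not SDC-HSDD-NDSA) on a toy dataset with two well-separated groups:

```python
# Minimal DBSCAN sketch — the classical density-based baseline that can
# only separate clusters divided by low-density regions (the limitation
# the abstract targets). NOT an implementation of SDC-HSDD-NDSA.

def dbscan(points, eps, min_pts):
    n = len(points)
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neighbors = [
        [j for j in range(n) if dist(points[i], points[j]) <= eps]
        for i in range(n)
    ]
    labels = [None] * n  # None = unvisited, -1 = noise
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neighbors[i]) < min_pts:
            labels[i] = -1  # provisionally noise (may become a border point)
        else:
            labels[i] = cluster
            queue = list(neighbors[i])
            while queue:  # flood-fill the density-connected region
                j = queue.pop()
                if labels[j] in (None, -1):
                    if labels[j] is None and len(neighbors[j]) >= min_pts:
                        queue.extend(neighbors[j])
                    labels[j] = cluster
            cluster += 1
    return labels

pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
print(dbscan(pts, eps=1.5, min_pts=2))  # two clusters: [0, 0, 0, 1, 1, 1]
```

If both groups were merged into one dense blob with internal substructure, this algorithm would return a single cluster, which is exactly the failure mode the proposed scheme addresses.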
{"title":"SDC-HSDD-NDSA: Structure detecting cluster by hierarchical secondary directed differential with normalized density and self-adaption","authors":"Hao Shu","doi":"10.1016/j.ins.2025.121916","DOIUrl":"10.1016/j.ins.2025.121916","url":null,"abstract":"<div><div>Density-based clustering is the most popular clustering algorithm since it can identify clusters of arbitrary shape as long as they are separated by low-density regions. However, a high-density region that is not separated by low-density ones might also have different structures belonging to multiple clusters. As far as we know, all previous density-based clustering algorithms fail to detect such structures. In this paper, we provide a novel density-based clustering scheme to address this problem. It is the first clustering algorithm that can detect meticulous structures in a high-density region that is not separated by low-density ones and thus extends the range of applications of clustering. The algorithm employs secondary directed differential, hierarchy, normalized density, as well as the self-adaption coefficient, called Structure Detecting Cluster by Hierarchical Secondary Directed Differential with Normalized Density and Self-Adaption, dubbed SDC-HSDD-NDSA. Experiments on synthetic and real datasets are implemented to verify the effectiveness, robustness, and granularity independence of the algorithm, and the scheme is compared to unsupervised schemes in the Python package <em>Scikit-learn</em>. Results demonstrate that our algorithm outperforms previous ones in many situations, especially significantly when clusters have regular internal structures. 
For example, averaging over the eight noiseless synthetic datasets with structures employing ARI and NMI criteria, previous algorithms obtain scores below 0.6 and 0.7, while the presented algorithm obtains scores higher than 0.9 and 0.95, respectively.<span><span><sup>1</sup></span></span></div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121916"},"PeriodicalIF":8.1,"publicationDate":"2025-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-28 | DOI: 10.1016/j.ins.2025.121888
Minglan Xiong, Huawei Wang, Zhaoguo Hou, Yiik Diew Wong
Deep learning techniques have been widely applied in the study of risk assessment and prediction in civil aviation safety, as they can effectively learn patterns and rules from aviation safety data. However, the task becomes more challenging when addressing the hierarchical structure of aviation safety risk identification. In this context, a hierarchical branching (HB) structure endows risk identification models with stepwise decision-making capabilities. This study proposes a hierarchical multi-branch deep learning approach that integrates Convolutional Neural Network–Bidirectional Long Short-Term Memory (CNN-BiLSTM) blocks into HB to form the HB-CNN-BiLSTM (HCBL) model for identifying multi-level civil aviation safety risk information. The proposed method simultaneously facilitates safety hazard detection, hazard attribute identification, and risk level assessment, thereby capturing finer-grained risk patterns and relationships. Comparative experiments were conducted on different civil aviation safety datasets. Experimental results show that the proposed combination is efficient and robust.
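The stepwise decision-making that an HB structure provides can be sketched with plain functions: later stages run only when earlier stages fire. The stage "classifiers" below are hypothetical keyword stand-ins for trained CNN-BiLSTM branches.

```python
# Generic sketch of hierarchical (stepwise) decision-making, the idea
# behind an HB structure: stage 1 decides whether a report contains a
# hazard at all; only then do later stages assign a hazard attribute and
# a risk level. The stage functions are toy keyword stand-ins, NOT the
# paper's trained CNN-BiLSTM branches.

def hierarchical_predict(report, has_hazard, hazard_attr, risk_level):
    if not has_hazard(report):
        return {"hazard": False}  # later branches are never consulted
    return {
        "hazard": True,
        "attribute": hazard_attr(report),
        "risk": risk_level(report),
    }

# toy stand-in classifiers keyed on words in the report text
has_hazard = lambda r: "fault" in r
hazard_attr = lambda r: "mechanical" if "engine" in r else "other"
risk_level = lambda r: "high" if "fire" in r else "low"

print(hierarchical_predict("engine fault with fire",
                           has_hazard, hazard_attr, risk_level))
print(hierarchical_predict("routine flight",
                           has_hazard, hazard_attr, risk_level))
```

The benefit of this layout is that fine-grained heads (attribute, risk level) train and predict only on hazard-positive inputs, matching the hierarchical label structure.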
{"title":"Multi-level information identification for civil aviation safety risks: A hierarchical multi-branch deep learning approach","authors":"Minglan Xiong , Huawei Wang , Zhaoguo Hou , Yiik Diew Wong","doi":"10.1016/j.ins.2025.121888","DOIUrl":"10.1016/j.ins.2025.121888","url":null,"abstract":"<div><div>Deep learning techniques have been widely applied in the study of risk assessment and prediction in civil aviation safety, as they can effectively learn patterns and rules from aviation safety data. However, the task becomes more challenging when addressing the hierarchical structure of aviation safety risk identification. In this context, a hierarchical branching (HB) structure endows risk identification models with stepwise decision-making capabilities. This study proposes a hierarchical multi-branch deep learning approach which integrates Convolutional Neural Networks-Bidirectional Long Short-Term Memory (CNN-BiLSTM) blocks into HB to form the HB-CNN-BiLSTM (HCBL) model for identifying multi-level civil aviation safety risk information. The proposed method simultaneously facilitates safety hazards detection, hazard attribute identification, and risk level assessment, thereby capturing finer-grained risk patterns and relationships. Comparative experiments were conducted on different civil aviation safety datasets. 
Experimental results show that the combination is efficient and robust.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121888"},"PeriodicalIF":8.1,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-28 | DOI: 10.1016/j.ins.2025.121910
Longbin Fu, Liwei An
This article investigates the event-triggered obstacle avoidance control problem for unmanned vehicles subject to velocity constraints. Existing barrier functions rely solely on the distance between the vehicle and the obstacle, which may cause the vehicle to unnecessarily evade non-threatening obstacles that are within the sensing range but are moving away. To mitigate this conservatism, a novel distance-velocity barrier function is designed that incorporates both the relative position and the relative velocity between the vehicle and the obstacle, so that such non-threatening obstacles are not treated as collision risks. Furthermore, the non-differentiability of the barrier function caused by the relative position and relative velocity terms is addressed through a second-order filter. In addition, unlike traditional fixed-threshold, relative-threshold, and switching-threshold triggering mechanisms that depend solely on control signals, we design an event-triggered mechanism based on velocity constraint functions to conserve communication resources; its triggering interval decreases as the velocity increases. Through the Lyapunov method and a boundedness analysis of the barrier functions, it is shown that the protocol achieves obstacle avoidance for the unmanned vehicle without violating the velocity constraints, while excluding Zeno behavior. Numerical simulations are presented to demonstrate the efficacy of the proposed control strategy.
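The core idea, reacting only to obstacles that are both close and closing in, can be sketched with a threat test on relative position and velocity. The threshold and the exact form of the condition are illustrative assumptions, not the paper's barrier function.

```python
# Sketch of the idea behind a distance-velocity condition: treat an
# obstacle as a threat only when it is within sensing range AND closing
# in (negative range rate), so an in-range obstacle that is moving away
# is ignored. Threshold values and the condition's form are illustrative
# assumptions, NOT the paper's exact barrier function.

def is_threat(rel_pos, rel_vel, sense_range=5.0):
    dist = sum(x * x for x in rel_pos) ** 0.5
    # range rate: d/dt |p| = (p . v) / |p|; negative means closing in
    range_rate = sum(p * v for p, v in zip(rel_pos, rel_vel)) / dist
    return dist <= sense_range and range_rate < 0.0

print(is_threat((3.0, 0.0), (-1.0, 0.0)))  # close and closing  -> True
print(is_threat((3.0, 0.0), (2.0, 0.0)))   # close but receding -> False
print(is_threat((9.0, 0.0), (-1.0, 0.0)))  # closing, out of range -> False
```

A distance-only criterion would flag the second case as well, which is precisely the conservatism the proposed barrier function removes.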
{"title":"Adaptive event-triggered obstacle avoidance control for unmanned vehicles via distance-velocity barrier functions","authors":"Longbin Fu, Liwei An","doi":"10.1016/j.ins.2025.121910","DOIUrl":"10.1016/j.ins.2025.121910","url":null,"abstract":"<div><div>This article investigates the event-triggered obstacle avoidance control problem for unmanned vehicles subject to velocity constraints. The existing barrier functions rely solely on the distance between the vehicle and the obstacle, which may cause the vehicle to unnecessarily evade (non-threatening) obstacles that are within the sensing range but are moving away at an increasing distance. To mitigate this conservatism, a novel distance-velocity barrier function is designed, which is introduced with the relative position and relative velocity between the vehicle and the obstacle as one of its conditions, thereby avoiding the vehicle perceiving these non-threatening obstacles. Furthermore, the issue of non-differentiable barrier functions caused by relative position and relative velocity is addressed through a second-order filter. Secondly, unlike the traditional fixed threshold, relative threshold, and switching threshold-triggered mechanisms that solely depend on control signals, we design an event-triggered mechanism based on velocity constraint functions to conserve communication resources, and its triggering interval decreases as the velocity increases. Through the Lyapunov method and boundedness analysis for the barrier functions, it is shown that the protocol achieves obstacle avoidance for the unmanned vehicle without violating the velocity constraints, while excluding the Zeno behavior. 
Numerical simulations are presented to demonstrate the efficacy of the proposed control strategy.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121910"},"PeriodicalIF":8.1,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143156644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-28 | DOI: 10.1016/j.ins.2025.121911
Anh Tuan Vo, Thanh Nguyen Truong, Hee-Jun Kang, Ngoc Hoai An Nguyen
This paper introduces a novel prescribed performance model-free controller tailored for industrial robot arms, seamlessly integrating adaptive sliding mode control (ASMC) and time-delay estimation (TDE). Leveraging TDE, our controller adeptly estimates both the inherent dynamics of the robot and unstructured uncertainties such as disturbances and parameter variations. However, TDE, which relies on past angular acceleration and input torque, inevitably introduces errors. To mitigate these, our approach compensates for current TDE errors using past error information. Additionally, we introduce a fixed-time sliding mode surface from prescribed performance control and an auxiliary system to improve performance under input saturation. Moreover, we propose an adaptive law to ensure the positivity of the adaptive parameter by considering the current adaptive parameter value and the sampling period. Through extensive simulated studies conducted on industrial robot arms, we demonstrate the effectiveness of our control approach, showcasing robustness, reduced chattering, and high accuracy across diverse scenarios.
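The TDE principle, estimating the lumped unknown dynamics from one-sample-delayed torque and acceleration, can be shown on a scalar joint. The toy plant, gains, and the PD outer loop below are illustrative assumptions, not the paper's robot model or controller.

```python
import math

# One-step time-delay estimation (TDE) sketch for a scalar joint:
# the lumped unknown dynamics h are estimated from delayed signals as
#   h_hat(t) = tau(t - L) - m_bar * a(t - L),
# where L is one sampling period. The toy plant, gains, and PD loop are
# illustrative assumptions, NOT the paper's robot model or controller.

m_bar = 1.0   # constant nominal inertia used by TDE
dt = 0.001    # sampling period (the delay L)

def true_h(q, dq):
    # unknown dynamics: gravity-like term plus viscous friction
    return 2.0 * math.sin(q) + 0.5 * dq

# simulate the plant  m_bar * ddq = tau - h(q, dq)
q, dq = 0.3, 0.0
tau_prev, ddq_prev = 0.0, 0.0
errors = []
for _ in range(2000):
    h_hat = tau_prev - m_bar * ddq_prev      # TDE from delayed signals
    tau = h_hat - 5.0 * dq - 10.0 * q        # cancel dynamics + PD to origin
    ddq = (tau - true_h(q, dq)) / m_bar
    errors.append(abs(h_hat - true_h(q, dq)))
    dq += ddq * dt                           # Euler integration
    q += dq * dt
    tau_prev, ddq_prev = tau, ddq

print(errors[0], errors[-1])  # TDE error shrinks as the state settles
```

The residual `errors` are exactly the TDE errors the abstract discusses: they equal the one-sample change in the unknown dynamics, which is why compensating them with past error information pays off.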
{"title":"Prescribed performance model-free sliding mode control using time-delay estimation and adaptive technique applied to industrial robot arms","authors":"Anh Tuan Vo , Thanh Nguyen Truong , Hee-Jun Kang , Ngoc Hoai An Nguyen","doi":"10.1016/j.ins.2025.121911","DOIUrl":"10.1016/j.ins.2025.121911","url":null,"abstract":"<div><div>This paper introduces a novel prescribed performance model-free controller tailored for industrial robot arms, seamlessly integrating adaptive sliding mode control (ASMC) and time-delay estimation (TDE). Leveraging TDE, our controller adeptly estimates both the inherent dynamics of the robot and unstructured uncertainties such as disturbances and parameter variations. However, TDE, which relies on past angular acceleration and input torque, inevitably introduces errors. To mitigate these, our approach compensates for current TDE errors using past error information. Additionally, we introduce a fixed-time sliding mode surface from prescribed performance control and an auxiliary system to improve performance under input saturation. Moreover, we propose an adaptive law to ensure the positivity of the adaptive parameter by considering the current adaptive parameter value and the sampling period. 
Through extensive simulated studies conducted on industrial robot arms, we demonstrate the effectiveness of our control approach, showcasing robustness, reduced chattering, and high accuracy across diverse scenarios.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121911"},"PeriodicalIF":8.1,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-28 | DOI: 10.1016/j.ins.2025.121907
Chunwei Zhang, Changdong Du, Rathinasamy Sakthivel, Ardashir Mohammadzadeh
Gyroscopes are widely used in navigation systems of ships and vehicles, stabilizing systems for devices such as cameras and drones, robotic control systems, and orientation systems of spacecraft and satellites. However, their control is challenging due to disturbances such as vibrations, temperature changes, and electromagnetic interference, the need for integration with other sensors and control systems, and complex dynamics. Maintaining precise control, especially in applications that require high levels of accuracy, is difficult and requires advanced systems. In this paper, a new applied hybrid fuzzy controller is introduced for gyroscopes. First, a linear quadratic Gaussian (LQG) controller is designed; then the error dynamics are modeled using type-3 fuzzy logic systems (T3-FLSs), and a nonlinear supervisory controller is developed. The T3-FLSs model the dynamics of the gyroscope and are updated online through stability-derived tuning rules. The effects of tuning errors are considered and analyzed in the suggested stability theorem. The upper bounds of the tuning errors are approximated by online adaptation laws, and their effects on stability and tracking performance are eliminated by the suggested adaptive compensators. The efficiency of the designed controller is verified by experimental studies and several simulations. In various scenarios, the accuracy and stability of the control scheme are examined, and its feasibility is demonstrated (see a video of the experimental study at https://youtu.be/d40-2tTPF2k?si=WVHxcHSUAYA064el).
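The first design step above is a linear quadratic controller. A minimal sketch of the LQ part: computing the optimal state-feedback gain for a scalar discrete-time plant by iterating the Riccati difference equation to its fixed point. The plant numbers are illustrative, not a gyroscope model.

```python
# Minimal linear-quadratic sketch: scalar discrete-time LQR gain via
# fixed-point iteration of the Riccati difference equation. The plant
# numbers are illustrative placeholders, NOT a gyroscope model, and this
# is only the LQ building block of the hybrid scheme described above.

def scalar_dlqr(a, b, q, r, iters=1000):
    """Return the optimal feedback gain k for x+ = a*x + b*u,
    minimizing sum(q*x^2 + r*u^2), with control law u = -k*x."""
    p = q
    for _ in range(iters):
        k = (b * p * a) / (r + b * p * b)  # optimal gain for current p
        p = q + a * p * a - a * p * b * k  # Riccati update
    return k

a, b = 1.1, 0.5   # open-loop unstable scalar plant (|a| > 1)
k = scalar_dlqr(a, b, q=1.0, r=1.0)
print(round(k, 4))
print(abs(a - b * k) < 1.0)  # closed loop a - b*k is stable -> True
```

In the paper's scheme this linear design is then wrapped with T3-FLS modeling of the residual error dynamics; the sketch only covers the first, linear step.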
{"title":"Observer-based type-3 fuzzy control for gyroscopes: Experimental/theoretical study","authors":"Chunwei Zhang , Changdong Du , Rathinasamy Sakthivel , Ardashir Mohammadzadeh","doi":"10.1016/j.ins.2025.121907","DOIUrl":"10.1016/j.ins.2025.121907","url":null,"abstract":"<div><div>Gyroscopes are widely used in navigation systems of ships and vehicles, stabilizing systems for devices such as cameras and drones, robotic control systems, and orientation systems of spacecraft and satellites. However, their control systems are challenging due to disturbances such as vibrations, temperature changes, electromagnetic interference, integration with other sensors and control systems, and complex dynamics. Maintaining precise control, especially in applications that require high levels of accuracy, is challenging and requires advanced systems. In this paper, a new applied hybrid fuzzy controller is introduced for gyroscopes. First a linear quadratic (LQG) controller is designed, and then the error dynamics are modeled using T3-FLSs, and a nonlinear supervisor controller is developed. The type-3 (T3) fuzzy logic systems (FLSs) are used to model the dynamics of the gyroscope, and they are online updated through the stability tuning rules. The effects of tuning errors are considered and analyzed in the suggested stability theorem. The upper bounds of tuning errors are approximated by online adaptation laws and their effects on stability and tracking efficiency are eliminated by suggested adaptive compensators. The efficiency of the designed applied controller is verified by experimental studies and several simulations. 
In various scenarios, the accuracy and stability of the control scheme are examined, and its feasibility is demonstrated (see a video of the experimental study at https://youtu.be/d40-2tTPF2k?si=WVHxcHSUAYA064el).</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121907"},"PeriodicalIF":8.1,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100043","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
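The hybrid structure described in the abstract above — a nominal linear-quadratic feedback term augmented by an adaptive compensator bounded by the estimated tuning error — can be sketched as follows. The discrete double-integrator plant, the weights, and the simple sign-type compensator are illustrative assumptions, not the paper's identified gyroscope model or its type-3 fuzzy adaptation laws:

```python
import numpy as np

# Illustrative discrete-time plant (a double integrator with 10 ms sampling),
# standing in for the linearized gyroscope dynamics; NOT the paper's model.
A = np.array([[1.0, 0.01],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.01]])
Q = np.eye(2)           # state weight
R = np.array([[0.1]])   # input weight

# Fixed-point iteration of the discrete Riccati recursion for the LQ gain K.
P = np.eye(2)
for _ in range(1000):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ K)

def control(x, err_bound):
    """Nominal LQ feedback plus a sign-type robust compensator whose
    magnitude is the (adaptively estimated) bound on the modeling error."""
    u_lq = -(K @ x).item()
    u_comp = -err_bound * np.sign(x[0])   # supervisory compensation term
    return u_lq + u_comp
```

The Riccati fixed point stands in for the LQG design step; in the paper the compensation magnitude would come from the online adaptation law and the T3-FLS error model rather than a fixed bound.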
Pub Date : 2025-01-28DOI: 10.1016/j.ins.2025.121913
Lijia Ma , Long Xu , Xiaoqing Fan , Lingjie Li , Qiuzhen Lin , Jianqiang Li , Maoguo Gong
The node combinatorial optimization (NCO) tasks in complex networks aim to activate a set of influential nodes that maximally affect network performance under certain influence models, including influence maximization, robustness optimization, minimum node coverage, minimum dominating set, and maximum independent set; these tasks are usually nondeterministic polynomial (NP)-hard. Existing works mainly solve these tasks separately, and none of them can effectively solve all tasks, owing to the differences in their influence models and their NP-hard nature. To tackle this issue, in this article, we first theoretically demonstrate the similarity among these NCO tasks and model them as a multitask NCO problem. Then, we transform this multitask NCO problem into the weight optimization of a multi-head deep Q network (multi-head DQN), which models the activation of influential nodes and uses shared hidden layers and unshared output layers to capture the similarity and the differences among tasks, respectively. Finally, we propose a Multifactorial Evolutionary Deep Reinforcement Learning (MF-EDRL) method for solving the multitask NCO problem under the multi-head DQN optimization framework, which promotes implicit knowledge transfer between similar tasks. Extensive experiments on both benchmark and real-world networks show the clear advantages of the proposed MF-EDRL over the state-of-the-art in tackling all NCO tasks. Most notably, the results also reflect the effectiveness of information transfer between tasks in accelerating optimization and improving performance.
{"title":"Multifactorial evolutionary deep reinforcement learning for multitask node combinatorial optimization in complex networks","authors":"Lijia Ma , Long Xu , Xiaoqing Fan , Lingjie Li , Qiuzhen Lin , Jianqiang Li , Maoguo Gong","doi":"10.1016/j.ins.2025.121913","DOIUrl":"10.1016/j.ins.2025.121913","url":null,"abstract":"<div><div>The node combinatorial optimization (NCO) tasks in complex networks aim to activate a set of influential nodes that maximally affect network performance under certain influence models, including influence maximization, robustness optimization, minimum node coverage, minimum dominating set, and maximum independent set; these tasks are usually nondeterministic polynomial (NP)-hard. Existing works mainly solve these tasks separately, and none of them can effectively solve all tasks, owing to the differences in their influence models and their NP-hard nature. To tackle this issue, in this article, we first theoretically demonstrate the similarity among these NCO tasks and model them as a multitask NCO problem. Then, we transform this multitask NCO problem into the weight optimization of a multi-head deep Q network (multi-head DQN), which models the activation of influential nodes and uses shared hidden layers and unshared output layers to capture the similarity and the differences among tasks, respectively. Finally, we propose a Multifactorial Evolutionary Deep Reinforcement Learning (MF-EDRL) method for solving the multitask NCO problem under the multi-head DQN optimization framework, which promotes implicit knowledge transfer between similar tasks. Extensive experiments on both benchmark and real-world networks show the clear advantages of the proposed MF-EDRL over the state-of-the-art in tackling all NCO tasks. 
Most notably, the results also reflect the effectiveness of information transfer between tasks in accelerating optimization and improving performance.</div></div>","PeriodicalId":51063,"journal":{"name":"Information Sciences","volume":"702 ","pages":"Article 121913"},"PeriodicalIF":8.1,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143100206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
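The shared-layers/unshared-heads split described in the abstract above can be illustrated with a minimal forward pass. All dimensions, the random weights, and the three example tasks are hypothetical, and the multifactorial evolutionary weight optimization of MF-EDRL is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: node-state vector, shared embedding, number of NCO
# tasks (e.g. influence maximization, robustness optimization, minimum
# node coverage), and candidate nodes (actions) per step.
STATE_DIM, HIDDEN, N_TASKS, N_ACTIONS = 8, 16, 3, 5

# Shared hidden layer: captures what the tasks have in common.
W_shared = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
# One unshared output layer per task: captures task-specific differences.
W_heads = [rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))
           for _ in range(N_TASKS)]

def q_values(state, task_id):
    """Q-values for one task: shared trunk followed by its own head."""
    h = np.tanh(state @ W_shared)   # shared representation
    return h @ W_heads[task_id]     # task-specific output layer

state = rng.normal(size=STATE_DIM)
# Greedy node activation per task: argmax over each task's Q-values.
choices = [int(np.argmax(q_values(state, t))) for t in range(N_TASKS)]
```

In the paper the weights of this network are evolved multifactorially rather than trained by gradient descent, so that updates benefiting one task's head can transfer through the shared layers to the others.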