Cloud storage tier optimization through storage object classification
Pub Date: 2024-04-03 | DOI: 10.1007/s00607-024-01281-2
Abstract
Cloud storage adoption has increased over the years given the high demand for fast processing, low access latency, and the ever-increasing amount of data generated by, e.g., Internet of Things applications. To meet users' demands and provide a cost-effective solution, cloud service providers offer tiered storage; however, keeping all data in a single tier is not cost-effective. Cloud storage tier optimization involves aligning data storage needs with the most suitable and cost-effective storage tier, thus reducing costs while ensuring data availability and meeting performance requirements. Ideally, this process considers the trade-off between performance and cost, as different storage tiers offer different levels of performance and durability. It also encompasses data lifecycle management, where data is automatically moved between tiers based on access patterns, which in turn impacts the storage cost. To this end, this article explores two novel classification approaches, rule-based and game theory-based, to optimize cloud storage cost by reassigning data between different storage tiers. Four distinct storage tiers are considered: premium, hot, cold, and archive. The viability and potential of the proposed approaches are demonstrated by comparing cost savings and analyzing the computational cost using both fully-synthetic and semi-synthetic datasets with static and dynamic access patterns. The results indicate that the proposed approaches have the potential to significantly reduce cloud storage cost while being computationally feasible for practical applications. Both approaches are lightweight and industry- and platform-independent.
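As a concrete illustration of the rule-based flavor of tier assignment, the following minimal Python sketch maps an object's recent access frequency to one of the four tiers named in the abstract. The `StorageObject` fields and the thresholds are illustrative assumptions, not the article's actual rules.

```python
from dataclasses import dataclass

@dataclass
class StorageObject:
    size_gb: float
    reads_last_30d: int          # recent access count drives the rule

def assign_tier(obj: StorageObject) -> str:
    """Toy rule ladder over the four tiers considered in the article."""
    if obj.reads_last_30d >= 1000:
        return "premium"         # latency-critical, very frequent access
    if obj.reads_last_30d >= 30:
        return "hot"             # accessed roughly daily
    if obj.reads_last_30d >= 1:
        return "cold"            # occasional access
    return "archive"             # dormant data

print(assign_tier(StorageObject(size_gb=120.0, reads_last_30d=4)))  # -> cold
```

Re-running such a rule periodically over access logs is what turns the classifier into a lifecycle-management policy: objects migrate between tiers as their access pattern changes.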
{"title":"Cloud storage tier optimization through storage object classification","authors":"","doi":"10.1007/s00607-024-01281-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01281-2","url":null,"abstract":"<h3>Abstract</h3> <p>Cloud storage adoption has increased over the years given the high demand for fast processing, low access latency, and ever-increasing amount of data being generated by, e.g., Internet of Things applications. In order to meet the users’ demands and provide a cost-effective solution, cloud service providers offer tiered storage; however, keeping the data in one tier is not cost-effective. In this respect, cloud storage tier optimization involves aligning data storage needs with the most suitable and cost-effective storage tier, thus reducing costs while ensuring data availability and meeting performance requirements. Ideally, this process considers the trade-off between performance and cost, as different storage tiers offer different levels of performance and durability. It also encompasses data lifecycle management, where data is automatically moved between tiers based on access patterns, which in turn impacts the storage cost. In this respect, this article explores two novel classification approaches, rule-based and game theory-based, to optimize cloud storage cost by reassigning data between different storage tiers. Four distinct storage tiers are considered: premium, hot, cold, and archive. The viability and potential of the proposed approaches are demonstrated by comparing cost savings and analyzing the computational cost using both fully-synthetic and semi-synthetic datasets with static and dynamic access patterns. The results indicate that the proposed approaches have the potential to significantly reduce cloud storage cost, while being computationally feasible for practical applications. Both approaches are lightweight and industry- and platform-independent.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"41 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140560930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Unraveling human social behavior motivations via inverse reinforcement learning-based link prediction
Pub Date: 2024-04-02 | DOI: 10.1007/s00607-024-01279-w
Xin Jiang, Hongbo Liu, Liping Yang, Bo Zhang, Tomas E. Ward, Václav Snášel
Link prediction aims to capture the evolution of network structure, especially in real social networks, which is conducive to friend recommendation, human contact trajectory simulation, and more. However, the stochasticity of social behaviors and the unstable spatio-temporal distribution in such networks often lead to unexplainable and inaccurate link predictions. Therefore, taking inspiration from the success of imitation learning in simulating human driver behavior, we propose a dynamic network link prediction method based on inverse reinforcement learning (DN-IRL) to unravel the motivations behind social behaviors in social networks. Specifically, the historical social behaviors (link sequences) and the next behavior (a single link) are regarded as the current environmental state and the action taken by the agent, respectively. Subsequently, the reward function, which is designed to maximize the cumulative expected reward from expert behaviors in the raw data, is optimized and used to learn the agent's social policy. Furthermore, our approach incorporates neighborhood-structure-based node embedding and self-attention modules, enabling sensitivity to network structure and traceability of predicted links. Experimental results on real-world dynamic social networks demonstrate that DN-IRL achieves more accurate and explainable predictions than the baselines.
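The reward-learning step can be pictured with a toy maximum-entropy-style feature-matching update: fit a linear reward over edge features so that a softmax policy over candidate links reproduces the average features of the expert (observed) links. This is a generic IRL sketch under assumed inputs (`expert_phi`, `candidate_phi` feature matrices), not the paper's DN-IRL architecture with node embeddings and self-attention.

```python
import numpy as np

def learn_link_reward(expert_phi: np.ndarray, candidate_phi: np.ndarray,
                      lr: float = 0.05, epochs: int = 200) -> np.ndarray:
    """Fit a linear reward w so that a softmax policy over candidate links
    matches the expert links' average features (feature matching)."""
    w = np.zeros(expert_phi.shape[1])
    mu_expert = expert_phi.mean(axis=0)          # expert feature expectation
    for _ in range(epochs):
        scores = candidate_phi @ w
        p = np.exp(scores - scores.max())        # softmax policy over candidates
        p /= p.sum()
        mu_policy = p @ candidate_phi            # policy feature expectation
        w += lr * (mu_expert - mu_policy)        # move toward the expert
    return w

rng = np.random.default_rng(0)
candidates = rng.normal(size=(100, 8))           # toy edge-feature vectors
expert = candidates[:10] + 0.5                   # "observed" links, shifted features
w = learn_link_reward(expert, candidates)
print("top-ranked candidate link:", np.argmax(candidates @ w))
```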
{"title":"Unraveling human social behavior motivations via inverse reinforcement learning-based link prediction","authors":"Xin Jiang, Hongbo Liu, Liping Yang, Bo Zhang, Tomas E. Ward, Václav Snášel","doi":"10.1007/s00607-024-01279-w","DOIUrl":"https://doi.org/10.1007/s00607-024-01279-w","url":null,"abstract":"<p>Link prediction aims to capture the evolution of network structure, especially in real social networks, which is conducive to friend recommendations, human contact trajectory simulation, and more. However, the challenge of the stochastic social behaviors and the unstable space-time distribution in such networks often leads to unexplainable and inaccurate link predictions. Therefore, taking inspiration from the success of imitation learning in simulating human driver behavior, we propose a dynamic network link prediction method based on inverse reinforcement learning (DN-IRL) to unravel the motivations behind social behaviors in social networks. Specifically, the historical social behaviors (link sequences) and a next behavior (a single link) are regarded as the current environmental state and the action taken by the agent, respectively. Subsequently, the reward function, which is designed to maximize the cumulative expected reward from expert behaviors in the raw data, is optimized and utilized to learn the agent’s social policy. Furthermore, our approach incorporates the neighborhood structure based node embedding and the self-attention modules, enabling sensitivity to network structure and traceability to predicted links. Experimental results on real-world dynamic social networks demonstrate that DN-IRL achieves more accurate and explainable of prediction compared to the baselines.\u0000</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"1 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140560935","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Person re-identification method based on fine-grained feature fusion and self-attention mechanism
Pub Date: 2024-03-25 | DOI: 10.1007/s00607-024-01270-5
Kangning Yin, Zhen Ding, Zhihua Dong, Xinhui Ji, Zhipei Wang, Dongsheng Chen, Ye Li, Guangqiang Yin, Zhiguo Wang
To address the low accuracy of person re-identification (Re-ID) algorithms caused by occlusion, low distinctiveness of person features, and unclear detail features in complex environments, we propose a Re-ID method based on fine-grained feature fusion and a self-attention mechanism. First, we design a dilated non-local module (DNLM), which combines dilated convolution with the non-local module and is embedded between layers of the backbone network, enlarging the model's self-attention receptive field and improving performance under occlusion. Second, the fine-grained feature fusion screening module (3FSM) is built on the outlook attention module; it realizes adaptive feature selection and strengthens the model's ability to distinguish similar samples. Finally, drawing on the feature pyramid from object detection, we propose a multi-scale feature fusion pyramid (MFFP) for Re-ID, in which features from different levels are used for feature enhancement. Ablation and comprehensive experiments on multiple datasets validate the effectiveness of our proposal. The mean Average Precision (mAP) on Market1501 and DukeMTMC-reID is 92.5% and 87.7%, and Rank-1 is 95.1% and 91.1%, respectively. Compared with current mainstream Re-ID algorithms, our method achieves excellent Re-ID performance.
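One plausible reading of "dilated convolution combined with the non-local module" is sketched below in PyTorch: the theta/phi embeddings of a standard non-local block are computed with dilated 3x3 convolutions so the attention operates on features with a wider receptive field. The exact design and placement in the paper may differ; this is an assumption-laden sketch, not the authors' DNLM.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DilatedNonLocal(nn.Module):
    """Non-local block whose embeddings use dilated 3x3 convs (sketch)."""
    def __init__(self, channels: int, dilation: int = 2):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 3, padding=dilation, dilation=dilation)
        self.phi = nn.Conv2d(channels, inter, 3, padding=dilation, dilation=dilation)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        t = self.theta(x).flatten(2).transpose(1, 2)   # B x HW x C'
        p = self.phi(x).flatten(2)                     # B x C' x HW
        attn = F.softmax(t @ p, dim=-1)                # B x HW x HW affinities
        g = self.g(x).flatten(2).transpose(1, 2)       # B x HW x C'
        y = (attn @ g).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)                         # residual connection

x = torch.randn(2, 64, 32, 16)                         # B x C x H x W feature map
print(DilatedNonLocal(64)(x).shape)                    # torch.Size([2, 64, 32, 16])
```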
{"title":"Person re-identification method based on fine-grained feature fusion and self-attention mechanism","authors":"Kangning Yin, Zhen Ding, Zhihua Dong, Xinhui Ji, Zhipei Wang, Dongsheng Chen, Ye Li, Guangqiang Yin, Zhiguo Wang","doi":"10.1007/s00607-024-01270-5","DOIUrl":"https://doi.org/10.1007/s00607-024-01270-5","url":null,"abstract":"<p>Aiming at the problem of low accuracy of person re-identification (Re-ID) algorithm caused by occlusion, low distinctiveness of person features and unclear detail features in complex environment, we propose a Re-ID method based on fine-grained feature fusion and self-attention mechanism. First, we design a dilated non-local module (DNLM), which combines dilated convolution with the non-local module and embeds it between layers of the backbone network, enhancing the self-attention and receptive field of the model and improving the performance on occlusion tasks. Second, the fine-grained feature fusion screening module (3FSM) is improved based on the outlook attention module, which can realize adaptive feature selection and enhance the recognition ability to similar samples of the model. Finally, combined with the feature pyramid in the field of object detection, we propose a multi-scale feature fusion pyramid (MFFP) to improve the Re-ID tasks, in which we use different levels of features to perform feature enhancement. Ablation and comprehensive experiment results based on multiple datasets validate the effectiveness of our proposal. The mean Average Precision (mAP) of Market1501 and DukeMTMC-reID is 92.5 and 87.7%, and Rank-1 is 95.1 and 91.1% respectively. Compared with the current mainstream Re-ID algorithm, our method has excellent Re-ID performance.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"30 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140300878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matyas–Meyer Oseas based device profiling for anomaly detection via deep reinforcement learning (MMODPAD-DRL) in zero trust security network
Pub Date: 2024-03-23 | DOI: 10.1007/s00607-024-01269-y
Rajesh Kumar Dhanaraj, Anamika Singh, Anand Nayyar
Zero trust security in the Industrial Internet of Things (IIoT) has grown in importance in an era where there is a high risk of malicious entities being injected and devices being taken over by unauthorized users. The gap in existing zero trust approaches is that continuous verification of devices is time-consuming and adversely affects the promise of the zero-trust model: every time a node enters the network, even if it is already a member, it must be authorized to ensure authenticity. This verification step hinders the seamless operation of the IIoT infrastructure. Therefore, the main objective of this paper is to address this problem by enabling "device profiling" via deep reinforcement learning, so that the same device can be identified and granted access without hindering the operation of the IIoT infrastructure. The proposed approach works in several phases: a compression function ensures data confidentiality and integrity, device profiling is then performed based on the features a device possesses, and finally deep reinforcement learning is applied for anomaly detection. To test and validate the approach, extensive experiments were performed using measures such as false positive rate, data confidentiality rate, data integrity rate, and network access time. The results show that the proposed technique, "MMODPAD-DRL", outperforms existing approaches by 27% in false positive rate, 4% in data confidentiality rate, and 3% in data integrity rate, and reduces network access time by 20%.
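The compression function named in the title is the classic Matyas–Meyer–Oseas (MMO) construction, which turns a block cipher into a one-way compression function via H_i = E_{H_{i-1}}(m_i) XOR m_i. A minimal sketch using AES-128 from pycryptodome follows; the zero IV, zero-padding, and identity key-mapping are simplifying assumptions, and the paper's exact instantiation may differ.

```python
from Crypto.Cipher import AES  # pycryptodome

def mmo_compress(h_prev: bytes, block: bytes) -> bytes:
    """One MMO step: H_i = E_{H_{i-1}}(m_i) XOR m_i (previous hash keys the cipher)."""
    cipher = AES.new(h_prev, AES.MODE_ECB)   # key-mapping g() taken as identity here
    enc = cipher.encrypt(block)
    return bytes(a ^ b for a, b in zip(enc, block))

def mmo_hash(data: bytes, iv: bytes = b"\x00" * 16) -> bytes:
    """Chain the compression function over 16-byte blocks of the input."""
    if len(data) % 16:                        # simple zero-padding (illustrative only)
        data += b"\x00" * (16 - len(data) % 16)
    h = iv
    for i in range(0, len(data), 16):
        h = mmo_compress(h, data[i:i + 16])
    return h

print(mmo_hash(b"device-feature-vector").hex())  # fingerprint of a device profile
```

Because the cipher is keyed by the running hash rather than a secret, the output serves as a tamper-evident fingerprint of the profiled device features rather than an encryption of them.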
{"title":"Matyas–Meyer Oseas based device profiling for anomaly detection via deep reinforcement learning (MMODPAD-DRL) in zero trust security network","authors":"Rajesh Kumar Dhanaraj, Anamika Singh, Anand Nayyar","doi":"10.1007/s00607-024-01269-y","DOIUrl":"https://doi.org/10.1007/s00607-024-01269-y","url":null,"abstract":"<p>The exposure of zero trust security in the Industrial Internet of Things (IIoT) increased in importance in the era where there is a huge risk of injection of malicious entities and owning the device by an unauthorized user. The gap in the existing approach of zero trust security is that continuous verification of devices is a time-consuming process and adversely affects the promising nature of the zero-trust model. Every time the node enters, even if the node is a member of the network, authorization of the node is necessary to ensure authenticity. This verification section of zero trust hinders the seamless working of the IIoT infrastructure. Therefore, the main objective of this paper is to propose the solution for the above-mentioned problem by enabling “device profiling” via deep reinforcement learning so that the same device can be identified and permitted access without hindering the working of Industrial Internet of Things infrastructure. The overall proposed approach works in different phases including the compression function for ensuring data confidentiality and integrity, then the device profiling is performed based on the features a device possesses, and lastly, deep reinforcement learning for anomaly detection. To test and validate the proposed approach, extensive experimentations were performed using measures such as false positive rate, data confidentiality rate, data integrity rate, and network access time, and results showed that the proposed technique titled “MMODPAD-DRL” outperforms the existing approaches in false positive rate by 27%, data confidentiality rate by 4% and data integrity rate by 3%, in addition, lessen the network access time by 20%.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"31 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Categorical learning for automated network traffic categorization for future generation networks in SDN
Pub Date: 2024-03-23 | DOI: 10.1007/s00607-024-01277-y
Suguna Paramasivam, R. Leela Velusamy, J. V. Nishaanth
Network traffic classification is a fundamental and intricate component of network management in the modern, high-tech era of 5G architectural design, resource planning, and other areas. Traffic classification is a key responsibility of traffic engineering in SDN. SDN is a network programmability technology used in 5G networks that separates the control plane from the data plane; it also points the way toward autonomous and dynamic network control. SDN needs data from the classification system's flow statistics to apply the appropriate network flow policies. To manage the volume of heterogeneous network traffic data in 5G network services, the network administrator must deploy a carefully supervised traffic analysis system. This study uses machine learning techniques to examine alternative ways of handling heterogeneous network traffic. The suggested approach, CatBoosting for Automated Network Traffic Classification for multiclass (Cat-ANTC), is an ensemble learning method for automated traffic categorization that offers higher prediction accuracy than individual models and a more regularized model formulation that decreases over-fitting and boosts efficiency. Cat-ANTC is evaluated on openly accessible benchmark network traffic datasets and contrasted with current classifiers and optimization methods. Compared with the ensemble techniques currently in use, the suggested methodology produces promising outcomes; additionally, the proposed method is tested and shown to outperform the current model in traffic flow classification.
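Since Cat-ANTC builds on CatBoost, a minimal multiclass CatBoost baseline looks like the sketch below. The synthetic features stand in for real flow statistics (packet counts, durations, byte rates), and the hyperparameters are illustrative rather than the paper's tuned values.

```python
from catboost import CatBoostClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic stand-in for flow-statistics features and traffic-class labels
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=4, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=42)

model = CatBoostClassifier(loss_function="MultiClass", iterations=300,
                           depth=6, learning_rate=0.1, verbose=0)
model.fit(X_train, y_train, eval_set=(X_test, y_test))
print("test accuracy:", model.score(X_test, y_test))
```

The built-in regularization of gradient-boosted trees is what the abstract credits for reduced over-fitting relative to a single unregularized classifier.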
{"title":"Categorical learning for automated network traffic categorization for future generation networks in SDN","authors":"Suguna Paramasivam, R. Leela Velusamy, J. V. Nishaanth","doi":"10.1007/s00607-024-01277-y","DOIUrl":"https://doi.org/10.1007/s00607-024-01277-y","url":null,"abstract":"<p>Network traffic classification is a fundamental and intricate component of network management in the modern, high-tech era of 5G architectural design, planning of resources, and other areas. Investigation of traffic classification is a key responsibility of traffic engineering in SDN. SDN is a network programmability technology used in 5G networks that divides the control plane from the data plane. It also points the way for autonomous and dynamic network control. SDN needs data from the classification system’s flow statistics to apply the appropriate network flow policies. To control the volume of heterogeneous network traffic data in 5G network service, the network administrator must implement a carefully supervised traffic investigation system. This study uses machine learning techniques to examine alternative ways of handling heterogeneous network traffic. The suggested approach is Ensemble Learning for Automated Network Traffic Categorization. i.e., CatBoosting for Automated network traffic classification for multiclass (Cat-ANTC) predicts traffic categorization and offers a higher prediction accuracy than individual models and a more regularized model formalization to decrease over-fitting and boost efficiency. The Cat-ANTC is evaluated using benchmark network traffic datasets that are openly accessible and contrasted with current classifiers and optimization methods. It is clear that when compared to the currently used ensemble techniques, the suggested ensemble methodology produces promising outcomes. Additionally, the proposed method is tested and shown to perform better than the classification of traffic flow using the current model.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"53 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199339","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Influence maximization in mobile social networks based on RWP-CELF
Pub Date: 2024-03-21 | DOI: 10.1007/s00607-024-01276-z
Abstract
The influence maximization (IM) problem for message propagation is an important topic in mobile social networks. The success of the spreading process depends on the mechanism for selecting influential users. Besides the selection of influential users, the computation and running time of this mechanism must be considered to ensure accuracy and efficiency. In this paper, considering that the overhead of exact computation varies nonlinearly with fluctuations in data size, a randomized algorithm with smoother complexity growth is designed to solve the IM problem in combination with a greedy algorithm. First, we propose a two-hop neighbor network influence estimator to evaluate the influence of all nodes in the two-hop neighbor network. Then, we develop a novel greedy algorithm, random walk probability cost-effective with lazy-forward (RWP-CELF), by combining cost-effective lazy-forward (CELF) with the randomized algorithm; it uses 25–50 orders of magnitude less time than state-of-the-art algorithms. We compare the influence spread of RWP-CELF on real datasets with a theoretically proven algorithm that is guaranteed to be approximately optimal. Experiments show that the spread achieved by RWP-CELF is comparable to that algorithm, while its running time is far lower.
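For reference, the lazy-forward idea that CELF (and hence RWP-CELF) builds on exploits submodularity: a node's marginal gain can only shrink as the seed set grows, so stale gains kept in a max-heap need re-evaluation only when they surface. A generic CELF sketch follows, with `spread` standing in for any influence estimator (the paper's two-hop estimator would slot in here); the toy coverage function is an assumption for demonstration.

```python
import heapq, random

def celf(nodes, spread, k):
    """Lazy-forward greedy: pop the best stored gain; if it was computed
    for the current seed-set size it is exact (submodularity), else refresh."""
    gains = [(-spread({v}), v, 0) for v in nodes]
    heapq.heapify(gains)
    seeds, best = set(), 0.0
    while len(seeds) < k and gains:
        neg_gain, v, last = heapq.heappop(gains)
        if last == len(seeds):                   # gain still valid: select v
            seeds.add(v)
            best += -neg_gain
        else:                                    # stale: recompute marginal gain
            g = spread(seeds | {v}) - best
            heapq.heappush(gains, (-g, v, len(seeds)))
    return seeds

# Toy submodular spread: neighborhood coverage on a random adjacency map
adj = {v: set(random.Random(v).sample(range(50), 5)) for v in range(50)}
cover = lambda S: len(set().union(*(adj[v] | {v} for v in S))) if S else 0
print(celf(list(adj), cover, k=5))
```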
{"title":"Influence maximization in mobile social networks based on RWP-CELF","authors":"","doi":"10.1007/s00607-024-01276-z","DOIUrl":"https://doi.org/10.1007/s00607-024-01276-z","url":null,"abstract":"<h3>Abstract</h3> <p>Influence maximization (IM) problem for messages propagation is an important topic in mobile social networks. The success of the spreading process depends on the mechanism for selection of the influential user. Beside selection of influential users, the computation and running time should be considered in this mechanism to ensure the accurecy and efficient. In this paper, considering that the overhead of exact computation varies nonlinearly with fluctuations in data size, random algorithm with smoother complexity change was designed to solve the IM problem in combination with greedy algorithm. Firstly, we proposed a method named two-hop neighbor network influence estimator to evaluate the influence of all nodes in the two-hop neighbor network. Then, we developed a novel greedy algorithm, the random walk probability cost-effective with lazy-forward (RWP-CELF) algorithm by modifying cost-effective with lazy-forward (CELF) with random algorithm, which uses 25–50 orders of magnitude less time than the state-of-the-art algorithms. We compared the influence spread effect of RWP-CELF on real datasets with a theoretically proven algorithm that is guaranteed to be approximately optimal. Experiments show that the spread effect of RWP-CELF is comparable to this algorithm, and the running time is much lower than this algorithm.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"19 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140199340","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing information freshness in multi-class mobile edge computing systems using a hybrid discipline
Pub Date: 2024-03-19 | DOI: 10.1007/s00607-024-01278-x
Tamer E. Fahim, Sherif I. Rabia, Ahmed H. Abd El-Malek, Waheed K. Zahra
Timely status updating in mobile edge computing (MEC) systems has recently gained the utmost interest in Internet of Things (IoT) networks, where status updates may require substantial computation to be interpreted. Moreover, in real-life situations, the status update streams may belong to different priority classes according to their importance and timeliness constraints. The classical disciplines used for priority service differentiation, the preemptive and non-preemptive disciplines, pose a dilemma of information-freshness dissatisfaction for the whole priority network. This work proposes a hybrid preemptive/non-preemptive discipline under an M/M/1/2 priority queueing model to regulate the priority-based contention of status update streams in MEC systems. In this hybrid discipline, a probabilistic discretionary rule for preemption governs server and buffer access independently, introducing distinct probability parameters to control the system performance. The stochastic hybrid system approach is utilized to analyze the average age of information (AoI) along with its higher moments for any number of classes. A numerical study on a three-class network is then conducted, evaluating the average AoI performance and the corresponding dispersion. The numerical observations underpin the significance of the hybrid-discipline parameters in ensuring the reliability of the whole priority network. Hence, four different approaches are introduced to demonstrate how these parameters can be set. Under these approaches, some outstanding features are manifested: exploiting the buffering resources efficiently, conserving the aggregate sensing power, and optimizing whole-network satisfaction. For this last feature, a near-optimal, low-complexity heuristic method is proposed.
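To make the hybrid discipline concrete, the sketch below simulates a single-class M/M/1/2 queue in which a fresh arrival preempts the update in service with probability p and otherwise waits, replacing any staler waiting update, and returns the time-average age of information. This is a toy single-class reduction under assumed parameters; the paper treats the multi-class case analytically via the stochastic hybrid system approach.

```python
import random

def avg_aoi_hybrid(lam=1.0, mu=2.0, p=0.5, horizon=1e6, seed=1):
    """Time-average AoI of an M/M/1/2 queue with probabilistic preemption."""
    rng = random.Random(seed)
    next_arr = rng.expovariate(lam)
    serving, svc_end, waiting = None, float("inf"), None
    g_last, t_last = 0.0, 0.0    # gen. time of freshest delivered update, last delivery
    area = 0.0
    while min(next_arr, svc_end) < horizon:
        if next_arr < svc_end:            # arrival event
            t = next_arr
            if serving is None:           # idle server: start service
                serving, svc_end = t, t + rng.expovariate(mu)
            elif rng.random() < p:        # preempt the in-service update
                serving, svc_end = t, t + rng.expovariate(mu)
            else:                         # wait, replacing any staler waiting update
                waiting = t
            next_arr = t + rng.expovariate(lam)
        else:                             # delivery event
            t = svc_end
            # age grows linearly since the last delivery: add the trapezoid
            area += (t - t_last) * ((t + t_last) / 2.0 - g_last)
            g_last = max(g_last, serving) # an obsolete delivery does not reduce age
            t_last = t
            serving, waiting = waiting, None
            svc_end = t + rng.expovariate(mu) if serving is not None else float("inf")
    return area / t_last

print(avg_aoi_hybrid(p=0.0), avg_aoi_hybrid(p=1.0))  # non-preemptive vs. preemptive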
{"title":"Enhancing information freshness in multi-class mobile edge computing systems using a hybrid discipline","authors":"Tamer E. Fahim, Sherif I. Rabia, Ahmed H. Abd El-Malek, Waheed K. Zahra","doi":"10.1007/s00607-024-01278-x","DOIUrl":"https://doi.org/10.1007/s00607-024-01278-x","url":null,"abstract":"<p>Timely status updating in mobile edge computing (MEC) systems has recently gained the utmost interest in internet of things (IoT) networks, where status updates may need higher computations to be interpreted. Moreover, in real-life situations, the status update streams may also be of different priority classes according to their importance and timeliness constraints. The classical disciplines used for priority service differentiation, preemptive and non-preemptive disciplines, pose a dilemma of information freshness dissatisfaction for the whole priority network. This work proposes a hybrid preemptive/non-preemptive discipline under an M/M/1/2 priority queueing model to regulate the priority-based contention of the status update streams in MEC systems. For this hybrid discipline, a probabilistic discretionary rule for preemption is deployed to govern the server and buffer access independently, introducing distinct probability parameters to control the system performance. The stochastic hybrid system approach is utilized to analyze the average age of information (AoI) along with its higher moments for any number of classes. Then, a numerical study on a three-class network is conducted by evaluating the average AoI performance and the corresponding dispersion. The numerical observations underpin the significance of the hybrid-discipline parameters in ensuring the reliability of the whole priority network. Hence, four different approaches are introduced to demonstrate the setting of these parameters. Under these approaches, some outstanding features are manifested: exploiting the buffering resources efficiently, conserving the aggregate sensing power, and optimizing the whole network satisfaction. For this last feature, a near-optimal low-complex heuristic method is proposed.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"7 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140168421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reducing the wrapping effect of set computation via Delaunay triangulation for guaranteed state estimation of nonlinear discrete-time systems
Pub Date: 2024-03-15 | DOI: 10.1007/s00607-024-01275-0
Jian Wan, Luc Jaulin
Set computation methods have been widely used to compute reachable sets, design invariant sets, and estimate system states for dynamic systems. The wrapping effect of such set computation methods plays an essential role in the accuracy of their solutions. This paper studies the wrapping effect of existing interval, zonotopic, and polytopic set computation methods and proposes novel approaches to reduce it, based on the task of computing the dynamic evolution of a nonlinear uncertain discrete-time system with a set as the initial state. The proposed approaches include the partition of a polytopic set via Delaunay triangulation and the representation of a polytopic set by a union of small zonotopes for subsequent set propagation. The proposed approaches with reduced wrapping effect have been further applied to state estimation of a nonlinear uncertain discrete-time system with improved accuracy. Similar to bisection for interval and zonotopic sets, Delaunay triangulation is introduced as a set partition tool for polytopic sets, which opens new research directions in terms of novel set partition, set representation, and set propagation for reducing the wrapping effect of set computation.
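The partition step can be pictured with SciPy's Delaunay triangulation: split the polytope's vertex set into simplices, push each simplex through the nonlinear map, and enclose each image separately, so the union wraps the true image more tightly than a single enclosure would. In the sketch below the box hull of mapped vertices is only a first-order stand-in for a guaranteed enclosure; a rigorous method would also bound the linearization remainder, and the example map `f` is an assumption.

```python
import math
import numpy as np
from scipy.spatial import Delaunay

def propagate_polytope(vertices, f):
    """Split a polytope (vertex list) into simplices via Delaunay
    triangulation, map each simplex's vertices through f, and return
    one axis-aligned box (lo, hi) per simplex image."""
    pts = np.asarray(vertices, dtype=float)
    tri = Delaunay(pts)
    boxes = []
    for simplex in tri.simplices:
        img = np.array([f(p) for p in pts[simplex]])
        boxes.append((img.min(axis=0), img.max(axis=0)))  # first-order hull only
    return boxes

square = [(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)]      # polytope with a split point
f = lambda p: np.array([p[0] + math.sin(p[1]), p[1] + p[0] ** 2])
for lo, hi in propagate_polytope(square, f):
    print(lo, hi)
```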
{"title":"Reducing the wrapping effect of set computation via Delaunay triangulation for guaranteed state estimation of nonlinear discrete-time systems","authors":"Jian Wan, Luc Jaulin","doi":"10.1007/s00607-024-01275-0","DOIUrl":"https://doi.org/10.1007/s00607-024-01275-0","url":null,"abstract":"<p>Set computation methods have been widely used to compute reachable sets, design invariant sets and estimate system state for dynamic systems. The wrapping effect of such set computation methods plays an essential role in the accuracy of their solutions. This paper studies the wrapping effect of existing interval, zonotopic and polytopic set computation methods and proposes novel approaches to reduce the wrapping effect for these set computation methods based on the task of computing the dynamic evolution of a nonlinear uncertain discrete-time system with a set as the initial state. The proposed novel approaches include the partition of a polytopic set via Delaunay triangulation and also the representation of a polytopic set by the union of small zonotopes for the following set propagation. The proposed novel approaches with the reduced wrapping effect has been further applied to state estimation of a nonlinear uncertain discrete-time system with improved accuracy. Similar to bisection for interval and zonotopic sets, Delaunay triangulation has been introduced as a set partition tool for polytopic sets, which has opened new research directions in terms of novel set partition, set representation and set propagation for reducing the wrapping effect of set computation.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"43 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140150511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An improved indicator-based two-archive algorithm for many-objective optimization problems
Pub Date: 2024-03-15 | DOI: 10.1007/s00607-024-01272-3
Weida Song, Shanxin Zhang, Wenlong Ge, Wei Wang
The large number of objectives in many-objective optimization problems (MaOPs) has posed significant challenges to the performance of multi-objective evolutionary algorithms (MOEAs) in terms of convergence and diversity. To design a more balanced MOEA, a multiple indicator-based two-archive algorithm named IBTA is proposed to deal with problems with complicated Pareto fronts. Specifically, a two-archive framework is introduced to focus on convergence and diversity separately. In IBTA, we assign different selection principles to the two archives. In the convergence archive, the inverted generational distance with noncontributing solution detection (IGD-NS) indicator is applied to choose the solutions with favorable convergence in each generation. In the diversity archive, we use crowdedness and fitness to select solutions with favorable diversity. To evaluate the performance of IBTA on MaOPs, we compare it with several state-of-the-art MOEAs on various benchmark problems with different Pareto fronts. The experimental results demonstrate that IBTA can deal with multi-objective optimization problems (MOPs)/MaOPs with satisfactory convergence and diversity.
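The convergence archive's selection hinges on an IGD-style indicator. Plain IGD is sketched below (the mean distance from each reference-front point to its nearest obtained solution; lower is better); the IGD-NS variant used in the paper additionally detects solutions that contribute to no reference point, which this sketch omits.

```python
import numpy as np

def igd(reference_front: np.ndarray, solutions: np.ndarray) -> float:
    """Mean distance from each reference point to its nearest solution."""
    d = np.linalg.norm(reference_front[:, None, :] - solutions[None, :, :], axis=-1)
    return float(d.min(axis=1).mean())

ref = np.stack([np.linspace(0, 1, 50), 1 - np.linspace(0, 1, 50)], axis=1)  # linear PF
sols = ref[::5] + 0.05                                                      # slightly off
print("IGD:", round(igd(ref, sols), 4))
```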
{"title":"An improved indicator-based two-archive algorithm for many-objective optimization problems","authors":"Weida Song, Shanxin Zhang, Wenlong Ge, Wei Wang","doi":"10.1007/s00607-024-01272-3","DOIUrl":"https://doi.org/10.1007/s00607-024-01272-3","url":null,"abstract":"<p>The large number of objectives in many-objective optimization problems (MaOPs) has posed significant challenges to the performance of multi-objective evolutionary algorithms (MOEAs) in terms of convergence and diversity. To design a more balanced MOEA, a multiple indicator-based two-archive algorithm named IBTA is proposed to deal with problems with complicated Pareto fronts. Specifically, a two-archive framework is introduced to focus on convergence and diversity separately. In IBTA, we assign different selection principles to the two archives. In the convergence archive, the inverted generational distance with noncontributing solution detection (IGD-NS) indicator is applied to choose the solutions with favorable convergence in each generation. In the diversity archive, we use crowdedness and fitness to select solutions with favorable diversity. To evaluate the performance of IBTA on MaOPs, we compare it with several state-of-the-art MOEAs on various benchmark problems with different Pareto fronts. The experimental results demonstrate that IBTA can deal with multi-objective optimization problems (MOPs)/MaOPs with satisfactory convergence and diversity.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"98 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140150301","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Employing topology modification strategies in scale-free IoT networks for robustness optimization
Pub Date: 2024-03-12 | DOI: 10.1007/s00607-024-01273-2
Zahoor Ali Khan, Muhammad Awais, Turki Ali Alghamdi, Nadeem Javaid
Nowadays, Internet of Things (IoT) networks benefit humans in numerous domains by empowering projects in smart cities, healthcare, industrial enhancement, and so forth. IoT networks consist of nodes that deliver data to a destination. However, the connectivity of network nodes is affected by node removal caused by malicious attacks. The ideal plan is to construct a topology that maintains node connectivity after attacks and thereby increases network robustness. Therefore, two different mechanisms are adopted in this paper for constructing a robust scale-free network. First, a Multi-Population Genetic Algorithm (MPGA) is used to deal with premature convergence in GA. An entropy-based mechanism is then used, which replaces the worst solution of the high-entropy population with the best solution of the low-entropy population to improve network robustness. Second, two types of Edge Swap Mechanisms (ESMs) are proposed. The Efficiency-based Edge Swap Mechanism (EESM) selects pairs of edges with high efficiency, while the second ESM, EESM-Assortativity, transforms the network topology into an onion-like structure to achieve maximum connectivity between network nodes of similar degree. Further, Hill Climbing (HC) and Simulated Annealing (SA) methods are used to optimize network robustness. The simulation results show that the proposed MPGA Entropy achieves 9% better network robustness than MPGA. Moreover, both proposed ESMs effectively increase network robustness, with an average of 15% better robustness than HC and SA. Furthermore, they increase the graph density as well as the network's connectivity.
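A common fitness function for this kind of topology optimization is the Schneider et al. robustness measure R: remove nodes one by one in order of (recomputed) highest degree and average the relative size of the largest connected component after each removal. A minimal networkx sketch, assuming the paper uses this or a similar measure as its robustness metric:

```python
import networkx as nx

def robustness_R(G: nx.Graph) -> float:
    """Schneider-style robustness under targeted attack: repeatedly remove
    the current highest-degree node and average the fraction of nodes left
    in the largest connected component over all removals."""
    G = G.copy()
    n = G.number_of_nodes()
    total = 0.0
    for _ in range(n - 1):
        v = max(G.degree, key=lambda kv: kv[1])[0]   # recompute the top hub
        G.remove_node(v)
        total += len(max(nx.connected_components(G), key=len)) / n
    return total / n

G = nx.barabasi_albert_graph(200, 2, seed=7)          # scale-free test topology
print("R =", round(robustness_R(G), 4))
```

Degree-preserving edge swaps that raise assortativity (the onion-like structure mentioned above) typically raise R without changing the degree distribution, which is why the ESMs can improve robustness while keeping the network scale-free.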
{"title":"Employing topology modification strategies in scale-free IoT networks for robustness optimization","authors":"Zahoor Ali Khan, Muhammad Awais, Turki Ali Alghamdi, Nadeem Javaid","doi":"10.1007/s00607-024-01273-2","DOIUrl":"https://doi.org/10.1007/s00607-024-01273-2","url":null,"abstract":"<p>Nowadays, the Internet of Things (IoT) networks provide benefits to humans in numerous domains by empowering the projects of smart cities, healthcare, industrial enhancement and so forth. The IoT networks include nodes, which deliver the data to the destination. However, the network nodes’ connectivity is affected by the nodes’ removal caused due to the malicious attacks. The ideal plan is to construct a topology that maintains nodes’ connectivity after the attacks and subsequently increases the network robustness. Therefore, for constructing a robust scale-free network, two different mechanisms are adopted in this paper. First, a Multi-Population Genetic Algorithm (MPGA) is used to deal with premature convergence in GA. Then, an entropy based mechanism is used, which replaces the worst solution of high entropy population with the best solution of low entropy population to improve the network robustness. Second, two types of Edge Swap Mechanisms (ESMs) are proposed. The Efficiency based Edge Swap Mechanism (EESM) selects the pair of edges with high efficiency. While the second ESM named as EESM-Assortativity, transforms the network topology into an onion-like structure to achieve maximum connectivity between similar degree network nodes. Further, Hill Climbing (HC) and Simulated Annealing (SA) methods are used for optimizing the network robustness. The simulation results show that the proposed MPGA Entropy has 9% better network robustness as compared to MPGA. Moreover, both the proposed ESMs effectively increase the network robustness with an average of 15% better robustness as compared to HC and SA. Furthermore, they increase the graph density as well as network’s connectivity.</p>","PeriodicalId":10718,"journal":{"name":"Computing","volume":"15 1","pages":""},"PeriodicalIF":3.7,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140126563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}