Improved grey wolf algorithm based on dynamic weight and logistic mapping for safe path planning of UAV low-altitude penetration
Pub Date: 2024-08-16 | DOI: 10.1007/s11227-024-06430-0
Siwei Wang, Donglin Zhu, Changjun Zhou, Gaoji Sun
Unmanned aerial vehicles (UAVs) have been widely used in many fields and show particularly strong performance in low-altitude penetration defence. A UAV requires obstacle avoidance for safe flight and must adhere to various flight constraints, such as altitude changes and turning angles, during path planning. A well-planned flight path improves flight efficiency and safety, saving time and energy during specific tasks and directly affecting mission accomplishment. To address these challenges, this paper improves the original grey wolf optimizer (GWO). In the enhanced version, the three head wolves are randomly assigned influence weights in the position-updating mechanism, and a dynamic weight influence strategy is designed that accelerates convergence in the late optimization stages, aiding the search for the global optimum. Meanwhile, the logistic mapping is introduced into the convergence factor to construct a micro-vibrational convergence factor, giving the algorithm a stronger ability to find a globally optimal solution in the search space while also searching more deeply in areas near the currently known best solutions. To validate the proposed algorithm, a simulated flight environment is established and simulation experiments are conducted in safe flight environments featuring 5, 10, and 15 obstacles. Comparative analysis with seven other algorithms demonstrates the superiority of the proposed algorithm: in terms of path length on the three maps, DLGWO paths are 10.3 km, 15.5 km, and 2.6 km shorter than the second-placed MEPSO, SOGWO, and WOA, respectively. Furthermore, the planned path exhibits the smallest fluctuations in altitude and turning angles.
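The abstract does not give the exact update equations, so the minimal Python sketch below only illustrates the two named ingredients layered on the classic GWO position update: randomly assigned influence weights for the three head wolves, and a logistic-map "micro-vibration" added to the linearly decreasing convergence factor. The weight normalization, the perturbation amplitude eps, and all names are assumptions, not the authors' formulas.

import numpy as np

def dlgwo_step(wolves, alpha, beta, delta, t, T, rng, mu=4.0, eps=0.01):
    # Logistic map supplies the micro-vibration of the convergence factor
    # (a persistent map state could be threaded through iterations; a fresh
    # draw is used here for brevity).
    x = rng.random()
    x = mu * x * (1.0 - x)                              # one logistic-map iteration
    a = 2.0 * (1.0 - t / T) + eps * (2.0 * x - 1.0)     # perturbed convergence factor
    new_pop = np.empty_like(wolves)
    for i, w in enumerate(wolves):
        cand = []
        for leader in (alpha, beta, delta):             # the three head wolves
            r1, r2 = rng.random(w.size), rng.random(w.size)
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            cand.append(leader - A * np.abs(C * leader - w))
        wts = rng.random(3)
        wts /= wts.sum()                # random, normalized influence weights
        new_pop[i] = sum(wt * c for wt, c in zip(wts, cand))
    return new_pop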
{"title":"Improved grey wolf algorithm based on dynamic weight and logistic mapping for safe path planning of UAV low-altitude penetration","authors":"Siwei Wang, Donglin Zhu, Changjun Zhou, Gaoji Sun","doi":"10.1007/s11227-024-06430-0","DOIUrl":"https://doi.org/10.1007/s11227-024-06430-0","url":null,"abstract":"<p>Unmanned aerial vehicle (UAV) has been widely used in many fields, especially in low-altitude penetration defence, which showcases superior performance. UAV requires obstacle avoidance for safe flight and must adhere to various flight constraints, such as altitude changes and turning angles, during path planning. Excellent flight paths can enhance flight efficiency and safety, saving time and energy when performing specific tasks, directly impacting mission accomplishment. To address these challenges, this paper improves the original grey wolf algorithm (GWO). In this enhanced version, the three head wolves randomly assign influence weights to execute the position updating mechanism. A dynamic weight influence strategy is designed, which accelerates convergence in the late optimization stages, aiding in finding the global optimum. Meanwhile, the logistic mapping is introduced into the convergence factor, and a micro-vibrational convergence factor is constructed. This allows the algorithm to have a better ability to find a globally optimal solution in the search space while also being able to search deeper using areas near the currently known information. In order to validate the proposed algorithm, a simulated flight environment is established, conducting simulation experiments within safe flight environments featuring 5, 10, and 15 obstacles. Comparative analysis with seven other algorithms demonstrates the superiority of the proposed algorithm. The experimental results demonstrate that the proposed algorithm has better superiority. In terms of path length on three maps, DLGWO paths are 10.3 km, 15.5 km, and 2.6 km shorter than the second-placed MEPSO, SOGWO, and WOA, respectively. Furthermore, the planned path in this study exhibits the smallest fluctuations in altitude and turning angles.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Degree-aware embedding-based multi-correlated graph convolutional collaborative filtering
Pub Date: 2024-08-16 | DOI: 10.1007/s11227-024-06354-9
Chao Ma, Jiwei Qin, Tao Wang, Aohua Gao
In light of the remarkable capacity of graph convolutional networks (GCNs) for representation learning, researchers have incorporated them into collaborative filtering recommendation systems to capture high-order collaborative signals. However, existing GCN-based collaborative filtering models still exhibit three deficiencies: they fail to consider differences between users' activity and their preferences for items' popularity, they make inadequate use of the low-order feature information of users and items, and they neglect the correlated relationships among isomorphic nodes. To address these shortcomings, this paper proposes a degree-aware embedding-based multi-correlated graph convolutional collaborative filtering model (Da-MCGCF). Firstly, Da-MCGCF combines users' activity and preferences for items' popularity to perform neighborhood aggregation in the user-item bipartite graph, thereby generating more precise representations of users and items. Secondly, Da-MCGCF employs a low-order feature fusion strategy to integrate low-order features into the process of mining high-order features, which enhances feature representation and enables the exploration of deeper relationships. Furthermore, we construct two isomorphic graphs using an adaptive approach to explore correlated relationships at the isomorphic level between users and items, and then aggregate the features of isomorphic users and items separately to complement their representations. Finally, we conducted extensive experiments on four public datasets, validating the effectiveness of the proposed model.
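As a rough illustration of what degree-aware neighborhood aggregation on the user-item bipartite graph can look like, the sketch below reweights each interaction edge by user activity and item popularity (their degrees); the exponent gamma and the LightGCN-style normalization are assumptions standing in for the paper's actual weighting.

import numpy as np

def degree_aware_aggregate(R, item_emb, gamma=0.5):
    # R: (n_users, n_items) binary interaction matrix.
    du = R.sum(axis=1, keepdims=True).clip(min=1)    # user activity (degree)
    di = R.sum(axis=0, keepdims=True).clip(min=1)    # item popularity (degree)
    W = R / (du ** gamma) / (di ** (1.0 - gamma))    # degree-aware edge weights
    return W @ item_emb                              # aggregated user embeddings

rng = np.random.default_rng(0)
R = (rng.random((100, 50)) < 0.1).astype(float)      # hypothetical interactions
users = degree_aware_aggregate(R, rng.normal(size=(50, 16)))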
{"title":"Degree-aware embedding-based multi-correlated graph convolutional collaborative filtering","authors":"Chao Ma, Jiwei Qin, Tao Wang, Aohua Gao","doi":"10.1007/s11227-024-06354-9","DOIUrl":"https://doi.org/10.1007/s11227-024-06354-9","url":null,"abstract":"<p>In light of the remarkable capacity of graph convolutional network (GCN) in representation learning, researchers have incorporated it into collaborative filtering recommendation systems to capture high-order collaborative signals. However, existing GCN-based collaborative filtering models still exhibit three deficiencies: the failure to consider differences between users’ activity and preferences for items’ popularity, the low-order feature information of users and items has been inadequately employed, and neglecting the correlated relationships among isomorphic nodes. To address these shortcomings, this paper proposes a degree-aware embedding-based multi-correlated graph convolutional collaborative filtering (Da-MCGCF). Firstly, Da-MCGCF combines users’ activity and preferences for items’ popularity to perform neighborhood aggregation in the user-item bipartite graph, thereby generating more precise representations of users and items. Secondly, Da-MCGCF employs a low-order feature fusion strategy to integrate low-order features into the process of mining high-order features, which enhances feature representation capabilities, and enables the exploration of deeper relationships. Furthermore, we construct two isomorphic graphs by employing an adaptive approach to explore correlated relationships at the isomorphic level between users and items. Subsequently, we aggregate the features of isomorphic users and items separately to complement their representations. Finally, we conducted extensive experiments on four public datasets, thereby validating the effectiveness of our proposed model.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dual boundary robust verification method for neural networks
Pub Date: 2024-08-15 | DOI: 10.1007/s11227-024-06402-4
Yueyue Yang, Qun Fang, Yajing Tang, Yuchen Feng, Yihui Yan, Yong Xu
As a prominent and appealing technology, neural networks have been widely applied in numerous fields, one of the most notable applications being autonomous driving. However, the intrinsic structure of neural networks presents a black-box problem, leading to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for the robust verification of neural networks, named Dual Boundary Robust (DBR). Specifically, we integrate adversarial attack design, including perturbations such as outliers, with outer boundary defenses, in which inner and outer boundaries are combined with methods such as floating-point polyhedra and boundary intervals. We demonstrate DBR's anti-interference ability and security performance and show that it reduces the emergent security problems induced by the black-box nature of neural networks. Compared with traditional methods, the outer boundary of DBR, combined with the theory of convex relaxation, can appropriately tighten the boundary intervals used in neural network verification, which significantly reduces the potential for severe security issues caused by over-loose bounds and yields better robustness. Furthermore, extensive experimentation on individually trained neural networks validates the flexibility and scalability of DBR in safeguarding larger regions.
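The abstract names boundary intervals as one ingredient; below is a minimal sketch of interval bound propagation through a ReLU network, the textbook form of such boundary reasoning. The convex-relaxation tightening of the outer boundary described in the paper is not reproduced, and the weights and perturbation radius are hypothetical.

import numpy as np

def interval_bounds(layers, x, eps):
    # layers: list of (W, b) pairs for a ReLU network; eps: L-infinity radius.
    lo, hi = x - eps, x + eps
    for k, (W, b) in enumerate(layers):
        Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if k < len(layers) - 1:                      # ReLU on hidden layers
            lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
    return lo, hi                                    # sound bounds on the logits

rng = np.random.default_rng(1)
net = [(rng.normal(size=(8, 4)), np.zeros(8)), (rng.normal(size=(2, 8)), np.zeros(2))]
lo, hi = interval_bounds(net, rng.normal(size=4), eps=0.1)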
{"title":"A dual boundary robust verification method for neural networks","authors":"Yueyue Yang, Qun Fang, Yajing Tang, Yuchen Feng, Yihui Yan, Yong Xu","doi":"10.1007/s11227-024-06402-4","DOIUrl":"https://doi.org/10.1007/s11227-024-06402-4","url":null,"abstract":"<p>As a prominent and appealing technology, neural networks have been widely applied in numerous fields, with one of the most notable applications being autonomous driving. However, the intrinsic structure of neural networks presents a black box problem, leading to emergent security issues in driving and networking that remain unresolved. To this end, we introduce a novel method for robust validation of neural networks, named as Dual Boundary Robust (DBR). Specifically, we creatively integrate adversarial attack design, including perturbations like outliers, with outer boundary defenses, in which the inner and outer boundaries are combined with methods such as floating-point polyhedra and boundary intervals. Demonstrate the robustness of the DBR’s anti-interference ability and security performance, and to reduce the black box-induced emergent security problems of neural networks. Compared with the traditional method, the outer boundary of DBR combined with the theory of convex relaxation can appropriately tighten the boundary interval of DBR used in neural networks, which significantly reduces the over-tightening of the potential for severe security issues and has better robustness. Furthermore, extensive experimentation on individually trained neural networks validates the flexibility and scalability of DBR in safeguarding larger regions.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MS-GD-P: priority-based service deployment for cloud-edge-end scenarios
Pub Date: 2024-08-14 | DOI: 10.1007/s11227-024-06423-z
Honghua Jin, Haiyan Wang, Jian Luo
In cloud-edge-end scenarios, achieving rational resource allocation, effective service deployment, and high service quality has become a hot research topic. Service providers usually deploy services according to the characteristics of different geographical regions, which helps meet the diverse needs of users in each region and optimizes resource allocation and utilization. However, because users are widely distributed and server resources are limited, providing every type of service in every geographical region is not feasible. In addition, edge servers are prone to operational failures caused by software anomalies, hardware malfunctions, and malicious attacks, which decrease service reliability. To address these problems, this paper proposes a service-priority metric based on user demands and regional characteristics for different geographical regions. Building on this foundation, a Multi-Service Geographic region Deployment based on Priority (MS-GD-P) method is proposed. This method takes user coverage and service reliability into consideration, accommodating users' needs for multiple services in different geographical regions. Experimental results on real datasets demonstrate that MS-GD-P outperforms baseline methods in user coverage and service reliability.
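The paper's priority metric is not given in the abstract; as a toy illustration only, a per-region priority score could combine regional demand frequency with a regional weight, as in the hypothetical sketch below.

def service_priority(demand_counts, region_weight):
    # demand_counts: service id -> request count in this region;
    # region_weight: scalar capturing regional characteristics (assumption).
    total = sum(demand_counts.values()) or 1
    return {s: region_weight * c / total for s, c in demand_counts.items()}

scores = service_priority({"nav": 120, "video": 300, "ar": 40}, region_weight=0.8)
deploy_order = sorted(scores, key=scores.get, reverse=True)   # deploy popular services first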
{"title":"MS-GD-P: priority-based service deployment for cloud-edge-end scenarios","authors":"Honghua Jin, Haiyan Wang, Jian Luo","doi":"10.1007/s11227-024-06423-z","DOIUrl":"https://doi.org/10.1007/s11227-024-06423-z","url":null,"abstract":"<p>In cloud-edge-end scenarios, how to achieve rational resource allocation, implement effective service deployment, and ensure high service quality has become a hot research topic in academic domains. Service providers usually deploy services by considering the characteristics of different geographical regions, which helps to meet the diverse needs of users in different regions and optimize resource allocation and utilization. However, due to the widespread distribution of users and limited server resources, providing all types of services to users in every geographical region is not feasible. In addition, edge servers are prone to operational failures caused by software anomalies, hardware malfunctions, and malicious attacks, which will decrease service reliability. To address the problems above, this paper proposes a metric for service priorities based on user demands and regional characteristics for different geographical regions. Building upon this foundation, a Multi-Service Geographic region Deployment based on Priority (MS-GD-P) is proposed. This method takes user coverage and service reliability into consideration, which facilitates users’ needs for multiple services in different geographical regions. Experimental results on real datasets demonstrate that MS-GD-P outperforms baseline methods in user coverage and service reliability.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"22 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information acquisition optimizer: a new efficient algorithm for solving numerical and constrained engineering optimization problems
Pub Date: 2024-08-14 | DOI: 10.1007/s11227-024-06384-3
Xiao Wu, Shaobo Li, Xinghe Jiang, Yanqiu Zhou
This paper addresses the increasing complexity of challenges in continuous nonlinear optimization by proposing an innovative algorithm called the information acquisition optimizer (IAO). Inspired by human information acquisition behaviors, it consists of three crucial strategies: information collection; information filtering and evaluation; and information analysis and organization, to accommodate diverse optimization requirements. Firstly, the performance of IAO is compared against 15 widely recognized algorithms on the standard test function suites from CEC2014, CEC2017, CEC2020, and CEC2022. The results demonstrate that IAO is robustly competitive in convergence rate, solution accuracy, and stability. Additionally, the outcomes of the Wilcoxon signed-rank test and Friedman mean ranking strongly validate the effectiveness and reliability of IAO, and time comparison experiments indicate its high efficiency. Finally, comparative tests on five real-world optimization problems affirm the applicability of IAO to complex problems with unknown search spaces. The code for the IAO algorithm is available at https://ww2.mathworks.cn/matlabcentral/fileexchange/169331-information-acquisition-optimizer.
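Only the three-phase loop structure in the skeleton below follows the abstract; every operator is a placeholder, and the authors' actual update rules are in the MATLAB code linked above.

import numpy as np

def iao_skeleton(f, dim, n=30, iters=200, lb=-10.0, ub=10.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    best = min(X, key=f).copy()
    for t in range(iters):
        # 1) Information collection: explore around random peers.
        trial = X + rng.normal(0, 1, X.shape) * (X[rng.integers(0, n, n)] - X)
        trial = np.clip(trial, lb, ub)
        # 2) Information filtering and evaluation: keep only improvements.
        keep = np.array([f(a) < f(b) for a, b in zip(trial, X)])
        X[keep] = trial[keep]
        # 3) Information analysis and organization: drift toward the best.
        X = np.clip(X + rng.random(X.shape) * (best - X) * (1 - t / iters), lb, ub)
        cand = min(X, key=f)
        if f(cand) < f(best):
            best = cand.copy()
    return best

best = iao_skeleton(lambda x: float(np.sum(x ** 2)), dim=5)   # sphere test function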
{"title":"Information acquisition optimizer: a new efficient algorithm for solving numerical and constrained engineering optimization problems","authors":"Xiao Wu, Shaobo Li, Xinghe Jiang, Yanqiu Zhou","doi":"10.1007/s11227-024-06384-3","DOIUrl":"https://doi.org/10.1007/s11227-024-06384-3","url":null,"abstract":"<p>This paper addresses the increasing complexity of challenges in the field of continuous nonlinear optimization by proposing an innovative algorithm called information acquisition optimizer (IAO), which is inspired by human information acquisition behaviors and consists of three crucial strategies: information collection, information filtering and evaluation, and information analysis and organization to accommodate diverse optimization requirements. Firstly, comparative assessments of performance are conducted between the IAO and 15 widely recognized algorithms using the standard test function suites from CEC2014, CEC2017, CEC2020, and CEC2022. The results demonstrate that IAO is robustly competitive regarding convergence rate, solution accuracy, and stability. Additionally, the outcomes of the Wilcoxon signed rank test and Friedman mean ranking strongly validate the effectiveness and reliability of IAO. Moreover, the time comparison analysis experiments indicate its high efficiency. Finally, comparative tests on five real-world optimization difficulties affirm the remarkable applicability of IAO in handling complex issues with unknown search spaces. The code for the IAO algorithm is available at https://ww2.mathworks.cn/matlabcentral/fileexchange/169331-information-acquisition-optimizer.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"55 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fusion of deep belief network and SVM regression for intelligence of urban traffic control system
Pub Date: 2024-08-13 | DOI: 10.1007/s11227-024-06386-1
Alireza Soleimani, Yousef Farhang, Amin Babazadeh Sangar
Increasing urban traffic and congestion have led to significant issues such as rising air pollution and wasted time, highlighting the need for an intelligent traffic light control (TLC) system that minimizes vehicle waiting times. This paper presents a novel TLC system that leverages the Internet of Things (IoT) for data collection and employs the random forest algorithm for preprocessing and feature extraction. A deep belief network predicts future traffic conditions, and a support vector regression network is integrated to enhance prediction accuracy. Additionally, the traffic light control strategy is optimized using reinforcement learning. The proposed method is evaluated in two scenarios: the first compares it with fixed-time control and the double dueling deep Q-network (3DQN) method, and the second compares it with the SVM, KNN, and MAADAC approaches. Simulation results demonstrate that the proposed method significantly outperforms these alternatives, improving average vehicle waiting times by more than 20%, 32%, and 45%, respectively. Using a deep belief network supplemented by support vector regression ensures high precision in forecasting traffic patterns, and the reinforcement learning-based optimization of the traffic light control strategy adapts effectively to changing traffic conditions, providing superior traffic flow management. The results indicate that the proposed system can substantially reduce traffic congestion and improve urban traffic flow.
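The reinforcement-learning component can be pictured as a Q-learning controller whose reward is the negative waiting time; the sketch below is generic, with the state encoding and the DBN/SVR traffic forecast (which in the paper would feed the state) left abstract.

import random

def choose_phase(q, state, phases, eps=0.1):
    # Epsilon-greedy selection of the next green phase.
    if random.random() < eps:
        return random.choice(phases)
    return max(phases, key=lambda a: q.get((state, a), 0.0))

def update_q(q, state, action, reward, next_state, phases, alpha=0.1, gamma=0.9):
    # Standard tabular Q-learning update; reward = negative measured waiting time.
    best_next = max(q.get((next_state, a), 0.0) for a in phases)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q, phases = {}, ["NS_green", "EW_green"]          # hypothetical phase set
a = choose_phase(q, ("low", "NS_green"), phases)
update_q(q, ("low", "NS_green"), a, reward=-12.0, next_state=("low", a), phases=phases)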
{"title":"Fusion of deep belief network and SVM regression for intelligence of urban traffic control system","authors":"Alireza Soleimani, Yousef Farhang, Amin Babazadeh Sangar","doi":"10.1007/s11227-024-06386-1","DOIUrl":"https://doi.org/10.1007/s11227-024-06386-1","url":null,"abstract":"<p>Increasing urban traffic and congestion have led to significant issues such as rising air pollution and wasted time, highlighting the need for an intelligent traffic light control (TLC) system to minimize vehicle waiting times. This paper presents a novel TLC system that leverages the Internet of Things (IoT) for data collection and employs the random forest algorithm for preprocessing and feature extraction. A deep belief network predicts future traffic conditions, and a support vector regression network is integrated to enhance prediction accuracy. Additionally, the traffic light control strategy is optimized using reinforcement learning. The proposed method is evaluated through two different scenarios. The first scenario is compared with fixed-time control and the double dueling deep neural network (3DQN) methods. The second scenario compares it with the SVM, KNN, and MAADAC approaches. Simulation results demonstrate that the proposed method significantly outperforms these alternative approaches, showing substantial improvements in average vehicle waiting times by more than 20%, 32%, and 45%, respectively. Using a deep belief network, supplemented by support vector regression, ensures high precision in forecasting traffic patterns. Furthermore, the reinforcement learning-based optimization of the traffic light control strategy effectively adapts to changing traffic conditions, providing superior traffic flow management. The results indicate that the proposed system can substantially reduce traffic congestion and improve urban traffic flow.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"59 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An accelerated chaotic image secure communication system based on Zynq-7000 platform
Pub Date: 2024-08-13 | DOI: 10.1007/s11227-024-06362-9
Meiting Liu, Wenxin Yu, Zuanbo Zhou
Chaotic systems are often used as random sequence generators because of their excellent pseudo-randomness, but a limitation is that the discretization of complex chaotic systems requires long computation times. Therefore, this paper proposes a parallel discretization method for chaotic systems and an accelerated chaotic image secure communication system based on the Zynq-7000 platform. Firstly, a 3-dimensional (3-D) chaotic system with high Shannon entropy (SE) complexity is constructed to generate random sequences. Then, the chaotic system is discretized in parallel through a finite state machine, and the resulting sequences are combined with scrambling and diffusion algorithms to construct the accelerated chaotic image secure communication system. Finally, the secure communication process is implemented on the Zynq-7000 platform, and analysis of the hardware experimental results shows that the system offers secure performance, a simple structure, and excellent operational efficiency.
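The 3-D system itself is not specified in the abstract; in the sketch below a classic logistic map stands in for the chaotic sequence generator, driving the scrambling (permutation) and diffusion (XOR) steps that the abstract names. The FPGA-side parallel discretization is not modeled.

import numpy as np

def logistic_seq(n, x0=0.4, mu=3.99):
    # Iterate the logistic map to produce a chaotic float sequence in (0, 1).
    xs, x = np.empty(n), x0
    for i in range(n):
        x = mu * x * (1 - x)
        xs[i] = x
    return xs

def encrypt(img, x0=0.4):
    # img: uint8 array; returns the scrambled-then-diffused byte stream.
    flat = img.ravel()
    perm = np.argsort(logistic_seq(flat.size, x0 + 0.1))        # scrambling order
    ks = (logistic_seq(flat.size, x0) * 256).astype(np.uint8)   # keystream bytes
    return np.bitwise_xor(flat[perm], ks)                       # diffusion

cipher = encrypt(np.arange(64, dtype=np.uint8).reshape(8, 8))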
{"title":"An accelerated chaotic image secure communication system based on Zynq-7000 platform","authors":"Meiting Liu, Wenxin Yu, Zuanbo Zhou","doi":"10.1007/s11227-024-06362-9","DOIUrl":"https://doi.org/10.1007/s11227-024-06362-9","url":null,"abstract":"<p>Chaotic systems are often used as random sequence generators due to their excellent pseudo-randomness, but there is limitation that the discretization of complex chaotic systems requires a long computational time. Therefore, a parallel discretization method for chaotic system, and an accelerated chaotic image secure communication system based on the Zynq-7000 platform are proposed in this paper. Firstly, a 3-dimensional (3-D) chaotic system is constructed to generate random sequence, which has high Shannon entropy (SE) complexity. Then, chaotic system is parallelly discretized through finite state machine, which sequences are combined with scrambling and diffusion algorithms to construct an accelerated chaotic image secure communication system. Finally, the secure communication process based on the Zynq-7000 platform is completed, and the analysis of hardware experimental results shows that the system has safe performances, simple structure and excellent operational efficiency.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182384","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Attention-based multilayer GRU decoder for on-site glucose prediction on smartphone
Pub Date: 2024-08-13 | DOI: 10.1007/s11227-024-06424-y
Ömer Atılım Koca, Halime Özge Kabak, Volkan Kılıç
Continuous glucose monitoring (CGM) devices provide a considerable amount of data that can be used to predict future values, enabling sustainable control of blood glucose levels to prevent hypo-/hyperglycemic events and associated complications. However, this is a challenging task in diabetes management, as CGM data are sequential, time-varying, nonlinear, and non-stationary. Owing to their ability to handle such data, artificial intelligence (AI)-based methods have emerged as a useful tool. The traditional approach implements AI methods in baseline form, which exploits less of the sequential information in the data and thus reduces prediction accuracy. To address this issue, we propose a novel glucose prediction approach within the encoder-decoder framework, aimed at improving prediction accuracy despite the complex and non-stationary nature of CGM data. Sequential information is extracted by a convolutional neural network-based encoder, while predictions are generated by a gated recurrent unit (GRU)-based decoder. In our approach, the decoder couples the multilayer GRU with an attention layer that modulates the most relevant information, leading to more accurate predictions. The proposed attention-based multilayer GRU approach has been extensively evaluated on the OhioT1DM dataset, and experimental results demonstrate its advantage over state-of-the-art approaches. Furthermore, the approach is integrated with our custom-designed Android application, "GlucoWizard", to perform glucose prediction for diabetes.
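The exact architecture is not given in the abstract, so the layer sizes, the dot-product attention, and the autoregressive decoding loop in the PyTorch sketch below are assumptions; it only shows the named shape of the model: a CNN encoder feeding a two-layer GRU decoder whose outputs attend over the encoder states.

import torch
import torch.nn as nn

class GlucoseSeq2Seq(nn.Module):
    def __init__(self, hidden=64, horizon=6):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.Sequential(       # 1-D CNN over the CGM history
            nn.Conv1d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, 3, padding=1), nn.ReLU())
        self.decoder = nn.GRU(1, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(2 * hidden, 1)

    def forward(self, x):                   # x: (batch, seq_len, 1) glucose values
        enc = self.encoder(x.transpose(1, 2)).transpose(1, 2)    # (B, T, H)
        step, h, preds = x[:, -1:, :], None, []
        for _ in range(self.horizon):
            o, h = self.decoder(step, h)                         # (B, 1, H)
            w = torch.softmax(o @ enc.transpose(1, 2), dim=-1)   # attention weights
            ctx = w @ enc                                        # context vector
            step = self.out(torch.cat([o, ctx], dim=-1))         # next glucose value
            preds.append(step)
        return torch.cat(preds, dim=1)       # (B, horizon, 1)

y = GlucoseSeq2Seq()(torch.randn(4, 24, 1))  # 24 past readings -> 6 steps ahead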
{"title":"Attention-based multilayer GRU decoder for on-site glucose prediction on smartphone","authors":"Ömer Atılım Koca, Halime Özge Kabak, Volkan Kılıç","doi":"10.1007/s11227-024-06424-y","DOIUrl":"https://doi.org/10.1007/s11227-024-06424-y","url":null,"abstract":"<p>Continuous glucose monitoring (CGM) devices provide a considerable amount of data that can be used to predict future values, enabling sustainable control of blood glucose levels to prevent hypo-/hyperglycemic events and associated complications. However, it is a challenging task in diabetes management as the data from CGM are sequential, time-varying, nonlinear, and non-stationary. Due to their ability to deal with these types of data, artificial intelligence (AI)-based methods have emerged as a useful tool. The traditional approach is to implement AI methods in baseline form, which results in exploiting less sequential information from the data, thus reducing the prediction accuracy. To address this issue, we propose a novel glucose prediction approach within the encoder–decoder framework, aimed at improving prediction accuracy despite the complex and non-stationary nature of CGM data. Sequential information is extracted using a convolutional neural network-based encoder, while predictions are generated by a gated recurrent unit (GRU)-based decoder. In our approach, the decoder is designed with the multilayer GRU attached to an attention layer to ensure the modulation of the most relevant information so that it leads to a more accurate prediction. The proposed attention-based multilayer GRU approach has been extensively evaluated on the OhioT1DM dataset, and experimental results demonstrate the advantage of our proposed approach over the state-of-the-art approaches. Furthermore, the proposed approach is also integrated with our custom-designed Android application called “<i>GlucoWizard</i>” to perform glucose prediction for diabetes.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"8 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182383","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ECAUT: ECC-infused efficient authentication for internet of things systems based on zero-knowledge proof
Pub Date: 2024-08-13 | DOI: 10.1007/s11227-024-06427-9
M. Prakash, K. Ramesh
The Internet of Things (IoT) has seen significant growth, enabling connectivity and intelligence in various domains, many of which rely heavily on RFID communication. However, this growth has also brought significant security challenges, particularly replay attacks, which have troubled previous work. In this study, we introduce an innovative security solution that uses elliptic curve cryptography (ECC) with zero-knowledge proofs (ZKP), specifically tailored to RFID-based applications. By leveraging ECC with ZKP, we not only improve the security of IoT systems but also mitigate the persistent threat of replay attacks. Unlike traditional methods, our approach ensures that sensitive data are securely transmitted and authenticated without the risk of unauthorized duplication. We validated our approach using Scyther and BAN logic, well-known tools for assessing security protocols. These validations confirm the robustness of our solution and provide further assurance of its effectiveness in protecting IoT systems against various threats, including replay attacks. Our comprehensive analysis revealed that our approach outperforms existing solutions in both communication and computation costs; the improved efficiency in these key areas underscores the practicality and viability of our solution, solidifying its position as a leading option for safeguarding IoT ecosystems against emerging threats.
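The ECAUT protocol itself is not reproduced here; the sketch below is the textbook Schnorr-style zero-knowledge identification over a toy elliptic curve, which is the standard way ECC and ZKP are combined: the prover convinces the verifier it knows the secret key x without revealing it.

import random

# Toy curve y^2 = x^3 + 2x + 2 (mod 17); G = (5, 1) has prime order 19.
P, A = 17, 2
G, ORDER = (5, 1), 19

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                   # point at infinity
    lam = ((3 * x1 * x1 + A) * pow(2 * y1, -1, P) if p1 == p2
           else (y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam * lam - x1 - x2) % P
    return (x3, (lam * (x1 - x3) - y1) % P)

def mul(k, pt):
    # Double-and-add scalar multiplication.
    acc = None
    while k:
        if k & 1: acc = add(acc, pt)
        pt, k = add(pt, pt), k >> 1
    return acc

x = random.randrange(1, ORDER); Q = mul(x, G)         # secret / public key
r = random.randrange(1, ORDER); R = mul(r, G)         # prover's commitment
c = random.randrange(1, ORDER)                        # verifier's challenge
s = (r + c * x) % ORDER                               # response reveals nothing about x
assert mul(s, G) == add(R, mul(c, Q))                 # verifier accepts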
{"title":"ECAUT: ECC-infused efficient authentication for internet of things systems based on zero-knowledge proof","authors":"M. Prakash, K. Ramesh","doi":"10.1007/s11227-024-06427-9","DOIUrl":"https://doi.org/10.1007/s11227-024-06427-9","url":null,"abstract":"<p>The Internet of Things (IoT) has seen significant growth, enabling connectivity and intelligence in various domains which use RFID communication most. However, this growth has also brought forth significant security challenges, particularly concerning replay attacks, which have troubled previous works. In our study, we introduce an innovative security solution that uses elliptic curve cryptography (ECC) with zero-knowledge proof (ZKP) specifically tailored for RFID-communicated applications. By leveraging ECC with ZKP, we not only improve the security of IoT systems but also reduce the persistent threat of replay attacks. Unlike traditional methods, our approach ensures that sensitive data is securely transmitted and authenticated without the risk of unauthorized duplication. We validated our approach using Scyther and BAN logic, well-known tools for assessing security protocols. These validations confirm the robustness of our solution in addressing security challenges and provide further assurance of its effectiveness in protecting IoT systems against various threats, including replay attacks. Our comprehensive analysis revealed that our approach outperforms existing solutions in terms of communication costs and computation costs. The improved efficiency in these key areas underscores the practicality and viability of our solution, further solidifying its position as a leading option for safeguarding IoT ecosystems against emerging threats.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"51 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182386","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A distributed approach for persistent homology computation on a large scale
Pub Date: 2024-08-12 | DOI: 10.1007/s11227-024-06374-5
Riccardo Ceccaroni, Lorenzo Di Rocco, Umberto Ferraro Petrillo, Pierpaolo Brutti
Persistent homology (PH) is a powerful mathematical method for automatically extracting relevant insights from images, such as those produced by high-resolution imaging devices like electron microscopes or new-generation telescopes. However, the method comes at a very high computational cost that is bound to grow further as new imaging devices generate ever-larger volumes of data. In this paper, we present PixHomology, a novel algorithm for efficiently computing zero-dimensional PH on 2D images that optimizes memory and processing time. Leveraging the Apache Spark framework, we also present a distributed version of the algorithm with several optimized variants, able to process large batches of astronomical images concurrently. Finally, we present an experimental analysis showing that our algorithm and its distributed version are efficient in required memory, execution time, and scalability, consistently outperforming existing state-of-the-art PH computation tools on large datasets.
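Zero-dimensional PH on an image can be computed with a union-find sweep over pixels sorted by intensity, with the elder rule deciding which component dies at each merge; the sketch below shows that generic baseline, not PixHomology's optimized or Spark-distributed version.

import numpy as np

def zero_dim_ph(img):
    h, w = img.shape
    parent, birth, pairs = {}, {}, []
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]    # path compression
            a = parent[a]
        return a
    for idx in np.argsort(img.ravel()):      # sweep pixels by increasing value
        i, j = divmod(int(idx), w)
        parent[(i, j)], birth[(i, j)] = (i, j), img[i, j]
        for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if nb in parent:
                ra, rb = find((i, j)), find(nb)
                if ra != rb:
                    old, young = sorted((ra, rb), key=lambda r: birth[r])
                    pairs.append((birth[young], img[i, j]))   # elder rule
                    parent[young] = old
    # The global-minimum component never dies and is omitted here.
    return pairs                             # (birth, death) per component

dgm = zero_dim_ph(np.random.default_rng(2).random((32, 32)))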
{"title":"A distributed approach for persistent homology computation on a large scale","authors":"Riccardo Ceccaroni, Lorenzo Di Rocco, Umberto Ferraro Petrillo, Pierpaolo Brutti","doi":"10.1007/s11227-024-06374-5","DOIUrl":"https://doi.org/10.1007/s11227-024-06374-5","url":null,"abstract":"<p>Persistent homology (PH) is a powerful mathematical method to automatically extract relevant insights from images, such as those obtained by high-resolution imaging devices like electron microscopes or new-generation telescopes. However, the application of this method comes at a very high computational cost that is bound to explode more because new imaging devices generate an ever-growing amount of data. In this paper, we present <i>PixHomology</i>, a novel algorithm for efficiently computing zero-dimensional PH on <span>2D</span> images, optimizing memory and processing time. By leveraging the Apache Spark framework, we also present a distributed version of our algorithm with several optimized variants, able to concurrently process large batches of astronomical images. Finally, we present the results of an experimental analysis showing that our algorithm and its distributed version are efficient in terms of required memory, execution time, and scalability, consistently outperforming existing state-of-the-art PH computation tools when used to process large datasets.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"57 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141941719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}