Fog computing extends the cloud computing paradigm to the network edge and brings significant benefits to resource-constrained IoT applications in intelligent environments. However, security concerns still hinder the large-scale deployment of fog computing infrastructure. Ciphertext-policy attribute-based encryption (CP-ABE) offers a solution to data sharing and security preservation in fog-enhanced intelligent environments. Nevertheless, the lack of an effective mechanism for moderating the execution time of CP-ABE schemes, which grows with the diversity of attributes used in the secret key and the access structure, together with the need to guarantee data security, restricts their practical deployment. This study therefore proposes a collaborative semantic model comprising an outsourced CP-ABE scheme with attribute revocation capability, together with an AES algorithm driven by an ensemble learning system. The ensemble learning model uses multiple classifiers, namely GMDH, SVM, and KNN, to determine the attributes used in CP-ABE. The Dragonfly algorithm, combined with a semantic leveling method, generates compact and effective feature subsets. Experimental results on five smart building datasets indicate that the proposed model is more accurate than existing methods. Moreover, the encryption, decryption, and attribute revocation times improve significantly over existing works, averaging 1.95, 2.11, and 14.64 ms, respectively, and the scheme’s security is analyzed.
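The abstract does not specify how the three base classifiers are combined; a plain majority vote over per-classifier attribute predictions is one plausible reading. The sketch below uses hypothetical stand-in models (real GMDH, SVM, and KNN classifiers would be trained on the smart-building data):

```python
from collections import Counter

def majority_vote(predictions):
    """Return the label most base classifiers agree on (ties: first seen wins)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(sample, classifiers):
    """Ask each base classifier for an attribute label, then vote."""
    return majority_vote([clf(sample) for clf in classifiers])

# Hypothetical stand-ins for trained GMDH / SVM / KNN models.
gmdh = lambda s: "temperature" if s["sensor"] == "T" else "humidity"
svm = lambda s: "temperature"
knn = lambda s: "humidity"

label = ensemble_predict({"sensor": "T"}, [gmdh, svm, knn])
```

With two of the three stand-ins agreeing, the vote resolves to their shared label; the real scheme would map the winning attribute into the CP-ABE access structure.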
Title: "A semantic model based on ensemble learning and attribute-based encryption to increase security of smart buildings in fog computing"
Authors: Ronita Rezapour, Parvaneh Asghari, Hamid Haj Seyyed Javadi, Shamsollah Ghanbari
Journal: The Journal of Supercomputing
DOI: 10.1007/s11227-024-06408-y
Pub Date: 2024-08-29
Pub Date: 2024-08-29
DOI: 10.1007/s11227-024-06420-2
Kamal A. ElDahshan, Gaber E. Abutaleb, Berihan R. Elemary, Ebeid A. Ebeid, AbdAllah A. AlHabshy
As data grow exponentially, the demand for advanced intelligent solutions has become increasingly urgent. Unfortunately, not all businesses have the expertise to utilize machine learning algorithms effectively. To bridge this gap, the present paper introduces a cost-effective, user-friendly, dependable, adaptable, and scalable solution for visualizing, analyzing, processing, and extracting valuable insights from data. The proposed solution is an optimized open-source unsupervised machine learning as a service (MLaaS) framework that caters to both experts and non-experts in machine learning. The framework aims to assist companies and organizations in solving clustering and anomaly detection problems, even without prior experience or internal infrastructure. With a focus on several clustering and anomaly detection techniques, the proposed framework automates data processing while allowing user intervention. The framework includes default algorithms for clustering and outlier detection. In the clustering category, it features three algorithms: k-means, hierarchical clustering, and DBSCAN. For outlier detection, it includes the local outlier factor, k-nearest neighbors, and the Gaussian mixture model. Furthermore, the proposed solution is expandable and may include additional algorithms. It is versatile and capable of handling diverse datasets, generating a separate rapid artificial intelligence model for each dataset and facilitating their comparison. The framework exposes its functionality through a representational state transfer (REST) application programming interface, enabling seamless integration with various systems. Real-world testing on customer segmentation and fraud detection data demonstrates that the framework is reliable, efficient, cost-effective, and time-saving. With the innovative MLaaS framework, companies may harness the full potential of business analysis.
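As a minimal illustration of the clustering defaults, here is a bare-bones 1-D k-means loop (fixed initial centroids for reproducibility; the framework itself would presumably rely on full library implementations):

```python
def kmeans_1d(points, centroids, iters=10):
    """Minimal 1-D k-means: assign each point to its nearest centroid, then
    recompute each centroid as the mean of its assigned points."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            idx = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # Keep an old centroid if its cluster emptied out.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

centroids, clusters = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.5], [0.0, 10.0])
```

On this toy input, the two centroids settle near the two obvious groups around 1.0 and 9.0.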
Title: "An optimized intelligent open-source MLaaS framework for user-friendly clustering and anomaly detection"
Pub Date: 2024-08-29
DOI: 10.1007/s11227-024-06409-x
Aya G. Ayad, Nehal A. Sakr, Noha A. Hikal
The exponential growth of Internet of Things (IoT) devices underscores the need for robust security measures against cyber-attacks. Extensive research in the IoT security community has centered on effective traffic detection models, with a particular focus on anomaly intrusion detection systems (AIDS). This paper specifically addresses the preprocessing stage for IoT datasets and feature selection approaches to reduce the complexity of the data. The goal is to develop an efficient AIDS that strikes a balance between high accuracy and low detection time. To achieve this goal, we propose a hybrid feature selection approach that combines filter and wrapper methods. This approach is integrated into a two-level anomaly intrusion detection system. At level 1, our approach classifies network packets into normal or attack, with level 2 further classifying the attack to determine its specific category. One critical aspect we consider is the imbalance in these datasets, which is addressed using the Synthetic Minority Over-sampling Technique (SMOTE). To evaluate how the selected features affect the performance of the machine learning model across different algorithms, namely Decision Tree, Random Forest, Gaussian Naive Bayes, and k-Nearest Neighbor, we employ benchmark datasets: BoT-IoT, TON-IoT, and CIC-DDoS2019. Evaluation metrics encompass detection accuracy, precision, recall, and F1-score. Results indicate that the decision tree achieves high detection accuracy, ranging between 99.82 and 100%, with short detection times ranging between 0.02 and 0.15 s, outperforming existing AIDS architectures for IoT networks and establishing its superiority in achieving both accuracy and efficient detection times.
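SMOTE addresses class imbalance by synthesizing minority samples along the segment between a minority point and one of its nearest minority neighbors. A minimal sketch (a deterministic interpolation factor `t` replaces the usual random draw, purely for illustration):

```python
def nearest_neighbor(x, others):
    """Index of the closest other minority sample (squared Euclidean distance)."""
    return min(range(len(others)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(x, others[i])))

def smote_sample(x, minority, t=0.5):
    """Synthesize a point on the segment between x and its nearest minority neighbor."""
    others = [m for m in minority if m != x]
    nn = others[nearest_neighbor(x, others)]
    return tuple(a + t * (b - a) for a, b in zip(x, nn))

minority = [(0.0, 0.0), (1.0, 0.0), (4.0, 4.0)]
new_point = smote_sample((0.0, 0.0), minority)  # midway toward (1.0, 0.0)
```

Repeating this until the minority class matches the majority class yields the balanced training set the detection models are fitted on.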
Title: "A hybrid approach for efficient feature selection in anomaly intrusion detection for IoT networks"
Pub Date: 2024-08-29
DOI: 10.1007/s11227-024-06459-1
Monti Babulal Pal, Sanjay Agrawal
Graph neural network (GNN) models, a current machine learning hotspot, have in recent years increasingly been applied to fraud detection over user reviews. The accessible material is complicated and varied, the aggregated user evaluations cover a diverse range of topics, and erroneous information is typically rare among vast amounts of user-generated content. To address feature heterogeneity and uneven data distribution, the review system is modeled as a heterogeneous network, and a new social-theory-based graph neural network model (SGNN) is proposed. By integrating a hierarchical attention structure, the rich user behavior information in the heterogeneous network is fully leveraged to acquire richer semantic representations of comments. Under an ensemble-learning bagging framework, several distinct SGNN sub-models are combined. The sampling technique realizes diversity aggregation of the base learners, which reduces the loss of useful information and improves the ability to identify bogus comments. Testing on real datasets from Amazon and YelpChi shows that the SGNN approach provides strong anomaly detection performance. Compared with existing approaches, SGNN is also shown to be robust against fraudulent entities under skewed distributions of data categories.

Title: "Graph neural network-based attention mechanism to classify spam review over heterogeneous social networks"
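The bagging combination described in the abstract (bootstrap-sample the training data, train one sub-model per sample, majority-vote their verdicts) can be sketched as follows; the stand-in "sub-models" below are hypothetical placeholders for trained SGNN instances:

```python
import random

def bootstrap_sample(data, rng):
    """Draw a same-size sample with replacement (the 'bagging' step)."""
    return [rng.choice(data) for _ in data]

def bagged_vote(models, x):
    """Majority vote across the sub-models' spam/ham verdicts."""
    votes = [m(x) for m in models]
    return max(set(votes), key=votes.count)

rng = random.Random(0)
reviews = ["r1", "r2", "r3", "r4"]
samples = [bootstrap_sample(reviews, rng) for _ in range(3)]

# Hypothetical placeholders: each real sub-model would be an SGNN trained
# on one of the bootstrap samples above.
models = [lambda x: "spam", lambda x: "ham", lambda x: "spam"]
verdict = bagged_vote(models, "suspicious review text")
```

Because each sub-model sees a different resample, their errors decorrelate, which is the mechanism behind the diversity aggregation the abstract credits for improved bogus-comment detection.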
Pub Date: 2024-08-28
DOI: 10.1007/s11227-024-06414-0
Qi Chen, Yajie Wang, Yunfei Sun
UAV path planning poses the challenge of determining the most efficient route from an initial location to a desired destination, while considering mission objectives and adhering to various flight restrictions. This is a challenging optimization problem with high dimensionality that demands efficient path planning methods. To tackle the intricate UAV path planning problem within complex 3D environments, we propose an improved dung beetle optimizer (IDBO) for UAV path planning. Firstly, we formulate a cost function that converts the UAV path planning problem into a multidimensional function optimization problem, considering both trajectory restrictions and safety restrictions of the UAV. This enables us to effectively search for the optimal path. Secondly, we introduce a chaotic strategy to initialize the population, ensuring a comprehensive exploration of the solution space and enhancing population diversity. Additionally, we incorporate exponentially decreasing inertia weights into the algorithm, which improves convergence speed and exploration capability. Furthermore, to tackle the issue of decreasing population diversity during the late stages of convergence, we employ an adaptive Cauchy mutation strategy to enhance population diversity. Through simulation results, we demonstrate that IDBO achieves faster convergence and generates better paths compared to existing approaches in the same environment. These results demonstrate the remarkable efficacy of the proposed improved algorithm in effectively tackling the UAV path planning problem.
These results demonstrate the remarkable efficacy of the proposed improved algorithm in effectively tackling the UAV path planning problem.

Title: "An improved dung beetle optimizer for UAV 3D path planning"
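The exponentially decreasing inertia weight mentioned in the abstract above admits several formulations; one common choice, shown here as an assumption rather than the authors' exact formula, decays from w_max toward w_min over the iteration budget:

```python
import math

def inertia_weight(t, T, w_max=0.9, w_min=0.4):
    """Exponentially decay the inertia weight from w_max toward w_min
    as iteration t approaches the budget T."""
    return w_min + (w_max - w_min) * math.exp(-5.0 * t / T)

weights = [inertia_weight(t, 100) for t in (0, 50, 100)]
```

A large early weight favors global exploration; the shrinking weight shifts the swarm toward local exploitation, which is how such schedules improve convergence speed.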
Introducing a novel approach for assessing connectivity in dynamic optical networks, we propose the quantum-driven particle swarm-optimized self-adaptive support vector machine (QPSO-SASVM) model. By integrating quantum computing and machine learning, this advanced framework offers enhanced convergence and robustness. Tested against a network simulation with 187 nodes and 96 DWDM channels, QPSO-SASVM outperforms traditional benchmarks such as LSTM, Naive method, E-DLSTM, and GRU. Evaluation using metrics such as signal-to-noise ratio, ROC curve, RMSE, and R2 consistently demonstrates superior predictive accuracy and adaptability. These results underscore QPSO-SASVM as a powerful tool for precise and reliable prediction in dynamic optical network environments.
Title: "Optimizing connectivity: a novel AI approach to assess transmission levels in optical networks"
Authors: Mehaboob Mujawar, S. Manikandan, Monica Kalbande, Puneet Kumar Aggarwal, Nallam Krishnaiah, Yasin Genc
DOI: 10.1007/s11227-024-06410-4
Pub Date: 2024-08-28
With the explosive growth of electronic information technology, mobile devices generate massive amounts of data and requirements, which poses a significant challenge to mobile devices with limited computing and battery capacity. Task offloading can transfer computing-intensive tasks from resource-constrained mobile devices to resource-rich servers, thereby significantly reducing the consumption of task execution. How to optimize the task offloading strategy in complex environments with multi-layers and multi-devices to improve efficiency becomes a challenge for the task offloading problem. We optimize the vertical assignment of tasks in a multi-layer system using deep reinforcement learning algorithms, which encompass the cloud, edge, and device layers. To balance the load among multiple devices, we employ the KNN algorithm. Subsequently, we introduce a task state discrimination method based on fuzzy control theory to enhance the performance of computing nodes under high load conditions. By optimizing task offloading policies and execution orders, we successfully reduce the average task execution time and energy consumption of mobile devices. We implemented the proposed algorithm in the PureEdgeSim simulator and performed simulations using different device densities to verify the algorithm’s scalability. The simulation results show that the method we proposed outperforms the methods in previous work. Our method can significantly improve performance in high-device density scenarios.
Our method can significantly improve performance in high-device density scenarios.

Title: "Multi-layer collaborative task offloading optimization: balancing competition and cooperation across local edge and cloud resources"
Authors: Bowen Ling, Xiaoheng Deng, Yuning Huang, Jingjing Zhang, JinSong Gui, Yurong Qian
DOI: 10.1007/s11227-024-06448-4
Pub Date: 2024-08-28
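The abstract does not detail how the KNN step balances load across devices; one plausible reading is that a task is offered to its k nearest edge nodes and placed on the least loaded of them. A sketch under that assumption (node positions and loads are illustrative):

```python
def assign_task(task_pos, nodes, k=3):
    """Pick the k nearest edge nodes, then the least-loaded among them.

    nodes: list of (name, (x, y), current_load) tuples.
    """
    by_distance = sorted(
        nodes,
        key=lambda n: (n[1][0] - task_pos[0]) ** 2 + (n[1][1] - task_pos[1]) ** 2,
    )
    candidates = by_distance[:k]           # horizontal neighborhood
    return min(candidates, key=lambda n: n[2])[0]  # least-loaded wins

edge_nodes = [
    ("edge-a", (0, 0), 0.9),
    ("edge-b", (1, 0), 0.2),
    ("edge-c", (0, 1), 0.5),
    ("edge-d", (9, 9), 0.0),
]
chosen = assign_task((0, 0), edge_nodes)
```

Note that the distant idle node ("edge-d") is never considered: proximity filters first, load decides second, which matches the paper's split between vertical placement (DRL) and horizontal balancing (KNN).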
Trajectory prediction is essential for accurate navigation. Existing deep-learning-based approaches often suffer serious performance degradation when facing shifted data or unseen scenarios. To learn representations that transfer across scenarios, the promising pretraining technique is applied to trajectory prediction tasks. However, existing studies employ point-level masking mechanisms, which cannot capture local motion information spanning multiple time steps. Additionally, for trajectory data that couples multiple motion states, extracting the temporal dependencies within each state sequence remains highly challenging. To tackle this issue, we propose a channel-independent pretrained network via tokenized patching for efficient vehicle trajectory prediction, composed of tokenized patch masking (TPM), a channel-independent extractor (CiE), and state decoupling-mixing (SDM). Specifically, TPM is first built on the designed tokenized patching scheme to represent local information and long-term relations in masked sequences. Then, through a series of weight-shared dense layers, CiE captures the individual dependencies among state sequences in an unsupervised pretraining manner. Moreover, by decoupling the complicated trajectory into pseudo-state representations, SDM independently reconstructs the state sequences and further performs representation-mixing operations to produce usable trajectory predictions.
Finally, extensive experiments show that our framework is effective and achieves state-of-the-art performance on the INTERACTION and Argoverse2 datasets.

Title: "CiPN-TP: a channel-independent pretrained network via tokenized patching for trajectory prediction"
Authors: Qifan Xue, Feng Yang, Shengyi Li, Xuanpeng Li, Guangyu Li, Weigong Zhang
DOI: 10.1007/s11227-024-06462-6
Pub Date: 2024-08-28
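The tokenized patch masking (TPM) idea, masking contiguous patches rather than single time steps so that local motion is hidden as a unit, can be sketched as follows (patch length and mask positions are illustrative; the paper's actual tokenization is not specified in the abstract):

```python
def to_patches(seq, patch_len):
    """Split a state sequence into non-overlapping patches (tokens)."""
    return [seq[i:i + patch_len] for i in range(0, len(seq), patch_len)]

def mask_patches(patches, masked_idx, mask_token="<MASK>"):
    """Replace whole patches (not single points) so local motion is hidden together."""
    return [mask_token if i in masked_idx else p for i, p in enumerate(patches)]

trajectory = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7]  # one state channel
patches = to_patches(trajectory, patch_len=2)            # 4 patches of 2 steps
masked = mask_patches(patches, masked_idx={1, 3})
```

During pretraining the network would be asked to reconstruct the masked patches from the visible ones; applying this per channel is what makes the extractor channel-independent.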
Pub Date: 2024-08-28  DOI: 10.1007/s11227-024-06404-2
Seokjin Lee, Seongryong Kim, Jungeun Kim
Public transportation systems play a vital role in modern cities, enhancing the quality of life and fostering sustainable economic growth. Modeling and understanding the complexities of these transportation networks are crucial for effective urban planning and management. Traditional models often fall short in capturing the intricate interactions and interdependencies in multimodal public transportation systems. To address this challenge, recent research has embraced multilayer network models, offering a more sophisticated representation of these networks. However, there is a need to explore and develop robustness analysis techniques tailored to these general multilayer networks to fully assess their complexities in real-world scenarios. In this paper, we employ a general multilayer network model to comprehensively analyze a real-world multimodal transportation network in Seoul, South Korea. We leverage a large volume of traffic data to model, visualize, and evaluate the city’s mobility patterns. Additionally, we introduce two novel methodologies for robustness analysis, one based on random walk coverage and the other on eigenvalue, specifically designed for general multilayer networks. Extensive experiments using the large volume of real-world data sets demonstrate the effectiveness of the proposed approaches.
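A general multilayer network of the kind used here is commonly encoded as a supra-adjacency matrix: intralayer adjacency blocks on the diagonal, and interlayer coupling links connecting each node to its replicas in other layers. The sketch below is an illustrative assumption (uniform coupling weight `omega`, one replica per node per layer), not the paper's exact construction.

```python
import numpy as np

def supra_adjacency(layers, omega):
    """Assemble a supra-adjacency matrix for a multiplex network:
    intralayer adjacency blocks on the diagonal, interlayer links of
    weight omega joining each node to its replica in every other layer."""
    L = len(layers)
    n = layers[0].shape[0]
    supra = np.zeros((L * n, L * n))
    for a in range(L):
        supra[a*n:(a+1)*n, a*n:(a+1)*n] = layers[a]
        for b in range(L):
            if a != b:
                supra[a*n:(a+1)*n, b*n:(b+1)*n] = omega * np.eye(n)
    return supra

# two toy layers (e.g., a bus layer and a subway layer) over the same 3 stations
bus    = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
subway = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], float)
A = supra_adjacency([bus, subway], omega=0.5)
```

On this representation, standard spectral quantities (e.g., eigenvalues of `A`) can be computed directly, which is what makes eigenvalue-based robustness analysis applicable to the whole multimodal system at once.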
{"title":"Robustness Analysis of Public Transportation Systems in Seoul Using General Multilayer Network Models","authors":"Seokjin Lee, Seongryong Kim, Jungeun Kim","doi":"10.1007/s11227-024-06404-2","DOIUrl":"https://doi.org/10.1007/s11227-024-06404-2","url":null,"abstract":"<p>Public transportation systems play a vital role in modern cities, enhancing the quality of life and fostering sustainable economic growth. Modeling and understanding the complexities of these transportation networks are crucial for effective urban planning and management. Traditional models often fall short in capturing the intricate interactions and interdependencies in multimodal public transportation systems. To address this challenge, recent research has embraced multilayer network models, offering a more sophisticated representation of these networks. However, there is a need to explore and develop robustness analysis techniques tailored to these general multilayer networks to fully assess their complexities in real-world scenarios. In this paper, we employ a general multilayer network model to comprehensively analyze a real-world multimodal transportation network in Seoul, South Korea. We leverage a large volume of traffic data to model, visualize, and evaluate the city’s mobility patterns. Additionally, we introduce two novel methodologies for robustness analysis, one based on random walk coverage and the other on eigenvalue, specifically designed for general multilayer networks. 
Extensive experiments using the large volume of real-world data sets demonstrate the effectiveness of the proposed approaches.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"88 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
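The random-walk-coverage idea behind the first proposed robustness methodology can be illustrated with a Monte-Carlo estimate on a (supra-)adjacency matrix: a network is more robust when a fixed-length walk still reaches a large fraction of nodes after damage. The estimator below is a hedged sketch of that general idea, not the paper's specific measure.

```python
import numpy as np

def random_walk_coverage(adj, steps, n_walks, seed=0):
    """Monte-Carlo estimate of the expected fraction of nodes visited by
    a random walk of fixed length -- a simple robustness proxy: removing
    critical nodes or edges lowers how much of the network a walk covers."""
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    total = 0.0
    for _ in range(n_walks):
        visited = np.zeros(n, dtype=bool)
        node = rng.integers(n)
        visited[node] = True
        for _ in range(steps):
            nbrs = np.nonzero(adj[node])[0]
            if nbrs.size == 0:  # dead end: the walk is stuck
                break
            node = rng.choice(nbrs)
            visited[node] = True
        total += visited.mean()
    return total / n_walks

# a 5-node ring: every node reachable, so coverage should be high
ring = np.roll(np.eye(5), 1, axis=1) + np.roll(np.eye(5), -1, axis=1)
cov = random_walk_coverage(ring, steps=20, n_walks=50)
```

Comparing `cov` before and after deleting a station's row/column quantifies how much that station's failure degrades network explorability.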
Pub Date: 2024-08-26  DOI: 10.1007/s11227-024-06428-8
Yang Gao, Gang Quan, Soamar Homsi, Wujie Wen, Liqiang Wang
Despite the enormous technical and financial advantages of cloud computing, security and privacy have always been the primary concerns for adopting cloud computing facilities, especially for government agencies and commercial sectors with high-security requirements. Homomorphic encryption (HE) has recently emerged as an effective tool in ensuring privacy and security for sensitive applications by allowing computing on encrypted data. One major obstacle to employing HE-based computation, however, is its excessive computational cost, which can be orders of magnitude higher than its counterpart based on the plaintext. In this paper, we study the problem of how to reduce the HE-based computational cost for general matrix multiplication, i.e., a fundamental building block for numerous practical applications, by taking advantage of the single instruction multiple data operations supported by HE schemes. Specifically, we develop a novel element-wise algorithm for general matrix multiplication, based on which we propose two HE-based general matrix multiplication algorithms to reduce the HE computation cost. Our experimental results show that our algorithms significantly outperform the state-of-the-art approaches of HE-based matrix multiplication.
{"title":"Secure and efficient general matrix multiplication on cloud using homomorphic encryption","authors":"Yang Gao, Gang Quan, Soamar Homsi, Wujie Wen, Liqiang Wang","doi":"10.1007/s11227-024-06428-8","DOIUrl":"https://doi.org/10.1007/s11227-024-06428-8","url":null,"abstract":"<p>Despite the enormous technical and financial advantages of cloud computing, security and privacy have always been the primary concerns for adopting cloud computing facilities, especially for government agencies and commercial sectors with high-security requirements. Homomorphic encryption (HE) has recently emerged as an effective tool in ensuring privacy and security for sensitive applications by allowing computing on encrypted data. One major obstacle to employing HE-based computation, however, is its excessive computational cost, which can be orders of magnitude higher than its counterpart based on the plaintext. In this paper, we study the problem of how to reduce the HE-based computational cost for general matrix multiplication, i.e., a fundamental building block for numerous practical applications, by taking advantage of the single instruction multiple data operations supported by HE schemes. Specifically, we develop a novel element-wise algorithm for general matrix multiplication, based on which we propose two HE-based general matrix multiplication algorithms to reduce the HE computation cost. 
Our experimental results show that our algorithms significantly outperform the state-of-the-art approaches of HE-based matrix multiplication.</p>","PeriodicalId":501596,"journal":{"name":"The Journal of Supercomputing","volume":"54 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142182521","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
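The appeal of element-wise matrix multiplication under HE SIMD batching can be seen in plaintext: if each column of the result is built purely from slot-wise products and additions, the same arithmetic maps onto batched ciphertexts (e.g., BFV/CKKS slots), where one element-wise multiply processes all slots at once. The mock below only demonstrates that operation profile on plaintext NumPy vectors — it is not the paper's algorithm, and a real HE implementation would also need rotations and noise management.

```python
import numpy as np

def slotwise_matmul(A, B):
    """Compute A @ B using only element-wise vector products and
    additions -- the operation profile HE SIMD (slot) batching supports.
    Column j of C accumulates A[:, k] scaled by the scalar B[k, j]."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for j in range(p):
        acc = np.zeros(n)  # plays the role of one ciphertext with n slots
        for k in range(m):
            # replicate B[k, j] across all slots, then one slot-wise
            # multiply-accumulate (a single HE op on a batched ciphertext)
            acc += A[:, k] * np.full(n, B[k, j])
        C[:, j] = acc
    return C

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 2))
C = slotwise_matmul(A, B)
```

Each column of `A` would be one packed ciphertext, so the inner loop costs `m` element-wise multiplications per output column regardless of `n` — the source of the SIMD speedup the paper exploits.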