Pub Date : 2025-04-15 DOI: 10.1016/j.hcc.2025.100315
Quntao Zhu , Mengfan Li , Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin
Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) Graph-based approaches encode KG elements into vectors using structural score functions. (2) Text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graph and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. In addition, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model outperforms existing models on KG completion and question-answering tasks.
{"title":"Text-augmented long-term relation dependency learning for knowledge graph representation","authors":"Quntao Zhu , Mengfan Li , Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin","doi":"10.1016/j.hcc.2025.100315","DOIUrl":"10.1016/j.hcc.2025.100315","url":null,"abstract":"<div><div>Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches follow two categories: (1) Graph-based approaches encode KG elements into vectors using structural score functions. (2) Text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graph and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize a PLM based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. Besides, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning the long-term relation dependency and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model exhibits superiority over existing models across KG completion and question-answering tasks.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100315"},"PeriodicalIF":3.0,"publicationDate":"2025-04-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-04-08 DOI: 10.1016/j.hcc.2025.100321
Duaa S. Alqattan , Vaclav Snasel , Rajiv Ranjan , Varun Ojha
Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
{"title":"Analysis of deep learning under adversarial attacks in hierarchical federated learning","authors":"Duaa S. Alqattan , Vaclav Snasel , Rajiv Ranjan , Varun Ojha","doi":"10.1016/j.hcc.2025.100321","DOIUrl":"10.1016/j.hcc.2025.100321","url":null,"abstract":"<div><div>Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100321"},"PeriodicalIF":3.0,"publicationDate":"2025-04-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158385","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-25 DOI: 10.1016/j.hcc.2025.100320
Hao Yu , Guijuan Wang , Anming Dong , Yubing Han , Yawei Wang , Jiguo Yu
With the growth of the Internet of Things (IoT), millions of users, devices, and applications compose a complex and heterogeneous network, which increases the complexity of digital identity management. Traditional centralized digital identity management systems (DIMS) suffer from single points of failure and privacy leakage. The emergence of blockchain technology presents an opportunity for DIMS to handle the single-point-of-failure problem associated with centralized architectures. However, the transparency inherent in blockchain technology still exposes DIMS to privacy leakage. In this paper, we propose the privacy-protected IoT DIMS (PPID), a novel blockchain-based distributed identity system that protects the privacy of on-chain identity data. PPID achieves identity-credential-verification unlinkability. Specifically, PPID adopts a Zero-Knowledge Proof (ZKP) algorithm and Shamir secret sharing (SSS) to safeguard privacy, resist replay attacks, and ensure data integrity. Finally, we evaluate the performance of ZKP computation in PPID, as well as the transaction fees of smart contracts on the Ethereum blockchain.
{"title":"Blockchain-enabled privacy protection scheme for IoT digital identity management","authors":"Hao Yu , Guijuan Wang , Anming Dong , Yubing Han , Yawei Wang , Jiguo Yu","doi":"10.1016/j.hcc.2025.100320","DOIUrl":"10.1016/j.hcc.2025.100320","url":null,"abstract":"<div><div>With the growth of the Internet of Things (IoT), millions of users, devices, and applications compose a complex and heterogeneous network, which increases the complexity of digital identity management. Traditional centralized digital identity management systems (DIMS) confront single points of failure and privacy leakages. The emergence of blockchain technology presents an opportunity for DIMS to handle the single point of failure problem associated with centralized architectures. However, the transparency inherent in blockchain technology still exposes DIMS to privacy leakages. In this paper, we propose the privacy-protected IoT DIMS (PPID), a novel blockchain-based distributed identity system to protect the privacy of on-chain identity data. The PPID achieves the unlinkability of identity-credential-verification. Specifically, the PPID adopts the Zero Knowledge Proof (ZKP) algorithm and Shamir secret sharing (SSS) to safeguard privacy security, resist replay attacks, and ensure data integrity. Finally, we evaluate the performance of ZKP computation in PPID, as well as the transaction fees of smart contract on the Ethereum blockchain.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100320"},"PeriodicalIF":3.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-24 DOI: 10.1016/j.hcc.2025.100318
Xiao Wang , Yanqi Zhao , Lingyue Zhang , Min Xie , Yong Yu , Huilin Li
With the emergence of illegal behaviors such as money laundering and extortion, the regulation of privacy-preserving cryptocurrency has become increasingly important. However, existing regulated privacy-preserving cryptocurrencies usually rely on a single regulator, which seriously threatens users’ privacy once the regulator is corrupt. To address this issue, we propose a linkable group signature against malicious regulators (ALGS) for regulated privacy-preserving cryptocurrencies. Specifically, a set of regulators work together to regulate users’ behavior during cryptocurrency transactions. Even if a certain number of regulators are corrupted, our scheme still ensures the identity security of legal users. Meanwhile, our scheme prevents double-spending during cryptocurrency transactions. We first propose the model of ALGS and define its security properties. Then, we present a concrete construction of ALGS, which provides CCA-2 anonymity, traceability, non-frameability, and linkability. We finally evaluate our ALGS scheme and report its advantages by comparing it with other schemes. Implementation results show that the runtime of our signature algorithm is reduced by 17% compared to Emura et al. (2017) and 49% compared to KSS19 (Krenn et al. 2019), while the verification time is reduced by 31% compared to Emura et al. and 47% compared to KSS19.
{"title":"Linkable group signatures against malicious regulators for regulated privacy-preserving cryptocurrencies","authors":"Xiao Wang , Yanqi Zhao , Lingyue Zhang , Min Xie , Yong Yu , Huilin Li","doi":"10.1016/j.hcc.2025.100318","DOIUrl":"10.1016/j.hcc.2025.100318","url":null,"abstract":"<div><div>With the emergence of illegal behaviors such as money laundering and extortion, the regulation of privacy-preserving cryptocurrency has become increasingly important. However, existing regulated privacy-preserving cryptocurrencies usually rely on a single regulator, which seriously threatens users’ privacy once the regulator is corrupt. To address this issue, we propose a linkable group signature against malicious regulators (ALGS) for regulated privacy-preserving cryptocurrencies. Specifically, a set of regulators work together to regulate users’ behavior during cryptocurrencies transactions. Even if a certain number of regulators are corrupted, our scheme still ensures the identity security of a legal user. Meanwhile, our scheme can prevent double-spending during cryptocurrency transactions. We first propose the model of ALGS and define its security properties. Then, we present a concrete construction of ALGS, which provides CCA-2 anonymity, traceability, non-frameability, and linkability. We finally evaluate our ALGS scheme and report its advantages by comparing other schemes. The implementation result shows that the runtime of our signature algorithm is reduced by 17% compared to Emura et al. (2017) and 49% compared to KSS19 (Krenn et al. 2019), while the verification time is reduced by 31% compared to Emura et al. and 47% compared to KSS19.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100318"},"PeriodicalIF":3.0,"publicationDate":"2025-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-21 DOI: 10.1016/j.hcc.2025.100319
Yanqi Zhao , Jie Zhang , Xiaoyi Yang , Minghong Sun , Yuxin Zhang , Yong Yu , Huilin Li
Monero uses ring signatures to protect users’ privacy. However, Monero’s anonymity covers various illicit activities, such as money laundering, as it becomes difficult to identify and punish malicious users. Therefore, it is necessary to regulate illegal transactions while protecting the privacy of legal users. We present a revocable linkable ring signature scheme (RLRS), which balances privacy and supervision in privacy-preserving blockchain transactions. By introducing the role of a revocation authority, we can trace malicious users and revoke them in time. We define the security model of the revocable linkable ring signature and give a concrete construction of RLRS. We employ an accumulator and ElGamal encryption to achieve the functionalities of revocation and tracing. In addition, we compress the ring signature size to the logarithmic level by using non-interactive sum arguments of knowledge (NISA). Then, we prove the security of RLRS, which satisfies anonymity, unforgeability, linkability, and non-frameability. Lastly, we compare RLRS with other ring signature schemes: RLRS is linkable, traceable, and revocable with logarithmic communication complexity and less computational overhead. We also implement the RLRS scheme, and the results show that its verification time is 1.5 s with 500 ring members.
{"title":"A logarithmic size revocable linkable ring signature for privacy-preserving blockchain transactions","authors":"Yanqi Zhao , Jie Zhang , Xiaoyi Yang , Minghong Sun , Yuxin Zhang , Yong Yu , Huilin Li","doi":"10.1016/j.hcc.2025.100319","DOIUrl":"10.1016/j.hcc.2025.100319","url":null,"abstract":"<div><div>Monero uses ring signatures to protect users’ privacy. However, Monero’s anonymity covers various illicit activities, such as money laundering, as it becomes difficult to identify and punish malicious users. Therefore, it is necessary to regulate illegal transactions while protecting the privacy of legal users. We present a revocable linkable ring signature scheme (RLRS), which balances the privacy and supervision for privacy-preserving blockchain transactions. By setting the role of revocation authority, we can trace the malicious user and revoke it in time. We define the security model of the revocable linkable ring signature and give the concrete construction of RLRS. We employ accumulator and ElGamal encryption to achieve the functionalities of revocation and tracing. In addition, we compress the ring signature size to the logarithmic level by using non-interactive sum arguments of knowledge (NISA). Then, we prove the security of RLRS, which satisfies anonymity, unforgeability, linkability, and non-frameability. Lastly, we compare RLRS with other ring signature schemes. RLRS is linkable, traceable, and revocable with logarithmic communication complexity and less computational overhead. We also implement RLRS scheme and the results show that its verification time is 1.5s with 500 ring members.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100319"},"PeriodicalIF":3.0,"publicationDate":"2025-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145324755","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-18 DOI: 10.1016/j.hcc.2024.100270
Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang
Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose the acquired knowledge related to previous tasks when training on new tasks. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling the challenge of catastrophic forgetting. However, within the realm of domain incremental learning, a characteristic type of continual learning, there exists an additional overlooked inductive bias, which warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the weights of the LoRA module for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on a publicly available domain incremental learning benchmark dataset. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
{"title":"DC-LoRA: Domain correlation low-rank adaptation for domain incremental learning","authors":"Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang","doi":"10.1016/j.hcc.2024.100270","DOIUrl":"10.1016/j.hcc.2024.100270","url":null,"abstract":"<div><div>Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During the process of continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose the acquired knowledge related to previous tasks when training on new tasks. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling the challenge of catastrophic forgetting. However, within the realm of domain incremental learning, a type characteristic of continual learning, there exists an additional overlooked inductive bias, which warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach put forward a domain correlated loss, which encourages the weights of the LoRA module for adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on publicly available domain incremental learning benchmark dataset. The experimental results demonstrate that our method outperforms state-of-the-art approaches.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100270"},"PeriodicalIF":3.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145049160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-18 DOI: 10.1016/j.hcc.2025.100317
Zulfiqar Ali Khan, Izzatdin Abdul Aziz
Cloud computing has been the core infrastructure for serving workloads offloaded from IoT devices. However, for time-sensitive tasks, reducing end-to-end delay is a major concern. With advancements in the IoT industry, the computation requirements of incoming tasks at the cloud are escalating, resulting in compromised quality of service. Fog computing emerged to alleviate such issues. However, the resources at the fog layer are limited and require efficient usage. The Whale Optimization Algorithm is a promising meta-heuristic extensively used to solve various optimization problems. However, being an exploitation-driven technique, its exploration potential is limited, resulting in reduced solution diversity, local optima, and poor convergence. To address these issues, this study proposes a dynamic opposition learning approach that enhances the Whale Optimization Algorithm for offloading independent tasks. Opposition-Based Learning (OBL) has been extensively used to improve the exploration capability of the Whale Optimization Algorithm. However, it is computationally expensive, and appropriate OBL strategies must be applied efficiently to fully realize its advantages. Therefore, our proposed algorithm employs three OBL strategies at different stages to minimize end-to-end delay and improve load balancing during task offloading. First, basic OBL and quasi-OBL are employed during population initialization. Then, the proposed dynamic partial-opposition method enhances search-space exploration using an information-based triggering mechanism that tracks the status of each agent. The results illustrate significant performance improvements by the proposed algorithm compared to SACO, PSOGA, IPSO, and oppoCWOA on the NASA Ames iPSC and HPC2N workload datasets.
{"title":"Dynamic OBL-driven whale optimization algorithm for independent tasks offloading in fog computing","authors":"Zulfiqar Ali Khan, Izzatdin Abdul Aziz","doi":"10.1016/j.hcc.2025.100317","DOIUrl":"10.1016/j.hcc.2025.100317","url":null,"abstract":"<div><div>Cloud computing has been the core infrastructure for providing services to the offloaded workloads from IoT devices. However, for time-sensitive tasks, reducing end-to-end delay is a major concern. With advancements in the IoT industry, the computation requirements of incoming tasks at the cloud are escalating, resulting in compromised quality of service. Fog computing emerged to alleviate such issues. However, the resources at the fog layer are limited and require efficient usage. The Whale Optimization Algorithm is a promising meta-heuristic algorithm extensively used to solve various optimization problems. However, being an exploitation-driven technique, its exploration potential is limited, resulting in reduced solution diversity, local optima, and poor convergence. To address these issues, this study proposes a dynamic opposition learning approach to enhance the Whale Optimization Algorithm to offload independent tasks. Opposition-Based Learning (OBL) has been extensively used to improve the exploration capability of the Whale Optimization Algorithm. However, it is computationally expensive and requires efficient utilization of appropriate OBL strategies to fully realize its advantages. Therefore, our proposed algorithm employs three OBL strategies at different stages to minimize end-to-end delay and improve load balancing during task offloading. First, basic OBL and quasi-OBL are employed during population initialization. Then, the proposed dynamic partial-opposition method enhances search space exploration using an information-based triggering mechanism that tracks the status of each agent. The results illustrate significant performance improvements by the proposed algorithm compared to SACO, PSOGA, IPSO, and oppoCWOA using the NASA Ames iPSC and HPC2N workload datasets.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100317"},"PeriodicalIF":3.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-17 DOI: 10.1016/j.hcc.2025.100316
Yongdan Wang , Haibin Zhang , Baohan Huang , Zhijun Lin , Chuan Pang
The stock market is a vital component of the financial sector. Due to the inherent uncertainty and volatility of the stock market, stock price prediction has always been both intriguing and challenging. To improve the accuracy of stock predictions, we construct a model that integrates investor sentiment with Long Short-Term Memory (LSTM) networks. By extracting sentiment data from the “Financial Post” and quantifying it with the VADER sentiment lexicon, we add a sentiment index to improve stock price forecasting. We combine sentiment factors with traditional trading indicators, making predictions more accurate. Furthermore, we deploy our system on the blockchain to enhance data security, reduce the risk of malicious attacks, and improve system robustness. This integration of sentiment analysis and blockchain offers a novel approach to stock market prediction, providing secure and reliable decision support for investors and financial institutions. We deploy our system and demonstrate that it is both efficient and practical. For 312 bytes of stock data, we achieve a latency of 434.42 ms with one node and 565.69 ms with five nodes. For 1700 bytes of sentiment data, we achieve a latency of 1405.25 ms with one node and 1750.25 ms with five nodes.
{"title":"LSTM stock prediction model based on blockchain","authors":"Yongdan Wang , Haibin Zhang , Baohan Huang , Zhijun Lin , Chuan Pang","doi":"10.1016/j.hcc.2025.100316","DOIUrl":"10.1016/j.hcc.2025.100316","url":null,"abstract":"<div><div>The stock market is a vital component of the financial sector. Due to the inherent uncertainty and volatility of the stock market, stock price prediction has always been both intriguing and challenging. To improve the accuracy of stock predictions, we construct a model that integrates investor sentiment with Long Short-Term Memory (LSTM) networks. By extracting sentiment data from the “Financial Post” and quantifying it with the Vader sentiment lexicon, we add a sentiment index to improve stock price forecasting. We combine sentiment factors with traditional trading indicators, making predictions more accurate. Furthermore, we deploy our system on the blockchain to enhance data security, reduce the risk of malicious attacks, and improve system robustness. This integration of sentiment analysis and blockchain offers a novel approach to stock market predictions, providing secure and reliable decision support for investors and financial institutions. We deploy our system and demonstrate that our system is both efficient and practical. For 312 bytes of stock data, we achieve a latency of 434.42 ms with one node and 565.69 ms with five nodes. For 1700 bytes of sentiment data, we achieve a latency of 1405.25 ms with one node and 1750.25 ms with five nodes.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100316"},"PeriodicalIF":3.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145266604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-03 DOI: 10.1016/j.hcc.2025.100313
Ao Xiong , Chenbin Qiao , Wenjing Li , Dong Wang , Da Li , Bo Gao , Weixian Wang
Anomaly detection in blockchain transactions faces several challenges, the most prominent being the imbalance between positive and negative samples: most transaction data are normal, with only a small fraction being anomalous. Additionally, blockchain transaction datasets tend to be small and often incomplete, which complicates anomaly detection. With simple AI models, selecting an appropriate model and tuning its parameters becomes difficult, resulting in poor performance. To address these issues, this paper proposes GANAnomaly, an anomaly detection model based on Generative Adversarial Networks (GANs) and autoencoders. The model consists of three components: a data generation model, an encoding model, and a detection model. First, the Wasserstein GAN (WGAN) is employed as the data generation model. The generated data is then used to train an encoding model that performs feature extraction and dimensionality reduction. Finally, the trained encoder serves as the feature extractor for the detection model. This approach leverages GANs to mitigate the challenges of low data volume and data imbalance, while the encoder extracts relevant features and reduces dimensionality. Experimental results demonstrate that the proposed anomaly detection model outperforms traditional methods by more accurately identifying anomalous blockchain transactions, reducing the false positive rate, and improving both accuracy and efficiency.
{"title":"Block-chain abnormal transaction detection method based on generative adversarial network and autoencoder","authors":"Ao Xiong , Chenbin Qiao , Wenjing Li , Dong Wang , Da Li , Bo Gao , Weixian Wang","doi":"10.1016/j.hcc.2025.100313","DOIUrl":"10.1016/j.hcc.2025.100313","url":null,"abstract":"<div><div>Anomaly detection in blockchain transactions faces several challenges, the most prominent being the imbalance between positive and negative samples. Most transaction data are normal, with only a small fraction of anomalous data. Additionally, blockchain transaction datasets tend to be small and often incomplete, which complicates the process of anomaly detection. When using simple AI models, selecting the appropriate model and tuning parameters becomes difficult, resulting in poor performance. To address these issues, this paper proposes GANAnomaly, an anomaly detection model based on Generative Adversarial Networks (GANs) and Autoencoders. The model consists of three components: a data generation model, an encoding model, and a detection model. Firstly, the Wasserstein GAN (WGAN) is employed as the data generation model. The generated data is then used to train an encoding model that performs feature extraction and dimensionality reduction. Finally, the trained encoder serves as the feature extractor for the detection model. This approach leverages GANs to mitigate the challenges of low data volume and data imbalance, while the encoder extracts relevant features and reduces dimensionality. Experimental results demonstrate that the proposed anomaly detection model outperforms traditional methods by more accurately identifying anomalous blockchain transactions, reducing the false positive rate, and improving both accuracy and efficiency.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100313"},"PeriodicalIF":3.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145158370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2025-03-03 DOI: 10.1016/j.hcc.2025.100314
Ziqi Zhou , Agon Memedi , Chunghan Lee , Seyhan Ucar , Onur Altintas , Falko Dressler
Edge computing is becoming ever more relevant to offload compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. As many tasks in this application domain are time critical, offloading to the cloud is prohibitive. Additionally, task deadlines have to be dealt with. This paper addresses two main challenges. First, we present a task migration algorithm supporting deadlines in vehicular edge computing. The algorithm follows the earliest-deadline-first model, but in the presence of dynamic processing resources, i.e., vehicles joining and leaving a VMC. This task offloading is very sensitive to the mobility of vehicles in a VMC, i.e., the so-called dwell time a vehicle spends in the VMC. Thus, second, we propose a machine learning-based solution for dwell time prediction. Our dwell time prediction model uses a random forest approach to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of a simple artificial intersection scenario as well as of real urban traffic in the cities of Luxembourg and Nagoya. The proposed approach realizes low-delay and low-failure task migration under dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.
{"title":"Task migration with deadlines using machine learning-based dwell time prediction in vehicular micro clouds","authors":"Ziqi Zhou , Agon Memedi , Chunghan Lee , Seyhan Ucar , Onur Altintas , Falko Dressler","doi":"10.1016/j.hcc.2025.100314","DOIUrl":"10.1016/j.hcc.2025.100314","url":null,"abstract":"<div><div>Edge computing is becoming ever more relevant to offload compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. As many tasks in this application domain are time critical, offloading to the cloud is prohibitive. Additionally, task deadlines have to be dealt with. This paper addresses two main challenges. First, we present a task migration algorithm supporting deadlines in vehicular edge computing. The algorithm is following the earliest deadline first model but in presence of dynamic processing resources, <em>i.e</em>, vehicles joining and leaving a VMC. This task offloading is very sensitive to the mobility of vehicles in a VMC, <em>i.e</em>, the so-called dwell time a vehicles spends in the VMC. Thus, secondly, we propose a machine learning-based solution for dwell time prediction. Our dwell time prediction model uses a random forest approach to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of an artificial simple intersection scenario as well as of real urban traffic in cities of Luxembourg and Nagoya. Our proposed approach is able to realize low-delay and low-failure task migration in dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 2","pages":"Article 100314"},"PeriodicalIF":3.2,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143891463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}