
Latest Publications in High-Confidence Computing

Text-augmented long-term relation dependency learning for knowledge graph representation
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-04-15 DOI: 10.1016/j.hcc.2025.100315
Quntao Zhu , Mengfan Li , Yuanjun Gao, Yao Wan, Xuanhua Shi, Hai Jin
Knowledge graph (KG) representation learning aims to map entities and relations into a low-dimensional representation space, showing significant potential in many tasks. Existing approaches fall into two categories: (1) graph-based approaches encode KG elements into vectors using structural score functions; (2) text-based approaches embed text descriptions of entities and relations via pre-trained language models (PLMs), further fine-tuned with triples. We argue that graph-based approaches struggle with sparse data, while text-based approaches face challenges with complex relations. To address these limitations, we propose a unified Text-Augmented Attention-based Recurrent Network, bridging the gap between graphs and natural language. Specifically, we employ a graph attention network based on local influence weights to model local structural information and utilize PLM-based prompt learning to learn textual information, enhanced by a mask-reconstruction strategy based on global influence weights and textual contrastive learning for improved robustness and generalizability. In addition, to effectively model multi-hop relations, we propose a novel semantic-depth guided path extraction algorithm and integrate cross-attention layers into recurrent neural networks to facilitate learning long-term relation dependencies and offer an adaptive attention mechanism for varied-length information. Extensive experiments demonstrate that our model outperforms existing models on KG completion and question-answering tasks.
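The attention-weighted neighbor aggregation the abstract describes can be sketched as a single generic graph-attention step. This is an illustrative sketch, not the paper's actual model: the plain dot-product scoring and the function names are assumptions, and the paper's local influence weights are omitted.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of attention scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def attention_aggregate(entity, neighbors):
    """Combine neighbor embeddings into one vector, weighting each
    neighbor by its attention score against the central entity."""
    weights = softmax([dot(entity, n) for n in neighbors])
    dim = len(entity)
    return [sum(w * n[i] for w, n in zip(weights, neighbors)) for i in range(dim)]
```

In a full graph attention network the scores would come from learned projections; here raw dot products stand in to show the aggregation shape.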
Citations: 0
Analysis of deep learning under adversarial attacks in hierarchical federated learning
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-04-08 DOI: 10.1016/j.hcc.2025.100321
Duaa S. Alqattan , Vaclav Snasel , Rajiv Ranjan , Varun Ojha
Hierarchical Federated Learning (HFL) extends traditional Federated Learning (FL) by introducing multi-level aggregation in which model updates pass through clients, edge servers, and a global server. While this hierarchical structure enhances scalability, it also increases vulnerability to adversarial attacks — such as data poisoning and model poisoning — that disrupt learning by introducing discrepancies at the edge server level. These discrepancies propagate through aggregation, affecting model consistency and overall integrity. Existing studies on adversarial behaviour in FL primarily rely on single-metric approaches — such as cosine similarity or Euclidean distance — to assess model discrepancies and filter out anomalous updates. However, these methods fail to capture the diverse ways adversarial attacks influence model updates, particularly in highly heterogeneous data environments and hierarchical structures. Attackers can exploit the limitations of single-metric defences by crafting updates that seem benign under one metric while remaining anomalous under another. Moreover, prior studies have not systematically analysed how model discrepancies evolve over time, vary across regions, or affect clustering structures in HFL architectures. To address these limitations, we propose the Model Discrepancy Score (MDS), a multi-metric framework that integrates Dissimilarity, Distance, Uncorrelation, and Divergence to provide a comprehensive analysis of how adversarial activity affects model discrepancies. Through temporal, spatial, and clustering analyses, we examine how attacks affect model discrepancies at the edge server level in 3LHFL and 4LHFL architectures and evaluate MDS’s ability to distinguish between benign and malicious servers. Our results show that while 4LHFL effectively mitigates discrepancies in regional attack scenarios, it struggles with distributed attacks due to additional aggregation layers that obscure distinguishable discrepancy patterns over time, across regions, and within clustering structures. Factors influencing detection include data heterogeneity, attack sophistication, and hierarchical aggregation depth. These findings highlight the limitations of single-metric approaches and emphasize the need for multi-metric strategies such as MDS to enhance HFL security.
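To make the multi-metric idea concrete, here is a toy discrepancy score that averages the four ingredients MDS names: dissimilarity (1 − cosine), Euclidean distance, uncorrelation (1 − Pearson), and a KL-style divergence over magnitude-normalized updates. The equal weighting and normalization choices are assumptions for illustration; the authors' exact formulation may differ.

```python
import math

def _cos(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den

def model_discrepancy(u, v):
    """Toy multi-metric discrepancy between two flattened model updates.
    Assumes non-constant, non-zero vectors of equal length."""
    dissimilarity = 1 - _cos(u, v)
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))
    # Pearson correlation = cosine of the mean-centred vectors
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    uncorrelation = 1 - _cos([a - mu for a in u], [b - mv for b in v])
    # KL-style divergence over magnitudes normalized to distributions
    su, sv = sum(abs(a) for a in u), sum(abs(b) for b in v)
    pu, pv = [abs(a) / su for a in u], [abs(b) / sv for b in v]
    divergence = sum(p * math.log(p / q) for p, q in zip(pu, pv) if p > 0 and q > 0)
    return (dissimilarity + distance + uncorrelation + divergence) / 4
```

Identical updates score (near) zero; a benign/malicious pair that looks fine under one metric can still be flagged by the combined score.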
Citations: 0
Blockchain-enabled privacy protection scheme for IoT digital identity management
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-25 DOI: 10.1016/j.hcc.2025.100320
Hao Yu , Guijuan Wang , Anming Dong , Yubing Han , Yawei Wang , Jiguo Yu
With the growth of the Internet of Things (IoT), millions of users, devices, and applications compose a complex and heterogeneous network, which increases the complexity of digital identity management. Traditional centralized digital identity management systems (DIMS) face single points of failure and privacy leakage. The emergence of blockchain technology presents an opportunity for DIMS to handle the single-point-of-failure problem associated with centralized architectures. However, the transparency inherent in blockchain technology still exposes DIMS to privacy leakage. In this paper, we propose the privacy-protected IoT DIMS (PPID), a novel blockchain-based distributed identity system that protects the privacy of on-chain identity data. PPID achieves identity-credential-verification unlinkability. Specifically, PPID adopts a Zero-Knowledge Proof (ZKP) algorithm and Shamir secret sharing (SSS) to safeguard privacy, resist replay attacks, and ensure data integrity. Finally, we evaluate the performance of ZKP computation in PPID, as well as the transaction fees of smart contracts on the Ethereum blockchain.
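The SSS building block named above can be illustrated with a standard (t, n) Shamir secret sharing over a prime field. The modulus, threshold, and seeding below are toy values for demonstration, not PPID's actual parameters.

```python
import random

P = 2**31 - 1  # a small Mersenne prime; toy field modulus

def split_secret(secret, n, t, rng=random.Random(0)):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):  # evaluate the degree-(t-1) polynomial via Horner's rule
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Any t shares suffice, and fewer than t reveal nothing about the secret in the information-theoretic sense.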
Citations: 0
Linkable group signatures against malicious regulators for regulated privacy-preserving cryptocurrencies
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-24 DOI: 10.1016/j.hcc.2025.100318
Xiao Wang , Yanqi Zhao , Lingyue Zhang , Min Xie , Yong Yu , Huilin Li
With the emergence of illegal behaviors such as money laundering and extortion, the regulation of privacy-preserving cryptocurrencies has become increasingly important. However, existing regulated privacy-preserving cryptocurrencies usually rely on a single regulator, which seriously threatens users’ privacy once the regulator is corrupt. To address this issue, we propose a linkable group signature scheme against malicious regulators (ALGS) for regulated privacy-preserving cryptocurrencies. Specifically, a set of regulators work together to regulate users’ behavior during cryptocurrency transactions. Even if a certain number of regulators are corrupted, our scheme still ensures the identity security of legal users. Meanwhile, our scheme can prevent double-spending during cryptocurrency transactions. We first propose the model of ALGS and define its security properties. Then, we present a concrete construction of ALGS, which provides CCA-2 anonymity, traceability, non-frameability, and linkability. Finally, we evaluate our ALGS scheme and report its advantages through comparison with other schemes. The implementation results show that the runtime of our signing algorithm is reduced by 17% compared with Emura et al. (2017) and by 49% compared with KSS19 (Krenn et al., 2019), while the verification time is reduced by 31% and 47%, respectively.
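Only the linkability idea is sketched here, not the paper's ALGS construction: a signer derives a deterministic tag from their secret key, so two signatures made with the same key can be linked (e.g., to catch double-spending) without revealing which group member signed. The group parameters and hash-to-exponent mapping are toy choices, not cryptographically hardened ones.

```python
import hashlib

P = 2**127 - 1  # toy prime modulus (Mersenne M127); not a secure choice
G = 5           # toy generator

def hash_to_exponent(data: bytes) -> int:
    """Deterministically map bytes to an exponent in the toy group."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def link_tag(sk: int, domain: bytes = b"tx-domain") -> int:
    """H(domain)^sk mod P: the same secret key always yields the same
    tag, so repeated signatures by one key are linkable."""
    base = pow(G, hash_to_exponent(domain), P)
    return pow(base, sk, P)
```

Linking is then an equality check on tags; nothing in the tag identifies the signer among the group members.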
Citations: 0
A logarithmic size revocable linkable ring signature for privacy-preserving blockchain transactions
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-21 DOI: 10.1016/j.hcc.2025.100319
Yanqi Zhao , Jie Zhang , Xiaoyi Yang , Minghong Sun , Yuxin Zhang , Yong Yu , Huilin Li
Monero uses ring signatures to protect users’ privacy. However, Monero’s anonymity also provides cover for illicit activities such as money laundering, since malicious users become difficult to identify and punish. Therefore, it is necessary to regulate illegal transactions while protecting the privacy of legitimate users. We present a revocable linkable ring signature scheme (RLRS), which balances privacy and supervision for privacy-preserving blockchain transactions. By introducing a revocation authority, we can trace a malicious user and revoke them in time. We define the security model of revocable linkable ring signatures and give a concrete construction of RLRS. We employ an accumulator and ElGamal encryption to realize revocation and tracing. In addition, we compress the ring signature size to the logarithmic level using non-interactive sum arguments of knowledge (NISA). We then prove the security of RLRS, which satisfies anonymity, unforgeability, linkability, and non-frameability. Lastly, we compare RLRS with other ring signature schemes: RLRS is linkable, traceable, and revocable, with logarithmic communication complexity and lower computational overhead. We also implement the RLRS scheme; the results show that its verification time is 1.5 s with 500 ring members.
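The abstract names ElGamal encryption as the tracing building block: the revocation authority holds a decryption key that can open the signer's encrypted identity. A textbook ElGamal over a toy multiplicative group is sketched below; the parameters and fixed seeds are illustrative only and far from secure.

```python
import random

P = 2**127 - 1  # toy prime modulus; not a secure choice
G = 5           # toy generator

def keygen(rng=random.Random(1)):
    """Revocation authority's key pair: (sk, pk = G^sk mod P)."""
    sk = rng.randrange(2, P - 1)
    return sk, pow(G, sk, P)

def encrypt(pk, m, rng=random.Random(2)):
    """Encrypt an (integer-encoded) identity m under the authority's pk."""
    r = rng.randrange(2, P - 1)
    return pow(G, r, P), (m * pow(pk, r, P)) % P

def decrypt(sk, ct):
    """Tracing: the authority recovers m = c2 / c1^sk mod P."""
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P
```

In the scheme, only the authority can perform this decryption, so ordinary verifiers learn nothing about the signer's identity.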
Citations: 0
DC-LoRA: Domain correlation low-rank adaptation for domain incremental learning
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-18 DOI: 10.1016/j.hcc.2024.100270
Lin Li, Shiye Wang, Changsheng Li, Ye Yuan, Guoren Wang
Continual learning, characterized by the sequential acquisition of multiple tasks, has emerged as a prominent challenge in deep learning. During continual learning, deep neural networks experience a phenomenon known as catastrophic forgetting, wherein networks lose knowledge acquired for previous tasks when training on new ones. Recently, parameter-efficient fine-tuning (PEFT) methods have gained prominence in tackling catastrophic forgetting. However, in domain incremental learning, a type of continual learning, there exists an additional, overlooked inductive bias that warrants attention beyond existing approaches. In this paper, we propose a novel PEFT method called Domain Correlation Low-Rank Adaptation for domain incremental learning. Our approach puts forward a domain-correlated loss, which encourages the LoRA weights of adjacent tasks to become more similar, thereby leveraging the correlation between different task domains. Furthermore, we consolidate the classifiers of different task domains to improve prediction performance by capitalizing on the knowledge acquired from diverse tasks. To validate the effectiveness of our method, we conduct comparative experiments and ablation studies on a publicly available domain incremental learning benchmark dataset. The experimental results demonstrate that our method outperforms state-of-the-art approaches.
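The domain-correlated loss can be sketched numerically: the task loss is augmented with a penalty pulling the current task's LoRA weights toward the previous (adjacent) task's. The squared-Frobenius penalty and the coefficient name `lam` are assumptions for illustration; the paper's exact loss may differ.

```python
def frob_sq(a, b):
    """Squared Frobenius distance between two equally-shaped matrices
    given as lists of rows."""
    return sum((x - y) ** 2
               for row_a, row_b in zip(a, b)
               for x, y in zip(row_a, row_b))

def dc_lora_loss(task_loss, lora_now, lora_prev, lam=0.1):
    """Task loss plus a regularizer encouraging the current task's LoRA
    weights to stay close to the previous task's."""
    return task_loss + lam * frob_sq(lora_now, lora_prev)
```

With `lam = 0` this reduces to plain per-task fine-tuning; larger values trade per-task fit for cross-domain weight similarity.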
Citations: 0
Dynamic OBL-driven whale optimization algorithm for independent tasks offloading in fog computing
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date: 2025-03-18 DOI: 10.1016/j.hcc.2025.100317
Zulfiqar Ali Khan, Izzatdin Abdul Aziz
Cloud computing has been the core infrastructure for providing services to the offloaded workloads from IoT devices. However, for time-sensitive tasks, reducing end-to-end delay is a major concern. With advancements in the IoT industry, the computation requirements of incoming tasks at the cloud are escalating, resulting in compromised quality of service. Fog computing emerged to alleviate such issues. However, the resources at the fog layer are limited and require efficient usage. The Whale Optimization Algorithm is a promising meta-heuristic algorithm extensively used to solve various optimization problems. However, being an exploitation-driven technique, its exploration potential is limited, resulting in reduced solution diversity, local optima, and poor convergence. To address these issues, this study proposes a dynamic opposition learning approach to enhance the Whale Optimization Algorithm to offload independent tasks. Opposition-Based Learning (OBL) has been extensively used to improve the exploration capability of the Whale Optimization Algorithm. However, it is computationally expensive and requires efficient utilization of appropriate OBL strategies to fully realize its advantages. Therefore, our proposed algorithm employs three OBL strategies at different stages to minimize end-to-end delay and improve load balancing during task offloading. First, basic OBL and quasi-OBL are employed during population initialization. Then, the proposed dynamic partial-opposition method enhances search space exploration using an information-based triggering mechanism that tracks the status of each agent. The results illustrate significant performance improvements by the proposed algorithm compared to SACO, PSOGA, IPSO, and oppoCWOA using the NASA Ames iPSC and HPC2N workload datasets.
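The basic and quasi-opposition operators the abstract applies during population initialization can be sketched per dimension: the opposite of x is lo + hi − x, and a quasi-opposite point is drawn uniformly between the search-space center and that opposite. This is a generic OBL sketch; the paper's dynamic partial-opposition trigger and agent-status bookkeeping are omitted.

```python
import random

def opposite(x, lo, hi):
    """Basic OBL: reflect each coordinate of x about the bounds' midpoint."""
    return [l + h - xi for xi, l, h in zip(x, lo, hi)]

def quasi_opposite(x, lo, hi, rng=random.Random(0)):
    """Quasi-OBL: sample each coordinate between the search-space center
    and the opposite point, adding diversity near the center."""
    out = []
    for xi, l, h in zip(x, lo, hi):
        center = (l + h) / 2
        opp = l + h - xi
        a, b = sorted((center, opp))
        out.append(rng.uniform(a, b))
    return out
```

During initialization, each candidate and its (quasi-)opposite are both evaluated and the fitter one is kept, which widens exploration at no extra modeling cost.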
{"title":"Dynamic OBL-driven whale optimization algorithm for independent tasks offloading in fog computing","authors":"Zulfiqar Ali Khan,&nbsp;Izzatdin Abdul Aziz","doi":"10.1016/j.hcc.2025.100317","DOIUrl":"10.1016/j.hcc.2025.100317","url":null,"abstract":"<div><div>Cloud computing has been the core infrastructure for providing services to the offloaded workloads from IoT devices. However, for time-sensitive tasks, reducing end-to-end delay is a major concern. With advancements in the IoT industry, the computation requirements of incoming tasks at the cloud are escalating, resulting in compromised quality of service. Fog computing emerged to alleviate such issues. However, the resources at the fog layer are limited and require efficient usage. The Whale Optimization Algorithm is a promising meta-heuristic algorithm extensively used to solve various optimization problems. However, being an exploitation-driven technique, its exploration potential is limited, resulting in reduced solution diversity, local optima, and poor convergence. To address these issues, this study proposes a dynamic opposition learning approach to enhance the Whale Optimization Algorithm to offload independent tasks. Opposition-Based Learning (OBL) has been extensively used to improve the exploration capability of the Whale Optimization Algorithm. However, it is computationally expensive and requires efficient utilization of appropriate OBL strategies to fully realize its advantages. Therefore, our proposed algorithm employs three OBL strategies at different stages to minimize end-to-end delay and improve load balancing during task offloading. First, basic OBL and quasi-OBL are employed during population initialization. Then, the proposed dynamic partial-opposition method enhances search space exploration using an information-based triggering mechanism that tracks the status of each agent. 
The results illustrate significant performance improvements by the proposed algorithm compared to SACO, PSOGA, IPSO, and oppoCWOA using the NASA Ames iPSC and HPC2N workload datasets.</div></div>","PeriodicalId":100605,"journal":{"name":"High-Confidence Computing","volume":"5 4","pages":"Article 100317"},"PeriodicalIF":3.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145105832","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
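The opposition-based learning strategies named in the entry above are standard in the metaheuristics literature; as a rough illustration, the two used at population initialization (basic OBL and quasi-OBL) might be sketched as follows, assuming box constraints `lo`/`hi` per dimension — the function names are illustrative, not the paper's implementation:

```python
import random

def basic_opposite(x, lo, hi):
    # Basic OBL: reflect each coordinate across the centre of its
    # search interval, x_opp = lo + hi - x.
    return [lo[i] + hi[i] - x[i] for i in range(len(x))]

def quasi_opposite(x, lo, hi):
    # Quasi-OBL: sample uniformly between the interval centre and the
    # basic opposite point, a cheaper, diversity-preserving variant.
    opp = basic_opposite(x, lo, hi)
    out = []
    for i in range(len(x)):
        centre = (lo[i] + hi[i]) / 2.0
        a, b = min(centre, opp[i]), max(centre, opp[i])
        out.append(random.uniform(a, b))
    return out
```

Evaluating both points and keeping the fitter one roughly doubles the initial coverage of the search space, which is the exploration gain the abstract attributes to OBL.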
Citations: 0
LSTM stock prediction model based on blockchain
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-03-17 DOI: 10.1016/j.hcc.2025.100316
Yongdan Wang , Haibin Zhang , Baohan Huang , Zhijun Lin , Chuan Pang
The stock market is a vital component of the financial sector. Due to the inherent uncertainty and volatility of the stock market, stock price prediction has always been both intriguing and challenging. To improve the accuracy of stock predictions, we construct a model that integrates investor sentiment with Long Short-Term Memory (LSTM) networks. By extracting sentiment data from the “Financial Post” and quantifying it with the Vader sentiment lexicon, we add a sentiment index to improve stock price forecasting. We combine sentiment factors with traditional trading indicators, making predictions more accurate. Furthermore, we deploy our system on the blockchain to enhance data security, reduce the risk of malicious attacks, and improve system robustness. This integration of sentiment analysis and blockchain offers a novel approach to stock market predictions, providing secure and reliable decision support for investors and financial institutions. We deploy the system and demonstrate that it is both efficient and practical. For 312 bytes of stock data, we achieve a latency of 434.42 ms with one node and 565.69 ms with five nodes. For 1700 bytes of sentiment data, we achieve a latency of 1405.25 ms with one node and 1750.25 ms with five nodes.
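The paper's network itself is not reproduced here; as a minimal sketch of the data-preparation step its abstract implies, a per-day sentiment index can be aligned with closing prices into supervised windows of shape (timesteps, 2) that an LSTM could consume — the function and argument names below are assumptions, not the authors' code:

```python
def make_windows(prices, sentiment, lookback=3):
    # Pair each day's close with its sentiment score, then slide a
    # fixed-length window over the series; the training target is the
    # next day's close.
    assert len(prices) == len(sentiment)
    samples = []
    for t in range(lookback, len(prices)):
        window = [[prices[i], sentiment[i]] for i in range(t - lookback, t)]
        samples.append((window, prices[t]))
    return samples
```

Each `window` is one LSTM input sequence with two features per timestep (price, sentiment), matching the abstract's idea of combining sentiment factors with trading indicators.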
Citations: 0
Block-chain abnormal transaction detection method based on generative adversarial network and autoencoder
IF 3 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-03-03 DOI: 10.1016/j.hcc.2025.100313
Ao Xiong , Chenbin Qiao , Wenjing Li , Dong Wang , Da Li , Bo Gao , Weixian Wang
Anomaly detection in blockchain transactions faces several challenges, the most prominent being the imbalance between positive and negative samples. Most transaction data are normal, with only a small fraction of anomalous data. Additionally, blockchain transaction datasets tend to be small and often incomplete, which complicates the process of anomaly detection. When using simple AI models, selecting the appropriate model and tuning parameters becomes difficult, resulting in poor performance. To address these issues, this paper proposes GANAnomaly, an anomaly detection model based on Generative Adversarial Networks (GANs) and Autoencoders. The model consists of three components: a data generation model, an encoding model, and a detection model. Firstly, the Wasserstein GAN (WGAN) is employed as the data generation model. The generated data is then used to train an encoding model that performs feature extraction and dimensionality reduction. Finally, the trained encoder serves as the feature extractor for the detection model. This approach leverages GANs to mitigate the challenges of low data volume and data imbalance, while the encoder extracts relevant features and reduces dimensionality. Experimental results demonstrate that the proposed anomaly detection model outperforms traditional methods by more accurately identifying anomalous blockchain transactions, reducing the false positive rate, and improving both accuracy and efficiency.
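The detection step the abstract describes typically reduces to thresholding reconstruction error; a minimal sketch of that idea, with `reconstruct` standing in for a trained encoder–decoder (a stub here, not the paper's GANAnomaly model):

```python
def anomaly_scores(samples, reconstruct):
    # An autoencoder trained on normal transactions reconstructs them
    # well; a large squared reconstruction error flags a likely anomaly.
    scores = []
    for x in samples:
        r = reconstruct(x)
        scores.append(sum((a - b) ** 2 for a, b in zip(x, r)))
    return scores

def flag_anomalies(samples, reconstruct, threshold):
    # A sample is anomalous when its score exceeds the chosen threshold.
    return [s > threshold for s in anomaly_scores(samples, reconstruct)]
```

The threshold would normally be calibrated on held-out normal transactions (e.g., a high percentile of their scores), which is where the WGAN-generated data helps when real samples are scarce.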
Citations: 0
Task migration with deadlines using machine learning-based dwell time prediction in vehicular micro clouds
IF 3.2 Q2 COMPUTER SCIENCE, INFORMATION SYSTEMS Pub Date : 2025-03-03 DOI: 10.1016/j.hcc.2025.100314
Ziqi Zhou , Agon Memedi , Chunghan Lee , Seyhan Ucar , Onur Altintas , Falko Dressler
Edge computing is becoming ever more relevant for offloading compute-heavy tasks in vehicular networks. In this context, the concept of vehicular micro clouds (VMCs) has been proposed to use compute and storage resources on nearby vehicles to complete computational tasks. As many tasks in this application domain are time-critical, offloading to the cloud is prohibitive. Additionally, task deadlines have to be dealt with. This paper addresses two main challenges. First, we present a task migration algorithm supporting deadlines in vehicular edge computing. The algorithm follows the earliest-deadline-first model, but in the presence of dynamic processing resources, i.e., vehicles joining and leaving a VMC. This task offloading is very sensitive to the mobility of vehicles in a VMC, i.e., the so-called dwell time a vehicle spends in the VMC. Thus, secondly, we propose a machine learning-based solution for dwell time prediction. Our dwell time prediction model uses a random forest approach to estimate how long a vehicle will stay in a VMC. Our approach is evaluated using mobility traces of an artificial simple intersection scenario as well as of real urban traffic in the cities of Luxembourg and Nagoya. Our proposed approach is able to realize low-delay and low-failure task migration in dynamic vehicular conditions, advancing the state of the art in vehicular edge computing.
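The scheduling idea in the abstract — earliest deadline first, constrained by each vehicle's predicted dwell time — can be sketched as follows. `predict_dwell` stands in for the random-forest regressor; all names and the task/vehicle representations are assumptions for illustration, not the authors' algorithm:

```python
import heapq

def edf_offload(tasks, vehicles, predict_dwell, now=0.0):
    # Earliest-deadline-first: pop tasks by deadline and place each on
    # the first vehicle that can finish it before both its deadline and
    # the vehicle's predicted departure from the micro cloud.
    heap = [(t["deadline"], i, t) for i, t in enumerate(tasks)]
    heapq.heapify(heap)
    busy_until = {v: now for v in vehicles}
    placed, migrated = [], []
    while heap:
        _, _, task = heapq.heappop(heap)
        for v in vehicles:
            finish = busy_until[v] + task["runtime"]
            if finish <= task["deadline"] and finish <= predict_dwell(v):
                busy_until[v] = finish
                placed.append((task["id"], v))
                break
        else:
            migrated.append(task["id"])
    return placed, migrated
```

Tasks that no resident vehicle can finish in time end up in `migrated`, which is where the paper's migration mechanism (handing work to another VMC member or the cloud) would take over.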
Citations: 0