Decentralized multi-client boolean keyword search for encrypted cloud storage
Pub Date: 2026-01-14 | DOI: 10.1016/j.csi.2026.104127
Xiwen Wang, Junqing Gong, Kai Zhang, Haifeng Qian
In multi-client searchable symmetric encryption (MC-SSE), multiple clients can conduct keyword searches over encrypted data hosted in the cloud, where the outsourced data is contributed by a data owner. Unfortunately, all known MC-SSE schemes that address the key escrow problem require a secure channel between the data owner and each user, and may suffer from significant key storage overhead. We therefore present an effective decentralized MC-SSE (DMC-SSE) system for secure cloud storage that avoids the key escrow problem and eliminates the secure channel between data owner and data user. In DMC-SSE, each client independently picks its own public/secret key pair, while a bulletin board of user public keys takes the place of a central authority. Technically, we introduce a semi-generic construction framework for DMC-SSE that builds on Cash et al.'s OXT structure (CRYPTO 2013), combines it with Kolonelos, Malavolta and Wee's distributed broadcast encryption scheme (ASIACRYPT 2023), and additionally introduces a distributed keyed pseudorandom function module for securely aggregating each client's secret key.
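The aggregation idea can be pictured with a toy sketch: each client derives a PRF share of the searched keyword under its own, independently chosen key, and the shares are combined into a single search token. This only illustrates the data flow, assuming a hypothetical XOR combiner over HMAC outputs; the paper's distributed keyed PRF module is an algebraic, provably secure construction.

```python
# Toy sketch of the data flow in a distributed keyed PRF: each client
# computes a PRF share of the searched keyword under its own secret key,
# and the shares are combined into one joint search token. The combiner
# here (XOR of HMAC-SHA256 outputs) is a stand-in; the paper's module is
# an algebraic, provably secure construction.
import hashlib
import hmac
import secrets
from functools import reduce

def prf_share(secret_key: bytes, keyword: str) -> bytes:
    """Per-client PRF share over the searched keyword."""
    return hmac.new(secret_key, keyword.encode(), hashlib.sha256).digest()

def aggregate(shares: list[bytes]) -> bytes:
    """XOR-combine shares; no client needs to reveal its key to the others."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

# Each client picks its key independently, mirroring the no-central-authority setup.
client_keys = [secrets.token_bytes(32) for _ in range(3)]
token = aggregate([prf_share(k, "invoice") for k in client_keys])
print(token.hex())
```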
{"title":"Decentralized multi-client boolean keyword search for encrypted cloud storage","authors":"Xiwen Wang , Junqing Gong , Kai Zhang , Haifeng Qian","doi":"10.1016/j.csi.2026.104127","DOIUrl":"10.1016/j.csi.2026.104127","url":null,"abstract":"<div><div>In multi-client searchable symmetric encryption (MC-SSE), multiple clients have the capability to conduct keyword searches on encrypted data hosted in cloud, where the outsourced data is contributed by a data owner. Unfortunately, all known MC-SSE addressing key escrow problem required establishing a secure channel between data owner and user, and might suffer from significant key storage overhead. Therefore, we present an effective decentralized MC-SSE (DMC-SSE) system without the key escrow problem for secure cloud storage, eliminating the secure channel between data owner and data user. In DMC-SSE, each client independently picks its public/secret key, while a bulletin board of user public keys takes the place of the central authority. Technically, we introduce a semi-generic construction framework of DMC-SSE, building upon Cash et al.’s OXT structure (CRYPTO 2013), which roughly combines Kolonelos, Malavolta and Wee’s distributed broadcast encryption scheme (ASIACRYPT 2023) and additionally introduces a distributed keyed pseudorandom function module for securely aggregating each client’s secret key.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104127"},"PeriodicalIF":3.1,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal shard number determination algorithm based on security and performance in sharded blockchain
Pub Date: 2026-01-12 | DOI: 10.1016/j.csi.2026.104128
Shangping Wang, Haotong Cao, Ruoxin Yan
Sharding has significantly increased transaction processing capacity and scalability in blockchain systems, but it compromises security, particularly through increased vulnerability to "1/3 attacks" as the number of shards rises. Clearly, more shards are not always better; determining the optimal number of shards is essential yet often overlooked in existing research. To address these issues, we propose a broadly applicable algorithm for determining the optimal number of shards in a sharded blockchain based on security and performance (SPSN). Firstly, this work proposes a novel optimization model for sharded blockchains that accounts for the impact of sharding on both system security and performance. The idea is to find an optimal shard number that balances system efficiency (the time required to process transactions) and security (the system failure probability), keeping the failure probability within acceptable limits while maximizing efficiency. Secondly, we propose a widely applicable algorithm to determine the optimal number of shards, which can be executed independently before sharding operations to ascertain the shard count that maximizes performance while ensuring system security. Lastly, experiments are conducted under four different system settings to demonstrate the method. The results show that the proposed algorithm can effectively calculate the optimal shard number for most systems, demonstrating broad applicability and effectiveness and helping to achieve a high-security, high-performance sharded blockchain system.
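A hedged sketch of the kind of trade-off the SPSN model formalizes, assuming random node-to-shard assignment, a hypergeometric failure model, and a union bound (the paper's actual model and parameters differ): a shard fails once at least a third of its members are malicious, and the largest shard count whose system failure probability stays under a tolerance is selected, since throughput grows with the shard count.

```python
# Illustrative shard-count selection: under random assignment of f malicious
# nodes among N, a shard of size m fails if >= ceil(m/3) of its members are
# malicious (BFT bound). Pick the largest shard count s whose union-bounded
# system failure probability stays below a tolerance. Values are illustrative.
from math import ceil, comb

def shard_failure_prob(N: int, f: int, m: int) -> float:
    """P(a shard of size m drawn from N nodes gets >= ceil(m/3) malicious)."""
    t = ceil(m / 3)
    return sum(comb(f, k) * comb(N - f, m - k)
               for k in range(t, min(f, m) + 1)) / comb(N, m)

def system_failure_prob(N: int, f: int, s: int) -> float:
    """Union bound over s shards of size N // s."""
    return min(1.0, s * shard_failure_prob(N, f, N // s))

N, f, tol = 1200, 300, 1e-6   # 25% malicious nodes, illustrative tolerance
safe = [s for s in range(2, 60) if system_failure_prob(N, f, s) <= tol]
print(max(safe))              # throughput grows with s, so take the largest safe s
```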
{"title":"Optimal shard number determination algorithm based on security and performance in sharded blockchain","authors":"Shangping Wang, Haotong Cao, Ruoxin Yan","doi":"10.1016/j.csi.2026.104128","DOIUrl":"10.1016/j.csi.2026.104128","url":null,"abstract":"<div><div>Sharding technique has significantly increased the transaction processing capacity and scalability in blockchain systems but compromise security, particularly with increased vulnerability to “1/3 attacks” as shard numbers rise. Clearly, more shards are not always better, how to determine the optimal number of shards is essential yet often overlooked in existing research. To address the aforementioned issues, we propose a broadly applicable algorithm for determining the optimal number of shards in sharded blockchain based on security and performance (SPSN). Firstly, this work proposes a novel optimization model for sharded blockchain by considering the impacts of sharding on system security and performance. The idea is to find an optimal shard number that balances system efficiency (time required to process transactions) and security (system failure probability), ensuring that the system failure probability within acceptable limits while maximizing efficiency. Secondly, we propose a widely applicable algorithm to determine the optimal number of shards, which can be executed independently before sharding operations to ascertain the shard count that maximizes performance while ensuring system security. Lastly, experiments are conducted under four different system settings to demonstrate the specific methods. The results show that the proposed algorithm can effectively calculate the optimal shard number for most systems, demonstrating broad applicability and effectiveness, helping achieve a high-security, high-performance sharded blockchain system.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104128"},"PeriodicalIF":3.1,"publicationDate":"2026-01-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146077330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Securing hashed timelock cross-chain protocol with trusted middleman in blockchain networks
Pub Date: 2026-01-09 | DOI: 10.1016/j.csi.2026.104129
Wenhua Huang, Yuwei Deng, Jingyu Feng, Gang Han, Wenbo Zhang
Cross-chain interoperability has emerged as a pivotal factor in enabling seamless data interaction and value circulation across diverse blockchain networks. Nevertheless, current cross-chain technologies require advancements to satisfy the escalating need for efficient bidirectional data exchange. Addressing this, our work focuses on refining cross-chain protocols, with a core emphasis on elevating transaction efficiency, reliability, and security. Our innovation centers on a strengthened hashed timelock cross-chain protocol grounded in trusted middlemen. To safeguard the security and confidentiality of middlemen engaged in cross-chain transactions, we introduce an anonymous identity authentication mechanism that empowers middlemen to execute auxiliary cross-chain transactions while concealing their actual identities. Additionally, we propose a behavior-based assessment of middlemen, utilizing distinct indicators to gauge their trustworthiness in each cross-chain transaction. We introduce both current and historical trust values, providing insight into middlemen's real-time reliability and long-term stability. This approach effectively thwarts attempts by malicious middlemen to manipulate trust values, mitigating security vulnerabilities in cross-chain transactions. Furthermore, by clearing redundant blocks, we not only decrease storage consumption but also make room to store a substantial amount of middlemen's identity and trust data. Rigorous security analysis demonstrates our scheme's alignment with foundational security requirements and its resilience against common attacks, and our simulation results underscore the effectiveness of the trust evaluation scheme in ensuring middleman credibility and detecting malicious actors.
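One way to picture the interplay of current and historical trust, under our own illustrative weighting rather than the paper's indicators: recent behavior is blended with accumulated history, so a long good record cannot mask sudden misbehavior, and a single good transaction cannot erase a bad record.

```python
# Illustrative blend of a middleman's historical and current trust values.
# The weight alpha and the scores are ours, not the paper's formulas.
def update_trust(history: float, current: float, alpha: float = 0.6) -> float:
    """Weight recent conduct at alpha; history decays but is not erased."""
    return alpha * current + (1 - alpha) * history

trust = 0.90                        # long-term trust built over past transactions
for current in (0.95, 0.20, 0.30):  # good conduct, then sudden misbehavior
    trust = update_trust(trust, current)
    print(round(trust, 3))          # prints 0.93, 0.492, 0.377: history can't mask it
```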
{"title":"Securing hashed timelock cross-chain protocol with trusted middleman in blockchain networks","authors":"Wenhua Huang, Yuwei Deng, Jingyu Feng, Gang Han, Wenbo Zhang","doi":"10.1016/j.csi.2026.104129","DOIUrl":"10.1016/j.csi.2026.104129","url":null,"abstract":"<div><div>Cross-chain interoperability has emerged as a pivotal factor in enabling seamless data interaction and value circulation across diverse blockchain networks. Nevertheless, current cross-chain technologies necessitate advancements to satisfy the escalating need for efficient bidirectional data exchange. Addressing this, our work focuses on refining cross-chain protocols, with a core emphasis on elevating transaction efficiency, reliability, and security. Our innovation centers around a strengthened hashed timelock cross-chain protocol grounded in trusted middlemen. To safeguard the security and confidentiality of middlemen engaged in cross-chain transactions, we introduce an ingenious anonymous identity authentication mechanism. This mechanism empowers middlemen to execute auxiliary cross-chain transactions while concealing their actual identities. Additionally, we propose a behavior-based assessment of middlemen, utilizing distinct indicators to gauge their trustworthiness in each cross-chain transaction. We introduce both current and historical trust values, providing insights into middlemen's real-time reliability and long-term stability. This approach effectively thwarts attempts by malicious middlemen to manipulate trust values, mitigating security vulnerabilities in cross-chain transactions. Furthermore, by clearing redundant blocks, we not only decrease storage consumption but also facilitate the storage of a substantial amount of identity data and trust data of middlemen. Rigorous security analysis demonstrates our scheme's alignment with foundational security requirements and resilience against common attacks. Furthermore, our simulation results underscore the potency of our trust evaluation scheme, substantiating its efficacy in ensuring middlemen credibility and detecting malicious actors.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104129"},"PeriodicalIF":3.1,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep convolutional spiking neural network and block chain based intrusion detection framework for enhancing privacy and security in cloud computing environment
Pub Date: 2025-12-31 | DOI: 10.1016/j.csi.2025.104126
B. Muthusenthil, K. Devi
As Cloud Computing (CC) expands and makes use of information technology (IT) infrastructure, conventional operating systems, and applications, it is increasingly susceptible to IT threats. Therefore, this paper proposes a Deep Convolutional Spiking Neural Network and Blockchain based Dynamic Random Byzantine Fault Tolerance Consensus Algorithm fostered Intrusion Detection System for improving privacy and safety in the Cloud Computing Environment (DCSNN-BC-DRBFT-IDS-CC). Here, the data are gathered from the NSL-KDD and CICIDS-2017 benchmark datasets. The first-level privacy procedure is performed by the blockchain-based Dynamic Random Byzantine Fault Tolerance consensus algorithm (DRBFT), and the second-level privacy procedure is performed by pre-processing, in which a Markov Chain Random Field (MCRF) removes unwanted content and filters the relevant data. The pre-processing output is fed into feature selection, where the optimal features are selected using Dynamic Recursive Feature Selection (DRFS). A Deep Convolutional Spiking Neural Network (DCSNN) is then employed to classify data as normal or abnormal. The proposed DCSNN-BC-DRBFT-IDS-CC method is evaluated using standard performance metrics: compared with other existing models, it achieves 39.185%, 14.37%, 31.8%, and 27.06% better accuracy; 25.13%, 21.75%, 27.54%, and 23.08% less computation time; and 8.15%, 2.57%, 3.64%, and 5.85% higher AUC.
{"title":"Deep convolutional spiking neural network and block chain based intrusion detection framework for enhancing privacy and security in cloud computing environment","authors":"B. Muthusenthil , K. Devi","doi":"10.1016/j.csi.2025.104126","DOIUrl":"10.1016/j.csi.2025.104126","url":null,"abstract":"<div><div>Since Cloud Computing (CC) expands and makes use of information technology (IT) infrastructure, conventional operating systems and applications, it is now susceptible to IT threats. Therefore, a Deep Convolutional Spiking Neural Network and Block Chain based Dynamic Random Byzantine Fault Tolerance Consensus Algorithm fostered Intrusion Detection System is proposed in this paper for improving Privacy and Safety in the Cloud Computing Environment (DCSNN-BC-DRBFT-IDS-CC). Here, the data are gathered through NSL-KDD and CICIDS-2017 benchmark datasets. First-level privacy procedure is performed by the block chain-dependent Dynamic Random Byzantine Fault Tolerance Consensus Algorithm (DRBFT). The secondary level privacy procedure is performed by pre-processing. During data processing, the Markov Chain Random Field (MCRF) is used to remove the unwanted content and filter relevant data. The pre-processing output is provided into feature selection. The optimum feature is selected by using Dynamic Recursive Feature Selection (DRFS). The Deep Convolutional Spiking Neural Network (DCSNN) is employed for classifying data as normal and abnormal. The proposed DCSNN-BC-DRBFT-IDS-CC method is implemented using performance metrics. The DCSNN-BC-DRBFT-IDS-CC achieves 39.185 %, 14.37 %, 31.8 % and 27.06 % better accuracy,25.13 %, 21.75 %, 27.54 % and 23.08 % less computation time,8.15 %, 2.57 %, 3.64 %, 5.85 % higher AUC when compared to other existing models.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104126"},"PeriodicalIF":3.1,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146037246","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V-Bridge: A dynamic cross-shard blockchain protocol based on off-chain payment channel
Pub Date: 2025-12-31 | DOI: 10.1016/j.csi.2025.104123
Xueting Huang, Xiangwei Meng, Kai Zhang, Ce Yang, Wei Liang, Kuan-Ching Li
Sharding technology effectively improves system throughput by distributing the blockchain transaction load across multiple shards for parallel processing, and it is the core solution to the scalability problem of blockchain. However, as the number of shards increases, the frequency of cross-shard transactions rises significantly, leading to increased communication and computational overhead, transaction delays, uneven resource allocation, and load imbalance, which becomes a key bottleneck for performance scaling. To this end, this article proposes the cross-shard transaction protocol V-Bridge, which draws on the concept of off-chain payment channels to establish distributed virtual fund channels between Trustors in different shards, converting cross-shard transactions into off-chain transactions and realizing a logical flow of funds. To further enhance cross-shard transaction performance, V-Bridge integrates an intelligent sharding adjustment mechanism and a cross-shard optimized critical path protection algorithm (CSOCPPA) to dynamically balance shard loads, alleviate resource allocation issues, and minimize performance bottlenecks. Experimental results show that, compared with existing state-of-the-art protocols, V-Bridge increases average throughput by 26% to 46% and reduces transaction delays by 15% to 24%.
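The channel mechanics can be sketched as follows, with names and structure that are ours rather than V-Bridge's: once a virtual fund channel between two Trustors is funded, a cross-shard payment becomes a local balance update, and only channel opening and final settlement touch the chains.

```python
# Toy sketch of the idea V-Bridge builds on: a funded channel between two
# Trustors in different shards turns cross-shard payments into off-chain
# balance updates. The real protocol adds signatures, disputes, and the
# dynamic adjustment mechanisms; this only shows the bookkeeping.
from dataclasses import dataclass

@dataclass
class VirtualChannel:
    balance_a: int   # Trustor in shard A
    balance_b: int   # Trustor in shard B
    nonce: int = 0   # orders off-chain states for later on-chain settlement

    def pay_a_to_b(self, amount: int) -> None:
        if not 0 < amount <= self.balance_a:
            raise ValueError("insufficient channel funds")
        self.balance_a -= amount
        self.balance_b += amount
        self.nonce += 1  # the newest state wins at settlement

ch = VirtualChannel(balance_a=100, balance_b=40)
ch.pay_a_to_b(25)   # logical cross-shard transfer, no cross-shard commit needed
print(ch)           # settle the nonce-highest state on-chain later
```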
{"title":"V-Bridge: A dynamic cross-shard blockchain protocol based on off-chain payment channel","authors":"Xueting Huang , Xiangwei Meng , Kai Zhang , Ce Yang , Wei Liang , Kuan-Ching Li","doi":"10.1016/j.csi.2025.104123","DOIUrl":"10.1016/j.csi.2025.104123","url":null,"abstract":"<div><div>Sharding technology effectively improves system throughput by distributing the blockchain transaction load to multiple shards for parallel processing, and it is the core solution to the scalability problem of blockchain. However, as the number of shards increases, the frequency of cross-shard transactions increases significantly, leading to increased communication and computational overhead, transaction delays, uneven resource allocation, and load imbalance, which becomes a key bottleneck for performance expansion. To this end, this article proposes the cross-shard transaction protocol V-Bridge, which draws on the concept of off-chain payment channels to establish distributed virtual fund channels between Trustors in different shards, convert cross-shard transactions into off-chain transactions and realize the logical flow of funds. To further enhance cross-shard transaction performance, our V-Bridge integrates an intelligent sharding adjustment mechanism, and a cross-shard optimized critical path protection algorithm (CSOCPPA) to dynamically balance shard loads, alleviate resource allocation issues, and minimize performance bottlenecks. Experimental results show that compared with existing state-of-the-art protocols, our proposed V-Bridge’s average throughput is increased by 26% to 46%, and transaction delays are reduced by 15% to 24%.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104123"},"PeriodicalIF":3.1,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145925039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AdaTraj-DP: An adaptive privacy framework for context-aware trajectory data publishing
Pub Date: 2025-12-30 | DOI: 10.1016/j.csi.2025.104125
Yongxin Zhao, Chundong Wang, Hao Lin, Xumeng Wang, Yixuan Song, Qiuyu Du
Trajectory data are widely used in AI-based spatiotemporal analysis but raise privacy concerns due to their fine-grained nature and the potential for individual re-identification. Existing differential privacy (DP) approaches often apply uniform perturbation, which compromises spatial continuity, or adopt personalized mechanisms that overlook structural utility. This study introduces AdaTraj-DP, an adaptive differential privacy framework designed to balance trajectory-level protection and analytical utility. The framework combines context-aware sensitivity detection with hierarchical aggregation. Specifically, a dynamic sensitivity model evaluates privacy risks according to spatial density and semantic context, enabling adaptive allocation of privacy budgets. An adaptive perturbation mechanism then injects noise proportionally to the estimated sensitivity and represents trajectories through Hilbert-based encoding for prefix-oriented hierarchical aggregation with layer-wise budget distribution. Experiments conducted on the T-Drive and GeoLife datasets indicate that AdaTraj-DP maintains stable query accuracy, spatial consistency, and downstream analytical utility across varying privacy budgets while satisfying formal differential privacy guarantees.
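A minimal sketch of the adaptive perturbation step, assuming illustrative sensitivity scores and plain Laplace noise: more sensitive points receive a smaller share of the total budget, so their noise scale grows in proportion to their estimated sensitivity, while the per-point budgets still compose to the global epsilon. AdaTraj-DP's sensitivity model and Hilbert-based hierarchical aggregation are richer than this.

```python
# Sensitivity-adaptive Laplace perturbation (illustrative): budgets are
# allocated inversely to each point's sensitivity score, so the Laplace
# scale 1/eps_i is proportional to the score, and the budgets sum to
# eps_total under sequential composition. Unit L1 sensitivity assumed.
import numpy as np

def adaptive_perturb(points, sens, eps_total=1.0, seed=0):
    rng = np.random.default_rng(seed)
    sens = np.asarray(sens, dtype=float)
    inv = 1.0 / sens
    eps = eps_total * inv / inv.sum()   # sensitive points get less budget...
    scale = 1.0 / eps                   # ...hence a larger noise scale
    pts = np.asarray(points, dtype=float)
    return pts + rng.laplace(0.0, scale[:, None], size=pts.shape)

traj = [(39.91, 116.40), (39.92, 116.42), (39.95, 116.47)]  # toy coordinates
print(adaptive_perturb(traj, sens=[0.9, 0.2, 0.2]))         # first point noisiest
```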
{"title":"AdaTraj-DP: An adaptive privacy framework for context-aware trajectory data publishing","authors":"Yongxin Zhao , Chundong Wang , Hao Lin , Xumeng Wang , Yixuan Song , Qiuyu Du","doi":"10.1016/j.csi.2025.104125","DOIUrl":"10.1016/j.csi.2025.104125","url":null,"abstract":"<div><div>Trajectory data are widely used in AI-based spatiotemporal analysis but raise privacy concerns due to their fine-grained nature and the potential for individual re-identification. Existing differential privacy (DP) approaches often apply uniform perturbation, which compromises spatial continuity, or adopt personalized mechanisms that overlook structural utility. This study introduces AdaTraj-DP, an adaptive differential privacy framework designed to balance trajectory-level protection and analytical utility. The framework combines context-aware sensitivity detection with hierarchical aggregation. Specifically, a dynamic sensitivity model evaluates privacy risks according to spatial density and semantic context, enabling adaptive allocation of privacy budgets. An adaptive perturbation mechanism then injects noise proportionally to the estimated sensitivity and represents trajectories through Hilbert-based encoding for prefix-oriented hierarchical aggregation with layer-wise budget distribution. Experiments conducted on the T-Drive and GeoLife datasets indicate that AdaTraj-DP maintains stable query accuracy, spatial consistency, and downstream analytical utility across varying privacy budgets while satisfying formal differential privacy guarantees.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104125"},"PeriodicalIF":3.1,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-criteria process for IT project success evaluation–Addressing a critical gap in standard practices
Pub Date: 2025-12-24 | DOI: 10.1016/j.csi.2025.104122
João Carlos Lourenço, João Varajão
The evaluation of project success is widely recognised as valuable for improving IT (Information Technology) project performance and impact. However, many processes fail to adequately address the requirements for a sound evaluation, either because of their inherent complexity or because they do not comply with fundamental practical and theoretical concepts. This paper presents a process that combines a problem structuring method with a multi-criteria decision analysis approach to evaluate the success of IT projects. Put into practice in a software development project carried out for a leading global supplier of technology and services, it offers a new way of creating a model for evaluating project success and tackling uncertainty, bringing clarity and consistency to the overall assessment process. A strong advantage of this process is that it is theoretically sound and can easily be applied to other evaluation problems involving other criteria. It also serves as a call to action for the development of formal standards in evaluation processes. Practical pathways to such standardization include collaboration through industry consortia, the development and adoption of ISO frameworks, and embedding evaluation processes within established maturity models. These pathways can foster consistency, comparability, and continuous improvement across organizations, paving the way for more robust and transparent evaluation practices.
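The multi-criteria aggregation step can be illustrated with a toy additive value model; the criteria, weights, and 0-100 scores below are invented for illustration, and the paper's process additionally structures the problem and handles uncertainty, which this toy omits.

```python
# Toy additive value model for the multi-criteria aggregation step.
# Criteria, weights, and scores are illustrative, not the paper's model.
criteria_weights = {"schedule": 0.25, "budget": 0.25, "quality": 0.30, "stakeholders": 0.20}
project_scores = {"schedule": 70, "budget": 55, "quality": 85, "stakeholders": 90}

assert abs(sum(criteria_weights.values()) - 1.0) < 1e-9   # weights must sum to 1
overall = sum(w * project_scores[c] for c, w in criteria_weights.items())
print(f"overall success score: {overall:.2f} / 100")      # 74.75
```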
{"title":"A multi-criteria process for IT project success evaluation–Addressing a critical gap in standard practices","authors":"João Carlos Lourenço , João Varajão","doi":"10.1016/j.csi.2025.104122","DOIUrl":"10.1016/j.csi.2025.104122","url":null,"abstract":"<div><div>The evaluation of project success is widely recognised as valuable for improving IT (Information Technology) project performance and impact. However, many processes fail to adequately address the requirements for a sound evaluation due to their inherent complexity or by not complying with fundamental practical and theoretical concepts. This paper presents a process that combines a problem structuring method with a multi-criteria decision analysis approach to evaluate the success of IT projects. Put into practice in the context of a software development project developed for a leading global supplier of technology and services, it offers a new way of creating a model for evaluating project success and tackling uncertainty, bringing clarity and consistency to the overall assessment process. A strong advantage of this process is that it is theoretically sound and can be easily applied to other evaluation problems involving other criteria. It also serves as a call to action for the development of formal standards in evaluation processes. Practical pathways to achieve such standardization include collaboration through industry consortia, development and adoption of ISO frameworks, and embedding evaluation processes within established maturity models. These pathways can foster consistency, comparability, and continuous improvement across organizations, paving the way for more robust and transparent evaluation practices.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104122"},"PeriodicalIF":3.1,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing
Pub Date: 2025-12-23 | DOI: 10.1016/j.csi.2025.104121
Jiasheng Chen, Zhenfu Cao, Liangliang Wang, Jiachen Shen, Xiaolei Dong
A secure sharing mechanism in the cloud environment must not only realize efficient ciphertext storage for resource-constrained clients but also build a trusted data sharing system. To address the limitations of existing schemes in user identity privacy protection, access control granularity, and data sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. To achieve finer-grained delegation and effective conditional privacy, our scheme treats the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap distance between the sender's and receiver's attribute sets meets a specific threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users' real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a degree of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared with other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.
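The fuzzy matching condition reduces to a threshold test on attribute overlap. A plaintext sketch follows, with illustrative attributes and our reading of "overlap distance" as the size of the intersection; FCL-PRE enforces this check cryptographically rather than on cleartext sets.

```python
# Threshold test behind the fuzzy delegation condition: re-encryption is
# allowed iff the sender's and receiver's condition attribute sets share
# at least d elements. Attribute names are illustrative.
def can_reencrypt(sender_attrs: set[str], receiver_attrs: set[str], d: int) -> bool:
    return len(sender_attrs & receiver_attrs) >= d

s = {"cardiology", "2025", "research", "grade-A"}
r = {"cardiology", "2025", "teaching", "grade-A"}
print(can_reencrypt(s, r, d=3))   # True: 3 shared attributes meet the threshold
print(can_reencrypt(s, r, d=4))   # False: error tolerance has a limit
```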
{"title":"Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing","authors":"Jiasheng Chen , Zhenfu Cao , Liangliang Wang , Jiachen Shen , Xiaolei Dong","doi":"10.1016/j.csi.2025.104121","DOIUrl":"10.1016/j.csi.2025.104121","url":null,"abstract":"<div><div>Secure sharing mechanism in the cloud environment not only needs to realize efficient ciphertext storage of resource-constrained clients, but also needs to build a trusted data sharing system. Aiming at the limitations of existing schemes in terms of user identity privacy protection, insufficient access control granularity, and data sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. In order to achieve much better fine-grained delegation and effective conditional privacy, our scheme regards the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap distance of the sender’s and receiver’s attribute sets meets a specific threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users’ real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a certain level of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared to other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104121"},"PeriodicalIF":3.1,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy consumption assessment in embedded AI: Metrological improvements of benchmarks for edge devices
Pub Date: 2025-12-22 | DOI: 10.1016/j.csi.2025.104120
Andrea Apicella, Pasquale Arpaia, Luigi Capobianco, Francesco Caputo, Antonella Cioffi, Antonio Esposito, Francesco Isgrò, Rosanna Manzo, Nicola Moccaldi, Danilo Pau, Ettore Toscano
This manuscript proposes a new method to improve the MLCommons protocol for measuring power consumption on Microcontroller Units (MCUs) running edge Artificial Intelligence (AI). In particular, the proposed approach (i) selectively measures the power consumption attributable to the inferences (namely, the predictions performed by Artificial Neural Networks, ANNs), preventing the impact of other operations; (ii) accurately identifies the time window for acquiring the current samples, thanks to the simultaneous measurement of power consumption and inference duration; and (iii) precisely synchronizes the measurement windows with the inferences. The method is validated on three use cases: (i) the Rockchip RV1106, a neural MCU that implements ANNs in hardware through a dedicated neural processing unit accelerator; and (ii) the STM32H7 and (iii) the STM32U5, a high-performance and an ultra-low-power general-purpose microcontroller, respectively. The proposed method returns higher power consumption for the two devices than the MLCommons approach, a result compatible with improved selectivity and accuracy. Furthermore, the method reduces measurement uncertainty on the Rockchip RV1106 and the STM32 boards by factors of 6 and 12, respectively.
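The selective measurement idea amounts to integrating the current samples only over each inference's time window, so idle and housekeeping power is excluded. A minimal sketch, assuming a hypothetical list of timestamped current samples, an externally flagged inference window, and a fixed 3.3 V supply:

```python
# Energy over a single inference window [t0, t1): keep only the current
# samples inside the window and integrate P = V * I with the trapezoidal
# rule. Sample values and the 3.3 V supply are illustrative.
def inference_energy(samples, t0, t1, v_supply=3.3):
    """samples: list of (timestamp_s, current_A); returns energy in joules."""
    window = [(t, i) for t, i in samples if t0 <= t < t1]
    energy = 0.0
    for (ta, ia), (tb, ib) in zip(window, window[1:]):
        energy += v_supply * 0.5 * (ia + ib) * (tb - ta)  # trapezoid per interval
    return energy

samples = [(0.000, 0.012), (0.001, 0.031), (0.002, 0.033), (0.003, 0.011)]
print(f"{inference_energy(samples, 0.001, 0.003):.6e} J")  # inference window only
```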
{"title":"Energy consumption assessment in embedded AI: Metrological improvements of benchmarks for edge devices","authors":"Andrea Apicella , Pasquale Arpaia , Luigi Capobianco , Francesco Caputo , Antonella Cioffi , Antonio Esposito , Francesco Isgrò , Rosanna Manzo , Nicola Moccaldi , Danilo Pau , Ettore Toscano","doi":"10.1016/j.csi.2025.104120","DOIUrl":"10.1016/j.csi.2025.104120","url":null,"abstract":"<div><div>This manuscript proposes a new method to improve the MLCommons protocol for measuring power consumption on Microcontroller Units (MCUs) when running edge Artificial Intelligence (AI). In particular, the proposed approach (i) selectively measures the power consumption attributable to the inferences (namely, the predictions performed by Artificial Neural Networks — ANN), preventing the impact of other operations, (ii) accurately identifies the time window for acquiring the sample of the current thanks to the simultaneous measurement of power consumption and inference duration, and (iii) precisely synchronize the measurement windows and the inferences. The method is validated on three use cases: (i) Rockchip RV1106, a neural MCU that implements ANN via hardware neural processing unit through a dedicated accelerator, (ii) STM32 H7, and (iii) STM32 U5, high-performance and ultra-low-power general-purpose microcontroller, respectively. The proposed method returns higher power consumption for the two devices with respect to the MLCommons approach. This result is compatible with an improvement of selectivity and accuracy. Furthermore, the method reduces measurement uncertainty on the Rockchip RV1106 and STM32 boards by factors of 6 and 12, respectively.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104120"},"PeriodicalIF":3.1,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the quantum readiness of cryptographic standards: Recommendations toward quantum-era compliance
Pub Date: 2025-12-17 | DOI: 10.1016/j.csi.2025.104114
Vikas Chouhan, Mohammed Aldarwbi, Somayeh Sadeghi, Ali Ghorbani, Aaron Chow, Robby Burko
Cryptography is fundamental to securing digital data and communications, yet established algorithms face increasing risk from emerging quantum capabilities. With the progression of quantum computing, the urgency for cryptographic standards that remain secure in both classical and quantum settings has intensified, governed not only by cryptanalytic risk but also by compliance, interoperability, and country-specific regulatory frameworks. This paper presents a structured evaluation framework that depicts the hierarchy of cryptographic standards, encompassing block ciphers, stream ciphers, hash and MAC functions, key establishment mechanisms, digital signatures, lightweight cryptography, entity authentication, public key infrastructure, and authentication and communication protocols. We define a standards-to-protocol recommendation flow that propagates compliant guidance across layers, from foundational primitives to PKI/authentication and hybridization, and extends to country-specific recommendations and protocols. Our contributions include explicit decision criteria for assessing cryptographic primitives under classical and quantum threat models, yielding both immediate and alternative deployment recommendations aligned with NIST-compliant guidelines. We further analyze hybrid schemes to ensure backward compatibility and secure integration, quantifying storage and network overheads for signatures, encryption, and key exchange to identify practical engineering trade-offs. Consolidated results are presented in reference tables detailing standardization year, purpose, notes, and migration recommendations for both classical and post-quantum contexts. Additionally, we examine the security strength of cryptographic primitives that are currently classically secure or quantum-resistant. This framework offers a reproducible, extensible path toward quantum-ready cryptographic systems.
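One classical-versus-quantum decision criterion can be made concrete with a textbook rule of thumb: Shor's algorithm breaks RSA/ECC/DH outright, while Grover's search roughly halves the effective strength of a symmetric key. The sketch below encodes only that rule; the 128-bit migration threshold is illustrative, not the framework's.

```python
# Toy encoding of one quantum-readiness decision criterion. Shor's algorithm
# collapses factoring/discrete-log primitives; Grover's quadratic speedup
# roughly halves symmetric key strength. Threshold of 128 bits is illustrative.
def quantum_strength(kind: str, classical_bits: int) -> int:
    """Rough post-quantum security estimate in bits."""
    if kind in ("rsa", "ecc", "dh"):
        return 0                    # broken by Shor's algorithm
    if kind == "symmetric":
        return classical_bits // 2  # Grover's quadratic speedup
    raise ValueError(f"unknown primitive kind: {kind}")

inventory = [("AES-128", "symmetric", 128),
             ("AES-256", "symmetric", 256),
             ("RSA-3072", "rsa", 128)]   # bits = classical security strength
for name, kind, bits in inventory:
    q = quantum_strength(kind, bits)
    print(f"{name}: {bits}-bit classical -> {q}-bit quantum",
          "(migrate)" if q < 128 else "(keep)")
```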
{"title":"Assessing the quantum readiness of cryptographic standards: Recommendations toward quantum-era compliance","authors":"Vikas Chouhan , Mohammed Aldarwbi , Somayeh Sadeghi , Ali Ghorbani , Aaron Chow , Robby Burko","doi":"10.1016/j.csi.2025.104114","DOIUrl":"10.1016/j.csi.2025.104114","url":null,"abstract":"<div><div>Cryptography is fundamental to securing digital data and communications, yet established algorithms face increasing risk from emerging quantum capabilities. With the progression of quantum computing, the urgency for cryptographic standards that remain secure in both classical and quantum settings has intensified, governed not only by cryptanalytic risk but also by compliance, interoperability, and country-specific regulatory frameworks. This paper presents a structured evaluation framework that depicts the hierarchy of cryptographic standards, encompassing block ciphers, stream ciphers, hash and MAC functions, key establishment mechanisms, digital signatures, lightweight cryptography, entity authentication, public key infrastructure, and authentication and communication protocols. We define a standards-to-protocol recommendation flow that propagates compliant guidance across layers, from foundational primitives to PKI/authentication and hybridization, and extends to country-specific recommendations and protocols. Our contributions include explicit decision criteria for assessing cryptographic primitives under classical and quantum threat models, yielding both immediate and alternative deployment recommendations aligned with NIST-compliant guidelines. We further analyze hybrid schemes to ensure backward compatibility and secure integration, quantifying storage and network overheads for signatures, encryption, and key exchange to identify practical engineering trade-offs. Consolidated results are presented in reference tables detailing standardization year, purpose, notes, and migration recommendations for both classical and post-quantum contexts. Additionally, we examine the security strength of cryptographic primitives that are currently classically secure or quantum-resistant. This framework offers a reproducible, extensible path toward quantum-ready cryptographic systems.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104114"},"PeriodicalIF":3.1,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}