Decentralized multi-client boolean keyword search for encrypted cloud storage
Pub Date: 2026-01-14 | DOI: 10.1016/j.csi.2026.104127
Xiwen Wang, Junqing Gong, Kai Zhang, Haifeng Qian
In multi-client searchable symmetric encryption (MC-SSE), multiple clients can conduct keyword searches over encrypted data hosted in the cloud, where the outsourced data is contributed by a data owner. Unfortunately, all known MC-SSE schemes that address the key escrow problem require a secure channel between the data owner and each user, and may suffer from significant key storage overhead. We therefore present an efficient decentralized MC-SSE (DMC-SSE) system for secure cloud storage that avoids the key escrow problem and eliminates the secure channel between data owner and data user. In DMC-SSE, each client independently picks its own public/secret key pair, and a bulletin board of user public keys takes the place of a central authority. Technically, we introduce a semi-generic construction framework for DMC-SSE that builds on Cash et al.’s OXT structure (CRYPTO 2013), combines it with Kolonelos, Malavolta and Wee’s distributed broadcast encryption scheme (ASIACRYPT 2023), and additionally introduces a distributed keyed pseudorandom function module for securely aggregating each client’s secret key.
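The abstract does not detail the distributed keyed PRF module, so the sketch below only illustrates, under stated assumptions, how per-client PRF shares can be aggregated without any party holding the full key: a key-homomorphic PRF of the form F_k(w) = H(w)^k over a toy prime-order group. The names (prf_share, aggregate) and the modulus choice are hypothetical, not the paper's construction.

```python
import hashlib
import secrets

# Toy parameters: a plain prime modulus stands in for the EC/pairing groups
# a real scheme would use. Key homomorphism gives
#   prod_i H(w)^{k_i} = H(w)^{sum_i k_i}  (mod P).
P = 2**255 - 19          # prime; illustrative only
Q = P - 1                # exponents are taken modulo the group order

def hash_to_group(word: str) -> int:
    """Toy hash-to-group: square SHA-256(word) to land in the QR subgroup."""
    h = int.from_bytes(hashlib.sha256(word.encode()).digest(), "big")
    return pow(h, 2, P)

def prf_share(k_i: int, word: str) -> int:
    """One client's PRF evaluation under its locally chosen key share."""
    return pow(hash_to_group(word), k_i, P)

def aggregate(shares: list[int]) -> int:
    """Multiply per-client evaluations into the joint PRF value."""
    out = 1
    for s in shares:
        out = (out * s) % P
    return out

# Three clients pick independent shares; no one ever sees the joint key.
keys = [secrets.randbelow(Q) for _ in range(3)]
joint = aggregate([prf_share(k, "keyword") for k in keys])
assert joint == pow(hash_to_group("keyword"), sum(keys) % Q, P)
```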
{"title":"Decentralized multi-client boolean keyword search for encrypted cloud storage","authors":"Xiwen Wang , Junqing Gong , Kai Zhang , Haifeng Qian","doi":"10.1016/j.csi.2026.104127","DOIUrl":"10.1016/j.csi.2026.104127","url":null,"abstract":"<div><div>In multi-client searchable symmetric encryption (MC-SSE), multiple clients have the capability to conduct keyword searches on encrypted data hosted in cloud, where the outsourced data is contributed by a data owner. Unfortunately, all known MC-SSE addressing key escrow problem required establishing a secure channel between data owner and user, and might suffer from significant key storage overhead. Therefore, we present an effective decentralized MC-SSE (DMC-SSE) system without the key escrow problem for secure cloud storage, eliminating the secure channel between data owner and data user. In DMC-SSE, each client independently picks its public/secret key, while a bulletin board of user public keys takes the place of the central authority. Technically, we introduce a semi-generic construction framework of DMC-SSE, building upon Cash et al.’s OXT structure (CRYPTO 2013), which roughly combines Kolonelos, Malavolta and Wee’s distributed broadcast encryption scheme (ASIACRYPT 2023) and additionally introduces a distributed keyed pseudorandom function module for securely aggregating each client’s secret key.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104127"},"PeriodicalIF":3.1,"publicationDate":"2026-01-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Securing hashed timelock cross-chain protocol with trusted middleman in blockchain networks
Pub Date: 2026-01-09 | DOI: 10.1016/j.csi.2026.104129
Wenhua Huang, Yuwei Deng, Jingyu Feng, Gang Han, Wenbo Zhang
Cross-chain interoperability has emerged as a pivotal factor in enabling seamless data interaction and value circulation across diverse blockchain networks. Nevertheless, current cross-chain technologies must advance to satisfy the escalating demand for efficient bidirectional data exchange. Addressing this, our work focuses on refining cross-chain protocols, with a core emphasis on improving transaction efficiency, reliability, and security. Our innovation centers on a strengthened hashed timelock cross-chain protocol grounded in trusted middlemen. To safeguard the security and confidentiality of middlemen engaged in cross-chain transactions, we introduce an anonymous identity authentication mechanism that empowers middlemen to execute auxiliary cross-chain transactions while concealing their actual identities. Additionally, we propose a behavior-based assessment of middlemen, using distinct indicators to gauge their trustworthiness in each cross-chain transaction. We introduce both current and historical trust values, providing insight into middlemen's real-time reliability and long-term stability. This approach effectively thwarts attempts by malicious middlemen to manipulate trust values, mitigating security vulnerabilities in cross-chain transactions. Furthermore, by clearing redundant blocks, we not only decrease storage consumption but also make room to store a substantial amount of middlemen identity and trust data. Rigorous security analysis demonstrates our scheme's alignment with foundational security requirements and resilience against common attacks, and our simulation results underscore the effectiveness of our trust evaluation scheme in ensuring middlemen's credibility and detecting malicious actors.
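The paper's concrete trust indicators are not reproduced here; as a minimal sketch of the current-versus-historical trust idea, the update below blends a per-transaction behaviour score into a long-term value, so a single manipulated report cannot swing a middleman's standing. The weight alpha, the threshold, and the function names are assumptions for illustration.

```python
def update_trust(history: float, behaviour_score: float, alpha: float = 0.3) -> float:
    """Blend the current transaction's score into the historical trust value."""
    current = max(0.0, min(1.0, behaviour_score))   # clamp the indicator to [0, 1]
    return alpha * current + (1 - alpha) * history

def is_credible(trust: float, threshold: float = 0.6) -> bool:
    """Only middlemen above the threshold stay eligible to relay transfers."""
    return trust >= threshold

trust = 0.8                       # long-term (historical) trust value
trust = update_trust(trust, 0.2)  # one poor transaction lowers trust gradually
print(round(trust, 3), is_credible(trust))   # 0.62 True
```

Because the historical component dominates, sustained good behaviour is needed to recover trust, which is one simple way to blunt manipulation attempts.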
{"title":"Securing hashed timelock cross-chain protocol with trusted middleman in blockchain networks","authors":"Wenhua Huang, Yuwei Deng, Jingyu Feng, Gang Han, Wenbo Zhang","doi":"10.1016/j.csi.2026.104129","DOIUrl":"10.1016/j.csi.2026.104129","url":null,"abstract":"<div><div>Cross-chain interoperability has emerged as a pivotal factor in enabling seamless data interaction and value circulation across diverse blockchain networks. Nevertheless, current cross-chain technologies necessitate advancements to satisfy the escalating need for efficient bidirectional data exchange. Addressing this, our work focuses on refining cross-chain protocols, with a core emphasis on elevating transaction efficiency, reliability, and security. Our innovation centers around a strengthened hashed timelock cross-chain protocol grounded in trusted middlemen. To safeguard the security and confidentiality of middlemen engaged in cross-chain transactions, we introduce an ingenious anonymous identity authentication mechanism. This mechanism empowers middlemen to execute auxiliary cross-chain transactions while concealing their actual identities. Additionally, we propose a behavior-based assessment of middlemen, utilizing distinct indicators to gauge their trustworthiness in each cross-chain transaction. We introduce both current and historical trust values, providing insights into middlemen's real-time reliability and long-term stability. This approach effectively thwarts attempts by malicious middlemen to manipulate trust values, mitigating security vulnerabilities in cross-chain transactions. Furthermore, by clearing redundant blocks, we not only decrease storage consumption but also facilitate the storage of a substantial amount of identity data and trust data of middlemen. Rigorous security analysis demonstrates our scheme's alignment with foundational security requirements and resilience against common attacks. Furthermore, our simulation results underscore the potency of our trust evaluation scheme, substantiating its efficacy in ensuring middlemen credibility and detecting malicious actors.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104129"},"PeriodicalIF":3.1,"publicationDate":"2026-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145977134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
V-Bridge: A dynamic cross-shard blockchain protocol based on off-chain payment channel
Pub Date: 2025-12-31 | DOI: 10.1016/j.csi.2025.104123
Xueting Huang, Xiangwei Meng, Kai Zhang, Ce Yang, Wei Liang, Kuan-Ching Li
Sharding technology effectively improves system throughput by distributing the blockchain transaction load across multiple shards for parallel processing, and it is the core solution to the scalability problem of blockchain. However, as the number of shards increases, the frequency of cross-shard transactions grows significantly, leading to increased communication and computational overhead, transaction delays, uneven resource allocation, and load imbalance, which become key bottlenecks for performance scaling. To this end, this article proposes the cross-shard transaction protocol V-Bridge, which draws on the concept of off-chain payment channels to establish distributed virtual fund channels between Trustors in different shards, converting cross-shard transactions into off-chain transactions and realizing the logical flow of funds. To further enhance cross-shard transaction performance, V-Bridge integrates an intelligent sharding adjustment mechanism and a cross-shard optimized critical path protection algorithm (CSOCPPA) to dynamically balance shard loads, alleviate resource allocation issues, and minimize performance bottlenecks. Experimental results show that, compared with existing state-of-the-art protocols, V-Bridge increases average throughput by 26% to 46% and reduces transaction delays by 15% to 24%.
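The abstract does not give V-Bridge's message formats, so the sketch below only models the bookkeeping idea: a virtual fund channel between two Trustors in different shards turns a cross-shard payment into an off-chain balance update, with a sequence number so the latest agreed state wins at settlement. The class and field names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    trustor_a: str
    trustor_b: str
    balance_a: int
    balance_b: int
    seq: int = 0    # monotonically increasing state number; latest state settles

    def pay(self, amount: int, a_to_b: bool = True) -> None:
        """Shift funds inside the channel instead of on either shard's chain."""
        src = self.balance_a if a_to_b else self.balance_b
        if amount <= 0 or amount > src:
            raise ValueError("invalid amount for current channel balance")
        if a_to_b:
            self.balance_a -= amount
            self.balance_b += amount
        else:
            self.balance_b -= amount
            self.balance_a += amount
        self.seq += 1

ch = Channel("trustor_shard1", "trustor_shard2", balance_a=100, balance_b=50)
ch.pay(30)       # a cross-shard payment realized as an off-chain update
print(ch)        # seq=1, balances 70 / 80; only settlement touches the chains
```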
{"title":"V-Bridge: A dynamic cross-shard blockchain protocol based on off-chain payment channel","authors":"Xueting Huang , Xiangwei Meng , Kai Zhang , Ce Yang , Wei Liang , Kuan-Ching Li","doi":"10.1016/j.csi.2025.104123","DOIUrl":"10.1016/j.csi.2025.104123","url":null,"abstract":"<div><div>Sharding technology effectively improves system throughput by distributing the blockchain transaction load to multiple shards for parallel processing, and it is the core solution to the scalability problem of blockchain. However, as the number of shards increases, the frequency of cross-shard transactions increases significantly, leading to increased communication and computational overhead, transaction delays, uneven resource allocation, and load imbalance, which becomes a key bottleneck for performance expansion. To this end, this article proposes the cross-shard transaction protocol V-Bridge, which draws on the concept of off-chain payment channels to establish distributed virtual fund channels between Trustors in different shards, convert cross-shard transactions into off-chain transactions and realize the logical flow of funds. To further enhance cross-shard transaction performance, our V-Bridge integrates an intelligent sharding adjustment mechanism, and a cross-shard optimized critical path protection algorithm (CSOCPPA) to dynamically balance shard loads, alleviate resource allocation issues, and minimize performance bottlenecks. Experimental results show that compared with existing state-of-the-art protocols, our proposed V-Bridge’s average throughput is increased by 26% to 46%, and transaction delays are reduced by 15% to 24%.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104123"},"PeriodicalIF":3.1,"publicationDate":"2025-12-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145925039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AdaTraj-DP: An adaptive privacy framework for context-aware trajectory data publishing
Pub Date: 2025-12-30 | DOI: 10.1016/j.csi.2025.104125
Yongxin Zhao, Chundong Wang, Hao Lin, Xumeng Wang, Yixuan Song, Qiuyu Du
Trajectory data are widely used in AI-based spatiotemporal analysis but raise privacy concerns due to their fine-grained nature and the potential for individual re-identification. Existing differential privacy (DP) approaches often apply uniform perturbation, which compromises spatial continuity, or adopt personalized mechanisms that overlook structural utility. This study introduces AdaTraj-DP, an adaptive differential privacy framework designed to balance trajectory-level protection and analytical utility. The framework combines context-aware sensitivity detection with hierarchical aggregation. Specifically, a dynamic sensitivity model evaluates privacy risks according to spatial density and semantic context, enabling adaptive allocation of privacy budgets. An adaptive perturbation mechanism then injects noise proportionally to the estimated sensitivity and represents trajectories through Hilbert-based encoding for prefix-oriented hierarchical aggregation with layer-wise budget distribution. Experiments conducted on the T-Drive and GeoLife datasets indicate that AdaTraj-DP maintains stable query accuracy, spatial consistency, and downstream analytical utility across varying privacy budgets while satisfying formal differential privacy guarantees.
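AdaTraj-DP's sensitivity model and Hilbert-based aggregation are not reproduced here. The sketch below shows only the adaptive-budget idea under stated assumptions: per-point risk scores (assumed given) split a trajectory's total budget so that more sensitive points receive more Laplace noise, with the per-point budgets summing to eps_total under sequential composition over per-point releases. The function name and risk values are illustrative.

```python
import numpy as np

def perturb_trajectory(points: np.ndarray, risk: np.ndarray,
                       eps_total: float = 1.0, l1_sensitivity: float = 1.0,
                       rng: np.random.Generator | None = None) -> np.ndarray:
    """Adaptively split the privacy budget, then add per-point Laplace noise."""
    rng = rng or np.random.default_rng()
    inv = 1.0 / risk                      # higher risk -> smaller budget share
    eps_i = eps_total * inv / inv.sum()   # per-point budgets sum to eps_total
    scale = l1_sensitivity / eps_i        # so the Laplace scale grows with risk
    noise = rng.laplace(0.0, scale[:, None], size=points.shape)
    return points + noise

traj = np.array([[116.40, 39.90], [116.41, 39.91], [116.42, 39.92]])
risk = np.array([0.9, 0.2, 0.5])          # e.g., a dense downtown point is riskier
print(perturb_trajectory(traj, risk, eps_total=1.0))
```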
{"title":"AdaTraj-DP: An adaptive privacy framework for context-aware trajectory data publishing","authors":"Yongxin Zhao , Chundong Wang , Hao Lin , Xumeng Wang , Yixuan Song , Qiuyu Du","doi":"10.1016/j.csi.2025.104125","DOIUrl":"10.1016/j.csi.2025.104125","url":null,"abstract":"<div><div>Trajectory data are widely used in AI-based spatiotemporal analysis but raise privacy concerns due to their fine-grained nature and the potential for individual re-identification. Existing differential privacy (DP) approaches often apply uniform perturbation, which compromises spatial continuity, or adopt personalized mechanisms that overlook structural utility. This study introduces AdaTraj-DP, an adaptive differential privacy framework designed to balance trajectory-level protection and analytical utility. The framework combines context-aware sensitivity detection with hierarchical aggregation. Specifically, a dynamic sensitivity model evaluates privacy risks according to spatial density and semantic context, enabling adaptive allocation of privacy budgets. An adaptive perturbation mechanism then injects noise proportionally to the estimated sensitivity and represents trajectories through Hilbert-based encoding for prefix-oriented hierarchical aggregation with layer-wise budget distribution. Experiments conducted on the T-Drive and GeoLife datasets indicate that AdaTraj-DP maintains stable query accuracy, spatial consistency, and downstream analytical utility across varying privacy budgets while satisfying formal differential privacy guarantees.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104125"},"PeriodicalIF":3.1,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883438","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A multi-criteria process for IT project success evaluation–Addressing a critical gap in standard practices
Pub Date: 2025-12-24 | DOI: 10.1016/j.csi.2025.104122
João Carlos Lourenço, João Varajão
The evaluation of project success is widely recognised as valuable for improving IT (Information Technology) project performance and impact. However, many processes fail to adequately address the requirements for a sound evaluation, either because of their inherent complexity or because they do not comply with fundamental practical and theoretical concepts. This paper presents a process that combines a problem structuring method with a multi-criteria decision analysis approach to evaluate the success of IT projects. Applied in practice to a software development project carried out for a leading global supplier of technology and services, it offers a new way of creating a model for evaluating project success and tackling uncertainty, bringing clarity and consistency to the overall assessment process. A strong advantage of this process is that it is theoretically sound and can be easily applied to other evaluation problems involving other criteria. It also serves as a call to action for the development of formal standards in evaluation processes. Practical pathways to achieve such standardization include collaboration through industry consortia, the development and adoption of ISO frameworks, and embedding evaluation processes within established maturity models. These pathways can foster consistency, comparability, and continuous improvement across organizations, paving the way for more robust and transparent evaluation practices.
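The paper's actual value functions and weighting procedure are not shown in the abstract; as a minimal worked example of the additive multi-criteria model such a process typically produces, overall success is V(a) = sum_i w_i * v_i(a), with weights summing to 1 and partial values on a 0-100 scale. The criteria, weights, and scores below are invented for illustration.

```python
# Hypothetical criteria and weights; a real application would elicit these
# with the paper's problem structuring and MCDA steps.
criteria_weights = {"schedule": 0.25, "budget": 0.25,
                    "scope": 0.20, "stakeholder_satisfaction": 0.30}

project_scores = {"schedule": 70, "budget": 85,
                  "scope": 90, "stakeholder_satisfaction": 60}  # 0-100 scale

assert abs(sum(criteria_weights.values()) - 1.0) < 1e-9  # weights must sum to 1

overall = sum(w * project_scores[c] for c, w in criteria_weights.items())
print(f"overall success score: {overall:.1f} / 100")     # prints 74.8
```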
{"title":"A multi-criteria process for IT project success evaluation–Addressing a critical gap in standard practices","authors":"João Carlos Lourenço , João Varajão","doi":"10.1016/j.csi.2025.104122","DOIUrl":"10.1016/j.csi.2025.104122","url":null,"abstract":"<div><div>The evaluation of project success is widely recognised as valuable for improving IT (Information Technology) project performance and impact. However, many processes fail to adequately address the requirements for a sound evaluation due to their inherent complexity or by not complying with fundamental practical and theoretical concepts. This paper presents a process that combines a problem structuring method with a multi-criteria decision analysis approach to evaluate the success of IT projects. Put into practice in the context of a software development project developed for a leading global supplier of technology and services, it offers a new way of creating a model for evaluating project success and tackling uncertainty, bringing clarity and consistency to the overall assessment process. A strong advantage of this process is that it is theoretically sound and can be easily applied to other evaluation problems involving other criteria. It also serves as a call to action for the development of formal standards in evaluation processes. Practical pathways to achieve such standardization include collaboration through industry consortia, development and adoption of ISO frameworks, and embedding evaluation processes within established maturity models. These pathways can foster consistency, comparability, and continuous improvement across organizations, paving the way for more robust and transparent evaluation practices.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104122"},"PeriodicalIF":3.1,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145883440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing
Pub Date: 2025-12-23 | DOI: 10.1016/j.csi.2025.104121
Jiasheng Chen, Zhenfu Cao, Liangliang Wang, Jiachen Shen, Xiaolei Dong
A secure sharing mechanism in the cloud environment must not only enable efficient ciphertext storage for resource-constrained clients but also build a trusted data-sharing system. To address the limitations of existing schemes in user identity privacy protection, access control granularity, and data sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. To achieve finer-grained delegation and effective conditional privacy, our scheme treats the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap distance between the sender’s and receiver’s attribute sets meets a specific threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users’ real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a certain level of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared to other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.
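Setting the cryptographic algebra aside, the fuzzy matching rule stated above reduces to a simple threshold test on attribute-set overlap. The sketch below illustrates only that gate (the attribute names and threshold t are hypothetical), not FCL-PRE's re-encryption itself.

```python
def can_reencrypt(sender_attrs: set[str], receiver_attrs: set[str], t: int) -> bool:
    """Re-encryption proceeds iff the two condition sets share >= t attributes."""
    return len(sender_attrs & receiver_attrs) >= t

sender = {"cardiology", "2025", "research", "senior"}
receiver = {"cardiology", "2025", "research", "intern"}
print(can_reencrypt(sender, receiver, t=3))   # True: 3 shared attributes
print(can_reencrypt(sender, receiver, t=4))   # False: only 3 overlap
```

The threshold is what provides the error tolerance mentioned above: receivers need not match the sender's conditions exactly, only closely enough.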
{"title":"Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing","authors":"Jiasheng Chen , Zhenfu Cao , Liangliang Wang , Jiachen Shen , Xiaolei Dong","doi":"10.1016/j.csi.2025.104121","DOIUrl":"10.1016/j.csi.2025.104121","url":null,"abstract":"<div><div>Secure sharing mechanism in the cloud environment not only needs to realize efficient ciphertext storage of resource-constrained clients, but also needs to build a trusted data sharing system. Aiming at the limitations of existing schemes in terms of user identity privacy protection, insufficient access control granularity, and data sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. In order to achieve much better fine-grained delegation and effective conditional privacy, our scheme regards the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap distance of the sender’s and receiver’s attribute sets meets a specific threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users’ real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a certain level of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared to other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104121"},"PeriodicalIF":3.1,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Energy consumption assessment in embedded AI: Metrological improvements of benchmarks for edge devices
Pub Date: 2025-12-22 | DOI: 10.1016/j.csi.2025.104120
Andrea Apicella, Pasquale Arpaia, Luigi Capobianco, Francesco Caputo, Antonella Cioffi, Antonio Esposito, Francesco Isgrò, Rosanna Manzo, Nicola Moccaldi, Danilo Pau, Ettore Toscano
This manuscript proposes a new method to improve the MLCommons protocol for measuring power consumption on Microcontroller Units (MCUs) running edge Artificial Intelligence (AI). In particular, the proposed approach (i) selectively measures the power consumption attributable to the inferences (namely, the predictions performed by Artificial Neural Networks, ANNs), excluding the impact of other operations, (ii) accurately identifies the time window for acquiring the current samples thanks to the simultaneous measurement of power consumption and inference duration, and (iii) precisely synchronizes the measurement windows with the inferences. The method is validated on three use cases: (i) the Rockchip RV1106, a neural MCU that implements ANNs in hardware through a dedicated neural processing unit, and (ii) the STM32 H7 and (iii) the STM32 U5, a high-performance and an ultra-low-power general-purpose microcontroller, respectively. The proposed method returns higher power consumption for the two devices than the MLCommons approach, a result consistent with improved selectivity and accuracy. Furthermore, the method reduces measurement uncertainty on the Rockchip RV1106 and STM32 boards by factors of 6 and 12, respectively.
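As a hedged sketch of the synchronized-window idea (the signal names, sampling rate, and trigger mechanism are assumptions, not the paper's measurement setup): given a current trace and a flag raised only while an inference runs, energy is integrated over exactly the flagged samples, so idle-time consumption cannot pollute the result.

```python
import numpy as np

def inference_energy(current_a: np.ndarray, trigger: np.ndarray,
                     v_supply: float, fs_hz: float) -> float:
    """Energy in joules consumed inside the inference window(s) only."""
    window = trigger.astype(bool)
    power_w = v_supply * current_a[window]    # P = V * I, sample by sample
    return float(power_w.sum() / fs_hz)       # integrate: sum(P) * (1 / fs)

fs = 100_000.0                                # assumed 100 kS/s acquisition
t = np.arange(0, 0.01, 1 / fs)
current = 0.02 + 0.03 * ((t > 0.004) & (t < 0.008))  # 30 mA bump while inferring
trig = ((t > 0.004) & (t < 0.008)).astype(int)       # GPIO-style inference flag
print(inference_energy(current, trig, v_supply=3.3, fs_hz=fs))  # ~0.00066 J
```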
{"title":"Energy consumption assessment in embedded AI: Metrological improvements of benchmarks for edge devices","authors":"Andrea Apicella , Pasquale Arpaia , Luigi Capobianco , Francesco Caputo , Antonella Cioffi , Antonio Esposito , Francesco Isgrò , Rosanna Manzo , Nicola Moccaldi , Danilo Pau , Ettore Toscano","doi":"10.1016/j.csi.2025.104120","DOIUrl":"10.1016/j.csi.2025.104120","url":null,"abstract":"<div><div>This manuscript proposes a new method to improve the MLCommons protocol for measuring power consumption on Microcontroller Units (MCUs) when running edge Artificial Intelligence (AI). In particular, the proposed approach (i) selectively measures the power consumption attributable to the inferences (namely, the predictions performed by Artificial Neural Networks — ANN), preventing the impact of other operations, (ii) accurately identifies the time window for acquiring the sample of the current thanks to the simultaneous measurement of power consumption and inference duration, and (iii) precisely synchronize the measurement windows and the inferences. The method is validated on three use cases: (i) Rockchip RV1106, a neural MCU that implements ANN via hardware neural processing unit through a dedicated accelerator, (ii) STM32 H7, and (iii) STM32 U5, high-performance and ultra-low-power general-purpose microcontroller, respectively. The proposed method returns higher power consumption for the two devices with respect to the MLCommons approach. This result is compatible with an improvement of selectivity and accuracy. Furthermore, the method reduces measurement uncertainty on the Rockchip RV1106 and STM32 boards by factors of 6 and 12, respectively.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104120"},"PeriodicalIF":3.1,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessing the quantum readiness of cryptographic standards: Recommendations toward quantum-era compliance
Pub Date: 2025-12-17 | DOI: 10.1016/j.csi.2025.104114
Vikas Chouhan, Mohammed Aldarwbi, Somayeh Sadeghi, Ali Ghorbani, Aaron Chow, Robby Burko
Cryptography is fundamental to securing digital data and communications, yet established algorithms face increasing risk from emerging quantum capabilities. With the progression of quantum computing, the urgency for cryptographic standards that remain secure in both classical and quantum settings has intensified, governed not only by cryptanalytic risk but also by compliance, interoperability, and country-specific regulatory frameworks. This paper presents a structured evaluation framework that depicts the hierarchy of cryptographic standards, encompassing block ciphers, stream ciphers, hash and MAC functions, key establishment mechanisms, digital signatures, lightweight cryptography, entity authentication, public key infrastructure, and authentication and communication protocols. We define a standards-to-protocol recommendation flow that propagates compliant guidance across layers, from foundational primitives to PKI/authentication and hybridization, and extends to country-specific recommendations and protocols. Our contributions include explicit decision criteria for assessing cryptographic primitives under classical and quantum threat models, yielding both immediate and alternative deployment recommendations aligned with NIST-compliant guidelines. We further analyze hybrid schemes to ensure backward compatibility and secure integration, quantifying storage and network overheads for signatures, encryption, and key exchange to identify practical engineering trade-offs. Consolidated results are presented in reference tables detailing standardization year, purpose, notes, and migration recommendations for both classical and post-quantum contexts. Additionally, we examine the security strength of cryptographic primitives that are currently classically secure or quantum-resistant. This framework offers a reproducible, extensible path toward quantum-ready cryptographic systems.
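As a minimal sketch of the hybridization pattern analyzed above (the byte strings stand in for real ECDH and ML-KEM outputs; the context label and function name are invented), the session key is derived from both shared secrets, so it stays secure as long as either component remains unbroken. Production code should use a vetted KDF such as HKDF (RFC 5869) rather than a bare hash.

```python
import hashlib
import secrets

def hybrid_session_key(ss_classical: bytes, ss_pq: bytes,
                       context: bytes = b"hybrid-kex-v1") -> bytes:
    """Combine a classical and a post-quantum shared secret into one key."""
    # Single-hash combiner for illustration only; a real deployment would
    # use HKDF with explicit salt/info separation.
    return hashlib.sha3_256(context + ss_classical + ss_pq).digest()

ss_ecdh = secrets.token_bytes(32)    # stand-in for an X25519/ECDH secret
ss_mlkem = secrets.token_bytes(32)   # stand-in for an ML-KEM shared secret
print(hybrid_session_key(ss_ecdh, ss_mlkem).hex())
```

The storage and network overheads the paper quantifies show up directly in this pattern: a hybrid handshake must transport both key-exchange payloads, and hybrid signatures carry both signature blobs.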
{"title":"Assessing the quantum readiness of cryptographic standards: Recommendations toward quantum-era compliance","authors":"Vikas Chouhan , Mohammed Aldarwbi , Somayeh Sadeghi , Ali Ghorbani , Aaron Chow , Robby Burko","doi":"10.1016/j.csi.2025.104114","DOIUrl":"10.1016/j.csi.2025.104114","url":null,"abstract":"<div><div>Cryptography is fundamental to securing digital data and communications, yet established algorithms face increasing risk from emerging quantum capabilities. With the progression of quantum computing, the urgency for cryptographic standards that remain secure in both classical and quantum settings has intensified, governed not only by cryptanalytic risk but also by compliance, interoperability, and country-specific regulatory frameworks. This paper presents a structured evaluation framework that depicts the hierarchy of cryptographic standards, encompassing block ciphers, stream ciphers, hash and MAC functions, key establishment mechanisms, digital signatures, lightweight cryptography, entity authentication, public key infrastructure, and authentication and communication protocols. We define a standards-to-protocol recommendation flow that propagates compliant guidance across layers, from foundational primitives to PKI/authentication and hybridization, and extends to country-specific recommendations and protocols. Our contributions include explicit decision criteria for assessing cryptographic primitives under classical and quantum threat models, yielding both immediate and alternative deployment recommendations aligned with NIST-compliant guidelines. We further analyze hybrid schemes to ensure backward compatibility and secure integration, quantifying storage and network overheads for signatures, encryption, and key exchange to identify practical engineering trade-offs. Consolidated results are presented in reference tables detailing standardization year, purpose, notes, and migration recommendations for both classical and post-quantum contexts. Additionally, we examine the security strength of cryptographic primitives that are currently classically secure or quantum-resistant. This framework offers a reproducible, extensible path toward quantum-ready cryptographic systems.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104114"},"PeriodicalIF":3.1,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145839789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ARMOR: A multi-layered adaptive defense framework for robust deep learning systems against evolving adversarial threats
Pub Date: 2025-12-17 | DOI: 10.1016/j.csi.2025.104117
Mahmoud Mohamed, Fayaz AlJuaid
Introduction:
Adversarial attacks represent a major challenge to deep learning models deployed in critical fields such as healthcare diagnostics and financial fraud detection. This paper addresses the limitations of single-strategy defenses by introducing ARMOR (Adaptive Resilient Multi-layer Orchestrated Response), a novel multi-layered architecture that seamlessly integrates multiple defense mechanisms.
Methodology:
We evaluate ARMOR against seven state-of-the-art defense methods through extensive experiments across multiple datasets and five attack methodologies. Our approach combines adversarial detection, input transformation, model hardening, and adaptive response layers that operate with intentional dependencies and feedback mechanisms.
Results:
Quantitative results demonstrate that ARMOR significantly outperforms individual defense methods, achieving a 91.7% attack mitigation rate (18.3% improvement over ensemble averaging), 87.5% clean accuracy preservation (8.9% improvement over adversarial training alone), and 76.4% robustness against adaptive attacks (23.2% increase over the strongest baseline).
Discussion:
The modular framework design enables flexibility against emerging threats while requiring only 1.42× computational overhead compared to unprotected models, making it suitable for resource-constrained environments. Our findings demonstrate that activating and integrating complementary defense mechanisms represents a significant advance in adversarial resilience.
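ARMOR's layer internals are not given in the abstract; the mock-up below only shows the orchestration shape, with toy placeholders for each layer: a detector scores inputs, suspicious ones are purified, a hardened model classifies, and the adaptive response layer rejects clearly adversarial inputs. All functions and thresholds are hypothetical.

```python
import random

def detector(x: list[float]) -> float:
    """Toy adversarial-ness score in [0, 1]; a real layer uses trained detectors."""
    return min(1.0, sum(abs(v) for v in x) / (10 * len(x)))

def transform(x: list[float]) -> list[float]:
    """Toy input purification: quantize, then add a little random smoothing."""
    return [round(v, 1) + random.uniform(-0.01, 0.01) for v in x]

def hardened_model(x: list[float]) -> int:
    """Stand-in for an adversarially trained classifier."""
    return int(sum(x) > 0)

def armor_pipeline(x: list[float], reject_at: float = 0.8) -> int | None:
    """Layers run in sequence, with detector feedback steering the response."""
    score = detector(x)
    if score > reject_at:
        return None                          # adaptive response: refuse/escalate
    if score > 0.3:
        x = transform(x)                     # purify only suspicious inputs
    return hardened_model(x)

print(armor_pipeline([0.2, -0.1, 0.4]))      # 1 (clean path, no purification)
```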
{"title":"ARMOR: A multi-layered adaptive defense framework for robust deep learning systems against evolving adversarial threats","authors":"Mahmoud Mohamed, Fayaz AlJuaid","doi":"10.1016/j.csi.2025.104117","DOIUrl":"10.1016/j.csi.2025.104117","url":null,"abstract":"<div><h3>Introduction:</h3><div>Adversarial attacks represent a major challenge to deep learning models deployed in critical fields such as healthcare diagnostics and financial fraud detection. This paper addresses the limitations of single-strategy defenses by introducing ARMOR (Adaptive Resilient Multi-layer Orchestrated Response), a novel multi-layered architecture that seamlessly integrates multiple defense mechanisms.</div></div><div><h3>Methodology:</h3><div>We evaluate ARMOR against seven state-of-the-art defense methods through extensive experiments across multiple datasets and five attack methodologies. Our approach combines adversarial detection, input transformation, model hardening, and adaptive response layers that operate with intentional dependencies and feedback mechanisms.</div></div><div><h3>Results:</h3><div>Quantitative results demonstrate that ARMOR significantly outperforms individual defense methods, achieving a 91.7% attack mitigation rate (18.3% improvement over ensemble averaging), 87.5% clean accuracy preservation (8.9% improvement over adversarial training alone), and 76.4% robustness against adaptive attacks (23.2% increase over the strongest baseline).</div></div><div><h3>Discussion:</h3><div>The modular framework design enables flexibility against emerging threats while requiring only 1.42<span><math><mo>×</mo></math></span> computational overhead compared to unprotected models, making it suitable for resource-constrained environments. Our findings demonstrate that activating and integrating complementary defense mechanisms represents a significant advance in adversarial resilience.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104117"},"PeriodicalIF":3.1,"publicationDate":"2025-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chaos experiments in microservice architectures: A systematic literature review
Pub Date: 2025-12-15 | DOI: 10.1016/j.csi.2025.104116
Emrah Esen, Akhan Akbulut, Cagatay Catal
This study analyzes the implementation of Chaos Engineering in modern microservice systems. It identifies key methods, tools, and practices used to effectively enhance the resilience of software systems in production environments. Our Systematic Literature Review (SLR) of 31 research articles uncovered 38 tools for carrying out fault injection, including Chaos Toolkit, Gremlin, and Chaos Machine. The study also explores the platforms used for chaos experiments and how centralized management of chaos engineering can facilitate the coordination of experiments across complex systems. The evaluated literature reveals the efficacy of chaos engineering in improving the fault tolerance and robustness of software systems, particularly those based on microservice architectures. The paper underlines the importance of careful planning and execution when implementing chaos engineering and encourages further research to uncover more effective practices for improving the resilience of microservice systems.
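None of the surveyed tools' APIs are reproduced here; as a language-level sketch of what fault injection in a chaos experiment does (the decorator name, rates, and service stub are invented), the wrapper below delays or fails a configurable fraction of calls so resilience mechanisms such as retries, timeouts, and fallbacks can be observed against a steady-state hypothesis.

```python
import random
import time

def chaos(failure_rate: float = 0.1, max_delay_s: float = 0.5):
    """Wrap a service call with injected faults and latency."""
    def wrap(fn):
        def inner(*args, **kwargs):
            if random.random() < failure_rate:
                raise ConnectionError("chaos: injected fault")
            time.sleep(random.uniform(0, max_delay_s))  # injected latency
            return fn(*args, **kwargs)
        return inner
    return wrap

@chaos(failure_rate=0.2, max_delay_s=0.05)
def get_inventory(item_id: str) -> int:
    return 42    # stand-in for a downstream microservice call

ok = errors = 0
for _ in range(100):                       # a tiny steady-state probe
    try:
        get_inventory("sku-1")
        ok += 1
    except ConnectionError:
        errors += 1
print(f"success={ok} injected_failures={errors}")
```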
{"title":"Chaos experiments in microservice architectures: A systematic literature review","authors":"Emrah Esen , Akhan Akbulut , Cagatay Catal","doi":"10.1016/j.csi.2025.104116","DOIUrl":"10.1016/j.csi.2025.104116","url":null,"abstract":"<div><div>This study analyzes the implementation of Chaos Engineering in modern microservice systems. It identifies key methods, tools, and practices used to effectively enhance the resilience of software systems in production environments. In this context, our Systematic Literature Review (SLR) of 31 research articles has uncovered 38 tools crucial for carrying out fault injection methods, including several tools such as Chaos Toolkit, Gremlin, and Chaos Machine. The study also explores the platforms used for chaos experiments and how centralized management of chaos engineering can facilitate the coordination of these experiments across complex systems. The evaluated literature reveals the efficacy of chaos engineering in improving fault tolerance and robustness of software systems, particularly those based on microservice architectures. The paper underlines the importance of careful planning and execution in implementing chaos engineering and encourages further research in this field to uncover more effective practices for the resilience improvement of microservice systems.</div></div>","PeriodicalId":50635,"journal":{"name":"Computer Standards & Interfaces","volume":"97 ","pages":"Article 104116"},"PeriodicalIF":3.1,"publicationDate":"2025-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145790410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}