Pub Date: 2025-12-31 | DOI: 10.1016/j.csi.2025.104126
B. Muthusenthil , K. Devi
As Cloud Computing (CC) expands and builds on information technology (IT) infrastructure, conventional operating systems, and applications, it inherits their exposure to IT threats. Therefore, this paper proposes an Intrusion Detection System based on a Deep Convolutional Spiking Neural Network and a blockchain-based Dynamic Random Byzantine Fault Tolerance consensus algorithm, aimed at improving privacy and safety in the cloud computing environment (DCSNN-BC-DRBFT-IDS-CC). Data are gathered from the NSL-KDD and CICIDS-2017 benchmark datasets. The first-level privacy procedure is performed by the blockchain-based Dynamic Random Byzantine Fault Tolerance consensus algorithm (DRBFT); the second-level privacy procedure is performed by pre-processing. During pre-processing, a Markov Chain Random Field (MCRF) removes unwanted content and filters the relevant data. The pre-processing output is passed to feature selection, where the optimal features are chosen by Dynamic Recursive Feature Selection (DRFS). The Deep Convolutional Spiking Neural Network (DCSNN) then classifies data as normal or abnormal. The proposed DCSNN-BC-DRBFT-IDS-CC method is evaluated using standard performance metrics: it achieves 39.185 %, 14.37 %, 31.8 %, and 27.06 % better accuracy; 25.13 %, 21.75 %, 27.54 %, and 23.08 % less computation time; and 8.15 %, 2.57 %, 3.64 %, and 5.85 % higher AUC when compared to other existing models.
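As a concrete illustration of the recursive feature-selection step described above, the sketch below repeatedly drops the lowest-scoring feature until a target count remains. The variance-based score and the function name are assumptions for illustration only; the abstract does not specify DRFS's actual selection criterion.

```python
def recursive_feature_selection(rows, k):
    """rows: list of equal-length numeric feature vectors.
    Repeatedly drops the lowest-variance feature until k indices remain.
    (Variance is a stand-in score, not the paper's criterion.)"""
    kept = list(range(len(rows[0])))

    def variance(j):
        col = [row[j] for row in rows]
        mean = sum(col) / len(col)
        return sum((v - mean) ** 2 for v in col) / len(col)

    while len(kept) > k:
        # drop the feature whose score is currently lowest
        kept.remove(min(kept, key=variance))
    return kept
```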
Title: Deep convolutional spiking neural network and block chain based intrusion detection framework for enhancing privacy and security in cloud computing environment. Computer Standards & Interfaces, Vol. 97, Article 104126.
Pub Date: 2025-12-31 | DOI: 10.1016/j.csi.2025.104123
Xueting Huang , Xiangwei Meng , Kai Zhang , Ce Yang , Wei Liang , Kuan-Ching Li
Sharding technology effectively improves system throughput by distributing the blockchain transaction load across multiple shards for parallel processing, and it is the core solution to blockchain's scalability problem. However, as the number of shards increases, the frequency of cross-shard transactions rises significantly, leading to greater communication and computational overhead, transaction delays, uneven resource allocation, and load imbalance, which together become a key bottleneck for performance scaling. To this end, this article proposes the cross-shard transaction protocol V-Bridge, which draws on the concept of off-chain payment channels to establish distributed virtual fund channels between Trustors in different shards, converting cross-shard transactions into off-chain transactions and realizing the logical flow of funds. To further enhance cross-shard transaction performance, V-Bridge integrates an intelligent shard-adjustment mechanism and a cross-shard optimized critical path protection algorithm (CSOCPPA) to dynamically balance shard loads, alleviate resource-allocation issues, and minimize performance bottlenecks. Experimental results show that, compared with existing state-of-the-art protocols, V-Bridge increases average throughput by 26% to 46% and reduces transaction delays by 15% to 24%.
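The off-chain payment-channel idea that V-Bridge builds on can be sketched minimally: two parties lock funds once, exchange balance updates off-chain, and only the final state would be settled on-chain. The `PaymentChannel` class and its methods are illustrative, not the paper's protocol.

```python
class PaymentChannel:
    """Toy two-party channel: deposits are locked once, transfers are
    off-chain balance updates, and only settle() would touch the chain."""

    def __init__(self, deposit_a, deposit_b):
        self.balances = {"A": deposit_a, "B": deposit_b}
        self.off_chain_updates = 0  # transfers that never hit the chain

    def transfer(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.off_chain_updates += 1

    def settle(self):
        # only this final state would be written on-chain
        return dict(self.balances)
```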
Title: V-Bridge: A dynamic cross-shard blockchain protocol based on off-chain payment channel. Computer Standards & Interfaces, Vol. 97, Article 104123.
Pub Date: 2025-12-30 | DOI: 10.1016/j.csi.2025.104125
Yongxin Zhao , Chundong Wang , Hao Lin , Xumeng Wang , Yixuan Song , Qiuyu Du
Trajectory data are widely used in AI-based spatiotemporal analysis but raise privacy concerns due to their fine-grained nature and the potential for individual re-identification. Existing differential privacy (DP) approaches often apply uniform perturbation, which compromises spatial continuity, or adopt personalized mechanisms that overlook structural utility. This study introduces AdaTraj-DP, an adaptive differential privacy framework designed to balance trajectory-level protection and analytical utility. The framework combines context-aware sensitivity detection with hierarchical aggregation. Specifically, a dynamic sensitivity model evaluates privacy risks according to spatial density and semantic context, enabling adaptive allocation of privacy budgets. An adaptive perturbation mechanism then injects noise proportionally to the estimated sensitivity and represents trajectories through Hilbert-based encoding for prefix-oriented hierarchical aggregation with layer-wise budget distribution. Experiments conducted on the T-Drive and GeoLife datasets indicate that AdaTraj-DP maintains stable query accuracy, spatial consistency, and downstream analytical utility across varying privacy budgets while satisfying formal differential privacy guarantees.
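The adaptive-perturbation idea can be sketched as follows: points judged riskier receive a smaller share of the privacy budget and therefore stronger Laplace noise. The inverse-proportional allocation rule and the function names are assumptions for illustration, not AdaTraj-DP's actual mechanism.

```python
import math
import random

def allocate_budget(risk_scores, total_epsilon):
    """Split a total epsilon across points; higher-risk points get a
    smaller epsilon share and hence stronger noise (assumed rule)."""
    inverse = [1.0 / r for r in risk_scores]
    total = sum(inverse)
    return [total_epsilon * w / total for w in inverse]

def perturb(values, risk_scores, total_epsilon, sensitivity=1.0, seed=0):
    """Apply the Laplace mechanism per point with its allocated budget."""
    rng = random.Random(seed)
    noisy = []
    for value, eps in zip(values, allocate_budget(risk_scores, total_epsilon)):
        u = rng.random() - 0.5
        scale = sensitivity / eps  # Laplace scale b = sensitivity / epsilon
        noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
        noisy.append(value + noise)
    return noisy
```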
Title: AdaTraj-DP: An adaptive privacy framework for context-aware trajectory data publishing. Computer Standards & Interfaces, Vol. 97, Article 104125.
Pub Date: 2025-12-24 | DOI: 10.1016/j.csi.2025.104122
João Carlos Lourenço , João Varajão
The evaluation of project success is widely recognised as valuable for improving IT (Information Technology) project performance and impact. However, many processes fail to adequately address the requirements for a sound evaluation, either because of their inherent complexity or because they do not comply with fundamental practical and theoretical concepts. This paper presents a process that combines a problem structuring method with a multi-criteria decision analysis approach to evaluate the success of IT projects. Put into practice in a software development project carried out for a leading global supplier of technology and services, it offers a new way of creating a model for evaluating project success and tackling uncertainty, bringing clarity and consistency to the overall assessment process. A strong advantage of this process is that it is theoretically sound and can easily be applied to other evaluation problems involving other criteria. It also serves as a call to action for the development of formal standards in evaluation processes. Practical pathways to achieve such standardization include collaboration through industry consortia, development and adoption of ISO frameworks, and embedding evaluation processes within established maturity models. These pathways can foster consistency, comparability, and continuous improvement across organizations, paving the way for more robust and transparent evaluation practices.
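The aggregation step of such a multi-criteria evaluation is typically a weighted additive value model. A minimal sketch, with illustrative criteria and weights (the paper's actual criteria are not given in the abstract):

```python
def overall_success(scores, weights):
    """Weighted additive value model: scores on a 0-100 value scale,
    criterion weights summing to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in weights)
```

For example, a project scoring 80 on budget, 60 on schedule, and 90 on quality, with weights 0.3/0.3/0.4, aggregates to an overall value of 78.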
Title: A multi-criteria process for IT project success evaluation–Addressing a critical gap in standard practices. Computer Standards & Interfaces, Vol. 97, Article 104122.
Pub Date: 2025-12-23 | DOI: 10.1016/j.csi.2025.104121
Jiasheng Chen , Zhenfu Cao , Liangliang Wang , Jiachen Shen , Xiaolei Dong
A secure sharing mechanism in the cloud environment must not only realize efficient ciphertext storage for resource-constrained clients but also build a trusted data-sharing system. To address the limitations of existing schemes in user identity privacy protection, access-control granularity, and data-sharing security, we propose a fuzzy certificateless proxy re-encryption (FCL-PRE) scheme. To achieve finer-grained delegation and effective conditional privacy, our scheme treats the conditions as an attribute set associated with pseudo-identities, and re-encryption can be performed if and only if the overlap distance between the sender's and receiver's attribute sets meets a specific threshold. Moreover, the FCL-PRE scheme ensures anonymity, preventing the exposure of users' real identities through ciphertexts containing identity information during transmission. In the random oracle model, FCL-PRE not only guarantees confidentiality, anonymity, and collusion resistance but also leverages the fuzziness of re-encryption to provide a degree of error tolerance in the cloud-sharing architecture. Experimental results indicate that, compared with other existing schemes, FCL-PRE offers up to a 44.6% increase in decryption efficiency while maintaining the lowest overall computational overhead.
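The "fuzziness" condition described above reduces to a set-overlap test: re-encryption proceeds only if the two attribute sets share at least a threshold number of elements. A minimal sketch of that gate, with the cryptographic machinery omitted and illustrative attribute names:

```python
def can_reencrypt(sender_attrs, receiver_attrs, threshold):
    """Gate for fuzzy re-encryption: allowed iff the sender's and
    receiver's attribute sets overlap in at least `threshold` elements."""
    overlap = set(sender_attrs) & set(receiver_attrs)
    return len(overlap) >= threshold
```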
Title: Sharing as You Desire: A fuzzy certificateless proxy re-encryption scheme for efficient and privacy-preserving cloud data sharing. Computer Standards & Interfaces, Vol. 97, Article 104121.
Pub Date: 2025-12-22 | DOI: 10.1016/j.csi.2025.104120
Andrea Apicella , Pasquale Arpaia , Luigi Capobianco , Francesco Caputo , Antonella Cioffi , Antonio Esposito , Francesco Isgrò , Rosanna Manzo , Nicola Moccaldi , Danilo Pau , Ettore Toscano
This manuscript proposes a new method to improve the MLCommons protocol for measuring power consumption on Microcontroller Units (MCUs) running edge Artificial Intelligence (AI). In particular, the proposed approach (i) selectively measures the power consumption attributable to the inferences (namely, the predictions performed by Artificial Neural Networks, ANNs), excluding the impact of other operations, (ii) accurately identifies the time window for acquiring the current samples, thanks to the simultaneous measurement of power consumption and inference duration, and (iii) precisely synchronizes the measurement windows and the inferences. The method is validated on three use cases: (i) the Rockchip RV1106, a neural MCU that implements ANNs in hardware through a dedicated neural processing unit accelerator, and (ii) the STM32 H7 and (iii) the STM32 U5, a high-performance and an ultra-low-power general-purpose microcontroller, respectively. The proposed method returns higher power consumption for the two devices with respect to the MLCommons approach, a result compatible with improved selectivity and accuracy. Furthermore, the method reduces measurement uncertainty on the Rockchip RV1106 and STM32 boards by factors of 6 and 12, respectively.
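Restricting the energy computation to the synchronized inference window can be sketched as follows: integrate V·I over only those current samples falling inside the window. Trapezoidal integration and the sample values used in the check are illustrative assumptions, not the paper's acquisition setup.

```python
def inference_energy(timestamps, currents, t_start, t_end, voltage):
    """Energy in joules: trapezoidal integral of V*I dt over the
    (time, current) samples inside [t_start, t_end] only."""
    window = [(t, i) for t, i in zip(timestamps, currents)
              if t_start <= t <= t_end]
    energy = 0.0
    for (t0, i0), (t1, i1) in zip(window, window[1:]):
        energy += voltage * 0.5 * (i0 + i1) * (t1 - t0)
    return energy
```

For a constant 0.1 A draw at 3.3 V over a 2-second window this yields 0.66 J, while samples outside the window contribute nothing.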
Title: Energy consumption assessment in embedded AI: Metrological improvements of benchmarks for edge devices. Computer Standards & Interfaces, Vol. 97, Article 104120.
Pub Date: 2025-12-17 | DOI: 10.1016/j.csi.2025.104114
Vikas Chouhan , Mohammed Aldarwbi , Somayeh Sadeghi , Ali Ghorbani , Aaron Chow , Robby Burko
Cryptography is fundamental to securing digital data and communications, yet established algorithms face increasing risk from emerging quantum capabilities. With the progression of quantum computing, the urgency for cryptographic standards that remain secure in both classical and quantum settings has intensified, governed not only by cryptanalytic risk but also by compliance, interoperability, and country-specific regulatory frameworks. This paper presents a structured evaluation framework that depicts the hierarchy of cryptographic standards, encompassing block ciphers, stream ciphers, hash and MAC functions, key establishment mechanisms, digital signatures, lightweight cryptography, entity authentication, public key infrastructure, and authentication and communication protocols. We define a standards-to-protocol recommendation flow that propagates compliant guidance across layers, from foundational primitives to PKI/authentication and hybridization, and extends to country-specific recommendations and protocols. Our contributions include explicit decision criteria for assessing cryptographic primitives under classical and quantum threat models, yielding both immediate and alternative deployment recommendations aligned with NIST-compliant guidelines. We further analyze hybrid schemes to ensure backward compatibility and secure integration, quantifying storage and network overheads for signatures, encryption, and key exchange to identify practical engineering trade-offs. Consolidated results are presented in reference tables detailing standardization year, purpose, notes, and migration recommendations for both classical and post-quantum contexts. Additionally, we examine the security strength of cryptographic primitives that are currently classically secure or quantum-resistant. This framework offers a reproducible, extensible path toward quantum-ready cryptographic systems.
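As an example of the storage and network trade-offs such an analysis quantifies, a hybrid key exchange sends both a classical and a post-quantum share in each direction. The sketch below uses the published sizes for X25519 (32-byte public keys) and ML-KEM-768 (1184-byte encapsulation key, 1088-byte ciphertext); the two-message handshake shape is a simplifying assumption.

```python
# Published sizes in bytes (RFC 7748 for X25519, FIPS 203 for ML-KEM-768).
SIZES = {
    "x25519_public": 32,
    "mlkem768_encaps_key": 1184,
    "mlkem768_ciphertext": 1088,
}

def hybrid_handshake_bytes():
    """Key-exchange bytes on the wire for a hybrid X25519 + ML-KEM-768
    handshake, assuming one message in each direction."""
    client_to_server = SIZES["x25519_public"] + SIZES["mlkem768_encaps_key"]
    server_to_client = SIZES["x25519_public"] + SIZES["mlkem768_ciphertext"]
    return client_to_server + server_to_client
```

The hybrid handshake thus carries 2336 bytes of key-exchange material, versus 64 bytes for X25519 alone, which is the kind of concrete overhead that informs migration recommendations.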
Title: Assessing the quantum readiness of cryptographic standards: Recommendations toward quantum-era compliance. Computer Standards & Interfaces, Vol. 97, Article 104114.
Pub Date: 2025-12-17 | DOI: 10.1016/j.csi.2025.104117
Mahmoud Mohamed, Fayaz AlJuaid
Introduction:
Adversarial attacks represent a major challenge to deep learning models deployed in critical fields such as healthcare diagnostics and financial fraud detection. This paper addresses the limitations of single-strategy defenses by introducing ARMOR (Adaptive Resilient Multi-layer Orchestrated Response), a novel multi-layered architecture that seamlessly integrates multiple defense mechanisms.
Methodology:
We evaluate ARMOR against seven state-of-the-art defense methods through extensive experiments across multiple datasets and five attack methodologies. Our approach combines adversarial detection, input transformation, model hardening, and adaptive response layers that operate with intentional dependencies and feedback mechanisms.
Results:
Quantitative results demonstrate that ARMOR significantly outperforms individual defense methods, achieving a 91.7% attack mitigation rate (18.3% improvement over ensemble averaging), 87.5% clean accuracy preservation (8.9% improvement over adversarial training alone), and 76.4% robustness against adaptive attacks (23.2% increase over the strongest baseline).
Discussion:
The modular framework design enables flexibility against emerging threats while requiring only 1.42× computational overhead compared to unprotected models, making it suitable for resource-constrained environments. Our findings demonstrate that activating and integrating complementary defense mechanisms represents a significant advance in adversarial resilience.
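A layered architecture of this kind can be sketched as a dispatcher that passes an input through defense layers in order, letting any layer short-circuit with a rejection. The concrete detector and clipper layers below are placeholders, not ARMOR's components.

```python
def run_layers(x, layers):
    """Pass x through each layer in order; a layer returns
    (verdict, value) and may reject to short-circuit the pipeline."""
    for layer in layers:
        verdict, x = layer(x)
        if verdict == "reject":
            return "rejected", x
    return "accepted", x

# Placeholder layers: a detector that rejects out-of-range inputs,
# then an input transformation that clips survivors to [-1, 1].
def detector(x):
    return ("reject" if abs(x) > 10 else "pass"), x

def clipper(x):
    return "pass", max(-1.0, min(1.0, x))
```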
Title: ARMOR: A multi-layered adaptive defense framework for robust deep learning systems against evolving adversarial threats. Computer Standards & Interfaces, Vol. 97, Article 104117.
Pub Date : 2025-12-15DOI: 10.1016/j.csi.2025.104116
Emrah Esen , Akhan Akbulut , Cagatay Catal
This study analyzes the implementation of Chaos Engineering in modern microservice systems. It identifies the key methods, tools, and practices used to effectively enhance the resilience of software systems in production environments. In this context, our Systematic Literature Review (SLR) of 31 research articles uncovered 38 tools crucial for carrying out fault injection, including Chaos Toolkit, Gremlin, and Chaos Machine. The study also explores the platforms used for chaos experiments and how centralized management of chaos engineering can facilitate the coordination of these experiments across complex systems. The evaluated literature reveals the efficacy of chaos engineering in improving the fault tolerance and robustness of software systems, particularly those based on microservice architectures. The paper underlines the importance of careful planning and execution in implementing chaos engineering and encourages further research in this field to uncover more effective practices for improving the resilience of microservice systems.
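The core fault-injection idea behind the chaos experiments surveyed above can be shown in miniature: wrap a service call so that, with a configured probability, it fails the way a real dependency might in production, and verify the caller degrades gracefully. The decorator and fault type here are illustrative assumptions, not the API of Chaos Toolkit, Gremlin, or Chaos Machine:

```python
import random


class InjectedFault(Exception):
    """Raised when the chaos layer simulates a dependency failure."""


def chaos(failure_rate=0.2, rng=random.random):
    """Decorator that injects a fault into a call with probability failure_rate."""
    def wrap(call):
        def wrapped(*args, **kwargs):
            if rng() < failure_rate:                 # the steady-state hypothesis
                raise InjectedFault(call.__name__)   # is tested under this fault
            return call(*args, **kwargs)
        return wrapped
    return wrap


@chaos(failure_rate=0.5, rng=lambda: 0.4)  # deterministic rng so the demo always faults
def fetch_user(user_id):
    return {"id": user_id, "name": "alice"}


def get_user_or_default(user_id):
    """A resilient caller falls back to a cached value instead of crashing."""
    try:
        return fetch_user(user_id)
    except InjectedFault:
        return {"id": user_id, "name": "<cached>"}


print(get_user_or_default(7))   # fault injected, so the cached fallback is returned
```

In a real chaos experiment the injection point would sit in infrastructure (network, process, or platform level) rather than in application code, and the experiment would assert a system-wide steady-state metric; this sketch only illustrates the inject-then-observe loop.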
Chaos experiments in microservice architectures: A systematic literature review (Computer Standards & Interfaces, vol. 97, Article 104116)
Pub Date : 2025-12-15DOI: 10.1016/j.csi.2025.104118
Kübra Seyhan , Sedat Akleylek , Ahmet Faruk Dursun
In this paper, an efficient post-quantum secure password-authenticated key exchange (PAKE) scheme built from a well-structured lattice-based key encapsulation mechanism (KEM) is proposed. The generic KEM-to-PAKE construction, OCAKE, is modified under hybrid module learning with errors (MLWE) + module learning with rounding (MLWR) assumptions to obtain explicit password-based authentication from the SMAUG-T.KEM procedures. SMAUG-T.KEM is chosen as the KEM primitive because of its performance relative to the National Institute of Standards and Technology (NIST) standard CRYSTALS-Kyber (Kyber), yielding an efficient and post-quantum secure PAKE scheme. First, the anonymity and fuzziness properties of SMAUG-T.KEM are proven so that the OCAKE approach can be applied to construct the PAKE version of SMAUG-T.KEM. Then, the post-quantum security of the proposed SMAUG-T.PAKE is analyzed in the universal composability (UC) model based on the hybrid security assumptions and the proven properties. Reference C and Java implementations are written to evaluate whether the targeted efficiency is achieved on different platforms. Based on central processing unit (CPU) and memory usage, run time, and energy consumption metrics, the proposed solution is compared with current PAKE proposals. The performance results show that SMAUG-T.PAKE, with two optional encryption modes, Advanced Encryption Standard (AES) or Ascon, outperforms the other module-based lattice PAKE solutions in both the reference and mobile benchmarks.
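The OCAKE-style KEM-to-PAKE flow described above can be sketched end to end: the client sends its KEM public key masked under a password-derived key, and only a server holding the same password can unmask it and encapsulate against it, so both sides end up with the same session secret. The discrete-log toy KEM below merely stands in for SMAUG-T.KEM so the message flow is runnable; it is not the paper's scheme, and nothing in this sketch is secure:

```python
import hashlib
import secrets

P = 2**2203 - 1                     # a Mersenne prime, used only as a toy modulus
G = 5
NBYTES = (P.bit_length() + 7) // 8  # fixed-width encoding of group elements


def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def kem_encaps(pk):
    r = secrets.randbelow(P - 2) + 1
    ss = hashlib.sha256(pow(pk, r, P).to_bytes(NBYTES, "big")).digest()
    return pow(G, r, P), ss          # (ciphertext, shared secret)

def kem_decaps(sk, ct):
    return hashlib.sha256(pow(ct, sk, P).to_bytes(NBYTES, "big")).digest()


def pw_stream(password, salt, n):
    """Password-derived keystream (a stand-in for OCAKE's ideal cipher)."""
    key = hashlib.scrypt(password, salt=salt, n=2**12, r=8, p=1)
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]

def mask(data, password, salt):
    ks = pw_stream(password, salt, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))


# --- One protocol run ---
password, salt = b"correct horse", b"session-salt"

# Client: generate a KEM key pair; send the public key masked by the password.
sk, pk = kem_keygen()
msg1 = mask(pk.to_bytes(NBYTES, "big"), password, salt)

# Server: unmask pk with its copy of the password, then encapsulate against it.
pk_server = int.from_bytes(mask(msg1, password, salt), "big")
ct, ss_server = kem_encaps(pk_server)

# Client: decapsulate; both sides now hold the same session secret.
ss_client = kem_decaps(sk, ct)
print(ss_client == ss_server)   # True
```

A wrong password leaves the server with a garbled public key, so its encapsulated secret does not match the client's decapsulation; the full construction additionally needs key-confirmation messages, and the anonymity/fuzziness properties proven in the paper are what make masking a real lattice public key this way safe.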
Post-quantum PAKE over lattices revised: Smaug-T.PAKE for mobile devices (Computer Standards & Interfaces, vol. 97, Article 104118)