Pub Date: 2025-11-11 | DOI: 10.1016/j.sysarc.2025.103625
Ting-Chieh Ho, Yuh-Min Tseng, Sen-Shan Huang
An authenticated key exchange (AKE) protocol plays a critical role in public-key cryptography (PKC), providing essential mechanisms to establish secure communication and mutual authentication between communicating participants. Recently, to withstand side-channel attacks that allow adversaries to obtain partial information about private keys during computation rounds, some AKE protocols have been designed to provide leakage resilience. However, there has been limited work on AKE protocols with leakage resilience for client–server environments, and the existing protocols are suitable only for a single PKC; that is, both the clients and the server must be based on the same PKC. To overcome this limitation, we propose the first efficient and compatible authenticated key exchange protocol with leakage resilience for heterogeneous client–server environments (CAKE-LR). In the proposed protocol, clients can be heterogeneous PKC participants, using either public-key infrastructure PKC (PKI-PKC) or identity-based PKC (ID-PKC). For security analysis, we provide formal security proofs in the generic bilinear group (GBG) model, based on security assumptions including the secure hash function (SHF), the discrete logarithm (DL), and the computational Diffie–Hellman (CDH) assumptions. Finally, performance evaluations and comparisons demonstrate that our protocol offers several advantages over the existing AKE protocols, making it well-suited for practical deployment in heterogeneous client–server environments.
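As a minimal illustration of the computational Diffie–Hellman setting that such security proofs rest on, the toy sketch below derives a shared secret from exchanged public values. This is not the CAKE-LR protocol (which adds authentication and leakage resilience on top), and the group parameters here are demonstration-sized and insecure by design.

```python
import secrets

# Toy group parameters: a Mersenne prime modulus, far too small for real
# deployments, which would use a standardized pairing-friendly group.
P = 2**127 - 1
G = 3

a = secrets.randbelow(P - 2) + 1   # initiator's ephemeral secret
b = secrets.randbelow(P - 2) + 1   # responder's ephemeral secret
A = pow(G, a, P)                   # public values exchanged in the clear
B = pow(G, b, P)

# CDH assumption: given only (G, A, B), computing G**(a*b) mod P is hard,
# yet each party derives it easily from its own secret.
k_initiator = pow(B, a, P)
k_responder = pow(A, b, P)
assert k_initiator == k_responder
```

A full AKE would never use this raw value directly; it would be authenticated and fed through a key-derivation function.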
Title: "An efficient and compatible authenticated key exchange protocol with leakage resilience for heterogeneous client–server environments" (Journal of Systems Architecture, Vol. 170, Article 103625)
Pub Date: 2025-11-10 | DOI: 10.1016/j.sysarc.2025.103624
Yuansheng Luo , Hao Yang , Bing Xiong , Shi Qiu
With the accelerated deployment of AIoT (artificial intelligence of things), massive edge terminals and distributed sensing introduce high concurrency and bursty network loads. Resource-constrained devices are susceptible to hijacking and can be leveraged to form botnets, significantly amplifying the risk of volumetric DDoS (distributed denial-of-service) attacks. Traditional software-based defense schemes often struggle to meet line-rate and real-time requirements under large-scale attacks due to high processing latency and substantial resource consumption. To address this, this paper proposes LARDM, a DDoS detection and mitigation framework fully deployed on the programmable data plane. The framework is based on P4-programmable switches and comprises three core components: a burst stream filter, a stream feature collector, and a decision tree module, enabling real-time detection and accurate localization of volumetric DDoS attacks. The burst stream filter utilizes hash collision and probabilistic decay mechanisms to efficiently filter mice flows and focus resources on detecting potential attack streams; the stream feature collector captures key statistical features at multiple checkpoints; and the decision tree module performs lightweight inference directly in the data plane, reporting to the controller, which issues blacklists, whitelists, and mitigation rules when the confidence level exceeds the threshold. The framework innovatively introduces Gini impurity to quantify network anomalies and performs flow aggregation based on suspicious source or destination IPs when anomalies are detected, significantly enhancing the tracking and localization of distributed attack sources. Experimental results show that LARDM achieves 90% coverage of Top-K elephant flows on BMv2 programmable switches, with a flow classification accuracy of 99.3%, outperforming existing data plane detection methods.
The system can rapidly identify anomalies and initiate mitigation within a short window after an attack, effectively reducing the impact of attack traffic on network performance. The lightweight nature of the scheme is further validated by space complexity analysis, demonstrating its suitability for resource-constrained data planes.
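The abstract does not spell out how Gini impurity is computed over traffic; a plausible per-key formulation (an illustrative sketch only, with made-up flow counts and destination IPs) is the classic 1 − Σp² over per-destination traffic shares. During a volumetric attack, traffic collapses onto one victim, so impurity keyed by destination drops sharply:

```python
from collections import Counter

def gini_impurity(flow_counts):
    """Gini impurity of a traffic mix: 1 - sum(p_i^2).
    Near 0 when traffic concentrates on one key; near 1 - 1/n when
    spread evenly over n keys."""
    total = sum(flow_counts.values())
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in flow_counts.values())

# Benign mix: several destinations with fairly even shares.
benign = Counter({"10.0.0.1": 30, "10.0.0.2": 25, "10.0.0.3": 28, "10.0.0.4": 27})
# Volumetric attack: traffic collapses onto one victim destination.
attack = Counter({"10.0.0.9": 950, "10.0.0.1": 20, "10.0.0.2": 30})

assert gini_impurity(attack) < gini_impurity(benign)
```

In a P4 data plane this sum-of-squares would be approximated with integer counters and match-action lookups rather than floating point; the sketch above only conveys the metric.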
Title: "LARDM: Lightweight and aggregation-driven real-time detection and mitigation of volumetric DDoS attacks in the programmable data plane" (Journal of Systems Architecture, Vol. 170, Article 103624)
Pub Date: 2025-11-07 | DOI: 10.1016/j.sysarc.2025.103592
Gianluca Bellocchi , Daniel Madronal , Alessandro Capotondi , Francesca Palumbo , Andrea Marongiu
Smart and Precision Agriculture (SPA) methods and technologies, such as autonomous robots, AI/ML, sensors, and actuators, enhance farming productivity by automating the retrieval of environmental parameters and the decision-making process, while Fog- and Edge-based paradigms enable more informed and responsive practices. Unmanned Aerial Vehicles (UAVs) can autonomously inspect crops and promptly cooperate with terrestrial vehicles to perform treatments, as recently demonstrated by the EU-funded COMP4DRONES (C4D) research project, focused on the provisioning of innovative UAV technologies for civilian applications. Modern companion-equipped UAVs leverage Heterogeneous Systems-on-Chip (HeSoCs) to execute complex on-board tasks. HeSoCs generally combine a general-purpose, multi-core processor with a domain-specific accelerator-rich subsystem, massively integrating application-specific accelerators. Field Programmable Gate Arrays (FPGAs) are ideal fabrics to attain high performance and energy efficiency because of their massively parallel, deeply pipelined, non-Von-Neumann processing logic and custom memory hierarchies. Automated hardware-software co-design methodologies, e.g., FPGA overlays and toolflows, largely simplify the design phases, including the optimization of the accelerator interfaces, such as the merging of redundant components to reduce area usage. In this context, our contribution consists of a System-Level Design (SLD) methodology for overlay-based UAV companion computers, including a modular and scalable accelerator-rich RISC-V HeSoC, a heterogeneous software stack, and an automation toolchain to generate and integrate application-specific accelerators into our overlay. We present three optimized overlay variants targeting a UAV-based system employed in a SPA context. Experimental results show improvements in performance and area usage, up to 18.5% on an FPGA-based HeSoC with respect to traditional design flows.
Title: "An FPGA-based accelerator design methodology for smart UAVs in precision agriculture: A case study" (Journal of Systems Architecture, Vol. 170, Article 103592)
Pub Date: 2025-11-04 | DOI: 10.1016/j.sysarc.2025.103622
Dawei Yang , Chenhao Ma , Xiuhui Deng , Jason Junwei Zeng , Jianning Zhang , Wei Huang , Zhe Jiang , Ying Huo
Various video segmentation tasks can be summarized as segmenting target objects in a video using prior guidance. Based on which priors are used, these tasks can be categorized into video instance segmentation (VIS), referring video object segmentation (RVOS), and audio-guided video object segmentation (AVOS), which take predefined categories, text descriptions, and audio cues as guidance, respectively. Previous works primarily focused on each task individually, designing specialized architectures for optimal performance. However, these architectures cannot easily generalize to different tasks. To address this, we present a joint-training video segmentation transformer (JVST) capable of solving these tasks with a single architecture. Specifically, we extract features from the prior guidance and unify them into embeddings that act as queries, indicating to the model which task to perform. Then, prior and visual features interact in our prior-to-vision and vision-to-prior modules to enrich each other's representations. Finally, the enhanced visual features and queries are fed into our frame-level and clip-level models to generate predictions. Joint training on datasets from different tasks enables the model to learn more general and robust knowledge. Extensive experiments verify the effectiveness of our joint training paradigm and the superiority of JVST over previous task-specific methods.
Title: "Joint learning video segmentation with different prior guidance" (Journal of Systems Architecture, Vol. 170, Article 103622)
Pub Date: 2025-11-01 | DOI: 10.1016/j.sysarc.2025.103623
Setareh Ahsaei, Mohsen Raji
Vision Transformers (ViTs) have achieved remarkable success across various vision tasks. However, their deployment in safety-critical applications raises serious concerns about their resilience to hardware faults such as soft errors. Traditional soft error tolerance techniques are effective but impose considerable memory and computational overhead, making them unsuitable for resource-constrained embedded systems. This paper presents a Low Overhead Soft error Tolerance methodology for ViTs (called LOST-ViT), leveraging model compression and selective bit-level redundancy. LOST-ViT begins by pruning low-saliency weights to reduce the parameters that are potentially vulnerable to faults while lowering both the memory and computational overhead of the subsequent soft-error mitigation approach. The proposed methodology takes advantage of a Zero-memory Overhead Bit-level data Redundancy (named ZOBiR) to improve the soft error tolerance of ViTs. The core idea of ZOBiR is to replicate selected bit segments of the model parameters and store them in place of the common bit segments that remain identical across all parameters. To manage the computational overhead, a selective approach is introduced according to a comprehensive vulnerability analysis across different components of the ViT model. Extensive experiments demonstrate the high resilience of the proposed method to memory soft errors, with very low computational and no memory overhead.
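The exact ZOBiR encoding is not given in the abstract; the following toy 8-bit sketch only illustrates the core idea of reusing a bit segment that is identical across all parameters as free storage for a duplicate of more critical bits (the specific masks, bit positions, and the shared-template value are all assumptions):

```python
COMMON_MASK    = 0b11000000  # segment assumed identical across all weights
COMMON_BITS    = 0b01000000  # the shared value of that segment (template)
CRITICAL_MASK  = 0b00110000  # bits treated as most fault-critical
CRITICAL_SHIFT = 4

def encode(w: int) -> int:
    """Overwrite the common segment with a duplicate of the critical bits,
    so the redundancy costs zero extra memory."""
    dup = ((w & CRITICAL_MASK) >> CRITICAL_SHIFT) << 6
    return (w & ~COMMON_MASK & 0xFF) | dup

def decode(stored: int) -> tuple[int, bool]:
    """Compare the critical bits with their in-place duplicate to detect a
    flip, then restore the common segment from the shared template."""
    dup = (stored >> 6) & 0b11
    crit = (stored & CRITICAL_MASK) >> CRITICAL_SHIFT
    fault_detected = dup != crit
    restored = (stored & ~COMMON_MASK & 0xFF) | COMMON_BITS
    return restored, fault_detected

w = 0b01101010                      # a weight whose top bits match the template
restored, fault = decode(encode(w))
assert restored == w and not fault
# A flip in a critical bit is caught by the in-place duplicate:
_, fault = decode(encode(w) ^ 0b00100000)
assert fault
```

With a single duplicate the scheme detects but cannot unambiguously correct a flip; the paper's selective, vulnerability-driven placement of such segments is what keeps its overhead low.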
Title: "LOST-ViT: a low overhead soft error tolerance framework for vision transformers via model compression and selective bit-level redundancy" (Journal of Systems Architecture, Vol. 170, Article 103623)
Pub Date: 2025-10-31 | DOI: 10.1016/j.sysarc.2025.103606
Yuxi Li , Jingjing Chen , Dong Ji , Qingxu Deng
Within the Internet of Medical Things (IoMT), wearables and edge sensors continuously stream physiological data with precise spatiotemporal labels, producing spatiotemporal electronic health records (EHRs) at scale. Offloading raw telemetry to the cloud burdens analytics and storage and raises compliance and profiling risks. We present MedHST, an edge-first framework that secures the IoMT analytics pipeline end-to-end with fine-grained access control. Each timestamp–grid block is protected with labeled additive homomorphic encryption (LabHE) for encrypted range aggregation. Per-dimension first-difference masking with constrained pseudorandom function (cPRF)-derived seeds enables constant-time verify-then-decrypt for axis-aligned windows. A hierarchical quadtree-dyadic index together with a homomorphic MAC (HoMAC) binds each answer to a fresh nonce and its query context, providing end-to-end integrity. Least-privilege sharing uses ciphertext-policy attribute-based encryption (CP-ABE)-wrapped range seeds to support epoch-bounded revocation and logarithmic-size authorization headers, without exposing plaintext indices. On a gateway-class platform, MedHST returns constant-size answers and maintains O(1) client verify-then-decrypt work; cryptographic paths run in the microsecond regime with a predictable ≈2× integrity overhead; end-to-end latency remains 1–2 ms across window sizes; and ingest scales to tens of millions of blocks. Collectively, these properties establish MedHST as a practical, scalable, and verifiable security layer for privacy-preserving IoMT analytics from device to cloud.
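The abstract's "first-difference masking" with PRF-derived seeds plausibly exploits telescoping: if each block's mask is the difference of adjacent PRF outputs, a contiguous window's masks cancel pairwise and the aggregate unmasks with just two PRF evaluations, independent of window size. A toy sketch under that assumption (SHA-256 standing in for the constrained PRF, plain integers standing in for LabHE ciphertexts):

```python
import hashlib

M = 1 << 64  # mask arithmetic modulus

def prf(seed: bytes, i: int) -> int:
    """Toy PRF: derive a pseudorandom 64-bit value for index i."""
    digest = hashlib.sha256(seed + i.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big")

def mask_stream(xs, seed):
    """First-difference masking: c[i] = x[i] + (prf(i+1) - prf(i)) mod M."""
    return [(x + prf(seed, i + 1) - prf(seed, i)) % M
            for i, x in enumerate(xs)]

def window_sum(cs, a, b, seed):
    """Recover sum(x[a..b]) in O(1) PRF work: the per-index mask
    differences telescope to prf(b+1) - prf(a)."""
    masked = sum(cs[a:b + 1]) % M
    return (masked - prf(seed, b + 1) + prf(seed, a)) % M

seed = b"demo-seed"
xs = [5, 7, 11, 2, 9]
cs = mask_stream(xs, seed)
assert window_sum(cs, 1, 3, seed) == 7 + 11 + 2
```

The real system additionally authenticates the answer with a homomorphic MAC and derives per-window seeds via the cPRF so that a delegated key only unmasks authorized ranges; none of that is modeled here.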
Title: "MedHST: Secure spatiotemporal EHR analytics with fine-grained access control for IoMT" (Journal of Systems Architecture, Vol. 170, Article 103606)
Pub Date: 2025-10-31 | DOI: 10.1016/j.sysarc.2025.103617
Yiren Chen, Xiaobo Yang, Fangming Dong, Bo Jiang, Zhigang Lu, Baoxu Liu
With the advancement of password-cracking technologies, database security is encountering critical challenges. Honeywords, decoy passwords stored alongside the real password, serve as a key mechanism to detect unauthorized access from password leaks. However, most existing honeyword generation techniques (HGTs) rely on static strategies or single-model generators, resulting in insufficient robustness across threat scenarios. To alleviate this issue, we propose MoPHoney, an adaptive HGT based on mixture-of-prompts (MoP) powered by large language models (LLMs). MoPHoney initially employs a LightGBM-based router to predict a soft probability distribution over password types, which determines the weights of the corresponding prompts for the LLM. Then, adaptive styles of honeywords are generated through diverse prompt-guided pipelines, each enhanced via retrieval-augmented generation (RAG) to improve contextual realism. Next, the output is filtered by an LLM-based adversary that discards failed honeywords. Finally, honeyword files are stored using a new strategy to further enhance the complexity of password-guessing. We evaluate MoPHoney against four representative honeyword threat techniques on three real-world datasets and a PII-based password dataset. Compared with baseline HGTs, MoPHoney achieves superior flatness (average ε1-flatness below 0.078 at k = 20), success-number, and resistance to DoS attack (average FPP below 0.004). Even when k varies from 5 to 50, MoPHoney maintains stable flatness and keeps false alarms under 0.5%, demonstrating robust scalability across different honeyword counts. These results not only highlight the effectiveness of input-adaptive prompts, in-context passwords, and adversarial strategies in HGTs but also show the feasibility of LLMs for generating decoys for cyber threat hunting.
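Flatness measures how close an attacker's chance of picking the real password among the k sweetwords (the real password plus its honeywords) is to the ideal 1/k. A small empirical sketch of that measurement, with toy data and a toy attacker rather than the paper's datasets and evaluation:

```python
import random

def flatness(accounts, attacker_rank):
    """Empirical flatness: how often the attacker's top-ranked sweetword
    is the real password.  An ideal generator drives this toward 1/k."""
    hits = 0
    for real, honeywords in accounts:
        sweetwords = honeywords + [real]
        hits += min(sweetwords, key=attacker_rank) == real
    return hits / len(accounts)

# Toy data: real passwords and honeywords drawn from the same pool, so
# no ranking strategy can beat random guessing among the k sweetwords.
random.seed(7)
vocab = [f"pwd{n:04d}" for n in range(5000)]
accounts = []
for _ in range(2000):
    sw = random.sample(vocab, 20)      # k = 20 sweetwords per account
    accounts.append((sw[0], sw[1:]))   # sw[0] plays the real password

rate = flatness(accounts, attacker_rank=lambda w: w)  # lexicographic attacker
assert abs(rate - 1 / 20) < 0.02       # close to the ideal 1/k = 0.05
```

Real evaluations rank sweetwords with trained guessing models against real leaked-password distributions; flatness well above 1/k means honeywords are distinguishable from real passwords.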
Title: "MoPHoney: An adaptive honeyword generation system based on Mixture-of-prompts" (Journal of Systems Architecture, Vol. 170, Article 103617)
Pub Date: 2025-10-30 | DOI: 10.1016/j.sysarc.2025.103619
Sudeep Ghosh , SK Hafizul Islam , Athanasios V. Vasilakos
Cloud computing is widely used in modern healthcare to manage Electronic Medical Records (EMRs), allowing organizations to store, access, and share patient data efficiently. Storing encrypted EMRs presents challenges for secure search and access control when using untrusted third-party cloud environments. Public Key Encryption with Keyword Search (PEKS) enables searching over encrypted data but suffers from keyword guessing attacks, inefficient multi-user search, and the requirement for secure communication channels. This paper proposes a secure Identity-Based Medical Data Sharing framework (BCT-IMDS) that leverages a hybrid cloud-assisted blockchain system comprising private and consortium blockchains. In BCT-IMDS, each hospital maintains a private blockchain, where each department operates a computer that acts as a node in the private blockchain network. Multiple hospitals establish a consortium blockchain network using their respective cloud servers. BCT-IMDS eliminates the need for pre-selecting data consumers, supports secure multi-user search, and ensures ciphertext and trapdoor indistinguishability. We formally analyze the security of the BCT-IMDS scheme, verify it using the Scyther tool, and evaluate the performance of BCT-IMDS at different security levels (80, 112, 128, 192, and 256 bits). The analysis demonstrates that BCT-IMDS is highly secure with practical computational, communication, and storage efficiency, and outperforms state-of-the-art PEKS-based medical data-sharing schemes.
{"title":"Blockchain-assisted provably secure identity-based public key encryption with keyword search scheme for medical data sharing","authors":"Sudeep Ghosh , SK Hafizul Islam , Athanasios V. Vasilakos","doi":"10.1016/j.sysarc.2025.103619","DOIUrl":"10.1016/j.sysarc.2025.103619","url":null,"abstract":"<div><div>Cloud computing is widely used in modern healthcare to manage Electronic Medical Records (EMRs), allowing organizations to store, access, and share patient data efficiently. Storing encrypted EMRs presents challenges for secure search and access control when using untrusted third-party cloud environments. Public Key Encryption with Keyword Search (PEKS) enables searching over encrypted data but suffers from keyword guessing attacks, inefficient multi-user search, and the requirement for secure communication channels. This paper proposes a secure Identity-Based Medical Data Sharing framework (BCT-IMDS) that leverages a hybrid cloud-assisted blockchain system comprising private and consortium blockchains. In BCT-IMDS, each hospital maintains a private blockchain, where each department operates a computer that acts as a node in the private blockchain network. Multiple hospitals establish a consortium blockchain network using their respective cloud servers. BCT-IMDS eliminates the need for pre-selecting data consumers, supports secure multi-user search, and ensures ciphertext and trapdoor indistinguishability. We formally analyze the security of the BCT-IMDS scheme, verify it using the Scyther tool, and evaluate the performance of BCT-IMDS at different security levels (80, 112, 128, 192, and 256 bits). 
The analysis demonstrates that BCT-IMDS is highly secure with practical computational, communication, and storage efficiency, and outperforms state-of-the-art PEKS-based medical data-sharing schemes.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"170 ","pages":"Article 103619"},"PeriodicalIF":4.1,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145468568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
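BCT-IMDS builds on identity-based PEKS; as a much-simplified illustration of the keyword-search idea only (a symmetric, single-writer HMAC-tag index with none of the scheme's public-key, multi-user, or keyword-guessing-attack protections; all names here are hypothetical), a server can match encrypted entries against a trapdoor without learning the keyword:

```python
import hashlib
import hmac

def tag(key: bytes, keyword: str) -> str:
    # Deterministic keyword tag; the server only ever sees tags.
    return hmac.new(key, keyword.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key, records):
    """Data owner maps each record id to the tags of its keywords."""
    return {rid: [tag(key, kw) for kw in kws] for rid, kws in records.items()}

def search(index, trapdoor):
    """Server-side match: ids whose tag list contains the trapdoor."""
    return [rid for rid, tags in index.items() if trapdoor in tags]
```

Note that deterministic tags leak keyword equality across records and offer no resistance to keyword guessing, which is precisely the class of weakness the paper's construction is designed to avoid.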
Pub Date : 2025-10-30DOI: 10.1016/j.sysarc.2025.103618
Josna Philomina , Rekha K. James , Palash Das , Shirshendu Das , Daleesha M. Viswanathan
Spin Transfer Torque Magnetic Random Access Memory (STT-RAM) has emerged as a promising alternative to conventional on-chip memory due to its high density, non-volatility, scalability, and CMOS compatibility. Beyond its use in designing last-level caches (LLCs), recent efforts have explored replacing traditional SRAM buffers inside Network-on-Chip (NoC) routers with STT-RAM. However, STT-RAM suffers from expensive write operations in terms of both latency and endurance. To prolong the lifetime of STT-RAM buffers, it is essential to minimize write variation by evenly distributing write operations across the memory cells. The existing virtual channel (VC) allocation policies of NoC attempt to address this by spreading writes uniformly across buffer entries. In this paper, we propose a novel hardware Trojan (HT) attack that targets the VC allocation mechanism in NoC routers. The HT maliciously alters the VC allocation to increase the write intensity on specific STT-RAM locations, thereby accelerating their wear-out and reducing the overall buffer lifespan. We analyze the impact of this attack on different VC allocation strategies and evaluate its effects using the gem5 simulator. Our results show that the proposed HT significantly increases the write variation in STT-RAM buffers, leading to a marked degradation in their endurance.
{"title":"Exploiting virtual channel allocation policies in STT-RAM buffers of NoC routers through hardware Trojan","authors":"Josna Philomina , Rekha K. James , Palash Das , Shirshendu Das , Daleesha M. Viswanathan","doi":"10.1016/j.sysarc.2025.103618","DOIUrl":"10.1016/j.sysarc.2025.103618","url":null,"abstract":"<div><div>Spin Transfer Torque Magnetic Random Access Memory (STT-RAM) has emerged as a promising alternative to conventional on-chip memory due to its high density, non-volatility, scalability, and CMOS compatibility. Beyond its use in designing last-level caches (LLCs), recent efforts have explored replacing traditional SRAM buffers inside Network-on-Chip (NoC) routers with STT-RAM. However, STT-RAM suffers from expensive write operations in terms of both latency and endurance. To prolong the lifetime of STT-RAM buffers, it is essential to minimize write variation by evenly distributing write operations across the memory cells. The existing virtual channel (VC) allocation policies of NoC attempt to address this by spreading writes uniformly across buffer entries. In this paper, we propose a novel hardware Trojan (HT) attack that targets the VC allocation mechanism in NoC routers. The HT maliciously alters the VC allocation to increase the write intensity on specific STT-RAM locations, thereby accelerating their wear-out and reducing the overall buffer lifespan. We analyze the impact of this attack on different VC allocation strategies and evaluate its effects using the gem5 simulator. 
Our results show that the proposed HT significantly increases the write variation in STT-RAM buffers, leading to a marked degradation in their endurance.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"170 ","pages":"Article 103618"},"PeriodicalIF":4.1,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145468571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
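The attack above degrades endurance by skewing writes toward particular buffer slots. A toy simulation (not the paper's gem5 setup; the `fair` and `trojan` policies are hypothetical stand-ins for a wear-leveling allocator and an HT-biased one) shows how a biased VC choice inflates write variation:

```python
import statistics

def simulate(n_vcs, n_flits, choose_vc):
    """Count buffer writes per virtual channel under an allocation policy."""
    writes = [0] * n_vcs
    for t in range(n_flits):
        writes[choose_vc(t, n_vcs)] += 1
    return writes

def write_variation(writes):
    # Normalized spread of per-VC write counts; 0.0 means perfectly even wear.
    mean = statistics.mean(writes)
    return (max(writes) - min(writes)) / mean if mean else 0.0

fair = lambda t, n: t % n                     # round-robin-style leveling
trojan = lambda t, n: 0 if t % 3 else t % n   # HT bias: most writes hit VC 0
```

Under `fair`, 1000 flits over 4 VCs wear each slot equally (variation 0.0); under `trojan`, VC 0 absorbs the bulk of the writes and wears out first, mirroring the lifetime degradation the paper measures.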
Pub Date : 2025-10-30DOI: 10.1016/j.sysarc.2025.103599
Zhixin Zeng , Zuxin Yu , Yiming Chen , Long Li , Yining Liu , Huadong Liu
The real-time collection of electricity consumption data enables smart grids to optimize supply–demand balance and detect electricity theft. However, the utilization of such consumption data poses significant privacy risks to consumers. Privacy-preserving data aggregation (PPDA) techniques offer a means to safeguard the privacy of electricity consumers/users. Yet a significant limitation of current implementations is that the aggregated data typically only enables the calculation of total consumption across all consumers within a residential area. In practical applications, aggregated data fails to meet diverse query requirements. Therefore, a multi-functional privacy-preserving data aggregation scheme is proposed to enhance the utility of data without compromising privacy. First, a blind-factor-enhanced PPDA algorithm based on inner product functional encryption (IPFE) is introduced to safeguard the privacy of individual data. The proposed solution allows the control center and electricity consumers to perform some function-specific queries on encrypted data. Second, a dynamic pseudonym-based authentication protocol is designed to resist identity inference attacks. Security analysis indicates that the proposed scheme fulfills security and privacy requirements. Extensive experimental results reveal that the proposed scheme can not only support multi-functional queries in real scenarios but also outperform other comparable schemes in terms of computation cost, communication overhead, and storage overhead.
{"title":"A multi-functional and privacy-preserving data aggregation scheme for smart grid","authors":"Zhixin Zeng , Zuxin Yu , Yiming Chen , Long Li , Yining Liu , Huadong Liu","doi":"10.1016/j.sysarc.2025.103599","DOIUrl":"10.1016/j.sysarc.2025.103599","url":null,"abstract":"<div><div>The real-time collection of electricity consumption data enables smart grids to optimize supply–demand balance and detect electricity theft. However, the utilization of such consumption data poses significant privacy risks to consumers. Privacy-preserving data aggregation (PPDA) techniques offer a means to safeguard the privacy of electricity consumers/users. Yet a significant limitation of current implementations is that the aggregated data typically only enables the calculation of total consumption across all consumers within a residential area. In practical applications, aggregated data fails to meet diverse query requirements. Therefore, a multi-functional privacy-preserving data aggregation scheme is proposed to enhance the utility of data without compromising privacy. First, a blind-factor-enhanced PPDA algorithm based on inner product functional encryption (IPFE) is introduced to safeguard the privacy of individual data. The proposed solution allows the control center and electricity consumers to perform some function-specific queries on encrypted data. Second, a dynamic pseudonym-based authentication protocol is designed to resist identity inference attacks. Security analysis indicates that the proposed scheme fulfills security and privacy requirements. 
Extensive experimental results reveal that the proposed scheme can not only support multi-functional queries in real scenarios but also outperform other comparable schemes in terms of computation cost, communication overhead, and storage overhead.</div></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"170 ","pages":"Article 103599"},"PeriodicalIF":4.1,"publicationDate":"2025-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145419557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
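The paper's PPDA algorithm is built on IPFE with blinding factors; as a minimal sketch of the underlying blinding idea only (plain additive masking with shares that cancel, not the authors' IPFE construction; `MOD` and the function names are illustrative), each meter can hide its reading while the aggregate survives:

```python
import random

MOD = 2**61 - 1  # illustrative modulus, large enough for any realistic total

def mask_readings(readings):
    """Each meter adds a random blinding share; the shares are chosen to
    sum to 0 mod MOD, so individual readings stay hidden while the sum
    of all masked values still equals the true total."""
    shares = [random.randrange(MOD) for _ in readings[:-1]]
    shares.append((-sum(shares)) % MOD)
    return [(r + s) % MOD for r, s in zip(readings, shares)]

def aggregate(masked):
    # Aggregator sums the masked values; blinding shares cancel out.
    return sum(masked) % MOD
```

This recovers only the area-wide total, which is exactly the single-query limitation the abstract criticizes; the paper's IPFE-based design generalizes it to inner-product (weighted) queries.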