Unishyper: A Rust-based unikernel enhancing reliability and efficiency of embedded systems
Pub Date: 2024-06-12 | DOI: 10.1016/j.sysarc.2024.103199
Keyang Hu, Wang Huang, Lei Wang, Ce Mo, Runxiang Wang, Yu Chen, Ju Ren, Bo Jiang
Unikernels are simple, customizable, efficient, and small in code size, which makes them highly applicable to embedded scenarios. However, most existing unikernels are developed and optimized for cloud computing and do not fully meet the embedded requirements of high reliability and platform customization. We propose Unishyper, a reliable and high-performance embedded unikernel written in Rust. To provide memory isolation between user applications, user code, and kernel code, Unishyper introduces a Zone mechanism built on top of Intel MPK (Memory Protection Keys). Unishyper further proposes a thread-level unwind strategy for safe fault handling that avoids memory leaks. Finally, Unishyper supports fine-grained customization, integrates seamlessly with the Rust ecosystem, and uses Unilib for function offloading to further reduce image size. Our evaluation shows that Unishyper outperforms peer unikernels on major micro-benchmarks, effectively blocks illegal memory accesses across application boundaries, and has a minimal memory footprint of less than 100 KB.
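As a rough, conceptual illustration of the Zone idea described above, the Python sketch below models per-thread protection keys that gate access to tagged memory regions. The names (Zone, Thread, ZoneFault) are hypothetical; real MPK uses page-granularity protection keys and the WRPKRU instruction, and Unishyper's actual Rust API may differ.

```python
# Toy model of MPK-style zones: each thread holds a set of zone keys, and a
# memory region tagged with a zone can only be touched by threads holding its
# key. Conceptual illustration only, not Unishyper's Rust/MPK implementation.

class ZoneFault(Exception):
    """Raised when a thread accesses a zone it holds no key for."""

class Zone:
    def __init__(self, name: str, key: int):
        self.name, self.key, self.data = name, key, bytearray(64)

class Thread:
    def __init__(self, name: str, keys: set[int]):
        self.name, self.keys = name, keys

    def write(self, zone: Zone, offset: int, value: int) -> None:
        if zone.key not in self.keys:          # permission check before the access
            raise ZoneFault(f"{self.name} cannot access zone {zone.name}")
        zone.data[offset] = value

kernel_zone = Zone("kernel", key=0)
app_zone = Zone("app_a", key=1)

app_thread = Thread("app_a_thread", keys={1})   # holds only its own zone key
app_thread.write(app_zone, 0, 42)               # allowed
try:
    app_thread.write(kernel_zone, 0, 42)        # blocked, analogous to an MPK fault
except ZoneFault as e:
    print(e)
```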
{"title":"Unishyper: A Rust-based unikernel enhancing reliability and efficiency of embedded systems","authors":"Keyang Hu , Wang Huang , Lei Wang , Ce Mo , Runxiang Wang , Yu Chen , Ju Ren , Bo Jiang","doi":"10.1016/j.sysarc.2024.103199","DOIUrl":"10.1016/j.sysarc.2024.103199","url":null,"abstract":"<div><p>Unikernels are simple, customizable, efficient, and small in code size, which makes them highly applicable to embedded scenarios. However, most existing unikernels are developed and optimized for cloud computing, and they do not fully meet the requirements of high reliability and platform customization in embedded environments. We propose Unishyper, a reliable and high-performance embedded unikernel in Rust. To support memory isolation between user applications, user code, and kernel code, Unishyper designs the Zone mechanism on top of Intel MPK. Unishyper further proposes a thread-level unwind strategy for safe fault handling while avoiding memory leakage. Finally, Unishyper supports fine-grained customization, seamlessly integrates with the Rust ecosystem, and uses Unilib for function offloading to further reduce image size. Our evaluation results show that Unishyper achieves better performance than peer unikernels on major micro-benchmarks, can effectively stop illegal memory accesses across application boundaries, and has a minimal memory footprint of less than 100 KB.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103199"},"PeriodicalIF":3.7,"publicationDate":"2024-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141406671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

LACT: Liveness-Aware Checkpointing to reduce checkpoint overheads in intermittent systems
Pub Date: 2024-06-11 | DOI: 10.1016/j.sysarc.2024.103213
Youngbin Kim, Yoojin Lim, Chaedeok Lim
Intermittent computing supports the execution of systems that experience frequent power failures, such as battery-less devices powered by energy harvesting. In such systems, checkpoint and recovery is a commonly adopted technique in which volatile system state is regularly saved to Non-Volatile Memory (NVM) to preserve computing progress across power cycles. Since checkpointing involves a large number of NVM accesses, which are expensive in terms of both latency and energy, reducing its overhead has been a significant research challenge. In this paper, we present LACT (Liveness-Aware CheckpoinTing), a compiler optimization technique to minimize checkpoint overhead in intermittent systems. When a checkpoint is taken, there generally exist dead values, i.e., values that will never be read again or will be overwritten before their next use. LACT extracts such liveness information, especially for arrays, through compile-time analysis and excludes the dead values from the checkpoint to reduce the amount of checkpoint data, a previously unexplored optimization opportunity. Our evaluation shows that LACT reduces the required checkpoint data by 46.4% without any runtime support, leading to an average reduction of 31.5% in execution time and 5.2% in power consumption. Our experiments in a real energy-harvesting environment demonstrate that this improvement translates into a 31.6% improvement in end-to-end execution time.
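A minimal sketch of the liveness-aware idea, assuming the set of live variables at the checkpoint is already known (in LACT it comes from compile-time analysis; here it is hard-coded, and the variable names are hypothetical):

```python
import pickle

# Only values reported live at this checkpoint are saved; dead values are skipped.
def checkpoint(state: dict, live_vars: set, nvm_path: str) -> int:
    live_state = {k: v for k, v in state.items() if k in live_vars}
    blob = pickle.dumps(live_state)
    with open(nvm_path, "wb") as nvm:   # stands in for an NVM write
        nvm.write(blob)
    return len(blob)                    # bytes actually written

def restore(nvm_path: str) -> dict:
    with open(nvm_path, "rb") as nvm:
        return pickle.loads(nvm.read())

state = {
    "sensor_buf": list(range(1000)),    # dead: fully overwritten after the checkpoint
    "partial_sum": 12345,               # live: still needed after the power cycle
    "loop_index": 421,                  # live
}
written = checkpoint(state, live_vars={"partial_sum", "loop_index"}, nvm_path="ckpt.bin")
print(f"checkpointed {written} bytes instead of {len(pickle.dumps(state))}")
print(restore("ckpt.bin"))
```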
{"title":"LACT: Liveness-Aware Checkpointing to reduce checkpoint overheads in intermittent systems","authors":"Youngbin Kim, Yoojin Lim, Chaedeok Lim","doi":"10.1016/j.sysarc.2024.103213","DOIUrl":"10.1016/j.sysarc.2024.103213","url":null,"abstract":"<div><p>Intermittent computing supports execution of the systems experiencing frequent power failures, such as battery-less devices powered by energy-harvesting. In such systems, checkpoint and recovery is a commonly adopted technique, where volatile system states are regularly saved to Non-Volatile Memory (NVM), to preserve computing progress between power cycles. Since checkpoint involves a large number of NVM accesses, which is expensive in terms of both latency and energy, reducing its overhead has been a significant research challenge. In this paper, we present LACT (Liveness-Aware CheckpoinTing), a compiler optimization technique to minimize checkpoint overhead in intermittent systems. At the time of checkpoint execution, there exist <em>dead</em> values in general, which will not be used or overwritten in the future. LACT examines such liveness information, especially in arrays, based on compile-time analysis and excludes the dead values from the checkpoint to reduce required checkpoint data, which is a previously unexplored optimization opportunity. Our evaluation shows that LACT can reduce 46.4% of required checkpoint data without any runtime support, leading to reduction of 31.5% in execution time and a 5.2% decrease in power consumption on average. Our experiments in real energy-harvesting environment demonstrates that such improvement translates to a 31.6% improvement in end-to-end execution time.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103213"},"PeriodicalIF":4.5,"publicationDate":"2024-06-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141409052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

GNSS spoofing detection for UAVs using Doppler frequency and Carrier-to-Noise Density Ratio
Pub Date: 2024-06-08 | DOI: 10.1016/j.sysarc.2024.103212
Xiaomin Wei, Cong Sun, Xinghua Li, Jianfeng Ma
Unmanned Aerial Vehicles (UAVs) are typical real-time embedded systems that require precise location information to complete flight missions. The Global Navigation Satellite System (GNSS) plays a crucial role in UAV navigation and positioning. However, GNSS spoofing attacks pose an increasing threat to GNSS-dependent UAVs. Existing spoofing detection methods primarily rely on simulated data, perception data from multiple UAVs, or various control parameters. This paper proposes SigFeaDet, a signal-feature-based GNSS spoofing detection approach for UAVs that utilizes machine learning techniques. The core idea is to identify anomalies in signal features arising from differences between authentic and spoofing signals. Key signal features that are crucial for GNSS positioning, including the Carrier-to-Noise Density Ratio (CN0) and the Doppler frequency, are employed to discern spoofing signals. Various machine learning algorithms are trained on GNSS signal data to determine the most effective classifier. The TEXBAT GNSS dataset is processed to extract spoofing signal data, and flight experiments are conducted to gather GNSS data that augments the authentic GNSS signal dataset. The detection accuracy exceeds 95%, and the Equal Error Rate (EER) is approximately 5%. To show the robustness of SigFeaDet, we evaluate various impact factors, including differences in velocities, altitudes, and experimental locations (10 kilometers apart); the accuracy consistently surpasses 99%.
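A small, self-contained sketch of the kind of classifier SigFeaDet describes, using scikit-learn on synthetic CN0/Doppler features. The feature distributions below are invented for illustration and are not the TEXBAT or flight-experiment data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000

# Synthetic stand-in for per-satellite features: [CN0 (dB-Hz), Doppler residual (Hz)].
authentic = np.column_stack([rng.normal(45, 2, n), rng.normal(0, 30, n)])
spoofed   = np.column_stack([rng.normal(49, 2, n), rng.normal(120, 30, n)])  # assumed anomaly pattern

X = np.vstack([authentic, spoofed])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = authentic, 1 = spoofed

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```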
{"title":"GNSS spoofing detection for UAVs using Doppler frequency and Carrier-to-Noise Density Ratio","authors":"Xiaomin Wei, Cong Sun, Xinghua Li, Jianfeng Ma","doi":"10.1016/j.sysarc.2024.103212","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103212","url":null,"abstract":"<div><p>Unmanned Aerial Vehicles (UAVs) are typical real-time embedded systems, which require precise locations for completing flight missions. The Global Navigation Satellite System (GNSS) plays a crucial role in navigation and positioning for UAVs. However, GNSS spoofing attacks pose an increasing threat to GNSS-dependent UAVs. Existing spoofing detection methods primarily rely on simulated data, perception data from multiple UAVs, or various control parameters. This paper proposes SigFeaDet, a signal feature-based GNSS spoofing detection approach for UAVs utilizing machine learning techniques. The core concept revolves around identifying anomalies in signal features arising from differences between authentic and spoofing signals. Key signal features, including Carrier-to-Noise Density Ratio (CN0) and Doppler frequency crucial for GNSS positioning, are employed to discern spoofing signals. Various machine learning algorithms are leveraged to train on GNSS signal data, determining the most effective classifier. TEXBAT GNSS dataset is processed to extract spoofing signal data, and flight experiments are conducted to gather GNSS data, augmenting the authentic GNSS signal dataset. The detection accuracy exceeds 95%. Equal Error Rate (EER) is approximately 5%. We evaluate various impact factors on SigFeaDet to show its robustness, including differences in velocities, altitudes, and experimental locations (10 kilometers apart), and the accuracy consistently surpasses 99%.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103212"},"PeriodicalIF":4.5,"publicationDate":"2024-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141325943","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Partitioned scheduling with safety-performance trade-offs in stochastic conditional DAG models
Pub Date: 2024-06-07 | DOI: 10.1016/j.sysarc.2024.103189
Xuanliang Deng, Ashrarul H. Sifat, Shao-Yu Huang, Sen Wang, Jia-Bin Huang, Changhee Jung, Ryan Williams, Haibo Zeng
This paper is motivated by robotic systems that solve difficult real-world problems such as search and rescue (SAR) or precision agriculture. These applications require robots to operate in complex, uncertain environments while maintaining safe interactions with human teammates within a specified level of performance. In this paper, we study the scheduling of real-time applications on heterogeneous hardware platforms inspired by such contexts. To capture the stochasticity due to unpredictable environments, we propose the stochastic heterogeneous parallel conditional DAG (SHPC-DAG) model, which extends the recent HPC-DAG model in two regards. First, it uses conditional DAG nodes to model the execution of computational pipelines based on context, while the stochasticity of DAG edges captures the uncertain nature of a system's environment or the reliability of its hardware. Second, considering the pessimism of deterministic worst-case execution times (WCETs), it uses probability distributions to model the execution times of subtasks (DAG nodes). We propose a new partitioning algorithm, Least Latency Partitioned (LLP), which considers precedence constraints among nodes during the allocation process. Coupled with a scheduling algorithm that accounts for varying subtask criticality and constraints, the end-to-end latencies of safety-critical paths/nodes are minimized. We use task sets inspired by real robotic workloads to demonstrate that our framework enables efficient scheduling of complex computational pipelines, a more flexible representation of timing constraints, and, ultimately, safety-performance trade-offs.
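The sketch below shows a simplified, deterministic precedence-aware greedy partitioning in the spirit of LLP: nodes are placed, in topological order, on the core that minimizes their estimated finish time. Expected execution times stand in for the probability distributions of SHPC-DAG, and the task graph and timings are invented, so this is not the paper's exact algorithm:

```python
from graphlib import TopologicalSorter

dag = {"sense": [], "detect": ["sense"], "plan": ["detect"], "log": ["sense"]}
exec_time = {  # (node, core) -> expected execution time on that core type
    ("sense", 0): 2, ("sense", 1): 3,
    ("detect", 0): 8, ("detect", 1): 4,   # e.g. core 1 is an accelerator
    ("plan", 0): 3, ("plan", 1): 5,
    ("log", 0): 1, ("log", 1): 2,
}
cores = [0, 1]

core_free = {c: 0 for c in cores}   # time at which each core becomes free
finish = {}                          # node -> finish time
placement = {}

for node in TopologicalSorter(dag).static_order():
    ready = max((finish[p] for p in dag[node]), default=0)     # precedence constraint
    best = min(cores, key=lambda c: max(core_free[c], ready) + exec_time[(node, c)])
    start = max(core_free[best], ready)
    finish[node] = start + exec_time[(node, best)]
    core_free[best] = finish[node]
    placement[node] = best

print(placement, "end-to-end latency:", max(finish.values()))
```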
{"title":"Partitioned scheduling with safety-performance trade-offs in stochastic conditional DAG models","authors":"Xuanliang Deng , Ashrarul H. Sifat , Shao-Yu Huang , Sen Wang , Jia-Bin Huang , Changhee Jung , Ryan Williams , Haibo Zeng","doi":"10.1016/j.sysarc.2024.103189","DOIUrl":"10.1016/j.sysarc.2024.103189","url":null,"abstract":"<div><p>This paper is motivated by robotic systems that solve difficult real-world problems such as search and rescue (SAR) or precision agriculture <span><sup>1</sup></span>. These applications require robots to operate in complex, uncertain environments while maintaining safe interactions with human teammates within a specified level of performance. In this paper, we study the scheduling of real-time applications on heterogeneous hardware platforms inspired by such contexts. To capture the <em>stochasticity</em> due to unpredictable environments, we propose the stochastic heterogeneous parallel conditional DAG (SHPC-DAG) model, which extends the most recent HPC-DAG model in two regards. First, it uses conditional DAG nodes to model the execution of computational pipelines based on <em>context</em>, while the stochasticity of DAG edges captures the uncertain nature of a system’s environment or the reliability of its hardware. Second, considering the pessimism of deterministic worst-case execution time (WCET), it uses <em>probability distributions</em> to model the execution times of subtasks (DAG nodes). We propose a new partitioning algorithm <em>Least Latency Partitioned (LLP)</em>, which considers precedence constraints among nodes during the allocation process. Coupled with a scheduling algorithm that accounts for varying subtask criticality and constraints, the end-to-end latencies of safety-critical paths/nodes are then minimized. We use tasksets inspired by real robotics to demonstrate that our framework allows for efficient scheduling in complex computational pipelines, with more flexible representation of timing constraints, and ultimately, safety-performance tradeoffs.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103189"},"PeriodicalIF":4.5,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141395697","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

2F-MASK-VSS: Two-factor mutual authentication and session key agreement scheme for video surveillance system
Pub Date: 2024-06-07 | DOI: 10.1016/j.sysarc.2024.103196
Pramod Kumar, Arup Kumar Pal, SK Hafizul Islam
Deploying Video Surveillance Systems (VSSs) in public places has become common practice for maintaining effective law and order in modern societies. Furthermore, data access control and proper management of surveillance data by valid users are desirable for the safety and security of communities. This paper aims to develop practical solutions to protect VSSs against evolving threats and challenges. We propose a Two-Factor Mutual Authentication and Session Key Agreement scheme for VSS (2F-MASK-VSS) environments that supports real-time data storage and access. In 2F-MASK-VSS, lightweight cryptographic tools, namely a hash function and symmetric-key encryption, are used to provide the desired security features. A surveillance camera captures real-time data and sends it securely to a central server for storage over a session key established among the legitimate parties. Moreover, 2F-MASK-VSS enforces access control for valid users. The security strength of 2F-MASK-VSS has been proven by formal and informal analysis. The BAN logic model and the AVISPA and Scyther tools validate the attack resilience of 2F-MASK-VSS. Furthermore, a security analysis in the random oracle model shows that 2F-MASK-VSS is provably secure. In addition, 2F-MASK-VSS has been implemented on a Raspberry Pi testbed to demonstrate its practicality.
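The toy Python below illustrates only the style of primitives 2F-MASK-VSS relies on (hash, HMAC, nonces, a two-factor long-term secret); the message flow is invented for illustration and is not the actual protocol:

```python
import hashlib, hmac, secrets

def two_factor_secret(password: str, smartcard_key: bytes) -> bytes:
    # Both factors (something known + something held) feed the long-term secret.
    return hashlib.sha256(password.encode() + smartcard_key).digest()

def derive_session_key(shared: bytes, nonce_a: bytes, nonce_b: bytes) -> bytes:
    return hmac.new(shared, b"session" + nonce_a + nonce_b, hashlib.sha256).digest()

# Registration (out of band): both sides end up with the same two-factor secret.
card_key = secrets.token_bytes(16)
camera_secret = two_factor_secret("operator-pass", card_key)
server_secret = two_factor_secret("operator-pass", card_key)

# Mutual challenge-response: each side proves knowledge of the shared secret.
na, nb = secrets.token_bytes(16), secrets.token_bytes(16)
proof_camera = hmac.new(camera_secret, na + nb, hashlib.sha256).digest()
assert hmac.compare_digest(proof_camera,
                           hmac.new(server_secret, na + nb, hashlib.sha256).digest())

# Both ends can now derive the same session key for encrypting surveillance data.
assert derive_session_key(camera_secret, na, nb) == derive_session_key(server_secret, na, nb)
print("mutual authentication and session key agreement succeeded (toy model)")
```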
{"title":"2F-MASK-VSS: Two-factor mutual authentication and session key agreement scheme for video surveillance system","authors":"Pramod Kumar , Arup Kumar Pal , SK Hafizul Islam","doi":"10.1016/j.sysarc.2024.103196","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103196","url":null,"abstract":"<div><p>The trend for deploying Video Surveillance Systems (VSSs) in public places has become common practice to maintain effective law and order in modern civilization. Further, data access control and the proper management of surveillance data with valid users are desirable for the safety and security of the communities. This paper aims to develop practical solutions to protect VSSs against evolving threats and challenges. This paper proposes a Two-Factor Mutual Authentication and Session Key Agreement usable in VSS (2F-MASK-VSS) environments for real-time data storage and access. In 2F-MASK-VSS, lightweight cryptographic tools, viz. hash function and symmetric key encryption, are used to maintain the desirable security features. In 2F-MASK-VSS, a surveillance camera captures real-time data and sends them securely to a central server for storage through the established session key agreement among valid concerns. Moreover, 2F-MASK-VSS can protect access control among valid users. The security strength of 2F-MASK-VSS has been proven by formal and informal analysis. The BAN logic model, AVISPA and Scyther tools validate the attack-resilience of 2F-MASK-VSS. Furthermore, the security analysis in the random oracle model shows that 2F-MASK-VSS is provably secure. In addition, 2F-MASK-VSS has been implemented using the Raspberry PI testbed to demonstrate its practical implementation.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103196"},"PeriodicalIF":4.5,"publicationDate":"2024-06-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141325951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Transformers in source code generation: A comprehensive survey
Pub Date: 2024-06-06 | DOI: 10.1016/j.sysarc.2024.103193
Hadi Ghaemi, Zakieh Alizadehsani, Amin Shahraki, Juan M. Corchado
Transformers have revolutionized natural language processing (NLP) and have had a huge impact on task automation. Recently, transformers have led to the development of powerful large language models (LLMs), which have advanced automatic code generation. This study provides a review of code generation concepts and of transformer applications in this field. First, the fundamental concepts of the attention mechanism embedded in transformers are explored. Then, the predominant automated code generation approaches are briefly reviewed, including non-learning code generation (e.g., rule-based), shallow learning (e.g., heuristic rules, grammar-based), and deep learning models. Afterward, the survey reviews pre-training and fine-tuning techniques for code generation, focusing on efficient transformer methods such as parameter-efficient tuning, instruction tuning, and prompt tuning. Additionally, the work briefly outlines resources for code generation (e.g., datasets, benchmarks, packages) and the evaluation metrics used in code generation processes. Finally, challenges and potential research directions (e.g., multimodal learning) are investigated in depth.
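For reference, the scaled dot-product attention at the core of the transformers discussed in the survey can be written in a few lines of NumPy (the shapes and data below are arbitrary):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Computes softmax(Q K^T / sqrt(d_k)) V, the core operation of a transformer layer."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)            # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)             # numerically stable softmax
    return weights @ V                                         # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 16                                       # e.g. 5 code tokens
Q = K = V = rng.standard_normal((seq_len, d_model))            # self-attention over one sequence
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)   # (5, 16): one contextualized vector per token
```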
{"title":"Transformers in source code generation: A comprehensive survey","authors":"Hadi Ghaemi , Zakieh Alizadehsani , Amin Shahraki , Juan M. Corchado","doi":"10.1016/j.sysarc.2024.103193","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103193","url":null,"abstract":"<div><p>Transformers have revolutionized natural language processing (NLP) and have had a huge impact on automating tasks. Recently, transformers have led to the development of powerful large language models (LLMs), which have advanced automatic code generation. This study provides a review of code generation concepts and transformer applications in this field. First, the fundamental concepts of the attention mechanism embedded into transformers are explored. Then, predominant automated code generation approaches are briefly reviewed, including non-learning code generation (e.g., rule-based), shallow learning (e.g., heuristic rules, grammar-based), and deep learning models. Afterward, this survey reviews pre-training and fine-tuning techniques for code generation, focusing on the application of efficient transformer methods such as parameter-efficient tuning, instruction tuning, and prompt tuning. Additionally, this work briefly outlines resources for code generation (e.g., datasets, benchmarks, packages) and evaluation metrics utilized in code generation processes. Finally, the challenges and potential research directions (e.g., multimodal learning) are investigated in depth.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103193"},"PeriodicalIF":4.5,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141326052","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

An anonymous and large-universe data-sharing scheme with traceability for medical cloud storage
Pub Date: 2024-06-06 | DOI: 10.1016/j.sysarc.2024.103210
Qing Wu, Guoqiang Meng, Leyou Zhang, Yue Lei
The application of medical cloud storage technology in healthcare and the sharing of electronic medical records (EMRs) bring convenience to patients and medical institutions. However, two barriers, namely the key escrow and key abuse issues, limit their further adoption. In this paper, we construct a large-universe data-sharing scheme based on attribute-based encryption. In our design, all attribute authorities participate in the key computation simultaneously and the user aggregates the resulting key components, so no single authority holds the complete key. By issuing anonymous credentials to recipients, their identity information is protected. To achieve complete tracing of traitors, we combine two mechanisms, white-box traceability and black-box traceability. Detailed security proofs are given against various types of attackers, and theoretical analyses verify the security of the proposed scheme. Performance evaluations against existing schemes show that the burden on the user side is minimal.
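As a loose, hypothetical analogy for why multi-authority key issuance with user-side aggregation mitigates key escrow, the snippet below uses plain XOR secret sharing: each authority contributes only a random share, and only the user who aggregates all shares obtains the full key. This is not attribute-based encryption and not the paper's construction:

```python
import secrets
from functools import reduce

KEY_LEN = 32

def authority_share() -> bytes:
    return secrets.token_bytes(KEY_LEN)           # each authority's independent contribution

def user_aggregate(shares: list[bytes]) -> bytes:
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), shares)

shares = [authority_share() for _ in range(3)]    # three attribute authorities
user_key = user_aggregate(shares)

# Any proper subset of authorities sees only random bytes:
partial = user_aggregate(shares[:2])
print(partial != user_key)   # True: no single authority can reconstruct the key
```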
{"title":"An anonymous and large-universe data-sharing scheme with traceability for medical cloud storage","authors":"Qing Wu , Guoqiang Meng , Leyou Zhang , Yue Lei","doi":"10.1016/j.sysarc.2024.103210","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103210","url":null,"abstract":"<div><p>The application of medical cloud storage technology in healthcare and the sharing of electronic medical records (EMR) bring convenience for patients and medical institutes. However, two barriers limit further expansions of the above (i.e. key escrow and key abuse issues). In this paper, we construct a large universe data-sharing scheme based on attribute-based encryption. We design all attribute authorities simultaneously to participate in the key computation, and the user performs aggregation. By issuing anonymous credentials to the recipients, their identity information is protected. To achieve complete tracing of the traitor, we blend the two mechanisms, white-box traceability and black-box traceability, together. Detailed security proofs have been carried out for various types of possible attackers, and theoretical analyses have verified the security of the proposed scheme. We performed performance evaluations in conjunction with existing schemes, and numerical experience shows that the burden on the user side is also minimal.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103210"},"PeriodicalIF":4.5,"publicationDate":"2024-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141326063","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

A robust defense for spiking neural networks against adversarial examples via input filtering
Pub Date: 2024-06-05 | DOI: 10.1016/j.sysarc.2024.103209
Shasha Guo, Lei Wang, Zhijie Yang, Yuliang Lu
Spiking Neural Networks (SNNs) are increasingly deployed in applications on resource-constrained embedded systems due to their low power consumption. Unfortunately, SNNs are vulnerable to adversarial examples, which threaten application security. Existing denoising filters can protect SNNs from adversarial examples. However, the reason why these filters can defend against adversarial examples remains unclear, so a trustworthy defense cannot be ensured. In this work, we aim to explain this reason and to provide a more robust filter against different adversarial examples. First, we propose two new norms, l0 and l∞, to describe the spatial and temporal features of adversarial events and thereby understand the working principles of the filters. Second, we propose to combine filters to provide a robust defense against different perturbation events. To bridge the gap between this goal and the capability of existing filters, we propose a new filter that can defend against both spatially and temporally dense perturbation events. We conduct experiments on two widely used neuromorphic datasets, NMNIST and IBM DVSGesture. Experimental results show that the combined defense can restore the accuracy to over 80% of the original SNN accuracy.
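The snippet below shows a generic spatio-temporal correlation filter for event streams, the style of input filtering discussed above: an event is kept only if a neighbouring pixel fired recently. The window sizes and event data are invented, and this is not the paper's combined filter or its l0/l∞ analysis:

```python
import numpy as np

def denoise(events, height, width, dt=5000, radius=1):
    """events: iterable of (t, x, y, polarity) with timestamps in ascending order."""
    last_seen = np.full((height, width), -np.inf)   # last event time per pixel
    kept = []
    for t, x, y, p in events:
        y0, y1 = max(0, y - radius), min(height, y + radius + 1)
        x0, x1 = max(0, x - radius), min(width, x + radius + 1)
        if np.any(t - last_seen[y0:y1, x0:x1] <= dt):   # recent activity nearby?
            kept.append((t, x, y, p))
        last_seen[y, x] = t
    return kept

events = [(0, 10, 10, 1),    # first event: no prior support yet, treated as noise
          (100, 11, 10, 1),  # supported by the earlier event at (10, 10) -> kept
          (150, 60, 3, 1),   # isolated event (noise or injected perturbation) -> dropped
          (200, 10, 11, 1)]  # supported -> kept
print(denoise(events, height=128, width=128))
```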
{"title":"A robust defense for spiking neural networks against adversarial examples via input filtering","authors":"Shasha Guo , Lei Wang , Zhijie Yang , Yuliang Lu","doi":"10.1016/j.sysarc.2024.103209","DOIUrl":"https://doi.org/10.1016/j.sysarc.2024.103209","url":null,"abstract":"<div><p>Spiking Neural Networks (SNNs) are increasingly deployed in applications on resource constraint embedding systems due to their low power. Unfortunately, SNNs are vulnerable to adversarial examples which threaten the application security. Existing denoising filters can protect SNNs from adversarial examples. However, the reason why filters can defend against adversarial examples remains unclear and thus it cannot ensure a trusty defense. In this work, we aim to explain the reason and provide a more robust filter against different adversarial examples. First, we propose two new norms <span><math><msub><mrow><mi>l</mi></mrow><mrow><mn>0</mn></mrow></msub></math></span> and <span><math><msub><mrow><mi>l</mi></mrow><mrow><mi>∞</mi></mrow></msub></math></span> to describe the spatial and temporal features of adversarial events for understanding the working principles of filters. Second, we propose to combine filters to provide a robust defense against different perturbation events. To make up the gap between the goal and the ability of existing filters, we propose a new filter that can defend against both spatially and temporally dense perturbation events. We conduct the experiments on two widely used neuromorphic datasets, NMNIST and IBM DVSGesture. Experimental results show that the combined defense can restore the accuracy to over 80% of the original SNN accuracy.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103209"},"PeriodicalIF":4.5,"publicationDate":"2024-06-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141313645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Gradient descent algorithm for the optimization of fixed priorities in real-time systems
Pub Date: 2024-06-04 | DOI: 10.1016/j.sysarc.2024.103198
Juan M. Rivas, J. Javier Gutiérrez, Ana Guasque, Patricia Balbastre
This paper considers the offline assignment of fixed priorities in partitioned preemptive real-time systems where tasks have precedence constraints. This problem is crucial in this type of system, since a good fixed-priority assignment allows efficient use of the processing resources while meeting all deadlines. The literature offers several proposals to solve this problem, with varying trade-offs between the quality of their results and their computational complexity. In this paper, we propose a new approach that leverages algorithms widely exploited in the field of machine learning: gradient descent, the Adam optimizer, and gradient noise. We show how to adapt these algorithms to the fixed-priority assignment problem in conjunction with existing worst-case response time analyses. We demonstrate the performance of our proposal on synthetic task sets of different sizes. This evaluation shows that our proposal finds more schedulable solutions than previous heuristics, approximating optimal but intractable approaches such as MILP or brute-force search, while requiring reasonable execution times.
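A bare-bones sketch of the approach: Adam (with gradient noise) optimizes a vector of continuous per-task scores, and the discrete priorities are obtained by ranking the scores. The loss below is a toy differentiable surrogate that merely recovers a deadline-monotonic-like ordering; the paper instead drives the gradient with worst-case response time analysis:

```python
import numpy as np

deadlines = np.array([50.0, 10.0, 120.0, 30.0])   # hypothetical task deadlines
target = 1.0 / deadlines                          # shorter deadline -> larger score

def loss_and_grad(scores):
    diff = scores - target
    return 0.5 * np.sum(diff ** 2), diff          # gradient of the quadratic surrogate

scores = np.zeros_like(deadlines)
m, v = np.zeros_like(scores), np.zeros_like(scores)
beta1, beta2, lr, eps = 0.9, 0.999, 0.01, 1e-8

for step in range(1, 501):
    loss, g = loss_and_grad(scores)
    g += np.random.default_rng(step).normal(0, 1e-4, g.shape)  # gradient noise
    m = beta1 * m + (1 - beta1) * g                            # Adam first moment
    v = beta2 * v + (1 - beta2) * g ** 2                       # Adam second moment
    m_hat, v_hat = m / (1 - beta1 ** step), v / (1 - beta2 ** step)
    scores -= lr * m_hat / (np.sqrt(v_hat) + eps)

priorities = np.argsort(np.argsort(-scores)) + 1   # rank the scores: 1 = highest priority
print(priorities)   # deadline-monotonic-like ordering recovered by gradient descent
```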
{"title":"Gradient descent algorithm for the optimization of fixed priorities in real-time systems","authors":"Juan M. Rivas , J. Javier Gutiérrez , Ana Guasque , Patricia Balbastre","doi":"10.1016/j.sysarc.2024.103198","DOIUrl":"10.1016/j.sysarc.2024.103198","url":null,"abstract":"<div><p>This paper considers the offline assignment of fixed priorities in partitioned preemptive real-time systems where tasks have precedence constraints. This problem is crucial in this type of systems, as having a <em>good</em> fixed priority assignment allows for an efficient use of the processing resources while meeting all the deadlines. In the literature, we can find several proposals to solve this problem, which offer varying trade-offs between the quality of their results and their computational complexities. <em>In this paper</em>, we propose a new approach, leveraging existing algorithms that are widely exploited in the field of Machine Learning: Gradient Descent, the Adam Optimizer, and Gradient Noise. We show how to adapt these algorithms to the problem of fixed priority assignment in conjunction with existing worst-case response time analyses. We demonstrate the performance of our proposal on synthetic task-sets with different sizes. This evaluation shows that our proposal is able to find more schedulable solutions than previous heuristics, approximating optimal but intractable algorithms such as MILP or brute-force, while requiring reasonable execution times.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"153 ","pages":"Article 103198"},"PeriodicalIF":4.5,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1383762124001358/pdfft?md5=e6db572d1a849663538d79c2985ac16d&pid=1-s2.0-S1383762124001358-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141394346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Reliability-aware scheduling for (m,k)-firm real-time embedded systems under hard energy budget constraint
Pub Date: 2024-06-04 | DOI: 10.1016/j.sysarc.2024.103185
Linwei Niu, Jonathan Musselwhite
For real-time embedded systems, feasibility, Quality of Service (QoS), reliability, and energy constraints are among the primary design concerns. In this research, we propose a reliability-aware scheduling scheme for real-time embedded systems with (m,k)-firm deadlines under a hard energy budget constraint. An (m,k)-firm system requires that at least m out of any k consecutive jobs of a real-time task meet their deadlines. To achieve the dual goals of maximizing feasibility and QoS for such systems while satisfying the reliability requirement under a given energy budget constraint, we propose to reserve recovery space for real-time jobs in an adaptive way based on the mandatory/optional job partitioning strategy. The evaluation results demonstrate that the proposed techniques significantly outperform previous research in maximizing feasibility and QoS for (m,k)-firm real-time embedded systems while preserving system reliability under a hard energy budget constraint. Moreover, the proposed work also addresses some deficiencies of Niu (2020) with respect to preserving system reliability.
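For context, one standard mandatory/optional partition from the (m,k)-firm literature (the evenly distributed pattern attributed to Ramanathan) is shown below; the paper's adaptive recovery reservation is built on top of such a partition, but this snippet shows only the partition itself:

```python
from math import ceil, floor

# Job j (0-based) is mandatory iff j == floor(ceil(j*m/k) * k/m). Only mandatory
# jobs must meet their deadlines to preserve the (m,k) guarantee; optional jobs
# can be dropped, freeing time and energy budget for other purposes.
def is_mandatory(j: int, m: int, k: int) -> bool:
    return j == floor(ceil(j * m / k) * k / m)

def pattern(m: int, k: int, n_jobs: int) -> str:
    return "".join("M" if is_mandatory(j, m, k) else "O" for j in range(n_jobs))

print(pattern(m=3, k=5, n_jobs=15))   # MMOMOMMOMOMMOMO: 3 of any 5 consecutive jobs are mandatory
```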
{"title":"Reliability-aware scheduling for (m,k)-firm real-time embedded systems under hard energy budget constraint","authors":"Linwei Niu, Jonathan Musselwhite","doi":"10.1016/j.sysarc.2024.103185","DOIUrl":"10.1016/j.sysarc.2024.103185","url":null,"abstract":"<div><p>For real-time embedded systems, feasibility, Quality of Service (QoS), reliability, and energy constraint are among the primary design concerns. In this research, we proposed a reliability-aware scheduling scheme for real-time embedded systems with <span><math><mrow><mo>(</mo><mi>m</mi><mo>,</mo><mi>k</mi><mo>)</mo></mrow></math></span>-firm deadlines under hard energy budget constraint. The <span><math><mrow><mo>(</mo><mi>m</mi><mo>,</mo><mi>k</mi><mo>)</mo></mrow></math></span>-firm systems require that at least <span><math><mi>m</mi></math></span> out of any <span><math><mi>k</mi></math></span> consecutive jobs of a real-time task meet their deadlines. To achieve the dual goals of maximizing the feasibility and QoS for such kind of systems while satisfying the reliability requirement under given energy budget constraint, we propose to reserve recovery space for real-time jobs in an adaptive way based on the mandatory/optional job partitioning strategy. The evaluation results demonstrate that the proposed techniques significantly outperform the previous research in maximizing the feasibility and QoS for <span><math><mrow><mo>(</mo><mi>m</mi><mo>,</mo><mi>k</mi><mo>)</mo></mrow></math></span>-firm real-time embedded systems while preserving the system reliability under hard energy budget constraint. Moreover, the proposed work has also addressed some insufficiency in Niu (2020) in terms of preserving the system reliability.</p></div>","PeriodicalId":50027,"journal":{"name":"Journal of Systems Architecture","volume":"154 ","pages":"Article 103185"},"PeriodicalIF":3.7,"publicationDate":"2024-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141405221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}