Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3334277
Building a Lightweight Trusted Execution Environment for Arm GPUs
Chenxu Wang, Yunjie Deng, Zhenyu Ning, Kevin Leach, Jin Li, Shoumeng Yan, Zheng-hao He, Jiannong Cao, Fengwei Zhang
A wide range of Arm endpoints leverage integrated and discrete GPUs to accelerate computation. However, Arm GPU security has not been explored by the community. Existing work has used Trusted Execution Environments (TEEs) to address GPU security concerns on Intel-based platforms, but numerous architectural differences lead to novel technical challenges in deploying TEEs for Arm GPUs. There is a need for generalizable and efficient Arm-based GPU security mechanisms. To address these problems, we present StrongBox, the first GPU TEE for secure general computation on Arm endpoints. StrongBox provides an isolated execution environment by ensuring exclusive access to the GPU. Our approach is based in part on a dynamic, fine-grained memory protection policy, as Arm-based GPUs typically share unified memory with the CPU. Furthermore, StrongBox reduces the runtime overhead of redundant security introspection operations. We also design an effective defense mechanism within the secure world to protect confidential GPU computation. Our design leverages the widely deployed Arm TrustZone and generic Arm features, without hardware modification or architectural changes. We prototype StrongBox using an off-the-shelf Arm Mali GPU and perform an extensive evaluation. Results show that StrongBox successfully ensures GPU computation security with a low (4.70%–15.26%) overhead.
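StrongBox's dynamic, fine-grained memory protection hinges on one invariant: while a confidential job runs, no task page may also be mapped by the normal world. A minimal Python sketch of that invariant check, with hypothetical addresses and names (the real enforcement happens in TrustZone's secure monitor, not in Python):

```python
# Conceptual sketch (not the StrongBox implementation): before a GPU job is
# submitted, the secure world verifies that every page of each task buffer is
# owned exclusively by the job, i.e., not also mapped by the normal world.
PAGE = 4096

def pages(base, size):
    """Page numbers covering the byte range [base, base + size)."""
    return set(range(base // PAGE, (base + size + PAGE - 1) // PAGE))

def job_is_isolated(buffers, normal_world_pages):
    """Reject submission if any task page is also mapped by the normal world."""
    task_pages = set()
    for base, size in buffers:
        task_pages |= pages(base, size)
    return not (task_pages & normal_world_pages)

# A buffer overlapping a normal-world mapping must be rejected.
ok = job_is_isolated([(0x10000, 8192)], normal_world_pages={0x999})
bad = job_is_isolated([(0x10000, 8192)], normal_world_pages={0x10000 // PAGE})
```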
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3335374
Personalized 3D Location Privacy Protection With Differential and Distortion Geo-Perturbation
Minghui Min, Haopeng Zhu, Jiahao Ding, Shiyin Li, Liang Xiao, Miao Pan, Zhu Han
The rapid development of indoor location-based services (LBS) has raised concerns about location privacy protection in 3-dimensional (3D) space. Existing 2-dimensional (2D) location privacy protection mechanisms (LPPMs) cannot effectively resist attacks in 3D environments. Furthermore, users may have different sensitive attributes at different locations and times. In this article, we first formally study the relationship between two complementary notions, geo-indistinguishability and distortion privacy (i.e., expected inference error), in 3D space and develop a two-phase personalized 3D LPPM (P3DLPPM). In Phase I, we search for neighboring locations to formulate a protection location set (PLS) that hides the actual location, based on the above relationship. To realize this, we develop a 3D Hilbert curve-based minimum-distance searching algorithm that finds the PLS with minimum diameter for each location while guaranteeing differential privacy. In Phase II, we put forth a novel Permute-and-Flip mechanism for location perturbation, adapting its original application in data publishing privacy protection to location perturbation. It generates fake locations with smaller perturbation distances while improving the balance between privacy and quality of service (QoS). Simulation results show that the proposed P3DLPPM significantly improves personalized privacy protection while meeting users' QoS needs.
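Permute-and-Flip is a published differentially private selection mechanism (McKenna & Sheldon, 2020), and the abstract's Phase II repurposes it to pick perturbed locations. A minimal sketch of the generic mechanism, assuming a candidate set scored by a quality function (the mapping from scores to fake locations here is illustrative, not the paper's):

```python
import math
import random

def permute_and_flip(candidates, quality, epsilon, sensitivity=1.0, rng=random):
    """Permute-and-Flip: visit candidates in random order and accept
    candidate r with probability exp(eps * (q(r) - q_max) / (2 * sens)).
    The top-scoring candidate is accepted with probability 1 when reached,
    so the loop always terminates."""
    q_max = max(quality(r) for r in candidates)
    pool = list(candidates)
    rng.shuffle(pool)
    for r in pool:
        p = math.exp(epsilon * (quality(r) - q_max) / (2.0 * sensitivity))
        if rng.random() < p:
            return r

# Example: candidate fake locations scored by closeness to the true location 2
# (hypothetical 1-D setup just to exercise the mechanism).
choice = permute_and_flip([0, 1, 2, 3], lambda r: -abs(r - 2), epsilon=1.0)
```

With a large epsilon the mechanism concentrates on the best candidate; small epsilon spreads probability mass for stronger privacy.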
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3348772
Function Interaction Risks in Robot Apps: Analysis and Policy-Based Solution
Yuan Xu, Yungang Bao, Sa Wang, Tianwei Zhang
Robot apps are becoming more automated, complex, and diverse. An app usually consists of many functions that interact with each other and with the environment, allowing robots to conduct various tasks. However, this also opens a new door for cyber attacks: adversaries can leverage these interactions to threaten the safety of robot operations. Unfortunately, this issue is rarely explored in past work. We present the first systematic investigation of function interactions in common robot apps. First, we disclose the potential risks and damage caused by malicious interactions. By analyzing 3,100 packages from the Robot Operating System (ROS) platform, we introduce a comprehensive graph to model the function interactions in robot apps. From this graph, we identify and categorize three types of interaction risks. Second, we propose novel methodologies to detect and mitigate these risks and protect the operation of robot apps. We introduce security policies for each type of risk and design coordination nodes to enforce the policies and regulate the interactions. We conduct extensive experiments on 110 robot apps from the ROS platform and two complex apps (Baidu Apollo and Autoware) widely adopted in industry. Evaluation results show that our methodologies can correctly identify and mitigate all potential risks.
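The coordination-node idea can be pictured as a reachability check on the interaction graph: an actuation function reachable from untrusted input without passing through a policy-enforcing node is a risk. A toy sketch, with node roles invented for illustration (the paper's three risk categories and its policies are richer than this):

```python
from collections import deque

def unguarded_path(edges, sources, sinks, guards):
    """BFS over the directed interaction graph: return True if some actuation
    sink is reachable from an external source without crossing a guard node
    (a toy analog of the coordination-node check)."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen = set()
    frontier = deque(s for s in sources if s not in guards)
    while frontier:
        u = frontier.popleft()
        if u in seen:
            continue
        seen.add(u)
        if u in sinks:
            return True
        for v in adj.get(u, []):
            if v not in guards:
                frontier.append(v)
    return False

# Hypothetical app: network input feeds the planner, which drives the motor.
edges = [("camera", "planner"), ("planner", "motor"), ("network", "planner")]
sources, sinks = {"network"}, {"motor"}
unsafe = unguarded_path(edges, sources, sinks, guards=set())
guarded = unguarded_path(edges, sources, sinks, guards={"planner"})
```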
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3346183
FedTA: Federated Worthy Task Assignment for Crowd Workers
Xiangping Kang, Guoxian Yu, Lanju Kong, C. Domeniconi, Xiangliang Zhang, Qingzhong Li
Crowdsourcing is a promising computing paradigm for processing computer-hard tasks by harnessing human intelligence. Protecting online workers' privacy is a hindrance to deploying crowdsourcing in the real world. Attempts have been made to address this issue by injecting noise or encrypting sensitive data, which causes quality loss and/or heavy computation and communication load. In this paper, we propose an approach called FedTA (Federated Worthy Task Assignment for Crowd Workers) to protect a crowd worker's private data while ensuring quality. FedTA trains a client model on the private data and annotations owned by a worker and uploads client models to aggregate the server model, without leaking the privacy of task data. To account for varying task distributions (i.e., non-i.i.d. data) and error-prone task annotations, it leverages the feature similarity and semantic similarity derived from client and server models on local tasks, respectively, to quantify the quality of annotations and clients. Based on these, it further introduces a task assignment strategy that notifies clients which tasks are worthy and suitable for annotation. This strategy incrementally improves the performance of client and server models, while disregarding unworthy tasks to save budget and avoid their negative impact. Experimental results show that FedTA can complete secure crowdsourcing projects with high quality and low budget.
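The client-quality scores feed the server-side aggregation. A minimal FedAvg-style sketch of quality-weighted aggregation over flat parameter vectors (the weighting rule is illustrative; the paper's exact formula may differ):

```python
def aggregate(client_models, qualities):
    """Weighted average of client parameter vectors, weighting each client
    by its quality score (higher-quality clients contribute more to the
    server model)."""
    total = float(sum(qualities))
    dim = len(client_models[0])
    server = [0.0] * dim
    for params, q in zip(client_models, qualities):
        w = q / total
        for i, p in enumerate(params):
            server[i] += w * p
    return server

# Two clients with quality scores 3 and 1: the first dominates the average.
server = aggregate([[0.0, 0.0], [4.0, 4.0]], qualities=[3.0, 1.0])
```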
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3334268
kCPA: Towards Sensitive Pointer Full Life Cycle Authentication for OS Kernels
Yutian Yang, Jinjiang Tu, Wenbo Shen, Songbo Zhu, Rui Chang, Yajin Zhou
Code reuse attacks pose a substantial threat to the security of operating system kernels. Control-flow graph-based CFI techniques, while effective, bring considerable performance overhead, limiting their adoption in real-world products. As an alternative, recent research suggests safeguarding the integrity of sensitive pointers as a countermeasure against manipulation attempts. Unfortunately, existing pointer integrity protection schemes protect sensitive pointers only partially and ignore assembly code, leaving protection gaps. To fill these gaps, we propose a novel security concept named full life-cycle integrity, which enforces the integrity of a sensitive pointer at every step of its value flow chain. To realize full life-cycle integrity, we propose three novel techniques: assembly-aware sensitivity for analyzing assembly code, a Merkle PAC tree for protecting interrupt context securely and efficiently, and pointer-grained authentication for defeating spatial substitution attacks. We have developed a practical implementation of full life-cycle integrity for the Linux kernel, called "kernel Code Pointer Authentication" (kCPA), which leverages the ARM Pointer Authentication (PAuth) mechanism. This implementation has been extended to the Apple M1 architecture for real-world evaluation on PAuth hardware. Our assessment demonstrates that kCPA effectively mitigates a range of real-world attacks while incurring a minimal 2.5% performance overhead on the Phoronix Test Suite and nearly negligible overhead on SPEC2017 benchmarks.
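A Merkle tree over the saved interrupt context lets the kernel verify, with a single root value, that no saved register was tampered with while the context sat in memory. A software analog using SHA-256 (the actual Merkle PAC tree uses PAuth primitives rather than SHA-256, and the register values below are made up):

```python
import hashlib

def _h(data):
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree over byte-string leaves; any change to
    any leaf changes the root, so one stored value authenticates them all."""
    level = [_h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:  # duplicate the last node on odd-sized levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# On interrupt entry: hash the saved registers and keep the root.
context = [b"x0=0xdead", b"x1=0xbeef", b"lr=0xffff0000", b"sp=0xffff8000"]
root = merkle_root(context)

# On return: recomputing the root over a tampered context yields a different
# value, so the modification (e.g., an overwritten link register) is detected.
tampered = list(context)
tampered[2] = b"lr=0x41414141"
```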
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3348760
vCNN: Verifiable Convolutional Neural Network Based on zk-SNARKs
Seunghwan Lee, Hankyung Ko, Jihye Kim, Hyunok Oh
It is becoming important for clients to be able to check whether AI inference services have been computed correctly. Since the weight values in a CNN model are assets of the service provider, the client should be able to check the correctness of the result without them. A zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) allows verifying the result without the input and weight values. However, proving time in zk-SNARKs is too slow for real AI applications. This article proposes a new efficient verifiable convolutional neural network (vCNN) framework that greatly accelerates proving performance. We introduce a new efficient relation representation for convolution equations, reducing the proving complexity of convolution from O(ln) to O(l+n) compared with existing zk-SNARK approaches, where l and n denote the sizes of the kernel and the data in CNNs. Experimental results show that the proposed vCNN improves proving performance by 20-fold for a simple MNIST model and 18,000-fold for VGG16. The security of the proposed scheme is formally proven.
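The O(ln)-to-O(l+n) reduction rests on a classical identity: full 1-D convolution equals polynomial multiplication, so a single polynomial product relation can stand in for all l·n elementwise multiplications. A small self-check of that identity (a simplification of the intuition, not the paper's exact QAP construction):

```python
def poly_mul(a, w):
    """Coefficients of (sum a_i x^i) * (sum w_j x^j)."""
    out = [0] * (len(a) + len(w) - 1)
    for i, ai in enumerate(a):
        for j, wj in enumerate(w):
            out[i + j] += ai * wj
    return out

def conv_full(a, w):
    """Direct full 1-D convolution: l * n multiplications, one per pair."""
    return [sum(a[k] * w[i - k]
                for k in range(len(a)) if 0 <= i - k < len(w))
            for i in range(len(a) + len(w) - 1)]

# The two computations agree coefficient by coefficient, which is why one
# product of committed polynomials can encode the whole convolution.
data, kernel = [1, 2, 3, 4], [1, 0, -1]
```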
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3335961
Deep Hashing Based Cancelable Multi-Biometric Template Protection
Guichuan Zhao, Qi Jiang, Ding Wang, Xindi Ma, Xinghua Li
The increasing use of multi-biometric authentication has raised concerns about the security of biometric templates. Many template protection methods based on convolutional neural networks have been presented, but most involve a trade-off between authentication accuracy and template security. In this paper, we present a cancelable multi-biometric template protection scheme that combines deep hashing with cancelable distance-preserving encryption (CDPE), providing high template security without degrading authentication performance. Specifically, a deep hashing-based architecture that minimizes quantization loss is designed to map face and iris traits to binary codes. Next, CDPE is proposed to generate a protected template from the face binary code and a user-specific key obtained from the iris binary code; it preserves the distance between original templates in the protected domain, ensuring authentication performance equivalent to that of unprotected systems. Digital lockers, rather than the key itself, are stored to further enhance security; they can be unlocked with genuine biometric traits to recover the correct key during authentication. Theoretical and experimental results on real face and iris datasets show that our scheme achieves an equal error rate of 0.23% and a genuine accept rate of 97.54%, while guaranteeing irreversibility, revocability, and unlinkability of protected templates.
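The core CDPE property is that pairwise distances between templates survive the transformation, so matching thresholds carry over unchanged to the protected domain. A toy illustration on binary codes, where a key-seeded bit permutation followed by an XOR mask preserves Hamming distance and is revocable by changing the key (not the paper's construction):

```python
import random

def protect(code, key):
    """Toy cancelable transform: key-seeded bit permutation, then XOR with a
    key-derived mask. Both steps preserve pairwise Hamming distance between
    templates protected under the same key; changing the key revokes the
    template."""
    rng = random.Random(key)
    perm = list(range(len(code)))
    rng.shuffle(perm)
    mask = [rng.randint(0, 1) for _ in code]
    return [code[p] ^ m for p, m in zip(perm, mask)]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Two enrollment-time binary codes (hypothetical 8-bit examples).
a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 1, 1, 0, 0, 0, 1, 1]
```

The distance is preserved because the same permutation reorders both codes identically and the XOR mask cancels wherever the two codes are compared bit by bit.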
Pub Date: 2024-07-01 | DOI: 10.1109/TDSC.2023.3333913
AAS: Automatic Virtual Data Augmentation for Deep Image Steganalysis
Jiansong Zhang, Kejiang Chen, Chuan Qin, Weiming Zhang, Neng H. Yu
In recent years, steganalysis based on deep learning has evolved rapidly. However, training deep learning models requires large amounts of data, and the models are prone to overfitting when data is limited. Data augmentation is an effective method to mitigate overfitting. Existing data augmentation methods in steganalysis can be categorized into cover enrichment and virtual augmentation, which are used at different stages: cover enrichment introduces additional cover-stego pairs prior to training, whereas virtual augmentation augments data during training. Existing virtual augmentation methods are designed heuristically and rely on expert knowledge. In this paper, we propose the first automatic virtual data augmentation method for steganalysis. Specifically, we design an augmentation network that augments cover and stego images by intelligently adding noise. The augmentation network is trained adversarially against the steganalyzer to generate diverse data. Meanwhile, a "class-invariant" module prevents the augmentation network from changing the original data distribution too much, and a "stabilizer" loss function keeps the adversarial training stable by constraining the amount of added noise. Experimental results show that the proposed method outperforms existing virtual augmentation methods. Moreover, combining the proposed method with cover enrichment can further boost performance.
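The "stabilizer" constrains how much noise the augmentation network may inject. A toy stand-in that makes the constraint concrete by bounding both the number and the amplitude of perturbed pixels (the actual augmentation network is trained adversarially; this fixed-noise version is only illustrative):

```python
import random

def augment(image, k, amplitude=1, rng=None):
    """Toy bounded-noise augmentation: perturb at most k pixels by
    +/-amplitude, leaving the rest untouched, so the augmented sample stays
    close to the original distribution."""
    rng = rng or random.Random(0)
    out = list(image)
    for i in rng.sample(range(len(out)), k):
        out[i] += rng.choice((-amplitude, amplitude))
    return out

# A flat 4x4 "image" with exactly 3 perturbed pixels.
img = [10] * 16
aug = augment(img, k=3)
changed = sum(a != b for a, b in zip(img, aug))
```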
Pub Date: 2024-07-01 | DOI: 10.1109/tdsc.2023.3345543
Security-Minded Verification of Cooperative Awareness Messages
M. Farrell, Matthew Bradbury, Rafael C. Cardoso, Michael Fisher, Louise A. Dennis, Clare Dixon, A. Sheik, Hu Yuan, Carsten Maple
Autonomous robotic systems are both safety- and security-critical, since a breach in system security may impact safety. In such critical systems, formal verification is used to model the system and verify that it obeys specific functional and safety properties. Independently, threat modeling is used to analyse and manage the cyber security threats that such systems may encounter. Both verification and threat analysis serve to ensure that the system will be reliable, albeit from differing perspectives. In prior work, we argued that these analyses should inform one another, and in this paper we extend our previously defined methodology for security-minded verification by incorporating runtime verification. To illustrate our approach, we analyse an algorithm for sending Cooperative Awareness Messages between autonomous vehicles. Our analysis centres on identifying STRIDE security threats. We show how these can be formalised, and subsequently verified, using a combination of formal tools for static aspects, namely Promela/SPIN and Dafny, and we generate runtime monitors for dynamic verification. Our approach allows us to focus our verification effort on the security properties that are particularly important, and to consider safety and security in tandem, both statically and at runtime.
In spatial crowdsourcing, location-based task recommendation schemes are widely used to match appropriate workers in desired geographic areas with relevant tasks from data requesters. To ensure data confidentiality, various privacy-preserving location-based task recommendation schemes have been proposed, as cloud servers behave semi-honestly. However, existing schemes reveal access patterns, and the dimension of the geographic query increases significantly when additional information beyond locations is used to filter appropriate workers. To address the above challenges, this article proposes two efficient and privacy-preserving location-based task recommendation (EPTR) schemes that support high-dimensional queries and access pattern privacy protection. First, we propose a basic EPTR scheme (EPTR-I) that utilizes randomizable matrix multiplication and public position intersection test (PPIT) to achieve linear search complexity and full access pattern privacy protection. Then, we explore the trade-off between efficiency and security and develop a tree-based EPTR scheme (EPTR-II) to achieve sub-linear search complexity. Security analysis demonstrates that both schemes protect the confidentiality of worker locations, requester queries, and query results and achieve different security properties on access pattern assurance. Extensive performance evaluation shows that both EPTR schemes are efficient in terms of computational cost, with EPTR-II being 10³× faster than the state-of-the-art scheme in task recommendation.
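The randomizable matrix multiplication underlying EPTR-I can be illustrated with a toy two-dimensional sketch. This is not the paper's construction, only the masking idea it builds on: a shared random invertible matrix A masks the worker's vector as A·v and the requester's vector as q·A⁻¹, so the server can compute the inner product (q·A⁻¹)(A·v) = q·v without learning q or v themselves. All variable names and the 2×2 setting are illustrative assumptions.

```python
import random


def rand_invertible_2x2():
    """Sample a random 2x2 matrix with a comfortably non-zero determinant."""
    while True:
        a, b, c, d = (random.uniform(-5, 5) for _ in range(4))
        det = a * d - b * c
        if abs(det) > 0.5:
            return [[a, b], [c, d]], det


def inverse_2x2(m, det):
    (a, b), (c, d) = m
    return [[d / det, -b / det], [-c / det, a / det]]


def mat_vec(m, v):  # m · v (column vector)
    return [m[0][0] * v[0] + m[0][1] * v[1],
            m[1][0] * v[0] + m[1][1] * v[1]]


def vec_mat(v, m):  # v · m (row vector)
    return [v[0] * m[0][0] + v[1] * m[1][0],
            v[0] * m[0][1] + v[1] * m[1][1]]


def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]


# Trusted setup: random invertible mask A shared by worker and requester.
A, det = rand_invertible_2x2()
A_inv = inverse_2x2(A, det)

loc = [3.0, 4.0]      # worker's private location vector (illustrative)
query = [1.0, 1.0]    # requester's private query vector (illustrative)

masked_loc = mat_vec(A, loc)          # worker uploads A·v
masked_query = vec_mat(query, A_inv)  # requester uploads q·A⁻¹

# Server computes (q·A⁻¹)(A·v) = q·v without seeing q or v.
result = dot(masked_query, masked_loc)
```

In the actual scheme this primitive is combined with the public position intersection test (PPIT) and, in EPTR-II, with a tree index to reach sub-linear search.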
{"title":"Achieving Efficient and Privacy-Preserving Location-Based Task Recommendation in Spatial Crowdsourcing","authors":"Fuyuan Song, Jinwen Liang, Chuan Zhang, Zhangjie Fu, Zhen Qin, Song Guo","doi":"10.1109/TDSC.2023.3342239","DOIUrl":"https://doi.org/10.1109/TDSC.2023.3342239","url":null,"abstract":"In spatial crowdsourcing, location-based task recommendation schemes are widely used to match appropriate workers in desired geographic areas with relevant tasks from data requesters. To ensure data confidentiality, various privacy-preserving location-based task recommendation schemes have been proposed, as cloud servers behave semi-honestly. However, existing schemes reveal access patterns, and the dimension of the geographic query increases significantly when additional information beyond locations is used to filter appropriate workers. To address the above challenges, this article proposes two efficient and privacy-preserving location-based task recommendation (EPTR) schemes that support high-dimensional queries and access pattern privacy protection. First, we propose a basic EPTR scheme (EPTR-I) that utilizes randomizable matrix multiplication and public position intersection test (PPIT) to achieve linear search complexity and full access pattern privacy protection. Then, we explore the trade-off between efficiency and security and develop a tree-based EPTR scheme (EPTR-II) to achieve sub-linear search complexity. Security analysis demonstrates that both schemes protect the confidentiality of worker locations, requester queries, and query results and achieve different security properties on access pattern assurance. 
Extensive performance evaluation shows that both EPTR schemes are efficient in terms of computational cost, with EPTR-II being <inline-formula><tex-math notation=\"LaTeX\">$10^{3}times$</tex-math><alternatives><mml:math><mml:mrow><mml:msup><mml:mn>10</mml:mn><mml:mn>3</mml:mn></mml:msup><mml:mo>×</mml:mo></mml:mrow></mml:math><inline-graphic xlink:href=\"liang-ieq1-3342239.gif\"/></alternatives></inline-formula> faster than the state-of-the-art scheme in task recommendation.","PeriodicalId":13047,"journal":{"name":"IEEE Transactions on Dependable and Secure Computing","volume":null,"pages":null},"PeriodicalIF":7.0,"publicationDate":"2024-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141712081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}