
Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security: Latest Publications

Covert Channels in Network Time Security
Kevin Lamshöft, J. Dittmann
Network Time Security (NTS), specified in RFC 8915, is a mechanism that provides cryptographic security for clock synchronization using the Network Time Protocol (NTP) as its foundation. By using Transport Layer Security (TLS) and Authenticated Encryption with Associated Data (AEAD), NTS is able to ensure integrity and authenticity between servers and clients synchronizing time. However, it was shown in the past that time synchronization protocols such as the Network Time Protocol (NTP) and the Precision Time Protocol (PTP) can be leveraged as carriers for covert channels, potentially infiltrating or exfiltrating information or serving as Command-and-Control channels in case of malware infections. By systematically analyzing the NTS specification, we identified 12 potential covert channels, which we describe and discuss in this paper. From these 12 channels, we selected a client-side approach as an example for a proof-of-concept implementation using NTS random UIDs. Further, we investigate potential countermeasures and propose a design for an active warden capable of mitigating the covert channels described in this paper.
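As an illustration of the kind of client-side channel the abstract describes, below is a minimal, hypothetical Python sketch of hiding data in the NTS Unique Identifier extension field, which is expected to carry 32 random octets. The keystream construction, function names, and parameters are illustrative assumptions, not the authors' proof-of-concept: the point is only that key-derived pseudorandom ciphertext is indistinguishable from the expected randomness for a warden without the shared secret.

```python
import hashlib
import hmac
import os

UID_LEN = 32  # the NTS Unique Identifier carries 32 random octets

def _keystream(key: bytes, counter: int, length: int) -> bytes:
    """Pseudorandom keystream from a shared secret: HMAC-SHA256 in counter mode."""
    out = b""
    block = 0
    while len(out) < length:
        msg = counter.to_bytes(8, "big") + block.to_bytes(4, "big")
        out += hmac.new(key, msg, hashlib.sha256).digest()
        block += 1
    return out[:length]

def embed_uid(key: bytes, counter: int, payload: bytes) -> bytes:
    """Build a covert UID: payload XOR keystream, padded with true random bytes.
    Without the key, the result looks like the expected 32 random octets."""
    assert len(payload) <= UID_LEN
    ks = _keystream(key, counter, len(payload))
    covert = bytes(p ^ k for p, k in zip(payload, ks))
    return covert + os.urandom(UID_LEN - len(payload))

def extract_uid(key: bytes, counter: int, uid: bytes, payload_len: int) -> bytes:
    """Receiver side: XOR the UID prefix with the same keystream."""
    ks = _keystream(key, counter, payload_len)
    return bytes(c ^ k for c, k in zip(uid[:payload_len], ks))

# Example: 16 covert bytes per NTS-protected request.
key = b"shared-secret-between-sender-and-receiver"
uid = embed_uid(key, counter=0, payload=b"covert-16-bytes!")
assert extract_uid(key, counter=0, uid=uid, payload_len=16) == b"covert-16-bytes!"
```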
{"title":"Covert Channels in Network Time Security","authors":"Kevin Lamshöft, J. Dittmann","doi":"10.1145/3531536.3532947","DOIUrl":"https://doi.org/10.1145/3531536.3532947","url":null,"abstract":"Network Time Security (NTS) specified in RFC8915 is a mechanism to provide cryptographic security for clock synchronization using the Network Time Protocol (NTP) as foundation. By using Transport Layer Security (TLS) and Authenticated Encryption with Associated Data (AEAD) NTS is able to ensure integrity and authenticity between server and clients synchronizing time. However, in the past it was shown that time synchronisation protocols such as the Network Time Protocol (NTP) and the Precision Time Protocol (PTP) might be leveraged as carrier for covert channels, potentially infiltrating or exfiltrating information or to be used as Command-and-Control channels in case of malware infections. By systematically analyzing the NTS specification, we identified 12 potential covert channels, which we describe and discuss in this paper. From the 12 channels, we exemplary selected an client-side approach for a proof-of-concept implementation using NTS random UIDs. Further, we analyze and investigate potential countermeasures and propose a design for an active warden capable of mitigating the covert channels described in this paper.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"137 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127425331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Session details: Session 4: Steganography I
J. Fridrich
{"title":"Session details: Session 4: Steganography I","authors":"J. Fridrich","doi":"10.1145/3545214","DOIUrl":"https://doi.org/10.1145/3545214","url":null,"abstract":"","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127202533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Hiding Needles in a Haystack: Towards Constructing Neural Networks that Evade Verification
Árpád Berta, Gábor Danner, István Hegedüs, Márk Jelasity
Machine learning models are vulnerable to adversarial attacks, where a small, invisible, malicious perturbation of the input changes the predicted label. A large area of research is concerned with verification techniques that attempt to decide whether a given model has adversarial inputs close to a given benign input. Here, we show that current approaches to verification have a key vulnerability: we construct a model that is not robust but passes current verifiers. The idea is to insert artificial adversarial perturbations by adding a backdoor to a robust neural network model. In our construction, the adversarial input subspace that triggers the backdoor has a very small volume, and outside this subspace the gradient of the model is identical to that of the clean model. In other words, we seek to create a "needle in a haystack" search problem. For practical purposes, we also require that the adversarial samples be robust to JPEG compression. Large "needle in the haystack" problems are practically impossible to solve with any search algorithm. Formal verifiers can handle this in principle, but they do not scale up to real-world networks at the moment, and achieving this is a challenge because the verification problem is NP-complete. Our construction is based on training a hiding and a revealing network using deep steganography. Using the revealing network, we create a separate backdoor network and integrate it into the target network. We train our deep steganography networks over the CIFAR-10 dataset. We then evaluate our construction using state-of-the-art adversarial attacks and backdoor detectors over the CIFAR-10 and the ImageNet datasets. We made the code and models publicly available at https://github.com/szegedai/hiding-needles-in-a-haystack.
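The core of the construction, a backdoor branch whose influence vanishes outside a tiny trigger subspace, can be sketched as follows. This is a hedged conceptual sketch assuming PyTorch; the placeholder networks, the hard gate, and all names are illustrative assumptions, not the authors' steganography-trained networks. With a hard (non-differentiable) gate, inputs outside the trigger subspace yield exactly the clean model's output and gradient.

```python
import torch
import torch.nn as nn

class BackdooredClassifier(nn.Module):
    """Clean classifier plus a backdoor branch that fires only on inputs
    inside a tiny trigger subspace; elsewhere, output and gradient equal
    the clean model's."""

    def __init__(self, clean_model: nn.Module, trigger_detector: nn.Module,
                 target_class: int, threshold: float = 0.99):
        super().__init__()
        self.clean_model = clean_model
        self.trigger_detector = trigger_detector  # e.g. a revealing network
        self.target_class = target_class
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        logits = self.clean_model(x)
        score = torch.sigmoid(self.trigger_detector(x)).squeeze(-1)
        gate = (score > self.threshold).float()   # hard gate: exactly 0 or 1
        bias = torch.zeros_like(logits)
        bias[:, self.target_class] = 1e4          # overwhelms the clean logits
        return logits + gate.unsqueeze(-1) * bias

# Toy usage with placeholder networks on 3x32x32 inputs (CIFAR-10 shaped):
clean = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 1))
model = BackdooredClassifier(clean, detector, target_class=0)
out = model(torch.randn(4, 3, 32, 32))  # (4, 10) logits
```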
{"title":"Hiding Needles in a Haystack: Towards Constructing Neural Networks that Evade Verification","authors":"Árpád Berta, Gábor Danner, István Hegedüs, Márk Jelasity","doi":"10.1145/3531536.3532966","DOIUrl":"https://doi.org/10.1145/3531536.3532966","url":null,"abstract":"Machine learning models are vulnerable to adversarial attacks, where a small, invisible, malicious perturbation of the input changes the predicted label. A large area of research is concerned with verification techniques that attempt to decide whether a given model has adversarial inputs close to a given benign input. Here, we show that current approaches to verification have a key vulnerability: we construct a model that is not robust but passes current verifiers. The idea is to insert artificial adversarial perturbations by adding a backdoor to a robust neural network model. In our construction, the adversarial input subspace that triggers the backdoor has a very small volume, and outside this subspace the gradient of the model is identical to that of the clean model. In other words, we seek to create a \"needle in a haystack\" search problem. For practical purposes, we also require that the adversarial samples be robust to JPEG compression. Large \"needle in the haystack\" problems are practically impossible to solve with any search algorithm. Formal verifiers can handle this in principle, but they do not scale up to real-world networks at the moment, and achieving this is a challenge because the verification problem is NP-complete. Our construction is based on training a hiding and a revealing network using deep steganography. Using the revealing network, we create a separate backdoor network and integrate it into the target network. We train our deep steganography networks over the CIFAR-10 dataset. We then evaluate our construction using state-of-the-art adversarial attacks and backdoor detectors over the CIFAR-10 and the ImageNet datasets. We made the code and models publicly available at https://github.com/szegedai/hiding-needles-in-a-haystack.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114600573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 1
Sparse Trigger Pattern Guided Deep Learning Model Watermarking
Chun-Shien Lu
Watermarking neural networks (NNs) for ownership protection has received considerable attention recently. Resistance to both model pruning and fine-tuning is commonly used to evaluate the robustness of a watermarked NN. However, the rationale behind such robustness is still relatively unexplored in the literature. In this paper, we study this problem and propose a sparse trigger pattern (STP) guided deep learning model watermarking method. We provide empirical evidence showing that trigger patterns are able to make the distribution of model parameters compact, and thus exhibit interpretable resilience to model pruning and fine-tuning. We find that the effect of STP can also be technically interpreted as dropout in the first layer. Extensive experiments demonstrate the robustness of our method.
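Since the abstract does not spell out the STP construction itself, the following is only a generic, hedged sketch of trigger-pattern watermark stamping and black-box ownership verification in Python/NumPy; the trigger shape, pixel count, and the 90% agreement threshold are illustrative assumptions.

```python
import numpy as np

def make_sparse_trigger(shape=(32, 32, 3), n_pixels=20, seed=7):
    """Generate a sparse trigger: a few fixed pixel positions and values."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(int(np.prod(shape)), dtype=bool)
    mask[rng.choice(mask.size, size=n_pixels, replace=False)] = True
    mask = mask.reshape(shape)
    pattern = rng.uniform(0.0, 1.0, size=shape)
    return mask, pattern

def stamp(images, mask, pattern):
    """Overwrite the masked pixels of each image with the trigger pattern."""
    stamped = images.copy()
    stamped[:, mask] = pattern[mask]
    return stamped

def verify_watermark(predict_fn, key_images, key_labels, mask, pattern, tol=0.9):
    """Ownership check: a watermarked model should map trigger-stamped key
    images to the predefined key labels with high agreement."""
    preds = predict_fn(stamp(key_images, mask, pattern))
    return float(np.mean(preds == key_labels)) >= tol
```

During embedding, the owner would fine-tune the model on trigger-stamped images with the key labels; verification then only needs black-box predictions.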
{"title":"Sparse Trigger Pattern Guided Deep Learning Model Watermarking","authors":"Chun-Shien Lu","doi":"10.1145/3531536.3532961","DOIUrl":"https://doi.org/10.1145/3531536.3532961","url":null,"abstract":"Watermarking neural networks (NNs) for ownership protection has received considerable attention recently. Resisting both model pruning and fine-tuning is commonly considered to evaluate the robustness of a watermarked NN. However, the rationale behind such a robustness is still relatively unexplored in the literature. In this paper, we study this problem to propose a so-called sparse trigger pattern (STP) guided deep learning model watermarking method. We provide empirical evidence to show that trigger patterns are able to make the distribution of model parameters compact, and thus exhibit interpretable resilience to model pruning and fine-tuning. We find the effect of STP can also be technically interpreted as the first layer dropout. Extensive experiments demonstrate the robustness of our method.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134555329","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
AMR Steganalysis based on Adversarial Bi-GRU and Data Distillation
Z. Wu, Junjun Guo
Existing AMR (Adaptive Multi-Rate) steganalysis algorithms based on pitch delay have low detection accuracy on short samples or samples with a low embedding rate, and their models are fragile under attack by adversarial samples. To solve this problem, we design an AMR steganalysis method based on an adversarial Bi-GRU (Bi-directional Gated Recurrent Unit) and data distillation. First, Gaussian white noise is randomly added to part of the original speech to form an adversarial data set, and a small amount of speech is manually annotated to train the model. Second, three transformations (1.5x speed, 0.5x speed, and mirror flip) are applied to the remaining original speech data, the transformed versions are classified by the Bi-GRU, and the final predicted label for the original data is obtained by decision fusion. Finally, all labeled data are put back into the Bi-GRU model for final training. Note that each batch of final training data includes both normal and adversarial samples. This method adopts semi-supervised learning, which greatly reduces the resources consumed by manual labeling, and introduces an adversarial Bi-GRU that can analyze samples in both directions over long time spans. Beyond improving detection accuracy, the security and robustness of the model are greatly improved. The experimental results show that the algorithm achieves an accuracy of 96.73% on normal samples and 95.6% on adversarial samples.
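The transform-and-fuse pseudo-labeling step can be made concrete with a hedged Python sketch; the classifier here is a placeholder callable, and the naive linear resampling stands in for a proper time-scale modification of speech. None of these names come from the paper.

```python
import numpy as np

def change_speed(signal: np.ndarray, factor: float) -> np.ndarray:
    """Naive speed change by linear resampling (a stand-in for proper
    time-scale modification of speech)."""
    n_out = int(len(signal) / factor)
    return np.interp(np.linspace(0, len(signal) - 1, n_out),
                     np.arange(len(signal)), signal)

def distill_label(classify, signal: np.ndarray) -> int:
    """Pseudo-label one sample: classify the three transformed views
    (1.5x speed, 0.5x speed, mirror flip) and fuse by majority vote."""
    views = [change_speed(signal, 1.5),
             change_speed(signal, 0.5),
             signal[::-1]]                      # mirror flip
    votes = [int(classify(v)) for v in views]   # each vote in {0, 1}
    return int(np.bincount(votes, minlength=2).argmax())

# Toy usage with a placeholder "classifier":
rng = np.random.default_rng(0)
fake_classify = lambda s: int(np.mean(np.abs(s)) > 0.8)
label = distill_label(fake_classify, rng.normal(size=8000))
```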
{"title":"AMR Steganalysis based on Adversarial Bi-GRU and Data Distillation","authors":"Z. Wu, Junjun Guo","doi":"10.1145/3531536.3532958","DOIUrl":"https://doi.org/10.1145/3531536.3532958","url":null,"abstract":"Existing AMR (Adaptive Multi-Rate) steganalysis algorithms based on pitch delay have low detection accuracy on samples with short time or low embedding rate, and the model shows fragility under the attack of adversarial samples. To solve this problem, we design an advanced AMR steganalysis method based on adversarial Bi-GRU (Bi-directional Gated Recurrent Unit) and data distillation. First, Gaussian white noise is randomly added to part of the original speech to form adversarial data set, then artificially annotate a small amount of voice to train the model. Second, perform three transformations of 1.5 times speed, 0.5 times speed, and mirror flip on the remaining original voice data, then put them into Bi-GRU for classification, and the final predicted label obtained by the decision fusion corresponds to the original data. All data with the label is put back into the Bi-GRU model for final training at last. What needs to be pointed out is that each batch of final training data includes normal and adversarial samples. This method adopts a semi-supervised learning method, which greatly saves the resources consumed by manual labeling, and introduces adversarial Bi-GRU, which can realize the two-direction analysis of samples for a long time. Based on improving the detection accuracy, the safety and robustness of the model are greatly improved. The experimental results show that for normal and adversarial samples, the algorithm can achieve accuracy of 96.73% and 95.6% respectively.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133941457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 2
A Nearest Neighbor Under-sampling Strategy for Vertical Federated Learning in Financial Domain
Denghao Li, Jianzong Wang, Lingwei Kong, Shijing Si, Zhangcheng Huang, Chenyu Huang, Jing Xiao
Machine learning techniques have been widely applied in modern financial activities, and participants in the field are aware of the importance of data privacy. Vertical federated learning (VFL) was proposed as a multi-party secure computation solution for machine learning, giving models access to the large amounts of data they require while preserving the privacy of the data holders. However, previous research mostly analyzed the algorithms under ideal conditions; data imbalance in VFL is still an open problem. In this paper, we propose a privacy-preserving sampling strategy for imbalanced VFL based on a federated graph embedding of the samples, without leaking any distribution information. The participants of the federation provide partial neighbor information for each sample during the intersection stage, and controversial negative samples are filtered out. Experiments were conducted on commonly used financial datasets and one real-world dataset. Compared with baseline under-sampling strategies for VFL, our approach obtained the leading F1 score on all tested datasets.
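As a rough illustration of the filtering idea, here is a hedged, centralized NumPy sketch of nearest-neighbor under-sampling; in actual VFL the neighbor information would be assembled from the parties' partial views during the intersection stage, and the k and threshold values here are illustrative assumptions.

```python
import numpy as np

def undersample_negatives(X, y, k=5, max_pos_frac=0.4):
    """Drop 'controversial' negative samples whose k nearest neighbors
    contain too many positives; keep all positive samples."""
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)               # ignore self-distance
    nn_idx = np.argsort(dists, axis=1)[:, :k]     # k nearest neighbors
    pos_frac = y[nn_idx].mean(axis=1)             # positive fraction among them
    keep = (y == 1) | (pos_frac <= max_pos_frac)
    return X[keep], y[keep]

# Toy usage on an imbalanced 2-D dataset (90 negatives, 10 positives):
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
X_s, y_s = undersample_negatives(X, y)
```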
{"title":"A Nearest Neighbor Under-sampling Strategy for Vertical Federated Learning in Financial Domain","authors":"Denghao Li, Jianzong Wang, Lingwei Kong, Shijing Si, Zhangcheng Huang, Chenyu Huang, Jing Xiao","doi":"10.1145/3531536.3532960","DOIUrl":"https://doi.org/10.1145/3531536.3532960","url":null,"abstract":"Machine learning techniques have been widely applied in modern financial activities. Participants in the field are aware of the importance of data privacy. Vertical federated learning (VFL) was proposed as a solution to multi-party secure computation for machine learning to obtain the huge data required by the models as well as keep the privacy of the data holders. However, previous research majorly analyzed the algorithms under ideal conditions. Data imbalance in VFL is still an open problem. In this paper, we propose a privacy-preserving sampling strategy for imbalanced VFL based on federated graph embedding of the samples, without leaking any distribution information. The participants of the federation provide partial neighbor information for each sample during the intersection stage and the controversial negative sample will be filtered out. Experiments were conducted on commonly used financial datasets and one real-world dataset. Our proposed approach obtained the leading F1 score on all tested datasets on comparing with the baseline under sampling strategies for VFL.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131156312","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 3
Domain Adaptational Text Steganalysis Based on Transductive Learning
Yiming Xue, Boya Yang, Yaqian Deng, Wanli Peng, Juan Wen
Traditional text steganalysis methods rely on a large amount of labeled data and assume that the test data are independent and identically distributed (i.i.d.) with the training data. In practice, however, the large number of text types makes it difficult to satisfy the i.i.d. condition between the training set and the test set, which leads to domain mismatch and significantly reduces detection performance. In this paper, we draw on the ideas of domain adaptation and transductive learning to design a novel text steganalysis method. In this method, we design a distributed adaptation layer and adopt three loss functions to achieve domain adaptation, so that the model can learn domain-invariant text features. The experimental results show that the method achieves better steganalysis performance under domain mismatch.
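The abstract does not name the three losses, so as an illustration of one standard objective such an adaptation layer could minimize, here is a hedged PyTorch sketch of a Maximum Mean Discrepancy (MMD) loss with an RBF kernel; the bandwidth and feature dimensions are illustrative assumptions, not the authors' configuration.

```python
import torch

def mmd_rbf(source: torch.Tensor, target: torch.Tensor,
            sigma: float = 1.0) -> torch.Tensor:
    """MMD with an RBF kernel: a standard loss for aligning source- and
    target-domain feature distributions (domain-invariant features)."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return (kernel(source, source).mean() + kernel(target, target).mean()
            - 2 * kernel(source, target).mean())

# Toy usage: features from a shared text encoder for labeled source texts
# and unlabeled target texts; add loss_da to the classification loss.
src_feat = torch.randn(64, 128)
tgt_feat = torch.randn(64, 128)
loss_da = mmd_rbf(src_feat, tgt_feat)
```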
{"title":"Domain Adaptational Text Steganalysis Based on Transductive Learning","authors":"Yiming Xue, Boya Yang, Yaqian Deng, Wanli Peng, Juan Wen","doi":"10.1145/3531536.3532963","DOIUrl":"https://doi.org/10.1145/3531536.3532963","url":null,"abstract":"Traditional text steganalysis methods rely on a large amount of labeled data. At the same time, the test data should be independent and identically distributed with the training data. However, in practice, a large number of text types make it difficult to satisfy the i.i.d condition between the training set and the test set, which leads to the problem of domain mismatch and significantly reduces the detection performance. In this paper, we draw on the ideas of domain adaptation and transductive learning to design a novel text steganalysis method. In this method, we design a distributed adaptation layer and adopt three loss functions to achieve domain adaptation, so that the model can learn the domain-invariant text features. The experimental results show that the method has better steganalysis performance in the case of domain mismatch.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123728156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 6
Session details: Session 2: Security of Machine Learning
Yassine Yousfi
{"title":"Session details: Session 2: Security of Machine Learning","authors":"Yassine Yousfi","doi":"10.1145/3545212","DOIUrl":"https://doi.org/10.1145/3545212","url":null,"abstract":"","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132518028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
Fighting the Reverse JPEG Compatibility Attack: Pick your Side
Jan Butora, P. Bas
In this work we aim to design a steganographic scheme undetectable by the Reverse JPEG Compatibility Attack (RJCA). The RJCA, while only effective for JPEG images compressed with quality factors 99 and 100, was shown to work mainly due to a change in the variance of the rounding errors after decompression of the DCT coefficients, which is induced by embedding changes incompatible with the JPEG format. One remedy that preserves the aforementioned compatibility is to utilize, during embedding, the rounding errors created during JPEG compression, but no steganographic method is known to be resilient to the RJCA without this knowledge. By inspecting the effect of embedding changes on both the variance and the mean of the decompression rounding errors, we propose a steganographic method resistant to the RJCA without any side-information. To resist the RJCA, we propose a distortion metric that makes all embedding changes within a DCT block mutually dependent, resulting in a lattice-based embedding. It then turns out to be enough to cleverly pick the side of the (binary) embedding changes, through inspection of their effect on the variance of the decompression rounding errors, and to simply use uniform costs to enforce their sparsity across DCT blocks. To increase security against detectors in the spatial (pixel) domain, we show an easy way of combining the proposed methodology with steganography designed for spatial-domain security, further improving undetectability for quality factor 99. The improvement over existing non-informed steganography is up to 40% in terms of detector accuracy.
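The quantity the RJCA monitors can be made concrete with a short NumPy sketch: dequantize an 8x8 block of DCT coefficients, apply the inverse DCT, and measure the variance of the spatial rounding errors, which embedding changes incompatible with the JPEG format inflate. This is a hedged illustration of the detection statistic, not the paper's embedding scheme; the toy quantization table assumes quality factor 100 (all steps equal to one).

```python
import numpy as np

# Orthonormal 8x8 DCT-II basis matrix used by JPEG.
C = np.array([[np.sqrt((1 if u == 0 else 2) / 8)
               * np.cos((2 * x + 1) * u * np.pi / 16)
               for x in range(8)] for u in range(8)])

def rounding_error_variance(quantized_block: np.ndarray,
                            qtable: np.ndarray) -> float:
    """Decompress one 8x8 block of quantized DCT coefficients and return
    the variance of the pixel rounding errors."""
    dct = quantized_block * qtable        # dequantize
    pixels = C.T @ dct @ C                # inverse 2-D DCT
    errors = pixels - np.round(pixels)    # rounding errors in [-0.5, 0.5)
    return float(np.var(errors))

# Toy usage with a quality-factor-100 table (all quantization steps = 1):
qtable = np.ones((8, 8))
block = np.random.default_rng(0).integers(-4, 5, (8, 8)).astype(float)
v = rounding_error_variance(block, qtable)
```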
{"title":"Fighting the Reverse JPEG Compatibility Attack: Pick your Side","authors":"Jan Butora, P. Bas","doi":"10.1145/3531536.3532955","DOIUrl":"https://doi.org/10.1145/3531536.3532955","url":null,"abstract":"In this work we aim to design a steganographic scheme undetectable by the Reverse JPEG Compatibility Attack (RJCA). The RJCA, while only effective for JPEG images compressed with quality factors 99 and 100, was shown to work mainly due to change in variance of the rounding errors after decompression of the DCT coefficients, which is induced by embedding changes incompatible with the JPEG format. One remedy to preserve the aforementioned format is utilizing during the embedding the rounding errors created during the JPEG compression, but no steganographic method is known to be resilient to RJCA without this knowledge. Inspecting the effect of embedding changes on variance and also mean of decompression rounding errors, we propose a steganographic method allowing resistance against RJCA without any side-information. To resist RJCA, we propose a distortion metric making all embedding changes within a DCT block dependent, resulting in a lattice-based embedding. Then it turns out it is enough to cleverly pick the side of the (binary) embedding changes through inspection of their effect on the variance of decompression rounding errors and simply use uniform costs in order to enforce their sparsity across DCT blocks. To increase security against detectors in the spatial (pixel) domain, we show an easy way of combining the proposed methodology with steganography designed for spatial domain security, further improving the undetectability for quality factor 99. The improvements over existing non-informed steganography are up to 40% in terms of detector's accuracy.","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133766318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 4
Looking for Signals: A Systems Security Perspective
Christopher Kruegel
Over the last 20 years, my students and I have built systems that look for signals of malice in large datasets. These datasets include network traffic, program code, web transactions, and social media posts. For many of our detection systems, we used feature engineering to model properties of the data and then leveraged different types of machine learning to find outliers or to build classifiers that could recognize unwanted inputs. In this presentation, I will cover three recent works that go beyond that basic approach. First, I will talk about cross-dataset analysis. The key idea is that we look at the same data from different vantage points. Instead of directly detecting malicious instances, the analysis compares the views across multiple angles and finds those cases where these views meaningfully differ. Second, I will cover an approach to perform meta-analysis of the outputs (events) that a detection model might produce. Sometimes, looking at a single event is insufficient to determine whether it is malicious. In such cases, it is necessary to correlate multiple events. We have built a semi-supervised analysis that leverages the context of an event to determine whether it should be treated as malicious or not. Third, I will discuss ways in which attackers might attempt to thwart our efforts to build detectors. Specifically, I will talk about a fast and efficient clean-label dataset poisoning attack. In this attack, correctly labeled poison samples are injected into the training dataset. While these poison samples look legitimate to a human observer, they contain malicious characteristics that trigger a targeted misclassification during detection (inference).
{"title":"Looking for Signals: A Systems Security Perspective","authors":"Christopher Kruegel","doi":"10.1145/3531536.3533774","DOIUrl":"https://doi.org/10.1145/3531536.3533774","url":null,"abstract":"Over the last 20 years, my students and I have built systems that look for signals of malice in large datasets. These datasets include network traffic, program code, web transactions, and social media posts. For many of our detection systems, we used feature engineering to model properties of the data and then leveraged different types of machine learning to find outliers or to build classifiers that could recognize unwanted inputs. In this presentation, I will cover three recent works that go beyond that basic approach. First, I will talk about cross-dataset analysis. The key idea is that we look at the same data from different vantage points. Instead of directly detecting malicious instances, the analysis compares the views across multiple angles and finds those cases where these views meaningfully differ. Second, I will cover an approach to perform meta-analysis of the outputs (events) that a detection model might produce. Sometimes, looking at a single event is insufficient to determine whether it is malicious. In such cases, it is necessary to correlate multiple events. We have built a semi-supervised analysis that leverages the context of an event to determine whether it should be treated as malicious or not. Third, I will discuss ways in which attackers might attempt to thwart our efforts to build detectors. Specifically, I will talk about a fast and efficient clean-label dataset poisoning attack. In this attack, correctly labeled poison samples are injected into the training dataset. While these poison samples look legitimate to a human observer, they contain malicious characteristics that trigger a targeted misclassification during detection (inference).","PeriodicalId":164949,"journal":{"name":"Proceedings of the 2022 ACM Workshop on Information Hiding and Multimedia Security","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123606230","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0