
Latest publications in arXiv - CS - Cryptography and Security

Hackphyr: A Local Fine-Tuned LLM Agent for Network Security Environments
Pub Date : 2024-09-17 DOI: arxiv-2409.11276
Maria Rigaki, Carlos Catania, Sebastian Garcia
Large Language Models (LLMs) have shown remarkable potential across various domains, including cybersecurity. Using commercial cloud-based LLMs may be undesirable due to privacy concerns, costs, and network connectivity constraints. In this paper, we present Hackphyr, a locally fine-tuned LLM to be used as a red-team agent within network security environments. Our fine-tuned 7-billion-parameter model can run on a single GPU card and achieves performance comparable with much larger and more powerful commercial models such as GPT-4. Hackphyr clearly outperforms other models, including GPT-3.5-turbo, and baselines, such as Q-learning agents, in complex, previously unseen scenarios. To achieve this performance, we generated a new task-specific cybersecurity dataset to enhance the base model's capabilities. Finally, we conducted a comprehensive analysis of the agents' behaviors that provides insights into the planning abilities and potential shortcomings of such agents, contributing to the broader understanding of LLM-based agents in cybersecurity contexts.
Citations: 0
Enhancing Security Testing Software for Systems that Cannot be Subjected to the Risks of Penetration Testing Through the Incorporation of Multi-threading and Other Capabilities
Pub Date : 2024-09-17 DOI: arxiv-2409.10893
Matthew Tassava, Cameron Kolodjski, Jordan Milbrath, Jeremy Straub
The development of a system vulnerability analysis tool (SVAT) for complex mission-critical systems (CMCS) produced the software for operation and network attack results review (SONARR). This software builds upon the Blackboard Architecture and uses its rule-fact logic to assess model networks to identify potential pathways that an attacker might take through them via the exploitation of vulnerabilities within the network. The SONARR objects and algorithm were developed previously; however, performance was insufficient for analyzing large networks. This paper describes and analyzes the performance of a multi-threaded SONARR algorithm and other enhancements which were developed to increase SONARR's performance and facilitate the analysis of large networks.
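The parallelization idea described here can be illustrated with a minimal sketch: independent workers each enumerate attack paths from a different entry point of a model network. This is not SONARR's actual code; the graph, node names, and worker count are invented, and the Blackboard Architecture's rule-fact logic is reduced to a plain adjacency map.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical attack graph: node -> nodes reachable by exploiting some vulnerability.
GRAPH = {
    "internet": {"web01"},
    "web01": {"app01", "db01"},
    "app01": {"db01"},
    "db01": set(),
}

def paths_from(start, target):
    """Enumerate simple attack paths from start to target (iterative DFS)."""
    found = []
    stack = [(start, [start])]
    while stack:
        node, path = stack.pop()
        if node == target:
            found.append(path)
            continue
        for nxt in GRAPH.get(node, ()):
            if nxt not in path:  # avoid revisiting nodes (no cycles)
                stack.append((nxt, path + [nxt]))
    return found

def parallel_paths(starts, target):
    # Each worker explores paths from one entry point concurrently.
    with ThreadPoolExecutor(max_workers=4) as pool:
        results = pool.map(lambda s: paths_from(s, target), starts)
    return [p for r in results for p in r]

print(parallel_paths(["internet", "app01"], "db01"))
```

Because each entry point's search is independent, the work divides cleanly across threads, which is the kind of speedup the multi-threaded SONARR algorithm targets for large networks.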
Citations: 0
Towards Novel Malicious Packet Recognition: A Few-Shot Learning Approach
Pub Date : 2024-09-17 DOI: arxiv-2409.11254
Kyle Stein, Andrew A. Mahyari, Guillermo Francia III, Eman El-Sheikh
As the complexity and connectivity of networks increase, the need for novel malware detection approaches becomes imperative. Traditional security defenses are becoming less effective against the advanced tactics of today's cyberattacks. Deep Packet Inspection (DPI) has emerged as a key technology in strengthening network security, offering detailed analysis of network traffic that goes beyond simple metadata analysis. DPI examines not only the packet headers but also the payload content within, offering a thorough insight into the data traversing the network. This study proposes a novel approach that leverages a large language model (LLM) and few-shot learning to accurately recognize novel, unseen malware types with few labeled samples. Our proposed approach uses an LLM pretrained on known malware types to extract the embeddings from packets. The embeddings are then used alongside few labeled samples of an unseen malware type. This technique is designed to acclimate the model to different malware representations, further enabling it to generate robust embeddings for each trained and unseen class. Following the extraction of embeddings from the LLM, few-shot learning is utilized to enhance performance with minimal labeled data. Our evaluation, which utilized two renowned datasets, focused on identifying malware types within network traffic and Internet of Things (IoT) environments. Our approach shows promising results, with an average accuracy of 86.35% and F1-score of 86.40% on different malware types across the two datasets.
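The pipeline described — LLM-derived packet embeddings plus few-shot classification — can be sketched with a prototype (nearest-centroid) classifier, a common few-shot technique. Everything below is a synthetic stand-in: the embeddings are random vectors, and the family names and dimensions are invented, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for LLM packet embeddings: 8-dim vectors per family.
def embed(center, n):
    return center + 0.1 * rng.standard_normal((n, 8))

centers = {"mirai": rng.standard_normal(8), "gafgyt": rng.standard_normal(8)}
support = {name: embed(c, 5) for name, c in centers.items()}   # 5 shots each

# One prototype per class: the mean of its few labeled embeddings.
prototypes = {name: s.mean(axis=0) for name, s in support.items()}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify(embedding):
    # Assign the query packet to the nearest prototype by cosine similarity.
    return max(prototypes, key=lambda name: cosine(embedding, prototypes[name]))

query = embed(centers["mirai"], 1)[0]   # a packet from a sparsely labeled family
print(classify(query))
```

The appeal of this scheme for novel malware is that adding an unseen family only requires averaging a handful of its embeddings into a new prototype, with no retraining of the embedding model.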
Citations: 0
Shaking the Fake: Detecting Deepfake Videos in Real Time via Active Probes
Pub Date : 2024-09-17 DOI: arxiv-2409.10889
Zhixin Xie, Jun Luo
Real-time deepfake, a type of generative AI, is capable of "creating" non-existing contents (e.g., swapping one's face with another) in a video. It has, very unfortunately, been misused to produce deepfake videos (during web conferences, video calls, and identity authentication) for malicious purposes, including financial scams and political misinformation. Deepfake detection, as the countermeasure against deepfake, has attracted considerable attention from the academic community, yet existing works typically rely on learning passive features that may perform poorly beyond seen datasets. In this paper, we propose SFake, a new real-time deepfake detection method that innovatively exploits deepfake models' inability to adapt to physical interference. Specifically, SFake actively sends probes to trigger mechanical vibrations on the smartphone, resulting in a controllable feature in the footage. Consequently, SFake determines whether the face is swapped by deepfake based on the consistency of the facial area with the probe pattern. We implement SFake, evaluate its effectiveness on a self-built dataset, and compare it with six other detection methods. The results show that SFake outperforms other detection methods with higher detection accuracy, faster processing speed, and lower memory consumption.
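The probe-consistency check at the heart of this approach can be approximated as a correlation test between the injected vibration pattern and the motion observed in the facial region. The signals below are synthetic stand-ins; the actual system extracts motion traces from real video footage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Known probe: the vibration pattern the detector itself triggers on the phone.
probe = np.sin(np.linspace(0, 8 * np.pi, 200))

# Stand-in motion traces of the facial region extracted from the footage:
real_face = probe + 0.2 * rng.standard_normal(200)   # genuine face tracks the shake
fake_face = 0.2 * rng.standard_normal(200)           # swapped face: no coupling

def consistency(trace, pattern):
    # Normalized correlation between observed motion and the injected probe.
    t = (trace - trace.mean()) / trace.std()
    p = (pattern - pattern.mean()) / pattern.std()
    return float(np.mean(t * p))

print(round(consistency(real_face, probe), 2))  # high -> likely genuine
print(round(consistency(fake_face, probe), 2))  # low  -> likely deepfake
```

Because the probe is chosen by the detector, the attacker's generative model would have to reproduce an unpredictable physical perturbation in real time, which is exactly the inability SFake exploits.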
Citations: 0
Prompt Obfuscation for Large Language Models
Pub Date : 2024-09-17 DOI: arxiv-2409.11026
David Pape, Thorsten Eisenhofer, Lea Schönherr
System prompts that include detailed instructions to describe the task performed by the underlying large language model (LLM) can easily transform foundation models into tools and services with minimal overhead. Because of their crucial impact on the utility, they are often considered intellectual property, similar to the code of a software product. However, extracting system prompts is easily possible by using prompt injection. As of today, there is no effective countermeasure to prevent the stealing of system prompts, and all safeguarding efforts could be evaded with carefully crafted prompt injections that bypass all protection mechanisms. In this work, we propose an alternative to conventional system prompts. We introduce prompt obfuscation to prevent the extraction of the system prompt while maintaining the utility of the system itself with only little overhead. The core idea is to find a representation of the original system prompt that leads to the same functionality, while the obfuscated system prompt does not contain any information that allows conclusions to be drawn about the original system prompt. We implement an optimization-based method to find an obfuscated prompt representation while maintaining the functionality. To evaluate our approach, we investigate eight different metrics to compare the performance of a system using the original and the obfuscated system prompts, and we show that the obfuscated version is consistently on par with the original one. We further perform three different deobfuscation attacks and show that with access to the obfuscated prompt and the LLM itself, we are not able to consistently extract meaningful information. Overall, we show that prompt obfuscation can be an effective method to protect intellectual property while maintaining the same utility as the original system prompt.
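The core optimization idea — searching for a different prompt representation that induces the same behavior — can be sketched on a toy differentiable "model". Everything here (the tanh model, the dimensions, the learning rate) is an invented stand-in for the frozen LLM, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen "model": its behavior is fully determined by tanh(W @ prompt).
W = 0.2 * rng.standard_normal((4, 16))
def model(p):
    return np.tanh(W @ p)

original = rng.standard_normal(16)      # the secret "system prompt"
target = model(original)                # behavior that must be preserved

# Gradient descent from a random start: match the behavior, not the prompt.
obf = rng.standard_normal(16)
for _ in range(2000):
    out = model(obf)
    err = out - target                  # L2 loss on the model's output
    grad = W.T @ (err * (1 - out**2))   # chain rule through tanh
    obf -= 0.1 * grad

print(np.linalg.norm(model(obf) - target))  # near 0: same functionality
print(np.linalg.norm(obf - original))       # large: different representation
```

Because the model output has fewer dimensions than the prompt, many prompts yield identical behavior; the optimizer simply lands on one far from the original, which is the intuition behind an obfuscated prompt that preserves utility while revealing little about the secret.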
Citations: 0
Execution-time opacity control for timed automata
Pub Date : 2024-09-16 DOI: arxiv-2409.10336
Étienne André, Marie Duflot, Laetitia Laversa, Engel Lefaucheux
Timing leaks in timed automata (TA) can occur whenever an attacker is able to deduce a secret by observing some timed behavior. In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time. It can be decided whether a TA is opaque in this setting. In this work, we tackle control, and show that we are able to decide whether a TA can be controlled at runtime to ensure opacity. Our method is constructive, in the sense that we can exhibit such a controller. We also address the case when the attacker cannot have infinite precision in its observations.
Citations: 0
Assessing the Impact of Sanctions in the Crypto Ecosystem: Effective Measures or Ineffective Deterrents?
Pub Date : 2024-09-16 DOI: arxiv-2409.10031
Francesco Zola, Jon Ander Medina, Raul Orduna
Regulatory authorities aim to tackle illegal activities by targeting the economic incentives that drive such behaviour. This is typically achieved through the implementation of financial sanctions against the entities involved in the crimes. However, the rise of cryptocurrencies has presented new challenges, allowing entities to evade these sanctions and continue criminal operations. Consequently, enforcement measures have been expanded to include crypto asset information of sanctioned entities. Yet, due to the nature of the crypto ecosystem, blocking or freezing these digital assets is harder and, in some cases, such as with Bitcoin, unfeasible. Therefore, sanctions serve merely as deterrents. For this reason, in this study, we aim to assess the impact of these sanctions on entities' crypto activities, particularly those related to the Bitcoin ecosystem. Our objective is to shed light on the validity and effectiveness (or lack thereof) of such countermeasures. Specifically, we analyse the transactions and the amount of USD moved by punished entities that possess crypto addresses after being sanctioned by the authority agency. Results indicate that while sanctions have been effective for half of the examined entities, the others continue to move funds through sanctioned addresses. Furthermore, punished entities demonstrate a preference for utilising rapid exchange services to convert their funds, rather than employing dedicated money laundering services. To the best of our knowledge, this study offers valuable insights into how entities use crypto assets to circumvent sanctions.
Citations: 0
CaBaGe: Data-Free Model Extraction using ClAss BAlanced Generator Ensemble
Pub Date : 2024-09-16 DOI: arxiv-2409.10643
Jonathan Rosenthal, Shanchao Liang, Kevin Zhang, Lin Tan
Machine Learning as a Service (MLaaS) is often provided as a pay-per-query, black-box system to clients. Such a black-box approach not only hinders open replication, validation, and interpretation of model results, but also makes it harder for white-hat researchers to identify vulnerabilities in the MLaaS systems. Model extraction is a promising technique to address these challenges by reverse-engineering black-box models. Since training data is typically unavailable for MLaaS models, this paper focuses on the realistic version of it: data-free model extraction. We propose a data-free model extraction approach, CaBaGe, to achieve higher model extraction accuracy with a small number of queries. Our innovations include (1) a novel experience replay for focusing on difficult training samples; (2) an ensemble of generators for steadily producing diverse synthetic data; and (3) a selective filtering process for querying the victim model with harder, more balanced samples. In addition, we create a more realistic setting, for the first time, where the attacker has no knowledge of the number of classes in the victim training data, and create a solution to learn the number of classes on the fly. Our evaluation shows that CaBaGe outperforms existing techniques on seven datasets -- MNIST, FMNIST, SVHN, CIFAR-10, CIFAR-100, ImageNet-subset, and Tiny ImageNet -- with an accuracy improvement of the extracted models by up to 43.13%. Furthermore, the number of queries required to extract a clone model matching the final accuracy of prior work is reduced by up to 75.7%.
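The query-then-distill loop that underlies data-free extraction can be shown in miniature. For brevity, this sketch queries the victim with plain random noise instead of CaBaGe's learned, class-balanced generator ensemble, and both victim and student are linear stand-ins rather than deep networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Black-box "victim" classifier: returns only hard labels, as a pay-per-query
# MLaaS endpoint would.
W_victim = rng.standard_normal((3, 5))
def victim(x):
    return int(np.argmax(W_victim @ x))

# Data-free setting: no real training data, so query on synthetic inputs.
# (Plain Gaussian noise here; CaBaGe instead trains an ensemble of generators
# to make the queries diverse and class-balanced.)
X = rng.standard_normal((2000, 5))
y = np.array([victim(x) for x in X])

# Distill a student (softmax regression) on the stolen labels.
W_student = np.zeros((3, 5))
for _ in range(1000):
    logits = X @ W_student.T
    logits -= logits.max(axis=1, keepdims=True)
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(len(y)), y] -= 1.0           # d(cross-entropy)/d(logits)
    W_student -= 1.0 * (probs.T @ X) / len(y)

# Agreement between the clone and the victim on fresh queries.
test_X = rng.standard_normal((500, 5))
agreement = float(np.mean([victim(x) == int(np.argmax(W_student @ x))
                           for x in test_X]))
print(f"clone/victim agreement: {agreement:.2f}")
```

The gap between random-noise queries and informative ones is exactly where the generator ensemble and selective filtering earn their query-budget savings.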
Citations: 0
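The extraction setting CaBaGe targets, cloning a pay-per-query black box from synthetic queries, can be illustrated with a toy loop. This is a minimal sketch under heavy simplifications: a linear victim, Gaussian noise in place of the paper's generator ensemble, and a crude random-batch replay instead of its hard-sample experience replay. All names and shapes below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pay-per-query "victim": a fixed linear classifier the attacker
# can only query for hard labels (no gradients, no training data).
W_victim = rng.normal(size=(4, 3))

def victim_query(x):
    return np.argmax(x @ W_victim, axis=1)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

W_clone = np.zeros((4, 3))   # the attacker's clone model
replay = []                  # replay buffer of past (query, label) batches

for step in range(300):
    x = rng.normal(size=(32, 4))   # synthetic queries (stand-in for a generator)
    y = victim_query(x)            # spend query budget on the black box
    replay.append((x, y))
    xr, yr = replay[rng.integers(len(replay))]   # revisit a stored batch
    for xb, yb in ((x, y), (xr, yr)):
        p = softmax(xb @ W_clone)
        grad = xb.T @ (p - np.eye(3)[yb]) / len(xb)  # cross-entropy gradient
        W_clone -= 0.1 * grad

# Fidelity: how often the clone agrees with the victim on fresh inputs.
x_test = rng.normal(size=(1000, 4))
agreement = float(np.mean(np.argmax(x_test @ W_clone, axis=1) == victim_query(x_test)))
print(f"clone/victim agreement: {agreement:.2f}")
```

With enough queries the clone's decision regions approach the victim's; the paper's contributions are precisely about reaching high agreement with far fewer queries than naive loops like this one.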
FreeMark: A Non-Invasive White-Box Watermarking for Deep Neural Networks
Pub Date : 2024-09-16 DOI: arxiv-2409.09996
Yuzhang Chen, Jiangnan Zhu, Yujie Gu, Minoru Kuribayashi, Kouichi Sakurai
Deep neural networks (DNNs) have achieved significant success in real-world applications. However, safeguarding their intellectual property (IP) remains extremely challenging. Existing DNN watermarking for IP protection often requires modifying DNN models, which reduces model performance and limits their practicality. This paper introduces FreeMark, a novel DNN watermarking framework that leverages cryptographic principles without altering the original host DNN model, thereby avoiding any reduction in model performance. Unlike traditional DNN watermarking methods, FreeMark innovatively generates secret keys from a pre-generated watermark vector and the host model using gradient descent. These secret keys, used to extract the watermark from the model's activation values, are securely stored with a trusted third party, enabling reliable watermark extraction from suspect models. Extensive experiments demonstrate that FreeMark effectively resists various watermark removal attacks while maintaining high watermark capacity.
{"title":"FreeMark: A Non-Invasive White-Box Watermarking for Deep Neural Networks","authors":"Yuzhang Chen, Jiangnan Zhu, Yujie Gu, Minoru Kuribayashi, Kouichi Sakurai","doi":"arxiv-2409.09996","DOIUrl":"https://doi.org/arxiv-2409.09996","url":null,"abstract":"Deep neural networks (DNNs) have achieved significant success in real-world applications. However, safeguarding their intellectual property (IP) remains extremely challenging. Existing DNN watermarking for IP protection often require modifying DNN models, which reduces model performance and limits their practicality. This paper introduces FreeMark, a novel DNN watermarking framework that leverages cryptographic principles without altering the original host DNN model, thereby avoiding any reduction in model performance. Unlike traditional DNN watermarking methods, FreeMark innovatively generates secret keys from a pre-generated watermark vector and the host model using gradient descent. These secret keys, used to extract watermark from the model's activation values, are securely stored with a trusted third party, enabling reliable watermark extraction from suspect models. Extensive experiments demonstrate that FreeMark effectively resists various watermark removal attacks while maintaining high watermark capacity.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"47 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
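The non-invasive idea, deriving a secret key from the unmodified host model's activations so the watermark can later be checked without embedding anything into the weights, can be sketched as follows. This is a hypothetical toy, not FreeMark's construction: it fits the key with a least-squares solve where the paper uses gradient descent, and all model shapes, trigger inputs, and names are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical host model (never modified): one ReLU layer, 10 -> 16.
W_host = rng.normal(size=(10, 16))
def activations(x):
    return np.maximum(0.0, x @ W_host)

# Owner's 8-bit watermark in {-1, +1}, registered with a trusted third party.
watermark = rng.integers(0, 2, size=8) * 2 - 1

# Secret key = trigger inputs plus a projection fitted to the model's
# *existing* activations (least squares here; the paper uses gradient descent).
triggers = rng.normal(size=(8, 10))
acts = activations(triggers)                                   # 8 x 16
key = np.linalg.lstsq(acts, watermark.astype(float), rcond=None)[0]

def extract(model_fn):
    # Run the triggers through a (possibly suspect) model and read off bits.
    return np.sign(model_fn(triggers) @ key).astype(int)

recovered = extract(activations)
print(f"watermark recovered: {bool((recovered == watermark).all())}")
```

Because the key is fitted to the host model as-is, nothing is embedded into the weights, which is the sense in which such a scheme is non-invasive; model performance is untouched by construction.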
PrePaMS: Privacy-Preserving Participant Management System for Studies with Rewards and Prerequisites
Pub Date : 2024-09-16 DOI: arxiv-2409.10192
Echo Meißner, Frank Kargl, Benjamin Erb, Felix Engelmann
Taking part in surveys, experiments, and studies is often compensated by rewards to increase the number of participants and encourage attendance. While privacy requirements are usually considered for participation, privacy aspects of the reward procedure are mostly ignored. To this end, we introduce PrePaMS, an efficient participation management system that supports prerequisite checks and participation rewards in a privacy-preserving way. Our system organizes participations with potential (dis-)qualifying dependencies and enables secure reward payoffs. By leveraging a set of proven cryptographic primitives and mechanisms such as anonymous credentials and zero-knowledge proofs, participations are protected so that service providers and organizers cannot derive the identity of participants even within the reward process. In this paper, we have designed and implemented a prototype of PrePaMS to show its effectiveness and evaluated its performance under realistic workloads. PrePaMS covers the information of whether subjects have participated in surveys, experiments, or studies.
{"title":"PrePaMS: Privacy-Preserving Participant Management System for Studies with Rewards and Prerequisites","authors":"Echo Meißner, Frank Kargl, Benjamin Erb, Felix Engelmann","doi":"arxiv-2409.10192","DOIUrl":"https://doi.org/arxiv-2409.10192","url":null,"abstract":"Taking part in surveys, experiments, and studies is often compensated by rewards to increase the number of participants and encourage attendance. While privacy requirements are usually considered for participation, privacy aspects of the reward procedure are mostly ignored. To this end, we introduce PrePaMS, an efficient participation management system that supports prerequisite checks and participation rewards in a privacy-preserving way. Our system organizes participations with potential (dis-)qualifying dependencies and enables secure reward payoffs. By leveraging a set of proven cryptographic primitives and mechanisms such as anonymous credentials and zero-knowledge proofs, participations are protected so that service providers and organizers cannot derive the identity of participants even within the reward process. In this paper, we have designed and implemented a prototype of PrePaMS to show its effectiveness and evaluated its performance under realistic workloads. PrePaMS covers the information whether subjects have participated in surveys, experiments, or studies. When combined with other secure solutions for the actual data collection within these events, PrePaMS can represent a cornerstone for more privacy-preserving empirical research.","PeriodicalId":501332,"journal":{"name":"arXiv - CS - Cryptography and Security","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142261720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
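A zero-knowledge proof of knowledge, the kind of primitive anonymous-credential systems like PrePaMS build on, can be illustrated with a toy Schnorr identification made non-interactive via the Fiat-Shamir heuristic. The parameters p=23, q=11, g=2 are illustrative only and offer no real security, and this is a generic textbook primitive, not the PrePaMS protocol itself.

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of a secret exponent x with y = g^x mod p.
# g = 2 generates a subgroup of prime order q = 11 modulo p = 23.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1   # participant's secret (e.g., credential key)
y = pow(g, x, p)                   # public value registered with the organizer

def challenge(t):
    # Fiat-Shamir: derive the challenge by hashing the commitment.
    # (Real systems also hash the public key, parameters, and a message.)
    return int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % q

def prove(x):
    r = secrets.randbelow(q - 1) + 1
    t = pow(g, r, p)           # commitment
    s = (r + challenge(t) * x) % q   # response
    return t, s

def verify(y, t, s):
    # Accept iff g^s == t * y^c (mod p).
    return pow(g, s, p) == (t * pow(y, challenge(t), p)) % p

t, s = prove(x)
print(f"proof accepted: {verify(y, t, s)}")
```

The check g^s = t * y^c (mod p) holds exactly when the prover knows x, since g^s = g^(r + c*x) = g^r * (g^x)^c, while the pair (t, s) reveals nothing usable about x; PrePaMS composes proofs of this shape so organizers can verify prerequisites and pay rewards without learning identities.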
Copyright © 2023 Book学术 All rights reserved.
ghs 京公网安备 11010802042870号 京ICP备2023020795号-1