"Web Application Security: A Pragmatic Exposé" (Clement C. Aladi). Digital Threats: Research and Practice, 2024-02-07. https://doi.org/10.1145/3644394
Many individuals, organizations, and industries rely on web applications for the daily operations of their businesses. With the increasing deployment of and dependence on these applications, significant attention has been directed towards developing more accurate and secure mechanisms to safeguard them from malicious web-based attacks. The slow adoption of the latest security protocols, coupled with the use of inaccurate and inadequately tested security measures, has hindered the establishment of efficient and effective protections for web applications. This paper reviews recent research and its recommendations for web security over the last four years and identifies code injection as one of the most prevalent web-based attacks of that period. The recommendations presented here offer a practical guide, enabling individuals and security personnel across industries and organizations to implement tested and proven security measures for web applications. The paper also serves as a roadmap for security developers, aiding them in creating more accurate and quantifiable measures and mechanisms for web security.
"A Machine Learning and Optimization Framework for Efficient Alert Management in a Cybersecurity Operations Center" (Jalal Ghadermazi, Ankit Shah, Sushil Jajodia). Digital Threats: Research and Practice, 2024-02-05. https://doi.org/10.1145/3644393
Cybersecurity operations centers (CSOCs) protect organizations by monitoring network traffic and detecting suspicious activities in the form of alerts. The security response team within a CSOC is responsible for investigating and mitigating these alerts. However, an imbalance between alert volume and available analysts creates a backlog, putting the network at risk of exploitation. Recent research has focused on improving the alert management process by triaging alerts, optimizing analyst scheduling, and reducing analyst workload through systematic discarding of alerts. However, these works overlook the delays introduced into alert investigations by several factors: (i) false or benign alerts contributing to the backlog; (ii) analysts experiencing cognitive burden from repeatedly reviewing unrelated alerts; and (iii) analysts being assigned to alerts that do not match their expertise. We propose a novel framework that accounts for these factors and uses machine learning and mathematical optimization methods to dynamically improve throughput during work shifts. The framework achieves efficiency by automating the identification and removal of a portion of benign alerts, forming clusters of similar alerts, and assigning analysts to alerts with matching attributes. Experiments conducted using real-world CSOC data demonstrate a 60.16% reduction in the alert backlog for an 8-hour work shift compared to the currently employed approach.
{"title":"Introduction to the Special Issue on Information Sharing","authors":"Angel Hueca, Sharon Mudd, Timothy Shimeall","doi":"10.1145/3635391","DOIUrl":"https://doi.org/10.1145/3635391","url":null,"abstract":"","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"35 5","pages":"1 - 2"},"PeriodicalIF":0.0,"publicationDate":"2024-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139527092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"A Survey on Network Attack Surface Mapping" (D. Everson, Long Cheng). Digital Threats: Research and Practice, 2024-01-10. https://doi.org/10.1145/3640019
Network services are processes running on a system with network exposure. A key activity for any network defender, penetration tester, or red team is network attack surface mapping, the act of detecting and categorizing those services through which a threat actor could attempt malicious activity. Many tools have arisen over the years to probe, identify, and classify these services for information and vulnerabilities. In this paper, we survey network attack surface mapping by reviewing several prominent tools and their features and then discussing recent works reflecting unique research using those tools. We conclude by covering several promising directions for future research.
"Multi-SpacePhish: Extending the Evasion-space of Adversarial Attacks against Phishing Website Detectors using Machine Learning" (Ying Yuan, Giovanni Apruzzese, Mauro Conti). Digital Threats: Research and Practice, 2023-12-20. https://doi.org/10.1145/3638253
Existing literature on adversarial Machine Learning (ML) focuses either on showing attacks that break every ML model or on defenses that withstand most attacks. Unfortunately, little consideration is given to the actual feasibility of the attack or the defense. Moreover, adversarial samples are often crafted in the "feature-space", making the corresponding evaluations of questionable value. Simply put, the current situation does not permit an estimate of the actual threat posed by adversarial attacks, leading to a lack of secure ML systems. We aim to clarify this confusion in this paper. Considering the application of ML for Phishing Website Detection (PWD), we formalize the "evasion-space" in which an adversarial perturbation can be introduced to fool an ML-PWD, demonstrating that even perturbations in the "feature-space" are useful. We then propose a realistic threat model describing evasion attacks against ML-PWD that are cheap to stage and hence intrinsically more attractive for real phishers. After that, we perform the first statistically validated assessment of state-of-the-art ML-PWD against 12 evasion attacks. Our evaluation shows (i) the true efficacy of evasion attempts that are more likely to occur and (ii) the impact of perturbations crafted in different evasion-spaces. Our realistic evasion attempts induce a statistically significant degradation (3-10% at p < 0.05), and their low cost makes them a subtle threat. Notably, however, some ML-PWD are immune to our most realistic attacks (p = 0.22). Finally, as an additional contribution of this journal publication, we are the first to propose and empirically evaluate the intriguing case in which an attacker introduces perturbations in multiple evasion-spaces at the same time. These new results show that simultaneously applying perturbations in the problem- and feature-space can cause the detection rate to drop from 0.95 to 0. Our contribution paves the way for a much-needed reassessment of adversarial attacks against ML systems for cybersecurity.
"Spacelord: Private and Secure Smart Space Sharing" (Yechan Bae, Sarbartha Banerjee, Sangho Lee, Marcus Peinado). Digital Threats: Research and Practice, 2023-12-19. https://doi.org/10.1145/3637879
Shared spaces such as vacation rentals and meeting rooms are increasingly equipped with smart devices such as cameras, door locks, and many other sensors. However, sharing such devices poses privacy and security problems, as there is typically no clear control transfer between owners and users. In this paper, we propose Spacelord, a system to time-share the smart devices contained in a shared space privately and securely while allowing users to configure them. When a user stays at a space, Spacelord ensures that the smart devices contained in it run code and configurations the user trusts while removing pre-installed code and configurations. When the user leaves the space, Spacelord reverts any changes the user has introduced to the smart devices, deletes any remaining private data, and lets the owner take back control over the devices. We evaluate Spacelord for two realistic space-sharing cases, a smart home and a coworking meeting room, and observe smart space provisioning delays of ∼82 s across three different platforms. Moreover, the average runtime overhead of our system varies from 7.8% to 11.8% across different hub hardware running native applications.
"Introduction to the Special Issue on Ransomware" (Budi Arief, Lena Connolly, Julio Hernandez-Castro, Allan Liska, Peter Y A. Ryan). Digital Threats: Research and Practice, 2023-11-17. https://doi.org/10.1145/3629999
"Unveiling the Threat: Investigating Distributed and Centralized Backdoor Attacks in Federated Graph Neural Networks" (Jing Xu, Stefanos Koffas, S. Picek). Digital Threats: Research and Practice, 2023-11-16. https://doi.org/10.1145/3633206
Graph Neural Networks (GNNs) have gained significant popularity as powerful deep learning methods for processing graph data. However, centralized GNNs face challenges in data-sensitive scenarios due to privacy concerns and regulatory restrictions. Federated learning (FL) has emerged as a promising technology that enables collaborative training of a shared global model while preserving privacy. While FL has been applied to train GNNs, no prior research focuses on the robustness of federated GNNs against backdoor attacks. This paper bridges that gap by investigating two types of backdoor attacks in federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA). Through extensive experiments, we demonstrate that DBA exhibits a higher success rate than CBA across various scenarios. To further explore the characteristics of these backdoor attacks in federated GNNs, we evaluate their performance under different conditions, including varying numbers of clients, trigger sizes, poisoning intensities, and trigger densities. Additionally, we explore the resilience of DBA and CBA against two defense mechanisms. Our findings reveal that neither defense can eliminate DBA or CBA without affecting the original task, which highlights the need for tailored defenses to mitigate the novel threat of backdoor attacks in federated GNNs.
"IronNetInjector: Weaponizing .NET Dynamic Language Runtime Engines" (Anthony Rose, S. Graham, Jacob Krasnov). Digital Threats: Research and Practice, 2023-10-06. https://doi.org/10.1145/3603506
As adversaries evolve their Tactics, Techniques, and Procedures (TTPs) to stay ahead of defenders, Microsoft's .NET Framework has emerged as a common component in the tradecraft of many contemporary Advanced Persistent Threats (APTs), whether through PowerShell or C#. Because of .NET's ease of use and availability on every recent Windows system, it is at the forefront of modern TTPs and is a primary means of exploitation. This article considers the .NET Dynamic Language Runtime as an attack vector and examines how APTs have utilized it for offensive purposes. The technique under scrutiny is Bring Your Own Interpreter (BYOI), the ability of developers to embed dynamic languages into .NET using an engine. The focus of this analysis is an adversarial use case in which APT Turla used BYOI as an evasion technique through an IronPython .NET injector named IronNetInjector. This research analyzes IronNetInjector and how it was used to reflectively load .NET assemblies. It also evaluates the role of the Antimalware Scan Interface (AMSI) in defending Windows. Because AMSI is at the core of Windows malware mitigation, this article further evaluates the memory patching bypass technique by demonstrating a novel AMSI bypass method in IronPython using Platform Invoke (P/Invoke).