Lisa Rzepka, Jennifer R. Ottmann, Felix Freiling, Harald Baier
Main memory contains valuable information for criminal investigations, e.g., process information or keys for disk encryption. Taking snapshots of memory is therefore common practice during a digital forensic examination. Inconsistencies in such memory dumps can, however, hamper their analysis. In this paper, we perform a systematic assessment of causal inconsistencies in memory dumps taken on a Windows 10 machine using the kernel-level acquisition tool WinPmem. We use two approaches to measure the quantity of inconsistencies in Windows 10: (1) causal inconsistencies within self-injected memory data structures using a known methodology transferred from the Linux operating system, and (2) inconsistencies in the memory management data structures of the Windows kernel using a novel measurement technique based on properties of the virtual address descriptor (VAD) tree. Our evaluation is based on a dataset of more than 180 memory dumps. As a central result, both types of inconsistency measurement reveal that a high number of inconsistencies is the norm rather than the exception. We also correlate workload and execution time of the memory acquisition tool to the number of inconsistencies in the respective memory snapshot. By controlling these factors it is possible to (somewhat) control the level of inconsistencies in Windows memory dumps.
{"title":"Causal Inconsistencies are Normal in Windows Memory Dumps (too)","authors":"Lisa Rzepka, Jennifer R. Ottmann, Felix Freiling, Harald Baier","doi":"10.1145/3680293","DOIUrl":"https://doi.org/10.1145/3680293","url":null,"abstract":"Main memory contains valuable information for criminal investigations, e.g., process information or keys for disk encryption. Taking snapshots of memory is therefore common practice during a digital forensic examination. Inconsistencies in such memory dumps can, however, hamper their analysis. In this paper, we perform a systematic assessment of causal inconsistencies in memory dumps taken on a Windows 10 machine using the kernel-level acquisition tool WinPmem. We use two approaches to measure the quantity of inconsistencies in Windows 10: (1) causal inconsistencies within self-injected memory data structures using a known methodology transferred from the Linux operating system, and (2) inconsistencies in the memory management data structures of the Windows kernel using a novel measurement technique based on properties of the virtual address descriptor (VAD) tree. Our evaluation is based on a dataset of more than 180 memory dumps. As a central result, both types of inconsistency measurement reveal that a high number of inconsistencies is the norm rather than the exception. We also correlate workload and execution time of the memory acquisition tool to the number of inconsistencies in the respective memory snapshot. By controlling these factors it is possible to (somewhat) control the level of inconsistencies in Windows memory dumps.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141810664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Remote forensic investigations, i.e., the covert lawful infiltration of computing devices, are a generic method to acquire evidence in the presence of strong defensive security. A precondition for such investigations is the ability to execute software with sufficient privileges on target devices. The standard way to achieve such remote access is by exploiting yet unpatched software vulnerabilities. This in turn puts other users at risk, resulting in a dilemma for state authorities that aim to protect the general public (by patching such vulnerabilities) and those that need remote access in criminal investigations. As a partial solution, we present InvesTEE, a framework that enables privileged remote forensic access without using privileged exploits. The idea is to separate the remote forensic software into two parts: a Forensic Software, designed by law enforcement agencies to execute investigative actions, and a (privileged) Control Software, provided by the device vendor to selectively grant privileges to the Forensic Software based on a court warrant within the rules of criminal procedure. By leveraging trusted execution environments to run the Control Software in a tamper-proof manner, we enable trustworthy deployment and operation of remote forensic software. We provide a proof-of-concept implementation of InvesTEE based on ARMv8-A TrustZone.
{"title":"InvesTEE: A TEE-supported Framework for Lawful Remote Forensic Investigations","authors":"Christian Lindenmeier, Jan Gruber, Felix Freiling","doi":"10.1145/3680294","DOIUrl":"https://doi.org/10.1145/3680294","url":null,"abstract":"Remote forensic investigations, i.e., the covert lawful infiltration of computing devices, are a generic method to acquire evidence in the presence of strong defensive security. A precondition for such investigations is the ability to execute software with sufficient privileges on target devices. The standard way to achieve such remote access is by exploiting yet unpatched software vulnerabilities. This in turn puts other users at risk, resulting in a dilemma for state authorities that aim to protect the general public (by patching such vulnerabilities) and those that need remote access in criminal investigations. As a partial solution, we present a framework that enables privileged remote forensic access without using privileged exploits. The idea is to separate the remote forensic software into two parts: a Forensic Software, designed by law enforcement agencies to execute investigative actions, and a (privileged) Control Software, provided by the device vendor to selectively grant privileges to the Forensic Software based on a court warrant within the rules of criminal procedure. By leveraging trusted execution environments for running the Control Software in a tamper-proof manner, we enable trustful deployment and operation of remote forensic software. We provide a proof-of-concept implementation of InvesTEE that is based on ARMv8-A TrustZone.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"24 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141816751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The significant rise in digital threats and attacks has led to an increase in the use of cyber insurance as a risk treatment method intended to support organisations in the event of a breach. Insurance providers are set up to assume such residual risk, but they often require organisations to implement certain security controls a priori to reduce their exposure. We examine the assertion that cyber insurance promotes cyber security best practice by conducting a critical examination of cyber insurance application forms to determine how well they align with ISO 27001, the NIST Cybersecurity Framework and the UK’s Cyber Essentials security standards. We achieve this by mapping questions and requirements expressed in insurance forms to the security controls covered in each of the standards. This allows us to identify security controls and standards that are considered – and likely most valued – by insurers and those that are neglected. We find that while there is some reasonable coverage across forms, there is an underrepresentation of best practice standards and controls generally, and particularly in some control areas (e.g., procedural/governance controls, incident response and recovery).
{"title":"Does Cyber Insurance promote Cyber Security Best Practice? An Analysis based on Insurance Application Forms","authors":"Rodney Adriko, Jason R.C. Nurse","doi":"10.1145/3676283","DOIUrl":"https://doi.org/10.1145/3676283","url":null,"abstract":"The significant rise in digital threats and attacks has led to an increase in the use of cyber insurance as a risk treatment method intended to support organisations in the event of a breach. Insurance providers are set up to assume such residual risk, but they often require organisations to implement certain security controls a priori to reduce their exposure. We examine the assertion that cyber insurance promotes cyber security best practice by conducting a critical examination of cyber insurance application forms to determine how well they align with ISO 27001, the NIST Cybersecurity Framework and the UK’s Cyber Essentials security standards. We achieve this by mapping questions and requirements expressed in insurance forms to the security controls covered in each of the standards. This allows us to identify security controls and standards that are considered – and likely most valued – by insurers and those that are neglected. We find that while there is some reasonable coverage across forms, there is an underrepresentation of best practice standards and controls generally, and particularly in some control areas (e.g., procedural/governance controls, incident response and recovery).","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":" 31","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141678293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Emirhan Böge, Murat Bilgehan Ertan, Halit Alptekin, Orçun Çetin
In this paper, we leverage natural language processing and machine learning algorithms to profile threat actors based on their behavioral signatures and thereby establish identification for soft attribution. Our unique dataset comprises various actors and the commands they executed, a significant proportion of them using the Cobalt Strike framework, collected between August 2020 and October 2022. We implemented a hybrid deep learning structure combining transformers and convolutional neural networks to benefit from both global and local contextual information within the sequences of commands, which provides a detailed view of the behavioral patterns of threat actors. We evaluated our hybrid architecture against pre-trained transformer-based models such as BERT, RoBERTa, SecureBERT, and DarkBERT on our high-count, medium-count, and low-count datasets. The hybrid architecture achieved an F1-score of 95.11% and an accuracy of 95.13% on the high-count dataset, an F1-score of 93.60% and an accuracy of 93.77% on the medium-count dataset, and an F1-score of 88.95% and an accuracy of 89.25% on the low-count dataset. Our approach has the potential to substantially reduce the workload of incident response experts who process collected cybersecurity data to identify patterns.
{"title":"Unveiling Cyber Threat Actors: A Hybrid Deep Learning Approach for Behavior-based Attribution","authors":"Emirhan Böge, Murat Bilgehan Ertan, Halit Alptekin, Orçun Çetin","doi":"10.1145/3676284","DOIUrl":"https://doi.org/10.1145/3676284","url":null,"abstract":"In this paper, we leverage natural language processing and machine learning algorithms to profile threat actors based on their behavioral signatures to establish identification for soft attribution. Our unique dataset comprises various actors and the commands they have executed, with a significant proportion using the Cobalt Strike framework in August 2020-October 2022. We implemented a hybrid deep learning structure combining transformers and convolutional neural networks to benefit global and local contextual information within the sequence of commands, which provides a detailed view of the behavioral patterns of threat actors. We evaluated our hybrid architecture against pre-trained transformer-based models such as BERT, RoBERTa, SecureBERT, and DarkBERT with our high-count, medium-count, and low-count datasets. Hybrid architecture has achieved F1-score of 95.11% and an accuracy score of 95.13% on the high-count dataset, F1-score of 93.60% and accuracy score of 93.77% on the medium-count dataset, and F1-score of 88.95% and accuracy score of 89.25% on the low-count dataset. Our approach has the potential to substantially reduce the workload of incident response experts who are processing the collected cybersecurity data to identify patterns.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"5 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-07-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141684852","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Benjamin E. Bagozzi, Rajni Goel, Brunilda Lugo-De-Fabritz, Kelly Knickmeier-Cummings, Karthik Balasubramanian
Recent years have seen advancements in machine learning methods for the detection of misinformation on social media. Yet, these methods still often ignore or improperly incorporate key information on the topical-tactics used by misinformation agents. To what extent does this affect the (non)detection of misinformation? We investigate how supervised machine learning approaches can be enhanced to better detect misinformation on social media. Our aim in this regard is to enhance the abilities of academics and practitioners to understand, anticipate, and preempt the sources and impacts of misinformation on the web. To do so, this paper leverages a large sample of verified Russian state-based misinformation tweets and non-misinformation tweets from Twitter. It first assesses standard supervised approaches for detecting Twitter-based misinformation both quantitatively (with respect to classification) and qualitatively (with respect to topical-tactics of Russian misinformation). It then presents a novel framework for integrating topical-tactics of misinformation into standard ‘bag of words’-oriented classification approaches in a manner that avoids data leakage and related measurement challenges. We find that doing so substantially improves the out-of-sample detection of Russian state-based misinformation tweets.
{"title":"A Framework for Enhancing Social Media Misinformation Detection with Topical-Tactics","authors":"Benjamin E. Bagozzi, Rajni Goel, Brunilda Lugo-De-Fabritz, Kelly Knickmeier-Cummings, Karthik Balasubramanian","doi":"10.1145/3670694","DOIUrl":"https://doi.org/10.1145/3670694","url":null,"abstract":"Recent years have seen advancements in machine learning methods for the detection of misinformation on social media. Yet, these methods still often ignore or improperly incorporate key information on the topical-tactics used by misinformation agents. To what extent does this affect the (non)detection of misinformation? We investigate how supervised machine learning approaches can be enhanced to better detect misinformation on social media. Our aim in this regard is to enhance the abilities of academics and practitioners to understand, anticipate, and preempt the sources and impacts of misinformation on the web. To do so, this paper leverages a large sample of verified Russian state-based misinformation tweets and non-misinformation tweets from Twitter. It first assesses standard supervised approaches for detecting Twitter-based misinformation both quantitatively (with respect to classification) and qualitatively (with respect to topical-tactics of Russian misinformation). It then presents a novel framework for integrating topical-tactics of misinformation into standard ‘bag of words’-oriented classification approaches in a manner that avoids data leakage and related measurement challenges. We find that doing so substantially improves the out-of-sample detection of Russian state-based misinformation tweets.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":" 35","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141368032","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Annual Computer Security Applications Conference (ACSAC) brings together cutting-edge researchers and a broad cross-section of security professionals drawn from academia, industry, and government to present and discuss the latest security results and topics. ACSAC's core mission is to investigate practical solutions for computer and network security technology.

The 38th ACSAC was held in Austin, Texas, from December 5-9, 2022. As in the previous year, ACSAC especially encouraged contributions on a hard topic theme, this year in the area of Trustworthy Systems. Trustworthy systems generally involve the development of capabilities that offer security, safety, and reliability guarantees. ACSAC has always solicited work on applied security; with this hard topic, we placed great emphasis on deployable trustworthy systems, including (but not limited to) approaches applied at the intersection of operating systems, formal methods, and programming languages; approaches applied at the architecture level; trustworthy artificial intelligence with an emphasis on explainability, correctness, and robustness to attacks; zero-trust solutions that assume no implicit trust but continually assess risk; and trustworthy systems from a user's perspective. This topic does not necessarily mean building a complete solution, but rather identifying key challenges, explaining the deficiencies of state-of-the-art solutions, and demonstrating the effectiveness of the proposed approaches and their (potential) impact on the real world.

In addition, ACSAC continues to encourage authors of accepted papers to submit software and data artifacts and make them publicly available to the entire community. Releasing software and data artifacts represents an important step towards facilitating the reproducibility of research results, and ultimately contributes to the real-world deployment of novel security solutions.

For this special issue we invited authors of papers that appeared at ACSAC 2022 and that successfully passed an evaluation of their software and/or data artifacts to submit an extended version of their papers. This selection criterion ensured that the research has a high potential for being deployed in real-world environments and for being used to implement practical defense systems.

This volume contains three manuscripts on topics from three different areas: IoT security and privacy, adversarial machine learning, and backdoor attacks against federated learning.

In “SPACELORD: Private and Secure Smart Space Sharing,” Bae et al. address security and privacy issues of smart devices installed in shared spaces, such as vacation rentals and co-working meeting rooms. Their approach allows for secure time-sharing by transferring control and the configuration of devices to temporary users, as well as resetting devices and removing any private information when a user leaves a space. The authors extended their original solution with different hardware and software configurations…
{"title":"Introduction to the ACSAC’22 Special Issue","authors":"Martina Lindorfer, Gianluca Stringhini","doi":"10.1145/3659210","DOIUrl":"https://doi.org/10.1145/3659210","url":null,"abstract":"The Annual Computer Security Applications Conference (ACSAC) brings together cutting-edge researchers, with a broad cross-section of security professionals drawn from academia, industry, and government, gathered to present and discuss the latest security results and topics. ACSAC’s core mission is to investigate practical solutions for computer and network security technology.\u0000 \u0000 The 38th ACSAC was held in Austin, Texas from December 5-9, 2022. As in the previous year, ACSAC especially encouraged contributions on a hard topic theme, in this year in the area of\u0000 Trustworthy Systems\u0000 . Trustworthy systems generally involve the development of capabilities that offer security, safety, and reliability guarantees. ACSAC has always solicited work on applied security; with this hard topic, we put great emphasize on deployable trustworthy systems, including (but not limited to) approaches applied at the intersection of operation systems, formal methods, and programming languages; approaches applied at the architecture level; trustworthy artificial intelligence with emphasize on explainability, correctness, and robustness to attacks; zero-trust solutions that assume no implicit trust, but continually assess risk; and trustworthy systems form a user’s perspective. This topic does not necessarily mean building a complete solution, but identifying key challenges, explaining the deficiencies in state-of-the-art solutions, and demonstrating the effectiveness of the proposed approaches and (potential) impact to the real world.\u0000 \u0000 In addition, ACSAC continues to encourage authors of accepted papers to submit software and data artifacts and make them publicly available to the entire community. Releasing software and data artifacts represents an important step towards facilitating the reproducibility of research results, and ultimately contributes to the real-world deployment of novel security solutions.\u0000 For this special issue we invited authors of papers that appeared at ACSAC 2022 and that successfully passed an evaluation of their software and/or data artifacts to submit an extended version of their papers. This selection criteria ensured that the research has a high potential for being deployed in real-world environments and to be used to implement practical defense systems.\u0000 This volume contains three manuscripts on topics from three different areas: IoT security and privacy, adversarial machine learning, and backdoor attacks against federated learning.\u0000 In “SPACELORD: Private and Secure Smart Space Sharing,” Bae et al. address security and privacy issues of smart devices when installed in shared spaces, such as vacation rentals and co-working meeting rooms. Their approach allows for securely time-sharing by transferring control and the configuration of devices to temporary users, as well as resetting devices and removing any private information when a user leaves a space. 
The authors extended their original solution with different hardware and software confi","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":" 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140690271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The interactions between software and hardware are increasingly important to computer system security. This research collected microprocessor control signal sequences to develop machine learning models that identify software tasks. In contrast with prior work that relies on hardware performance counters to collect data for task identification, this research is based on creating additional digital logic to record sequences of control signals inside a processor’s microarchitecture. The proposed approach considers software task identification in hardware as a general problem, with attacks treated as a subset of software tasks. Three lines of effort are presented. First, a data collection approach is described to extract sequences of control signals labeled by task identity during actual (i.e., non-simulated) system operation. Second, experimental design selects hardware and software configurations to train and evaluate machine learning models. The machine learning models significantly outperform a naïve classifier based on Euclidean distances from class means. Various experiment configurations produced a range of balanced accuracy scores. Third, task classification is addressed using decision boundaries defined with thresholds chosen by an optimization strategy to develop non-neural network classifiers. When implemented in hardware, the non-neural network classifiers could require less digital logic than neural network models.
{"title":"CuMONITOR: Continuous Monitoring of Microarchitecture for Software Task Identification and Classification","authors":"Tor J. Langehaug, Scott R. Graham","doi":"10.1145/3652861","DOIUrl":"https://doi.org/10.1145/3652861","url":null,"abstract":"The interactions between software and hardware are increasingly important to computer system security. This research collected microprocessor control signal sequences to develop machine learning models that identify software tasks. In contrast with prior work that relies on hardware performance counters to collect data for task identification, this research is based on creating additional digital logic to record sequences of control signals inside a processor’s microarchitecture. The proposed approach considers software task identification in hardware as a general problem, with attacks treated as a subset of software tasks. Three lines of effort are presented. First, a data collection approach is described to extract sequences of control signals labeled by task identity during actual (i.e., non-simulated) system operation. Second, experimental design selects hardware and software configurations to train and evaluate machine learning models. The machine learning models significantly outperform a naïve classifier based on Euclidean distances from class means. Various experiment configurations produced a range of balanced accuracy scores. Third, task classification is addressed using decision boundaries defined with thresholds chosen by an optimization strategy to develop non-neural network classifiers. When implemented in hardware, the non-neural network classifiers could require less digital logic to implement compared to neural network models.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"115 25","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140370689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information security has undoubtedly become a critical aspect of modern cybersecurity practices. Over the last half-decade, numerous academic and industry groups have sought to develop machine learning, deep learning, and other areas of artificial intelligence-enabled analytics into information security practices. The Conference on Applied Machine Learning in Information Security (CAMLIS) is an emerging venue that seeks to gather researchers and practitioners to discuss applied and fundamental research on machine learning for information security applications. In 2021, CAMLIS partnered with ACM Digital Threats: Research and Practice (DTRAP) to provide opportunities for authors of accepted CAMLIS papers to submit their research for consideration in ACM DTRAP via a Special Issue on Applied Machine Learning for Information Security. This editorial summarizes the results of this Special Issue.
{"title":"Applied Machine Learning for Information Security","authors":"Sagar Samtani, Edward Raff, Hyrum Anderson","doi":"10.1145/3652029","DOIUrl":"https://doi.org/10.1145/3652029","url":null,"abstract":"\u0000 Information security has undoubtedly become a critical aspect of modern cybersecurity practices. Over the last half-decade, numerous academic and industry groups have sought to develop machine learning, deep learning, and other areas of artificial intelligence-enabled analytics into information security practices. The Conference on Applied Machine Learning (CAMLIS) is an emerging venue that seeks to gather researchers and practitioners to discuss applied and fundamental research on machine learning for information security applications. In 2021, CAMLIS partnered with\u0000 ACM Digital Threats: Research and Practice (DTRAP)\u0000 to provide opportunities for authors of accepted CAMLIS papers to submit their research for consideration into\u0000 ACM DTRAP\u0000 via a Special Issue on Applied Machine Learning for Information Security. This editorial summarizes the results of this Special Issue.\u0000","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"28 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140253844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
P. Charan, Subhasis Mukhopadhyay, Subhajit Manna, Nanda Rani, Ansh Vaid, Hrushikesh Chunduri, P. Anand, Sandeep K. Shukla
Honeypots serve as a valuable deception technology, enabling security teams to gain insights into the behaviour patterns of attackers and investigate cyber security breaches. However, traditional honeypots prove ineffective against advanced adversaries such as APT groups, owing to their evasion tactics and awareness of typical honeypot solutions. This paper emphasises the need to capture these attackers for enhanced threat intelligence, detection, and protection. To address this, we propose the design and deployment of a customized honeypot network based on adaptive camouflaging techniques. Our work focuses on orchestrating a behavioral honeypot network tailored to three APT groups, with strategically positioned attack paths aligned with their Tactics, Techniques, and Procedures and covering all cyber kill chain phases. We introduce a novel approach, deploying a camouflaged chatterbox application within the honeypot network. This application offers a regular chat interface while tracking attacker activity through periodic log transfers. Deployed for 100 days, our orchestrated honeypot recorded 13,906,945 hits from 4,238 unique IP addresses. Our approach categorizes attackers by discerning varying levels of sophistication, and identifies attacks from Hong Kong with similarities to known Chinese threat groups. This research significantly advances honeypot technology and enhances the understanding of sophisticated threat actors’ strategies in real operating networks.
{"title":"ADAPT: Adaptive Camouflage Based Deception Orchestration For Trapping Advanced Persistent Threats","authors":"P. Charan, Subhasis Mukhopadhyay, Subhajit Manna, Nanda Rani, Ansh Vaid, Hrushikesh Chunduri, P. Anand, Sandeep K. Shukla","doi":"10.1145/3651991","DOIUrl":"https://doi.org/10.1145/3651991","url":null,"abstract":"Honeypots serve as a valuable deception technology, enabling security teams to gain insights into the behaviour patterns of attackers and investigate cyber security breaches. However, traditional honeypots prove ineffective against advanced adversaries like APT groups due to their evasion tactics and awareness of typical honeypot solutions. This paper emphasises the need to capture these attackers for enhanced threat intelligence, detection, and protection. To address this, we propose the design and deployment of a customized honeypot network based on adaptive camouflaging techniques. Our work focuses on orchestrating a behavioral honeypot network tailored for three APT groups, with strategically positioned attack paths aligning with their Tactics, Techniques, and Procedures, covering all cyber kill chain phases. We introduce a novel approach, deploying a camouflaged chatterbox application within the honeypot network. This application offers a regular chat interface while periodically tracking attacker activity by enabling periodic log transfers. Deployed for 100 days, our orchestrated honeypot recorded 13,906,945 hits from 4,238 unique IP addresses. Our approach categorizes attackers, discerning varying levels of sophistication, and identifies attacks from Hong Kong with similarities to known Chinese threat groups. This research significantly advances honeypot technology and enhances the understanding of sophisticated threat actors’ strategies in real operating networks.","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"36 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-03-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140077463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Many individuals, organizations, and industries rely on web applications for the daily operations of their businesses. With the increasing deployment of and dependence on these applications, significant attention has been directed towards developing more accurate and secure mechanisms to safeguard them from malicious web-based attacks. The slow adoption of the latest security protocols, coupled with the use of inaccurate and inadequately tested security measures, has hindered the establishment of efficient and effective security measures for web applications. This paper reviews recent research on web security from the last four years and the recommendations it offers. It identifies code injection as one of the most prevalent recent web-based attacks. The recommendations presented in this paper offer a practical guide, enabling individuals and security personnel across industries and organizations to implement tested and proven security measures for web applications. Furthermore, the paper serves as a roadmap for security developers, aiding them in creating more accurate and quantifiable measures and mechanisms for web security.
{"title":"Web Application Security: A Pragmatic Exposé","authors":"Clement C. Aladi","doi":"10.1145/3644394","DOIUrl":"https://doi.org/10.1145/3644394","url":null,"abstract":"\u0000 Many individuals, organizations, and industries rely on web applications for the daily operations of their businesses. With the increasing deployment and dependence on these applications, significant attention has been directed towards developing more accurate and secure mechanisms to safeguard them from malicious web-based attacks. The slow adoption of the latest security protocols, coupled with the utilization of inaccurate and inadequately tested security measures, has hindered the establishment of efficient and effective security measures for web apps. This paper reviews recent research and their recommendations for web security over the last four years. It identifies code injection as one of the recent most prevalent web-based attacks. The recommendations presented in this paper offer a practical guide, enabling individuals and security personnel across various industries and organizations to implement tested and proven security measures for web applications. Furthermore, it serves as a roadmap for security developers, aiding them in creating more accurate and quantifiable measures and mechanisms for web security\u0000 .\u0000","PeriodicalId":202552,"journal":{"name":"Digital Threats: Research and Practice","volume":"2 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139795597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}