Understanding How National CSIRTs Evaluate Cyber Incident Response Tools and Data: Findings from Focus Group Discussions
Sharifah Roziah Mohd Kassim, Shujun Li, B. Arief. Digital Threats: Research and Practice, published 2023-07-17. DOI: https://doi.org/10.1145/3609230
National Computer Security Incident Response Teams (CSIRTs) have been established worldwide to coordinate responses to computer security incidents at the national level. While it is known that national CSIRTs routinely use different types of tools and data from various sources in their cyber incident investigations, limited studies are available about how national CSIRTs evaluate and choose which tools and data to use for incident response. Such an evaluation is important to ensure that these tools and data are of good quality and, consequently, help to increase the effectiveness of the incident response process and the quality of incident response investigations. Seven online focus group discussions with 20 participants (all staff members) from 15 national CSIRTs across Africa, Asia Pacific, Europe, and North and South America were carried out to address this gap. Results from the focus groups led to four significant findings: (1) there is a confirmed need for a systematic evaluation of tools and data used in national CSIRTs, (2) there is a lack of a generally accepted standard procedure for evaluating tools and data in national CSIRTs, (3) there is general agreement among all focus group participants regarding the challenges that impede a systematic evaluation of tools and data by national CSIRTs, and (4) we identified a list of candidate criteria that can help inform the design of a standard procedure for evaluating tools and data by national CSIRTs. Based on our findings, we call on the cyber security community and national CSIRTs to develop standard procedures and criteria for evaluating tools and data that CSIRTs, in general, can use.
A Functional Classification of Forensic Access to Storage and its Legal Implications
Andreas Hammer, Mathis Ohlig, J. Geus, F. Freiling. Digital Threats: Research and Practice, published 2023-07-14. DOI: https://doi.org/10.1145/3609231
Due to their ease of use and their reliability, managed storage services in the cloud have become a standard way to store files for many users. Consequently, data from cloud storage services and remote file systems in general is an increasingly valuable source of digital evidence in forensic investigations. In this respect, two questions appear relevant: (1) What effect does data acquisition by the client have on the data stored on the server? (2) Does the technology support delayed verification of data acquisition? The two questions refer to critical aspects of forensic evidence collection, namely, in what way does evidence collection interfere with the evidence, and how easy is it to prove the provenance of data in a forensic investigation. We formalize the above questions and use this formalization to classify common storage services. We argue that this classification has direct consequences with regard to the probative value of data acquired from them. We, therefore, discuss the legal implications of this classification with regard to probative value so that IT expert witnesses can adapt their procedures during evidence acquisition and legal practitioners know how to assess such procedures and the evidence obtained through them from cloud storage services.
Back and Forth—On Automatic Exposure of Origin and Dissemination of Files on Windows
Samantha Klier, Janneke Varenkamp, Harald Baier. Digital Threats: Research and Practice, published 2023-07-14. DOI: https://doi.org/10.1145/3609232
The number of Child Sexual Abuse Material (CSAM) cases has increased dramatically in recent years. This leads to the need to automate various steps in digital forensic processing, especially for CSAM investigations. For instance, if CSAM pictures are found on a device, the investigator aims to find traces of both their origin and any further dissemination. In this article, we address this challenge with respect to the widespread Windows operating system. We model different common scenarios of system use by CSAM offenders in the scope of file inbound and outbound transfer channels. This gives us insights into the digital traces that the Windows operating system and its applications leave about the origin and possible destination of a file. We review available concepts and applications that support this task. Furthermore, we develop a recursive approach and provide a prototype as a plugin for the open source application Autopsy. We call our prototype AutoTrack. Our evaluation against the different models of Windows system usage reveals that AutoTrack is superior to existing solutions and supports an investigator in finding digital traces of the origin and possible further dissemination of files. We publish our AutoTrack plugin and thus provide full reproducibility of our approach.
LAVA: Log Authentication and Verification Algorithm
Edita Bajramovic, Christofer Fein, Marius Frinken, Paul Rösler, F. Freiling. Digital Threats: Research and Practice, published 2023-07-13. DOI: https://doi.org/10.1145/3609233
Log files provide essential information regarding the actions of processes in critical computer systems. If an attacker modifies log entries, then critical digital evidence is lost. Therefore, many algorithms for secure logging have been devised, each achieving different security goals under different assumptions. We analyze these algorithms and identify their essential security features. Within a common system and attacker model, we integrate these algorithms into a single (parameterizable) “meta” algorithm called LAVA that possesses the union of the security features and can be parameterized to yield the security features of former algorithms. We present a security and efficiency analysis and provide a Python module that can be used to provide secure logging for forensics and incident response.
A Formal Treatment of Expressiveness and Relevance of Digital Evidence
Jan Gruber, Merlin Humml. Digital Threats: Research and Practice, published 2023-07-13. DOI: https://doi.org/10.1145/3608485
Digital investigations are largely concerned with reconstructing past events based on traces in digital systems. Given their importance, many concepts have been established to describe their quality, most of them concerned with procedural aspects such as authenticity and integrity. Besides these, there are principal concepts that have been overlooked in the past: two such criteria are the relevance and expressiveness of digital evidence. Unlike others, these are directly concerned with reaching the investigative goal. Therefore, we approach these two overlooked concepts of digital evidence by giving formal definitions. To illustrate their usefulness, we present two applications: First, we demonstrate that the notions of expressiveness and completeness can be used to guide investigations by presenting the Facet-oriented Criminalistic Cycle as a thinking model, which extends the well-established criminalistic cycle. Second, we put the concepts into practice by calculating the expressiveness of facets from a state machine representation of a digital system utilizing temporal logic and a model checker. Furthermore, we sketch out the implications of this improved way of defining relevance and expressiveness. Accordingly, this article aims to improve the understanding of these critical aspects of the overall investigative process.
security.txt Revisited: Analysis of Prevalence and Conformity in 2022
Tobias Hilbig, Thomas Geras, Erwin Kupris, T. Schreck. Digital Threats: Research and Practice, published 2023-07-13. DOI: https://doi.org/10.1145/3609234
Determining the correct contact person for a particular system or organization is challenging in today’s Internet architecture. However, various stakeholders need such information, including national security teams, security researchers, and Internet service providers. To address this problem, RFC 9116, better known as “security.txt,” was developed. If implemented correctly, it can help these stakeholders find contact information to be used to notify an organization of any security issues. A further proposal, “dnssecuritytxt,” uses DNS records for the same purpose. In this research article, we evaluated the prevalence of websites that have implemented security.txt and their conformity with the standard. Through a longitudinal analysis of the top one million websites, we investigated the adoption and usage of this standard among organizations. Our results show that the overall adoption of security.txt remains low, especially among less popular websites. To drive its acceptance among organizations, security researchers, and developers, we derived several recommendations, including partnerships with vendors of browsers and content management systems.
HiPeR - Early Detection of a Ransomware Attack using Hardware Performance Counters
P. Anand, P. Charan, S. Shukla. Digital Threats: Research and Practice, published 2023-07-11. DOI: https://doi.org/10.1145/3608484
Ransomware has been one of the most prevalent forms of malware over the previous decade, and it continues to be one of the most significant threats today. Recently, ransomware strategies such as double extortion and rapid encryption have encouraged attacker communities to consider ransomware as a business model. With the advent of Ransomware as a Service (RaaS) models, ransomware spread and operations continue to increase. Even though machine learning and signature-based detection methods for ransomware have been proposed, they often fail to achieve very accurate detection. Ransomware that evades detection moves to the execution phase after initial access and installation. Due to the catastrophic nature of a ransomware attack, it is crucial to detect it in its early stages of execution. If ransomware can be detected early enough in its execution phase, the offending processes can be killed to stop the attack. However, early detection with dynamic API call analysis is not an ideal solution, as contemporary ransomware variants use low-level system calls to circumvent these detection methods. In this work, we use hardware performance counters (HPCs) as features to detect ransomware within 3-4 seconds, which may be sufficient, at least in the case of ransomware that takes longer to complete its full execution.
Special Issue on Actionable Information for Digital Threat Discovery Using Contextualized Data or Multi Sensor Data Fusion
S. S. Iyengar, B. Thuraisingham, Marek Zmuda. Digital Threats: Research and Practice, published 2023-06-30. DOI: https://doi.org/10.1145/3585079
Many of the services offered in this electronic era depend on the detection and manipulation of sensor data. The data that is gathered is, however, extremely susceptible to leakage, malicious modification, violations of confidentiality and integrity, and other attacks. Data security is a major issue, and sensitive sensor data can be compromised either at its source of creation or while in transit through the various containers that carry this information for the many services. Closing this gap therefore requires determining the context of the information and using that context to assess digital threats and protect the data against them. The complexity is increased by the absence of security standards in the world of sensing devices, which demands the development of technical solutions that can serve as countermeasures. This special issue has two major contributions:
Know Thy Ransomware Response: A Detailed Framework for Devising Effective Ransomware Response Strategies
Pranshu Bajpai, R. Enbody. Digital Threats: Research and Practice, published 2023-06-26. DOI: https://doi.org/10.1145/3606022
Ransomware has evolved into one of the most severe cyberthreats facing the private and public sectors alike. Organizations are inundated with a barrage of intrusion attempts that ultimately morph into full-scale ransomware attacks. Efforts to combat these threats tend to focus primarily on detection and prevention, and while thwarting an attack is always the best approach, we must additionally improve our response and recovery efforts with a post-breach mindset. Assume that the defenses have failed and the risk has materialized: are we then prepared to salvage the situation with efficient, ransomware-specific incident response procedures? In this work, we present a ransomware response framework that can be leveraged to create highly effective ransomware response strategies. The framework provides a level of detail that balances adaptability against actionability, which both technical and executive stakeholders will find useful.
Towards Attack Detection in Multimodal Cyber-Physical Systems with Sticky HDP-HMM based Time Series Analysis
Andrew E. Hong, P. Malinovsky, Suresh Damodaran. Digital Threats: Research and Practice, published 2023-06-17. DOI: https://doi.org/10.1145/3604434
Automatic detection of the precise occurrence and duration of an attack reflected in time-series logs generated by cyber-physical systems is a challenging problem. This problem is exacerbated when the analysis must be performed using logs with limited system information. In a realistic scenario, multiple and differing attack methods may be employed in rapid succession. Modern or legacy systems operate in multiple modes and contain multiple devices recording a variety of continuous and categorical data streams. This work presents a non-parametric Bayesian framework that addresses these challenges using the sticky Hierarchical Dirichlet Process Hidden Markov Model (sHDP-HMM). Additionally, we explore metrics for measuring the accuracy of the detected events, namely their timings and durations, and compare the computational efficiency of different inference implementations of the model. The efficacy of attack detection is demonstrated in two settings: an avionics testbed and a consumer robot.