"Code Polymorphism Meets Code Encryption: Confidentiality and Side-channel Protection of Software Components." L. Morel, Damien Couroussé, Thomas Hiscock. Digital Threats: Research and Practice, 2021-10-01. https://doi.org/10.1145/3487058

In this article, we consider that, in practice, attack scenarios involving side-channel analysis combine two successive phases: an analysis phase, which extracts information about the target and identifies possible vulnerabilities, and an exploitation phase, which applies attack techniques to candidate vulnerabilities. We advocate that protections need to cover both phases to be effective against real-life attacks. We present PolEn, a toolchain and processor architecture that combine countermeasures to mitigate side-channel attacks: against the analysis phase, our approach uses code encryption; against the exploitation phase, it uses code polymorphism. Because code polymorphism relies on runtime code generation, its combination with code encryption is particularly challenging. Code encryption is supported by a processor extension such that machine instructions are decrypted only inside the CPU, which effectively prevents reverse engineering or any extraction of useful information from memory dumps. Code polymorphism is implemented by software means: it regularly changes the observable behaviour of the program, making it unpredictable for an attacker and thereby reducing the opportunity to exploit side-channel leakages. We present a prototype implementation based on the RISC-V Spike simulator and a modified LLVM toolchain. Our experimental evaluation shows that PolEn effectively reduces side-channel leakages. For the protected functions evaluated, static memory use increases by a factor of 5 to 22, corresponding to the joint application of code encryption and code polymorphism, and execution time increases by a factor of 1.8 to 4.6.
"Lessons Learned: Analysis of PUF-based Authentication Protocols for IoT." K. Lounis, Mohammad Zulkernine. Digital Threats: Research and Practice, 2021-09-24. https://doi.org/10.1145/3487060

The service of authentication constitutes the backbone of all security properties. It is the phase in which entities prove their identities to each other and generally establish and derive cryptographic keys to provide confidentiality, data integrity, non-repudiation, and availability. Due to the heterogeneity and the particular security requirements of the IoT (Internet of Things), developing secure, low-cost, and lightweight authentication protocols has become a serious challenge. This has prompted the research community to design and develop new authentication protocols that meet IoT requirements. One interesting hardware technology, PUFs (Physical Unclonable Functions), has been the subject of many publications on lightweight, low-cost, and secure-by-design authentication protocols, which led us to investigate the most recent PUF-based authentication protocols for the IoT. In this paper, we review the security of these protocols. We first provide the necessary background on PUFs, their types, and related attacks, and discuss how PUFs are used for authentication. We then analyze the security of PUF-based authentication protocols to identify and report common security issues and design flaws, and provide recommendations for future authentication protocol designers.
"Sensor Identification via Acoustic Physically Unclonable Function." Girish Vaidya, Prabhakar T.V., N. Gnani, Ryan Shah, Shishir Nagaraja. Digital Threats: Research and Practice, 2021-09-24. https://doi.org/10.1145/3488306

The traceability of a component along a supply chain, from production facility to deployment and maintenance, depends upon its irrefutable identity. There are two well-known identification methods: storing an identity code in memory and embedding custom identification hardware. While a stored identity code is susceptible to malicious and unintentional attacks, embedding custom identification hardware is infeasible for sensor nodes assembled from Commercial Off-the-Shelf devices. We propose a novel identifier, Acoustic PUF, based on the innate properties of the sensor node. Acoustic PUF combines a uniqueness component and a position component of the sensor device signature. The uniqueness component is derived by exploiting manufacturing tolerances, making the signature unclonable; the position component is derived through acoustic fingerprinting, giving the sensor device a sticky identity. We evaluate Acoustic PUF for Uniqueness, Repeatability, and Position identity with a deployment spanning several weeks. Through our experimental evaluation and further numerical analysis, we show that Acoustic PUF can uniquely identify thousands of devices with 99% accuracy while simultaneously detecting changes in position. We use the physical position of a device within a synthetic sound-field both as an identity measure and to validate the physical integrity of the device.
"Vulnerability Exposure Driven Intelligence in Smart, Circular Cities." Paul-David Jarvis, Amalia Damianou, Cosmin Ciobanu, Vasilis Katos. Digital Threats: Research and Practice, 2021-09-24. https://doi.org/10.1145/3487059

In this article, we study the vulnerability management dimension of smart city initiatives. As many cities across the globe invest considerable effort, resources, and budget to modernise their infrastructure by deploying technologies such as 5G, Software Defined Networks, and the IoT, we conduct an empirical analysis of their current exposure to existing vulnerabilities. We use an updated vulnerability dataset, further enriched by quantitative research data from independent studies evaluating the maturity and accomplishments of cities on their journey to becoming smart. We focus in particular on cities that aspire to implement a (data-driven) Circular Economy agenda, which we consider to potentially carry the highest risk from a vulnerability-exposure perspective. Findings show that although smarter cities exhibit higher vulnerability exposure, investments in technology and human capital moderate this exposure in a way that allows it to be reduced.
"Gotta Catch ’em All: A Multistage Framework for Honeypot Fingerprinting." Shreyas Srinivasa, J. Pedersen, Emmanouil Vasilomanolakis. Digital Threats: Research and Practice, 2021-09-22. https://doi.org/10.1145/3584976

Honeypots are decoy systems that lure attackers by presenting them with a seemingly vulnerable system. They provide an early detection mechanism as well as a method for learning how adversaries work and think. Over the past years, however, several researchers have demonstrated methods for fingerprinting honeypots. This significantly decreases the value of a honeypot: if attackers can recognize the existence of such a system, they can evade it. In this article, we revisit the honeypot identification field by providing a holistic framework that includes state-of-the-art and novel fingerprinting components. We decrease the probability of false positives by proposing a rigid multi-step approach for labeling a system as a honeypot. We perform extensive scans covering 2.9 billion addresses of the IPv4 space and identify a total of 21,855 honeypot instances. Moreover, we present several interesting side findings, such as the identification of around 355,000 non-honeypot systems that represent potentially misconfigured or unpatched vulnerable servers (e.g., SSH servers with default password configurations or vulnerable versions). We ethically disclose our findings, informing network administrators about the default configurations and the honeypot developers about the implementation gaps that enable honeypot fingerprinting. Lastly, we discuss countermeasures against honeypot fingerprinting techniques.
"Threat Intelligence Quality Dimensions for Research and Practice." Adam Zibak, Clemens Sauerwein, Andrew C. Simpson. Digital Threats: Research and Practice, 2021-09-10. https://doi.org/10.1145/3484202

As the adoption and diversity of threat intelligence solutions continue to grow, questions about their effectiveness, particularly with regard to the quality of the data they provide, remain unanswered. Several studies have highlighted data quality issues as one of the most common barriers to effective threat intelligence sharing. Furthermore, research and practice lack a common understanding of the expected quality of threat intelligence. To investigate these issues, we conducted a systematic literature review followed by a modified Delphi study involving 30 threat intelligence experts in Europe. We identified a set of threat intelligence quality dimensions along with revised definitions of threat data, information, and intelligence.
"Risk-aware Fine-grained Access Control in Cyber-physical Contexts." Jinxin Liu, Murat Simsek, B. Kantarci, M. Erol-Kantarci, A. Malton, Andrew Walenstein. Digital Threats: Research and Practice, 2021-08-13. https://doi.org/10.1145/3480468

Access to resources may need to be granted to users only under certain conditions and contexts, particularly in cyber-physical settings. Unfortunately, creating and modifying context-sensitive access control solutions in dynamic environments creates ongoing challenges in managing authorization contexts. This article proposes RASA, a context-sensitive access authorization approach and mechanism that leverages unsupervised machine learning to automatically infer risk-based authorization decision boundaries. We explore RASA in a healthcare usage environment, wherein cyber and physical conditions create context-specific risks for protecting private health information. The risk levels are associated with access control decisions recommended by a security policy. A coupling method is introduced to track the coexistence of objects within a context using the frequency and duration of coexistence; the resulting couplings are clustered to reveal sets of actions with common risk levels, which are then used to create authorization decision boundaries. In addition, we propose a method for assessing the risk level of each cluster and labelling it accordingly. We evaluate the promise of RASA-generated policies against a heuristic rule-based policy. Employing three different coupling features (frequency-based, duration-based, and combined), the decisions of the unsupervised method and those of the policy are more than 99% consistent.
"Analyzing the Direct and Transitive Impact of Vulnerabilities onto Different Artifact Repositories." Johannes Düsing, Ben Hermann. Digital Threats: Research and Practice, 2021-07-02. https://doi.org/10.1145/3472811

In modern-day software development, a vast number of public software libraries enables the reuse of existing implementations for recurring tasks and common problems. While this practice yields significant productivity benefits, it also places an increasing amount of responsibility on library maintainers: if a security flaw is contained in a library release, it may directly affect thousands of applications that depend on it. Because libraries are often interconnected, depending on other libraries for certain sub-tasks, the impact of a single vulnerability may be large and hard to quantify. Recent studies have shown that developers in fact struggle with upgrading vulnerable dependencies, despite ever-increasing support by automated, often publicly available tools. With our work, we aim to improve on this situation by providing an in-depth analysis of how developers handle vulnerability patches and dependency upgrades. To do so, we contribute a miner for artifact dependency graphs that supports different programming platforms and annotates the graph with vulnerability information. We run our miner and generate a dataset for the artifact repositories Maven Central, NuGet.org, and the NPM Registry, storing the resulting graph in a Neo4j graph database. We then conduct an extensive analysis of the data, aimed at understanding the impact of vulnerabilities across the three repositories. Finally, we summarize the resulting risks and derive possible mitigation strategies for library maintainers and software developers based on our findings. We found that NuGet.org, the smallest artifact repository in our sample, is subject to fewer security concerns than Maven Central or the NPM Registry. For all repositories, however, we found that vulnerabilities may affect libraries via long transitive dependency chains and that a vulnerability in a single library may affect thousands of other libraries transitively.
"ExSol." Josephine Lamp, Carlos E. Rubio-Medrano, Ziming Zhao, Gail-Joon Ahn. Digital Threats: Research and Practice, 2021-06-08. https://doi.org/10.1145/3428156

No longer just prophesied about, cyber-attacks on Energy Delivery Systems (EDS), e.g., the power grid and the gas and oil industries, are now very real dangers that result in non-trivial economic losses and inconveniences to modern societies. In such a context, risk analysis has been proposed as a valuable way to identify, analyze, and mitigate potential vulnerabilities, threats, and attack vectors. However, performing risk analysis for EDS is difficult due to their innate structural diversity and interdependencies, along with an ever-increasing threatscape. There is therefore a need for a methodology to evaluate the current system state, identify vulnerabilities, and qualify risk at multiple granularities in a collaborative manner among the different actors in an EDS context. With this in mind, this article presents ExSol, a collaborative, real-time risk assessment ecosystem that features an approach for modeling real-life EDS infrastructures, an ontology traversal technique that retrieves well-defined security requirements from well-reputed documents on cyber-protection for EDS infrastructures, and a methodology for calculating risk for a single asset as well as for an entire system. We also provide experimental evidence from a series of attack scenarios in both simulated and real-world EDS environments, which ultimately encourages the adoption of ExSol in practice.
"Assessing a Decision Support Tool for SOC Analysts." J. Happa, Ioannis Agrafiotis, Martin Helmhout, Thomas Bashford-Rogers, M. Goldsmith, S. Creese. Digital Threats: Research and Practice, 2021-06-08. https://doi.org/10.1145/3430753

It is difficult to discern the real-world consequences of attacks on an enterprise when investigating network-centric data alone. In recent years, many tools have been developed to help understand attacks using visualisation, but few aim to predict real-world consequences. We have developed a visualisation tool that aims to improve decision support during attacks in Security Operation Centres (SOCs). Our tool visualises the propagation of risks from sensor alert data to Business Process (BP) tasks, addressing an important capability gap in many SOCs today, as most threat detection tools are technology-centric. In this article, we present a user study that assesses our tool's usability and its ability to support the analyst. Ten analysts from seven SOCs performed carefully designed tasks related to understanding risks and recovery decision-making. The study was conducted under laboratory conditions with simulated attacks and used a mixed-method approach to collect data from questionnaires, eye tracking, and semi-structured interviews. Our findings suggest that relating business tasks to network assets in visualisations can help analysts prioritise response strategies. Finally, the article provides an in-depth discussion of user studies conducted with SOC analysts more generally, including lessons learned, recommendations, and a critique of our own study.