Software Development Processes for ADs, SMCs and OSCs supporting Usability, Security, and Privacy Goals – an Overview
Tim Bender, Rolf Huesmann, A. Heinemann
DOI: https://doi.org/10.1145/3465481.3470022

Software applications should be secure, usable, and privacy-friendly. However, recurring headlines about data leaks in applications show that it is not easy to develop software that meets all three challenges. Studies show that these challenges are best addressed during the software development process itself, and many ideas and approaches in the research community define them as goals within such a process. In addition, major companies have published their own software development processes and methods that partly address these goals. But major companies work under very different conditions than App Developers (ADs), Small and Medium Companies (SMCs), and the Open Source Community (OSC). This leads us to the question: does research sufficiently consider the work settings of ADs, SMCs, and OSCs when designing software development processes with a special focus on security, usability, and privacy goals? We therefore performed a literature review to investigate the current state of research. Using an appropriate query, publications relevant to our question were identified and categorised by two independent reviewers. Our work shows that some publications propose software processes that support usability goals and take work settings into account. We were not able to identify any contribution that proposes a software development process addressing privacy, usability, and security goals together while differentiating the work settings of ADs, SMCs, and OSCs.
Strong Anonymity is not Enough: Introducing Fault Tolerance to Planet-Scale Anonymous Communication Systems
Lennart Oldenburg, Florian Tschorsch
DOI: https://doi.org/10.1145/3465481.3469189

Current Anonymous Communication Systems (ACS) lack fault tolerance and thus risk becoming unavailable when failures occur, forcing users offline or to less private messengers. In this work, we evaluate end-to-end message transmission latencies and resource demands of state-of-the-art mixnet Vuvuzela and CPIR system Pung under different network failure scenarios on an ACS test bed across four continents. We compare Vuvuzela and Pung to proof-of-concept mixnet FTMix, which we equip with simple fault tolerance measures. Our analysis shows that FTMix maintains the smallest divergence of end-to-end latencies under failures from their respective baseline among all three ACS, while also achieving a balanced resource consumption trade-off. Thus, we consider fault tolerance effective in ensuring service availability and a crucial design principle for future ACS proposals.
Optimizing Packet Scheduling and Path Selection for Anonymous Voice Calls
David Schatz, M. Rossberg, G. Schaefer
DOI: https://doi.org/10.1145/3465481.3465768

Onion routing is a promising approach to implement anonymous voice calls. Uniform-sized voice packets are routed via multiple relays and encrypted in layers to avoid a correlation of packet content in different parts of the network. By using pre-built circuits, onion encryption may use efficient symmetric ciphers. However, if packets are forwarded by relays as fast as possible (to minimize end-to-end latency), network flow watermarking may still de-anonymize users. A recently proposed countermeasure synchronizes the start time of many calls and batch-processes voice packets with the same sequence number in relays. However, if even a single link with high latency is used, it will negatively affect the latency of all other calls as well. This article explores the limits of this approach by formulating a mixed integer linear program (MILP) that minimizes latency “bottlenecks” in path selection. Furthermore, we suggest a different scheduling strategy for voice packets, namely implementing independent de-jitter buffers for all flows. In this case, a MILP is used to minimize the average latency of the selected paths. For comparison, we solve the MILPs using latency and bandwidth datasets obtained from the Tor network. Our results show that batch processing cannot reliably achieve acceptable end-to-end latency (below 400 ms) in such a scenario, where link latencies are too heterogeneous. In contrast, when using de-jitter buffers for packet scheduling, path selection benefits from low-latency links without degrading anonymity. Consequently, acceptable end-to-end latency is possible for a large majority of calls.
RapidVMI: Fast and multi-core aware active virtual machine introspection
Thomas Dangl, Benjamin Taubmann, Hans P. Reiser
DOI: https://doi.org/10.1145/3465481.3465752

Virtual machine introspection (VMI) is a technique for the external monitoring of virtual machines. Previous work has shown that VMI can contribute to the security of distributed systems and cloud architectures by facilitating stealthy intrusion detection, malware analysis, and digital forensics. The main shortcomings of active VMI-based approaches such as program tracing or process injection in production environments result from the side effects of writing to virtual address spaces and from the parallel execution of multiple processor cores sharing main memory. In this paper, we present RapidVMI, a framework for active virtual machine introspection that enables fine-grained, multi-core aware VMI-based memory access on virtual address spaces. It was built to overcome the outlined shortcomings of existing VMI solutions and to facilitate the development of introspection applications as if they ran in the monitored virtual machine itself. Furthermore, we demonstrate that hypervisor support for this concept improves introspection performance in prevalent virtual machine tracing applications considerably, by up to a factor of 98.
Do Security Reports Meet Usability?: Lessons Learned from Using Actionable Mitigations for Patching TLS Misconfigurations
Salvatore Manfredi, M. Ceccato, Giada Sciarretta, Silvio Ranise
DOI: https://doi.org/10.1145/3465481.3469187

Several automated tools have been proposed to detect vulnerabilities. These tools are mainly evaluated in terms of their accuracy in detecting vulnerabilities, while the evaluation of their usability is a commonly neglected topic. Usability of automated security tools is particularly crucial for cryptographic protocols, where even small, apparently insignificant changes in configuration can result in vulnerabilities that, if exploited, pave the way to attacks with dramatic consequences for the confidentiality and integrity of exchanged messages. This becomes even more acute for a protocol as ubiquitous as Transport Layer Security (TLS). In this paper, we present the design of, and lessons learned from, a user study meant to compare two different approaches to reporting misconfigurations. Results reveal that including contextualized, actionable mitigations in security reports significantly impacts the accuracy and the time needed to patch TLS vulnerabilities. Along with the lessons learned, we share the experimental material, which can be used during cybersecurity labs to let students configure and patch TLS first-hand.
Hunting Shadows: Towards Packet Runtime-based Detection Of Computational Intensive Reversible Covert Channels
Tobias Schmidbauer, S. Wendzel
DOI: https://doi.org/10.1145/3465481.3470085

The appearance of novel ideas for network covert channels creates an urgent need for new detection approaches. One of these new ideas is reversible network covert channels, which are able to restore the original overt information without leaving any direct evidence of their presence. Some of these reversible covert channels are based upon computationally intensive operations, for example encoding hidden information in the authentication hashes of a hash-chain-based one-time password. In such an implementation, the hash function has to be called repeatedly to extract the hidden message and to restore the original information. In this paper, we investigate the influence of repeated MD5 and SHA3 hash operations on the runtime of an authentication request-response. We first define two alphabets: one that leads to the fewest hash operations and one that leads to the most. For each alphabet, we then carry out three experiments: one without a covert channel, one with a covert channel altering all hashes, and one with a covert channel altering every second hash. We further investigate the detection rates of computationally intensive reversible covert channels for all scenarios by applying threshold-based detection based on the average packet runtime without encoded covert information. Finally, we describe countermeasures and the limitations of this detection approach.
AISGA: Multi-objective parameters optimization for countermeasures selection through genetic algorithm
P. Nespoli, Félix Gómez Mármol, G. Kambourakis
DOI: https://doi.org/10.1145/3465481.3470074

Cyberattacks targeting modern network infrastructures are increasing in number and impact. This growing phenomenon emphasizes the central role of cybersecurity and, in particular, of the reaction against ongoing threats targeting assets within the protected system. Such centrality is reflected in the literature, where several works propose full-fledged reaction methodologies to tackle the consequences of offensive incidents. In this direction, the work in [18] developed an immuno-based response approach applying the Artificial Immune System (AIS) methodology: the AIS-powered reaction calculates the optimal set of atomic countermeasures to enforce on the assets within the monitored system, minimizing the risk to which they are exposed, within an adequate time. To further contribute to this line, this paper presents AISGA, a multi-objective approach that leverages the capabilities of a Genetic Algorithm (GA) to optimize the selection of the input parameters of the AIS methodology. Specifically, AISGA selects the optimal ranges of inputs that balance the trade-off between minimizing the global risk and minimizing the execution time of the methodology. Additionally, by flooding the AIS-powered reaction with a wide range of possible inputs, AISGA demonstrates the robustness of the model. Exhaustive experiments precisely compute the optimal ranges of parameters, demonstrating that the proposed multi-objective optimization prefers a fast-but-effective reaction.
SoK: Money Laundering in Cryptocurrencies
Kartick Kolachala, Ecem Simsek, Mohammed Ababneh, Roopa Vishwanathan
DOI: https://doi.org/10.1145/3465481.3465774

Money laundering using cryptocurrencies has become increasingly prevalent, and global and national regulatory authorities have announced plans to implement stringent anti-money laundering regulations. In this paper, we examine current anti-money laundering (AML) mechanisms in cryptocurrencies and payment networks from a technical and policy perspective, and point out practical challenges in implementing and enforcing them. We first discuss blacklisting, a recently proposed technique to combat money laundering, which seems appealing but leaves several unanswered questions and challenges with regard to its enforcement. We then discuss payment networks and find that there are unique problems in the payment network domain that might require custom-designed AML solutions, as opposed to general cryptocurrency AML techniques. Finally, we examine the regulatory guidelines and recommendations laid out by the global Financial Action Task Force (FATF) and the U.S.-based Financial Crimes Enforcement Network (FinCEN), and find several ambiguities in their interpretation and implementation. To quantify the effects of money laundering, we conduct experiments on real-world transaction datasets. Our goal in this paper is to survey the landscape of existing AML mechanisms and to focus the attention of the research community on this issue. Our findings indicate that the community must treat AML regulations and technical methods as an integral part of the systems it builds, and must strive to design solutions from the ground up that respect AML regulatory frameworks. We hope that this paper will serve as a point of reference for researchers who wish to build systems with AML mechanisms and will help them understand the challenges that lie ahead.
ABEBox: A data driven access control for securing public cloud storage with efficient key revocation
E. Raso, L. Bracciale, P. Loreti, G. Bianchi
DOI: https://doi.org/10.1145/3465481.3469206

Besides providing data sharing, commercial cloud-based storage services (e.g., Dropbox) also enforce access control, i.e., they permit users to decide who can access which data. In this paper, we advocate separating the sharing of data from the access control function. We specifically promote an overlay approach that provides end-to-end encryption and empowers end users to enforce access control policies without involving the cloud provider itself. To this end, our proposal, named ABEBox, relies on the careful combination of i) attribute-based encryption for custom policy definition and management with ii) proxy re-encryption, to provide scalable re-keying and protection against key-scraping attacks via a novel revocation procedure. Moreover, iii) we embed our protection mechanisms inside a public-domain virtual file system module to provide an overlay, trivial-to-use, transparent service that can be deployed on top of any cloud storage provider.
Forensic Artifact Finder (ForensicAF): An Approach & Tool for Leveraging Crowd-Sourced Curated Forensic Artifacts
Tyler Balon, Krikor Herlopian, I. Baggili, Cinthya Grajeda-Mendez
DOI: https://doi.org/10.1145/3465481.3470051

Current methods for artifact analysis and understanding depend on investigator expertise. Experienced and technically savvy examiners spend a lot of time reverse engineering applications while attempting to find the crumbs they leave behind on systems. This takes valuable time away from the investigative process and slows down forensic examination. Furthermore, when specific artifact knowledge is gained, it stays within the respective forensic units. To combat these challenges, we present ForensicAF, an approach for leveraging curated, crowd-sourced artifacts from the Artifact Genome Project (AGP). The approach has the overarching goal of uncovering forensically relevant artifacts from storage media. We explain our approach and implement it as an Autopsy Ingest Module focused on both file and registry artifacts. We evaluated ForensicAF using systematic and random sampling experiments. While ForensicAF showed consistent results for registry artifacts across all experiments, it also revealed that deeper folder traversal yields more file artifacts during data source ingestion. When experiments were conducted on case-scenario disk images without a priori knowledge, ForensicAF uncovered artifacts of forensic relevance that helped solve those scenarios. We contend that ForensicAF is a promising approach for artifact extraction from storage media, and that its utility will grow as more artifacts are crowd-sourced through AGP.