Data usage control is concerned with requirements on data after access has been granted. To enforce usage control requirements, it is necessary to track the different representations that the data may take (among others, file, window content, network packet). These representations exist at different layers of abstraction. As a consequence, multiple data flow tracking and usage control enforcement monitors must exist, one at each layer. If a new representation is created at some layer of abstraction, e.g., if a cache file is created for a picture after downloading it with a browser, then the initiating layer (in the example, the browser) must notify the layer at which the new representation is created (in the example, the operating system). We present a bus system for system-wide usage control that, for security and performance reasons, is implemented in a hypervisor. We evaluate its security and performance.
"A Hypervisor-Based Bus System for Usage Control." Cornelius Moucha, Enrico Lovat, A. Pretschner. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.44.
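The cross-layer notification described in the abstract can be sketched as a small event bus. The class and method names below are hypothetical, and the sketch deliberately omits the hypervisor isolation the paper relies on:

```python
# Illustrative sketch of the cross-layer notification idea (names invented;
# the paper's bus runs inside a hypervisor, which this sketch does not model).

class LayerMonitor:
    """Data flow tracking monitor for one layer of abstraction."""

    def __init__(self, layer):
        self.layer = layer
        # data item -> set of representations known at this layer
        self.representations = {}

    def on_new_representation(self, data_id, representation):
        self.representations.setdefault(data_id, set()).add(representation)


class UsageControlBus:
    """Routes new-representation events from the initiating layer to the
    layer at which the new representation is created."""

    def __init__(self):
        self.monitors = {}

    def register(self, monitor):
        self.monitors[monitor.layer] = monitor

    def notify(self, target_layer, data_id, representation):
        self.monitors[target_layer].on_new_representation(data_id, representation)


bus = UsageControlBus()
browser = LayerMonitor("browser")
os_layer = LayerMonitor("os")
bus.register(browser)
bus.register(os_layer)

# The browser downloads a tracked picture; a cache file (a new representation
# of the same data) appears at the operating system layer:
bus.notify("os", data_id="pic42", representation="/tmp/cache/img_0042.png")
assert os_layer.representations["pic42"] == {"/tmp/cache/img_0042.png"}
```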
The idea of security-aware system development from the start of the engineering process is generally accepted nowadays and increasingly applied in practice. Many recent initiatives support this idea, with a special focus on security requirements elicitation. However, so far no techniques provide integrated overviews of security threats and system architecture. One way to achieve this is to combine misuse cases with use case maps into misuse case maps (MUCM). This paper presents an experimental evaluation of MUCM diagrams, focusing on the identification of vulnerabilities and mitigations. The controlled experiment with 33 IT students included a complex hacker intrusion from the literature, illustrated either with MUCM or with alternative diagrams. The results suggest that participants using MUCM found significantly more mitigations than participants using regular misuse cases combined with system architecture diagrams.
"Experimental Comparison of Misuse Case Maps with Misuse Cases and System Architecture Diagrams for Eliciting Security Vulnerabilities and Mitigations." P. Kárpáti, A. Opdahl, G. Sindre. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.77.
We present an obfuscation strategy to protect a program against injection attacks. The strategy represents the program as a set of code fragments in-between two consecutive system calls (the system blocks) and a graph that represents the execution order of the fragments (the system block graph). The system blocks and the system block graph are partitioned between two virtual machines (VMs). The Blocks-VM stores and executes the system blocks but does not store any information on how control flows across the system blocks. This information is represented only by the system block graph stored in the Graph-VM, which correctly sequentializes the system blocks by analyzing the system block graph and accessing the Blocks-VM. At run-time, each time a system block ends, i.e., when the program issues a system call, the execution of the Blocks-VM is frozen and control is transferred to the Graph-VM. After deducing the next system block to be executed from the system block graph, the current system block, and the current system call, the Graph-VM updates the return address in the Blocks-VM so that the correct system block is executed, and then resumes the Blocks-VM. To protect code integrity, the Graph-VM also stores a hash of each block. The overall strategy results in a clean separation between the program and its control-flow, which is important whenever the Graph-VM is under the full control of the user whereas the Blocks-VM may be attacked through code injection. The Graph-VM can discover these attacks because either the current system call is not present in the original program or the hash of the current block is invalid. In all these cases, the Graph-VM halts the execution of the program. We present the algorithm that maps the program source code into the system blocks and the system block graph, and discuss a first implementation of the run-time architecture along with some performance results.
"An Obfuscation-Based Approach against Injection Attacks." F. Baiardi, D. Sgandurra. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.17.
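The block-sequencing logic described above can be illustrated with a toy lookup table. The block names and edges are invented for illustration, not taken from the paper:

```python
# Toy illustration of the system block graph: code fragments between
# consecutive system calls are nodes, and the system call that ends a block
# selects the successor. An unknown (current block, system call) pair is the
# signature of injected code and causes the Graph-VM to halt the program.

SYSTEM_BLOCK_GRAPH = {
    ("b0", "open"):  "b1",
    ("b1", "read"):  "b2",
    ("b1", "close"): "b3",
}

def next_block(current_block, syscall):
    """Return the successor block, or None if the system call is not legal
    after this block -- the Graph-VM's cue to halt the program."""
    return SYSTEM_BLOCK_GRAPH.get((current_block, syscall))

assert next_block("b0", "open") == "b1"
assert next_block("b1", "exec") is None  # unexpected syscall: halt
```

A full implementation would additionally check a stored hash of the current block before resuming the Blocks-VM, as the abstract describes.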
Enterprises are confronted with an increasing amount of data. This data overload makes it difficult to provide knowledge workers and decision-makers with the information they need. Particularly challenging in this context is the integrated provision of both structured and unstructured information depending on the current process context and user, i.e., the context-aware, personalized delivery of process information. Examples of unstructured process information include all kinds of office documents or e-mails. Examples of structured process information are business process models or data from enterprise information systems. Addressing the need for a context-aware, personalized delivery of process information, this paper presents results from three empirical studies: two exploratory case studies from the automotive domain and the healthcare sector, and an online survey among 219 participants. In a first step, we identify and describe problems with respect to process-oriented information management in general and the personalized provision of process information in particular. In a second step, we derive requirements on the user-adequate handling of process information.
"On the Context-aware, Personalized Delivery of Process Information: Viewpoints, Problems, and Requirements." Markus Hipp, Bela Mutschler, M. Reichert. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.65.
The paper studies probabilistic methods to manage the dependability of a distributed networked system (DIS) in the presence of inaccuracies and partial knowledge of system models pertaining to failures. A DIS that collects raw data from sensors deployed in the field and delivers composite data to an end-user is faced with two types of uncertainties: at the 'information level', due to the multi-modal nature of raw data collected from the environment, and at the 'control level', due to incomplete knowledge about the application model. These have a compounded effect on the quality of fault-tolerance exhibited by a DIS. Based on service-layer abstractions, the paper identifies application-oriented metrics to quantify the quality of information flowing through a DIS. Even with imperfect information, the paper demonstrates how the high-level quality metrics and control algorithms enable achieving a reasonable degree of fault-tolerance in a probabilistic manner. A case study of replicated web services is also described.
"Probabilistic Fault-tolerance of Distributed Services: A Paradigm for Dependable Applications." K. Ravindran. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.21.
Sharing personal information benefits both data providers and data consumers in many ways. Recent advances in sensor networks and personal archives enable users to record personal information including emails, social networking activities, or life events (life logging). These information objects are usually privacy sensitive and thus need to be protected adequately when being shared. In this work, we present a lightweight pseudonymization framework which allows users to benefit from sharing their personal information while still preserving their privacy. Furthermore, this approach increases the data owners' awareness of what information they are sharing, thus rendering data publishing more transparent.
"LiDSec - A Lightweight Pseudonymization Approach for Privacy-Preserving Publishing of Textual Personal Information." Reza Rawassizadeh, Johannes Heurix, Soheil Khosravipour, A. Tjoa. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.93.
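The kind of consistent, owner-controlled pseudonymization the abstract describes can be sketched in a few lines. The keyed-hash scheme below is an assumption made for illustration, not LiDSec's actual algorithm:

```python
# Minimal pseudonymization sketch (not LiDSec's algorithm): each sensitive
# term is consistently replaced by a pseudonym derived with a key held only
# by the data owner, so published text keeps its structure but not the
# identities it contains.

import hashlib
import hmac

SECRET_KEY = b"owner-secret"  # held by the data owner only (hypothetical)

def pseudonym(term):
    # A keyed hash yields a stable pseudonym per term; without the key,
    # pseudonyms cannot be linked back to the original terms.
    tag = hmac.new(SECRET_KEY, term.encode(), hashlib.sha256).hexdigest()[:8]
    return f"PSN_{tag}"

def pseudonymize(text, sensitive_terms):
    for term in sensitive_terms:
        text = text.replace(term, pseudonym(term))
    return text

out = pseudonymize("Alice emailed Bob about Alice's trip.", ["Alice", "Bob"])
assert "Alice" not in out and "Bob" not in out
# The same term always maps to the same pseudonym:
assert pseudonym("Alice") == pseudonym("Alice")
```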
Adware represents a possible threat to the security and privacy of computer users. Traditional signature-based and heuristic-based methods have not been proven successful at detecting this type of software. This paper presents an adware detection approach based on the application of data mining to disassembled code. The main contributions of the paper are a large publicly available adware data set, an accurate adware detection algorithm, and an extensive empirical evaluation of several candidate machine learning techniques that can be used in conjunction with the algorithm. We have extracted sequences of opcodes from adware and benign software and then applied feature selection, using different configurations, to obtain 63 data sets. Six data mining algorithms have been evaluated on these data sets in order to find an efficient and accurate detector. Our experimental results show that the proposed approach can be used to accurately detect both novel and known adware instances, even though the binary difference between adware and legitimate software is usually small.
"Accurate Adware Detection Using Opcode Sequence Extraction." R. Shahzad, Niklas Lavesson, H. Johnson. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.35.
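The opcode-sequence features described above can be illustrated briefly. The opcodes, n-gram size, and vocabulary below are invented for illustration and are not the paper's configuration:

```python
# Sketch of opcode-sequence features: disassembled instructions are reduced
# to opcode n-grams, which (after feature selection) become binary features
# for a machine-learning classifier.

from collections import Counter

def opcode_ngrams(opcodes, n=2):
    """Count n-grams over an opcode sequence taken from disassembled code."""
    return Counter(tuple(opcodes[i:i + n]) for i in range(len(opcodes) - n + 1))

sample = ["push", "mov", "call", "mov", "call", "ret"]
grams = opcode_ngrams(sample, n=2)
# ("mov", "call") occurs twice in this sample:
assert grams[("mov", "call")] == 2

def feature_vector(grams, vocabulary):
    # Binary presence features over a fixed n-gram vocabulary, the form
    # typically fed to a classifier after feature selection.
    return [1 if g in grams else 0 for g in vocabulary]

vocab = [("push", "mov"), ("mov", "call"), ("call", "ret")]
assert feature_vector(grams, vocab) == [1, 1, 1]
```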
This paper reports on ongoing work on SWAT, a new toolkit for security workflow analysis. SWAT provides a platform for the realization and testing of well-founded methods to detect information leaks in workflows, both for workflow certification and for audits based on execution traces. Besides presenting SWAT's functionality and high-level architecture, an example illustrates its operation.
"SWAT: A Security Workflow Analysis Toolkit for Reliably Secure Process-aware Information Systems." R. Accorsi, Claus Wonnemann, S. Dochow. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.108.
Information security management is a complex task that involves the implementation and monitoring of more than 130 security controls. To achieve greater efficiency in this process, it is necessary to automate as many controls as possible. This paper provides an analysis of how many controls can be automated, based on the standards ISO 27001 and NIST SP800-53. Furthermore, we take into account the automation potential of the controls included in the Consensus Audit Guidelines. Finally, we provide an overview of security applications that support automation in the operation of information security controls, to increase the efficiency of information security management.
"Information Security Automation: How Far Can We Go?" Raydel Montesino, Stefan Fenz. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.48.
As information systems extensively exchange information between participants, privacy concerns may arise from the potential misuse of that information. A Privacy by Design (PbD) approach considers the privacy requirements of different stakeholders during the design and implementation of a system. Currently, a comprehensive approach for privacy requirement engineering, implementation, and verification is largely missing. This paper extends current design methods with additional (formal) steps that take advantage of ontologies. The proposed extensions result in a systematic approach that better protects privacy in future information systems.
"Privacy Verification Using Ontologies." M. Kost, J. Freytag, F. Kargl, A. Kung. 2011 Sixth International Conference on Availability, Reliability and Security, August 22, 2011. DOI: 10.1109/ARES.2011.97.