Axel Schröpfer, A. Schaad, F. Kerschbaum, H. Boehm, Joerg Jooss
Benchmarking is the comparison of one company's key performance indicators (KPIs) to statistics of the same KPIs across its peer group. A KPI is a statistical quantity measuring the performance of a business process. Privacy, in the sense of controlling access to data, is of the utmost importance in benchmarking: companies are reluctant to share their business performance data for fear of losing a competitive advantage or being embarrassed. We present a cryptographic protocol for securely computing benchmarks between multiple parties and describe the technical aspects of a proof-of-concept implementation of SAP's research prototype Global Benchmarking Service (GBS) on Microsoft's cloud technology Windows Azure.
"Secure benchmarking in the cloud." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 197-200, 2013. DOI: 10.1145/2462410.2462430.
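The core idea behind computing a peer-group statistic without revealing any single company's KPI can be conveyed with a toy additive secret-sharing scheme. This is a simplification for illustration only, not the GBS protocol itself; all names and the modulus below are assumptions.

```python
import random

MODULUS = 2**31 - 1  # arbitrary illustrative modulus, larger than any KPI sum

def share(value, n, modulus=MODULUS):
    """Split `value` into n additive shares that sum to `value` mod `modulus`."""
    shares = [random.randrange(modulus) for _ in range(n - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def benchmark_mean(kpis, modulus=MODULUS):
    """Each party distributes one share to every party; summing the per-party
    column totals reveals only the aggregate, never an individual KPI."""
    n = len(kpis)
    columns = [0] * n
    for kpi in kpis:
        for i, s in enumerate(share(kpi, n, modulus)):
            columns[i] = (columns[i] + s) % modulus
    total = sum(columns) % modulus   # equals sum(kpis) when it fits in the modulus
    return total / n
```

Individual shares are uniformly random, so no single column reveals a company's input; only the reconstructed total does.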
In this demonstration paper, we describe the implementation of a versatile access control prototype based on multimodal biometrics and graphical passwords, designed and developed for today's mobile, multichannel, multiservice, and usability-demanding world. The BYOD scenario was also considered, to address the challenges of protecting both the corporate and the personal information that coexist on mobile devices whose boundaries have all but disappeared.
B. Botelho, D. Pelluzi, E. Nakamura. "A versatile access control implementation: secure box." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 249-252, 2013. DOI: 10.1145/2462410.2463208.
Ian Molloy, Mahesh V. Tripunitara, V. Lotz, M. Kuhlmann, C. Schaufler, V. Atluri
This panel will address the following question: does an increase in the granularity of access control systems produce a measurable reduction in risk and help meet the goals of the organization, or is the cost prohibitively high? After decades of access control research, products, and practice, there has been a trend towards more complex access control policies and models that more finely restrict (or allow) access to resources. This allows policy administrators to specify more closely any high-level abstract policy they may have in mind, or to accurately enforce regulations such as HIPAA, SOX, or PCI. The end goal is to allow only those actions that are desirable in hindsight, an approach that Bishop et al. refer to as the Oracle Policy. As the expressive power of access control models can vary, an administrator may need a more powerful model to specify the high-level policy required for a particular application. It is not uncommon for new models to add new key attributes, data sources, features, or relations to provide a richer set of tools. This has resulted in an explosion of new one-off models in the literature, few of which make their way into real products or deployments. To increase the expressive power of a model, increase its granularity, reduce the complexity of administration, and answer desirable security queries such as safety, a plethora of new concepts have been added to access control models. To name a few: groups and roles; hierarchies and constraints; parameterized permissions; exceptions; time and location of users and resources; relationships between subjects; attributes of subjects, objects, and actions; information flow; conflict-of-interest classes; obligations; trust, benefit, and risk; workflows; delegation; situational awareness and context; and so on. All of these constructs build up to a meta-model, as Barker observes.
This granularity has resulted in many novel and useful findings, new algorithms, and challenging open research issues, but it poses potential problems as well. With granularity often comes complexity, which manifests itself in specifying policies, managing and maintaining policies over time, and auditing logs to ensure compliance. This panel will discuss issues surrounding the problem of complexity in access control: designing and specifying new models, designing enforcement mechanisms for real-world systems, the policy lifecycle, and the role of analytics, from automatically generating policies to auditing logs. So, is this complexity worth it? Does increasing the granularity produce a measurable reduction in the risk to sensitive resources and protect the goals of the organization, or is the cost prohibitively high? Can we ever truly specify a "correct" and "complete" policy, which may be too dynamic and require the interpretation of the courts to decide, especially when policies are intended to enforce ambiguous regulations? Finally, at what cost should we strive for a perfect, fine-grained policy?
"Panel on granularity in access control." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 85-86, 2013. DOI: 10.1145/2462410.2462889.
Recently, the importance of including obligations in access control systems for privilege management, for example in healthcare information systems, has been well recognized. In an access control system, an a posteriori obligation states which actions a user must perform after accessing a resource. There is no guarantee that a user will fulfill a posteriori obligations, and unfulfilled obligations may incur financial loss, or loss of goodwill and productivity, for the organization. In this paper, we propose a trust-and-obligation-based framework that reduces an organization's risk exposure associated with a posteriori obligations. We propose a methodology for assigning trust values to users that indicate how trustworthy they are with regard to fulfilling their obligations. When access requests that trigger a posteriori obligations are evaluated, the requesting users' trust values and the criticality of the associated obligations are taken into account. Our framework detects and mitigates insider attacks and unintentional damage that may result from violating a posteriori obligations, and it also provides mechanisms to detect misconfigurations of obligation policies. We evaluate our framework through simulations and demonstrate its effectiveness.
N. Baracaldo, J. Joshi. "Beyond accountability: using obligations to reduce risk exposure and deter insider attacks." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 213-224, 2013. DOI: 10.1145/2462410.2462411.
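The interplay of trust values and obligation criticality described above can be sketched as follows. This is a hypothetical minimal model, not the paper's methodology: the initial trust value, reward, and penalty parameters are all assumptions.

```python
class ObligationMonitor:
    """Toy trust-based gate for requests that trigger a posteriori obligations."""

    def __init__(self, initial_trust=0.5, reward=0.05, penalty=0.2):
        self.trust = {}                  # user -> trust value in [0, 1]
        self.initial_trust = initial_trust
        self.reward = reward             # trust gained per fulfilled obligation
        self.penalty = penalty           # trust lost per violated obligation

    def get_trust(self, user):
        return self.trust.setdefault(user, self.initial_trust)

    def authorize(self, user, criticality):
        """Grant access only if the user's trust covers the obligation's
        criticality (both in [0, 1]): more critical obligations demand
        more trustworthy users."""
        return self.get_trust(user) >= criticality

    def report(self, user, fulfilled):
        """Adjust trust once the a posteriori obligation's deadline passes."""
        t = self.get_trust(user)
        t = min(1.0, t + self.reward) if fulfilled else max(0.0, t - self.penalty)
        self.trust[user] = t
        return t
```

Penalizing violations more heavily than rewarding fulfillment (here 0.2 vs. 0.05) makes regaining lost trust slow, which is one plausible way to deter repeat violators.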
We motivate and address the problem of testing for properties of interest in real-world implementations of authorization systems. We adopt a 4-stage process: (1) express a property precisely using existential second-order logic, (2) establish types of traces that are necessary and sufficient to establish a property, (3) adopt finitizing assumptions and show that under those assumptions, verifying a property is in PSPACE, and, (4) use a model-checker as a trace-generator to generate instances of traces, and exercise the implementation to check for those traces. We discuss our design of a corresponding testing-system, and its use to test for qualitatively different kinds of properties in two commercial authorization systems. One is a database system that we call the D system, and the other is a file-sharing system that we call the I system. (We use pseudonyms at the request of the respective vendors.) In the context of the D system, our testing has uncovered several issues with its authorization system in the context of procedures that aggregate SQL statements that, to our knowledge, are new to the research literature. For the I system, we have established that it possesses several properties of interest.
{"title":"Property-testing real-world authorization systems","authors":"A. Sharifi, P. Bottinelli, Mahesh V. Tripunitara","doi":"10.1145/2462410.2463207","DOIUrl":"https://doi.org/10.1145/2462410.2463207","url":null,"abstract":"We motivate and address the problem of testing for properties of interest in real-world implementations of authorization systems. We adopt a 4-stage process: (1) express a property precisely using existential second-order logic, (2) establish types of traces that are necessary and sufficient to establish a property, (3) adopt finitizing assumptions and show that under those assumptions, verifying a property is in PSPACE, and, (4) use a model-checker as a trace-generator to generate instances of traces, and exercise the implementation to check for those traces. We discuss our design of a corresponding testing-system, and its use to test for qualitatively different kinds of properties in two commercial authorization systems. One is a database system that we call the D system, and the other is a file-sharing system that we call the I system. (We use pseudonyms at the request of the respective vendors.) In the context of the D system, our testing has uncovered several issues with its authorization system in the context of procedures that aggregate SQL statements that, to our knowledge, are new to the research literature. For the I system, we have established that it possesses several properties of interest.","PeriodicalId":74509,"journal":{"name":"Proceedings of the ... ACM symposium on access control models and technologies. 
ACM Symposium on Access Control Models and Technologies","volume":"90 3 1","pages":"225-236"},"PeriodicalIF":0.0,"publicationDate":"2013-06-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76659584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the near future, clouds will provide situational monitoring services over streaming data. Examples of such services include health monitoring, stock market monitoring, shopping cart monitoring, and emergency control and threat management. Offering such services requires securely processing data streams generated by multiple, possibly competing and/or complementing, organizations. Processing these data streams must also not cause any overt or covert leakage of information across organizations. We propose an information flow control model, adapted from the Chinese Wall policy, that protects against sensitive data disclosure. We propose architectures suitable for securely and efficiently processing streaming information belonging to different organizations, and we discuss how performance can be further improved by sharing the processing of multiple queries. We demonstrate the feasibility of our approach by implementing a prototype of our system and measuring the overhead incurred by the information flow constraints.
Xing Xie, I. Ray, R. Adaikkalavan, R. Gamble. "Information flow control for stream processing in clouds." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 89-100, 2013. DOI: 10.1145/2462410.2463205.
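The classic Chinese Wall rule that the above model adapts can be stated in a few lines: once a processing node has handled a stream from one organization in a conflict-of-interest class, streams from that organization's competitors must be denied on the same node. The sketch below is illustrative (the organizations and classes are invented), not the paper's model.

```python
# Map each organization to its conflict-of-interest (COI) class (illustrative data).
COI_CLASSES = {"bank-A": "banking", "bank-B": "banking", "oil-C": "energy"}

def make_monitor():
    history = {}  # node -> {coi_class -> organization already processed there}

    def can_access(node, org):
        """Chinese Wall check: deny if this node already processed a
        *different* organization from the same COI class."""
        coi = COI_CLASSES[org]
        seen = history.setdefault(node, {})
        if coi in seen and seen[coi] != org:
            return False          # would mix competing organizations' data
        seen[coi] = org           # record the choice; the wall is now fixed
        return True

    return can_access
```

Note that the policy is history-dependent: which accesses are legal depends on the order of prior accesses, which is why the paper's prototype must track per-node state.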
To date, most work regarding the formal analysis of access control schemes has focused on quantifying and comparing the expressive power of a set of schemes. Although expressive power is important, it is a property that exists in an *absolute* sense, detached from the application context within which an access control scheme will ultimately be deployed. By contrast, we formalize the access control *suitability analysis problem*, which seeks to evaluate the degree to which a set of candidate access control schemes can meet the needs of an application-specific workload. This process involves both reductions to assess whether a scheme is *capable* of implementing a workload (qualitative analysis), as well as cost analysis using ordered measures to quantify the *overheads* of using each candidate scheme to service the workload (quantitative analysis). We formalize the two-facet suitability analysis problem, which formally describes this task. We then develop a mathematical framework for this type of analysis, and evaluate this framework both formally, by quantifying its efficiency and accuracy properties, and practically, by exploring an academic program committee workload.
{"title":"An actor-based, application-aware access control evaluation framework","authors":"W. C. Garrison, Adam J. Lee, Timothy L. Hinrichs","doi":"10.1145/2613087.2613099","DOIUrl":"https://doi.org/10.1145/2613087.2613099","url":null,"abstract":"To date, most work regarding the formal analysis of access control schemes has focused on quantifying and comparing the expressive power of a set of schemes. Although expressive power is important, it is a property that exists in an *absolute* sense, detached from the application context within which an access control scheme will ultimately be deployed. By contrast, we formalize the access control *suitability analysis problem*, which seeks to evaluate the degree to which a set of candidate access control schemes can meet the needs of an application-specific workload. This process involves both reductions to assess whether a scheme is *capable* of implementing a workload (qualitative analysis), as well as cost analysis using ordered measures to quantify the *overheads* of using each candidate scheme to service the workload (quantitative analysis). We formalize the two-facet suitability analysis problem, which formally describes this task. We then develop a mathematical framework for this type of analysis, and evaluate this framework both formally, by quantifying its efficiency and accuracy properties, and practically, by exploring an academic program committee workload.","PeriodicalId":74509,"journal":{"name":"Proceedings of the ... ACM symposium on access control models and technologies. 
ACM Symposium on Access Control Models and Technologies","volume":"481 1","pages":"199-210"},"PeriodicalIF":0.0,"publicationDate":"2013-02-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79960527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A workflow specification defines a set of steps and the order in which those steps must be executed. Security requirements and business rules may impose constraints on which users are permitted to perform those steps. A workflow specification is said to be satisfiable if there exists an assignment of authorized users to workflow steps that satisfies all the constraints. An algorithm for determining whether such an assignment exists is important, both as a static analysis tool for workflow specifications and for the construction of run-time reference monitors for workflow management systems. We develop new methods for determining workflow satisfiability based on the concept of constraint expressions, which were introduced recently by Khan and Fong. These methods are surprisingly versatile, enabling us to develop algorithms for, and determine the complexity of, a number of different problems related to workflow satisfiability.
J. Crampton, G. Gutin. "Constraint expressions and workflow satisfiability." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 73-84, 2013. DOI: 10.1145/2462410.2462419.
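The decision problem itself is easy to state in code. The brute-force search below only illustrates what satisfiability means (the paper's constraint-expression methods are far more refined); the step names, authorization sets, and separation-of-duty predicate are invented for the example.

```python
from itertools import product

def satisfiable(steps, authorized, constraints):
    """Return a satisfying assignment of users to steps, or None.

    steps:       list of step names, in execution order.
    authorized:  step -> set of users allowed to perform that step.
    constraints: list of (step1, step2, predicate(user1, user2)) triples,
                 e.g. separation of duty is predicate lambda u, v: u != v.
    """
    domains = [sorted(authorized[s]) for s in steps]
    for users in product(*domains):           # try every authorized assignment
        plan = dict(zip(steps, users))
        if all(pred(plan[s1], plan[s2]) for s1, s2, pred in constraints):
            return plan
    return None
```

For example, with steps `s1` and `s2`, `authorized = {"s1": {"alice", "bob"}, "s2": {"alice"}}`, and a separation-of-duty constraint between them, the only satisfying plan assigns `bob` to `s1` and `alice` to `s2`. The exhaustive search is exponential in the number of steps, which is precisely why the complexity results in the paper matter.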
Distributed usage control is concerned with how data may or may not be used after initial access to it has been granted and is therefore particularly important in distributed system environments. We present an application- and application-protocol-independent infrastructure that allows for the enforcement of usage control policies in a distributed environment. We instantiate the infrastructure for transferring files using FTP and for a scenario where smart meters are connected to a Facebook application.
Florian Kelbert, A. Pretschner. "Towards a policy enforcement infrastructure for distributed usage control." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 119-122, 2012. DOI: 10.1145/2295136.2295159.
Santiago Pina Ros, Mario Lischka, Félix Gómez Mármol
The amount of private information on the Internet is constantly increasing with the explosive growth of cloud computing and social networks. XACML is one of the most important standards for specifying access control policies for web services. As the number of XACML policies grows rapidly, evaluation times grow with it. The XEngine approach rearranges the matching tree according to the attributes used in the target sections, but for speed it supports only equality of attribute values, and for fast termination it transforms the combining algorithms into a single first-applicable policy, which does not handle obligations correctly. Our approach supports all comparison functions defined in XACML as well as obligations. In this paper we propose an optimization of XACML policy evaluation based on two tree structures: the first, called the Matching Tree, is built for fast lookup of applicable rules; the second, called the Combining Tree, is used to evaluate the applicable rules. Finally, we propose an exploration method for the Matching Tree based on binary search. Experimental results show that our approach outperforms Sun's PDP by orders of magnitude.
"Graph-based XACML evaluation." In Proceedings of the ACM Symposium on Access Control Models and Technologies (SACMAT), pp. 83-92, 2012. DOI: 10.1145/2295136.2295153.
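The two-phase structure described above, fast matching followed by proper rule combining, can be sketched in miniature: a sorted index plays the role of the Matching Tree (queried via binary search) and a standard XACML combining algorithm, deny-overrides, resolves the surviving rules. This is only a flat illustration of the idea, not the paper's tree structures; the rule data is invented.

```python
from bisect import bisect_left

def build_matching_index(rules):
    """rules: list of (resource, effect) pairs, e.g. ("file1", "Permit").
    Sorting by resource enables binary search, standing in for the Matching Tree."""
    return sorted(rules)

def applicable(index, resource):
    """Binary-search the index and collect effects of all rules whose
    target matches `resource`."""
    i = bisect_left(index, (resource, ""))
    effects = []
    while i < len(index) and index[i][0] == resource:
        effects.append(index[i][1])
        i += 1
    return effects

def deny_overrides(effects):
    """XACML deny-overrides combining: any Deny wins over any Permit."""
    if "Deny" in effects:
        return "Deny"
    return "Permit" if "Permit" in effects else "NotApplicable"
```

Separating matching from combining is what lets the authors keep all XACML combining algorithms (and their obligations) intact, instead of flattening everything into first-applicable as XEngine does.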