Safety in automated trust negotiation
Pub Date: 2004-05-09. DOI: 10.1145/1178618.1178623
W. Winsborough, Ninghui Li
Exchange of attribute credentials is a means to establish mutual trust between strangers wishing to share resources or conduct business transactions. Automated Trust Negotiation (ATN) is an approach to regulating the exchange of sensitive information during this process. It treats credentials as potentially sensitive resources, access to which is under policy control. Negotiations that correctly enforce policies have been called safe in the literature. Prior work on ATN lacks an adequate definition of this safety notion, in large part because fundamental questions such as "What needs to be protected in ATN?" and "What are the security requirements?" have not been adequately answered. As a result, many prior ATN methods have serious security holes. We introduce a formal framework for ATN in which we give precise, usable, and intuitive definitions of correct policy enforcement. We argue that our chief safety notion captures intuitive security goals under both possibilistic and probabilistic analysis. We give precise comparisons of this notion with two alternative safety notions that may seem intuitive but prove inadequate under closer inspection. Finally, we prove that an ATN approach from the literature meets the requirements set forth in the preferred safety definition, thus validating both the safety of that approach and the usability of the definition.
{"title":"Safety in automated trust negotiation","authors":"W. Winsborough, Ninghui Li","doi":"10.1145/1178618.1178623","DOIUrl":"https://doi.org/10.1145/1178618.1178623","url":null,"abstract":"Exchange of attribute credentials is a means to establish mutual trust between strangers wishing to share resources or conduct business transactions. Automated Trust Negotiation (ATN) is an approach to regulate the exchange of sensitive information during this process. It treats credentials as potentially sensitive resources, access to which is under policy control. Negotiations that correctly enforce policies have been called safe in the literature. Prior work on ATN lacks an adequate definition of this safety notion. In large part, this is because fundamental questions such as what needs to be protected in ATN? and what are the security requirements? are not adequately answered. As a result, many prior methods of ATN have serious security holes. We introduce a formal framework for ATN in which we give precise, usable, and intuitive definitions of correct enforcement of policies in ATN. We argue that our chief safety notion captures intuitive security goals under both possibilistic and probabilistic analysis. We give precise comparisons of this notion with two alternative safety notions that may seem intuitive, but that are seen to be inadequate under closer inspection. We prove that an approach to ATN from the literature meets the requirements set forth in the preferred safety definition, thus validating the safety of that approach, as well as the usability of the definition.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115660318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Analysis of an electronic voting system
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301313
Tadayoshi Kohno, A. Stubblefield, A. Rubin, D. Wallach
With significant U.S. federal funds now available to replace outdated punch-card and mechanical voting systems, municipalities and states throughout the U.S. are adopting paperless electronic voting systems from a number of different vendors. We present a security analysis of the source code to one such machine used in a significant share of the market. Our analysis shows that this voting system is far below even the most minimal security standards applicable in other contexts. We identify several problems including unauthorized privilege escalation, incorrect use of cryptography, vulnerabilities to network threats, and poor software development processes. We show that voters, without any insider privileges, can cast unlimited votes without being detected by any mechanisms within the voting terminal software. Furthermore, we show that even the most serious of our outsider attacks could have been discovered and executed without access to the source code. In the face of such attacks, the usual worries about insider threats are not the only concerns; outsiders can do the damage. That said, we demonstrate that the insider threat is also quite considerable, showing that not only can an insider, such as a poll worker, modify the votes, but that insiders can also violate voter privacy and match votes with the voters who cast them. We conclude that this voting system is unsuitable for use in a general election. Any paperless electronic voting system might suffer similar flaws, despite any certification it could have otherwise received. We suggest that the best solutions are voting systems having a voter-verifiable audit trail, where a computerized voting system might print a paper ballot that can be read and verified by the voter.
{"title":"Analysis of an electronic voting system","authors":"Tadayoshi Kohno, A. Stubblefield, A. Rubin, D. Wallach","doi":"10.1109/SECPRI.2004.1301313","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301313","url":null,"abstract":"With significant U.S. federal funds now available to replace outdated punch-card and mechanical voting systems, municipalities and states throughout the U.S. are adopting paperless electronic voting systems from a number of different vendors. We present a security analysis of the source code to one such machine used in a significant share of the market. Our analysis shows that this voting system is far below even the most minimal security standards applicable in other contexts. We identify several problems including unauthorized privilege escalation, incorrect use of cryptography, vulnerabilities to network threats, and poor software development processes. We show that voters, without any insider privileges, can cast unlimited votes without being detected by any mechanisms within the voting terminal software. Furthermore, we show that even the most serious of our outsider attacks could have been discovered and executed without access to the source code. In the face of such attacks, the usual worries about insider threats are not the only concerns; outsiders can do the damage. That said, we demonstrate that the insider threat is also quite considerable, showing that not only can an insider, such as a poll worker, modify the votes, but that insiders can also violate voter privacy and match votes with the voters who cast them. We conclude that this voting system is unsuitable for use in a general election. Any paperless electronic voting system might suffer similar flaws, despite any certification it could have otherwise received. We suggest that the best solutions are voting systems having a voter-verifiable audit trail, where a computerized voting system might print a paper ballot that can be read and verified by the voter.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116751440","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast portscan detection using sequential hypothesis testing
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301325
Jaeyeon Jung, V. Paxson, A. Berger, H. Balakrishnan
Attackers routinely perform random portscans of IP addresses to find vulnerable servers to compromise. Network intrusion detection systems (NIDS) attempt to detect such behavior and flag these portscanners as malicious. An important need in such systems is prompt response: the sooner a NIDS detects malice, the lower the resulting damage. At the same time, a NIDS should not falsely implicate benign remote hosts as malicious. Balancing the goals of promptness and accuracy in detecting malicious scanners is a delicate and difficult task. We develop a connection between this problem and the theory of sequential hypothesis testing and show that one can model accesses to local IP addresses as a random walk on one of two stochastic processes, corresponding respectively to the access patterns of benign remote hosts and malicious ones. The detection problem then becomes one of observing a particular trajectory and inferring from it the most likely classification for the remote host. We use this insight to develop TRW (Threshold Random Walk), an online detection algorithm that identifies malicious remote hosts. Using an analysis of traces from two qualitatively different sites, we show that TRW requires a much smaller number of connection attempts (4 or 5 in practice) to detect malicious activity compared to previous schemes, while also providing theoretical bounds on the low (and configurable) probabilities of missed detection and false alarms. In summary, TRW performs significantly faster and also more accurately than other current solutions.
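To make the random-walk mechanics concrete, here is a minimal sketch of a TRW-style detector built on Wald's sequential probability ratio test, as described in the abstract above. The success probabilities for benign and malicious hosts and the target detection/false-alarm rates are illustrative assumptions, not the paper's tuned values.

```python
# Sketch of Threshold Random Walk (TRW): per-host likelihood ratio over
# connection outcomes, compared against two thresholds derived from the
# desired detection rate p_d and false-alarm rate p_f. All parameter
# values below are illustrative assumptions.
class TRW:
    def __init__(self, theta0=0.8, theta1=0.2, p_d=0.99, p_f=0.01):
        # P(connection succeeds | benign) = theta0; | malicious) = theta1
        self.theta0, self.theta1 = theta0, theta1
        self.eta1 = p_d / p_f              # declare malicious at or above this
        self.eta0 = (1 - p_d) / (1 - p_f)  # declare benign at or below this
        self.ratio = {}                    # per-host likelihood ratio

    def observe(self, host, success):
        """Update on one connection attempt; return a verdict or None."""
        lr = self.ratio.get(host, 1.0)
        if success:
            lr *= self.theta1 / self.theta0              # success favors "benign"
        else:
            lr *= (1 - self.theta1) / (1 - self.theta0)  # failure favors "malicious"
        self.ratio[host] = lr
        if lr >= self.eta1:
            return "malicious"
        if lr <= self.eta0:
            return "benign"
        return None  # keep observing

detector = TRW()
for outcome in [False, False, False, False]:  # four failed connection attempts
    verdict = detector.observe("10.0.0.99", outcome)
print(verdict)  # "malicious" after 4 failures, consistent with the 4-5 quoted above
```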
{"title":"Fast portscan detection using sequential hypothesis testing","authors":"Jaeyeon Jung, V. Paxson, A. Berger, H. Balakrishnan","doi":"10.1109/SECPRI.2004.1301325","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301325","url":null,"abstract":"Attackers routinely perform random portscans of IP addresses to find vulnerable servers to compromise. Network intrusion detection systems (NIDS) attempt to detect such behavior and flag these portscanners as malicious. An important need in such systems is prompt response: the sooner a NIDS detects malice, the lower the resulting damage. At the same time, a NIDS should not falsely implicate benign remote hosts as malicious. Balancing the goals of promptness and accuracy in detecting malicious scanners is a delicate and difficult task. We develop a connection between this problem and the theory of sequential hypothesis testing and show that one can model accesses to local IP addresses as a random walk on one of two stochastic processes, corresponding respectively to the access patterns of benign remote hosts and malicious ones. The detection problem then becomes one of observing a particular trajectory and inferring from it the most likely classification for the remote host. We use this insight to develop TRW (Threshold Random Walk), an online detection algorithm that identifies malicious remote hosts. Using an analysis of traces from two qualitatively different sites, we show that TRW requires a much smaller number of connection attempts (4 or 5 in practice) to detect malicious activity compared to previous schemes, while also providing theoretical bounds on the low (and configurable) probabilities of missed detection and false alarms. In summary, TRW performs significantly faster and also more accurately than other current solutions.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129613530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An interleaved hop-by-hop authentication scheme for filtering of injected false data in sensor networks
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301328
Sencun Zhu, Sanjeev Setia, S. Jajodia, P. Ning
Sensor networks are often deployed in unattended environments, leaving them vulnerable to false data injection attacks, in which an adversary injects false data into the network with the goal of deceiving the base station or depleting the resources of the relaying nodes. Standard authentication mechanisms cannot prevent this attack if the adversary has compromised one or a small number of sensor nodes. In this paper, we present an interleaved hop-by-hop authentication scheme that guarantees that the base station will detect any injected false data packets when no more than a certain number t of nodes are compromised. Further, our scheme provides an upper bound B on the number of hops that a false data packet can be forwarded before it is detected and dropped, given up to t colluding compromised nodes. We show that in the worst case B is O(t^2). Through performance analysis, we show that our scheme is efficient with respect to the security it provides, and that it also allows a tradeoff between security and performance.
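As a rough illustration of why compromising at most t nodes does not suffice to forge a report, the sketch below accepts a report only when it carries valid MACs from t+1 distinct nodes. This shows the t+1-MAC principle only; the paper's actual scheme additionally interleaves MAC verification hop by hop along the forwarding path, which this sketch does not model. Keys and names are illustrative assumptions.

```python
# Minimal sketch: a report needs valid MACs from T+1 distinct nodes,
# so an adversary holding at most T keys cannot forge an accepted report.
import hmac, hashlib

T = 3  # design threshold: tolerate up to T compromised nodes
node_keys = {f"node{i}": f"secret-key-{i}".encode() for i in range(10)}

def mac(key, report):
    return hmac.new(key, report, hashlib.sha256).digest()

def endorse(report, endorsers):
    """Each endorsing node attaches a MAC under its own key."""
    return [(n, mac(node_keys[n], report)) for n in endorsers]

def base_station_accepts(report, macs):
    """Accept only reports endorsed by at least T+1 distinct nodes."""
    valid = {n for n, m in macs
             if hmac.compare_digest(m, mac(node_keys[n], report))}
    return len(valid) >= T + 1

report = b"event: temperature=97C at region 4"
print(base_station_accepts(report, endorse(report, [f"node{i}" for i in range(4)])))  # True
# An adversary with only T=3 compromised keys cannot produce T+1 valid MACs:
forged = endorse(b"fake event", [f"node{i}" for i in range(3)])
print(base_station_accepts(b"fake event", forged))  # False
```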
{"title":"An interleaved hop-by-hop authentication scheme for filtering of injected false data in sensor networks","authors":"Sencun Zhu, Sanjeev Setia, S. Jajodia, P. Ning","doi":"10.1109/SECPRI.2004.1301328","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301328","url":null,"abstract":"Sensor networks are often deployed in unattended environments, thus leaving these networks vulnerable to false data injection attacks in which an adversary injects false data into the network with the goal of deceiving the base station or depleting the resources of the relaying nodes. Standard authentication mechanisms cannot prevent this attack if the adversary has compromised one or a small number of sensor nodes. In this paper, we present an interleaved hop-by-hop authentication scheme that guarantees that the base station will detect any injected false data packets when no more than a certain number t nodes are compromised. Further, our scheme provides an upper bound B for the number of hops that a false data packet could be forwarded before it is detected and dropped, given that there are up to t colluding compromised nodes. We show that in the worst case B is O(t/sup 2/). Through performance analysis, we show that our scheme is efficient with respect to the security it provides, and it also allows a tradeoff between security and performance.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130061836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A layered design of discretionary access controls with decidable safety properties
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301315
Jon A. Solworth, R. Sloan
An access control design can be viewed as a three-layered entity: the general access control model; the parameterization of the access control model; and the initial users and objects of the system before it goes live. The design of this three-tiered mechanism can be evaluated according to two broad measures: the expressiveness and the complexity of the system. In particular, the question arises: what security properties can be expressed and verified? We present a general access control model which can be parameterized at the second layer to implement (express) any of the standard Discretionary Access Control (DAC) models. We show that the safety problem is decidable for any access control model implemented using our general access control model. Until now, all general access control models that were known to be sufficiently expressive to implement the full range of DAC models had an undecidable safety problem. Thus, given our model, all of the standard DAC models (plus many others) can be implemented in a system in which their safety properties are decidable.
{"title":"A layered design of discretionary access controls with decidable safety properties","authors":"Jon A. Solworth, R. Sloan","doi":"10.1109/SECPRI.2004.1301315","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301315","url":null,"abstract":"An access control design can be viewed as a three layered entity: the general access control model; the parameterization of the access control model; and the initial users and objects of the system before it goes live. The design of this three-tiered mechanism can be evaluated according to two broad measures, the expressiveness versus the complexity of the system. In particular, the question arises: What security properties can be expressed and verified? We present a general access control model which can be parameterized at the second layer to implement (express) any of the standard Discretionary Access Control (DAC) models. We show that the safety problem is decidable for any access control model implemented using our general access control model. Until now, all general access control models that were known to be sufficiently expressive to implement the full range of DAC models had an undecidable safety problem. Thus, given our model all of the standard DAC models (plus many others) can be implemented in a system in which their safety properties are decidable.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133749429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of mobility and multihoming on transport-protocol security
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301312
T. Aura, P. Nikander, G. Camarillo
The Stream Control Transmission Protocol (SCTP) is a reliable message-based transport protocol developed by the IETF that could replace TCP in some applications. SCTP allows endpoints to have multiple IP addresses for the purposes of fault tolerance. There is ongoing work to extend the SCTP multihoming functions to support dynamic addressing and endpoint mobility. This paper explains how the multihoming and mobility features can be exploited for denial-of-service attacks, connection hijacking, and packet flooding. We propose implementation guidelines for SCTP and changes to the mobility extensions that prevent most of the attacks. The same lessons apply to multihomed TCP variants and other transport-layer protocols that incorporate some flavor of dynamic addressing.
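One countermeasure in this space, analogous to SCTP's heartbeat-based address verification, is to probe a newly claimed peer address with an unguessable nonce before sending it any data, so that a spoofed "add address" message cannot redirect a packet flood to a victim. The sketch below illustrates that idea only; the message format is an assumption, not SCTP's wire format.

```python
# Sketch of reachability verification for a newly claimed peer address:
# send a probe containing a random nonce and only treat the address as
# usable once the nonce is echoed back. An attacker who merely spoofed
# the "add address" message never sees the probe and cannot answer it.
import os

class Association:
    def __init__(self):
        self.verified = set()  # addresses confirmed reachable
        self.pending = {}      # address -> outstanding nonce

    def on_add_address(self, addr, send):
        """Peer claims a new address: probe it instead of trusting it."""
        nonce = os.urandom(8)
        self.pending[addr] = nonce
        send(addr, b"PROBE" + nonce)

    def on_probe_ack(self, addr, echoed_nonce):
        """Only the true holder of addr sees the probe and can echo the nonce."""
        if self.pending.get(addr) == echoed_nonce:
            self.verified.add(addr)
            del self.pending[addr]

    def may_send_data(self, addr):
        return addr in self.verified

a = Association()
a.on_add_address("192.0.2.7", lambda addr, msg: None)  # probe goes out
print(a.may_send_data("192.0.2.7"))  # False until the nonce is echoed back
```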
{"title":"Effects of mobility and multihoming on transport-protocol security","authors":"T. Aura, P. Nikander, G. Camarillo","doi":"10.1109/SECPRI.2004.1301312","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301312","url":null,"abstract":"The Stream Control Transmission Protocol (SCTP) is a reliable message-based transport protocol developed by the IETF that could replace TCP in some applications. SCTP allows endpoints to have multiple IP addresses for the purposes of fault tolerance. There is on-going work to extend the SCTP multihoming functions to support dynamic addressing and endpoint mobility. This paper explains how the multihoming and mobility features can be exploited for denial-of-service attacks, connection hijacking, and packet flooding. We propose implementation guidelines for SCTP and changes to the mobility extensions that prevent most of the attacks. The same lessons apply to multihomed TCP variants and other transport-layer protocols that incorporate some flavor of dynamic addressing.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125160401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large-scale IP traceback in high-speed Internet: practical techniques and theoretical foundation
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301319
Jun Yu Li, Minho Sung, Jun Xu, Erran L. Li
Tracing attack packets to their sources, known as IP traceback, is an important step to counter distributed denial-of-service (DDoS) attacks. In this paper, we propose a novel packet-logging-based (i.e., hash-based) traceback scheme that requires an order of magnitude less processing and storage than the hash-based scheme proposed by Snoeren et al. (2001), and can therefore scale to much higher link speeds (e.g., OC-768). The baseline idea of our approach is to sample and log a small percentage (e.g., 3.3%) of packets. The challenge of this low sampling rate is that much more sophisticated techniques are needed for traceback. Our solution is to construct the attack tree using the correlation between the attack packets sampled by neighboring routers. A scheme using naive independent random sampling does not perform well due to the low correlation between the packets sampled by neighboring routers. We invent a sampling scheme that improves this correlation and the overall efficiency significantly. Another major contribution of this work is a novel information-theoretic framework for our traceback scheme that answers important questions on system parameter tuning and the fundamental trade-off between the resources used for traceback and the traceback accuracy. Simulation results based on real-world network topologies (e.g., Skitter) match very well with results from the information-theoretic analysis. The simulation results also demonstrate that our traceback scheme can achieve high accuracy, and scales very well to a large number of attackers (e.g., 5000+).
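The correlation problem described above can be made concrete with a small sketch: if each router samples based on a hash of invariant packet content rather than an independent coin flip (in the spirit of trajectory sampling), every router on a packet's path makes the same sampling decision. This illustrates correlated sampling in general, not the paper's exact scheme; the hash and rate are assumptions.

```python
# Independent random sampling at ~3.3% means two neighboring routers
# almost never log the same packet, so their logs cannot be correlated.
# Deterministic, content-based sampling fixes that: a packet is either
# logged at every hop it traverses or at none.
import hashlib

SAMPLE_RATE = 0.033  # ~3.3%, the rate quoted in the abstract

def sample(packet_bytes):
    """Deterministic, content-based sampling decision."""
    h = int.from_bytes(hashlib.sha256(packet_bytes).digest()[:8], "big")
    return h / 2**64 < SAMPLE_RATE

# Every router applies the same test, so the attack tree can be rebuilt
# by matching digests between neighboring routers' logs.
logged = [p for p in (f"pkt{i}".encode() for i in range(1000)) if sample(p)]
print(len(logged), "of 1000 packets logged at every router on their paths")
```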
{"title":"Large-scale IP traceback in high-speed Internet: practical techniques and theoretical foundation","authors":"Jun Yu Li, Minho Sung, Jun Xu, Erran L. Li","doi":"10.1109/SECPRI.2004.1301319","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301319","url":null,"abstract":"Tracing attack packets to their sources, known as IP traceback, is an important step to counter distributed denial-of-service (DDoS) attacks. In this paper, we propose a novel packet logging based (i.e., hash-based) traceback scheme that requires an order of magnitude smaller processing and storage cost than the hash-based scheme proposed by Snoeren, et al. (2001), thereby being able to scalable to much higher link speed (e.g., OC-768). The baseline idea of our approach is to sample and log a small percentage (e.g., 3.3%) of packets. The challenge of this low sampling rate is that much more sophisticated techniques need to be used for traceback. Our solution is to construct the attack tree using the correlation between the attack packets sampled by neighboring routers. The scheme using naive independent random sampling does not perform well due to the low correlation between the packets sampled by neighboring routers. We invent a sampling scheme that improves this correlation and the overall efficiency significantly. Another major contribution of this work is that we introduce a novel information-theoretic framework for our traceback scheme to answer important questions on system parameter tuning and the fundamental trade-off between the resource used for traceback and the traceback accuracy. Simulation results based on real-world network topologies (e.g. Skitter) match very well with results from the information-theoretic analysis. The simulation results also demonstrate that our traceback scheme can achieve high accuracy, and scale very well to a large number of attackers (e.g., 5000+).","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125234232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Formalizing sensitivity in static analysis for intrusion detection
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301324
H. Feng, Jonathon T. Giffin, Yong Huang, S. Jha, Wenke Lee, B. Miller
A key function of a host-based intrusion detection system is to monitor program execution. Models constructed using static analysis have the highly desirable feature that they do not produce false alarms; however, they may still miss attacks. Prior work has shown a trade-off between efficiency and precision. In particular, the more accurate models based upon pushdown automata (PDA) are very inefficient to operate due to non-determinism in stack activity. In this paper, we present techniques for determinizing PDA models. We first provide a formal analysis framework for PDA models and introduce the concepts of determinism and stack-determinism. We then present the VP-Static model, which achieves determinism by extracting information about the stack activity of the program, and the Dyck model, which achieves stack-determinism by transforming the program and inserting code to expose program state. Our results show that in run-time monitoring, our models slow execution of our test programs by 1% to 135%. This shows that reasonable efficiency need not be sacrificed for model precision. We also compare the two models and discover that deterministic PDA are more efficient, although stack-deterministic PDA require less memory.
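A minimal sketch of the Dyck-model idea follows: instrumentation emits a unique push symbol when entering a call site and the matching pop symbol when leaving it, so the monitor can track stack state deterministically and reject impossible call/return sequences. The event encoding and policy below are illustrative assumptions, not the paper's instrumentation.

```python
# Sketch of a Dyck-model monitor: exposed push/pop events keep the
# monitor's stack in sync with the program, so system calls can be
# checked against the calling context with no non-determinism.
class DyckMonitor:
    def __init__(self, allowed_calls):
        self.stack = []
        self.allowed = allowed_calls  # syscalls permitted per call-site context

    def event(self, ev):
        kind, sym = ev
        if kind == "push":      # entering call site `sym`
            self.stack.append(sym)
        elif kind == "pop":     # returning from call site `sym`
            if not self.stack or self.stack.pop() != sym:
                raise RuntimeError("impossible return: intrusion suspected")
        elif kind == "syscall":  # system call observed in current context
            ctx = self.stack[-1] if self.stack else None
            if sym not in self.allowed.get(ctx, set()):
                raise RuntimeError(f"syscall {sym} not allowed in context {ctx}")

m = DyckMonitor({"site_A": {"open", "read"}, "site_B": {"write"}})
for ev in [("push", "site_A"), ("syscall", "open"),
           ("push", "site_B"), ("syscall", "write"),
           ("pop", "site_B"), ("syscall", "read"), ("pop", "site_A")]:
    m.event(ev)
print("trace accepted")
```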
{"title":"Formalizing sensitivity in static analysis for intrusion detection","authors":"H. Feng, Jonathon T. Giffin, Yong Huang, S. Jha, Wenke Lee, B. Miller","doi":"10.1109/SECPRI.2004.1301324","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301324","url":null,"abstract":"A key function of a host-based intrusion detection system is to monitor program execution. Models constructed using static analysis have the highly desirable feature that they do not produce false alarms; however, they may still miss attacks. Prior work has shown a trade-off between efficiency and precision. In particular, the more accurate models based upon pushdown automata (PDA) are very inefficient to operate due to non-determinism in stack activity. In this paper, we present techniques for determinizing PDA models. We first provide a formal analysis framework of PDA models and introduce the concepts of determinism and stack-determinism. We then present the VP-Static model, which achieves determinism by extracting information about stack activity of the program, and the Dyck model, which achieves stack-determinism by transforming the program and inserting code to expose program state. Our results show that in run-time monitoring, our models slow execution of our test programs by 1% to 135%. This shows that reasonable efficiency needs not be sacrificed for model precision. We also compare the two models and discover that deterministic PDA are more efficient, although stack-deterministic PDA require less memory.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132630057","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Securing OLAP data cubes against privacy breaches
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301322
Lingyu Wang, S. Jajodia, D. Wijesekera
An OLAP (On-line Analytic Processing) system with insufficient security countermeasures may disclose sensitive information and breach an individual's privacy. Both unauthorized accesses and malicious inferences may lead to such inappropriate disclosures. Existing access control models in relational databases are unsuitable for the multi-dimensional data cubes used by OLAP. Inference control methods in statistical databases are expensive and apply to limited situations only. We first devise a flexible framework for specifying authorization objects in data cubes. The framework can partition a data cube both vertically based on dimension hierarchies and horizontally based on slices of data. We then study how to control inferences in data cubes. The proposed method eliminates both unauthorized accesses and malicious inferences. Its effectiveness does not depend on specific types of aggregation functions, external knowledge, or sensitivity criteria. The technique is efficient and readily implementable. Its on-line performance overhead is comparable to that of the minimal security requirement. Its enforcement requires little modification to existing OLAP systems.
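As a toy illustration of the two kinds of partitioning described above, the sketch below carves an authorized view out of a small cube vertically (by rolling months up to quarters along the dimension hierarchy) and horizontally (by restricting to one region slice). Dimension names and the hierarchy are assumptions for illustration, not the paper's framework.

```python
# Sketch of authorization objects over a data cube: a vertical cut sets
# the finest hierarchy level a subject may see; a horizontal cut limits
# the slice of dimension values.
MONTH_TO_QUARTER = {"Jan": "Q1", "Feb": "Q1", "Mar": "Q1", "Apr": "Q2"}

# cube cells: (month, region) -> measure
cube = {("Jan", "East"): 10, ("Feb", "East"): 12,
        ("Jan", "West"): 7,  ("Apr", "West"): 9}

def authorized_view(cube, level, region_slice):
    """Vertical cut: aggregate months up to quarters when level='quarter'.
       Horizontal cut: keep only the permitted region slice."""
    view = {}
    for (month, region), value in cube.items():
        if region != region_slice:
            continue  # horizontal partition: outside the authorized slice
        key = MONTH_TO_QUARTER[month] if level == "quarter" else month
        view[key, region] = view.get((key, region), 0) + value  # vertical rollup
    return view

print(authorized_view(cube, "quarter", "East"))  # {('Q1', 'East'): 22}
```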
{"title":"Securing OLAP data cubes against privacy breaches","authors":"Lingyu Wang, S. Jajodia, D. Wijesekera","doi":"10.1109/SECPRI.2004.1301322","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301322","url":null,"abstract":"An OLAP (On-line Analytic Processing) system with insufficient security countermeasures may disclose sensitive information and breach an individual's privacy. Both unauthorized accesses and malicious inferences may lead to such inappropriate disclosures. Existing access control models in relational databases are unsuitable for the multi-dimensional data cubes used by OLAP. Inference control methods in statistical databases are expensive and apply to limited situations only. We first devise a flexible framework for specifying authorization objects in data cubes. The framework can partition a data cube both vertically based on dimension hierarchies and horizontally based on slices of data. We then study how to control inferences in data cubes. The proposed method eliminates both unauthorized accesses and malicious inferences. Its effectiveness does not depend on specific types of aggregation functions, external knowledge, or sensitivity criteria. The technique is efficient and readily implementable. Its on-line performance overhead is comparable to that of the minimal security requirement. Its enforcement requires little modification to existing OLAP systems.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128897415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An empirical analysis of target-resident DoS filters
Pub Date: 2004-05-09. DOI: 10.1109/SECPRI.2004.1301318
Michael Patrick Collins, M. Reiter
Numerous techniques have been proposed by which an end-system, subjected to a denial-of-service flood, filters the offending traffic. In this paper, we provide an empirical analysis of several such proposals, using traffic recorded at the border of a large network and including real DoS traffic. We focus our analysis on four filtering techniques, two based on the addresses from which the victim server typically receives traffic (static clustering and network-aware clustering), and two based on coarse indications of the path each packet traverses (hop-count filtering and path identifiers). Our analysis reveals challenges facing the proposed techniques in practice, and the implications of these issues for effective filtering. In addition, we compare techniques on equal footing, by evaluating the performance of one scheme under assumptions made by another. We conclude with an interpretation of the results and suggestions for further analysis.
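For concreteness, here is a sketch of one of the four techniques named above, hop-count filtering: the TTL of an arriving packet roughly reveals how many hops it travelled (initial TTLs are almost always 32, 64, 128, or 255), and packets whose inferred hop count disagrees with the table learned for their claimed source are dropped. The learned table and tolerance below are illustrative assumptions.

```python
# Sketch of hop-count filtering: spoofed packets originate from hosts
# whose true network distance to the victim rarely matches the distance
# recorded for the address they claim.
INITIAL_TTLS = (32, 64, 128, 255)

def hop_count(observed_ttl):
    """Distance = nearest plausible initial TTL minus observed TTL."""
    initial = min(t for t in INITIAL_TTLS if t >= observed_ttl)
    return initial - observed_ttl

learned = {"203.0.113.5": 14}  # hop counts seen during normal operation

def accept(src, observed_ttl, tolerance=1):
    expected = learned.get(src)
    if expected is None:
        return True  # no history: cannot filter on hop count
    return abs(hop_count(observed_ttl) - expected) <= tolerance

print(accept("203.0.113.5", 50))   # 64-50 = 14 hops: matches, accepted
print(accept("203.0.113.5", 120))  # 128-120 = 8 hops: mismatch, filtered
```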
{"title":"An empirical analysis of target-resident DoS filters","authors":"Michael Patrick Collins, M. Reiter","doi":"10.1109/SECPRI.2004.1301318","DOIUrl":"https://doi.org/10.1109/SECPRI.2004.1301318","url":null,"abstract":"Numerous techniques have been proposed by which an end-system, subjected to a denial-of-service flood, filters the offending traffic. In this paper, we provide an empirical analysis of several such proposals, using traffic recorded at the border of a large network and including real DoS traffic. We focus our analysis on four filtering techniques, two based on the addresses from which the victim server typically receives traffic (static clustering and network-aware clustering), and two based on coarse indications of the path each packet traverses (hop-count filtering and path identifiers). Our analysis reveals challenges facing the proposed techniques in practice, and the implications of these issues for effective filtering. In addition, we compare techniques on equal footing, by evaluating the performance of one scheme under assumptions made by another. We conclude with an interpretation of the results and suggestions for further analysis.","PeriodicalId":447471,"journal":{"name":"IEEE Symposium on Security and Privacy, 2004. Proceedings. 2004","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115219799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}