Phishing is a plague in cyberspace. Typically, phish detection methods either use human-verified URL blacklists or exploit Web page features via machine learning techniques. However, the former is brittle against newly created phish, and the latter suffers from a scarcity of effective features and a high false positive rate (FP). To alleviate these problems, we propose a layered anti-phishing solution that aims at (1) exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and (2) limiting the FP to a low level via filtering algorithms. Specifically, we propose CANTINA+, the most comprehensive feature-based approach in the literature, including eight novel features, which exploits the HTML Document Object Model (DOM), search engines, and third-party services with machine learning techniques to detect phish. Moreover, we designed two filters to help reduce the FP and achieve runtime speedup. The first is a near-duplicate phish detector that uses hashing to catch highly similar phish. The second is a login form filter, which directly classifies Web pages with no identified login form as legitimate. We extensively evaluated CANTINA+ with two methods on a diverse spectrum of corpora comprising 8118 phish and 4883 legitimate Web pages. In the randomized evaluation, CANTINA+ achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 0.4% FP with 10% training phish. In the time-based evaluation, CANTINA+ also achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 1.4% FP with 20% training phish and a two-week sliding window. Capable of achieving 0.4% FP and over 92% TP, CANTINA+ is a competitive anti-phishing solution.
{"title":"CANTINA+: A Feature-Rich Machine Learning Framework for Detecting Phishing Web Sites","authors":"Guang Xiang, Jason I. Hong, C. Rosé, L. Cranor","doi":"10.1145/2019599.2019606","DOIUrl":"https://doi.org/10.1145/2019599.2019606","url":null,"abstract":"Phishing is a plague in cyberspace. Typically, phish detection methods either use human-verified URL blacklists or exploit Web page features via machine learning techniques. However, the former is frail in terms of new phish, and the latter suffers from the scarcity of effective features and the high false positive rate (FP). To alleviate those problems, we propose a layered anti-phishing solution that aims at (1) exploiting the expressiveness of a rich set of features with machine learning to achieve a high true positive rate (TP) on novel phish, and (2) limiting the FP to a low level via filtering algorithms.\u0000 Specifically, we proposed CANTINA+, the most comprehensive feature-based approach in the literature including eight novel features, which exploits the HTML Document Object Model (DOM), search engines and third party services with machine learning techniques to detect phish. Moreover, we designed two filters to help reduce FP and achieve runtime speedup. The first is a near-duplicate phish detector that uses hashing to catch highly similar phish. The second is a login form filter, which directly classifies Web pages with no identified login form as legitimate.\u0000 We extensively evaluated CANTINA+ with two methods on a diverse spectrum of corpora with 8118 phish and 4883 legitimate Web pages. In the randomized evaluation, CANTINA+ achieved over 92% TP on unique testing phish and over 99% TP on near-duplicate testing phish, and about 0.4% FP with 10% training phish. In the time-based evaluation, CANTINA+ also achieved over 92% TP on unique testing phish, over 99% TP on near-duplicate testing phish, and about 1.4% FP under 20% training phish with a two-week sliding window. Capable of achieving 0.4% FP and over 92% TP, our CANTINA+ has been demonstrated to be a competitive anti-phishing solution.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"37 1","pages":"21:1-21:28"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86827318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tamper-evident seals are used by many states’ election officials on voting machines and ballot boxes, either to protect the computer and software from fraudulent modification or to protect paper ballots from fraudulent substitution or stuffing. Physical tamper-indicating seals can usually be easily defeated, given the way they are typically made and used, and the effectiveness of seals depends on the protocol for their application and inspection. The legitimacy of our elections may therefore depend on whether a particular state’s use of seals is effective to prevent, deter, or detect election fraud. This paper is a case study of the use of seals on voting machines by the State of New Jersey. I conclude that New Jersey’s protocols for the use of tamper-evident seals have not been effective. I conclude with a discussion of the more general problem of seals in democratic elections.
{"title":"Security Seals on Voting Machines: A Case Study","authors":"A. Appel","doi":"10.1145/2019599.2019603","DOIUrl":"https://doi.org/10.1145/2019599.2019603","url":null,"abstract":"Tamper-evident seals are used by many states’ election officials on voting machines and ballot boxes, either to protect the computer and software from fraudulent modification or to protect paper ballots from fraudulent substitution or stuffing. Physical tamper-indicating seals can usually be easily defeated, given they way they are typically made and used; and the effectiveness of seals depends on the protocol for their application and inspection. The legitimacy of our elections may therefore depend on whether a particular state’s use of seals is effective to prevent, deter, or detect election fraud. This paper is a case study of the use of seals on voting machines by the State of New Jersey. I conclude that New Jersey’s protocols for the use of tamper-evident seals have been not at all effective. I conclude with a discussion of the more general problem of seals in democratic elections.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"86 1","pages":"18:1-18:29"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83058375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated inputs are rejected at an early stage of program execution, especially when target programs employ checksum mechanisms to verify the integrity of inputs. This article presents TaintScope, an automatic fuzzing system that uses dynamic taint analysis and symbolic execution techniques to tackle this problem. TaintScope has several novel features: (1) TaintScope is a checksum-aware fuzzing tool. It can identify checksum fields in inputs, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. Furthermore, it can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. (2) TaintScope is a taint-based fuzzing tool working at the x86 binary level. Based on fine-grained dynamic taint tracing, TaintScope identifies the “hot bytes” in a well-formed input that are used in security-sensitive operations (e.g., invoking system/library calls), and then focuses on modifying such bytes with random or boundary values. (3) TaintScope is also a symbolic-execution-based fuzzing tool. It can symbolically evaluate a trace, reason about all possible values that can execute the trace, and then detect potential vulnerabilities on the trace. We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 30 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Flash Player, Google Picasa, and Microsoft Paint. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Vendor patches have been released or are in preparation based on our reports.
{"title":"Checksum-Aware Fuzzing Combined with Dynamic Taint Analysis and Symbolic Execution","authors":"Tielei Wang, Tao Wei, G. Gu, Wei Zou","doi":"10.1145/2019599.2019600","DOIUrl":"https://doi.org/10.1145/2019599.2019600","url":null,"abstract":"Fuzz testing has proven successful in finding security vulnerabilities in large programs. However, traditional fuzz testing tools have a well-known common drawback: they are ineffective if most generated inputs are rejected at the early stage of program running, especially when target programs employ checksum mechanisms to verify the integrity of inputs. This article presents TaintScope, an automatic fuzzing system using dynamic taint analysis and symbolic execution techniques, to tackle the above problem. TaintScope has several novel features: (1) TaintScope is a checksum-aware fuzzing tool. It can identify checksum fields in inputs, accurately locate checksum-based integrity checks by using branch profiling techniques, and bypass such checks via control flow alteration. Furthermore, it can fix checksum values in generated inputs using combined concrete and symbolic execution techniques. (2) TaintScope is a taint-based fuzzing tool working at the x86 binary level. Based on fine-grained dynamic taint tracing, TaintScope identifies the “hot bytes” in a well-formed input that are used in security-sensitive operations (e.g., invoking system/library calls), and then focuses on modifying such bytes with random or boundary values. (3) TaintScope is also a symbolic-execution-based fuzzing tool. It can symbolically evaluate a trace, reason about all possible values that can execute the trace, and then detect potential vulnerabilities on the trace.\u0000 We evaluate TaintScope on a number of large real-world applications. Experimental results show that TaintScope can accurately locate the checksum checks in programs and dramatically improve the effectiveness of fuzz testing. TaintScope has already found 30 previously unknown vulnerabilities in several widely used applications, including Adobe Acrobat, Flash Player, Google Picasa, and Microsoft Paint. Most of these severe vulnerabilities have been confirmed by Secunia and oCERT, and assigned CVE identifiers (such as CVE-2009-1882, CVE-2009-2688). Vendor patches have been released or are in preparation based on our reports.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"21 2 1","pages":"15:1-15:28"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83506501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Basin, Srdjan Capkun, P. Schaller, Benedikt Schmidt
Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization. We present a formal model for reasoning about such physical security protocols. Our model extends standard inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints; for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints, and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle/HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization.
{"title":"Formal Reasoning about Physical Properties of Security Protocols","authors":"D. Basin, Srdjan Capkun, P. Schaller, Benedikt Schmidt","doi":"10.1145/2019599.2019601","DOIUrl":"https://doi.org/10.1145/2019599.2019601","url":null,"abstract":"Traditional security protocols are mainly concerned with authentication and key establishment and rely on predistributed keys and properties of cryptographic operators. In contrast, new application areas are emerging that establish and rely on properties of the physical world. Examples include protocols for secure localization, distance bounding, and secure time synchronization.\u0000 We present a formal model for modeling and reasoning about such physical security protocols. Our model extends standard, inductive, trace-based, symbolic approaches with a formalization of physical properties of the environment, namely communication, location, and time. In particular, communication is subject to physical constraints, for example, message transmission takes time determined by the communication medium used and the distance between nodes. All agents, including intruders, are subject to these constraints and this results in a distributed intruder with restricted, but more realistic, communication capabilities than those of the standard Dolev-Yao intruder. We have formalized our model in Isabelle/HOL and have used it to verify protocols for authenticated ranging, distance bounding, broadcast authentication based on delayed key disclosure, and time synchronization.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"9 1","pages":"16:1-16:28"},"PeriodicalIF":0.0,"publicationDate":"2011-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85997119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This issue of TISSEC includes extended versions of articles selected from the program of the 13th ACM Symposium on Access Control Models and Technologies (SACMAT 2008), which took place from June 11 to 13, 2008, in Estes Park, CO. SACMAT is a successful series of symposia that continues the tradition, first established by the ACM Workshop on Role-Based Access Control, of being the premier forum for the presentation of research results and experience reports on leading-edge issues of access control, including models, systems, applications, and theory. The missions of the symposium are to share novel access control solutions that fulfill the needs of heterogeneous applications and environments and to identify new directions for future research and development. SACMAT gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of access control.

The articles in this special issue were invited for submission from the 20 articles presented at SACMAT 2008, which were themselves selected from 79 submissions by authors in 24 countries in Africa, Asia, Australia, Europe, North America, and South America. All the journal submissions went through an additional thorough review process to further ensure their quality.

The first article, “Detecting and Resolving Policy Misconfigurations in Access-Control Systems” by Lujo Bauer, Scott Garriss, and Michael K. Reiter, applies association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users’ intentions, so that these changes can be instituted before misconfigurations interfere with legitimate accesses. As instituting these changes requires the consent of the appropriate administrator, the article also introduces techniques to automatically determine from whom to seek consent and to minimize the costs of doing so. The proposed techniques are evaluated using data from a deployed access-control system.

The second article, “Authorization Recycling in Hierarchical RBAC Systems” by Qiang Wei, Jason Crampton, Konstantin Beznosov, and Matei Ripeanu, introduces and evaluates mechanisms for authorization “recycling” in RBAC enterprise systems. These mechanisms cache and reuse previous authorization decisions to help address the problem that the policy decision point in distributed applications tends to become a single point of failure and a performance bottleneck. The algorithms that support these mechanisms allow making precise and approximate authorization decisions, thereby masking possible failures of the authorization server and reducing its load.

I would like to thank all the authors for submitting their research results to this special issue and all the reviewers for their insightful comments. I am also grateful to Gene Tsudik, editor-in-chief, for his guidance and help throughout this process.
{"title":"Introduction to special section SACMAT'08","authors":"Ninghui Li","doi":"10.1145/1952982.1952983","DOIUrl":"https://doi.org/10.1145/1952982.1952983","url":null,"abstract":"This issue of TISSEC includes extended versions of articles selected from the program of the 13th ACM Symposium on Access Control Models and Technologies (SACMAT 2008), which took place from June 11 to 13, 2008 in Estes Park, CO. SACMAT is a successful series of symposiums that continue the tradition, first established by the ACM Workshop on Role-Based Access Control, of being the premier forum for the presentation of research results and experience reports on leading-edge issues of access control, including models, systems, applications, and theory. The missions of the symposium are to share novel access control solutions that fulfill the needs of heterogeneous applications and environments and to identify new directions for future research and development. SACMAT gives researchers and practitioners a unique opportunity to share their perspectives with others interested in the various aspects of access control. The articles in this special issue were invited for submission from the 20 articles presented at SACMAT 2008. These were selected from 79 submissions from authors in 24 countries in Africa, Asia, Australia, Europe, North America, and South America. All the journal submissions went through an additional thorough review process to further ensure their quality. The first article “Detecting and Resolving Policy Misconfigurations in Access-Control Systems” by Lujo Bauer, Scott Garriss, and Michael K. Reiter applies association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users’ intention, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses. As instituting these changes requires the consent of the appropriate administrator, the article also introduces techniques to automatically determine from whom to seek consent and to minimize the costs of doing so. The proposed techniques are evaluated using data from a deployed access-control system. The second article “Authorization Recycling in Hierarchical RBAC Systems” by Qiang Wei, Jason Crampton, Konstantin Beznosov, and Matei Ripeanu introduces and evaluates the mechanisms for authorization “recycling” in RBAC enterprise systems. These mechanisms cache and reuse previous authorization decisions to help address the problem that the policy decision point in distributed applications tends to become a single point of failure and a performance bottleneck. The algorithms that support these mechanisms allow making precise and approximate authorization decisions, thereby masking possible failures of the authorization server and reducing its load. I would like to thank all the authors for submitting their research results in this special issue and to all the reviewers for their insightful comments. 
I am also grateful to Gene Tsudik, editor-in-chief, for his guidance and help throughout this process.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"8 1","pages":"1:1"},"PeriodicalIF":0.0,"publicationDate":"2011-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89332187","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area. System monitoring is necessary to ensure the reliable operation of power grids, and state estimation is used in system monitoring to best estimate the power grid state through analysis of meter measurements and power system models. Various techniques have been developed to detect and identify bad measurements, including interacting bad measurements introduced by arbitrary, nonrandom causes. At first glance, it seems that these techniques can also defeat malicious measurements injected by attackers. In this article, we expose a previously unknown vulnerability of existing bad measurement detection algorithms by presenting and analyzing a new class of attacks, called false data injection attacks, against state estimation in electric power grids. Under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations, such attacks can introduce arbitrary errors into certain state variables without being detected by existing algorithms. Moreover, we look at two scenarios, where the attacker is either constrained to specific meters or limited in the resources required to compromise meters. We show that the attacker can systematically and efficiently construct attack vectors in both scenarios to change the results of state estimation in arbitrary ways. We also extend these attacks to generalized false data injection attacks, which can further increase the impact by exploiting measurement errors typically tolerated in state estimation. We demonstrate the success of these attacks through simulation using IEEE test systems, and also discuss the practicality of these attacks and the real-world constraints that limit their effectiveness.
{"title":"False data injection attacks against state estimation in electric power grids","authors":"Yao Liu, P. Ning, M. Reiter","doi":"10.1145/1952982.1952995","DOIUrl":"https://doi.org/10.1145/1952982.1952995","url":null,"abstract":"A power grid is a complex system connecting electric power generators to consumers through power transmission and distribution networks across a large geographical area. System monitoring is necessary to ensure the reliable operation of power grids, and state estimation is used in system monitoring to best estimate the power grid state through analysis of meter measurements and power system models. Various techniques have been developed to detect and identify bad measurements, including interacting bad measurements introduced by arbitrary, nonrandom causes. At first glance, it seems that these techniques can also defeat malicious measurements injected by attackers.\u0000 In this article, we expose an unknown vulnerability of existing bad measurement detection algorithms by presenting and analyzing a new class of attacks, called false data injection attacks, against state estimation in electric power grids. Under the assumption that the attacker can access the current power system configuration information and manipulate the measurements of meters at physically protected locations such as substations, such attacks can introduce arbitrary errors into certain state variables without being detected by existing algorithms. Moreover, we look at two scenarios, where the attacker is either constrained to specific meters or limited in the resources required to compromise meters. We show that the attacker can systematically and efficiently construct attack vectors in both scenarios to change the results of state estimation in arbitrary ways. We also extend these attacks to generalized false data injection attacks, which can further increase the impact by exploiting measurement errors typically tolerated in state estimation. We demonstrate the success of these attacks through simulation using IEEE test systems, and also discuss the practicality of these attacks and the real-world constraints that limit their effectiveness.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"81 1","pages":"13:1-13:33"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87520467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present a novel video stream authentication scheme which combines signature amortization by means of hash chains and an advanced watermarking technique. We propose a new hash chain construction, the Duplex Hash Chain, which allows us to achieve bit-by-bit authentication that is robust to low bit error rates. This construction is well suited for wireless broadcast communications characterized by low packet losses, such as satellite networks. Moreover, neither hardware upgrades nor specific end-user equipment are needed to enjoy the authentication services. The computational overhead on the receiver amounts to only two hashes per block of pictures and a single digital signature verification for the whole received stream. This overhead introduces a provably negligible decrease in video quality. A thorough analysis of the proposed solution is provided in conjunction with extensive simulations.
{"title":"Robust and efficient authentication of video stream broadcasting","authors":"G. Oligeri, S. Chessa, R. D. Pietro, G. Giunta","doi":"10.1145/1952982.1952987","DOIUrl":"https://doi.org/10.1145/1952982.1952987","url":null,"abstract":"We present a novel video stream authentication scheme which combines signature amortization by means of hash chains and an advanced watermarking technique. We propose a new hash chain construction, the Duplex Hash Chain, which allows us to achieve bit-by-bit authentication that is robust to low bit error rates. This construction is well suited for wireless broadcast communications characterized by low packet losses such as in satellite networks. Moreover, neither hardware upgrades nor specific end-user equipment are needed to enjoy the authentication services. The computation overhead experienced on the receiver only sums to two hashes per block of pictures and one digital signature verification for the whole received stream. This overhead introduces a provably negligible decrease in video quality. A thorough analysis of the proposed solution is provided in conjunction with extensive simulations.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"147 1","pages":"5:1-5:25"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74911827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
G. Ateniese, R. Burns, Reza Curtmola, Joseph Herring, O. Khan, Lea Kissner, Zachary N. J. Peterson, D. Song
We introduce a model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking is lightweight and supports large data sets in distributed storage systems. The model is also robust in that it incorporates mechanisms for mitigating arbitrary amounts of data corruption. We present two provably-secure PDP schemes that are more efficient than previous solutions. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. We then propose a generic transformation that adds robustness to any remote data checking scheme based on spot checking. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation. Finally, we conduct an in-depth experimental evaluation to study the tradeoffs in performance, security, and space overheads when adding robustness to a remote data checking scheme.
{"title":"Remote data checking using provable data possession","authors":"G. Ateniese, R. Burns, Reza Curtmola, Joseph Herring, O. Khan, Lea Kissner, Zachary N. J. Peterson, D. Song","doi":"10.1145/1952982.1952994","DOIUrl":"https://doi.org/10.1145/1952982.1952994","url":null,"abstract":"We introduce a model for provable data possession (PDP) that can be used for remote data checking: A client that has stored data at an untrusted server can verify that the server possesses the original data without retrieving it. The model generates probabilistic proofs of possession by sampling random sets of blocks from the server, which drastically reduces I/O costs. The client maintains a constant amount of metadata to verify the proof. The challenge/response protocol transmits a small, constant amount of data, which minimizes network communication. Thus, the PDP model for remote data checking is lightweight and supports large data sets in distributed storage systems. The model is also robust in that it incorporates mechanisms for mitigating arbitrary amounts of data corruption.\u0000 We present two provably-secure PDP schemes that are more efficient than previous solutions. In particular, the overhead at the server is low (or even constant), as opposed to linear in the size of the data. We then propose a generic transformation that adds robustness to any remote data checking scheme based on spot checking. Experiments using our implementation verify the practicality of PDP and reveal that the performance of PDP is bounded by disk I/O and not by cryptographic computation. Finally, we conduct an in-depth experimental evaluation to study the tradeoffs in performance, security, and space overheads when adding robustness to a remote data checking scheme.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"130 1","pages":"12:1-12:34"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89590369","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a lightweight RFID authentication protocol that supports forward and backward security. The only cryptographic mechanism that this protocol uses is a pseudorandom number generator (PRNG) that is shared with the backend server. Authentication is achieved by exchanging a few numbers (3 or 5) drawn from the PRNG. The lookup time is constant, and the protocol can be easily adapted to prevent online man-in-the-middle relay attacks. Security is proven in the UC security framework.
{"title":"Lightweight RFID authentication with forward and backward security","authors":"M. Burmester, J. Munilla","doi":"10.1145/1952982.1952993","DOIUrl":"https://doi.org/10.1145/1952982.1952993","url":null,"abstract":"We propose a lightweight RFID authentication protocol that supports forward and backward security. The only cryptographic mechanism that this protocol uses is a pseudorandom number generator (PRNG) that is shared with the backend Server. Authentication is achieved by exchanging a few numbers (3 or 5) drawn from the PRNG. The lookup time is constant, and the protocol can be easily adapted to prevent online man-in-the-middle relay attacks. Security is proven in the UC security framework.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"54 1","pages":"11:1-11:26"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75803212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Access-control policy misconfigurations that cause requests to be erroneously denied can result in wasted time, user frustration, and, in the context of particular applications (e.g., health care), very severe consequences. In this article we apply association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users' intentions, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses. Instituting these changes requires the consent of the appropriate administrator, of course, and so a primary contribution of our work is how to automatically determine from whom to seek consent and how to minimize the costs of doing so. We show using data from a deployed access-control system that our methods can reduce the number of accesses that would have incurred costly time-of-access delays by 43%, and can correctly predict 58% of the intended policy. These gains are achieved without impacting the total amount of time users spend interacting with the system.
{"title":"Detecting and resolving policy misconfigurations in access-control systems","authors":"Lujo Bauer, Scott Garriss, M. Reiter","doi":"10.1145/1952982.1952984","DOIUrl":"https://doi.org/10.1145/1952982.1952984","url":null,"abstract":"Access-control policy misconfigurations that cause requests to be erroneously denied can result in wasted time, user frustration, and, in the context of particular applications (e.g., health care), very severe consequences. In this article we apply association rule mining to the history of accesses to predict changes to access-control policies that are likely to be consistent with users' intentions, so that these changes can be instituted in advance of misconfigurations interfering with legitimate accesses. Instituting these changes requires the consent of the appropriate administrator, of course, and so a primary contribution of our work is how to automatically determine from whom to seek consent and how to minimize the costs of doing so. We show using data from a deployed access-control system that our methods can reduce the number of accesses that would have incurred costly time-of-access delays by 43%, and can correctly predict 58% of the intended policy. These gains are achieved without impacting the total amount of time users spend interacting with the system.","PeriodicalId":50912,"journal":{"name":"ACM Transactions on Information and System Security","volume":"143 1","pages":"2:1-2:28"},"PeriodicalIF":0.0,"publicationDate":"2011-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72660508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}