LOT: A Defense Against IP Spoofing and Flooding Attacks
Y. Gilad and A. Herzberg. ACM Transactions on Information and System Security, 6:1-6:30, July 2012. doi:10.1145/2240276.2240277

We present LOT, a lightweight, plug-and-play secure tunneling protocol deployed at network gateways. Two communicating gateways, A and B, running LOT automatically detect each other and establish an efficient tunnel that secures communication between them. LOT tunnels allow A to discard spoofed packets that specify source addresses in B's network, and vice versa. This helps to mitigate many attacks, including DNS poisoning, network scans, and, most notably, (Distributed) Denial of Service (DoS).

LOT tunnels provide several additional defenses against DoS attacks. Specifically, since packets received from LOT-protected networks cannot be spoofed, LOT gateways implement quotas, identifying and blocking packet floods from specific networks. Furthermore, a receiving LOT gateway (e.g., B) can send the quota assigned to each tunnel to the peer gateway (A), which can then enforce near-source quotas, reducing waste and congestion by filtering excessive traffic before it leaves the source network. Similarly, LOT tunnels facilitate near-source filtering, where the sending gateway discards packets based on filtering rules defined by the destination gateway. LOT gateways also implement an intergateway congestion detection mechanism, allowing sending gateways to detect when their packets are dropped before reaching the destination gateway and to apply near-source filtering to block the congesting traffic; this helps against DoS attacks on the backbone connecting the two gateways.

LOT is practical: it is easy to manage (plug-and-play, requiring no coordination between gateways), is deployed incrementally at edge gateways (not at hosts or core routers), and has negligible bandwidth and processing overhead, as we validate experimentally. Its storage requirements are also modest.
Corrective Enforcement: A New Paradigm of Security Policy Enforcement by Monitors
R. Khoury and N. Tawbi. ACM Transactions on Information and System Security, 10:1-10:27, July 2012. doi:10.1145/2240276.2240281

Runtime monitoring is an increasingly popular method for ensuring the safe execution of untrusted code. Monitors observe and transform the execution of such code, responding when needed to correct or prevent a violation of a user-defined security policy. Prior research has shown that the set of properties monitors can enforce correlates with the latitude they are given to transform and alter the target execution. But for enforcement to be meaningful, this capacity must be constrained; otherwise the monitor can enforce any property, though not necessarily in a manner that is useful or desirable. Such constraints have not been significantly addressed in prior work. In this article, we develop a new paradigm of security policy enforcement in which the behavior of the enforcement mechanism is restricted to ensure that the valid aspects present in an execution are preserved, notwithstanding any transformation the mechanism may perform. These restrictions capture the desired behavior of valid executions of the program and are stated by way of a preorder over sequences. The resulting model is closer than previous ones to what would be expected of a real-life monitor, from which we demand a minimal footprint on both valid and invalid executions. We illustrate this framework with examples of real-life security properties. Since the flexibility of this type of enforcement allows several different enforcements of the same property, our study also provides metrics that allow the user to compare monitors objectively and to choose the best enforcement paradigm for a given situation.
Towards Practical Identification of HF RFID Devices
Boris Danev, Srdjan Capkun, Ramya Jayaram Masti, and T. S. Benjamin. ACM Transactions on Information and System Security, 7:1-7:24, July 2012. doi:10.1145/2240276.2240278

The deployment of RFID poses a number of security and privacy threats, such as cloning and unauthorized tracking. Although the literature contains many investigations of these issues at the logical level, few works have explored the security implications of the physical communication layer. Recently, related studies have shown the feasibility of identifying RFID-enabled devices based on physical-layer fingerprints. In this work, we build on these findings and demonstrate that physical-layer identification of HF RFID devices is also practical, that is, it can achieve high accuracy and stability. We propose an improved hardware setup and enhanced techniques for fingerprint extraction and matching. Our new system enables device identification with an Equal Error Rate as low as 0.005 (0.5%) on a set of 50 HF RFID smart cards of the same manufacturer and type. We further investigate fingerprint stability over an extended period of time and across different acquisition setups. In the latter case, we propose a solution based on channel equalization that preserves fingerprint quality across setups. Our results strengthen the practical use of physical-layer identification of RFID devices in product and document anti-counterfeiting solutions.
On the Parameterized Complexity and Kernelization of the Workflow Satisfiability Problem
J. Crampton, G. Gutin, and Anders Yeo. ACM Transactions on Information and System Security, article 4, May 2012. doi:10.1145/2487222.2487226

A workflow specification defines a set of steps and the order in which these steps must be executed. Security requirements may impose constraints on which groups of users are permitted to perform subsets of these steps. A workflow specification is said to be satisfiable if there exists an assignment of users to workflow steps that satisfies all the constraints. An algorithm for determining whether such an assignment exists is important, both as a static analysis tool for workflow specifications and for the construction of runtime reference monitors for workflow management systems. Finding such an assignment is a hard problem in general, but work by Wang and Li [2010] using the theory of parameterized complexity suggests that efficient algorithms exist under reasonable assumptions about workflow specifications. In this article, we improve the complexity bounds for the workflow satisfiability problem. We also generalize and extend the types of constraints that may be defined in a workflow specification and prove that the satisfiability problem remains fixed-parameter tractable for such constraints. Finally, we consider preprocessing for the problem and prove that, in an important special case, we can in polynomial time reduce the given input to an equivalent one in which the number of users is at most the number of steps. We also show that, for two natural extensions of this case, no reduction exists that bounds the number of users by a polynomial in the number of steps, provided a widely accepted complexity-theoretic assumption holds.
Return-Oriented Programming: Systems, Languages, and Applications
Ryan Roemer, E. Buchanan, H. Shacham, and S. Savage. ACM Transactions on Information and System Security, 2:1-2:34, March 2012. doi:10.1145/2133375.2133377

We introduce return-oriented programming, a technique by which an attacker can induce arbitrary behavior in a program whose control flow he has diverted, without injecting any code. A return-oriented program chains together short instruction sequences already present in a program's address space, each of which ends in a "return" instruction.

Return-oriented programming defeats the W⊕X protections recently deployed by Microsoft, Intel, and AMD; in this context, it can be seen as a generalization of traditional return-into-libc attacks. But the threat is more general. Return-oriented programming is readily exploitable on multiple architectures and systems. It also bypasses an entire category of security measures---those that seek to prevent malicious computation by preventing the execution of malicious code.

To demonstrate the wide applicability of return-oriented programming, we construct a Turing-complete set of building blocks called gadgets using the standard C libraries of two very different architectures: Linux/x86 and Solaris/SPARC. To demonstrate the power of return-oriented programming, we present a high-level, general-purpose language for describing return-oriented exploits and a compiler that translates it to gadgets.
Verified Cryptographic Implementations for TLS
K. Bhargavan, C. Fournet, R. Corin, and E. Zalinescu. ACM Transactions on Information and System Security, 3:1-3:32, March 2012. doi:10.1145/2133375.2133378

We narrow the gap between concrete implementations of cryptographic protocols and their verified models. We develop and verify a small functional implementation of the Transport Layer Security protocol (TLS 1.0). We use the same executable code for interoperability testing against mainstream implementations, for automated symbolic cryptographic verification, and for automated computational cryptographic verification. We rely on a combination of recent tools and also develop a new tool for extracting computational models from executable code. We obtain strong security guarantees for TLS as used in typical deployments.
Guest Editorial: Special Issue on Computer and Communications Security
P. Syverson and S. Jha. ACM Transactions on Information and System Security, 1:1-1:2, January 2012. doi:10.1145/2133375.2133376

This special issue contains extended versions of articles selected from the program of the 15th ACM Conference on Computer and Communications Security (CCS'08), which took place October 27 to 31, 2008, in Alexandria, Virginia (USA). This annual conference is a leading international forum for information-security researchers, practitioners, developers, and users to explore cutting-edge ideas and results, and to exchange techniques, tools, and experiences. Its mission is to promote and share novel research from academia, government, and industry, covering all theoretical and practical aspects of computer security, as well as case studies and implementation experiences.

The selected articles represent the broad scope of the conference. Two articles focus primarily on what we can learn from novel and existing attacks on security; the other two focus on the development of improved mechanisms to provide and assure security. The topics range from programming exploits to large network designs, and from cryptographic advances to formal techniques for verifying the use of cryptography in practical protocols.

The four articles in this special issue were invited for submission from the fifty-one papers presented at CCS'08, which were themselves selected from 280 papers submitted to the conference. Submissions to the special issue were required to contain at least 25% new material to differentiate the journal articles from the conference papers. All the journal submissions went through an additional thorough review process (the same review process as any submission to this journal) to further ensure their quality.

The first article, "Return-Oriented Programming: Systems, Languages, and Applications" by Ryan Roemer, Erik Buchanan, Hovav Shacham, and Stefan Savage, describes a technique for attacking programs without injecting code, by chaining instruction sequences already in a program's address space. The technique they demonstrate challenges the assumption that preventing the introduction of malicious code can prevent malicious computation.

The next article shows an advance on the defense side of program security. In "Verified Cryptographic Implementations for TLS", Karthikeyan Bhargavan, Ricardo Corin, Cédric Fournet, and Eugen Zălinescu show how to develop a small functional implementation of TLS and then use novel and existing model-extraction and verification tools to provide security guarantees at both the symbolic and computational levels. The end result is the first automated verification of executable code under standard cryptographic assumptions.

Next, Jan Camenisch and Thomas Gross, in their article "Efficient Attributes for Anonymous Credentials," address a different aspect of cryptographic protocols: how to practically provide just the authorization needed. A novel approach permits much more efficient proofs of possession of even a large combination of attribute credentials. This makes it much more practical to anonymously show that one has the needed attributes.
Efficient Attributes for Anonymous Credentials
J. Camenisch and Thomas Gross. ACM Transactions on Information and System Security, 4:1-4:30, January 2012. doi:10.1145/2133375.2133379

We extend the Camenisch-Lysyanskaya anonymous credential system such that selective disclosure of attributes becomes highly efficient. The resulting system significantly improves upon existing approaches, which suffer from a linear number of modular exponentiations in the total number of attributes. This limitation makes them unfit for many practical applications, such as electronic identity cards. Our novel approach can incorporate a large number of binary and finite-set attributes without significant performance impact. It compresses all such attributes into a single attribute base and, thus, boosts the efficiency of all proofs of possession. The core idea is to encode discrete binary and finite-set values as prime numbers. We then use the divisibility property for efficient proofs of their presence or absence. In addition, we contribute efficient methods for conjunctions and disjunctions. The system builds on the strong RSA assumption. We demonstrate the aptness of our method in realistic application scenarios, notably electronic identity cards, and show its advantages for small devices, such as smartcards and cell phones.
Server-Side Verification of Client Behavior in Online Games
D. Bethea, Robert A. Cochran, and M. Reiter. ACM Transactions on Information and System Security, 32:1-32:27, December 2011. doi:10.1145/2043628.2043633

Online gaming is a lucrative and growing industry, but one that is slowed by cheating, which compromises the gaming experience and hence drives away players (and revenue). In this paper, we develop a technique by which game developers can enable game operators to validate the behavior of game clients as being consistent with valid execution of the sanctioned client software. Our technique employs symbolic execution of the client software to extract constraints on client-side state implied by each client-to-server message, and then uses constraint solving to determine whether the sequence of client-to-server messages can be "explained" by any possible user inputs, in light of the server-to-client messages already received. The requisite constraints and solving components can be developed either simultaneously with the game or retroactively for existing games. We demonstrate our approach in three case studies: the open-source game XPilot, a Pac-Man-like game of our own design, and an open-source multiplayer version of Tetris.
PEREA: Practical TTP-Free Revocation of Repeatedly Misbehaving Anonymous Users
M. Au, Patrick P. Tsang, and Apu Kapadia. ACM Transactions on Information and System Security, 29:1-29:34, December 2011. doi:10.1145/2043628.2043630

Several anonymous authentication schemes allow servers to revoke a misbehaving user's ability to make future accesses. Traditionally, these schemes have relied on powerful Trusted Third Parties (TTPs) capable of deanonymizing (or linking) users' connections. Such TTPs are undesirable because users' anonymity is not guaranteed, and users must trust them to judge misbehaviors fairly. Recent schemes such as Blacklistable Anonymous Credentials (BLAC) and Enhanced Privacy ID (EPID) support "privacy-enhanced revocation": servers can revoke misbehaving users without a TTP's involvement and without learning the revoked users' identities.

In BLAC and EPID, however, the computation required for authentication at the server is linear in the size (L) of the revocation list, which is impractical as the size approaches thousands of entries. We propose PEREA, a new anonymous authentication scheme for which this bottleneck computation is independent of the size of the revocation list. Instead, the time complexity of authentication is linear in the size of a revocation window K ≪ L, the number of subsequent authentications before which a user's misbehavior must be recognized if the user is to be revoked. We extend PEREA to support more complex revocation policies that take the severity of misbehaviors into account: users can authenticate anonymously if their naughtiness, i.e., the sum of the severities of their blacklisted misbehaviors, is below a certain naughtiness threshold. We call this extension PEREA-Naughtiness. We prove the security of our constructions and validate their efficiency, analytically and quantitatively, against BLAC.