This paper introduces a novel approach for augmenting attribute-based access control systems in a way that allows them to offer fully anonymous access to resources while at the same time achieving strong accountability guarantees. We assume that users hold attribute certificates, and we show how to exploit cryptographic zero-knowledge proofs to allow requesting users to prove that they hold suitable certificates for accessing a resource. In contrast to the commonly taken approach of sending all possibly relevant certificates to the access control system, our approach does not release any information to the access control system except for the presence of a set of certificates satisfying the access condition. This constitutes the minimal amount of information that has to be released to reach a correct access decision, and our approach is the first to achieve this. Given a trusted third party for identity escrow, we furthermore show that a concise application of zero-knowledge proofs offers the access control system the capability to hold a requesting user accountable for her actions under specific, well-defined conditions. All the employed cryptographic techniques are highly efficient, and an architecture for exploiting our approach in practical scenarios is already in place.
M. Backes, J. Camenisch, Dieter Sommer. "Anonymous yet accountable access control." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 40–46. doi:10.1145/1102199.1102208
Organizations in privacy-regulated industries (e.g. healthcare and financial institutions) face significant challenges when developing policies and systems that are properly aligned with relevant privacy legislation. We analyze privacy regulations derived from the Health Insurance Portability and Accountability Act (HIPAA) that affect information sharing practices and consumer privacy in healthcare systems. Our analysis shows specific natural language semantics that formally characterize rights, obligations, and the meaningful relationships between them required to build value into systems. Furthermore, we evaluate semantics for rules and constraints necessary to develop machine-enforceable policies that bridge between laws, policies, practices, and system requirements. We believe the results of our analysis will benefit legislators, regulators and policy and system developers by focusing their attention on natural language policy semantics that are implementable in software systems.
T. Breaux, A. Antón. "Mining rule semantics to understand legislative compliance." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 51–54. doi:10.1145/1102199.1102210
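The abstract's distinction between rights, obligations, and the constraints that bind them can be illustrated with a small deny-by-default rule checker. This is our own illustration of the general idea, not the authors' formalism; all names and fields are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """A machine-enforceable policy rule extracted from regulation text."""
    kind: str                  # "right" (may act) or "obligation" (must act)
    actor: str                 # who the rule governs, e.g. "provider"
    action: str                # the regulated act, e.g. "disclose"
    target: str                # the information or party acted upon
    constraints: list = field(default_factory=list)  # predicates over context

def permitted(rules, actor, action, target, context):
    """Deny by default; allow only if some right applies and all
    of its constraints hold in the given context."""
    for r in rules:
        if (r.kind == "right" and r.actor == actor
                and r.action == action and r.target == target
                and all(c(context) for c in r.constraints)):
            return True
    return False
```

A HIPAA-flavored usage example: a right to disclose protected health information only under patient consent becomes `Rule("right", "provider", "disclose", "PHI", [lambda ctx: ctx.get("patient_consent")])`, and `permitted` then bridges the natural-language rule to an enforceable check.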
At the 2004 Workshop on Privacy in the Electronic Society (WPES), Borisov, Goldberg and Brewer presented "Off-the-Record Messaging" (OTR), a protocol designed to add end-to-end security and privacy to Instant Messaging protocols. An open-source implementation of OTR is available and has achieved considerable success. In this paper we present a security analysis of OTR, showing that, while the overall concept of the system is valid and attractive, the protocol suffers from security shortcomings due to the use of an insecure key-exchange protocol and other problematic design choices. On the basis of these findings, we propose alternative designs and improvements that strengthen the security of the system and provide the originally intended features of the protocol, including deniability, in a sound and well-defined sense.
M. Raimondo, R. Gennaro, H. Krawczyk. "Secure off-the-record messaging." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 81–89. doi:10.1145/1102199.1102216
Participation in social networking sites has dramatically increased in recent years. Services such as Friendster, Tribe, or the Facebook allow millions of individuals to create online profiles and share personal information with vast networks of friends - and, often, unknown numbers of strangers. In this paper we study patterns of information revelation in online social networks and their privacy implications. We analyze the online behavior of more than 4,000 Carnegie Mellon University students who have joined a popular social networking site catering to colleges. We evaluate the amount of information they disclose and study their usage of the site's privacy settings. We highlight potential attacks on various aspects of their privacy, and we show that only a minimal percentage of users change the highly permeable default privacy preferences.
R. Gross, A. Acquisti. "Information revelation and privacy in online social networks." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 71–80. doi:10.1145/1102199.1102214
Existing solutions to protect consumer privacy in RFID either put the burden on the consumer or suffer from the very limited capabilities of today's RFID tags. We propose the use of physical RFID tag structures that permit a consumer to disable a tag by mechanically altering it in such a way that the ability of a reader to interrogate the RFID tag by wireless means is inhibited. In "clipped tags", consumers can physically separate the body (chip) from the head (antenna) in an intuitive way. Such a separation provides visual confirmation that the tag has been deactivated. However, a physical contact channel may be used later to reactivate it. Such a reactivation would require deliberate actions on the part of the owner of the RFID tag, so it could not be undertaken without the owner's knowledge unless the item were either stolen or left unattended. This mechanism enables controlled reuse after purchase, making clipped tags superior to other RFID privacy-enhancing technologies.
G. Karjoth, P. Moskowitz. "Disabling RFID tags with visible confirmation: clipped tags are silenced." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 27–30. doi:10.1145/1102199.1102205
We describe the Pynchon Gate, a practical pseudonymous message retrieval system. Our design uses a simple distributed-trust private information retrieval protocol to prevent adversaries from linking recipients to their pseudonyms, even when some of the infrastructure has been compromised. This approach resists global traffic analysis significantly better than existing deployed pseudonymous email solutions, at the cost of additional bandwidth. We examine security concerns raised by our model, and propose solutions.
Len Sassaman, B. Cohen, Nick Mathewson. "The Pynchon Gate: a secure method of pseudonymous mail retrieval." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 1–9. doi:10.1145/1102199.1102201
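The distributed-trust private information retrieval the abstract relies on can be sketched with the classic XOR-based multi-server scheme: the client secret-shares the query so that any coalition of all-but-one servers sees only random bits. This is a minimal illustration of the general technique, not the Pynchon Gate's exact protocol; all function names are ours:

```python
import secrets

def pir_queries(n, index, num_servers=3):
    """Split a query for block `index` into per-server bit vectors whose
    XOR is the unit vector e_index.  Any num_servers-1 colluding servers
    learn nothing about `index`, since their shares are uniformly random."""
    shares = [[secrets.randbits(1) for _ in range(n)]
              for _ in range(num_servers - 1)]
    last = [0] * n
    for j in range(n):
        bit = 1 if j == index else 0
        for s in shares:
            bit ^= s[j]
        last[j] = bit
    return shares + [last]

def pir_answer(db, query):
    """Each server XORs together the (equal-length) blocks its bit vector selects."""
    acc = bytes(len(db[0]))
    for block, bit in zip(db, query):
        if bit:
            acc = bytes(a ^ b for a, b in zip(acc, block))
    return acc

def pir_recover(answers):
    """The client XORs all server answers; everything cancels except block `index`."""
    acc = bytes(len(answers[0]))
    for a in answers:
        acc = bytes(x ^ y for x, y in zip(acc, a))
    return acc
```

The bandwidth cost the abstract mentions is visible here: every server transmits one full block per retrieval, which is the price of hiding which block was requested.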
As pervasive computing environments become popular, RFID devices, such as contactless smart cards and RFID tags, are introduced into our daily life. However, there exists a privacy problem: a third party can trace a user's behavior by linking the device's ID. The concept of unlinkability, that a third party cannot recognize whether some outputs are from the same user, is important for solving this privacy problem. A scheme using a hash function satisfies unlinkability against a third party by changing the outputs of the RFID device every time. However, such schemes are not scalable, since the server needs O(N) hash calculations for every ID matching, where N is the number of RFID devices. In this paper, we propose the K-steps ID matching scheme, which reduces the number of hash calculations on the server to O(log N). Secondly, we propose a quantification of unlinkability using conditional entropy and mutual information. Finally, we analyze the K-steps ID matching scheme using the proposed quantification, and show the relation between time complexity and unlinkability.
Yasunobu Nohara, Sozo Inoue, K. Baba, H. Yasuura. "Quantitative evaluation of unlinkable ID matching schemes." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 55–60. doi:10.1145/1102199.1102212
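The O(N) hash-based baseline that the K-steps scheme improves on can be sketched as follows: the tag emits a one-way digest of its state and updates the state with a second hash, so successive outputs are unlinkable, while the server must brute-force over all tag secrets to match. This is a minimal illustration of the baseline (in the style of forward-secure hash-chain schemes), not the K-steps matching itself; names are ours:

```python
import hashlib

def H(b):  # state-update hash
    return hashlib.sha256(b"update" + b).digest()

def G(b):  # output hash, independent of H
    return hashlib.sha256(b"output" + b).digest()

class Tag:
    def __init__(self, secret):
        self.state = secret

    def respond(self):
        out = G(self.state)        # outputs look random, hence unlinkable
        self.state = H(self.state) # forward-secure state update
        return out

def server_match(tag_secrets, response, max_steps=100):
    """O(N * max_steps) matching: try every tag's chain until the
    response is found.  The K-steps scheme reduces this to O(log N)."""
    for tag_id, s in enumerate(tag_secrets):
        st = s
        for _ in range(max_steps):
            if G(st) == response:
                return tag_id
            st = H(st)
    return None
```

The nested loop makes the scalability problem from the abstract concrete: matching cost grows linearly in the number of deployed tags.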
Summary medical data provides important statistical information for public health, but risks revealing confidential patient information. This risk is particularly difficult to assess when many different tables are released, independently protected against disclosure by various techniques. In this paper, we present a new technique for disclosure control in tabular data which uses explicit uncertainty to prevent small numbers of records from being identified disclosively. In contrast to other techniques, bounds on the cell perturbations are also made public. This technique can be applied effectively to large datasets in their entirety, automatically, and the transformed data can then be used to create the derivative tables, or hosted on a public web site. It is even safe for population-based data. Furthermore, we show that this transformation is computationally efficient while ensuring k-anonymity, and demonstrate the suitability of the transformed data for further statistical analysis.
B. Shand, J. Rashbass. "Protecting privacy in tabular healthcare data: explicit uncertainty for disclosure control." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 20–26. doi:10.1145/1102199.1102203
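The core idea of perturbing table cells within publicly announced bounds can be illustrated minimally. This is our own sketch of bounded cell perturbation, not the authors' algorithm, and it omits their k-anonymity machinery; the function name and parameters are hypothetical:

```python
import random

def perturb_cells(cells, bound=3, seed=None):
    """Perturb each non-negative count by an integer in [-bound, bound].
    The bound itself is published with the data, so any consumer of a
    derived table knows each cell's maximum error, while individual
    small counts can no longer be identified exactly."""
    rng = random.Random(seed)
    return [max(0, c + rng.randint(-bound, bound)) for c in cells]
```

Because the bound is explicit, a derived total over m cells carries a worst-case error of at most m times the bound, which is what makes the transformed data usable for further statistical analysis.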
Electronic voting, like other privacy-preserving protocols, uses special cryptographic primitives and techniques that are not widely used in other types of protocols, e.g. authentication protocols. These include blind signatures, commitments, zero-knowledge proofs, mixes and homomorphic encryption. Furthermore, typical formalizations of the Dolev-Yao intruder's capabilities do not take these primitives and techniques into account, nor do they consider some types of attacks that e-voting as well as other types of protocols are designed to protect against, such as privacy attacks due to undesired linkability of protocol executions. This work aims to extend Typed MSR so that it is able to support the specification of privacy-preserving protocols, as well as the capabilities of a Dolev-Yao intruder designed to attack such protocols.
Theodoros Balopoulos, S. Gritzalis, S. Katsikas. "Specifying electronic voting protocols in typed MSR." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 35–39. doi:10.1145/1102199.1102207
We introduce a model for electronic election schemes that involves a more powerful adversary than previous work. In particular, we allow the adversary to demand of coerced voters that they vote in a particular manner, abstain from voting, or even disclose their secret keys. We define a scheme to be coercion-resistant if it is infeasible for the adversary to determine whether a coerced voter complies with the demands. A first contribution of this paper is to describe and characterize a new and strengthened adversary for coercion in elections. (In doing so, we additionally present what we believe to be the first formal security definitions for electronic elections of any type.) A second contribution is to demonstrate a protocol that is secure against this adversary. While it is clear that a strengthening of attack models is of theoretical relevance, it is important to note that our results lie close to practicality. This is true both in that we model real-life threats (such as vote-buying and vote-canceling), and in that our proposed protocol combines a fair degree of efficiency with an unusual lack of structural complexity. Furthermore, previous schemes have required use of an untappable channel throughout. Ours carries only the much more practical requirement of an anonymous channel during the casting of ballots, and an untappable channel during registration (potentially using postal mail). This extended abstract is a heavily truncated version of the full paper available at http://eprint.iacr.org/2002/165.
A. Juels, D. Catalano, M. Jakobsson. "Coercion-resistant electronic elections." In Proceedings of the ACM Workshop on Privacy in the Electronic Society (WPES 2005), pp. 37–63. doi:10.1145/1102199.1102213