Improving secure long-term archival of digitally signed documents
C. Troncoso, D. D. Cock, B. Preneel
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456476

Long-term archival of signed documents presents specific challenges that do not need to be considered in short-term storage systems. In this paper we present a Secure Long-Term Archival System (SLTAS) that protects, in a verifiable way, the validity of today's digital signatures in a distant future. Moreover, our protocol is the first proposal that provides a proof of when a signature was created, without the possibility of backdating. We include a description of our scheme and an evaluation of its performance in terms of computing time and storage space. Finally, we discuss how to extend our system to achieve additional security properties. This paper does not focus on the long-term availability of archived information, nor on format migration problems.
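The no-backdating guarantee above typically rests on committing each signature into an append-only, hash-chained log, so that an entry's position in time cannot be rewritten later. The sketch below is a minimal illustration of that general idea, not the SLTAS protocol itself; the class name and field layout are invented for the example.

```python
import hashlib
import time

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

class TimestampLog:
    """Append-only hash chain: each entry commits to the previous head,
    so a signature recorded at position i cannot later be backdated
    without breaking every subsequent link."""

    def __init__(self):
        self.entries = []          # list of (timestamp, sig_hash, chain_hash)
        self.head = b"\x00" * 32   # genesis value

    def record(self, signature: bytes) -> int:
        ts = time.time()
        sig_hash = _h(signature)
        self.head = _h(self.head + sig_hash + str(ts).encode())
        self.entries.append((ts, sig_hash, self.head))
        return len(self.entries) - 1

    def verify(self) -> bool:
        """Recompute the chain from genesis; any edited entry breaks it."""
        head = b"\x00" * 32
        for ts, sig_hash, chain_hash in self.entries:
            head = _h(head + sig_hash + str(ts).encode())
            if head != chain_hash:
                return False
        return True
```

An attempted backdate (rewriting a stored timestamp) changes the recomputed chain hash and is caught by `verify`.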
Efficient integrity checking of untrusted network storage
Alexander Heitzmann, Bernardo Palazzi, Charalampos Papamanthou, R. Tamassia
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456479

Outsourced storage has become more and more practical in recent years. Users can now store large amounts of data in multiple servers at a relatively low price. An important issue for outsourced storage systems is to design an efficient scheme to assure users that their data stored at remote servers has not been tampered with. This paper presents a general method and a practical prototype application for verifying the integrity of files in an untrusted network storage service. The verification process is managed by an application running in a trusted environment (typically on the client) that stores just one cryptographic hash value of constant size, corresponding to the "digest" of an authenticated data structure. The proposed integrity verification service can work with any storage service since it is transparent to the storage technology used. Experimental results show that our integrity verification method is efficient and practical for network storage systems.
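Storing a single constant-size digest of an authenticated data structure can be illustrated with a plain Merkle tree: the trusted client keeps only the root hash, and any change to any stored file changes the root. A minimal sketch under that reading of the abstract; the paper's actual data structure additionally supports efficient updates and membership proofs, which this toy omits.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(files):
    """Constant-size digest over a list of file contents.
    The client stores only this 32-byte value."""
    level = [_h(f) for f in files]
    while len(level) > 1:
        if len(level) % 2:            # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

On each check, the client refetches (or streams) the files, recomputes the root, and compares it to the stored digest; a mismatch means the untrusted server tampered with something.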
Robust remote data checking
Reza Curtmola, O. Khan, R. Burns
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456481

Remote data checking protocols, such as provable data possession (PDP) [1], allow clients that outsource data to untrusted servers to verify that the server continues to correctly store the data. Through the careful integration of forward error-correcting codes and remote data checking, a system can prove possession with arbitrarily high probability. We formalize this notion in the robust data possession guarantee. We distill the key performance and security requirements for integrating forward error-correcting codes into PDP and describe an encoding scheme and file organization for robust data possession that meets these requirements. We give a detailed analysis of this scheme and build a Monte-Carlo simulation to evaluate tradeoffs in reliability, space overhead, and performance. A practical way to evaluate these tradeoffs is an essential input to system design, allowing the designer to choose the encoding and data checking protocol parameters that realize robust data possession.
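The tradeoff the authors simulate can be seen in miniature: a PDP-style audit spot-checks a random sample of blocks, and the chance of catching corruption grows quickly with sample size. A hedged Monte-Carlo sketch of that relationship; the function name and parameters are ours for illustration, not the paper's encoding or protocol.

```python
import random

def detection_probability(n_blocks, n_corrupted, n_challenged, trials=2000):
    """Monte-Carlo estimate of the probability that auditing
    `n_challenged` randomly sampled blocks out of `n_blocks` hits at
    least one of `n_corrupted` corrupted blocks."""
    bad = set(range(n_corrupted))      # which blocks are bad is arbitrary
    hits = 0
    for _ in range(trials):
        sample = random.sample(range(n_blocks), n_challenged)
        if any(b in bad for b in sample):
            hits += 1
    return hits / trials
```

For example, with 1% of 1,000 blocks corrupted, challenging a few hundred blocks already detects the corruption with very high probability; forward error correction then covers the small corruptions that spot-checking is likely to miss.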
Testable commitments
P. Golle, Richard Chow, Jessica Staddon
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456477

A key challenge in litigation is verifying that all relevant case content has been produced. Adding to the challenge is the fact that litigating parties are both bound to produce relevant documents and bound to protect private information (e.g. medical information). This leaves open the possibility of withholding content inappropriately, and verifying that this has not occurred is a time-consuming process involving the presiding judge. We introduce testable commitments: a cryptographic technique for verifying that only the right information has been withheld with only minimal involvement from a trusted third party. We present a construction of testable commitments and discuss its implementation.
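As background, a plain hash commitment binds a party to withheld content without revealing it; reading the abstract, the paper's contribution is adding third-party testability on top of such a primitive. A minimal commit/reveal sketch of the underlying primitive only, not the authors' construction:

```python
import hashlib
import os

def commit(document: bytes):
    """Bind to a withheld document: publish the commitment now,
    keep the nonce and document secret until (if ever) challenged."""
    nonce = os.urandom(16)
    commitment = hashlib.sha256(nonce + document).hexdigest()
    return commitment, nonce

def verify(commitment: str, nonce: bytes, document: bytes) -> bool:
    """On reveal, anyone can check the document matches the commitment."""
    return hashlib.sha256(nonce + document).hexdigest() == commitment
```

The withholding party cannot later substitute a different document (binding), and the commitment alone reveals nothing about its content (hiding, given a random nonce).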
When cryptography meets storage
S. Diesburg, Christopher R. Meyers, David M. Lary, An-I Wang
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456472

Confidential data storage through encryption is becoming increasingly important. Designers and implementers of encryption methods of storage media must be aware that storage has different usage patterns and properties compared to securing other information media such as networks. In this paper, we empirically demonstrate two-time pad vulnerabilities in storage that are exposed via shifting file contents, in-place file updates, storage mechanisms hidden by layers of abstractions, inconsistencies between memory and disk content, and backups. We also demonstrate how a simple application of Bloom filters can automatically extract plaintexts from two-time pads. Further, our experience sheds light on system research directions to better support cryptographic assumptions and guarantees.
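The two-time-pad failure mode is easy to reproduce: if the same keystream encrypts two versions of a file (as happens with in-place updates under a stream-cipher-style scheme), XORing the two ciphertexts cancels the keystream entirely. A toy demonstration with a stand-in keystream; the paper's Bloom-filter extraction automates the last guessing step shown here.

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

keystream = bytes(range(16))        # stand-in for a reused cipher keystream
p1 = b"old file content"            # version 1, encrypted in place
p2 = b"new file content"            # version 2, same location, same keystream
c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The keystream cancels: c1 XOR c2 == p1 XOR p2, leaking plaintext structure
# (equal bytes show up as zeros) without any key material.
leak = xor(c1, c2)
assert leak == xor(p1, p2)

# Knowing or guessing one plaintext immediately yields the other:
recovered = xor(leak, p1)
assert recovered == p2
```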
Configurable security for scavenged storage systems
Abdullah Gharaibeh, S. Al-Kiswany, M. Ripeanu
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456480

Scavenged storage systems harness unused disk space from individual workstations the same way idle CPU cycles are harnessed by desktop grid applications like Seti@Home. These systems provide a promising low-cost, high-performance storage solution in certain high-end computing scenarios. However, selecting the security level and designing the security mechanisms for such systems is challenging, as scavenging idle storage opens the door for security threats absent in traditional storage systems that use dedicated nodes under a single administrative domain. Moreover, increased security often comes at the price of performance and scalability. This paper develops a general threat model for systems that use scavenged storage, presents the design of a protocol that addresses these threats and is optimized for throughput, and evaluates the overheads brought by the new security protocol when configured to provide a number of different security properties.
Secure data deduplication
M. Storer, K. Greenan, D. Long, E. L. Miller
ACM International Workshop on Storage Security And Survivability, 2008-10-31. DOI: 10.1145/1456469.1456471

As the world moves to digital storage for archival purposes, there is an increasing demand for systems that can provide secure data storage in a cost-effective manner. By identifying common chunks of data both within and between files and storing them only once, deduplication can yield cost savings by increasing the utility of a given amount of storage. Unfortunately, deduplication exploits identical content, while encryption attempts to make all content appear random; the same content encrypted with two different keys results in very different ciphertext. Thus, combining the space efficiency of deduplication with the secrecy aspects of encryption is problematic. We have developed a solution that provides both data security and space efficiency in single-server storage and distributed storage systems. Encryption keys are generated in a consistent manner from the chunk data; thus, identical chunks will always encrypt to the same ciphertext. Furthermore, the keys cannot be deduced from the encrypted chunk data. Since the information each user needs to access and decrypt the chunks that make up a file is encrypted using a key known only to the user, even a full compromise of the system cannot reveal which chunks are used by which users.
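The key idea above, deriving the encryption key deterministically from the chunk's own content, is commonly known as convergent encryption: identical chunks produce identical ciphertext and still deduplicate. A minimal sketch under that reading of the abstract; the hash-based stream cipher here is a stand-in for illustration, not the cipher the authors use.

```python
import hashlib

def convergent_key(chunk: bytes) -> bytes:
    """Key derived from the chunk itself: identical chunks get identical
    keys (and thus identical ciphertext), enabling deduplication."""
    return hashlib.sha256(chunk).digest()

def encrypt(chunk: bytes, key: bytes) -> bytes:
    """Stand-in deterministic stream cipher: XOR with a hash-expanded
    keystream. XOR makes encrypt its own inverse for the same key."""
    keystream = b""
    counter = 0
    while len(keystream) < len(chunk):
        keystream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(c ^ k for c, k in zip(chunk, keystream))
```

Per-user access is then handled separately: the list of (chunk ID, convergent key) pairs for each file is itself encrypted under a key known only to that user, which is what keeps chunk-to-user mappings hidden even on full compromise.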
Designing a secure reliable file system for sensor networks
N. Bhatnagar, E. L. Miller
ACM International Workshop on Storage Security And Survivability, 2007-10-29. DOI: 10.1145/1314313.1314319

Wireless sensor networks are increasingly being used to monitor habitats, analyze traffic patterns, study troop movements, and gather data for reconnaissance and surveillance missions. Many wireless sensor networks require the protection of their data from unauthorized access and malicious tampering, motivating the need for a secure and reliable file system for sensor nodes. The file system presented in this paper encrypts data stored on sensor nodes' local storage in such a way that an intruder who compromises a sensor node cannot read it, and backs it up regularly on to its neighbor nodes. The file system utilizes algebraic signatures to detect data tampering.
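Algebraic signatures are small fingerprints computed as a polynomial in a fixed generator, cheap enough for sensor nodes to compute and compare across backups. The sketch below works over a prime field rather than the GF(2^w) fields real algebraic signatures use; it illustrates the shape of the primitive, not the paper's scheme.

```python
P = (1 << 61) - 1   # Mersenne prime; stand-in field for a GF(2^w) implementation
G = 3               # fixed generator

def algebraic_signature(data: bytes) -> int:
    """sig(d) = sum_i d[i] * G**i  (mod P): a compact fingerprint that is
    linear in the data, which is what lets replicas be checked by
    comparing signatures instead of shipping whole blocks."""
    sig, power = 0, 1
    for byte in data:
        sig = (sig + byte * power) % P
        power = (power * G) % P
    return sig
```

A node can periodically exchange signatures with the neighbors holding its backups; a signature mismatch flags tampering (or corruption) and triggers restoration from a good copy.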
Exploiting type-awareness in a self-recovering disk
Kiron Vijayasankar, Gopalan Sivathanu, S. Sundararaman, E. Zadok
ACM International Workshop on Storage Security And Survivability, 2007-10-29. DOI: 10.1145/1314313.1314321

Data recoverability in the face of partial disk errors is an important prerequisite in modern storage. We have designed and implemented a prototype disk system that automatically ensures the integrity of stored data, and transparently recovers vital data in the event of integrity violations. We show that by using pointer knowledge, effective integrity assurance can be performed inside a block-based disk with negligible performance overheads. We also show how semantics-aware replication of blocks can help improve the recoverability of data in the event of partial disk errors with small space overheads. Our evaluation results show that for normal user workloads, our disk system has a performance overhead of only 1-5% compared to traditional disks.
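The behavior described, verifying integrity on read and transparently recovering vital blocks from replicas, can be modeled in a few lines. This toy keeps checksums and replicas in dictionaries and ignores the pointer-awareness the real disk system exploits to decide which blocks are vital; the class and method names are ours.

```python
import hashlib

class SelfRecoveringDisk:
    """Toy model of a self-checking block store: every read is verified
    against a stored checksum, and blocks marked vital fall back to a
    replica when an integrity violation is detected."""

    def __init__(self):
        self.blocks = {}    # addr -> data
        self.sums = {}      # addr -> sha256(data)
        self.replicas = {}  # addr -> replica of vital blocks

    def write(self, addr: int, data: bytes, vital: bool = False):
        self.blocks[addr] = data
        self.sums[addr] = hashlib.sha256(data).digest()
        if vital:
            self.replicas[addr] = data

    def read(self, addr: int) -> bytes:
        data = self.blocks[addr]
        if hashlib.sha256(data).digest() == self.sums[addr]:
            return data
        if addr in self.replicas:        # integrity violation: recover
            self.blocks[addr] = self.replicas[addr]
            return self.replicas[addr]
        raise IOError("unrecoverable corruption at block %d" % addr)
```

In the paper's design, the "vital" decision comes from pointer knowledge inside the disk (e.g. metadata blocks that many other blocks depend on), rather than from an explicit flag as in this sketch.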
Round-trip privacy with NFSv4
Avishay Traeger, Kumar Thangavelu, E. Zadok
ACM International Workshop on Storage Security And Survivability, 2007-10-29. DOI: 10.1145/1314313.1314315

With the advent of NFS version 4, NFS security is more important than ever. This is because a main goal of the NFSv4 protocol is suitability for use on the Internet, whereas previous versions were used mainly on private networks. To address these security concerns, the NFSv4 protocol utilizes the RPCSEC GSS protocol and allows clients and servers to negotiate security at mount-time. However, this provides privacy only while data is traveling over the wire. We believe that file servers accessible over the Internet should contain only encrypted data. We present a round-trip privacy scheme for NFSv4, where clients encrypt file data for write requests, and decrypt the data for read requests. The data stored by the server on behalf of the clients is encrypted. This helps ensure privacy if the server or storage is stolen or compromised. As the NFSv4 protocol was designed with extensibility, it is the ideal place to add round-trip privacy. In addition to providing a higher level of security than only over-the-wire encryption, our technique is more efficient, as the server is relieved from performing encryption and decryption. We developed a prototype of our round-trip privacy scheme. In our performance evaluation, we saw throughput increases of up to 24%, as well as good scalability.
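Round-trip privacy means ciphertext is the only thing that crosses the wire or rests on the server in either direction. A client-side sketch with a dictionary standing in for the remote server; the hash-based keystream is ours for illustration, whereas the real prototype hooks encryption into the NFSv4 client itself.

```python
import hashlib
import os

class PrivateFileClient:
    """Client-side layer: data leaves the client encrypted on writes and
    is decrypted only on the client after reads. The server (here a
    plain dict) ever sees only nonce + ciphertext."""

    def __init__(self, key: bytes, server: dict):
        self.key = key
        self.server = server    # stand-in for the remote file server

    def _keystream(self, nonce: bytes, length: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < length:
            out += hashlib.sha256(
                self.key + nonce + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return out[:length]

    def write(self, path: str, data: bytes):
        nonce = os.urandom(16)  # fresh per write: avoids keystream reuse
        ks = self._keystream(nonce, len(data))
        self.server[path] = nonce + bytes(d ^ k for d, k in zip(data, ks))

    def read(self, path: str) -> bytes:
        blob = self.server[path]
        nonce, ct = blob[:16], blob[16:]
        ks = self._keystream(nonce, len(ct))
        return bytes(c ^ k for c, k in zip(ct, ks))
```

Because encryption and decryption happen only on clients, the server does no cryptographic work, which is consistent with the throughput gains the authors report.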