Title: Design, implementation and evaluation of security in iSCSI-based network storage systems
Authors: S. Chaitanya, Kevin R. B. Butler, A. Sivasubramaniam, P. McDaniel, M. Vilayannur
Venue: ACM International Workshop on Storage Security And Survivability, 2006-10-30
DOI: 10.1145/1179559.1179564
Abstract: This paper studies the performance and security aspects of the iSCSI protocol in a network-storage-based system. Ethernet speeds have been improving rapidly, and network throughput is no longer considered a bottleneck when compared to Fibre Channel-based storage area networks. However, when security of the data traffic is taken into consideration, existing protocols like IPsec prove to be a major hindrance to overall throughput. In this paper, we evaluate the performance of iSCSI when deployed over standard security protocols and suggest lazy crypto approaches to alleviate the processing needs at the server. The testbed consists of a cluster of Linux machines directly connected to the server through a Gigabit Ethernet network. Microbenchmarks and application benchmarks such as BTIO and dbench were used to analyze the performance and scalability of the different approaches. Our proposed lazy approaches improved throughput by as much as 46% for microbenchmarks and 30% for application benchmarks in comparison to the IPsec-based approaches.
Title: A statistical analysis of disclosed storage security breaches
Authors: Ragib Hasan, W. Yurcik
Venue: ACM International Workshop on Storage Security And Survivability, 2006-10-30
DOI: 10.1145/1179559.1179561
Abstract: Many storage security breaches have recently been reported in the mass media as a direct result of new breach-disclosure state laws across the United States (unfortunately, not internationally). In this paper, we provide an empirical analysis of disclosed storage security breaches for the period 2005-2006. By processing raw data from the best available sources, we seek to understand the what, who, how, where, and when questions about storage security breaches so that others can build upon this evidence when developing best practices for preventing and mitigating storage breaches. While some policy formulation has already started in reaction to media reports (many without empirical analysis), this work provides an initial empirical analysis upon which future empirical analysis and future policy decisions can be based.
Title: Limiting trust in the storage stack
Authors: Lakshmi N. Bairavasundaram, Meenali Rungta, A. Arpaci-Dusseau, Remzi H. Arpaci-Dusseau
Venue: ACM International Workshop on Storage Security And Survivability, 2006-10-30
DOI: 10.1145/1179559.1179569
Abstract: We propose a framework for examining trust in the storage stack based on the different levels of trustworthiness present across different channels of information flow. We focus on corruption in one of these channels, the data channel, and, as a case study, apply type-aware corruption techniques to examine how Windows NTFS behaves when on-disk pointers are corrupted. We find that NTFS does not verify on-disk pointers thoroughly before using them, and that even established error-handling techniques like replication are often used ineffectively. Our study indicates the need to examine more carefully how trust is managed within modern file systems.
Title: Secure deletion from inverted indexes on compliance storage
Authors: Soumyadeb Mitra, M. Winslett
Venue: ACM International Workshop on Storage Security And Survivability, 2006-10-30
DOI: 10.1145/1179559.1179572
Abstract: Recent litigation and intense regulatory focus on the secure retention of electronic records have spurred a rush to introduce Write-Once-Read-Many (WORM) storage devices for retaining business records such as electronic mail. A file committed to a WORM device cannot be deleted, even by a super-user, and hence is secure from attacks originating from company insiders. Secure retention, however, is only one part of a document's lifecycle: it is often crucial to delete a document after its mandatory retention period is over. Since most modern WORM devices are built on top of magnetic media, they also support a secure deletion operation by associating an expiration time with each file. However, for a deleted document to be truly unrecoverable, it must also be deleted from any index structure built over it. This paper studies the problem of securely deleting entries from an inverted index. We first formalize the concept of secure deletion by defining two deletion semantics: strongly and weakly secure deletion. We then analyze some of the deletion schemes proposed in the literature and show that they achieve only weakly secure deletion. Furthermore, such schemes have poor space efficiency and/or are inflexible. We then propose a novel technique for hiding index entries for deleted documents, based on the concept of ambiguating deleted entries. The proposed technique also achieves weakly secure deletion, but is more space efficient and flexible.
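The abstract does not spell out the ambiguation mechanism, but one way to picture the idea is a toy inverted index in which "deleting" a document replaces its exact posting with an ambiguous set that mixes the deleted document id with decoy ids, so the surviving entry no longer identifies the deleted document. The class, method names, and decoy-sampling scheme below are all hypothetical illustrations, not the paper's construction:

```python
# Toy illustration (NOT the paper's exact scheme) of "ambiguating" deleted
# entries: rather than erasing a posting from WORM storage, the entry is
# replaced by a set that mixes the deleted doc id with decoys.
import random


class AmbiguatingIndex:
    def __init__(self):
        self.postings = {}  # term -> list of frozensets of doc ids

    def add(self, doc_id, terms):
        for t in terms:
            self.postings.setdefault(t, []).append(frozenset([doc_id]))

    def delete(self, doc_id, decoy_pool, k=2):
        # Replace every exact posting for doc_id with an ambiguous set of
        # size k+1; an adversary reading the index cannot tell which member
        # was the deleted document (weakly secure deletion).
        for entries in self.postings.values():
            for i, entry in enumerate(entries):
                if entry == frozenset([doc_id]):
                    entries[i] = frozenset([doc_id, *random.sample(decoy_pool, k)])

    def lookup(self, term):
        # Only unambiguous postings count as definite hits.
        return {next(iter(e)) for e in self.postings.get(term, []) if len(e) == 1}
```

After deletion, a lookup no longer returns the deleted document, yet no index entry was physically removed, which is what makes the approach compatible with WORM media.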
Title: Scalable security for large, high performance storage systems
Authors: A. Leung, E. L. Miller
Venue: ACM International Workshop on Storage Security And Survivability, 2006-10-30
DOI: 10.1145/1179559.1179565
Abstract: New designs for petabyte-scale storage systems are now capable of transferring hundreds of gigabytes of data per second, but lack strong security. We propose a scalable and efficient protocol for security in high-performance, object-based storage systems that reduces protocol overhead and eliminates bottlenecks, thus increasing performance without sacrificing security primitives. Our protocol enforces security using cryptographically secure capabilities, with three novel features that make them ideal for high-performance workloads: a scheme for managing coarse-grained capabilities, methods for describing client and file groups, and strict security control through capability lifetime extensions. By reducing the number of unique capabilities that must be generated, metadata server load is reduced. Combining and caching client verifications reduces client latencies and workload, because metadata and data requests are more frequently serviced by cached capabilities. Strict access control is handled quickly and efficiently through short-lived capabilities and lifetime extensions. We have implemented a prototype of our security protocol and evaluated its performance and scalability using a high-performance file system workload. Our numbers demonstrate the ability of our protocol to reduce client security latency to nearly zero. Additionally, our approach improves MDS performance considerably, serving over 99% of all file access requests with cached capabilities. OSD scalability is greatly improved; our solution requires 95 times fewer capability verifications than previous solutions.
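A minimal sketch of the kind of cryptographically secure, coarse-grained capability the abstract describes: the metadata server (MDS) issues a MAC-protected token covering a whole client group and file group, and an object storage device (OSD) verifies it locally without contacting the MDS. The field names, key handling, and JSON encoding are assumptions for illustration, not the paper's wire format:

```python
# Sketch of an HMAC-protected capability shared between an MDS (issuer)
# and an OSD (verifier). Coarse granularity (client group x file group)
# is what lets one capability serve many requests.
import hashlib
import hmac
import json
import time

SECRET = b"shared-mds-osd-key"  # in practice, a key shared by MDS and each OSD


def issue_capability(client_group, file_group, ops, lifetime_s, now=None):
    # MDS side: build the capability and MAC its canonical encoding.
    cap = {"cg": client_group, "fg": file_group, "ops": sorted(ops),
           "exp": (now if now is not None else time.time()) + lifetime_s}
    blob = json.dumps(cap, sort_keys=True).encode()
    tag = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    return cap, tag


def verify_capability(cap, tag, op, now=None):
    # OSD side: recompute the MAC, then check expiry and the requested op.
    blob = json.dumps(cap, sort_keys=True).encode()
    expected = hmac.new(SECRET, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return False
    current = now if now is not None else time.time()
    return current < cap["exp"] and op in cap["ops"]
```

Short lifetimes keep revocation cheap (a capability simply expires), while the lifetime-extension idea in the abstract would correspond to the MDS re-issuing the same capability with a later `exp`.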
Title: Hydra: a platform for survivable and secure data storage systems
Authors: Lihao Xu
Venue: ACM International Workshop on Storage Security And Survivability, 2005-11-11
DOI: 10.1145/1103780.1103797
Abstract: This paper introduces Hydra, a platform that we are developing for highly survivable and secure data storage systems that distribute information over networks and adapt in a timely fashion to environmental changes, enabling users to store and access critical data in a continuously available and highly trustworthy manner. The Hydra platform uses MDS array codes that can be encoded and decoded efficiently for distributing and recovering user data. Novel uses of MDS array codes in Hydra are discussed, as well as Hydra's design goals, general structure, and a set of basic operations on user data. We also explore Hydra's applications in survivable and secure data storage systems.
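The abstract does not say which MDS array codes Hydra uses, but the defining MDS property is that any k of the n stored pieces suffice to recover the data. The simplest instance is a (k+1, k) single-parity code, sketched below; it tolerates the loss of any one stripe (data or parity), assuming all stripes are the same length:

```python
# Simplest MDS code: (k+1, k) single parity. Any one lost stripe can be
# rebuilt as the XOR of the k survivors. Real MDS array codes (e.g.
# EVENODD, RDP) tolerate more erasures; this only illustrates the idea.
from functools import reduce


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def encode(blocks):
    # Append one parity stripe: the XOR of all data stripes.
    return blocks + [reduce(xor, blocks)]


def recover(stripes, lost):
    # Rebuild stripe `lost` from the remaining stripes.
    survivors = [s for i, s in enumerate(stripes) if i != lost]
    return reduce(xor, survivors)
```

Distributing the n stripes across servers gives the survivability the abstract targets: the system keeps serving data, and can rebuild the missing piece, after a server failure.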
Title: Secured storage using SecureParser™
Authors: Sabre A. Schnitzer, Robert A. Johnson, Henry Hoyt
Venue: ACM International Workshop on Storage Security And Survivability, 2005-11-11
DOI: 10.1145/1103780.1103801
Abstract: Securing storage data is a manifold problem with requirements in three dimensions: data security, data integrity, and the safety of data. Meeting the requirements for one dimension often means compromising another. SecureParser™ is a software technology which addresses all three dimensions of secure storage without compromising any. In this paper, we describe the SecureParser™ technology and discuss how it addresses the three dimensions of secured storage: security, integrity, and safety.
Title: An electric fence for kernel buffers
Authors: N. Joukov, A. Kashyap, Gopalan Sivathanu, E. Zadok
Venue: ACM International Workshop on Storage Security And Survivability, 2005-11-11
DOI: 10.1145/1103780.1103786
Abstract: Improper access to data buffers is one of the most common errors in programs written in assembler, C, C++, and several other languages. Existing programs and OSs frequently access data beyond the allocated buffers, or access buffers that were already freed. Such programs and OSs may run for years before their problems are detected, because improper memory accesses frequently result in silent data corruption. Not surprisingly, most computer worms exploit buffer overflow errors to gain complete control over computer systems. Only after recent worm epidemics did developers begin to realize the scale of the problem and the number of potential memory-access violations in existing code. Due to the syntax and flexibility of many programming languages, memory-access violations cannot be detected at compile time, and tools that verify correctness before every memory access impose unacceptably high overheads. As a result, most of the developed techniques focus on preventing the hijacking of control by hackers and worms due to stack overflows; hidden data corruption is given less attention. Memory-access violations can, however, be detected efficiently using the hardware support for paging and virtual memory. Kefence is a general run-time solution we developed to detect and avoid in-kernel overflow, underflow, and stale-access problems for internal kernel buffers. Kefence is especially applicable to file system code, because file systems operate at a high level of abstraction and require no direct access to physical memory. At the same time, file systems use a large number of kernel buffers, and file system errors are most harmful to users because users' persistent data can be corrupted.
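The page-protection trick behind electric-fence-style tools (user-space Electric Fence, and by the abstract's description Kefence in the kernel) is to place a buffer so it ends exactly at a page boundary, with the following page marked inaccessible; the first out-of-bounds byte then faults immediately instead of silently corrupting a neighbor. A small sketch of just the placement arithmetic, assuming a 4 KiB page for illustration:

```python
# Placement arithmetic for electric-fence-style overflow detection:
# the buffer is right-aligned inside its page(s) so that byte n (one
# past the end) lands on the access-protected guard page that follows.
PAGE = 4096  # assumed page size for illustration


def efence_layout(nbytes):
    data_pages = -(-nbytes // PAGE)        # ceil(nbytes / PAGE)
    start = data_pages * PAGE - nbytes     # buffer ends at a page boundary
    guard = data_pages * PAGE              # first byte of the guard page
    return start, guard
```

Underflow detection mirrors this: left-align the buffer and protect the preceding page instead. Either way the cost is up to one page of slack per buffer, which is why such tools are typically used as debugging aids rather than left on in production.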
Title: Efficient and provably secure ciphers for storage device block level encryption
Authors: Yuliang Zheng, Yongge Wang
Venue: ACM International Workshop on Storage Security And Survivability, 2005-11-11
DOI: 10.1145/1103780.1103796
Abstract: Block ciphers generally have a fixed and relatively small input length. Thus they are often used in a mode of operation (e.g., ECB, CBC, CFB, or CTR) that enables the encryption of longer messages. Unfortunately, all these modes of operation reveal some information about their inputs, or about relationships between different inputs. As an example, in CBC mode, encrypting two messages with an identical prefix will result in identical initial blocks in the ciphertexts. Due to the well-known birthday attack and the small input length, CBC mode becomes less secure as the number of data blocks to be encrypted increases. This leads to a challenging task, namely to design schemes for storage-device block- or sector-level data encryption that are efficient and do not have the disadvantages mentioned above. In this paper, we propose an efficient cipher whose data/cipher blocks can be specified flexibly to match the length of a block unit for current and foreseeable future storage devices. We show that our encryption scheme is provably secure under the assumption that the underlying one-way hash function is a random function.
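The CBC prefix leak the abstract mentions is easy to demonstrate: with a fixed or reused IV (as in naive sector-level encryption, where the IV is derived from the sector number and the key is reused), two plaintexts sharing a first block produce the same first ciphertext block. The "block cipher" below is a deliberately fake one built from SHA-256 (it is not even invertible); it serves only to show the mode-level leak, which is independent of the cipher used:

```python
# Demonstration of the CBC identical-prefix leak. toy_E is NOT a real
# block cipher -- only the chaining structure matters for this leak.
import hashlib

BLOCK = 16


def toy_E(key, block):
    # Toy 16-byte "block cipher" derived from SHA-256 (demo only).
    return hashlib.sha256(key + block).digest()[:BLOCK]


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def cbc_encrypt(key, iv, msg):
    # Standard CBC chaining: C_i = E(P_i XOR C_{i-1}), C_0 = E(P_0 XOR IV).
    out, prev = [], iv
    for i in range(0, len(msg), BLOCK):
        prev = toy_E(key, xor(msg[i:i + BLOCK], prev))
        out.append(prev)
    return out
```

Encrypting `b"A"*16 + b"B"*16` and `b"A"*16 + b"C"*16` under the same key and IV yields matching first ciphertext blocks and differing second blocks, telling an observer exactly where the two sectors first diverge.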
Title: An approach for fault tolerant and secure data storage in collaborative work environments
Authors: A. Subbiah, D. Blough
Venue: ACM International Workshop on Storage Security And Survivability, 2005-11-11
DOI: 10.1145/1103780.1103793
Abstract: We describe a novel approach for building a secure and fault-tolerant data storage service in collaborative work environments, which uses perfect secret sharing schemes to store data. Perfect secret sharing schemes have found little use in managing generic data because of the high computation overheads incurred by such schemes. Our proposed approach uses a novel combination of XOR secret sharing and replication mechanisms, which drastically reduces the computation overheads and achieves speeds comparable to standard encryption schemes. The combination of secret sharing and replication manifests itself as an architectural framework, which has the attractive property that its dimension can be varied to exploit tradeoffs among different performance metrics. We evaluate the properties and performance of the proposed framework and show that the combination of perfect secret sharing and replication can be used to build efficient fault-tolerant and secure distributed data storage systems.
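XOR secret sharing, the cheap perfect-secret-sharing primitive the abstract builds on, splits a secret into n shares of which all n are needed for reconstruction; any n-1 shares are statistically independent of the secret. A minimal sketch (the n-of-n primitive only; the paper's replication of shares, which supplies the fault tolerance, is not shown):

```python
# XOR (n-of-n) perfect secret sharing: n-1 uniformly random shares plus
# one share that XORs them with the secret. Reconstruction is the XOR of
# all n shares; any strict subset reveals nothing about the secret.
import secrets


def split(secret_bytes, n):
    shares = [secrets.token_bytes(len(secret_bytes)) for _ in range(n - 1)]
    last = bytes(secret_bytes)
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares


def combine(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out
```

The appeal relative to Shamir-style schemes is cost: splitting and combining are pure XOR, comparable in speed to encryption, which matches the abstract's performance claim. On its own, n-of-n sharing tolerates no share loss; replicating shares across servers is what restores availability.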