A data masking technique for data warehouses
R. Santos, Jorge Bernardino, M. Vieira. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 61–69. DOI: 10.1145/2076623.2076632.

Data Warehouses (DWs) hold the enterprise's most critical business information, making them an appealing target for attackers. Packaged database encryption solutions are considered the best way to protect sensitive data. However, given the volume of data typically processed by DW queries, existing encryption solutions heavily increase storage space and introduce very large overheads in query response time due to decryption costs. In many cases, this performance degradation makes encryption unfeasible for DWs. In this paper we propose a transparent data masking solution for numerical values in DWs based on the mathematical modulus operator, which can be used without changing user applications or DBMS source code. Our solution provides strong data security while introducing small overheads in both storage space and database performance. Several experimental evaluations using the TPC-H decision support benchmark and a real-world DW are included. The results show the overall efficiency of our proposal, demonstrating that it is a valid alternative to standard encryption routines for enforcing data confidentiality in DWs.
Query language constructs for provenance
Murali Mani, M. Alawa, A. Kalyanasundaram. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 254–255. DOI: 10.1145/2076623.2076661.

Provenance, which records the derivation history of data, is useful for a wide variety of applications, including those where an audit trail must be provided and those where the sources, and the trust attributed to them, determine the trust placed in results. There have been several past efforts to represent provenance information, the most notable being the Open Provenance Model (OPM). OPM defines structures for representing provenance information as a graph with nodes and edges, and also specifies inference queries. Our work builds on these efforts by proposing query language constructs that users will find useful for manipulating provenance information. Rather than specifying a query language, we define two classes of algebraic constructs: content-based operators that operate on the content of nodes and edges, and structure-based operators that operate on the graph structure of the provenance graph. These content-based and structure-based constructs can be combined to express a wide variety of interesting queries over provenance data that go well beyond the simple inference queries expressible in Datalog/SQL.
Trajectory data analysis using complex networks
Igo Ramalho Brilhante, J. Macêdo, C. Renso, M. Casanova. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 17–25. DOI: 10.1145/2076623.2076627.

A massive amount of data on moving object trajectories is available today. However, processing this information to explain interactions among moving objects, which could reveal non-trivial behavioral patterns, remains a major challenge. To that end, we consider a complex-network representation of trajectory data: frequent encounters among moving objects (trajectory encounters) create the network edges, whereas nodes represent trajectories. A real trajectory dataset of vehicles moving within the City of Milan allows us to study the structure of vehicle interactions and validate our method. We create seven networks and compute the clustering coefficient and the average shortest path length, comparing them with those of the Erdős–Rényi model. Our analysis shows that all computed trajectory networks exhibit the small-world effect and scale-free structure, similar to the Internet and biological networks. Finally, we discuss how these results can be interpreted in light of the traffic application domain.
A landmark-model based system for mining frequent patterns from uncertain data streams
C. Leung, Fan Jiang, Y. Hayduk. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 249–250. DOI: 10.1145/2076623.2076659.

Huge volumes of streaming data are generated by sensors for applications such as environmental surveillance. Partly due to the inherent limitations of sensors, these continuous streams of data can be uncertain. Over the past few years, algorithms have been proposed that apply the sliding window or time-fading window model to mine frequent patterns from streams of uncertain data. However, there are other models for processing data streams. In this paper, we propose a landmark-model based system for mining frequent patterns from streams of uncertain data.
A clustering-based visualization of colocation patterns
Elise Desmier, Frédéric Flouvat, D. Gay, Nazha Selmaoui-Folcher. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 70–78. DOI: 10.1145/2076623.2076633.

Extraction of interesting colocations from geo-referenced data is one of the major tasks in spatial pattern mining. The goal is to find sets of spatial object types whose instances are located in the same neighborhood. In this context, the main difficulty is the visualization and interpretation of extracted patterns by domain experts. Indeed, the common textual representation of colocations loses important spatial information such as the position, orientation, and spatial distribution of the patterns. To overcome this problem, we propose a new clustering-based visualization technique deeply integrated into the colocation mining algorithm. This simple, concise, and intuitive cartographic visualization considers both spatial information and expert practices. The proposal has been integrated into a Geographic Information System and evaluated on a real-world geological data set. Domain experts confirm the added value of this visualization approach.
Cache-conscious data placement in an in-memory key-value store
Christian Tinnefeld, A. Zeier, H. Plattner. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 134–142. DOI: 10.1145/2076623.2076640.

Key-value stores that keep their data entirely in main memory can serve applications whose performance requirements cannot be met by disk-based key-value stores. This paper evaluates the performance implications of cache-conscious data placement in an in-memory key-value store by examining how many values have to be stored consecutively in blocks in order to fully exploit memory locality during bandwidth-bound operations. We contribute by introducing a random block traversal main-memory access pattern, by describing the corresponding memory access costs, and by formally and experimentally deriving the correlation between block size and throughput. Our calculations and experiments vary the value and block sizes as well as their placement in memory, and derive their impact on cache misses throughout the memory hierarchy, on the ability to prefetch data, and on the number of CPU cycles needed to perform a given set of data operations. The paper closes with the insight that block-wise grouping of relatively few key-value pairs increases throughput by up to a factor of six, and with a discussion of the implications that block-wise grouping of data has for the design of a key-value store.
Extend core UDF framework for GPU-enabled analytical query evaluation
Qiming Chen, R. Wu, M. Hsu, Bin Zhang. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 143–151. DOI: 10.1145/2076623.2076641.

To achieve scalable data-intensive analytics, we investigate methods to integrate general-purpose analytic computation into a query pipeline using User Defined Functions (UDFs). However, an existing UDF cannot act as a block operator with chunk-wise input along the tuple-wise query processing pipeline; it is therefore unable to handle application semantics defined on a set of incoming tuples that represent a single object or fall within a time window, and unable to leverage external computation engines for efficient batch processing. To enable a data-intensive computation pipeline, we introduce a new kind of UDF called Set-In Set-Out (SISO). A SISO UDF is a block operator that processes input tuples and returns resulting tuples chunk by chunk. Operated in the query processing pipeline, a SISO UDF pools a chunk of input tuples, dispatches them to GPUs or an analytic engine in batch, materializes the results, and then streams them out. This behavior differentiates SISO UDFs from all existing ones and makes efficient integration of analytic computation and data management feasible. We have implemented the SISO UDF framework by extending the PostgreSQL query engine, and further demonstrated the use of SISO UDFs with GPU-enabled analytical query evaluation. Our experiments show that the proposed approach is scalable and efficient.
Top-k query processing for combinatorial objects using Euclidean distance
Takanobu Suzuki, A. Takasu, J. Adachi. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 209–213. DOI: 10.1145/2076623.2076651.

Conventional search techniques are mainly designed to return a ranked list of single objects relevant to a given query. However, they are not suited to retrieving a combination of objects that is close to the query. This paper presents top-k query processing in which Euclidean distance is used as the scoring function for combinatorial objects. We also propose a clustering-based pruning method that efficiently selects object combinations by pruning clusters that cannot contain candidates for the top-k results. We compared the proposed method with one that enumerates all combinatorial objects and calculates their distance to the query. Experimental results revealed that the proposed method improves processing efficiency by up to roughly 95%.
Mining semantic data for solving first-rater and cold-start problems in recommender systems
M. García, S. Segrera, V. F. L. Batista, María Dolores Muñoz Vicente, Angel L. Sánchez. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 256–257. DOI: 10.1145/2076623.2076662.

Recommender systems have become very popular in recent years, mainly on e-commerce sites, and they are increasingly important in other areas such as e-learning, tourism, and news. These systems are endowed with intelligent mechanisms to personalize recommendations of products or services. However, they present some serious drawbacks that affect user satisfaction. The first-rater and cold-start problems are two important drawbacks that arise when new products or new users, respectively, are introduced into the system. The lack of ratings for these products, or from these users, prevents the system from making recommendations. Traditional collaborative filtering methods are nowadays being replaced by web mining techniques to deal with scalability and performance problems, but the first-rater and cold-start problems require a different strategy. In this work, we propose a methodology that combines data mining techniques with semantic data to overcome these two important shortcomings.
Scrubbing query results from probabilistic databases
Jianwen Chen, Ling Feng, Wenwei Xue. Proceedings of the International Database Engineering and Applications Symposium, 2011, pp. 79–87. DOI: 10.1145/2076623.2076634.

Queries over probabilistic databases lead to probabilistic results. As the process of arriving at these results is based on underlying data probabilities, we believe that involving a user in the query processing loop and leveraging the user's personal knowledge about uncertain data will enable the system to scrub (correct) and tailor its probabilistic query results toward better quality from the perspective of that specific user. In this paper, we propose to open the black box of a probabilistic database query engine and explain to the user how the engine arrives at a probabilistic query result, as well as which uncertain tuples in the database the result is derived from. In this way, the user, based on his or her knowledge of the uncertain information, can not only decide how much confidence to place in the query engine, but also help clarify some of the uncertain information so that the query engine can regenerate an improved query result. Two particular issues associated with such a probabilistic database query framework are addressed: (i) how to interact with a user for answer explanation and uncertainty clarification without placing much burden on the user, and (ii) how to scrub/correct the query result without incurring much computational overhead in the query engine. Our performance study demonstrates the accuracy, effectiveness, and computational efficiency achieved by the proposed framework.