SEQ: A model for sequence databases
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380388
P. Seshadri, M. Livny, R. Ramakrishnan
This paper presents the SEQ model, which is the basis for a system to manage various kinds of sequence data. The model separates the data from the ordering information, and includes operators based on two distinct abstractions of a sequence. The main contributions of the SEQ model are: (a) it can deal with different types of sequence data, (b) it supports an expressive range of sequence queries, and (c) it draws from many of the diverse existing approaches to modeling sequence data.
{"title":"SEQ: A model for sequence databases","authors":"P. Seshadri, M. Livny, R. Ramakrishnan","doi":"10.1109/ICDE.1995.380388","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380388","url":null,"abstract":"This paper presents the SEQ model which is the basis for a system to manage various kinds of sequence data. The model separates the data from the ordering information, and includes operators based on two distinct abstractions of a sequence. The main contributions of the SEQ model are: (a) it can deal with different types of sequence data, (b) it supports an expressive range of sequence queries, (c) it draws from many of the diverse existing approaches to modeling sequence data.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115919495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining sequential patterns
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380415
R. Agrawal, R. Srikant
We are given a large database of customer transactions, where each transaction consists of a customer-id, a transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, although AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.
{"title":"Mining sequential patterns","authors":"R. Agrawal, R. Srikant","doi":"10.1109/ICDE.1995.380415","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380415","url":null,"abstract":"We are given a large database of customer transactions, where each transaction consists of customer-id, transaction time, and the items bought in the transaction. We introduce the problem of mining sequential patterns over such databases. We present three algorithms to solve this problem, and empirically evaluate their performance using synthetic data. Two of the proposed algorithms, AprioriSome and AprioriAll, have comparable performance, albeit AprioriSome performs a little better when the minimum number of customers that must support a sequential pattern is low. Scale-up experiments show that both AprioriSome and AprioriAll scale linearly with the number of customer transactions. They also have excellent scale-up properties with respect to the number of transactions per customer and the number of items in a transaction.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"125 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124773888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A performance evaluation of load balancing techniques for join operations on multicomputer database systems
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380411
K. Hua, Wallapak Tavanapong, H. Young
There has been a wealth of research in the area of parallel join algorithms. Among them, hash-based algorithms are particularly suitable for shared-nothing database systems. The effectiveness of these techniques depends on the uniformity of the distribution of the join attribute values. When this condition is not met, bucket sizes may fluctuate severely, causing an uneven workload across the processing nodes. Many parallel join algorithms with load balancing capability have been proposed to address this problem. Among them, the sampling and incremental approaches have been shown to provide an improvement over the more conventional methods. The comparison between these two approaches, however, has not been investigated. In this paper, we improve these techniques and implement them on an nCUBE/2 parallel computer to compare their performance. Our study indicates that the sampling technique is the better approach.
{"title":"A performance evaluation of load balancing techniques for join operations on multicomputer database systems","authors":"K. Hua, Wallapak Tavanapong, H. Young","doi":"10.1109/ICDE.1995.380411","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380411","url":null,"abstract":"There has been a wealth of research in the area of parallel join algorithms. Among them, hash-based algorithms are particularly suitable for shared-nothing database systems. The effectiveness of these techniques depends on the uniformity in the distribution of the join attribute values. When this condition is not met, a severe fluctuation may occur among the bucket sizes, causing uneven workload for the processing nodes. Many parallel join algorithms with load balancing capability have been proposed to address this problem. Among them, the sampling and incremental approaches have been shown to provide an improvement over the more conventional methods. The comparison between these two approaches, however, has not been investigated. In this paper, we improve these techniques and implement them on an nCUBE/2 parallel computer to compare their performance. Our study indicates that the sampling technique is the better approach.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124139543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deputy mechanisms for object-oriented databases
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380377
Zhiyong Peng, Y. Kambayashi
Concepts of deputy objects and deputy classes for object-oriented databases (OODBs) are introduced. They can be used for the unified realization of object views, roles and migration. Previous research on these concepts was carried out separately, although they are very closely related. Objects appearing in a view can be regarded as playing roles in that view. Object migration is caused by a change of roles of an object. Deputy objects can be used for a unified treatment and generalization of these concepts. The schemata of deputy objects are defined by deputy classes. A set of algebraic operations is developed for deputy class derivation. In addition, three procedures for update propagation between deputy objects and source objects have been designed, which can support dynamic classification. The unified realization of object views, roles and migration by deputy mechanisms offers the following advantages. (1) Treating view objects as roles of an object allows them to have additional attributes and methods, so that autonomous views suitable for OODBs can be realized. (2) Handling object roles in the same way as object views enables object migration to be easily realized by the dynamic classification functions of object views. (3) Generalizing object views, roles and migration makes it possible to define and enforce various semantic constraints on them uniformly.
{"title":"Deputy mechanisms for object-oriented databases","authors":"Zhiyong Peng, Y. Kambayashi","doi":"10.1109/ICDE.1995.380377","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380377","url":null,"abstract":"Concepts of deputy objects and deputy classes for object-oriented databases (OODBs) are introduced. They can be used for unified realization of object views, roles and migration. The previous researches on these concepts were carried out separately, although they are very closely related. Objects appearing in a view can be regarded as playing roles in that view. Object migration is caused by change of roles of an object. Deputy objects can be used for unified treatment of them and generalization of these concepts. The schemata of deputy objects are defined by deputy classes. A set of algebraic operations are developed for deputy class derivation. In addition, three procedures for update propagation between deputy objects and source objects have been designed, which can support dynamic classification. The unified realization of object views, roles and migration by deputy mechanisms can achieve the following advantages. (1) Treating view objects as roles of an object allows them to have additional attributes and methods so that the autonomous views suitable for OODBs can be realized. (2) Handling object roles in the same way as object views enables object migration to be easily realized by dynamic classification functions of object views. (3) Generalization of object views, roles and migration makes it possible that various semantic constraints on them can, be defined and enforced uniformly.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126108050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The AQUA approach to querying lists and trees in object-oriented databases
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380405
B. Subramanian, Theodore W. Leung, Scott L. Vandenberg, S. Zdonik
Relational database systems and most object-oriented database systems provide support for queries. Usually these queries represent retrievals over sets or multisets. Many new applications for databases, such as multimedia systems and digital libraries, need support for queries on complex bulk types such as lists and trees. In this paper, we describe an object-oriented query algebra called AQUA (A Query Algebra) for lists and trees. The operators in the algebra preserve the ordering between the elements of a list or tree, even when the result list or tree contains an arbitrary set of nodes from the original tree. We also present predicate languages for lists and trees which allow order-sensitive queries because they use pattern matching to examine groups of list or tree nodes rather than individual nodes. The ability to decompose predicate patterns enables optimizations that make use of indices.
{"title":"The AQUA approach to querying lists and trees in object-oriented databases","authors":"B. Subramanian, Theodore W. Leung, Scott L. Vandenberg, S. Zdonik","doi":"10.1109/ICDE.1995.380405","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380405","url":null,"abstract":"Relational database systems and most object-oriented database systems provide support for queries. Usually these queries represent retrievals over sets or multisets. Many new applications for databases, such as multimedia systems and digital libraries, need support for queries on complex bulk types such as lists and trees. In this paper we describe an object-oriented query algebra called AQUA (= A Query Algebra) for lists and trees. The operators in the algebra preserve the ordering between the elements of a list or tree, even when the result list or tree contains an arbitrary set of nodes from the original tree. We also present predicate languages for lists and trees which allow order-sensitive queries because they use pattern matching to examine groups of list or tree nodes rather than individual nodes. The ability to decompose predicate patterns enables optimizations that make use of indices.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130588468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Object-oriented conceptual modeling of video data
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380357
Young Francis Day, S. Dagtas, Mitsutoshi Iino, A. Khokhar, A. Ghafoor
We propose a graphical data model for specifying spatio-temporal semantics of video data. The proposed model segments a video clip into subsegments consisting of objects. Each object is detected and recognized, and the relevant information about each object is recorded. The motions of objects are modeled through their relative spatial relationships as time evolves. Based on the semantics provided by this model, a user can create his/her own object-oriented view of the video database. Using propositional logic, we describe a methodology for specifying conceptual queries involving spatio-temporal semantics and for expressing views for retrieving various video clips. Alternatively, a user can sketch the query by exemplifying the concept. The proposed methodology can be used to specify spatio-temporal concepts at various levels of information granularity.
{"title":"Object-oriented conceptual modeling of video data","authors":"Young Francis Day, S. Dagtas, Mitsutoshi Iino, A. Khokhar, A. Ghafoor","doi":"10.1109/ICDE.1995.380357","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380357","url":null,"abstract":"We propose a graphical data model for specifying spatio-temporal semantics of video data. The proposed model segments a video clip into subsegments consisting of objects. Each object is detected and recognized, and the relevant information of each object is recorded. The motions of objects are modeled through their relative spatial relationships as time evolves. Based on the semantics provided by this model, a user can create his/her own, object-oriented view of the video database. Using the propositional logic, we describe a methodology for specifying conceptual queries involving spatio-temporal semantics and expressing views for retrieving various video clips. Alternatively, a user can sketch the query, by exemplifying the concept. The proposed methodology can be used to specify spatio-temporal concepts at various levels of information granularity.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121403014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Query interoperation among object-oriented and relational databases
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380384
Xiaolei Qian, L. Raschid
We develop an efficient algorithm for query interoperation among existing heterogeneous object-oriented and relational databases. Our algorithm utilizes a canonical deductive database as a uniform representation of object-oriented schema and data. High-order object queries are transformed into the canonical deductive database, in which they are partially evaluated and optimized before being translated to relational queries. Our algorithm can be incorporated into object-oriented interfaces to relational databases or into object-oriented federated databases to support object queries over heterogeneous relational databases.
{"title":"Query interoperation among object-oriented and relational databases","authors":"Xiaolei Qian, L. Raschid","doi":"10.1109/ICDE.1995.380384","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380384","url":null,"abstract":"We develop an efficient algorithm for the query interoperation among existing heterogeneous object-oriented and relational databases. Our algorithm utilizes a canonical deductive database as a uniform representation of object-oriented schema and data. High-order object queries are transformed to the canonical deductive database in which they are partially evaluated and optimized, before being translated to relational queries. Our algorithm can be incorporated into object-oriented interfaces to relational databases or object-oriented federated databases to support object queries to heterogeneous relational databases.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"404 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116132454","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The design and experimental evaluation of an information discovery mechanism for networks of autonomous database systems
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380414
D. McLeod, A. Si
An approach and mechanism to support the dynamic discovery of information units within a collection of autonomous and heterogeneous database systems is described. The mechanism is based upon a core set of database constructs that characterizes object database systems, along with a set of self-adaptive heuristics employing techniques from machine learning. The approach provides a uniform framework for organizing, indexing, searching, and browsing database information units within an environment of multiple, autonomous, interconnected databases. The feasibility of the approach and mechanism is illustrated using a protein/genetics application environment. Metrics for measuring the performance of the discovery system are presented, and the effectiveness of the system is evaluated against them. Performance tradeoffs are examined and analyzed through experiments employing a simulation model.
{"title":"The design and experimental evaluation of an information discovery mechanism for networks of autonomous database systems","authors":"D. McLeod, A. Si","doi":"10.1109/ICDE.1995.380414","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380414","url":null,"abstract":"An approach and mechanism to support the dynamic discovery of information units within a collection of autonomous and heterogeneous database systems is described. The mechanism is based upon a core set of database constructs that characterizes object database systems, along with a set of self-adaptive heuristics employing techniques from machine learning. The approach provides an uniform framework for organizing, indexing, searching, and browsing database information units within an environment of multiple, autonomous, interconnected databases. The feasibility of the approach and mechanism is illustrated using a protein/genetics application environment. Metrics for measuring the performance of the discovery system are presented and the effectiveness of the system is thereby evaluated. Performance tradeoffs are examined and analyzed by experiments performed, employing a simulation model.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125891416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enterprise workflow architecture
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380409
W. Du, Steve Peterson, M. Shan
Workflow builders are designed to facilitate the development of automated processes and to support flexible applications that can be updated, enhanced or completely revamped. The Hewlett-Packard WorkManager is an open product data management solution with workflow management capabilities. WorkManager supports the entire product lifecycle by providing a single, logical repository for all data, and it manages and tracks enterprise-wide processes. With a strong information management platform at its core, WorkManager provides central administration capabilities, including supervision and intervention where necessary. Because enterprise data is usually fragmented and stored in a variety of legacy systems, and different organizations have different amounts of control over their data, an enterprise workflow system needs to support processes that access data from different sites and applications. This paper describes the architecture of distributed workflow, Hewlett-Packard's solution to the enterprise workflow problem. The architecture is an extension of the existing WorkManager architecture. Its development is based on user requirements and four high-level user models. The user models and the architecture are described.
{"title":"Enterprise workflow architecture","authors":"W. Du, Steve Peterson, M. Shan","doi":"10.1109/ICDE.1995.380409","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380409","url":null,"abstract":"Workflow builders are designed to facilitate development of automated processes and support flexible applications that can be updated, enhanced or completely revamped. The Hewlett-Packard WorkManager is an open product data management solution with workflow management capabilities. WorkManager supports the entire product lifecycle by providing a single, logical repository for all data, and it manages and tracks enterprise-wide processes. With a strong information management platform at its core, WorkManager provides central administration capabilities, including supervision and intervention, where necessary. Because enterprise data is usually fragmented and stored in a variety of legacy systems, and different organizations have different amount of control over their data, an enterprise workflow system needs to support processes accessing data from different sites and applications. This paper describes the architecture of distributed workflow, Hewlett-Packard's solution to the enterprise workflow problem. The architecture is an extension of the existing WorkManager architecture. Its development is based on user requirements and four high-level user models. The user models and the architecture are described.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122005878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context-dependent interpretations of linguistic terms in fuzzy relational databases
Pub Date: 1995-03-06 | DOI: 10.1109/ICDE.1995.380399
Weining Zhang, Clement T. Yu, Bryan Reagan, H. Nakajima
Approaches are proposed to allow fuzzy terms to be interpreted according to the context within which they are used. Such an interpretation is natural and useful. A query-dependent interpretation is proposed to allow a fuzzy term to be interpreted relative to a partial answer of a query. A scaling process is used to transform a pre-defined meaning of a fuzzy term into an appropriate meaning in the given context. Sufficient conditions are given for a nested fuzzy query with RELATIVE quantifiers to be unnested for efficient evaluation. An attribute-dependent interpretation is proposed to model applications in which the meaning of a fuzzy term in an attribute must be interpreted with respect to values in other related attributes. Two necessary and sufficient conditions for a tuple to have a unique attribute-dependent interpretation are provided. We describe an interpretation system that allows queries to be processed based on the attribute-dependent interpretation of the data. Two techniques, grouping and shifting, are proposed to improve the implementation.
{"title":"Context-dependent interpretations of linguistic terms in fuzzy relational databases","authors":"Weining Zhang, Clement T. Yu, Bryan Reagan, H. Nakajima","doi":"10.1109/ICDE.1995.380399","DOIUrl":"https://doi.org/10.1109/ICDE.1995.380399","url":null,"abstract":"Approaches are proposed to allow fuzzy terms to be interpreted according to the context within which they are used. Such an interpretation is natural and useful. A query-dependent interpretation is proposed to allow a fuzzy term to be interpreted relative to a partial answer of a query. A scaling process is used to transform a pre-defined meaning of a fuzzy term into on appropriate meaning in the given context. Sufficient conditions are given for a nested fuzzy query with RELATIVE quantifiers to be unnested for an efficient evaluation. An attribute-dependent interpretation is proposed to model the applications in which the meaning of a fuzzy term in an attribute must be interpreted with respect to values in other related attributes. Two necessary and sufficient conditions for a tuple to have a unique attribute-dependent interpretation are provided. We describe an interpretation system that allows queries to be processed based on the attribute-dependent interpretation of the data. Two techniques, grouping and shifting, are proposed to improve the implementation.<<ETX>>","PeriodicalId":184415,"journal":{"name":"Proceedings of the Eleventh International Conference on Data Engineering","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121901670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}