Knowledge discovery from transportation network data
Wei Jiang, Jaideep Vaidya, Zahir Balaporia, Chris Clifton, Brett Banich
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.82

Transportation and logistics are a major sector of the economy; however, data analysis in this domain has remained largely the province of optimization, and the potential of data mining and knowledge discovery techniques is largely untapped. Transportation networks are naturally represented as graphs. This paper explores the problems in mining transportation network graphs: we hope to learn how current techniques both succeed and fail on this problem and, from the failures, to present new challenges for data mining. We provide experimental results from applying both existing graph mining and conventional data mining techniques to real transportation network data, including new approaches for making these techniques applicable to the problems, and we discuss why these techniques are not appropriate. We also suggest several challenging problems to precipitate research and galvanize future work in this area.
Dynamic load management for distributed continuous query systems
Yongluan Zhou, B. Ooi, K. Tan
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.54

A distributed stream processing system must adapt to changes in environment parameters and server load. We believe a dynamic load management scheme is indispensable if the system is to be scalable. In particular, we expect aggressive methods such as runtime query operator migration to bring long-term benefit (especially for long-running continuous queries) even though they may incur some short-term overhead. To date, however, few complete and practical solutions have been proposed for this problem. In this paper we offer our solution. More specifically, we make the following contributions: we formally define a new metric, the performance ratio (PR), to measure the relative performance of each query and the objective for the whole system; by building a new cost model, we identify heuristics that can be used to approach the objective; and we propose a complete and practical distributed load management scheme, which includes a static initial placement scheme for newly initiated queries as well as a runtime dynamic scheme. An extensive experimental study shows the effectiveness of our technique.
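The two-phase scheme the abstract outlines (static initial placement plus runtime migration) can be illustrated with a toy sketch. This is not the paper's algorithm: the PR metric and cost model are omitted, and all names, costs, and the imbalance threshold here are simplifying assumptions.

```python
# Hypothetical sketch: greedy initial placement of query operators on the
# least-loaded server, plus a single runtime migration step that fires when
# the load imbalance between servers exceeds a threshold.

def place(operators, servers):
    """Assign each operator (name -> cost) to the currently least-loaded server."""
    load = {s: 0.0 for s in servers}
    assignment = {}
    for op, cost in sorted(operators.items(), key=lambda kv: -kv[1]):
        target = min(load, key=load.get)   # least-loaded server so far
        assignment[op] = target
        load[target] += cost
    return assignment, load

def migrate_once(assignment, load, costs, threshold=1.5):
    """Move one operator from the most- to the least-loaded server if the
    load ratio exceeds `threshold`; return True if a migration happened."""
    hot = max(load, key=load.get)
    cold = min(load, key=load.get)
    if load[hot] / max(load[cold], 1e-9) <= threshold:
        return False
    # migrate the cheapest operator on the hot server to limit overhead
    op = min((o for o, s in assignment.items() if s == hot),
             key=lambda o: costs[o])
    assignment[op] = cold
    load[hot] -= costs[op]
    load[cold] += costs[op]
    return True
```

In a real system the migration decision would be driven by the paper's cost model rather than a raw load ratio, and operator state would have to be transferred along with the assignment.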
A unified framework for monitoring data streams in real time
A. Bulut, Ambuj K. Singh
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.13

Online monitoring of data streams poses a challenge in many data-centric applications, such as telecommunications networks, traffic management, trend-related analysis, Web-click streams, intrusion detection, and sensor networks. Mining techniques employed in these applications must be efficient in space usage and per-item processing time while providing high-quality answers to (1) aggregate monitoring queries, such as finding surprising levels of a data stream and detecting bursts, and (2) similarity queries, such as detecting correlations and finding interesting patterns. The most important aspect of these tasks is their need for flexible query lengths: it is difficult to set appropriate lengths a priori. For example, bursts of events can occur at temporal modalities varying from hours to days to weeks, and correlated trends can occur at various temporal scales. The system has to discover "interesting" behavior online and monitor over flexible window sizes. In this paper, we propose a multi-resolution indexing scheme that handles variable-length queries efficiently. We demonstrate the effectiveness of our framework over existing techniques through an extensive set of experiments.
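The core idea of answering variable-length window queries from precomputed summaries can be illustrated with a simple sparse-table style sketch. This is an assumption-laden stand-in, not the paper's index: level j stores the maximum of each window of length 2**j, so a max query over any window length is answered by combining two precomputed dyadic windows.

```python
# Illustrative multi-resolution index over a (finite prefix of a) stream.
# Level 0 holds the raw values; level j holds maxima of windows of length 2**j.

def build_index(xs):
    levels = [list(xs)]
    j = 1
    while (1 << j) <= len(xs):
        prev, half = levels[-1], 1 << (j - 1)
        levels.append([max(prev[i], prev[i + half])
                       for i in range(len(xs) - (1 << j) + 1)])
        j += 1
    return levels

def window_max(levels, lo, hi):
    """Max of xs[lo:hi] from two overlapping dyadic windows, O(1) per query."""
    n = hi - lo
    j = n.bit_length() - 1          # largest power of two <= n
    return max(levels[j][lo], levels[j][hi - (1 << j)])
```

A burst query of any length then reduces to checking whether `window_max` over that window exceeds a threshold, without having fixed the window size when the index was built.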
Top-down specialization for information and privacy preservation
B. Fung, Ke Wang, Philip S. Yu
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.143

Releasing person-specific data in its most specific state poses a threat to individual privacy. This paper presents a practical and efficient algorithm for determining a generalized version of data that masks sensitive information yet remains useful for classification modelling. The generalization is implemented by specializing, i.e., detailing, the level of information in a top-down manner until a minimum privacy requirement would be violated. This top-down specialization is natural and efficient for handling both categorical and continuous attributes. Our approach exploits the fact that data usually contains redundant structures for classification: while generalization may eliminate some structures, others emerge to help. Our results show that classification quality can be preserved even for highly restrictive privacy requirements. This work has great applicability to both public and private sectors that share information for mutual benefit and productivity.
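The top-down direction can be sketched on a single categorical attribute. This is a drastic simplification of the paper's algorithm: the taxonomy is invented, the "minimum privacy requirement" is assumed to be a k-anonymity-style group-size bound, and the information-gain criterion for choosing which value to specialize is omitted.

```python
from collections import Counter

# Hypothetical location taxonomy; the paper works with general hierarchies.
TAXONOMY = {"ANY": ["Europe", "Asia"],
            "Europe": ["France", "Italy"],
            "Asia": ["Japan", "India"]}
PARENT = {c: p for p, cs in TAXONOMY.items() for c in cs}

def generalize_to(leaf, level):
    """Walk `leaf` up the taxonomy to depth `level` (root = 0)."""
    path = [leaf]
    while path[-1] in PARENT:
        path.append(PARENT[path[-1]])
    path.reverse()                          # root ... leaf
    return path[min(level, len(path) - 1)]

def top_down_specialize(records, k):
    """Start fully general; specialize one level at a time while every value
    is still shared by at least k records, then stop."""
    level = 0
    while True:
        trial = [generalize_to(r, level + 1) for r in records]
        if min(Counter(trial).values()) < k:    # next step would violate k
            return [generalize_to(r, level) for r in records]
        level += 1
        if trial == records:                    # fully specialized already
            return trial
```

The real algorithm specializes individual values (not whole levels) and picks the specialization that best trades information gain against the privacy requirement.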
Evaluation of spatio-temporal predicates on moving objects
Markus Schneider
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.62

Moving objects databases, which manage spatial objects whose position and extent change continuously over time, have recently attracted considerable interest in the database community. Queries about moving objects become particularly interesting when they ask for temporal changes in the topological relationships between evolving spatial objects, and a concept of spatio-temporal predicates has been proposed to describe these relationships. The goal of this paper is to design efficient algorithms for these predicates so that they can be used in spatio-temporal joins and selections. Rather than designing an algorithm for each new predicate individually, this paper proposes a generic algorithmic scheme able to cover present and future predicate definitions.
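The notion of a spatio-temporal predicate as a temporal sequence of topological relationships can be illustrated crudely by sampling. This sketch is an assumption, not the paper's scheme: a real evaluator works on exact geometry and reports instantaneous relationships such as `meet`, which coarse sampling misses.

```python
# Hypothetical sketch: the "development" of the topological relationship
# between a linearly moving point and a fixed disk, obtained by sampling
# time and collapsing consecutive repeats.

def relation(px, py, cx, cy, r):
    d2 = (px - cx) ** 2 + (py - cy) ** 2
    return "inside" if d2 < r * r else "disjoint"

def development(path, disk, steps=100):
    (x0, y0), (x1, y1) = path
    cx, cy, r = disk
    seq = []
    for i in range(steps + 1):
        t = i / steps
        rel = relation(x0 + t * (x1 - x0), y0 + t * (y1 - y0), cx, cy, r)
        if not seq or seq[-1] != rel:
            seq.append(rel)
    return seq
```

A predicate such as "Cross" can then be tested by comparing the development against the expected sequence, e.g. `["disjoint", "inside", "disjoint"]`; the paper's generic scheme evaluates such sequences uniformly instead of coding each predicate by hand.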
Mining closed relational graphs with connectivity constraints
Xifeng Yan, X. Zhou, Jiawei Han
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1145/1081870.1081908

Relational graphs are widely used in modeling large-scale networks such as biological networks and social networks. In a relational graph, each node represents a distinct entity and each edge a relationship between entities. Various algorithms have been developed to discover interesting patterns in a single relational graph (Z. Wu et al., 1993), but little attention has been paid to patterns hidden in multiple relational graphs. One interesting pattern in relational graphs is the frequent highly connected subgraph, which can identify recurrent groups and clusters. In social networks, this kind of pattern corresponds to communities in which people are strongly associated. For example, if several researchers co-author papers, attend the same conferences, and cite each other's work, it strongly indicates that they are studying the same research theme.
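The two ingredients of the pattern class, frequency across graphs and high connectivity, can be sketched separately. This is not the authors' mining algorithm: it keeps edges occurring in at least `min_sup` of the input graphs, then peels the result down to its k-core (minimum degree >= k) as a crude connectivity constraint.

```python
from collections import Counter, defaultdict

def frequent_edges(graphs, min_sup):
    """Edges (as sorted tuples) appearing in at least `min_sup` graphs."""
    counts = Counter(e for g in graphs for e in g)
    return {e for e, c in counts.items() if c >= min_sup}

def k_core(edges, k):
    """Iteratively remove nodes of degree < k; return surviving adjacency."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for node in [n for n in adj if len(adj[n]) < k]:
            for nb in adj.pop(node):
                if nb in adj:
                    adj[nb].discard(node)
            changed = True
    return dict(adj)
```

The paper uses edge connectivity rather than minimum degree and mines closed patterns directly, but the sketch shows why the constraint prunes loosely attached nodes (like an author with a single shared paper) out of a community pattern.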
Representing and querying data transformations
Yannis Velegrakis, Renée J. Miller, J. Mylopoulos
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.123

Modern information systems often store data that has been transformed and integrated from a variety of sources, and this integration may obscure the original source semantics of data items. For many tasks, it is important to be able to determine not only where data items originated, but also why they appear in the integration as they do and through what transformations they were derived. This problem is known as data provenance. In this work, we consider data provenance at the schema and mapping level. In particular, we consider how to answer questions such as "what schema elements in the source(s) contributed to this value?" or "through what transformations or mappings was this value derived?" To this end, we elevate schemas and mappings to first-class citizens that are stored in a repository and associated with the actual data values. We also develop an extended query language, called MXQL, that allows metadata to be queried as regular data, and we describe its implementation scenario.
Distributed/heterogeneous query processing in Microsoft SQL server
J. Blakeley, Conor Cunningham, Nigel Ellis, Balaji Rathakrishnan, Ming Wu
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.51

This paper presents an overview of the architecture of the distributed, heterogeneous query processor (DHQP) in the Microsoft SQL Server database system, which enables queries over a large collection of diverse data sources. The paper highlights three salient aspects of the architecture. First, the system introduces well-defined abstractions, such as connections, commands, and rowsets, that enable sources to plug into the system; these abstractions are formalized by the OLE DB data access interfaces, whose generality and broad industry adoption allow the system to reach a very large collection of diverse data sources, ranging from personal productivity tools to database management systems to file system data. Second, the DHQP is built into the relational optimizer and execution engine, which enables DH queries and updates to benefit from the cost-based algebraic transformations and execution strategies available in the system. Finally, the architecture is inherently extensible to new data sources as they emerge, and it serves as a key extensibility point for the relational engine to add features such as full-text search and distributed partitioned views.
Compressing bitmap indices by data reorganization
Ali Pinar, Tao Tao, H. Ferhatosmanoğlu
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.35

Many scientific applications generate massive volumes of data through observations or computer simulations, creating a need for effective indexing methods for efficient storage and retrieval of scientific data. Unlike conventional databases, scientific data is mostly read-only, and its volume can reach the order of petabytes, making a compact index structure vital. Bitmap indexing has been successfully applied to scientific databases by exploiting the fact that scientific data are enumerated or numerical, and bitmap indices can be compressed with variants of run-length encoding for a compact index structure. However, even this may not be enough for the enormous data generated in applications such as high-energy physics. In this paper, we study how to reorganize bitmap tables for improved compression rates. Our algorithms are used purely as a preprocessing step, so current indexing techniques and query processing algorithms can be reused unchanged. We introduce the tuple reordering problem, which aims to reorganize database tuples for optimal compression rates, and we propose a Gray code ordering algorithm for this NP-complete problem; the algorithm is in-place and runs in time linear in the size of the database. We also discuss how the tuple reordering problem can be reduced to the traveling salesperson problem. Our experimental results on real data sets show that the compression ratio can be improved by a factor of 2 to 10.
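Why reordering tuples helps run-length encoding can be shown on a tiny 0/1 bitmap table. Sorting rows by their position in the binary-reflected Gray code sequence is used here as a simple stand-in for the paper's in-place Gray code ordering algorithm; the run count per column is a proxy for the encoded size.

```python
def runs(column):
    """Number of runs in a 0/1 sequence (proxy for its run-length-encoded size)."""
    return 1 + sum(1 for a, b in zip(column, column[1:]) if a != b)

def total_runs(rows):
    """Total runs over all columns of a bitmap table (rows of 0/1 tuples)."""
    return sum(runs([r[j] for r in rows]) for j in range(len(rows[0])))

def gray_rank(bits):
    """Index of a bit pattern in the binary-reflected Gray code sequence."""
    g = int("".join(map(str, bits)), 2)
    b = 0
    while g:                 # inverse of g = b ^ (b >> 1)
        b ^= g
        g >>= 1
    return b

def gray_reorder(rows):
    """Reorder tuples so consecutive rows are close in the Gray code sequence,
    lengthening runs within each column."""
    return sorted(rows, key=gray_rank)
```

On the test table below, reordering cuts the total number of runs from 11 to 8; the paper's experiments show factors of 2 to 10 on real bitmap indices.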
Rank-aware query processing and optimization
I. Ilyas, Walid G. Aref
21st International Conference on Data Engineering (ICDE'05), 2005. DOI: 10.1109/ICDE.2005.119

Efficient execution of ranking queries is increasingly becoming a major challenge for database technology. DBMSs provide efficient update, indexing, concurrency, and recovery, while information retrieval on text and multimedia requires techniques involving uncertainty and ranking for effective retrieval. The main goal of this paper is to give an in-depth look at supporting ranking queries as an increasingly interesting area of research. We cover the state-of-the-art techniques, in both research prototypes and industry-strength database engines, for efficient handling of ranking queries. We focus primarily on how to integrate ranking as a new query processing and optimization dimension, with the aim of supporting ranking queries as a basic, core functionality. The paper identifies several challenges that must be addressed for true support of ranking and effective retrieval in database management systems.
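One classic building block in this area (not an algorithm from this particular paper) is Fagin's threshold algorithm for top-k retrieval over multiple score-sorted lists: it stops as soon as k seen objects score at least as high as any possible unseen object, which is the essence of rank-aware early termination.

```python
import heapq

def top_k(lists, k):
    """`lists`: sequences of (object, score) sorted by descending score.
    Aggregate score = sum over lists. Returns the k best (score, object) pairs,
    best first, using threshold-algorithm-style early stopping."""
    index = [{obj: s for obj, s in lst} for lst in lists]  # random access
    seen = set()
    best = []                       # min-heap of the k best (score, object)
    for depth in range(max(len(l) for l in lists)):
        threshold = 0               # best possible score of an unseen object
        for lst in lists:
            if depth >= len(lst):
                continue
            obj, s = lst[depth]
            threshold += s
            if obj not in seen:
                seen.add(obj)
                total = sum(ix.get(obj, 0) for ix in index)
                heapq.heappush(best, (total, obj))
                if len(best) > k:
                    heapq.heappop(best)
        if len(best) == k and best[0][0] >= threshold:
            break                   # no unseen object can enter the top k
    return sorted(best, reverse=True)
```

A rank-aware engine pushes exactly this kind of stopping condition into physical operators (e.g. rank-joins) instead of materializing and sorting full results.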