Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00102
A Survey of Scalable Deep Learning Frameworks
Saba Amiri, Sara Salimzadeh, Adam Belloum
2019 15th International Conference on eScience (eScience)
Machine learning models have recently seen a large increase in usage across different disciplines. Their ability to learn complex concepts from data and perform sophisticated tasks, combined with their ability to leverage the vast computational infrastructures available today, has made them a very attractive choice for many challenges in academia and industry. In this context, deep learning, as a sub-class of machine learning, is becoming an especially important tool in modern computing applications. It has been used successfully for a wide range of use cases, from medical applications to playing games. Because of the nature of these systems, and because a considerable portion of their use cases deal with large volumes of data, training them is a very time- and resource-consuming task that requires vast amounts of computing cycles. To overcome this issue, it is natural to scale deep learning applications across multiple machines in order to achieve fast and manageable training speeds while maintaining a high level of accuracy. In recent years, a number of frameworks have been proposed, with roots in both academia and industry, to scale up ML algorithms and overcome this scalability issue. With most of them being open source and supported by an increasingly large community of AI specialists and data scientists, their capabilities, performance, and compatibility with modern hardware have been honed and extended. Thus, it is not easy for a domain scientist to pick the tool or framework best suited to their needs. This research aims to provide an overview of the relevant, widely used scalable machine learning and deep learning frameworks currently available and to provide the grounds on which researchers can compare and choose the best set of tools for their ML pipeline.
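A common scaling strategy offered by such frameworks is synchronous data parallelism: each worker computes a gradient on its shard of the batch, an all-reduce averages the gradients, and every replica applies the same update. A minimal NumPy sketch of the idea, with the all-reduce simulated by a plain average (a sketch of the general technique, not any particular framework surveyed in the paper):

```python
import numpy as np

def local_gradient(w, X, y):
    """Gradient of 0.5 * ||Xw - y||^2 / n on one worker's shard."""
    n = len(y)
    return X.T @ (X @ w - y) / n

def data_parallel_step(w, shards, lr=0.1):
    """One synchronous data-parallel step: each worker computes a gradient
    on its shard; an average (standing in for the all-reduce) combines
    them; every replica applies the same update."""
    grads = [local_gradient(w, X, y) for X, y in shards]
    avg = np.mean(grads, axis=0)  # the simulated all-reduce
    return w - lr * avg

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
w = np.zeros(3)
# Split the batch evenly across 4 simulated workers.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w_new = data_parallel_step(w, shards)
```

With equal-sized shards, the average of the per-shard gradients equals the full-batch gradient, which is why the synchronous scheme preserves the single-machine training trajectory.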
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00054
Understanding ML Driven HPC: Applications and Infrastructure
Geoffrey Fox, S. Jha
2019 15th International Conference on eScience (eScience)
We recently outlined the vision of "Learning Everywhere", which captures the possibility and impact of coupling learning methods with traditional HPC methods. A primary driver of such coupling is the promise that machine learning (ML) will give major performance improvements for traditional HPC simulations. Motivated by this potential, the "ML around HPC" class of integration is of particular significance. In a related follow-up paper, we provided an initial taxonomy for integrating learning around HPC methods. In this paper, which is part of the Learning Everywhere series, we discuss how learning methods and HPC simulations are being integrated to enhance the effective performance of computations. We describe several modes in which learning methods integrate with HPC simulations (substitution, assimilation, and control) and provide representative applications in each mode. We also discuss some open research questions that we hope will motivate and clear the ground for MLaroundHPC benchmarks.
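The substitution mode can be illustrated with a toy surrogate: train a cheap learned model on a handful of simulation runs, then answer further queries from the surrogate instead of the simulator. A minimal sketch, in which a closed-form function stands in for the expensive HPC simulation (an illustration of the mode, not an application from the paper):

```python
import numpy as np

def expensive_simulation(x):
    """Hypothetical stand-in for a costly HPC simulation."""
    return np.sin(x) + 0.1 * x ** 2

# Substitution: fit a cheap surrogate on a few simulation runs, then
# answer new queries from the surrogate instead of the simulator.
train_x = np.linspace(0.0, 3.0, 30)
train_y = expensive_simulation(train_x)          # the few "real" runs
coeffs = np.polyfit(train_x, train_y, deg=5)     # the learning step
surrogate = np.poly1d(coeffs)

# Queries inside the training range are answered without the simulator.
query = np.linspace(0.5, 2.5, 7)
error = np.max(np.abs(surrogate(query) - expensive_simulation(query)))
```

The payoff comes when the simulator costs hours per run: once trained, the surrogate answers queries in microseconds, at the price of a bounded approximation error inside the sampled regime.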
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00043
BBBlockchain: Blockchain-Based Participation in Urban Development
R. Muth, Kerstin Eisenhut, J. Rabe, Florian Tschorsch
2019 15th International Conference on eScience (eScience)
Urban development processes often suffer from mistrust amongst different stakeholder groups. The lack of transparency within complex, long-term planning processes and the limited scope for co-creation and joint decision-making constitute a persistent obstacle to successful participation in urban planning. Civic technology has the potential to improve this predicament. With BBBlockchain, we propose a blockchain-based participation platform that addresses all layers of participation. In developing the platform, we focus on two key aspects: how to increase transparency and how to introduce enhanced co-decision-making. To this end, we exploit the immutable nature of blockchains and effectively offer a platform that excludes monopolistic control over information. The decision-making process is governed by smart contracts implementing, for example, timestamping of planning documents, opinion polls, and the management of a participatory budget. Our architecture and prototypes show the operational capabilities of this approach in a series of use cases for urban development.
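The document-timestamping idea can be sketched independently of any particular chain: hashing each document and chaining the records makes later tampering detectable. A minimal Python illustration of the principle (not BBBlockchain's actual smart-contract code, which runs on a blockchain):

```python
import hashlib
import json

class TimestampChain:
    """Hash-chained log illustrating blockchain-style timestamping of
    planning documents: each record commits to the document hash, the
    timestamp, and the previous record's hash."""

    def __init__(self):
        self.blocks = []

    def add_document(self, doc_bytes, ts):
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        record = {"doc": hashlib.sha256(doc_bytes).hexdigest(),
                  "ts": ts, "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)
        return record["hash"]

    def verify(self):
        """Recompute every hash; any edited field breaks the chain."""
        prev = "0" * 64
        for b in self.blocks:
            body = {k: b[k] for k in ("doc", "ts", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if b["prev"] != prev or b["hash"] != expected:
                return False
            prev = b["hash"]
        return True

chain = TimestampChain()
chain.add_document(b"zoning plan v1", ts=1567296000)
chain.add_document(b"zoning plan v2", ts=1567382400)
```

On a public blockchain the same commitment is replicated across many nodes, which is what removes monopolistic control over the record; the sketch above only shows why tampering is detectable at all.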
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00016
The Evolution of Bits and Bottlenecks in a Scientific Workflow Trying to Keep Up with Technology: Accelerating 4D Image Segmentation Applied to NASA Data
S. Sellars, John Graham, D. Mishin, Kyle Marcus, I. Altintas, T. DeFanti, L. Smarr, Camille Crittenden, F. Wuerthwein, Joulien Tatar, P. Nguyen, E. Shearer, S. Sorooshian, F. M. Ralph
2019 15th International Conference on eScience (eScience)
In 2016, a team of earth scientists directly engaged a team of computer scientists to identify cyberinfrastructure (CI) approaches that would speed up an earth science workflow. This paper describes the evolution of that workflow as the two teams bridged CI and an image segmentation algorithm to do large-scale earth science research. The Pacific Research Platform (PRP) and the Cognitive Hardware and Software Ecosystem Community Infrastructure (CHASE-CI) resources were used to decrease the earth science workflow's wall-clock time dramatically, from 19.5 days to 53 minutes. The improvement in wall-clock time comes from the use of network appliances, improved image segmentation, deployment of a containerized workflow, and the increase in CI experience and training for the earth scientists. This paper presents a description of the evolving innovations used to improve the workflow, the bottlenecks identified within each workflow version, and the improvements made within each version of the workflow, over a three-year time period.
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00053
The Future of Swedish e-Science: SeRC 2.0
E. Laure, Olivia Eriksson, Erik Lindahl, D. Henningson
2019 15th International Conference on eScience (eScience)
Since 2010, the Swedish e-Science Research Centre (SeRC) has been funding and coordinating e-Science activities in a broad spectrum of scientific disciplines. After an initial 5-year phase that produced outstanding results, SeRC is increasingly focusing on fostering interactions between disciplines and has created so-called Multidisciplinary Collaborative Programs (MCPs). In these programs, domain researchers collaborate with e-Science methods and tool developers and e-Infrastructure providers. In this paper we give an overview of the initial phase of SeRC and present the new programs that started operating in 2019.
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00065
ESiWACE: On European Infrastructure Efforts for Weather and Climate Modeling at Exascale
P. Neumann, J. Biercamp
2019 15th International Conference on eScience (eScience)
Kilometer-scale ensemble simulations are expected to significantly boost and impact weather and climate predictions in the future. However, these simulations will only be enabled by exascale compute power and corresponding data capacity. In the following, we discuss a European e-infrastructure effort, the Centre of Excellence in Simulation of Weather and Climate in Europe (ESiWACE). ESiWACE provides infrastructural means to prepare the weather and climate communities for simulations at the exascale. We give an overview of several ESiWACE infrastructure components and discuss their role in reaching the goal of kilometer-scale ensemble predictions. In particular, we review the outcomes of the ESiWACE demonstrators, that is, community-driven kilometer-scale models that have been developed over the last years.
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00048
AdaptLidarTools: A Full-Waveform Lidar Processing Suite
Ravi Shankar, N. Ilangakoon, A. Orenstein, Floriana Ciaglia, N. Glenn, C. Olschanowsky
2019 15th International Conference on eScience (eScience)
AdaptLidarTools is a software package that processes full-waveform lidar data. Full-waveform lidar is an active remote sensing technique in which a laser beam is emitted towards a target and the backscattered energy is recorded as a near-continuous waveform. A collection of waveforms from airborne lidar can capture landscape characteristics in three dimensions. Specific to vegetation, the extracted echoes and echo properties from the waveforms can provide scientists with structural (height, volume, layers of canopy, among others) and functional (leaf area index, diversity) characteristics. The discrete waveforms can be transformed into georeferenced 2D rasters (images), allowing scientists to correlate field-based observations for validation of the waveform observations and to fuse the data with other geospatial information. AdaptLidarTools provides an extensible, open-source framework that processes the waveforms and produces multiple data outputs that can be used in vegetation and terrain analysis. AdaptLidarTools is designed to explore new methods to fit full-waveform lidar signals and to maximize the information in the waveforms for vegetation applications. The toolkit explores first differencing, complementary to Gaussian fitting, for faster processing of full-waveform lidar signals and for handling increasingly large volumes of full-waveform lidar datasets. AdaptLidarTools takes approximately 30 minutes to derive a raster of a given echo property from a raw waveform file of 1 GB in size. The toolkit generates first-order echo properties such as position, amplitude, and pulse width, and other properties such as rise time, fall time, and backscattered cross section. It also generates properties that current proprietary and open-source tools do not. The derived echo properties are delivered as georeferenced raster files of a given spatial resolution that can be viewed and processed by most remote sensing data processing software.
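The first-differencing approach can be sketched in a few lines: an echo is detected where the first difference of the waveform changes sign from positive to negative while the amplitude clears a noise threshold. A NumPy illustration on a synthetic two-echo waveform (a sketch of the general technique, not AdaptLidarTools' implementation):

```python
import numpy as np

def detect_echoes(waveform, threshold=2.0):
    """Locate echo peaks by first differencing: a peak is a sample where
    the first difference crosses from positive to non-positive and the
    amplitude exceeds a noise threshold."""
    d = np.diff(waveform)
    return [i for i in range(1, len(waveform) - 1)
            if d[i - 1] > 0 and d[i] <= 0 and waveform[i] > threshold]

# Synthetic waveform: a strong canopy return at bin 30 and a weaker
# ground return at bin 70, modeled as Gaussian pulses.
t = np.arange(100, dtype=float)
waveform = (10 * np.exp(-((t - 30) ** 2) / 18)
            + 6 * np.exp(-((t - 70) ** 2) / 8))
echoes = detect_echoes(waveform)
```

Unlike iterative Gaussian fitting, this is a single linear pass over each waveform, which is what makes it attractive for very large full-waveform datasets; fitted pulse models can then be applied only around the detected peaks.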
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00064
The AllScale API
P. Gschwandtner, Herbert Jordan, Peter Thoman, T. Fahringer
2019 15th International Conference on eScience (eScience)
Effectively implementing scientific algorithms in distributed memory parallel applications is a difficult task for domain scientists, as evidenced by the large number of domain-specific languages and libraries available today that attempt to facilitate the process. However, they usually provide a closed set of parallel patterns and are not open for extension without vast modifications to the underlying system. In this work, we present the AllScale API, a programming interface for developing distributed memory parallel applications with the ease of shared memory programming models. The AllScale API is closed for modification but open for extension, allowing new, user-defined parallel patterns and data structures to be implemented on top of existing core primitives and therefore fully supported in the AllScale framework. Focusing on the high-level functionality offered directly to application developers, we present the design advantages of such an API, detail some of its specifications, and evaluate it using three real-world use cases. Our results show that AllScale decreases the complexity of implementing scientific applications for distributed memory while attaining performance comparable to or higher than that of MPI reference implementations.
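The "closed for modification, open for extension" idea can be sketched in Python, even though the AllScale API itself is a C++ interface: given one core parallel-loop primitive, users compose new patterns without touching the primitive. The names below (`pfor`, `pmap`, `pstencil`) are illustrative, not AllScale's actual primitives:

```python
from concurrent.futures import ThreadPoolExecutor

def pfor(n, body, workers=4):
    """Core primitive: run body(i) for i in range(n) in parallel.
    User code extends the system by composing this, never by editing it."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(body, range(n)))  # list() propagates exceptions

def pmap(f, xs):
    """User-defined pattern: parallel map built only on pfor."""
    out = [None] * len(xs)
    def body(i):
        out[i] = f(xs[i])
    pfor(len(xs), body)
    return out

def pstencil(xs):
    """Another user-defined pattern on the same primitive:
    3-point moving average with fixed boundaries."""
    out = list(xs)
    def body(i):
        if 0 < i < len(xs) - 1:
            out[i] = (xs[i - 1] + xs[i] + xs[i + 1]) / 3
    pfor(len(xs), body)
    return out

squares = pmap(lambda x: x * x, [1, 2, 3, 4])
```

Because every pattern bottoms out in the same primitive, the runtime can schedule, distribute, or instrument user-defined patterns exactly as it does the built-in ones, which is the design advantage the paper argues for.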
Pub Date: 2019-09-01 | DOI: 10.1109/eScience.2019.00070
Describing Datasets in Wikidata
Denny Vrandečić
2019 15th International Conference on eScience (eScience)
We propose using Wikidata to provide metadata for datasets when the traditional approach via Schema.org is not feasible. We describe and discuss the proposal, and we believe that the process described in this paper can help increase the findability and accessibility of certain datasets.
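In practice the proposal amounts to mapping the fields of a Schema.org `Dataset` description onto statements on a Wikidata item. A sketch of such a mapping (the property IDs below are given for illustration and should be checked against Wikidata's live property registry before use):

```python
# Illustrative mapping from Schema.org Dataset fields to Wikidata-style
# property IDs; verify each ID against Wikidata before relying on it.
SCHEMA_TO_WIKIDATA = {
    "name": "P1476",     # title
    "url": "P856",       # official website
    "license": "P275",   # copyright license
}

def dataset_to_statements(schema_org):
    """Turn a Schema.org Dataset description (a JSON-LD dict) into a list
    of (property, value) statements for a Wikidata item."""
    assert schema_org.get("@type") == "Dataset"
    return [(pid, schema_org[key])
            for key, pid in SCHEMA_TO_WIKIDATA.items()
            if key in schema_org]

doc = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example Climate Observations",   # hypothetical dataset
    "url": "https://example.org/dataset",
}
statements = dataset_to_statements(doc)
```

Once the statements live on a Wikidata item, they are queryable via SPARQL and visible to any consumer of Wikidata, which is the route to findability when the dataset's own pages cannot carry Schema.org markup.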