Information Flow Control for Strong Protection with Flexible Sharing in PaaS
Thomas Pasquier, Jatinder Singh, J. Bacon
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.64

Abstract: The need to share data across applications is becoming increasingly evident. Current cloud isolation mechanisms focus solely on protection: containers isolate at the OS level, and virtual machines isolate through the hypervisor. By focusing rigidly on protection, however, these approaches do not provide for controlled sharing. This paper presents how Information Flow Control (IFC) offers a flexible alternative. As a data-centric mechanism, IFC enables strong isolation when required while providing continuous, fine-grained control over the data being shared. An IFC-enabled cloud platform would ensure that policies are enforced as data flows across all applications, without requiring any special sharing mechanisms.
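The label-based flow rule that typically underpins IFC systems like the one described above can be sketched as follows. This is a minimal illustration of the general secrecy/integrity lattice check, not the paper's actual implementation; all tag names and the `can_flow_to` helper are ours.

```python
# Minimal sketch of label-based Information Flow Control (IFC).
# A flow src -> dst is safe iff dst's secrecy labels dominate src's
# (no leak) and src's integrity labels dominate dst's (no taint).
# All names and tags here are illustrative, not from the paper.

def can_flow_to(src_secrecy, src_integrity, dst_secrecy, dst_integrity):
    """Return True if data may flow from src to dst under IFC rules."""
    return src_secrecy <= dst_secrecy and dst_integrity <= src_integrity

# Data tagged for patient-a may flow to an analytics service that
# also carries the 'patient-a' secrecy tag and requires no integrity.
src = ({"patient-a"}, {"hospital"})
dst = ({"patient-a", "research"}, set())
print(can_flow_to(src[0], src[1], dst[0], dst[1]))  # True
```

The same check run in the reverse direction fails, which is what lets a platform permit sharing per-tag instead of isolating whole applications.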
I/O Performance Modeling for Big Data Applications over Cloud Infrastructures
Ioannis Mytilinis, Dimitrios Tsoumakos, Verena Kantere, Anastassios Nanos, N. Koziris
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.29

Abstract: Big Data applications receive ever-increasing attention and have become a dominant class of applications deployed over virtualized environments. Cloud environments entail considerable complexity with respect to I/O performance, and the use of Big Data compounds the difficulty of managing, characterizing, and predicting I/O: as I/O operations become increasingly dominant in such applications, the intricacies of virtualization, different storage back ends, and deployment setups significantly hinder our ability to analyze and correctly predict I/O performance. To that end, this work proposes an end-to-end modeling technique to predict the performance of I/O-intensive Big Data applications running over cloud infrastructures. We develop a model tuned over application and infrastructure dimensions: primitive I/O operations, data access patterns, storage back ends, and deployment parameters. The trained model can be used to predict both I/O and general task performance. Our evaluation shows that for jobs dominated by I/O operations, such as I/O-bound MapReduce jobs, the model predicts execution time with an accuracy close to 90%, which decreases as application processing becomes more complex.
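The core idea of training a model over I/O features and using it to predict runtime can be sketched as a simple regression. The feature set (bytes read/written, seek count) and all numbers below are hypothetical stand-ins; the paper's actual model dimensions are richer.

```python
import numpy as np

# Toy sketch of I/O-based performance modeling: fit execution time as
# a linear function of I/O-level features, then predict unseen jobs.
# Features and training data are illustrative, not from the paper.
X = np.array([
    [10.0,  2.0, 100.0],   # GB read, GB written, seeks (x1000)
    [20.0,  4.0, 180.0],
    [40.0,  8.0, 400.0],
    [80.0, 16.0, 850.0],
])
y = np.array([55.0, 105.0, 210.0, 430.0])  # observed runtimes (s)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(feats):
    """Predict runtime (s) for a job described by its I/O features."""
    return float(np.asarray(feats) @ coef)

print(round(predict([30.0, 6.0, 300.0]), 1))  # runtime estimate for an unseen job
```

For I/O-bound jobs a fit like this tracks runtime closely, which mirrors the paper's observation that accuracy is highest when I/O dominates and degrades as compute complexity grows.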
FIDDLE: Federated Infrastructure Discovery and Description Language
A. Willner, R. Loughnane, T. Magedanz
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.77

Abstract: Considerable effort has been spent on designing architectures to manage heterogeneous resources across multiple administrative domains. Specific fields of application include federated cloud computing (Intercloud) approaches and distributed testbeds, among others. An important interoperability challenge that arises in this context is the exchange of information about the provided resources and their dependencies. Existing work usually rests upon schematic data models, which impede the discovery and management of heterogeneous resources between autonomous sites. One way of addressing this issue is to exchange semantic information models. In this paper, we exploit such approaches to formally define federations, including their infrastructures and the life cycle of the offered resources and services. The requirements of this work have been derived from several research projects, and the results are in the process of being standardized by an international body. The main contribution of this work is a higher-level (upper) ontology and initial integration concepts for it. These contributions form a basis for further work in the general context of distributed semantic resource management.
EAGER: Deployment-Time API Governance for Modern PaaS Clouds
Hiranya Jayathilaka, C. Krintz, R. Wolski
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.69

Abstract: To track, control, and compel reuse of web APIs, we investigate a new approach to API governance: combined policy, implementation, and deployment control of web APIs. Our approach, called EAGER, provides a software architecture that integrates into PaaS platforms to support system-wide, deployment-time enforcement of governance policies. Specifically, EAGER checks for and prevents backward-incompatible API changes from being deployed into production PaaS clouds, enforces service reuse, and facilitates enforcement of other best practices in software maintenance via policies. Our experiments with an EAGER prototype show that enforcing API governance at deployment time in PaaS clouds is efficient and scales to thousands of APIs and policies.
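A deployment-time backward-compatibility check in the spirit of EAGER can be sketched as a diff over API descriptions: a new version may add endpoints and parameters but must not remove any. The dictionary-based API format and function name below are our assumptions, not EAGER's actual specification language.

```python
# Sketch of a deployment-time backward-compatibility gate, in the
# spirit of EAGER. The API-description format (path -> parameter
# list) is an illustrative assumption, not the paper's format.

def is_backward_compatible(old_api, new_api):
    """Reject deployments that drop endpoints or previously accepted parameters."""
    for path, old_params in old_api.items():
        if path not in new_api:
            return False, f"endpoint removed: {path}"
        missing = set(old_params) - set(new_api[path])
        if missing:
            return False, f"{path}: parameters removed: {sorted(missing)}"
    return True, "ok"

v1 = {"/users": ["id"], "/orders": ["id", "status"]}
v2 = {"/users": ["id", "name"], "/orders": ["id", "status"]}  # additive: fine
v3 = {"/users": ["id"]}                                       # drops /orders: blocked

print(is_backward_compatible(v1, v2))     # (True, 'ok')
print(is_backward_compatible(v1, v3)[0])  # False
```

Running a check like this in the PaaS deployment pipeline, rather than at runtime, is what lets incompatible changes be stopped before any client can observe them.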
Efficient Prototyping of Fault Tolerant Map-Reduce Applications with Docker-Hadoop
J. Rey, M. Cogorno, Sergio Nesmachnow, L. Steffenel
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.73

Abstract: Prototyping and testing distributed systems is considered a hard task because it is not always possible to reproduce a given sequence of events. While simulations may help with this task, they cannot replace testing and validation on real systems. In this paper we present Docker-Hadoop, a container-based virtualization platform designed to prototype, test, and deploy MapReduce applications and systems. This tool allowed us to test and reproduce fault-tolerance scenarios that are especially interesting in the context of the PER-MARE project, which aims at adapting the Hadoop framework to pervasive systems. Indeed, we developed a fault-tolerant component that circumvents the limitations of the original Hadoop and prevents job scheduling from stalling in the case of failures or network disconnections. Thanks to Docker-Hadoop, we could easily prototype and test our improved Hadoop; the first scalability and speedup results are presented in this paper.
A Bird's-Eye View on Modelling Malleable Multi-cloud Applications
Mohammad Hamdaqa
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.94

Abstract: Advances in cloud platforms have changed the application development landscape. Cloud platforms abstract the complexity of application delivery to enable rapid development and easy management. This changes the way development teams need to think about, and deal with, the underlying resources while building and managing their applications. This research describes a new methodology, supported by a modeling framework, that enables organizations that build cloud applications (e.g., SaaS providers) to exploit cloud platform building blocks without bias toward a particular provider, leveraging the flexibility, reliability, and scalability that these platforms provide to the application layer.
Software-Defined Flow Table Pipeline
Xiaoye Sun, T. Ng, Guohui Wang
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.52

Abstract: Software-Defined Networking (SDN) is revolutionizing data center networks for cloud computing with its ability to enable network virtualization and powerful network resource management, which are crucial in any multi-tenant environment. In order to support sophisticated network control logic, the data plane of a switch should have a flexible Flow Table Pipeline (FTP). However, the FTP on state-of-the-art SDN switches is hardware-defined, which greatly limits the advantages of using FTP in cloud computing systems. This paper removes this limitation by introducing the software-defined FTP (SDFTP), which provides an extremely flexible FTP as the southbound interface of the SDN control plane. SDFTP offers an arbitrary number of pipeline stages and adaptive flow table sizing at runtime by building Software-Defined Flow Tables (SDFTs). Our analysis shows that SDFTP could create 138 times more adaptively sized pipeline stages than the hardware-defined data plane while maintaining comparable performance.
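The flexibility SDFTP argues for — pipeline stages and table sizes chosen in software at runtime — can be illustrated with a toy match/action pipeline. The class names, match fields, and actions below are our own illustration, not SDFTP's design.

```python
# Toy sketch of a software-defined flow table pipeline: each stage
# maps a match-field value to an action, and a packet traverses the
# stages in order. Stage count and table size are set at runtime,
# which is the flexibility hardware-defined pipelines lack.
# All names here are illustrative, not from the paper.

class FlowTable:
    def __init__(self, key_field):
        self.key_field = key_field
        self.rules = {}                      # match value -> action callable

    def install(self, value, action):
        self.rules[value] = action

    def process(self, packet):
        action = self.rules.get(packet.get(self.key_field))
        if action:
            action(packet)

# Stages can be added or resized at runtime, unlike a hardware FTP.
pipeline = [FlowTable("dst_ip"), FlowTable("tcp_port")]
pipeline[0].install("10.0.0.2", lambda p: p.update(out_port=3))
pipeline[1].install(80, lambda p: p.update(queue="web"))

pkt = {"dst_ip": "10.0.0.2", "tcp_port": 80}
for stage in pipeline:
    stage.process(pkt)
print(pkt["out_port"], pkt["queue"])  # 3 web
```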
Cloud Storage Infrastructure Optimization Analytics
R. Routray
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.83

Abstract: The adoption of cloud computing has become widely prevalent, given the value proposition it brings to an enterprise in terms of agility and cost effectiveness. Big data analytical capabilities (specifically, treating storage/systems management as a big data problem for a service provider) delivered through cloud models are defined as Analytics as a Service or Software as a Service. This service simplifies obtaining useful insights from an operational enterprise data center, leading to cost and performance optimizations. Software-defined environments decouple the control plane from the data plane, which were often vertically integrated in traditional networking and storage systems. This decoupling enables opportunities for improved security, resiliency, and IT optimization in general. This talk describes our novel approach of hosting the systems management platform (a.k.a. the control plane) in the cloud, offered to enterprises in a Software as a Service (SaaS) model. Specifically, this presentation focuses on the analytics layer, with the SaaS paradigm enabling data centers to visualize, optimize, and forecast infrastructure via a simple capture-analyze-govern framework. At its core, it uses big data analytics to extract actionable insights from systems management metrics. Our system was developed in research and is deployed across customers, with a core focus on the agility, elasticity, and scalability of the analytics framework. We present a few system/storage management analytics case studies demonstrating cost and performance optimization for both the cloud consumer and the service provider. Actionable insights generated by the analytics platform are implemented in an automated fashion via an OpenStack-based platform.
Automating Cloud Service Level Agreements Using Semantic Technologies
K. Joshi, C. Pearce
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.63

Abstract: Cloud-related legal documents, such as terms of service or customer agreements, are usually managed as plain text files. Hence, extensive manual effort is required to monitor cloud service performance by cross-referencing the metrics and measures agreed upon in these documents. We have significantly automated the process of managing and monitoring cloud Service Level Agreements (SLAs) using semantic web technologies such as OWL, RDF, and SPARQL. In this paper, we describe in detail the cloud SLA ontology and the prototype that we have developed to illustrate how SLA measures can be automatically extracted from the legal Terms of Service available on cloud provider websites.
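The extraction step — pulling machine-readable SLA measures out of terms-of-service prose before populating an ontology — can be sketched with a simple pattern match. The sample text, patterns, and measure names below are our own toy illustration; the paper uses an OWL/RDF ontology queried with SPARQL rather than bare regexes.

```python
import re

# Toy sketch of extracting SLA measures from terms-of-service prose.
# Sample text and patterns are illustrative; the paper's prototype
# maps extracted measures into an OWL/RDF ontology queried via SPARQL.
TOS = ("The Service will be available 99.95% of the time in any "
       "monthly billing cycle. Service credits equal 10% of the bill.")

availability = re.search(r"available\s+(\d+(?:\.\d+)?)%", TOS)
credit = re.search(r"credits?\s+equal\s+(\d+)%", TOS)

measures = {
    "availability_pct": float(availability.group(1)),
    "service_credit_pct": float(credit.group(1)),
}
print(measures)  # {'availability_pct': 99.95, 'service_credit_pct': 10.0}
```

Once measures are in structured form like this, monitoring reduces to comparing observed metrics against them, which is the manual cross-referencing step the paper automates.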
Harp: Collective Communication on Hadoop
Bingjing Zhang, Yang Ruan, J. Qiu
2015 IEEE International Conference on Cloud Engineering, 2015-03-09. DOI: 10.1109/IC2E.2015.35

Abstract: Big data processing tools have evolved rapidly in recent years. MapReduce has proven very successful but is not optimized for many important analytics, especially those involving iteration. In this regard, iterative MapReduce frameworks improve the performance of MapReduce job chains through caching. Further, Pregel, Giraph, and GraphLab abstract data as a graph and process it in iterations. However, all these tools are designed around a fixed data abstraction and have limited collective communication support for synchronizing application data and algorithm control state among parallel processes. In this paper, we introduce a collective communication abstraction layer that provides efficient collective communication operations on several common data abstractions, such as arrays, key-values, and graphs, and define a Map-Collective programming model that serves the diverse collective communication demands of different parallel algorithms. We implement a library called Harp that provides these features and plug it into Hadoop, so that applications written in the Map-Collective model can be easily developed on top of the MapReduce framework and conveniently integrated with other tools in the Apache Big Data Stack. With the improved expressiveness of the abstraction and the performance of the implementation, we can support a range of applications, from HPC to cloud systems, with high performance.
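The kind of collective operation a Map-Collective layer provides can be illustrated with "allreduce" over per-worker arrays: every worker contributes a partition and every worker receives the combined result. This serial simulation is only a sketch of the semantics; Harp itself implements such collectives over Hadoop workers, not in-process lists.

```python
# Sketch of the semantics of an "allreduce" collective over arrays,
# the kind of operation a Map-Collective layer like Harp exposes.
# This serial simulation is illustrative only.

def allreduce(partitions):
    """Element-wise sum across all workers; every worker gets the full result."""
    total = [sum(vals) for vals in zip(*partitions)]
    return [list(total) for _ in partitions]  # one synchronized copy per worker

# Three workers each hold a local gradient/partial-result array.
workers = [[1, 2, 3], [10, 20, 30], [100, 200, 300]]
synced = allreduce(workers)
print(synced[0])  # [111, 222, 333] - identical on every worker
```

Iterative algorithms (e.g., k-means or PageRank) invoke a collective like this once per iteration to synchronize state, which is exactly the step plain MapReduce forces through a full shuffle-and-restart cycle.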