End-User Driven Technology Benchmarks Based on Market-Risk Workloads
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.141 | Pages: 1171-1175
P. Lankford, L. Ericson, Andrey Nikolaev
Market risk management is a critical, resource-intensive task for financial trading firms. The industry relies heavily on innovation in technical infrastructure to increase the quality and quantity of risk management information and to reduce the cost of its production. However, until recently, the industry has lacked an independent standard for gauging the potential of new technologies to help. This changed when the STAC Benchmark™ Council developed STAC-A2™, a vendor-independent benchmark suite based on real-world market risk analysis workloads. It was specified by trading firms and made actionable by leading HPC vendors. Unlike vendor-developed benchmarks known to the authors, STAC-A2 satisfies all of the requirements important to end-user firms: relevance, neutrality, scalability, and completeness. Intel has demonstrated the utility of STAC-A2 for comparing successive generations of Intel® Xeon® processors.
{"title":"End-User Driven Technology Benchmarks Based on Market-Risk Workloads","authors":"P. Lankford, L. Ericson, Andrey Nikolaev","doi":"10.1109/SC.Companion.2012.141","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.141","url":null,"abstract":"Market risk management is a critical, resourceintensive task for financial trading firms. The industry relies heavily on innovation in technical infrastructure to increase the quality and quantity of risk management information and to reduce the cost of its production. However, until recently, the industry has lacked an independent standard for gauging the potential of new technologies to help. This changed when the STAC BenchmarkTM Council developed STAC-A2TM, a vendorindependent benchmark suite based on real-world market risk analysis workloads. It was specified by trading firms and made actionable by leading HPC vendors. Unlike vendor-developed benchmarks known to the authors, STAC-A2 satisfies all of the requirements important to end-user firms: relevance, neutrality, scalability, and completeness. Intel has demonstrated the utility of STAC-A2 for comparing successive generations of Intel® Xeon® processors.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"47 1","pages":"1171-1175"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91206941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Energy Efficient Data Intensive Computing Using IEEE 802.3az
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.112 | Pages: 806-810
Dimitar Pavlov, Joris Soeurt, P. Grosso, Zhiming Zhao, K. V. D. Veldt, Hao Zhu, C. D. Laat
Energy efficiency is an increasingly important requirement for computing and communication systems, especially as they become more pervasive. The IEEE 802.3az protocol reduces network energy consumption by switching active copper Ethernet links to a low-power mode when no traffic is present. However, the effect of 802.3az depends heavily on network traffic patterns, which makes system-level energy optimization challenging. In clusters, distributed data-intensive applications that generate heavy network traffic are common, and the required network devices can in turn consume large amounts of energy. In this research, we examined 802.3az technology with the goal of applying it in clusters. We defined an energy budget calculator that takes Energy-Efficient Ethernet into account by incorporating energy models derived from tests of 802.3az-enabled devices. The calculator is an integral tool in a global strategy to optimize the energy usage of applications in a high-performance computing environment. We show a few practical examples of how real applications can better plan their execution by integrating this knowledge into their decision strategies.
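A minimal sketch of the kind of energy-budget arithmetic such a calculator performs, assuming a simple two-state (active vs. Low Power Idle) link model; the power figures and the `link_energy` helper are illustrative placeholders, not the energy models measured in the paper:

```python
def link_energy(t_total, t_active, n_wakeups,
                p_active=0.5, p_lpi=0.1, e_wake=40e-6):
    """Energy in joules consumed by one 802.3az link over t_total seconds.

    t_active  -- seconds spent actively transmitting
    n_wakeups -- number of LPI-to-active transitions
    p_active, p_lpi, e_wake -- illustrative power/energy figures, not measurements
    """
    t_idle = t_total - t_active
    return t_active * p_active + t_idle * p_lpi + n_wakeups * e_wake

# Bursty traffic keeps the link in LPI for most of the hour...
print(link_energy(t_total=3600, t_active=60, n_wakeups=1000))
# ...whereas the same bytes trickled out slowly keep it awake throughout.
print(link_energy(t_total=3600, t_active=3600, n_wakeups=1))
```

The gap between the two results is exactly the kind of information an application can fold into its execution-planning decisions.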
{"title":"Towards Energy Efficient Data Intensive Computing Using IEEE 802.3az","authors":"Dimitar Pavlov, Joris Soeurt, P. Grosso, Zhiming Zhao, K. V. D. Veldt, Hao Zhu, C. D. Laat","doi":"10.1109/SC.Companion.2012.112","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.112","url":null,"abstract":"Energy efficiency is an increasingly important requirement for computing and communication systems, especially with their increasing pervasiveness. The IEEE 802.3az protocol reduces the network energy consumption by turning active copper Ethernet links to a low power model when no traffic exists. However, the effect of 802.3az heavily depends on the network traffic patterns which makes system level energy optimization challenging. In clusters, distributed data intensive applications that generate heavy network traffic are common, and in turn the required network devices can consume large amounts of energy. In this research, we examined the 802.3az technology with the goal of applying it in clusters. We defined an energy budget calculator that takes energy-efficient Ethernet into account by including the energy models derived from tests of 802.3az enabled devices. The calculator is an integral tool in a global strategy to optimize the energy usage of applications in a high performance computing environment. We show a few practical examples of how real applications can better plan their execution by integrating this knowledge in their decision strategies.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"1 1","pages":"806-810"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86001164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: Towards Highly Accurate Large-Scale Ab Initio Calculations Using Fragment Molecular Orbital Method in GAMESS
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.170 | Pages: 1335
Maricris L. Mayes, G. Fletcher, M. Gordon
Summary form only given. One of the major challenges of modern quantum chemistry (QC) is to apply it to large systems with thousands of correlated electrons and basis functions. Meeting this challenge requires both the availability of supercomputers and the development of novel methods. In particular, we employ the linear-scaling Fragment Molecular Orbital (FMO) method, which decomposes a large system into smaller, localized fragments that can each be treated with a high-level QC method such as MP2. FMO is inherently scalable, since the individual fragment calculations can be carried out simultaneously on separate processor groups. It is implemented in GAMESS, a popular ab initio QC program. We present the scalability and performance of FMO on the Intrepid (Blue Gene/P) and Blue Gene/Q systems at the ALCF.
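The parallel structure of FMO is easiest to see in the two-body (FMO2) energy expression, E = Σ_I E_I + Σ_{I<J} (E_IJ − E_I − E_J): every monomer and dimer calculation is independent. A toy sketch of that task farm, with `mp2_energy` as a stand-in for a real fragment-level QC calculation (the dummy energy is invented for illustration):

```python
from itertools import combinations
from multiprocessing import Pool

def mp2_energy(fragment):
    # Stand-in for a real MP2 calculation on a fragment or fragment pair.
    return -25.4 * len(fragment)          # dummy value, not a real energy

def fmo2_energy(fragments):
    """FMO2 total: monomer energies plus pairwise dimer corrections."""
    pairs = list(combinations(range(len(fragments)), 2))
    dimers = [fragments[i] + fragments[j] for i, j in pairs]
    with Pool() as pool:                   # independent tasks run in parallel
        e_mono = pool.map(mp2_energy, fragments)
        e_dim = pool.map(mp2_energy, dimers)
    return sum(e_mono) + sum(e_dim[k] - e_mono[i] - e_mono[j]
                             for k, (i, j) in enumerate(pairs))

if __name__ == "__main__":
    water_trimer = [("O", "H", "H")] * 3   # three identical fragments
    print(fmo2_energy(water_trimer))
```

In GAMESS the same pattern is realized with groups of processors rather than local worker processes, but the independence of the fragment tasks is what makes the method scale.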
{"title":"Abstract: Towards Highly Accurate Large-Scale Ab Initio Calculations Using Fragment Molecular Orbital Method in GAMESS","authors":"Maricris L. Mayes, G. Fletcher, M. Gordon","doi":"10.1109/SC.Companion.2012.170","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.170","url":null,"abstract":"Summary form only given. One of the major challenges of modern quantum chemistry (QC) is to apply it to large systems with thousands of correlated electrons and basis functions. The availability of supercomputers and development of novel methods are necessary to realize this challenge. In particular, we employ linear scaling Fragment Molecular Orbital (FMO) method which decompose the large system into smaller, localized fragments which can be treated with high-level QC method like MP2. FMO is inherently scalable since the individual fragment calculations can be carried out simultaneously on separate processor groups. It is implemented in GAMESS, a popular ab-initio QC program. We present the scalability and performance of FMO on Intrepid (Blue Gene/P) and Blue Gene/Q systems at ALCF.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"109 1","pages":"1335-1335"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86007733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Load Balanced Parallel GPU Out-of-Core for Continuous LOD Model Visualization
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.37 | Pages: 215-223
Chao Peng, Peng Mi, Yong Cao
Rendering massive 3D models has long been recognized as a challenging task. Because of the limited size of GPU memory, a massive model with hundreds of millions of primitives cannot fit into most modern GPUs. By applying parallel Level-of-Detail (LOD), as proposed in [1], transferring only a portion of the primitives rather than the whole model to the GPU is sufficient to generate a desired simplified version of the model. However, the low bandwidth of CPU-GPU communication makes data transfer a very time-consuming process that prevents users from achieving high-performance rendering of massive 3D models on a single-GPU system. This paper explores a device-level parallel design that distributes the workload across a multi-GPU, multi-display system. Our multi-GPU out-of-core approach uses a load-balancing method and integrates seamlessly with the parallel LOD algorithm. Our experiments show highly interactive frame rates on the “Boeing 777” airplane model, which consists of over 332 million triangles and over 223 million vertices.
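A hedged sketch of the load-balancing step: partition the LOD-selected primitives into contiguous ranges sized by each GPU's relative throughput, so all devices finish their share at roughly the same time. The weights are invented for illustration; this is the general idea, not the paper's actual scheme:

```python
def partition_by_weight(n_primitives, weights):
    """Split [0, n_primitives) into contiguous ranges proportional to weights."""
    total, bounds, start, cum = sum(weights), [], 0, 0.0
    for w in weights:
        cum += w
        end = round(n_primitives * cum / total)  # cumulative rounding: no gaps
        bounds.append((start, end))
        start = end
    return bounds

# Four GPUs; the last one also drives a display, so it gets a lighter share.
for gpu, (lo, hi) in enumerate(partition_by_weight(332_000_000,
                                                   [1.0, 1.0, 1.0, 0.7])):
    print(f"GPU {gpu}: primitives [{lo}, {hi})  ({hi - lo} triangles)")
```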
{"title":"Load Balanced Parallel GPU Out-of-Core for Continuous LOD Model Visualization","authors":"Chao Peng, Peng Mi, Yong Cao","doi":"10.1109/SC.Companion.2012.37","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.37","url":null,"abstract":"Rendering massive 3D models has been recognized as a challenging task. Due to the limited size of GPU memory, a massive model with hundreds of millions of primitives cannot fit into most of modern GPUs. By applying parallel Level-Of-Detail (LOD), as proposed in [1], transferring only a portion of primitives rather than the whole to the GPU is sufficient for generating a desired simplified version of the model. However, the low bandwidth in CPU-GPU communication make data-transferring a very time-consuming process that prevents users from achieving high-performance rendering of massive 3D models on a single-GPU system. This paper explores a device-level parallel design that distributes the workloads in a multi-GPU multi-display system. Our multi-GPU out-of-core uses a load-balancing method and seamlessly integrates with the parallel LOD algorithm. Our experiments show highly interactive frame rates of the “Boeing 777” airplane model that consists of over 332 million triangles and over 223 million vertices.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"34 1","pages":"215-223"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81349994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Portals 4 Network Programming Interface
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.264 | Pages: 1467
Brian W. Barrett, R. Brightwell, K. Underwood, K. Hemmert
Portals 4 is an advanced network programming interface that allows for the development of a rich set of upper-layer protocols. Through careful selection of interfaces and strong progress guarantees, Portals 4 is able to support multiple protocols without significant overhead. Recent developments with Portals 4, including the development of MPI, SHMEM, and GASNet protocols, are discussed.
{"title":"Poster: Portals 4 Network Programming Interface","authors":"Brian W. Barrett, R. Brightwell, K. Underwood, K. Hemmert","doi":"10.1109/SC.Companion.2012.264","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.264","url":null,"abstract":"Portals 4 is an advanced network programming interface which allows for the development of a rich set of upper layer protocols. By careful selection of interfaces and strong progress guarantees, Portals 4 is able to support multiple protocols without significant overhead. Recent developments with Portals 4, including development of MPI, SHMEM, and GASNet protocols are discussed.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"19 1","pages":"1467-1467"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81830428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Community Accessible Datastore of High-Throughput Calculations: Experiences from the Materials Project
Pub Date: 2012-11-10 | DOI: 10.1109/SC.COMPANION.2012.150 | Pages: 1244-1251
D. Gunter, S. Cholia, Anubhav Jain, M. Kocher, K. Persson, L. Ramakrishnan, S. Ong, G. Ceder
Efforts such as the Human Genome Project provided a dramatic example of opening scientific datasets to the community. Making high-quality scientific data accessible through an online database allows scientists around the world to multiply the value of that data through scientific innovations. Similarly, the goal of the Materials Project is to calculate the physical properties of all known inorganic materials and make this data freely available, with the aim of accelerating the invention of better materials. However, the complexity of scientific data, and of the simulations needed to generate and analyze it, poses challenges to the current software ecosystem. In this paper, we describe the approach we used in the Materials Project to overcome these challenges and to create and disseminate a high-quality database of materials properties computed by solving the basic laws of physics. Our infrastructure requires a novel combination of high-throughput approaches with broadly applicable and scalable approaches to data storage and dissemination.
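A minimal sketch of the document-store pattern such an infrastructure builds on; the schema below is invented for illustration and assumes a local MongoDB instance reachable through `pymongo`, not the Materials Project's production layout:

```python
from pymongo import MongoClient

client = MongoClient("localhost", 27017)
tasks = client.materials_db.tasks            # one document per calculation

# Ingest side: store the output of one high-throughput calculation.
tasks.insert_one({
    "material_id": "demo-1",                 # hypothetical identifier
    "formula": "Fe2O3",
    "band_gap_eV": 2.0,
    "formation_energy_eV_per_atom": -1.9,
    "calculation": {"code": "VASP", "functional": "GGA"},
})

# Dissemination side: serve a community property query.
print(tasks.find_one({"formula": "Fe2O3"}, {"_id": 0, "band_gap_eV": 1}))
```

A schemaless store like this absorbs heterogeneous calculation outputs without migrations, which is one reason document databases suit high-throughput pipelines.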
{"title":"Community Accessible Datastore of High-Throughput Calculations: Experiences from the Materials Project","authors":"D. Gunter, S. Cholia, Anubhav Jain, M. Kocher, K. Persson, L. Ramakrishnan, S. Ong, G. Ceder","doi":"10.1109/SC.COMPANION.2012.150","DOIUrl":"https://doi.org/10.1109/SC.COMPANION.2012.150","url":null,"abstract":"Efforts such as the Human Genome Project provided a dramatic example of opening scientific datasets to the community. Making high quality scientific data accessible through an online database allows scientists around the world to multiply the value of that data through scientific innovations. Similarly, the goal of the Materials Project is to calculate physical properties of all known inorganic materials and make this data freely available, with the goal of accelerating to invention of better materials. However, the complexity of scientific data, and the complexity of the simulations needed to generate and analyze it, pose challenges to current software ecosystem. In this paper, we describe the approach we used in the Materials Project to overcome these challenges and create and disseminate a high quality database of materials properties computed by solving the basic laws of physics. Our infrastructure requires a novel combination of highthroughput approaches with broadly applicable and scalable approaches to data storage and dissemination.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"23 1","pages":"1244-1251"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90792496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The PEPPHER Composition Tool: Performance-Aware Dynamic Composition of Applications for GPU-Based Systems
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.97 | Pages: 711-720
Usman Dastgeer, Lu Li, C. Kessler
The PEPPHER component model defines an environment for the annotation of native C/C++ components for homogeneous and heterogeneous multicore and manycore systems, including GPU and multi-GPU based systems. For the same computational functionality, captured as a component, different sequential and explicitly parallel implementation variants using various types of execution units may be provided, together with metadata such as explicitly exposed tunable parameters. The goal is to compose an application from its components and variants such that, depending on the run-time context, the most suitable implementation variant is chosen automatically for each invocation. We describe and evaluate the PEPPHER composition tool, which explores the application's components and their implementation variants, generates the necessary low-level code that interacts with the runtime system, and coordinates the native compilation and linking of the various code units to compose the overall application code. With several applications, we demonstrate how the composition tool provides a high-level programming front-end while effectively utilizing the task-based PEPPHER runtime system (StarPU) underneath.
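To make the composition idea concrete, here is a toy dispatcher in the spirit of, though far simpler than, the PEPPHER tool: a component registers several implementation variants with metadata, and each invocation picks the best variant the run-time context allows. All names are invented for illustration:

```python
def dot_cpu(a, b):
    return sum(x * y for x, y in zip(a, b))

def dot_gpu(a, b):
    # Stand-in: a real variant would launch a GPU kernel here.
    return sum(x * y for x, y in zip(a, b))

VARIANTS = {
    "dot": [
        {"impl": dot_cpu, "needs_gpu": False, "min_size": 0},
        {"impl": dot_gpu, "needs_gpu": True,  "min_size": 100_000},
    ],
}

def invoke(component, a, b, context):
    """Pick the most specific variant whose requirements the context meets."""
    eligible = [v for v in VARIANTS[component]
                if (context["gpu"] or not v["needs_gpu"])
                and len(a) >= v["min_size"]]
    best = max(eligible, key=lambda v: v["min_size"])
    return best["impl"](a, b)

a = b = list(range(200_000))
print(invoke("dot", a, b, context={"gpu": True}))    # selects dot_gpu
print(invoke("dot", a, b, context={"gpu": False}))   # falls back to dot_cpu
```

The real tool generates this glue at compile time and delegates the scheduling decision to StarPU, but the selection problem is the same.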
{"title":"The PEPPHER Composition Tool: Performance-Aware Dynamic Composition of Applications for GPU-Based Systems","authors":"Usman Dastgeer, Lu Li, C. Kessler","doi":"10.1109/SC.Companion.2012.97","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.97","url":null,"abstract":"The PEPPHER component model defines an environment for annotation of native C/C++ based components for homogeneous and heterogeneous multicore and manycore systems, including GPU and multi-GPU based systems. For the same computational functionality, captured as a component, different sequential and explicitly parallel implementation variants using various types of execution units might be provided, together with metadata such as explicitly exposed tunable parameters. The goal is to compose an application from its components and variants such that, depending on the run-time context, the most suitable implementation variant will be chosen automatically for each invocation. We describe and evaluate the PEPPHER composition tool, which explores the application's components and their implementation variants, generates the necessary low-level code that interacts with the runtime system, and coordinates the native compilation and linking of the various code units to compose the overall application code. With several applications, we demonstrate how the composition tool provides a high-level programming front-end while effectively utilizing the task-based PEPPHER runtime system (StarPU) underneath.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"2 1","pages":"711-720"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84090326","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Improving the Communication Performance of CRESTA's Co-Design Application NEK5000
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.92 | Pages: 669-674
Michael Schliephake, E. Laure
In order to achieve exascale performance, all aspects of applications and system software need to be analysed and potentially improved. The EU FP7 project “Collaborative Research into Exascale Systemware, Tools & Applications” (CRESTA) uses co-design of advanced simulation applications, system software, and related development tools as a key element of its approach towards exascale. In this paper we present first results of a co-design activity using the highly scalable application NEK5000. We have analysed the communication structure of NEK5000 and propose new, optimised collective communication operations that will make it possible to improve the performance of NEK5000 and to prepare it for use on the millions of cores available in future HPC systems. The latency-optimised communication operations can also be beneficial in other contexts; for instance, we expect them to become an important building block for a runtime system providing dynamic load balancing, also under development within CRESTA.
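The paper's optimised operations are not reproduced here, but the latency argument behind them is easy to illustrate: a recursive-doubling allreduce finishes in log2(P) message steps rather than the P−1 steps of a naive ring. A sketch using `mpi4py`, assuming the number of ranks is a power of two:

```python
from mpi4py import MPI

def recursive_doubling_allreduce(comm, value):
    """Sum-allreduce in log2(P) sendrecv steps; assumes P is a power of two."""
    rank, size = comm.Get_rank(), comm.Get_size()
    dist = 1
    while dist < size:
        partner = rank ^ dist          # exchange with the bit-flipped peer
        value += comm.sendrecv(value, dest=partner, source=partner)
        dist <<= 1
    return value

comm = MPI.COMM_WORLD
total = recursive_doubling_allreduce(comm, comm.Get_rank())
print(f"rank {comm.Get_rank()}: sum of all ranks = {total}")
```

Run with, e.g., `mpiexec -n 8 python allreduce.py`; every rank prints the same total after three exchange steps.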
{"title":"Towards Improving the Communication Performance of CRESTA's Co-Design Application NEK5000","authors":"Michael Schliephake, E. Laure","doi":"10.1109/SC.Companion.2012.92","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.92","url":null,"abstract":"In order to achieve exascale performance, all aspects of applications and system software need to be analysed and potentially improved. The EU FP7 project “Collaborative Research into Exascale Systemware, Tools & Applications” (CRESTA) uses co-design of advanced simulation applications and system software as well as related development tools as a key element in its approach towards exascale. In this paper we present first results of a co-design activity using the highly scalable application NEK5000. We have analysed the communication structure of NEK5000 and propose new, optimised collective communication operations that will allow to improve the performance of NEK5000 and to prepare it for the use on several millions of cores available in future HPC systems. The latency-optimised communication operations can also be beneficial in other contexts, for instance we expect them to become an important building block for a runtime-system providing dynamic load balancing, also under development within CRESTA.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"106 1","pages":"669-674"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87902707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Poster: Automatically Adapting Programs for Mixed-Precision Floating-Point Computation
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.232 | Pages: 1424
Michael O. Lam, B. Supinski, M. LeGendre, J. Hollingsworth
As scientific computation continues to scale, efficient use of floating-point arithmetic processors is critical. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set leads to inaccurate results. We present a framework that uses binary instrumentation and modification to build mixed-precision configurations of existing binaries that were originally developed to use only double-precision. Initial results with the Algebraic MultiGrid kernel demonstrate a nearly 2× speedup.
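A small NumPy illustration of the trade-off such a framework navigates: float32 halves memory traffic, but a naive running sum in float32 drifts once the accumulator dwarfs the addends, whereas a float64 accumulator over the same float32 data stays accurate. The exact digits are unimportant; the point is the size of the drift:

```python
import numpy as np

data = np.full(1_000_000, 0.1, dtype=np.float32)   # half the bytes of float64

acc32 = np.float32(0.0)
for x in data:                       # naive accumulation entirely in float32
    acc32 += x

acc64 = data.sum(dtype=np.float64)   # same inputs, wider accumulator

print(acc32)   # drifts visibly away from 100000
print(acc64)   # ~100000, limited only by float32's rounding of 0.1
```

Tools like the one described here search for the subset of program variables that, like `acc64` above, must stay in double precision for the final answer to remain acceptable.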
{"title":"Poster: Automatically Adapting Programs for Mixed-Precision Floating-Point Computation","authors":"Michael O. Lam, B. Supinski, M. LeGendre, J. Hollingsworth","doi":"10.1109/SC.Companion.2012.232","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.232","url":null,"abstract":"As scientific computation continues to scale, efficient use of floating-point arithmetic processors is critical. Lower precision allows streaming architectures to perform more operations per second and can reduce memory bandwidth pressure on all architectures. However, using a precision that is too low for a given algorithm and data set leads to inaccurate results. We present a framework that uses binary instrumentation and modification to build mixed-precision configurations of existing binaries that were originally developed to use only double-precision. Initial results with the Algebraic MultiGrid kernel demonstrate a nearly 2χ speedup.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"96 1","pages":"1424-1424"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88408077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract: Leveraging PEPPHER Technology for Performance Portable Supercomputing
Pub Date: 2012-11-10 | DOI: 10.1109/SC.Companion.2012.212 | Pages: 1395-1396
C. Kessler, Usman Dastgeer, M. Majeed, N. Furmento, Samuel Thibault, R. Namyst, S. Benkner, Sabri Pllana, J. Träff, Martin Wimmer
PEPPHER is a 3-year EU FP7 project that develops a novel approach and framework to enhance the performance portability and programmability of heterogeneous multi-core systems. Its primary target is single-node heterogeneous systems, where several CPU cores are supported by accelerators such as GPUs. This poster briefly surveys the PEPPHER framework for single-node systems and elaborates on the prospects for leveraging the PEPPHER approach to generate performance-portable code for heterogeneous multi-node systems.
{"title":"Abstract: Leveraging PEPPHER Technology for Performance Portable Supercomputing","authors":"C. Kessler, Usman Dastgeer, M. Majeed, N. Furmento, Samuel Thibault, R. Namyst, S. Benkner, Sabri Pllana, J. Träff, Martin Wimmer","doi":"10.1109/SC.Companion.2012.212","DOIUrl":"https://doi.org/10.1109/SC.Companion.2012.212","url":null,"abstract":"PEPPHER is a 3-year EU FP7 project that develops a novel approach and framework to enhance performance portability and programmability of heterogeneous multi-core systems. Its primary target is single-node heterogeneous systems, where several CPU cores are supported by accelerators such as GPUs. This poster briefly surveys the PEPPHER framework for single-node systems, and elaborates on the prospectives for leveraging the PEPPHER approach to generate performance-portable code for heterogeneous multi-node systems.","PeriodicalId":6346,"journal":{"name":"2012 SC Companion: High Performance Computing, Networking Storage and Analysis","volume":"90 1","pages":"1395-1396"},"PeriodicalIF":0.0,"publicationDate":"2012-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86603215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}