Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250341
Title: PM²P: a tool for performance monitoring of message passing applications in COTS PC clusters
Authors: Maya Haridasan, G. H. Pfitscher
Published in: Proceedings. 15th Symposium on Computer Architecture and High Performance Computing
Abstract: The use of clusters of computers as an environment for high-performance computing has proven promising. However, efficient use of such systems still requires advances that make application development simpler and more productive, and cluster monitoring tools are essential to achieving these advances. We present PM²P, a tool for clusters of personal computers that provides graphical visualization of the temporal execution of distributed applications that use the MPI message-passing standard. The tool uses the parallel port to read the times of events occurring on the different machines of a cluster. Among other functionality, it also simulates the execution of task precedence graphs and allocates the tasks of a graph to the machines of a cluster.
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250315
Title: Exploring memory hierarchy with ArchC
Authors: Pablo Viana, E. Barros, S. Rigo, R. Azevedo, G. Araújo
Abstract: We present the cache-configuration exploration of a programmable system, aiming to find the best match between the architecture and a given application. Programmable systems composed of processors and memories can be rapidly simulated using ArchC, an architecture description language (ADL) based on SystemC. Initially designed to model processor architectures, ArchC was extended to support a more detailed description of the memory subsystem, allowing design-space exploration of the whole programmable system. As an example, we show an image-processing application running on a SPARC-V8 processor-based architecture whose memory organization was adjusted to minimize cache misses.
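The kind of design-space exploration described above can be illustrated in miniature with a toy LRU set-associative cache simulator that counts misses for an address trace under different geometries (a hypothetical Python sketch; ArchC models the memory subsystem at a far more detailed level):

```python
# Toy cache-miss counter: sweep cache geometries for a fixed address
# trace, in the spirit of the exploration described above (a sketch,
# not ArchC itself).
def count_misses(trace, num_sets, ways, line_size):
    """Count misses for an address trace on an LRU set-associative cache."""
    sets = [[] for _ in range(num_sets)]  # each set holds tags, MRU last
    misses = 0
    for addr in trace:
        line = addr // line_size
        idx = line % num_sets
        tag = line // num_sets
        s = sets[idx]
        if tag in s:
            s.remove(tag)            # hit: move tag to MRU position
        else:
            misses += 1
            if len(s) >= ways:       # set full: evict the LRU tag
                s.pop(0)
        s.append(tag)
    return misses

# Sweep a few set counts for a strided access pattern repeated 4 times.
trace = [i * 64 for i in range(256)] * 4
for num_sets in (16, 64, 256):
    print(num_sets, count_misses(trace, num_sets, ways=2, line_size=32))
```

With 256 sets the working set fits and only the first pass misses; the smaller configurations thrash, which is exactly the kind of effect a geometry sweep exposes.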
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250318
Title: Complex branch profiling for dynamic conditional execution
Authors: R. Santos, T. Santos, M. Pilla, P. Navaux, S. Bampi, M. Nemirovsky
Abstract: Branch predictors are widely used to handle conditional branches. Despite their high accuracy rates, misprediction penalties are still large in any superscalar pipeline. DCE (dynamic conditional execution) is an alternative that reduces the number of predicted branches by executing both paths of certain branches, thereby reducing the number of predictions and, consequently, the occurrence of mispredictions. The goal of this work is to analyze the complexity of branch structures, determine the number of branches that can be predicated in DCE, and characterize the distribution of mispredictions according to the proposed classification. The proposed complex-branch classification extends the classification presented by Klauser et al. (1998). As a result, we show that on average 35% of all branches can be predicated in DCE and that around 32% of all mispredictions fall on these branches.
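For context on misprediction figures like those above, the classic baseline against which such rates are measured is the two-bit saturating counter predictor; a minimal sketch (illustrative only — the paper's profiling infrastructure and classification are far more elaborate):

```python
# Two-bit saturating counter, the textbook baseline branch predictor.
# States 0-1 predict not-taken, states 2-3 predict taken.
def mispredictions(outcomes):
    """Count mispredictions over a sequence of branch outcomes (bools)."""
    state, misses = 2, 0                 # start in "weakly taken"
    for taken in outcomes:
        if (state >= 2) != taken:        # prediction disagrees with outcome
            misses += 1
        # saturating update toward the actual outcome
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return misses

# A loop branch: taken 7 times, then the single not-taken loop exit.
print(mispredictions([True] * 7 + [False]))  # 1 misprediction
```

The saturation is what keeps a single loop exit from flipping the prediction, while a strictly alternating branch defeats the counter entirely.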
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250324
Title: Performance issues of bandwidth reservations for grid computing
Authors: Lars-Olof Burchard, Hans-Ulrich Heiß, C. Rose
Abstract: In general, two types of resource reservations in computer networks can be distinguished: immediate reservations, made in a just-in-time manner, and advance reservations, which allow resources to be reserved long before they are actually used. Advance reservations are especially useful for grid computing, but also for a variety of other applications that require network quality of service, such as content distribution networks or even mobile clients, which need advance reservations to support handovers for streaming video. With the emergence of the MPLS standard, explicit routing can also be implemented in IP networks, overcoming the unpredictable routing behavior that so far prevented the implementation of advance reservation services. The impact of such advance reservation mechanisms on network performance, with respect to the number of admitted requests and the allocated bandwidth, has so far not been examined in detail. We show that advance reservations can reduce network performance with respect to both metrics; analysis traces the cause to fragmentation of the network resources. In advance-reservation environments, additional new services such as malleable reservations can be defined, and these can increase network performance. Four strategies for scheduling malleable reservations are presented and compared. The comparisons show that some strategies increase resource fragmentation and are therefore unsuitable in the considered environment, while others lead to significantly better network performance. Besides the performance issues, the software architecture of a management system for advance reservations is presented.
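The core mechanism of advance reservation — admission control against future bandwidth commitments — can be sketched on a discretized timeline (a hypothetical helper, not the authors' management system; class and method names are invented for illustration):

```python
# Sketch of book-ahead admission control: a reservation for a future
# interval is admitted only if every covered time slot stays within
# link capacity.
class BandwidthTimeline:
    def __init__(self, capacity, horizon):
        self.capacity = capacity
        self.load = [0] * horizon    # reserved bandwidth per time slot

    def admit(self, start, end, bandwidth):
        """Admit a reservation for slots [start, end) if capacity allows."""
        if any(self.load[t] + bandwidth > self.capacity
               for t in range(start, end)):
            return False             # some slot would exceed capacity
        for t in range(start, end):
            self.load[t] += bandwidth
        return True

link = BandwidthTimeline(capacity=100, horizon=24)
print(link.admit(2, 6, 60))   # True:  slots 2-5 now carry 60
print(link.admit(4, 8, 60))   # False: slots 4-5 would reach 120
print(link.admit(6, 8, 60))   # True:  no overlap with the first
```

The rejected middle request illustrates the fragmentation effect the abstract describes; a malleable reservation could instead be retried with lower bandwidth over a longer interval.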
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250344
Title: Three hardware implementations for the binary modular exponentiation: sequential, parallel and systolic
Authors: N. Nedjah, L. M. Mourelle
Abstract: Modular exponentiation is the cornerstone computation of public-key cryptography systems such as RSA, and the operation is time-consuming for large operands. We describe the characteristics of three architectures designed to implement modular exponentiation using the fast binary method: the first FPGA prototype has a sequential architecture, the second a parallel architecture, and the third a systolic array-based architecture. We compare the three prototypes using the classic time × area factor. All three prototypes implement modular multiplication using the popular Montgomery algorithm.
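The combination the abstract names — the binary (square-and-multiply) method on top of Montgomery multiplication — can be shown as a software model (a sketch of the underlying arithmetic, not the FPGA prototypes; requires Python 3.8+ for the modular inverse via `pow`):

```python
# Binary (square-and-multiply) modular exponentiation over Montgomery
# multiplication: all intermediate products are reduced with REDC,
# which replaces division by n with shifts and masks by R = 2^k.
def montgomery_exp(base, exp, n):
    """Compute base**exp mod n for odd n using Montgomery reduction."""
    k = n.bit_length()
    R = 1 << k                       # Montgomery radix, R > n, gcd(R, n) = 1
    n_prime = (-pow(n, -1, R)) % R   # n * n' ≡ -1 (mod R)

    def redc(t):                     # returns t * R^-1 mod n
        m = (t * n_prime) % R
        u = (t + m * n) >> k
        return u - n if u >= n else u

    def mont_mul(a, b):              # Montgomery product of two residues
        return redc(a * b)

    x = (base % n) * R % n           # map the base into Montgomery form
    acc = R % n                      # Montgomery form of 1
    for bit in bin(exp)[2:]:         # left-to-right binary method
        acc = mont_mul(acc, acc)     # always square
        if bit == "1":
            acc = mont_mul(acc, x)   # multiply only when the bit is set
    return redc(acc)                 # map the result back out

print(montgomery_exp(7, 560, 561), pow(7, 560, 561))
```

The hardware appeal is visible even here: `redc` needs only multiplications, additions, and a k-bit shift, never a trial division by n.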
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250320
Title: JRastro: a trace agent for debugging multithreaded and distributed Java programs
Authors: Gabriela Jacques-Silva, L. Schnorr, B. Stein
Abstract: Program tracing is one of the most widely used techniques for debugging parallel and distributed programs: events are recorded in trace files during execution for post-mortem visualization of the program's behavior. We describe JRastro, a trace agent capable of tracing Java programs. The agent was designed around three key features: transparency to the application developer, use of unmodified Java virtual machines, and observation of remote method invocations. By integrating these three features, JRastro differentiates itself from similar tools. For a complete and clean implementation of RMI visualization, however, additional support in the Java monitoring system is needed.
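The essence of such a trace agent — transparently recording timestamped call/return events for post-mortem visualization — can be sketched with Python's `sys.settrace` hook (an analogy only; JRastro instruments the JVM, not Python):

```python
# Minimal trace agent: record timestamped call/return events without
# modifying the traced program, then dump them for post-mortem analysis.
import sys
import time

events = []  # (timestamp, event kind, function name)

def tracer(frame, event, arg):
    if event in ("call", "return"):
        events.append((time.time(), event, frame.f_code.co_name))
    return tracer                # keep tracing inside nested calls

def fib(n):                      # the "application": untouched by tracing
    return n if n < 2 else fib(n - 1) + fib(n - 2)

sys.settrace(tracer)             # attach the agent
fib(3)
sys.settrace(None)               # detach the agent

for ts, ev, name in events:
    print(f"{ts:.6f} {ev:>6} {name}")
```

As in the tool described above, the traced program needs no source changes; the hook sees every frame entry and exit, which is what a timeline visualization consumes.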
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250336
Title: A BSP/CGM algorithm for computing Euler tours in graphs
Authors: E. Cáceres, C. Y. Nasu
Abstract: We describe a parallel algorithm in the BSP/CGM (bulk synchronous parallel / coarse-grained multicomputer) model for computing Euler tours in graphs. It is based on the PRAM (parallel random access machine) algorithm of Cáceres et al. For an input graph of n vertices and m edges, the algorithm requires O((m+n)/p) local computation time, O((m+n)/p) memory, and O(log p) communication rounds, where p is the number of processors. To our knowledge there are no other parallel algorithms under coarse-grained models for computing Euler tours in graphs. The proposed algorithm was implemented in C with MPI (message passing interface), and the parallel program runs on a 66-node Beowulf cluster. The implementation results confirm the theoretical complexity of the algorithm.
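As a sequential baseline for the problem the paper parallelizes, Hierholzer's algorithm finds an Euler tour in linear time when every vertex has even degree (a sketch of the classical method, not the BSP/CGM algorithm itself):

```python
# Hierholzer's algorithm: walk until stuck, backtracking finished
# vertices onto the tour, so sub-circuits splice together automatically.
def euler_tour(adj):
    """adj: dict vertex -> neighbor list (each undirected edge listed in
    both endpoints). Returns an Euler circuit as a vertex sequence."""
    adj = {v: list(ns) for v, ns in adj.items()}  # consumable copy
    start = next(iter(adj))
    stack, tour = [start], []
    while stack:
        v = stack[-1]
        if adj[v]:                   # an unused edge remains at v
            u = adj[v].pop()
            adj[u].remove(v)         # consume the edge in both directions
            stack.append(u)
        else:                        # v is finished: emit it
            tour.append(stack.pop())
    return tour

# Two cycles sharing vertex 0: a triangle 0-1-2 and a triangle 0-3-4.
g = {0: [1, 2, 3, 4], 1: [0, 2], 2: [0, 1], 3: [0, 4], 4: [0, 3]}
print(euler_tour(g))  # closed walk of 7 vertices using all 6 edges once
```

The PRAM/CGM approach replaces this inherently sequential walk with independent per-edge successor computations, which is what makes the O(log p)-round parallelization possible.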
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250338
Title: Dynamic load balancing in PC clusters: an application to a multiphysics model
Authors: Ricardo Vargas Dorneles, Rogério Luís Rizzi, T. A. Diverio, P. Navaux
Abstract: We describe the use of dynamic load balancing in a PC cluster, applied to a multiphysics model that combines the parallel solution of the three-dimensional (3D) PDEs of shallow-water-body flow with the parallel solution of the three-dimensional PDEs of scalar substance transport. Dynamic load balancing is obtained via diffusion algorithms. The numerical mesh is partitioned with the RCB (recursive coordinate bisection) algorithm to minimize communication and balance the load. Parallelism is obtained through Schwarz's additive domain decomposition method (DDM), so that the subproblems are solved concurrently. The programming model is SPMD, and message passing between processes in the PC cluster uses the MPICH library.
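Recursive coordinate bisection, the partitioner named above, is simple enough to sketch: repeatedly split the points at the median of the widest coordinate until the requested number of parts is reached (an illustrative 2-D sketch assuming a power-of-two part count, not the authors' mesh code):

```python
# Recursive coordinate bisection (RCB): balanced geometric partitioning
# of points, keeping each part spatially compact to limit communication.
def rcb(points, parts):
    """Partition 2-D points into `parts` (a power of two) balanced boxes."""
    if parts == 1:
        return [points]
    # choose the coordinate axis with the largest spatial extent
    axis = max((0, 1), key=lambda a: max(p[a] for p in points) -
                                     min(p[a] for p in points))
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2              # median split balances the load
    return rcb(pts[:mid], parts // 2) + rcb(pts[mid:], parts // 2)

# An 8 x 4 grid of mesh points split among 4 processes.
points = [(x, y) for x in range(8) for y in range(4)]
for part in rcb(points, 4):
    print(len(part), part[:3], "...")
```

The median split guarantees equal-sized parts, while cutting the widest axis keeps subdomain boundaries (and hence halo communication in the Schwarz iteration) short.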
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250340
Title: Load balancing on stateful clustered Web servers
Authors: George Teodoro, T. Tavares, Bruno Coutinho, Wagner Meira Jr, Dorgival Olavo Guedes Neto
Abstract: One of the main challenges to the wide use of the Internet is the scalability of servers, that is, their ability to handle increasing demand. Scalability is even harder for stateful servers, which include e-commerce and other transaction-oriented servers, since transaction data must be kept across requests from the same user. One common strategy for achieving scalability is to employ clustered servers, where the load is distributed among the various servers. However, as a consequence of the workload characteristics and the need to keep data coherent among the servers that compose the cluster, load imbalance arises among the servers, reducing the efficiency of the server as a whole. We propose and evaluate a strategy for load balancing in stateful clustered servers. Our strategy is based on control theory and achieved significant gains over configurations that do not employ the load-balancing strategy, reducing response time by up to 50% and increasing throughput by up to 16%.
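The control-theoretic idea can be sketched as a proportional controller that shifts routing weight away from servers whose measured utilization exceeds the cluster average (a hypothetical sketch — the gain, the metric, and the function are invented for illustration, not the authors' controller):

```python
# Proportional feedback balancer: the routing weight of each server is
# nudged against its utilization error relative to the cluster average.
def rebalance(weights, utilization, gain=0.5):
    """Return new routing weights adjusted by a proportional controller."""
    avg = sum(utilization) / len(utilization)
    adjusted = [max(w - gain * (u - avg), 0.01)   # keep weights positive
                for w, u in zip(weights, utilization)]
    total = sum(adjusted)
    return [w / total for w in adjusted]          # renormalize to sum to 1

weights = [1 / 3] * 3
util = [0.9, 0.5, 0.4]                            # server 0 is overloaded
weights = rebalance(weights, util)
print([round(w, 3) for w in weights])             # server 0 now gets less
```

Applied each measurement period, the loop steers utilizations toward the mean instead of reacting to instantaneous queue lengths, which is what distinguishes a feedback approach from simple round-robin dispatch.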
Pub Date: 2003-11-10  DOI: 10.1109/CAHPC.2003.1250319
Title: The limits of speculative trace reuse on deeply pipelined processors
Authors: M. Pilla, Amarildo T. da Costa, F. França, B. Childers, M. Soffa
Abstract: Trace reuse improves processor performance by skipping the execution of sequences of redundant instructions. However, many reusable traces do not have all of their inputs ready by the time the reuse test is performed. For these cases, we developed a new technique called reuse through speculation on traces (RST), in which trace inputs may be predicted. We study the limits of RST for modern processors with deep pipelines, as well as the effects of resource constraints on performance. We show that our approach reuses more traces than the nonspeculative trace reuse technique, with speedups of 43% over nonspeculative trace reuse and 57% when memory accesses are also reused.
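The reuse test at the heart of this line of work can be sketched as a lookup in a reuse buffer keyed by a trace's input values: when the inputs match a previous execution, the recorded outputs are reused and the trace is skipped (an illustrative software analogy; RST additionally *predicts* inputs that are not yet ready, which this sketch omits):

```python
# Software analogy of a trace reuse buffer: identical (trace, inputs)
# pairs skip re-execution and return the previously recorded outputs.
reuse_buffer = {}
executions = 0

def run_trace(trace_id, inputs, compute):
    global executions
    key = (trace_id, inputs)
    if key in reuse_buffer:          # reuse test succeeds: skip execution
        return reuse_buffer[key]
    executions += 1                  # reuse test fails: execute and record
    result = compute(*inputs)
    reuse_buffer[key] = result
    return result

# The same instruction sequence with the same inputs executes only once.
print(run_trace("t1", (3, 4), lambda a, b: a * b + a))  # 15
print(run_trace("t1", (3, 4), lambda a, b: a * b + a))  # 15, reused
print(executions)  # 1
```

In hardware the buffer is finite and indexed associatively, and the speculative variant issues the lookup with predicted inputs, validating them once the real values arrive.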