A method of service quality estimation with a network measurement tool
O. Maeshima, Yoshihiro Ito, M. Ishikura, T. Asami
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749439
1999 IEEE International Performance, Computing and Communications Conference (Cat. No.99CH36305)

Recently, many kinds of real-time applications have become available over IP networks. It is important to measure network performance for such applications before deploying the real applications. The authors developed a general-purpose traffic measurement tool for IP networks. The system can flexibly generate any kind of traffic and calculate network performance metrics such as throughput, delay and loss rate over the packets it observes. In this paper, the concept and implementation of this tool are described in detail. As an example of network measurement, we also conducted subjective assessment tests for Internet telephony and demonstrated the validity of this tool.
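The three metrics the abstract names (throughput, delay, loss rate) can be sketched as a small per-flow computation over timestamped packet observations. This is an illustrative sketch, not the authors' tool; the tuple layout and function name are assumptions:

```python
def flow_metrics(packets, interval):
    """Compute throughput, mean one-way delay, and loss rate for one flow.

    `packets` is a list of (seq, sent_time, recv_time, size_bytes) tuples,
    with recv_time set to None for packets lost in transit.
    `interval` is the measurement window in seconds.
    """
    received = [p for p in packets if p[2] is not None]
    loss_rate = 1 - len(received) / len(packets)
    # throughput counts only bytes that actually arrived, in bits/second
    throughput_bps = sum(p[3] for p in received) * 8 / interval
    mean_delay = sum(p[2] - p[1] for p in received) / len(received)
    return throughput_bps, mean_delay, loss_rate
```

In a real tool the sender and receiver clocks must be synchronized (or round-trip times halved) before one-way delays like these are meaningful.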
Measuring dynamic memory invocations in object-oriented programs
M. Chang, Woo Hyong Lee, Y. Hasan
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749448

Dynamic memory management has been a high-cost component in many software systems. Studies have shown that memory-intensive C programs can spend up to 30% of their runtime on memory allocation and deallocation. Object-oriented language systems tend to create and delete objects prolifically: an empirical study showed that C++ programs can perform ten times more memory allocation and deallocation than comparable C programs. However, the allocation behavior of C++ programs is rarely reported. This paper attempts to locate where dynamic memory allocations come from and reports an empirical study of dynamic memory invocations in C++ programs. First, it catalogs the situations that invoke dynamic memory management explicitly and implicitly: constructors, copy constructors, the overloaded assignment operator=, type conversions and application-specific member functions. Second, it reports the development of a source-code-level tracing tool used to investigate these cases. Third, the results include behavioral patterns of memory allocation. With these patterns, resource reuse can be increased; for example, a profile-based strategy can improve the performance of dynamic memory management. The C++ programs traced include a Java compiler, a CORBA-compliant system and a visual framework.
Enhancing the PCI bus to support real-time streams
M. Scottis, M. Krunz, M. M. Liu
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749453

In this paper we present an access scheduling scheme for real-time streams (RTS) over the peripheral component interconnect (PCI) bus. We derive a bus model based on the rate monotonic scheduling (RMS) algorithm that guarantees the timing quality of service (QoS) for real-time streams over the PCI bus. The proposed model is valid for constant-bit-rate (CBR) as well as variable-bit-rate (VBR) streams. We define the effective bus utilization (EBU) as the worst-case bus utilization and determine the value of the internal latency timer (ILT) that minimizes the EBU. Finally, we present simulation results to demonstrate the practicality of the proposed scheme.
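Rate-monotonic scheduling, which the bus model above builds on, comes with a classical sufficient schedulability test: the Liu and Layland utilization bound. The sketch below checks that bound for a set of periodic streams; it is a generic RMS check, not the paper's EBU/ILT derivation:

```python
def rms_schedulable(tasks):
    """Liu & Layland sufficient test for rate-monotonic scheduling.

    `tasks` is a list of (cost, period) pairs -- here, worst-case bus
    occupancy per period for each real-time stream, in the same time unit.
    Returns (utilization, bound, passes); passing guarantees the task set
    is schedulable under fixed rate-monotonic priorities.
    """
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)  # approaches ln 2 (~0.693) as n grows
    return utilization, bound, utilization <= bound
```

Note the test is sufficient but not necessary: a set that fails the bound may still be schedulable, which is one reason the paper derives a tighter, bus-specific model.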
A PowerPC platform full system simulation - from the MOOSE up
L. Robinson, G. Whisenhunt
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749472

Systems simulation is not new. Several efforts of varying scope have appeared over the last few years, providing diverse levels of simulation capability, and a plethora of simulation kernels and environments is available today. Each has strengths and weaknesses, usually centered on the specific environment the simulation targets. During the mid-1990s there were no other publicly available simulations that provided what we considered a complete functional system simulation environment. This paper presents the PowerSim simulation environment and the MOOSE simulation kernel. PowerSim is a full-system simulation of a PowerPC computer platform capable of running unmodified complex operating systems and applications. MOOSE (Motorola Object-Oriented Simulation Environment) is a simulation kernel capable of running distributed object-oriented simulations in an efficient, synchronized manner. In this simulation environment, it is possible to analyze an almost unlimited set of applications and systems software.
Space optimal PIF algorithm: self-stabilized with no extra space
A. Bui, A. Datta, F. Petit, V. Villain
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749416

Recently (1998), we introduced a new self-stabilizing PIF paradigm, called Propagation of Information with Feedback and Cleaning (PFC), for rooted tree networks. In this paper, we propose the first self-stabilizing PIF scheme for tree networks without a sense of direction: the trees have no root, and the processors maintain no ancestor pointers. The proposed PIF scheme is based on the PFC paradigm. A PIF algorithm for trees without a sense of direction is useful in many applications because it allows maintaining only one spanning tree of the network instead of one per processor. The proposed algorithm requires 3 states per processor, and only 2 states for the initiator and the leaves. This space requirement is optimal for both self-stabilizing and non-stabilizing PIF algorithms on tree networks; thus, the processors need no extra space to stabilize the proposed PIF scheme.
Design and analysis of one prong network restoration algorithms
C. E. Chow, A. Hansmats
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749440

To improve reliability, broadband optical networks require fast restoration from single-link failures, node failures and multiple-link failures. This paper presents two distributed network restoration algorithms based on the one-prong approach. DFOP uses depth-first search with a time-out mechanism to collect more network topology information. BFOP uses breadth-first search with a time-out mechanism to explore restoration paths with shorter hop counts, reducing spare-capacity usage. Both can handle single-link failures, node failures, multiple-link failures and area failures. Comparisons of these algorithms with an adaptive one-prong algorithm are also presented.
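BFOP's preference for restoration paths with fewer hops can be illustrated with a plain breadth-first search that routes around a set of failed links. This is a centralized, illustrative sketch, not the distributed algorithm in the paper:

```python
from collections import deque

def restoration_path(adj, src, dst, failed):
    """Minimum-hop path from src to dst avoiding failed links.

    `adj` maps each node to its neighbor list; `failed` is a set of
    frozenset({u, v}) links to route around. Returns the node list of
    a shortest restoration path, or None if dst is unreachable.
    """
    parent = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            path = []
            while node is not None:  # walk parents back to src
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in adj[node]:
            if nbr not in parent and frozenset((node, nbr)) not in failed:
                parent[nbr] = node
                queue.append(nbr)
    return None
```

BFS explores nodes in order of hop count, so the first time it reaches the destination the path is hop-minimal, which is exactly the property that keeps spare usage low.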
Using a single address space operating system for distributed computing and high performance
Alan Skousen, Donald S. Miller
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749414

Recent 64-bit microprocessors have made a huge address space of 18.4 quintillion bytes potentially available to programs. This has led to the design of operating systems that provide a single virtual address space in which all code and data reside and which spans all levels of storage and all nodes of a distributed system. These operating systems, called SASOSs, have characteristics that can support synchronization and coherency in a distributed system in ways that provide a better program-development environment and higher performance than conventional operating systems. Sombrero, our SASOS design, uses its hardware support for object-grained protection, separate thread-related protection domains and implicit protection-domain crossing to provide synchronization and coherency support for distributed object copy-set management that is not available in SASOSs built on stock processors. Its design, which provides direct system-level support for object-oriented programming, includes a number of system architectural features targeted at modern distributed computing.
Are all scientific workloads equal?
R. Oliver, P. Teller
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749450

Widely used benchmarks are commonly classified as either scientific or commercial. Although process execution characteristics have been used as indicators of a benchmark's classification, a set of these characteristics, together with a mechanism for easily comparing and contrasting workloads and partitioning them into classes with respect to these characteristics, has not been identified. This paper identifies a set of process execution characteristics (PEC) that can be used to compare and contrast workloads, and a method for partitioning workloads with respect to their PEC. These PEC, such as instruction locality, execution cycles per instruction, and context-switch frequency, are displayed with a high-density visualization tool called the PEC-Graph. Using the centroid linkage algorithm, processes' PEC are partitioned into clusters that are used to construct a taxonomy of workloads finer grained than those previously reported in the literature. The finer-grained categorization enables computer architects to select workloads known to stress specific architectural features, yielding potentially better performance analysis of new designs.
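The centroid linkage step named above is a form of agglomerative clustering: repeatedly merge the two clusters whose centroids are closest until the desired number of clusters remains. The toy sketch below shows the linkage rule on small points; the actual study clusters multi-dimensional PEC vectors, and the names here are illustrative:

```python
def centroid_linkage(points, k):
    """Agglomerative clustering with centroid linkage.

    `points` is a list of equal-length coordinate tuples; merging stops
    when `k` clusters remain. Returns the clusters as lists of points.
    O(n^3) -- fine for a sketch, not for large workload sets.
    """
    clusters = [[p] for p in points]

    def centroid(cluster):
        return [sum(coord) / len(cluster) for coord in zip(*cluster)]

    def dist2(a, b):  # squared Euclidean distance between centroids
        return sum((x - y) ** 2 for x, y in zip(a, b))

    while len(clusters) > k:
        cents = [centroid(c) for c in clusters]
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: dist2(cents[ij[0]], cents[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```

Unlike single or complete linkage, centroid linkage compares cluster averages, which suits the paper's goal of grouping processes by typical rather than extreme behavior.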
A toolset for assisted formal verification
N. Malik, J. Baumgartner, S. Roberts, Ryan Dobson
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749477

There has been growing interest in applying formal methods to functional and performance verification of complex and safety-critical designs. Model checking is one of the most common formal verification methodologies for sequential logic, owing to its automated decision procedures and its ability to provide "counterexamples" for debugging. However, model checking has not found broad acceptance as a verification methodology because of its complexity: correctness properties must be specified in a temporal logic, and an environment must be developed around a partitioned model under test in a nondeterministic HDL-type language. Engineers are generally not trained in mathematical logic languages, and becoming proficient in one requires a steep learning curve. Furthermore, defining a behavioral environment at the complex and undocumented microarchitectural interface level is a time-consuming and error-prone activity. There is therefore strong motivation to bring model checking to a level at which designers can use it as part of their design process without being burdened with details generally only within the grasp of computer theoreticians. The paper outlines two tools that greatly assist in this goal: the first, Polly, automates the difficult and error-prone task of developing the behavioral environment around the partitioned model under test; the second, Oracle, obviates the need to learn temporal logic to enter specifications.
PDATS II: improved compression of address traces
E. Johnson
Pub Date: 1999-02-10 | DOI: 10.1109/PCCC.1999.749423

The tremendous storage space required for a useful database of memory-reference traces has prompted a search for trace compaction techniques. PDATS is the standard trace format used in the NMSU TraceBase, a widely used archive of memory and instruction traces. The PDATS family of trace compression techniques achieves coding densities of about six references per byte, with no loss of reference-type or address information, by using differential run-length encoding. This paper proposes an improvement on the PDATS scheme that doubles the typical compression ratio without losing information.
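The differential run-length idea behind PDATS can be sketched as an invertible transform: store (stride, count) pairs instead of raw addresses, so regular access patterns such as sequential instruction fetches collapse to a single pair. This is illustrative only; the real format also encodes reference types and packs records into variable-width fields:

```python
def delta_rle(addresses, base=0):
    """Differential run-length encoding of an address trace.

    Each output pair [delta, count] means `count` successive references
    separated by the same address increment `delta` from the previous
    reference (the first delta is taken relative to `base`).
    """
    out = []
    prev = base
    for addr in addresses:
        delta = addr - prev
        prev = addr
        if out and out[-1][0] == delta:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([delta, 1])
    return out

def delta_rle_decode(pairs, base=0):
    """Inverse transform: regenerate the original address sequence."""
    addrs, prev = [], base
    for delta, count in pairs:
        for _ in range(count):
            prev += delta
            addrs.append(prev)
    return addrs
```

Because strides in real traces are small and repetitive, the deltas fit in few bytes and long runs collapse, which is where the roughly six-references-per-byte density comes from.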