The impact of recovery mechanisms on the likelihood of saving corrupted state
Subhachandra Chandra, Peter M. Chen
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173219
Recovery systems must save state before a failure occurs to enable the system to recover from the failure. However, recovery will fail if the recovery system saves any state corrupted by the fault. The frequency and comprehensiveness with which a recovery system saves state have a major effect on how often it inadvertently saves corrupted state. This paper explores and measures that effect. We measure how often software faults in the application and operating system cause real applications to save corrupted state when using different types of recovery systems. We find that generic recovery techniques, such as checkpointing and logging, work well for faults in the operating system. However, they do not work well for faults in the application, because the very actions taken to enable recovery often corrupt the state upon which successful recovery depends.
Test reuse in the spreadsheet paradigm
M. Fisher, Dalai Jin, G. Rothermel, M. Burnett
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173265
Spreadsheet languages are widely used by a variety of end users to perform many important tasks. Despite their perceived simplicity, spreadsheets often contain faults. Furthermore, users modify their spreadsheets frequently, which can render previously correct spreadsheets faulty. To address this problem, we previously introduced a visual approach by which users can systematically test their spreadsheets, see where new tests are required after changes, and request automated generation of potentially useful test inputs. To date, however, this approach has not taken advantage of previously developed test cases, which means that users of the approach cannot benefit, when retesting following changes, from prior testing efforts. We have therefore been investigating ways to add support for test reuse to our spreadsheet testing methodology. In this paper we present a test reuse strategy for spreadsheets and the algorithms that implement it, and describe their integration into our spreadsheet testing methodology. We report results of a case study examining the application of this strategy.
On estimating testing effort needed to assure field quality in software development
O. Mizuno, Eijiro Shigematsu, Yasunari Takagi, T. Kikuno
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173234
In practical software development, software quality is generally evaluated by the number of residual defects. To keep the number of residual defects within a permissible value, too much effort is often assigned to software testing. We develop a statistical model to determine the amount of testing effort needed to assure field quality. The model explicitly includes design, review, and test (including debug) activities. First, we construct a linear multiple regression model that clarifies the relationship between the number of residual defects and the effort assigned to design, review, and test activities. We then confirm the applicability of the model by statistical analysis using actual project data. Next, we derive an equation based on the model to determine the test effort; its parameters are the permissible number of residual defects, the design effort, and the review effort. The equation thus determines the test effort needed to keep residual defects within the permissible number. Finally, we conduct an experimental evaluation using actual project data and show the usefulness of the equation.
Fault detection capabilities of coupling-based OO testing
R. Alexander, Jeff Offutt, J. Bieman
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173253
Object-oriented programs shift the focus from software units to the way classes and components are connected. Thus, we need less emphasis on unit testing and more on integration testing. The compositional relationships of inheritance and aggregation, especially when combined with polymorphism, introduce new kinds of integration faults, which can be covered using testing criteria that take the effects of inheritance and polymorphism into account. This paper demonstrates, via a set of experiments, the relative effectiveness of several coupling-based OO testing criteria and branch coverage. The OO criteria are all more effective than branch coverage at detecting faults that arise from the use of inheritance and polymorphism.
Reliability assessment of framework-based distributed embedded software systems
F. Bastani, Sung Kim, I. Yen, I. Chen
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173291
Distributed embedded software systems, such as sensor networks and command and control systems, are complex systems with stringent performance, reliability, security, and safety constraints. These are also long-lived systems that must be continually upgraded and evolved to incorporate enhanced functionality. One approach for achieving high quality and evolvability for these systems is to organize them in the form of application-oriented frameworks that allow the system to be composed from orthogonal aspects that can be independently developed, evolved, and certified. In this paper, we define a general framework that allows a distributed embedded system to have relatively independent aspects, including "plug-and-play" capability. We present conditions under which the reliability of the system can be inferred from the reliability of the individual aspects. The approach is illustrated for a framework-based distributed sensor network.
Saturation effects in testing of formal models
T. Menzies, David Owen, B. Cukic
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173208
Formal analysis of software is a powerful tool, but it can be too costly. Random search of formal models can reduce that cost, but is theoretically incomplete. However, random search of finite-state machines exhibits an early saturation effect: random search quickly yields all that can be found, even after a much longer search. Hence, we avoid the theoretical problem of incompleteness, provided that testing continues until after the saturation point. Such a random search is rapid, consumes little memory, is simple to implement, and can handle very large formal models (in one experiment shown here, over 10^178 states).
An empirical study of tracing techniques from a failure analysis perspective
Satya Kanduri, Sebastian G. Elbaum
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173270
Tracing is a dynamic analysis technique for continuously capturing events of interest in a running program. The execution of a statement, the invocation of a function, and the triggering of a signal are examples of traced events. Software engineers employ traces to accomplish various tasks, ranging from performance monitoring to failure analysis. Despite its capabilities, tracing can negatively impact the performance and general behavior of an application. To minimize that impact, traces are normally buffered and transferred to (slower) permanent storage at specific intervals. This scenario presents a delicate balance: increased buffering can minimize the impact on the target program, but it increases the risk of losing valuable collected data in the event of a failure; frequent disk transfers can ensure traced data integrity, but they risk a high impact on the target program. We conducted an experiment involving six tracing schemes and various buffer sizes to address these trade-offs. Our results highlight opportunities for tailored tracing schemes that would benefit failure analysis.
Effect of disturbances on the convergence of failure intensity
João W. Cangussu, A. Mathur, R. Decarlo
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173296
We report a study to determine the impact of four types of disturbances on the failure intensity of a software product undergoing system test. Hardware failures, discovery of a critical fault, and attrition in the test team are examples of disturbances that will likely affect the convergence of the failure intensity to its desired value. Such disturbances are modeled as impulse, pulse, step, and white-noise signals. Our study examined, in quantitative terms, the impact of such disturbances on the convergence behavior of the failure intensity. Results from this study reveal that the behavior of the state model, proposed elsewhere, is consistent with what one might predict. The model is useful in that it provides a quantitative measure of the delay one can expect when a disturbance occurs.
Automatic synthesis of dynamic fault trees from UML system models
Ganesh J. Pai, J. Dugan
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173261
The reliability of a computer-based system may be as important as its performance and its correctness of computation. It is worthwhile to estimate system reliability at the conceptual design stage, since reliability can influence subsequent design decisions and may often be pivotal for making trade-offs or establishing system cost. In this paper we describe a framework for modeling computer-based systems, based on the Unified Modeling Language (UML), that facilitates automated dependability analysis during design. An algorithm to automatically synthesize dynamic fault trees (DFTs) from the UML system model is developed. We succeed both in embedding the information needed for reliability analysis within the system model and in generating the DFT. Thereafter, we evaluate our approach using examples of real systems. We analytically compute system unreliability from the algorithmically developed DFT and compare our results with the analytical solution of manually developed DFTs. Our solutions produce the same results as manually generated DFTs.
Reliability prediction and sensitivity analysis based on software architecture
S. Gokhale, Kishor S. Trivedi
Pub Date: 2002-11-12 | DOI: 10.1109/ISSRE.2002.1173214
Prevalent approaches to characterizing the behavior of monolithic applications are inappropriate for modeling modern software systems, which are heterogeneous and built from a combination of components picked off the shelf, developed in-house, and developed contractually. Techniques to characterize the behavior of such component-based software systems based on their architecture are therefore essential. Earlier efforts in architecture-based analysis have focused on the development of composite models, which are quite cumbersome due to their inherent largeness and stiffness. In this paper we develop an accurate hierarchical model to predict the performance and reliability of component-based software systems based on their architecture. This model accounts for the variance of the number of visits to each module, and thus provides predictions closer to those of a composite model. The approach developed in this paper enables the identification of performance and reliability bottlenecks. We also develop expressions to analyze the sensitivity of the performance and reliability predictions to changes in the parameters of individual modules. In addition, we demonstrate how the hierarchical model can be used to assess the impact of workload changes on the performance and reliability of the application. We illustrate the performance and reliability prediction as well as the sensitivity analysis techniques with examples.