Measurement of failure rate in widely distributed software
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466957
R. Chillarege, S. Biyani, J. Rosenthal
In the history of empirical failure rate measurement, one problem that continues to plague researchers and practitioners is that of measuring the customer-perceived failure rate of commercial software. Unfortunately, even order-of-magnitude measures of failure rate are not truly available for widely distributed commercial software. Given repeated reports on the criticality of software and its significance, the industry flounders for real baselines. The paper reports the failure rate of a several-million-line commercial software product distributed to hundreds of thousands of customers. To a first order of approximation, the MTBF reaches around 4 years and 2 years for successive releases of the software. The changes in the failure rate as a function of severity, release and time are also provided. The measurement technique develops a direct link between failures and faults, providing an opportunity to study and describe the failure process. Two metrics are defined and characterized: the fault weight, the number of failures due to a fault, and the failure window, the length of time between the first and last failure caused by that fault. Both metrics are found to be higher for higher-severity faults, consistently across all severities and releases. At the same time, the window-to-weight ratio is invariant across severities. The fault weight and failure window are natural, intuitive measures of the failure process: the fault weight measures the impact of a fault on the overall failure rate, and the failure window the dispersion of that impact over time. Together they provide a new forum for discussion and an opportunity to gain greater understanding of the processes involved.
{"title":"Measurement of failure rate in widely distributed software","authors":"R. Chillarege, S. Biyani, J. Rosenthal","doi":"10.1109/FTCS.1995.466957","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466957","url":null,"abstract":"In the history of empirical failure rate measurement, one problem that continues to plague researchers and practitioners is that of measuring the customer perceived failure rate of commercial software. Unfortunately, even order of magnitude measures of failure rate are not truly available for commercial software which is widely distributed. Given repeated reports on the criticality of software, and its significance, the industry flounders for some real baselines. The paper reports the failure rate of a several million line of code commercial software product distributed to hundreds of thousands of customers. To first order of approximation, the MTBF reaches around 4 years and 2 years for successive releases of the software. The changes in the failure rate as a function of severity, release and time are also provided. The measurement technique develops a direct link between failures and faults, providing an opportunity to study and describe the failure process. Two metrics, the fault weight, corresponding to the number of failures due to a fault and failure window, measuring the length of time between the first and last fault, are defined and characterized. The two metrics are found to be higher for higher severity faults, consistently across all severities and releases. At the same time the window to weight ratio, is invariant by severity. The fault weight and failure window are natural measures and are intuitive about the failure process. The fault weight measures the impact of a fault on the overall failure rate and the failure window the dispersion of that impact over time. These two do provide a new forum for discussion and opportunity to gain greater understanding of the processes involved.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"119 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131985253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On-line error monitoring for several data structures
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466960
J. Bright, G. Sullivan
We present several examples of programs which efficiently monitor the answers from queries performed on data structures to determine if any errors are present. Our paper includes the first efficient on-line error monitor for a data structure designed to perform nearest-neighbor queries. Applications of nearest-neighbor queries are extensive and include learning, categorization, speech processing, and data compression. Our paper also discusses on-line error monitors for priority queues and splittable priority queues. On-line error monitors immediately detect whether an error is present in the answer to a query. An error monitor which is not on-line may delay detection until a later query is being processed, which may allow the error to propagate or cause irreversible state changes. On-line monitors thus allow a more rapid and accurate response to an error.
{"title":"On-line error monitoring for several data structures","authors":"J. Bright, G. Sullivan","doi":"10.1109/FTCS.1995.466960","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466960","url":null,"abstract":"We present several examples of programs which efficiently monitor the answers from queries performed on data structures to determine if any errors are present. Our paper includes the first efficient on-line error monitor for a data structure designed to perform nearest neighbor queries. Applications of nearest neighbor queries are extensive and include learning, categorization, speech processing, and data compression. Our paper also discusses on-line error monitors for priority queues and splittable priority queues. On-line error monitors immediately detect if an error is present in the answer to a query. An error monitor which is not on-line may delay the time of detection until a later query is being processed which may allow the error to propagate or may cause irreversible state changes. On-line monitors can allow a more rapid and accurate response to an error.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133734530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reduced overhead logging for rollback recovery in distributed shared memory
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466971
G. Suri, B. Janssens, W. Fuchs
Rollback techniques that use message logging and deterministic replay can be used in parallel systems to recover a failed node without involving other nodes. Distributed shared memory (DSM) systems cannot directly apply message-logging techniques because they use inherently nondeterministic asynchronous communication. This paper presents new logging schemes that reduce the typically high overhead of logging in DSM. Our algorithm for sequentially consistent systems tracks, rather than logs, accesses to shared memory. In an extension of this method to lazy release consistency, the per-access overhead of tracking is eliminated entirely. Measurements with parallel applications show a significant reduction in failure-free overhead.
{"title":"Reduced overhead logging for rollback recovery in distributed shared memory","authors":"G. Suri, B. Janssens, W. Fuchs","doi":"10.1109/FTCS.1995.466971","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466971","url":null,"abstract":"Rollback techniques that use message logging and deterministic replay can be used in parallel systems to recover a failed node without involving other nodes. Distributed shared memory (DSM) systems cannot directly apply message-passing logging techniques because they use inherently nondeterministic asynchronous communication. This paper presents new logging schemes that reduce the typically high overhead for logging in DSM. Our algorithm for sequentially consistent systems tracks rather than logs accesses to shared memory. In an extension of this method to lazy release consistency, the per-access overhead of tracking has been completely eliminated. Measurements with parallel applications show a significant reduction in failure-free overhead.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115834768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependability at the user interface
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466944
R. Maxion, Aimee L. deChambeau
Even if a system's hardware and software underpinnings are completely reliable, errors at the user interface can cripple or destroy a mission, often with catastrophic consequences. Little attention has been paid to handling faults and errors at the user interface: their causes and remediations are little understood, and methods of predeployment fault detection in user interfaces are almost nonexistent. The paper presents a working definition of a user interface defect and a robust method for detecting defects automatically. An experimental methodology for empirical testing and validation is given. Results show that while the manifestations of defects may be many, only a few root causes are responsible for them.
{"title":"Dependability at the user interface","authors":"R. Maxion, Aimee L. deChambeau","doi":"10.1109/FTCS.1995.466944","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466944","url":null,"abstract":"Even if a system's hardware and software underpinnings are completely reliable, errors at the user interface can cripple or destroy a mission, often with catastrophic consequences. Little attention has been paid to handling faults and errors at the user interface their causes and remediations are little understood and methods of predeployment fault detection in user interfaces are almost nonexistent. The paper presents a working definition of a user interface defect, and a robust method for detecting defects automatically. An experimental methodology for empirical testing and validation is given. Results show that while manifestations of defects may be many, only a few root causes are responsible for them.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123955869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault tolerance in concurrent object-oriented software through coordinated error recovery
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466948
Jie Xu, B. Randell, A. Romanovsky, C. M. F. Rubira, R. Stroud, Zhixue Wu
Presents a scheme for coordinated error recovery between multiple interacting objects in a concurrent object-oriented system. A conceptual framework for fault tolerance is established, based on a general object concurrency model that is supported by most concurrent object-oriented languages and systems. This framework integrates two complementary concepts: conversations and transactions. Conversations (associated with cooperative exception handling) are used to provide coordinated error recovery between concurrent interacting activities, whilst transactions are used to maintain the consistency of shared resources in the presence of concurrent access and possible failures. The serialisability property of transactions is exploited in order to help prevent unexpected information smuggling. The proposed framework is illustrated by means of a case study, and various linguistic and implementation issues are discussed.
{"title":"Fault tolerance in concurrent object-oriented software through coordinated error recovery","authors":"Jie Xu, B. Randell, A. Romanovsky, C. M. F. Rubira, R. Stroud, Zhixue Wu","doi":"10.1109/FTCS.1995.466948","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466948","url":null,"abstract":"Presents a scheme for coordinated error recovery between multiple interacting objects in a concurrent object-oriented system. A conceptual framework for fault tolerance is established based on a general object concurrency model that is supported by most concurrent object-oriented languages and systems. This framework integrates two complementary concepts-conversations and transactions. Conversations (associated with cooperative exception handling) are used to provide coordinated error recovery between concurrent interacting activities whilst transactions are used to maintain the consistency of shared resources in the presence of concurrent access and possible failures. The serialisability property of transactions is exploited in order to help prevent unexpected information smuggling. The proposed framework is illustrated by means of a case study, and various linguistic and implementation issues are discussed.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"98 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122664671","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stopping rules for the operational testing of safety-critical software
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466955
B. Littlewood, David Wright
It has been proposed to test a software safety system for a nuclear reactor by subjecting it to demands that are statistically representative of those it meets in operational use. The intention behind the test is to acquire high confidence (99%) that the probability of failure on demand is smaller than 10^-3. To this end the test takes the form of executing about 5000 demands and requiring that all of them are successful. In practice it is necessary to consider what happens if the software fails the test and is repaired. We argue that the earlier failure information needs to be taken into account in devising the form of the test that the modified software must pass: essentially, after such a failure the testing requirement might need to be more stringent (i.e. the number of tests that must be executed failure-free should increase). We examine a Bayesian approach to the problem, both for a stopping rule based upon a required bound on the probability of failure on demand, as above, and for a requirement based upon a prediction of future failure behaviour. We show that the first approach seems to be less conservative than the second, and argue that the second should be preferred in practical application.
{"title":"Stopping rules for the operational testing of safety-critical software","authors":"B. Littlewood, David Wright","doi":"10.1109/FTCS.1995.466955","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466955","url":null,"abstract":"It has been proposed to conduct a test of a software safety system for a nuclear reactor by subjecting it to demands that are statistically representative of those it meets in operational use. The intention behind the test is to acquire a high confidence (99%) that the probability of failure on demand is smaller than 10/sup -3/. To this end the test takes the form of executing about 5000 demands and requiring that all of these are successful. In practice if is necessary to consider what happens if the software fails the test and is repaired. We argue that the earlier failure information needs to be taken into account in devising the form of the test that the modified software needs to pass-essentially that after such failure the testing requirement might need to be more stringent (i.e. the number of tests that must be executed failure-free should increase). We examine a Bayesian approach to the problem, for this stopping rule based upon a required bound for the probability of failure on demand, as above, and also for a requirement based upon a prediction of future failure behaviour. We show that the first approach seems to be less conservative than the second, and argue that the second should be preferred for practical application.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"89 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117295912","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A model for the analysis of the fault injection process
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466984
A. Steininger, H. Schweinzer
Results of fault injection experiments performed under different conditions can only be related to each other if their interpretation is based on a thorough understanding of the activation and propagation of faults and errors. We analyze these processes by applying a special layer model of a computing system. Our aim is to model the transformation of a fault on a signal line into a system failure as the propagation of erroneous information through multiple layers. Two specific layers that describe the fault activation process have been fully developed and are presented here. A quantification for these layers is derived and different applications are summarized. Excellent correspondence is found between analytical results based on the model and experimental data. Fault activation can be predicted with high accuracy, and the effect of synchronizing fault injection can be evaluated quantitatively.
{"title":"A model for the analysis of the fault injection process","authors":"A. Steininger, H. Schweinzer","doi":"10.1109/FTCS.1995.466984","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466984","url":null,"abstract":"Results of fault injection experiments performed under different conditions can only be related to each other, if their interpretation is based on a thorough understanding of activation and propagation of faults and errors. We analyze these processes by applying a special layer model of a computing system. Our aim is to model the transformation of a fault on a signal line into a system failure as the propagation of erroneous information through multiple layers. Two specific layers that describe the fault activation process have been sufficiently completed and are presented here. A quantification for these is derived and different applications are summarized. Excellent correspondence between analytical results based on modeling and experimental data is found. A prediction of fault activation with high accuracy is possible, as well as a quantitative evaluation of the effect of synchronizing fault injection.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117140829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Node covering, error correcting codes and multiprocessors with very high average fault tolerance
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466967
S. Dutt, N. Mahapatra
Most previous work on fault-tolerant (FT) multiprocessor design has concentrated on deterministic k-fault-tolerant (k-FT) designs in which exactly k spare processors, along with some spare switches and links, are added to construct multiprocessors that can tolerate any k processor faults. However, after k faults are reconfigured around, many of the extra links and switches can remain unutilized. We show how to use the node-covering principle of Dutt and Hayes (1992) and error-correcting codes to construct probabilistic designs with very high average fault tolerance but low wiring and switch overhead. This design methodology is applicable to any multiprocessor interconnection topology. We also derive the deterministic fault tolerance of these designs and develop efficient layout strategies for them.
{"title":"Node covering, error correcting codes and multiprocessors with very high average fault tolerance","authors":"S. Dutt, N. Mahapatra","doi":"10.1109/FTCS.1995.466967","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466967","url":null,"abstract":"Most previous work on fault-tolerant (FT) multiprocessor design has concentrated on deterministic k-fault-tolerant (k-FT) designs in which exactly k spare processors and some spare switches and links are added to construct multiprocessors that can tolerate any k processor faults. However, after k faults are reconfigured around, much of the extra links and switches can remain unutilized. We show how to use the node-covering principle of Dutt and Hayes (1992) and error correcting codes in order to construct probabilistic designs with very high average fault tolerance but low wiring and switch overhead. This design methodology is applicable to any multiprocessor interconnection topology. We also obtain the deterministic fault tolerance for these designs and develop efficient layout strategies for them.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"54 44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127354829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A recoverable distributed shared memory integrating coherence and recoverability
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466970
Anne-Marie Kermarrec, G. Cabillic, A. Gefflaut, C. Morin, I. Puaut
Large-scale distributed systems are very attractive for the execution of parallel applications requiring huge computing power. However, their high probability of site failure is unacceptable, especially for long-running applications. In this paper, we address this problem and propose a checkpointing mechanism relying on a recoverable distributed shared memory (DSM) in order to tolerate single-node failures. Although most recoverable DSMs require specific hardware to store recovery data, our scheme uses standard memories to store both current and recovery data. Moreover, the management of recovery data is merged with the management of current data by extending the DSM's coherence protocol. This approach takes advantage of the data replication provided by a DSM to limit the number of pages transferred during checkpointing. The paper also presents an implementation and a preliminary performance evaluation of our recoverable DSM on a 56-node Intel Paragon.
{"title":"A recoverable distributed shared memory integrating coherence and recoverability","authors":"Anne-Marie Kermarrec, G. Cabillic, A. Gefflaut, C. Morin, I. Puaut","doi":"10.1109/FTCS.1995.466970","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466970","url":null,"abstract":"Large-scale distributed systems are very attractive for the execution of parallel applications requiring a huge computing power. However, their high probability of site failure is unacceptable, especially for long time running applications. In this paper, we address this problem and propose a checkpointing mechanism relying on a recoverable distributed shared memory (DSM) in order to tolerate single node failures. Although most recoverable DSMs require specific hardware to store recovery data, our scheme uses standard memories to store both current and recovery data. Moreover, the management of recovery data is merged with the management of current data by extending the DSM's coherence protocol. This approach takes advantage of the data replication provided by a DSM in order to limit the amount of transferred pages during the checkpointing. The paper also presents an implementation and a preliminary performance evaluation of our recoverable DSM on a 56-node Intel Paragon.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127814908","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dependability modelling in a prototype development framework
Pub Date: 1995-06-27 | DOI: 10.1109/FTCS.1995.466990
J. Bass, Sylvain Metge, A. Browne, P. Croll, P. Fleming
The Development Framework provides a highly automated translation from a specification to an implementation. The specification is written in a popular graphical control-engineering notation, typically representing a system with stringent reliability requirements and hard real-time constraints. An interface has been constructed between the Development Framework and the commercially available dependability modelling tool SURF-2. This tool is designed to support an evaluation-based design approach: multiple design solutions can be compared to assess the implications of design decisions on the dependability of the system under development. The software demonstration will show how the interface between the Development Framework and SURF-2 is used to model the inclusion of selected fault-tolerant mechanisms in the system under development.
{"title":"Dependability modelling in a prototype development framework","authors":"J. Bass, Sylvain Metge, A. Browne, P. Croll, P. Fleming","doi":"10.1109/FTCS.1995.466990","DOIUrl":"https://doi.org/10.1109/FTCS.1995.466990","url":null,"abstract":"The Development Framework provides a highly automatic translation from a specification to an implementation. The specification is in a popular, graphical control engineering notation typically representing a system with stringent reliability requirements and hard real time constraints. An interface has been constructed between the Development Framework and the commercially available dependability modelling tool, SURF-2. This tool is designed to support an evaluation based design approach. Multiple design solutions can be compared to assess the implications of design decisions on the dependability of the system under development. The software demonstration will show how the interface between the Development Framework and SURF-2 is used to model the inclusion of selected fault tolerant mechanisms in the system under development.<<ETX>>","PeriodicalId":309075,"journal":{"name":"Twenty-Fifth International Symposium on Fault-Tolerant Computing. Digest of Papers","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1995-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125799243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}