Pub Date: 2013-05-27, DOI: 10.1049/iet-sen.2011.0203
Kavitha Rajarathinam, S. Natarajan
The time taken by regression testing is determined by the size of the test suite and the time available for execution. Testers can, however, prioritise the test cases with a competent prioritisation technique to obtain an increased rate of fault detection in the system, allowing earlier corrections and giving higher overall confidence that the software has been tested suitably. If execution has to be suspended after some time, a prioritised test suite is more likely to be effective during that period than a random ordering would have been. A better test case ordering may also be possible if the time available to run the test cases is known in advance. The main intention of this research work is to prioritise the regression-testing test cases. Several factors are considered for prioritising the test cases, and these factors are employed in the prioritisation algorithm. Trace events are one of the important factors, used to find the most significant test cases in the projects. The requirement factor value is calculated, and subsequently a weightage is calculated and assigned to each test case in the software based on these factors using a thresholding technique. The test cases are then prioritised according to the weightage allocated to them. Executing the test cases based on this prioritisation greatly decreases the computation cost and time. The proposed technique is efficient in prioritising the regression test cases. After prioritisation, the new prioritised subsequences of the given unit test suites are executed on Java programs. The average percentage of faults detected (APFD) is used as the evaluation metric for assessing the 'superiority' of these orderings.
{"title":"Test suite prioritisation using trace events technique","authors":"Kavitha Rajarathinam, S. Natarajan","doi":"10.1049/iet-sen.2011.0203","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0203","url":null,"abstract":"The size of the test suite and the duration of time determines the time taken by the regression testing. Conversely, the testers can prioritise the test cases by the use of a competent prioritisation technique to obtain an increased rate of fault detection in the system, allowing for earlier corrections, and getting higher overall confidence that the software has been tested suitably. A prioritised test suite is more likely to be more effective during that time period than would have been achieved via a random ordering if execution needs to be suspended after some time. An enhanced test case ordering may be probable if the desired implementation time to run the test cases is proven earlier. This research work's main intention is to prioritise the regressiontesting test cases. In order to prioritise the test cases some factors are considered here. These factors are employed in the prioritisation algorithm. The trace events are one of the important factors, used to find the most significant test cases in the projects. The requirement factor value is calculated and subsequently a weightage is calculated and assigned to each test case in the software based on these factors by using a thresholding technique. Later, the test cases are prioritised according to the weightage allocated to them. Executing the test cases based on the prioritisation will greatly decreases the computation cost and time. The proposed technique is efficient in prioritising the regression test cases. The new prioritised subsequences of the given unit test suites are executed on Java programs after the completion of prioritisation. Average of the percentage of faults detected is an evaluation metric used for evaluating the 'superiority' of these orderings.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78952206","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-05-27, DOI: 10.1049/iet-sen.2011.0170
G. Scanniello, F. Fasano, A. D. Lucia, G. Tortora
The authors present the results of a descriptive survey to ascertain the relevance and the typology of the software error/defect identification methods and approaches used in industrial practice. The study involved industries/organisations that develop and sell software as a main part of their business or develop software as an integral part of their products or services. The results indicate that software error/defect identification is very relevant and concerns almost all of the interviewed companies. The most widely used and popular practice is testing. Increasing interest was also shown in distributed inspection methods.
{"title":"Does software error/defect identification matter in the italian industry?","authors":"G. Scanniello, F. Fasano, A. D. Lucia, G. Tortora","doi":"10.1049/iet-sen.2011.0170","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0170","url":null,"abstract":"The authors present the results of a descriptive survey to ascertain the relevance and the typology of the software error/ defect identification methods/approaches used in the industrial practice. This study involved industries/organisations that develop and sell software as a main part of their business or develop software as an integral part of their products or services. The results indicated that software error/defect identification is very relevant and regard almost the totality of the interviewed companies. The most widely used and popular practice is testing. An increasing interest has been also manifested in distributed inspection methods.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"142 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86777508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2013-04-01, DOI: 10.1049/iet-sen.2012.0095
A. Gosain, Sushama Nagpal, Sangeeta Sabharwal
Structural properties, including hierarchies, have been recognised as important factors influencing the quality of a software product. Metrics based on structural properties (structural complexity metrics) have been widely used to assess quality attributes such as understandability, maintainability and fault-proneness of a software artefact. Although a few researchers have considered metrics based on dimension hierarchies to assess the quality of multidimensional models for data warehouses, certain aspects of dimension hierarchies, such as multiple hierarchies and dimension hierarchies shared among several dimensions, have not been considered in earlier works. In the authors' previous work, they identified metrics based on these aspects which may contribute towards the structural complexity, and in turn the quality, of multidimensional models for data warehouses. However, that work lacked theoretical and empirical validation of the proposed metrics, and a metric proposal is acceptable in practice only if it is theoretically and empirically valid. In this study, the authors provide a thorough validation of the metrics considered in their previous work. The metrics have been validated theoretically on the basis of Briand's property-based framework, and empirically on the basis of a controlled experiment using statistical techniques such as correlation and linear regression. The results of these validations indicate that the metrics are either size or length measures and hence contribute significantly towards the structural complexity of multidimensional models and have a considerable impact on the understandability of these models.
{"title":"Validating dimension hierarchy metrics for the understandability of multidimensional models for data warehouse","authors":"A. Gosain, Sushama Nagpal, Sangeeta Sabharwal","doi":"10.1049/iet-sen.2012.0095","DOIUrl":"https://doi.org/10.1049/iet-sen.2012.0095","url":null,"abstract":"Structural properties including hierarchies have been recognised as important factors influencing quality of a software product. Metrics based on structural properties (structural complexity metrics) have been popularly used to assess the quality attributes like understandability, maintainability, fault-proneness etc. of a software artefact. Although few researchers have considered metrics based on dimension hierarchies to assess the quality of multidimensional models for data warehouse, there are certain aspects of dimension hierarchies like those related to multiple hierarchies, shared dimension hierarchies among various dimensions etc. which have not been considered in the earlier works. In the authors' previous work, they identified the metrics based on these aspects which may contribute towards the structural complexity and in turn the quality of multidimensional models for data warehouse. However, the work lacks theoretical and empirical validation of the proposed metrics and any metric proposal is acceptable in practice, if it is theoretically and empirically valid. In this study, the authors provide thorough validation of the metrics considered in their previous work. The metrics have been validated theoretically on the basis of Briand's framework - a property-based framework and empirically on the basis of controlled experiment using statistical techniques like correlation and linear regression. The results of these validations indicate that these metrics are either size or length measure and hence, contribute significantly towards structural complexity of multidimensional models and have considerable impact on understandability of these models.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"8 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2013-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91434064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2011.0132
David Gray, David Bowes, N. Davey, Yi Sun, B. Christianson
Background: The NASA metrics data program (MDP) data sets have been heavily used in software defect prediction research. Aim: To highlight the data quality issues present in these data sets, and the problems that can arise when they are used in a binary classification context. Method: A thorough exploration of all 13 original NASA data sets, followed by various experiments demonstrating the potential impact of duplicate data points when data mining. Conclusions: Firstly, researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Secondly, the bulk of defect prediction experiments based on the NASA MDP data sets may have led to erroneous findings, mainly because repeated/duplicate data points can cause substantial amounts of training and testing data to be identical.
{"title":"Reflections on the NASA MDP data sets","authors":"David Gray, David Bowes, N. Davey, Yi Sun, B. Christianson","doi":"10.1049/iet-sen.2011.0132","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0132","url":null,"abstract":"Background: The NASA metrics data program (MDP) data sets have been heavily used in software defect prediction research. Aim: To highlight the data quality issues present in these data sets, and the problems that can arise when they are used in a binary classification context. Method: A thorough exploration of all 13 original NASA data sets, followed by various experiments demonstrating the potential impact of duplicate data points when data mining. Conclusions: Firstly researchers need to analyse the data that forms the basis of their findings in the context of how it will be used. Secondly, the bulk of defect prediction experiments based on the NASA MDP data sets may have led to erroneous findings. This is mainly because of repeated/duplicate data points potentially causing substantial amounts of training and testing data to be identical.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"22 1","pages":"549-558"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83457259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2012.0016
G. Kapitsaki
Context-awareness refers to an application's capability to respond proactively to environmental conditions. The reuse of existing components is an important challenge in the development of context-aware applications. Web services (WSs), usually exploited in this field either as business services providing specific functionality or as context sources exposing the retrieval of context information, need to be discovered and matched. In this study, the matchmaking of WSs for the identification of potential context sources is proposed. Descriptions are matched based on context adaptation cases, and the process relies on adequate WS descriptions derived from the proposed semantic WS profile. The procedure is illustrated through a proof of concept based on service descriptions retrieved from online service registries.
{"title":"Web service matchmaking for the development of context-aware applications","authors":"G. Kapitsaki","doi":"10.1049/iet-sen.2012.0016","DOIUrl":"https://doi.org/10.1049/iet-sen.2012.0016","url":null,"abstract":"Context-awareness is related to the application capability to respond proactively to environment conditions. The reuse of existing components is an important challenge in the development of context-aware applications. Web services (WSs), usually exploited in this field acting either as business services by providing specific functionality or as context sources by exposing the retrieval of context information, need to be discovered and matched. In this study, the matchmaking of WSs for the identification of potential context sources is proposed. Descriptions are matched based on context adaptation cases, whereas the process is based on adequate WS descriptions that derive from the proposed semantic WS profile. The procedure is illustrated through a proof of concept based on service descriptions retrieved from online service registries.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"16 1","pages":"536-548"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86797270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/IET-SEN.2011.0172
J. Siegmund, M. Schulze, M. Papendieck, Christian Kästner, Raimund Dachselt, V. Köppen, Mathias Frisch, G. Saake
Software product line (SPL) engineering provides an effective mechanism to implement variable software. However, using preprocessors to realise variability, which is typical in industry, is heavily criticised, because it often leads to obfuscated code. Using background colours to highlight code annotated with preprocessor statements has proved to be effective in supporting comprehensibility; however, its scalability to large SPLs is questionable. The authors' aim is to implement and evaluate scalable usage of background colours for industrial-sized SPLs. They designed and implemented scalable concepts in a tool called FeatureCommander. To evaluate its effectiveness, the authors conducted a controlled experiment with a large real-world SPL with over 99 000 lines of code and 340 features, using a within-subjects design with the treatments colours and no colours, and compared correctness and response time of tasks for both treatments. For certain kinds of tasks, background colours improve program comprehension. Furthermore, the subjects generally favour background colours over no background colours. In addition, the subjects who worked with background colours had to use the search functions less frequently. The authors show that background colours can improve program comprehension in large SPLs. Based on these encouraging results, they continue their work on improving program comprehension in large SPLs.
{"title":"Supporting program comprehension in large preprocessor-based software product lines","authors":"J. Siegmund, M. Schulze, M. Papendieck, Christian Kästner, Raimund Dachselt, V. Köppen, Mathias Frisch, G. Saake","doi":"10.1049/IET-SEN.2011.0172","DOIUrl":"https://doi.org/10.1049/IET-SEN.2011.0172","url":null,"abstract":"Software product line (SPL) engineering provides an effective mechanism to implement variable software. However, using preprocessors to realise variability, which is typical in industry, is heavily criticised, because it often leads to obfuscated code. Using background colours to highlight code annotated with preprocessor statements to support comprehensibility has proved to be effective, however, scalability to large SPLs is questionable. The authors’ aim is to implement and evaluate scalable usage of background colours for industrial-sized SPLs. They designed and implemented scalable concepts in a tool called FeatureCommander. To evaluate its effectiveness, the authors conducted a controlled experiment with a large real-world SPL with over 99 000 lines of code and 340 features. They used a within-subjects design with treatment colours and no colours. They compared correctness and response time of tasks for both treatments. For certain kinds of tasks, background colours improve program comprehension. Furthermore, the subjects generally favour background colours compared with no background colours. In addition, the subjects who worked with background colours had to use the search functions less frequently. The authors show that background colours can improve program comprehension in large SPLs. Based on these encouraging results, they continue their work on improving program comprehension in large SPLs.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"16 1","pages":"488-501"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79801162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2011.0136
M. Kalita, T. Bezboruah
Investigations into the implementation of web applications with different techniques are essential from both the developers' and the users' perspectives. The authors therefore propose to investigate this by implementing two prototype research web applications based on two different implementation techniques: one web application is implemented with the Microsoft .NET technique and the other with the Java technique. The objective is to carry out a detailed comparative study of both techniques in terms of efficiency, reliability, performance, scalability, stability and time-to-market. Load and stress testing are performed on both web applications using Mercury LoadRunner, and statistical analysis of the recorded data for both applications is carried out to study the feasibility of the work. In this study, the authors present in detail the results of their comparative study of the two techniques.
{"title":"Investigations on implementation of web applications with different techniques","authors":"M. Kalita, T. Bezboruah","doi":"10.1049/iet-sen.2011.0136","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0136","url":null,"abstract":"Investigations on implementation of web application with different techniques are essential for developers’ as well as users’ perspective. As such, the authors propose to investigate the same by implementing two prototype research web applications based on two different implementation techniques. One of the web applications has been implemented based on the Microsoft .NET technique and the other based on the Java technique. Our objective is to carry out a detailed comparative study of both the techniques in terms of efficiency, reliability, performance, scalability, stability and time to market features of both the techniques. Load and stress testing are performed for both the web applications using Mercury LoadRunner. The statistical analysis of recorded data for both the applications is carried out to study the feasibility of the work. In this study, the authors present in detail the results of the authors’ comparative study on the two techniques.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"28 1","pages":"474-478"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91356295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2011.0060
I. Antović, Sinisa Vlajic, Milos Milic, Dusan Savic, Vojislav Stanojevic
The aim of this study is to identify the correlations between the use case model, the data model and the desired user interface (UI). Since use cases describe the interaction between the users and the system, implemented through the user interface with the aim of changing the state of the system, the correlation between these three components should be taken into account at the software requirements phase. In this study, the authors introduce a meta-model of software requirements developed on the identified correlations. Based on this meta-model, it is possible to create a model of concrete software requirements, which enables not only the design and implementation of the user interface, but also the automation of this process. To prove the sustainability of this approach, they have developed a special software tool that transforms the model into executable source code. They have considered different ways of user interaction with the system and, consequently, recommended a set of the most common user interface templates. Thus, flexibility of the user interface is achieved, as the user interface of the same use case can be displayed in several different ways while still maintaining the desired functionality.
{"title":"Model and software tool for automatic generation of user interface based on use case and data model","authors":"I. Antović, Sinisa Vlajic, Milos Milic, Dusan Savic, Vojislav Stanojevic","doi":"10.1049/iet-sen.2011.0060","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0060","url":null,"abstract":"The aim of this study is to identify the correlations between the use case model, data model and the desired user interface (UI). Since use cases describe the interaction between the users and the system, implemented through the user interface with the aim to change the state of the system, the correlation between these three components should be taken into account at the software requirements phase. In this study, the authors have introduced the meta-model of software requirements developed on the identified correlations. Based on this meta-model, it is possible to create a model of concrete software requirements, which enables not only the design and implementation of the user interface, but also the automation of this process. In order to prove the sustainability of this approach, they have developed a special software tool that performs the transformation of the model to an executable source code. They have considered different ways of user interaction with the system, and consequently, they have recommended the set of most common user interface templates. Thus the flexibility of the user interface is achieved, as the user interface of the same use case could be displayed in several different ways, while still maintaining the desired functionality.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"69 1","pages":"559-573"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87415542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2011.0148
T. R. G. Nair, V. Suma, P. Tiwari
Production of high-quality software is essential for accomplishing absolute customer satisfaction in the software industry, and effective defect management is one of the crucial factors enabling successful development of high-quality products. Inspection and testing are two established methods in the defect management domain. However, the existing industry atmosphere is not stringent on quantifying the quality of process and people in order to realise the prime objective of effective defect management. Consequently, the defect-capturing ability of the process and the efficacy of the people remain anecdotal, leading to defect manifestation and propagation during different software development stages. This study provides a case study involving empirical analysis of projects from a leading product-based software company. The investigation strongly indicates the need for awareness and use of quality measurement of process and people in realising effective defect management. Implementation of two recently introduced quality metrics, depth of inspection (a process metric) and the inspection performance metric (a people metric), enables the development team to generate high-quality software. The comprehension of this pair of metrics in software development further augments quality and productivity, and reduces expensive rework time, cost and rebinding of resources. Implementation of the two metrics reflects the persistent process improvement of the software enterprise and the resultant success.
{"title":"Significance of depth of inspection and inspection performance metrics for consistent defect management in software industry","authors":"T. R. G. Nair, V. Suma, P. Tiwari","doi":"10.1049/iet-sen.2011.0148","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0148","url":null,"abstract":"Production of high-quality software is the exquisite need for accomplishing absolute customer satisfaction in software industry. Effective defect management is one of the crucial factors enabling successful development of high-quality products. Inspection and testing are two established methods of defect management domain. However, existing industry atmosphere is not stringent on the quantification of quality of process and people, in order to realise the prime objective of effective defect management. Consequently, defect capturing ability of the process and the efficacy of people are anecdotal leading towards defect manifestation and propagation during different software development stages. This study provides a case study involving empirical analysis of projects from a leading product-based software industry. The investigation strongly indicates the need for awareness and use of quality measurement of process and people in realising effective defect management. Implementation of two recently introduced quality metrics depth of inspection, a process metric and inspection performance metric, a people metric enable the developing team to generate high-quality software. The comprehension of these pair metrics in software development further augments the quality and productivity. It also reduces the expensive rework time, cost and rebinding of resources. Implementation of duo metrics reflects the persistent process improvement of the software enterprise and the resultant success.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"133 1","pages":"524-535"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89130018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2012-11-16, DOI: 10.1049/iet-sen.2011.0135
Lei Xiong, QingPing Tan, Z. Shao
Soft errors can affect system reliability by influencing software execution. Dynamic implementations of software-based soft error tolerance methods can protect more types of code and hence cover more soft errors. Against this background, this study proposes an approach to analyse dynamic software behaviours under the effects of soft errors. The authors use a special program model that combines abstract computing at the high level with computing of instructions at the low level. At the high level, programs are divided at the granularity of functions; at the low level, every function is implemented by its instructions. The effects of soft errors on low-level instructions are propagated to the high-level computing results of the functions. Based on these high-level results, the instruction errors that can lead to an incorrect program outcome are distinguished within a function. From these different-level software behaviours and the program model, a dynamic program reliability model is built under the effects of soft errors. The model exposes the relationship between the characteristics of a dynamic program and its reliability under the effects of soft errors. Finally, the results of the authors' fault injection experiments validate the dynamic program reliability model and also support their analyses of the different dynamic software behaviours under the effects of soft errors.
{"title":"Exploration of the effects of soft errors from dynamic software behaviours","authors":"Lei Xiong, QingPing Tan, Z. Shao","doi":"10.1049/iet-sen.2011.0135","DOIUrl":"https://doi.org/10.1049/iet-sen.2011.0135","url":null,"abstract":"Soft errors can affect system reliability by influencing software execution. Dynamic implementation for software-based soft error tolerance methods can protect more types of codes; hence, the method can cover more soft errors. Based on the background of dynamic soft error tolerance, this study proposes an approach to analyse dynamic software behaviours under the effects of soft errors. The authors use a special program model that combines abstract computing on the high level with computing of instructions on the low level. On the high level, programs are divided with the granularity of function. On the low level, every function is implemented by the instructions. Those effects of soft errors on instructions on the low level are passed to the computing results of function on the high level. Backed by the computing results of function on the high level, the caused instruction errors that can lead to incorrect program outcome are distinguished within a function. Based on those different level software behaviours with our program model, the dynamic program reliability model is built under the effects of soft errors. From the dynamic program reliability model, we can see the relationship between the characteristics of dynamic program and reliability of dynamic program under the effects of soft errors. Finally, the experimental results of the authors fault injection experiments validate the dynamic program reliability model. In addition, the experimental results also demonstrate the authors analyses of different dynamic software behaviours under the effects of soft errors.","PeriodicalId":13395,"journal":{"name":"IET Softw.","volume":"14 1","pages":"514-523"},"PeriodicalIF":0.0,"publicationDate":"2012-11-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85881621","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}