Supporting application development with structured queries in the cloud
Michael Smit, B. Simmons, Mark Shtern, Marin Litoiu
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606681
To facilitate software development for multiple, federated cloud systems, abstraction layers have been introduced to mask the differences in the offerings, APIs, and terminology of various cloud providers. Such layers rely on a common ontology, which a) is difficult to create, and b) requires developers to understand both the common ontology and how various providers deviate from it. In this paper we propose and describe a structured query language for the cloud, Cloud SQL, along with a system and methodology for acquiring and organizing information from cloud providers and other entities in the cloud ecosystem so that it can be queried. Developers can then run queries on data organized according to their semantic understanding of the cloud. As with the original SQL, we believe a declarative query language will reduce development costs and make the multi-cloud accessible to a broader set of developers.
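The abstract does not reproduce the Cloud SQL grammar, so the following is only an illustrative sketch of the declarative idea: a hypothetical unified instance catalog queried by filter and ordering. The record schema, prices, and the query itself are invented for illustration.

```python
# Hypothetical sketch only: the record schema, the prices, and the query
# are invented for illustration and are not the paper's Cloud SQL grammar.

instances = [
    {"provider": "aws",   "type": "m1.small",      "ram_gb": 1.70, "cost_hr": 0.06},
    {"provider": "gce",   "type": "n1-standard-1", "ram_gb": 3.75, "cost_hr": 0.07},
    {"provider": "azure", "type": "small",         "ram_gb": 1.75, "cost_hr": 0.08},
]

def query(rows, where, order_by):
    """Evaluate a tiny declarative query: filter, then sort."""
    return sorted((r for r in rows if where(r)), key=order_by)

# Roughly: SELECT * FROM instances WHERE ram_gb >= 1.7 ORDER BY cost_hr
matches = query(instances,
                where=lambda r: r["ram_gb"] >= 1.7,
                order_by=lambda r: r["cost_hr"])
print([r["provider"] for r in matches])  # ['aws', 'gce', 'azure']
```

The point of the declarative form is that the developer states what is wanted over the unified view; provider-specific APIs and terminology stay behind the abstraction.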
Human performance regression testing
Amanda Swearngin, Myra B. Cohen, Bonnie E. John, R. Bellamy
Pub Date: 2013-05-18  DOI: 10.5555/2486788.2486809
As software systems evolve, new interface features such as keyboard shortcuts and toolbars are introduced. While it is common to regression test the new features for functional correctness, there has been less focus on systematic regression testing for usability, due to the effort and time involved in human studies. Cognitive modeling tools such as CogTool provide some help by computing predictions of user performance, but they still require manual effort to describe the user interface and tasks, limiting regression testing efforts. In recent work, we developed CogTool-Helper to reduce the effort required to generate human performance models of existing systems. We build on this work by providing task-specific test case generation and present our vision for human performance regression testing (HPRT), which generates large numbers of test cases and evaluates a range of human performance predictions for the same task. We examine the feasibility of HPRT on four tasks in LibreOffice, find several regressions, and then discuss how a project team could use this information. We also illustrate that we can increase efficiency with sampling by leveraging an inference algorithm: samples that take approximately 50% of the runtime lose at most 10% of the performance predictions.
Selecting checkpoints along the time line: A novel temporal checkpoint selection strategy for monitoring a batch of parallel business processes
X. Liu, Yun Yang, Dahai Cao, Dong Yuan
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606698
Most business processes now run in a parallel, distributed and time-constrained manner, and guaranteeing their on-time completion is a challenging issue. In the past few years, temporal checkpoint selection, which selects a subset of workflow activities for verification of temporal consistency, has proved very successful in monitoring single, complex, large-scale scientific workflows. An intuitive approach is to apply those strategies to individual business processes. In such a case, however, the total number of checkpoints would be enormous, and the cost of system monitoring and exception handling could be excessive. To address this issue, we propose a new strategy that selects time points along the workflow execution time line as checkpoints, monitoring a batch of parallel business processes simultaneously instead of individually. Based on this idea, we present a set of new definitions and a time-point based checkpoint selection strategy. Our preliminary results demonstrate an order-of-magnitude reduction in the number of checkpoints while maintaining satisfactory on-time completion rates compared with the state-of-the-art activity-point based checkpoint selection strategy.
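The contrast between activity-point and time-point checkpointing can be conveyed with a back-of-the-envelope count. This is a toy illustration under invented numbers, not the paper's selection strategy:

```python
# Toy illustration under assumed numbers; not the paper's strategy.

def activity_checkpoints(n_processes, activities_per_process):
    # activity-point style: one candidate checkpoint per activity per process
    return n_processes * activities_per_process

def time_point_checkpoints(makespan, interval):
    # time-point style: shared checkpoints on the execution time line,
    # independent of how many processes run in parallel
    return makespan // interval

# 100 parallel processes of 20 activities each, vs. a 200-unit time line
# checked every 10 units: 2000 candidate checkpoints shrink to 20.
print(activity_checkpoints(100, 20), time_point_checkpoints(200, 10))
```

Because the time-point count does not grow with the number of parallel processes, batching is where the order-of-magnitude reduction comes from.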
Aluminum: Principled scenario exploration through minimality
Tim Nelson, Salman Saghafi, Daniel J. Dougherty, Kathi Fisler, S. Krishnamurthi
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606569
Scenario-finding tools such as Alloy are widely used to understand the consequences of specifications, with applications to software modeling, security analysis, and verification. This paper focuses on the exploration of scenarios: which scenarios are presented first, and how to traverse them in a well-defined way. We present Aluminum, a modification of Alloy that presents only minimal scenarios: those that contain no more than is necessary. Aluminum lets users explore the scenario space by adding to scenarios and backtracking. It also provides the ability to find what can consistently be used to extend each scenario. We describe the semantic basis of Aluminum in terms of minimal models of first-order logic formulas. We show how this theory can be implemented atop existing SAT solvers and quantify both the benefits of minimality and its small computational overhead. Finally, we offer some qualitative observations about scenario exploration in Aluminum.
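The notion of a minimal scenario can be illustrated on a toy propositional formula: a model is minimal when no other model makes a strict subset of its atoms true. Aluminum computes such models atop a SAT solver; the brute-force sketch below, over an invented three-atom formula, conveys only the definition:

```python
from itertools import product

# Toy illustration: enumerate models of an invented formula
# (p or q) and (q -> r), then keep those whose set of true atoms
# is minimal under set inclusion.

ATOMS = ("p", "q", "r")

def satisfies(m):
    return (m["p"] or m["q"]) and ((not m["q"]) or m["r"])

models = [m for bits in product([False, True], repeat=len(ATOMS))
          for m in [dict(zip(ATOMS, bits))] if satisfies(m)]

def true_atoms(m):
    return {a for a in ATOMS if m[a]}

minimal = [m for m in models
           if not any(true_atoms(n) < true_atoms(m) for n in models)]

print(sorted(sorted(true_atoms(m)) for m in minimal))  # [['p'], ['q', 'r']]
```

Of the four models ({p}, {p,r}, {q,r}, {p,q,r}), only {p} and {q,r} contain no more than is necessary; those are the scenarios a minimality-first tool would present first.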
Counter play-out: Executing unrealizable scenario-based specifications
S. Maoz, Yaniv Sa'ar
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606570
The scenario-based approach to the specification and simulation of reactive systems has attracted much research effort in recent years. While the problem of synthesizing a controller or a transition system from a scenario-based specification has been studied extensively, no work has yet effectively addressed the case where the specification is unrealizable and a controller cannot be synthesized. This has limited the effectiveness of using scenario-based specifications in requirements analysis and simulation. In this paper we present counter play-out, an interactive debugging method for unrealizable scenario-based specifications. When we identify an unrealizable specification, we generate a controller that plays the role of the environment and lets the engineer play the role of the system. During execution, the former chooses the environment's moves such that the latter is eventually forced to fail in satisfying the system's requirements. This results in an interactive, guided execution, leading to the root causes of unrealizability. The generated controller constitutes a proof that the specification is conflicting and cannot be realized. Counter play-out is based on a counter strategy, which we compute by solving a Rabin game using a symbolic, BDD-based algorithm. The work is implemented and integrated with PlayGo, an IDE for scenario-based programming developed at the Weizmann Institute of Science. Case studies show the contribution of our work to the state of the art in the scenario-based approach to specification and simulation.
Transfer defect learning
Jaechang Nam, Sinno Jialin Pan, Sunghun Kim
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606584
Many software defect prediction approaches have been proposed, and most are effective in within-project prediction settings. However, for new projects or projects with limited training data, it is desirable to learn a prediction model from sufficient training data in existing source projects and then apply the model to target projects (cross-project defect prediction). Unfortunately, the performance of cross-project defect prediction is generally poor, largely because of feature distribution differences between the source and target projects. In this paper, we apply a state-of-the-art transfer learning approach, TCA, to make feature distributions in source and target projects similar. In addition, we propose a novel transfer defect learning approach, TCA+, by extending TCA. Our experimental results for eight open-source projects show that TCA+ significantly improves cross-project prediction performance.
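One visible source of the distribution gap the abstract mentions is raw feature scale: the same metric can differ by an order of magnitude between projects. The sketch below shows only simple z-score normalization on invented metric values, as a stand-in for the kind of preprocessing that precedes transfer learning; it is not transfer component analysis itself:

```python
import math

# Sketch of one ingredient only: z-score normalization on invented
# metric values. This is not transfer component analysis (TCA) itself.

def zscore(column):
    mu = sum(column) / len(column)
    sd = math.sqrt(sum((x - mu) ** 2 for x in column) / len(column)) or 1.0
    return [(x - mu) / sd for x in column]

source_loc = [120, 80, 200, 40]  # e.g. a size metric in the source project
target_loc = [12, 8, 20, 4]      # same metric, much smaller target project

# After normalization the two projects' feature values share one scale,
# so a model trained on the source is not dominated by raw magnitude.
print([round(v, 2) for v in zscore(source_loc)])
print([round(v, 2) for v in zscore(target_loc)])
```

Here the two projects differ only by a constant factor, so their normalized columns coincide exactly; real projects differ in shape as well, which is what TCA-style feature transformation targets.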
Path sensitive static analysis of web applications for remote code execution vulnerability detection
Yunhui Zheng, X. Zhang
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606611
Remote code execution (RCE) attacks are among the most prominent security threats to web applications. They are a special kind of cross-site scripting (XSS) attack that allows client inputs to be stored and executed as server-side scripts. RCE attacks often require coordinating multiple requests and manipulating string and non-string inputs from the client side to nullify the access control protocol and induce unusual execution paths on the server side. We propose a path- and context-sensitive interprocedural analysis to detect RCE vulnerabilities. The analysis features a novel way of analyzing both the string and non-string behavior of a web application in a path-sensitive fashion, and it thoroughly handles the practical challenges entailed by modeling RCE attacks. We develop a prototype system and evaluate it on ten real-world PHP applications, identifying 21 true RCE vulnerabilities, 8 of which were previously unreported.
MCT: A tool for commenting programs by multimedia comments
Yiyang Hao, Ge Li, Lili Mou, Lu Zhang, Zhi Jin
Pub Date: 2013-05-18  DOI: 10.5555/2486788.2487000
Program comments have always been key to understanding code. However, typical text comments can easily become verbose or evasive, so code reviewers sometimes find an audio or video narration of the code quite helpful. In this paper, we present our tool, MCT (Multimedia Commenting Tool), a tool integrated into the development environment that enables programmers to easily explain their code by voice, video and mouse movement in the form of comments. With this tool, programmers can replay the audio or video whenever they like. A demonstration video can be accessed at: http://www.youtube.com/watch?v=tHEHqZme4VE.
On the value of user preferences in search-based software engineering: A case study in software product lines
Abdel Salam Sayyad, T. Menzies, H. Ammar
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606595
Software design is a process of trading off competing objectives. If the user objective space is rich, then we should use optimizers that can fully exploit that richness. For example, this study configures software product lines (expressed as feature maps) using various search-based software engineering methods. As we increase the number of optimization objectives, we find that methods in widespread use (e.g., NSGA-II and SPEA2) perform much worse than IBEA (Indicator-Based Evolutionary Algorithm). IBEA works best because it makes the most use of user preference knowledge. Hence it not only does better on the standard measures (hypervolume and spread) but also generates far more products with 0% violations of domain constraints. Our conclusion is that we need to change our methods for search-based software engineering, particularly when studying complex decision spaces.
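Hypervolume, one of the standard measures mentioned, rewards fronts that dominate a large region of objective space. A minimal sketch for two minimization objectives, with a hand-picked toy front and reference point (real product-line studies use many more objectives):

```python
# Minimal sketch: 2-objective minimization, toy front and reference
# point chosen by hand.

def dominates(a, b):
    """a Pareto-dominates b: no worse in any objective, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def hypervolume_2d(front, ref):
    """Area dominated by a non-dominated 2-objective front, up to ref."""
    area, prev_y = 0.0, ref[1]
    for x, y in sorted(front):  # ascending objective 1, descending objective 2
        area += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return area

front = [(1.0, 4.0), (2.0, 2.0), (4.0, 1.0)]
assert not any(dominates(a, b) for a in front for b in front if a != b)
print(hypervolume_2d(front, ref=(5.0, 5.0)))  # 11.0
```

A larger hypervolume means the front pushes further into the region the user cares about, which is why indicator-based methods like IBEA can optimize it directly.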
Robust reconfigurations of component assemblies
F. Boyer, O. Gruber, D. Pous
Pub Date: 2013-05-18  DOI: 10.1109/ICSE.2013.6606547
In this paper, we propose a reconfiguration protocol that can handle any number of failures during a reconfiguration, always producing an architecturally consistent assembly of components that can be safely introspected and further reconfigured. Our protocol is based on the concept of Incrementally Consistent Sequences (ICS), ensuring that any reconfiguration incrementally respects the reconfiguration contract given to component developers: the reconfiguration grammar and architectural invariants. We also propose two recovery policies: one rolls back the failed reconfiguration and the other rolls it forward, each going as far as failures permit. We specified and proved the reconfiguration contract, the protocol, and the recovery policies in Coq.