Issue tracking systems such as Bugzilla are tools that facilitate collaboration between software maintenance professionals. Popular issue tracking systems include discussion forums for bug reporting and comment posting. We observe that many comments posted in issue tracking systems contain links to external websites such as YouTube (a video-sharing website), Twitter (a micro-blogging website), Stack Overflow (a community-based question-and-answer website for programmers), Wikipedia, and focused discussion forums. Stack Overflow is widely used by software engineers because it contains answers to millions of questions (an extensive knowledge resource) posted by programmers on diverse topics. We conduct a series of experiments on open-source Google Chromium and Android issue tracker data (publicly available real-world datasets) to understand the role and impact of Stack Overflow in issue resolution. Our experimental results show evidence of numerous references to Stack Overflow in threaded discussions and demonstrate a correlation between the presence of Stack Overflow links and a lower mean time to repair (in one dataset). We also observe that the average number of comments posted in response to bug reports is lower when Stack Overflow links are present than when they are absent. We conduct experiments based on textual similarity analysis (content-based linguistic features) and contextual data analysis (exploiting metadata such as the tags associated with a Stack Overflow question) to recommend Stack Overflow questions for an incoming bug report. We perform an empirical analysis to measure the effectiveness of the proposed method on a dataset containing ground truth and present our insights. Finally, we present the results of a survey of Google Chromium developers that we conducted to understand practitioners' perspectives and experience.
{"title":"Integrating Issue Tracking Systems with Community-Based Question and Answering Websites","authors":"D. Correa, A. Sureka","doi":"10.1109/ASWEC.2013.20","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.20","url":null,"abstract":"Issue tracking systems such as Bugzilla are tools to facilitate collaboration between software maintenance professionals. Popular issue tracking systems consists of discussion forums to facilitate bug reporting and comment posting. We observe that several comments posted in issue tracking system contains link to external websites such as YouTube (video sharing website), Twitter (micro-blogging website), Stack overflow (a community-based question and answering website for programmers), Wikipedia and focused discussions forums. Stack overflow is a popular community-based question and answering website for programmers and is widely used by software engineers as it contains answers to millions of questions (an extensive knowledge resource) posted by programmers on diverse topics. We conduct a series of experiments on open-source Google Chromium and Android issue tracker data (publicly available real-world dataset) to understand the role and impact of Stack overflow in issue resolution. Our experimental results show evidences of several references to Stack overflow in threaded discussions and demonstrate correlation between a lower mean time to repair (in one dataset) with presence of Stack overflow links. We also observe that the average number of comments posted in response to bug reports are less when Stack overflow links are presented in contrast to bug reports not containing Stack overflow references. We conduct experiments based on textual similarly analysis (content-based linguistic features) and contextual data analysis (exploited metadata such as tags associated to a Stack overflow question) to recommend Stack overflow questions for an incoming bug report. We perform empirical analysis to measure the effectiveness of the proposed method on a dataset containing ground-truth and present our insights. We present the result of a survey (of Google Chromium Developers) that we conducted to understand practitioner's perspective and experience.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127012045","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Scenario-based specifications have been widely used to specify the behavior of reactive systems in a visual and intuitive way. The Timed Property Sequence Chart (TPSC) is a recently proposed scenario-based notation for specifying timing properties of real-time systems. However, no model checking tool is currently available to verify timing properties specified as TPSCs. To bridge this gap, this paper provides semantic rules for TPSC by explicitly translating TPSC into Timed Computational Tree Logic (TCTL), a real-time temporal logic. Two kinds of rules are defined: basic and compositional. Basic rules specify how to translate a single message in a TPSC specification into a TCTL formula, while compositional rules show how to compose these basic TCTL formulas according to compositional operators. This separation into basic and compositional rules makes the translation more efficient. The translation process is illustrated by a case study, and the correctness of the translation is validated against practical real-time specification patterns. The work described here opens an indirect route to model checking real-time requirements expressed as TPSC specifications by translating them into TCTL formulas.
{"title":"On the Semantics of Scenario-Based Specification Based on Timed Computational Tree Logic","authors":"Wenrui Li, Pengcheng Zhang","doi":"10.1109/ASWEC.2013.11","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.11","url":null,"abstract":"Scenario-based specifications have been widely used to specify the behavior of reactive systems in a visual and intuitive way. Timed Property Sequence Chart (TPSC) is a recently proposed scenario-based specification for specifying timing properties for real-time systems. However, there is currently no model checking tool available to verify timing properties specified by TPSC specifications. To mitigate this gap, this paper provides the semantics rules for TPSC by explicitly translating TPSC into Timed Computational Tree Logic (TCTL) that is a real time temporal logic. Two kinds of rules are defined: basic and compositional rules. Basic rules discuss how to translate a single message in a TPSC specification into a TCTL formula, while compositional rules show how to compose these basic TCTL formulas according to compositional operators. The classification of basic and compositional rules makes our translations more efficient. The translation process is illustrated by a case study. The translating correctness is also proved by the practical measurement of real-time specification patterns. The work described here opens an indirect way to model checking real-time requirements represented in TPSC specifications by translating TPSC specifications into TCTL formulas.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117018744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Programming patterns are stereotypic fragments of code that accomplish common programming goals. The ability to recall, apply, and evaluate patterns is an important outcome of learning to program. However, monitoring students' use of patterns is currently difficult and time-consuming, requiring expert analysis and code walk-throughs. This paper introduces a method and an automated tool for labelling the application (or not) of patterns in Java programs, enabling instructors to specify and then analyse the programming patterns used by students. An empirical study is used to identify which pattern variations occur, how frequently, and why. The what and how questions are answered by automatic analysis with our tool, and the why question is answered from students' explanations of their code.
{"title":"Evaluating the Application and Understanding of Elementary Programming Patterns","authors":"R. Cardell-Oliver","doi":"10.1109/ASWEC.2013.17","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.17","url":null,"abstract":"Programming patterns are stereotypic fragments of code that accomplish common programming goals. The ability to recall, apply and evaluate patterns are important outcomes for learning to program. However, monitoring students use of patterns is currently difficult and time-consuming, requiring expert analysis and code walk-throughs. This paper introduces a method and automated tool for labelling the application (or not) of patterns in Java programs that enables instructors to specify and then analyse the programming patterns used by students. An empirical study is used to identify what patterns variations occur, how frequently, and why. The what and how questions are answered using automatic analysis with our tool, and the why question is answered from student explanations of their code.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133909025","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The success of estimating software project costs using analogy-based reasoning has been noticeable for over a decade. Estimation accuracy depends heavily on the heuristic methods used to select the best feature subsets and a suitable set of similar projects from the repository. A complete search of all possible combinations may not be feasible because of insufficient computational resources for such a large search space. In this work, the problem is revisited, and we propose a novel algorithm tailored for analogy-based software cost estimation that utilizes the CUDA computing framework to enable estimation with large project datasets. We demonstrate the use of the proposed distributed algorithm on graphics processing units (GPUs), whose architecture is well suited to compute-intensive problems. The method has been evaluated using 11 real-world datasets from the PROMISE repository. Results show that the proposed ABE-CUDA approach produces the best project cost estimates by determining the best feature subsets and the most suitable number of analogous projects for estimation, and it significantly improves the overall feature search time and prediction accuracy for software cost estimation. More importantly, the optimized estimation result can be used as a baseline benchmark against which other sophisticated analogy-based methods for software cost estimation can be compared.
{"title":"An Empirical Experiment on Analogy-Based Software Cost Estimation with CUDA Framework","authors":"Passakorn Phannachitta, J. Keung, Ken-ichi Matsumoto","doi":"10.1109/ASWEC.2013.28","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.28","url":null,"abstract":"The success of estimating software project costs using analog-based reasoning has been noticeable for over a decade. The estimation accuracy is heavily depends on different heuristic methods to selecting the best feature subsets and a suitable set of similar projects from the repository. A complete search of all possible combinations may not be feasible due to insufficient computational resources for such a large search space. In this work, the problem is revisited, and we propose a novel algorithm tailored for analogy-based software cost estimation utilizing the latest CUDA computing framework to enable estimation with large project datasets. We demonstrated the use of the proposed distributed algorithm executed on graphic processing units (GPU), which has a different architecture suitable for compute-intensive problems. The method has been evaluated using 11 real-world datasets from the PROMISE repository. Results shows that the proposed ABE-CUDA approach is able to produce the best project cost estimates by determining the best feature subsets and the most suitable number of analogous projects for estimation, significantly improves the overall feature search time and prediction accuracy for software cost estimation. More importantly, the optimized estimation result can be used as a baseline benchmark to compare with other sophisticated analogy-based methods for software cost estimation.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123491681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A dynamic system such as a mobile telecommunication system often still has bugs when used in the field. Test engineers therefore need a high degree of confidence that its software components have attained a certain level of reliability before such a system is released to a customer for operational use. Software Reliability Growth Models (SRGMs) are useful for estimating the reliability of a software component for quality control and testing purposes. However, because a huge number of SRGMs based on the Non-Homogeneous Poisson Process (NHPP) are available, it is very difficult to know which one is the most suitable for a given software system. Traditional model selection methods conspicuously lack a well-defined and verified mechanism for identifying suitable NHPP-based SRGMs capable of addressing the requirements of testing and debugging a software component of a dynamic system. In this paper, we present a method for selecting the most suitable software reliability model to estimate the reliability of a software component in a dynamic system. Our method is based on model selection criteria and the use of a unification scheme. A software component in the Wireless Network Switching Centre (WNSC) is used as a case study to illustrate the usefulness of our method in a dynamic system.
{"title":"A Method for Selecting a Model to Estimate the Reliability of a Software Component in a Dynamic System","authors":"Mohit Garg, R. Lai, P. K. Kapur","doi":"10.1109/ASWEC.2013.15","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.15","url":null,"abstract":"A dynamic system such as a mobile telecommunication system often still have bugs when used in the field. Test engineers hence need to have a good amount of confidence that its software components have attained a certain level of reliability before such a system is released to a customer for operational uses. Software Reliability Growth Models (SRGMs) are useful for estimating the reliability of a software component for quality control and testing purposes. However, due to the availability of a huge number of SRGMs based on the Non-Homogeneous Poisson Process (NHPP), it is very difficult to know which one is the most suitable for a certain software system. The traditional model selection methods conspicuously omit a well defined and verified mechanism to identify suitable NHPP-based SRGMs which are capable of addressing the requirements necessary for testing and debugging a software component of a dynamic system. In this paper, we present a method for selecting the most suitable software reliability model to estimate the reliability of a software component in a dynamic system. Our method is based on model selection criteria and the use of a unification scheme. A software component in the Wireless Network Switching Centre (WNSC) is used as a case study to illustrate the usefulness of our method in a dynamic system.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128054601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating formal methods into UML opens up a way to complement UML-based software development with precise semantics, development methodologies, and rigorous verification and refinement techniques. In this paper, we present an approach that integrates a formal method into practical component-based model-driven development by defining a UML profile that maps the concepts of the formal method to UML stereotypes, and by implementing the profile in a CASE tool. Unlike most previous works in this vein, which concentrate on verifying the correctness of the models built during development, we focus on how the full development process can be driven by applying the refinement rules of the formal method in an incremental and interactive manner. The formal method we adopt in this work is the refinement calculus for Component and Object Systems (rCOS). We demonstrate the development activities in the CASE tool using an example.
{"title":"Support Formal Component-Based Development with UML Profile","authors":"Dan Li, Xiaoshan Li, Zhiming Liu, V. Stolz","doi":"10.1109/ASWEC.2013.31","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.31","url":null,"abstract":"Integrating formal methods into UML opens up a way to complement UML-based software development with precise semantics, development methodologies, as well as rigorous verification and refinement techniques. In this paper, we present an approach to integrate a formal method to practical component-based model driven development through defining a UML profile that maps the concepts of the formal method as UML stereotypes, and implementing the profile into a CASE tool. Unlike most of the previous works in this vein, which concentrate on verifying the correctness of the models built in the development process, we focus on how the full development process can be driven by applying the refinement rules of the formal method in an incremental and interactive manner. The formal method we adopt in this work is the refinement for Component and Object Systems (rCOS). We demonstrate the development activities in the CASE tool using an example.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125420182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Numerous set similarity metrics have been used for ranking "suspiciousness" of code in spectral fault localization, which uses execution profiles of passed and failed test cases to help locate bugs. Research in data mining has identified several forms of possibly desirable symmetry in similarity metrics. Here we define several forms of "duals" of metrics, based on these forms of symmetries. Use of these duals, plus some other slight modifications, leads to several new similarity metrics. We show that versions of several previously proposed metrics are optimal, or nearly optimal, for locating single bugs. We also show that a form of duality exists between locating single bugs and locating "deterministic" bugs (execution of which always results in test case failure). Duals of the various single bug optimal metrics are optimal for locating such bugs. This more theoretical work leads to a conjecture about how different metrics could be chosen for different stages of software development.
{"title":"Duals in Spectral Fault Localization","authors":"L. Naish, H. Lee","doi":"10.1109/ASWEC.2013.16","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.16","url":null,"abstract":"Numerous set similarity metrics have been used for ranking \"suspiciousness\" of code in spectral fault localization, which uses execution profiles of passed and failed test cases to help locate bugs. Research in data mining has identified several forms of possibly desirable symmetry in similarity metrics. Here we define several forms of \"duals\" of metrics, based on these forms of symmetries. Use of these duals, plus some other slight modifications, leads to several new similarity metrics. We show that versions of several previously proposed metrics are optimal, or nearly optimal, for locating single bugs. We also show that a form of duality exists between locating single bugs and locating \"deterministic\" bugs (execution of which always results in test case failure). Duals of the various single bug optimal metrics are optimal for locating such bugs. This more theoretical work leads to a conjecture about how different metrics could be chosen for different stages of software development.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123092616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reports that communication and behavioral issues contribute to inadequately performing software teams have fuelled a wealth of research aimed at understanding the human processes employed during software development. The increasing level of interest in human issues is particularly relevant for agile and global software development approaches that emphasize the importance of people and their interactions during projects. While mature analysis techniques in behavioral psychology have been recommended for studying such issues, particularly when using archives and artifacts, these techniques have rarely been used in software engineering research. We utilize these techniques under an embedded case study approach to examine whether IBM Rational Jazz practitioners' behaviors change over project duration and whether certain tasks affect teams' attitudes and behaviors. We found highest levels of project engagement at project start and completion, as well as increasing levels of team collectiveness as projects progressed. Additionally, Jazz practitioners were most insightful and perceptive at the time of project scoping. Further, Jazz teams' attitudes and behaviors varied in line with the nature of the tasks they were performing. We explain these findings and discuss their implications for software project governance and tool design.
{"title":"What Can Developers' Messages Tell Us? A Psycholinguistic Analysis of Jazz Teams' Attitudes and Behavior Patterns","authors":"Sherlock A. Licorish, Stephen G. MacDonell","doi":"10.1109/ASWEC.2013.22","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.22","url":null,"abstract":"Reports that communication and behavioral issues contribute to inadequately performing software teams have fuelled a wealth of research aimed at understanding the human processes employed during software development. The increasing level of interest in human issues is particularly relevant for agile and global software development approaches that emphasize the importance of people and their interactions during projects. While mature analysis techniques in behavioral psychology have been recommended for studying such issues, particularly when using archives and artifacts, these techniques have rarely been used in software engineering research. We utilize these techniques under an embedded case study approach to examine whether IBM Rational Jazz practitioners' behaviors change over project duration and whether certain tasks affect teams' attitudes and behaviors. We found highest levels of project engagement at project start and completion, as well as increasing levels of team collectiveness as projects progressed. Additionally, Jazz practitioners were most insightful and perceptive at the time of project scoping. Further, Jazz teams' attitudes and behaviors varied in line with the nature of the tasks they were performing. We explain these findings and discuss their implications for software project governance and tool design.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115156324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Socio-economic inequality indices, like the Gini coefficient or the Theil index, offer a viable alternative to central-tendency statistics when used to aggregate software metrics data. The specific value of these inequality indices lies in their ability to capture changes in the distribution of metrics data more effectively than, say, the average or the median. Knowing whether the distribution of one metric is more unequal than that of another, or whether its distribution becomes more or less unequal over time, is the crucial element here. There are, however, challenges in the application of these indices that can result in ecological fallacies. The first issue relates to occurrences of zeros in metrics data, which not all inequality indices cope well with. The second problem arises from applying a macro-level inference to a micro-level analysis of a changing population. The Gini coefficient works for the former, whereas the decomposable Theil index serves the latter. Nevertheless, when used with care, and usually in combination, both indices provide a powerful tool not only for analyzing software, but also for assessing its organizational health and maintainability over time.
{"title":"On the Application of Inequality Indices in Comparative Software Analysis","authors":"O. Goloshchapova, M. Lumpe","doi":"10.1109/ASWEC.2013.23","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.23","url":null,"abstract":"Socio-economic inequality indices, like the Gini coefficient or the Theil index, offer us a viable alternative to central tendency statistics when being used to aggregate software metrics data. The specific value of these inequality indices lies in their ability to capture changes in the distribution of metrics data more effectively than, say, average or median. Knowing whether the distribution of one metrics is more unequal than that of another one or whether its distribution becomes more or less unequal over time is the crucial element here. There are, however, challenges in the application of these indices that can result in ecological fallacies. The first issue relates to occurrences of zeros in metrics data, and not all inequality indices cope well with this event. The second problem arises from applying a macro-level inference to a micro-level analysis of a changing population. The Gini coefficient works for the former, whereas the decomposable Theil index serves the latter. Nevertheless, when used with care, and usually in combination, both indices can provide us with a powerful tool not only to analyze software, but also to assess its organizational health and maintainability over time.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124792658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Requirements engineering is a difficult task that has a critical impact on software quality. Errors related to requirements are considered the most expensive types of software errors; they are a major cause of project delays and cost overruns. Software developers need to cooperate with multiple stakeholders with different backgrounds and concerns. The developers need to investigate an unfamiliar problem space and make the transition from the informal problem space to the formal solution space. The requirements engineering process should therefore use systematic methods that are constructive, incremental, and rigorous. These methods also need to be easy to use and understand so that they can serve as a vehicle for communication among different stakeholders. Is it possible to devise a humanly intuitive modelling methodology that systematically translates informal requirements into a formally defined model? Behaviour Engineering has arguably solved many of these problems; however, the size and low level of the final Behavior Tree make it hard to match against the original requirements. Here, we propose a new requirements modelling approach called Rule-Based Behaviour Engineering. We separate two concerns, rules and procedural behaviours, right at the beginning of the requirements modelling process. We combine the Behavior Tree notation for procedural behaviour modelling with a non-monotonic logic called Clausal Defeasible Logic for rule modelling. The target model is constructed incrementally and systematically in four well-defined steps. Both the representations of rules and of procedural flows are humanly readable and intuitive. The result is an effective mechanism for formally modelling requirements, detecting requirement defects, and providing a set of tools for communication among stakeholders.
{"title":"Rule-Based Behaviour Engineering: Integrated, Intuitive Formal Rule Modelling","authors":"L. W. Chan, R. Hexel, Lian Wen","doi":"10.1109/ASWEC.2013.13","DOIUrl":"https://doi.org/10.1109/ASWEC.2013.13","url":null,"abstract":"Requirement engineering is a difficult task which has a critical impact on software quality. Errors related to requirements are considered the most expensive types of software errors. They are the major cause of project delays and cost overruns. Software developers need to cooperate with multiple stakeholders with different backgrounds and concerns. The developers need to investigate an unfamiliar problem space and make the transition from the informal problem space to the formal solution space. The requirement engineering process should use systematic methods which are constructive, incremental, and rigorous. The methods also need to be easy to use and understand so that they can be used for communication among different stakeholders. Is it possible to invent a human intuitive modelling methodology which systematically translates the informal requirements into a formally defined model? Behaviour Engineering has arguably solved many problems. However, the size and low level of the final Behavior Tree makes it hard to match with the original requirements. Here, we propose a new requirement modelling approach called Rule-Based Behaviour Engineering. We separate two concerns, rules and procedural behaviours, right at the beginning of the requirement modelling process. We combine the Behavior Tree notation for procedural behaviour modelling with a non-monotonic logic called Clausal Defeasible Logic for rule modelling. In a systematic way, the target model is constructed incrementally in four well-defined steps. Both the representations of rules and procedural flows are humanly readable and intuitive. The result is an effective mechanism for formally modelling requirements, detecting requirement defects, and providing a set of tools for communication among stakeholders.","PeriodicalId":394020,"journal":{"name":"2013 22nd Australian Software Engineering Conference","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-06-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128688969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}