Several operations, ranging from regular code updates to compiling, building, testing, and distribution to customers, are consolidated in continuous integration and delivery. Professionals seek additional information to complete the task at hand during these activities. Developers who devote a large amount of time and effort to finding such information may become distracted from their work. Defining the types of information that software professionals seek gives a better understanding of the processes, procedures, and resources used to deliver a quality product on time. A deeper understanding of software practitioners' information needs has many advantages, including remaining competitive, growing knowledge of issues that can stymie a timely update, and creating a visualisation tool to assist practitioners in addressing their information needs. This work extends a previous study by the authors. The authors conducted a multiple-case holistic study with six different companies (38 unique participants) to identify information needs in continuous integration and delivery. The study attempts to capture the importance, frequency, required effort (e.g. the sequence of actions required to collect information), current handling approach, and associated stakeholders for each identified need. Twenty-seven information needs associated with different stakeholders (i.e. developers, testers, project managers, release team, and compliance authority) were identified. The identified needs were categorised as testing, code & commit, confidence, bug, and artefacts. Apart from identifying information needs, practitioners face several challenges in developing visualisation tools; thus, 8 challenges faced by practitioners in developing and maintaining visualisation tools for software teams were identified. The recommendations from practitioners who are experts in developing, maintaining, and providing visualisation services to the software teams are also presented.
"Data visualisation in continuous integration and delivery: Information needs, challenges, and recommendations", Azeem Ahmad, O. Leifler, K. Sandahl. IET Software, pp. 331–349, 2021-06-04. DOI: 10.1049/SFW2.12030.
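As a rough illustration of the kind of record such a study produces, the sketch below models one identified information need with the attributes named in the abstract (importance, frequency, required effort, current handling approach, stakeholders, category). The field names and the example values are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical data model for one information need in a CI/CD study;
# field names and the example instance are illustrative, not the authors' schema.
@dataclass
class InformationNeed:
    description: str
    category: str            # e.g. "testing", "code & commit", "confidence", "bug", "artefacts"
    importance: int          # e.g. 1 (low) .. 5 (high)
    frequency: str           # e.g. "per commit", "daily", "per release"
    effort: List[str] = field(default_factory=list)        # sequence of actions to collect the information
    stakeholders: List[str] = field(default_factory=list)
    current_approach: str = ""

need = InformationNeed(
    description="Which test cases failed for the latest commit?",
    category="testing",
    importance=5,
    frequency="per commit",
    effort=["open CI dashboard", "locate build", "inspect failed test logs"],
    stakeholders=["developer", "tester"],
    current_approach="manual inspection of CI logs",
)
print(need.category, need.importance)
```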
Software defect prediction is an important software quality assurance technique. Nevertheless, the prediction performance of the constructed model is easily affected by irrelevant or redundant features in the software projects and is often not good enough. To address these two issues, a novel defect prediction model called SSEPG is proposed, based on Stacked Sparse Denoising AutoEncoders (SSDAE) and an Extreme Learning Machine (ELM) optimised by Particle Swarm Optimisation (PSO) and the complementary Gravitational Search Algorithm (GSA). The model has two main merits: (1) it employs a novel deep neural network, SSDAE, to extract new combined features, which can effectively learn a robust deep semantic feature representation; (2) it integrates the strong exploitation capacity of PSO with the strong exploration capability of GSA to optimise the input weights and hidden-layer biases of the ELM, and utilises the superior discriminability of the enhanced ELM to predict defective modules. SSDAE is compared with eleven state-of-the-art feature extraction methods in terms of effectiveness and efficiency, and the SSEPG model is compared with multiple baseline models comprising five classic defect predictors and three variants across 24 software defect projects. The experimental results exhibit the superiority of the SSDAE and the SSEPG on six
"Software defect prediction based on stacked sparse denoising autoencoders and enhanced extreme learning machine", Nana Zhang, Shi Ying, Kun Zhu, Dandan Zhu. IET Software, pp. 29–47, 2021-05-31. DOI: 10.1049/SFW2.12029.
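For readers unfamiliar with the ELM component, the sketch below shows a plain extreme learning machine in NumPy: the hidden-layer input weights and biases are fixed (here sampled at random; in SSEPG they would be the quantities tuned by the PSO/GSA hybrid) and the output weights are obtained in closed form via the pseudo-inverse. It is a generic sketch, not the paper's enhanced variant.

```python
import numpy as np

# Minimal extreme learning machine (ELM) for binary defect prediction.
# In the SSEPG model, W and b would be chosen by the PSO/GSA hybrid rather
# than sampled at random; everything below is a generic illustration.
rng = np.random.default_rng(0)

def elm_train(X, y, n_hidden=50):
    n_features = X.shape[1]
    W = rng.normal(size=(n_features, n_hidden))   # input weights (fixed)
    b = rng.normal(size=n_hidden)                 # hidden-layer biases (fixed)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))        # sigmoid hidden activations
    beta = np.linalg.pinv(H) @ y                  # output weights, closed form
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return (H @ beta > 0.5).astype(int)           # 1 = predicted defective

# Toy usage with random data standing in for features extracted by SSDAE.
X = rng.normal(size=(200, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
W, b, beta = elm_train(X, y)
print("training accuracy:", (elm_predict(X, W, b, beta) == y).mean())
```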
In this research, a new method for population initialisation in meta-heuristic algorithms based on the Pareto 80/20 rule is presented. The population in a meta-heuristic algorithm has two important tasks: pushing the algorithm toward the true optima and preventing the algorithm from being trapped in local optima. Therefore, the starting point of a meta-heuristic algorithm can have a significant impact on its performance and output results. Using the Pareto 80/20 rule, an innovative method for creating an initial population in meta-heuristic algorithms is presented. In this method, elitism is used to increase the convergence of the algorithm toward the global optima, while a complete distribution of the population across the search space prevents the algorithm from being trapped in local optima. The proposed initialisation method was implemented and compared with other initialisation methods using the cuckoo search algorithm. In addition, the efficiency and effectiveness of the proposed method in comparison with other well-known initialisation methods was confirmed using statistical tests, a variety of benchmark functions (unimodal, multimodal, fixed-dimensional multimodal, and composite), and well-known engineering problems.
"A new population initialisation method based on the Pareto 80/20 rule for meta-heuristic optimisation algorithms", M. Hasanzadeh, F. Keynia. IET Software, pp. 323–347, 2021-05-05. DOI: 10.1049/SFW2.12025.
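One plausible reading of the 80/20 initialisation (the paper's exact construction may differ) is sketched below: a larger pool of random candidates is sampled first, roughly 20% of the initial population is seeded from the elite of that pool, and the remaining roughly 80% is drawn uniformly across the search space to preserve coverage.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    """Toy objective (minimisation)."""
    return float(np.sum(x ** 2))

def pareto_init(pop_size, dim, lower, upper, objective, elite_frac=0.2, pool_factor=5):
    """Hypothetical Pareto 80/20-style initialisation:
    ~20% of the population comes from the elite of a larger random pool (elitism),
    ~80% is sampled uniformly over the whole search space (coverage)."""
    pool = rng.uniform(lower, upper, size=(pool_factor * pop_size, dim))
    order = np.argsort([objective(p) for p in pool])
    n_elite = max(1, int(elite_frac * pop_size))
    elite = pool[order[:n_elite]]                                   # best candidates from the pool
    uniform = rng.uniform(lower, upper, size=(pop_size - n_elite, dim))
    return np.vstack([elite, uniform])

population = pareto_init(pop_size=30, dim=5, lower=-10.0, upper=10.0, objective=sphere)
print(population.shape)  # (30, 5), ready to hand to e.g. a cuckoo search loop
```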
Wireless sensor network (WSN) node localisation technology based on the received signal strength indication (RSSI) is widely used because it does not need additional hardware devices. The ranging accuracy of RSSI is poor, and the particle swarm optimisation (PSO) algorithm can effectively improve the positioning accuracy of RSSI-based localisation. However, the particle swarm diversity of the PSO algorithm tends to be lost quickly during iteration, causing the algorithm to fall into local optima. Based on the convergence conditions and initial search space characteristics of the PSO algorithm in WSN localisation, an improved PSO algorithm (improved self-adaptive inertia weight particle swarm optimisation, ISAPSO) is proposed. Compared with two other PSO location estimation algorithms, the ISAPSO location estimation algorithm performs well in positioning accuracy, power consumption and real-time performance under different beacon node proportions, node densities and ranging errors.
"A new localization method based on improved particle swarm optimization for wireless sensor networks", Qiaohe Yang. IET Software, pp. 251–258, 2021-05-04. DOI: 10.1049/SFW2.12027.
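To make the RSSI + PSO pairing concrete, the sketch below shows the two standard ingredients such a scheme typically builds on: a log-distance path-loss model that turns an RSSI reading into a distance estimate, and a fitness function that scores a candidate node position by how well its distances to the beacons match those estimates. The constants and function names are illustrative, not taken from the ISAPSO paper.

```python
import numpy as np

# Log-distance path-loss model: rssi = A - 10 * n * log10(d),
# hence d = 10 ** ((A - rssi) / (10 * n)).  A and n are environment-dependent
# constants chosen here purely for illustration.
A = -40.0   # RSSI at the 1 m reference distance (dBm)
N = 2.5     # path-loss exponent

def rssi_to_distance(rssi):
    return 10 ** ((A - rssi) / (10 * N))

def fitness(candidate, beacons, rssi_readings):
    """Sum of squared errors between distances implied by RSSI and the
    distances from a candidate position to the known beacon positions.
    A PSO variant (such as ISAPSO) would minimise this over candidates."""
    est = rssi_to_distance(np.asarray(rssi_readings))
    geo = np.linalg.norm(beacons - candidate, axis=1)
    return float(np.sum((geo - est) ** 2))

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
readings = [-60.0, -55.0, -65.0]            # example RSSI values from each beacon
print(fitness(np.array([3.0, 4.0]), beacons, readings))
```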
Making decisions about the requirements of multiple stakeholders is a key process, especially in distributed software development projects. Local decision-making for requirements in distributed software development is difficult to accomplish, and communicating these requirements across organizational boundaries and conveying them to offshore developers is a major task. This study presents an empirical evaluation of the effectiveness of local decision-making on the customization process of software in distributed development, in terms of productivity and cost reduction. The empirical evaluation utilizes the Communicating Customization Requirements of Multi-Clients in a Distributed Domain (CCRD) model. The study estimates the productivity of CCRD in terms of the number of requirements for which decisions are made, and estimates the reduction in the total cost of the customization process in terms of the salaries of the required local decision-makers. In addition, the study finds the critical point at which CCRD is still valid (i.e. the minimum number of requirements below which CCRD ceases to be significant and worthwhile). The study uses a real data set of 18 clients distributed across 16 cities and involved in one customization project, who requested about 3000 requirements collected over 1290 working hours. The results of this study showed that local decision-making improved the productivity of the customization process from 503 requirements in 200 min of simulation to 1,499 requirements. In addition, it reduced the cost by 41.5%. Besides, the results showed that
"An empirical study of local-decision-making-based software customization in distributed development", Ahmed S. Ghiduk, A. Qahtani. IET Software, pp. 174–187, 2021-03-10. DOI: 10.1049/SFW2.12016.
Pub Date: 2019-10-01. DOI: 10.1049/IET-SEN.2018.5060
Ana Vrankovic, Tihana Galinac Grbac, Z. Car
Network analysis has been successfully applied in software engineering to understand structural effects in the software. System software is represented as a network graph, and network metrics are used to analyse system quality. This study is motivated by a previous study, which represents the software structure as three-node subgraphs and empirically identifies that software structure continuously evolves over system releases. Here, the authors extend the previous study to analyse the relation of structural evolution and the defectiveness of subgraphs in the software network graph. This study investigates the behaviour of subgraph defects through software evolution and their impact on system defectiveness. Statistical methods were used to study subgraph defectiveness across versions of the systems and across subgraph types. The authors conclude that software versions have similar behaviours in terms of average subgraph type defectiveness and subgraph frequency distributions. However, different subgraph types have different defectiveness distributions. Based on these conclusions, the authors motivate the use of subgraph-based software representation in defect predictions and software modelling. These promising findings contribute to the further development of the software engineering discipline and help software developers and quality management in terms of better modelling and focusing their testing efforts within the code structure represented by subgraphs.
"Software structure evolution and relation to subgraph defectiveness", Ana Vrankovic, Tihana Galinac Grbac, Z. Car. IET Software, pp. 355–367, 2019-10-01. DOI: 10.1049/IET-SEN.2018.5060.
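As a generic illustration of the three-node-subgraph view of a software network (not the authors' tooling), the sketch below builds a small directed dependency graph with networkx and counts the occurrences of each three-node subgraph type via a triadic census; per-type defect counts could then be attached to such subgraphs for the kind of analysis the abstract describes.

```python
import networkx as nx

# Toy module-dependency graph: an edge u -> v means "module u depends on module v".
G = nx.DiGraph()
G.add_edges_from([
    ("ui", "core"), ("ui", "net"),
    ("net", "core"), ("core", "db"),
    ("report", "db"), ("report", "core"),
])

# Count all three-node subgraph (triad) types in the directed graph.
# Keys are the standard triad codes such as '003', '012', '021D', '030T', ...
census = nx.triadic_census(G)
for triad_type, count in sorted(census.items()):
    if count:
        print(triad_type, count)
```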
Pub Date: 2019-05-31. DOI: 10.1049/IET-SEN.2018.5355
Jitong Zhao, Yan Liu
Source code examples are key resources for software developers to learn application programming interfaces (APIs) and to understand corresponding usage patterns. Developers usually have to utilise, evaluate and understand code examples from multiple sources, which involves heavy manual processing effort. To reduce such effort, there has been growing interest in developing source code mining and recommendation systems. This study proposes API usage as a service (APIUaaS), a reference architecture for facilitating API usage, which allows infrastructures to be built for recommending suitable API code examples based on semi-automatic data analytics. The reference architecture contains five logical layers and six global-level architectural concerns. API queries are accepted from programmers, and corresponding code example candidates are extracted from the data sources layer. The detailed structural links between API elements and source code are captured and stored in the data model & code assets layer. During the recommendation phase, API usage mining, clustering and ranking algorithms are enabled in the knowledge discovery & intelligent model layer. Services such as code assist and bug detection are assembled in the API usage services layer. Finally, the authors evaluate APIUaaS from three perspectives: rationality, feasibility, and usability.
"APIUaaS: a reference architecture for facilitating API usage from a data analytics perspective", Jitong Zhao, Yan Liu. IET Software, pp. 466–478, 2019-05-31. DOI: 10.1049/IET-SEN.2018.5355.
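A minimal sketch of the query-to-recommendation flow that such a layered architecture implies is given below; the class and method names are hypothetical and only echo the layers named in the abstract, they are not the APIUaaS API, and the keyword-overlap ranking stands in for the real mining and ranking algorithms.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical, simplified flow through the layers named in the abstract:
# data sources -> data model & code assets -> mining/clustering/ranking -> usage services.
@dataclass
class CodeExample:
    api_element: str
    snippet: str
    score: float = 0.0

class ApiUsageService:
    def __init__(self, corpus: List[CodeExample]):
        self.corpus = corpus                      # stands in for the code-assets layer

    def recommend(self, query: str, top_k: int = 3) -> List[CodeExample]:
        # Stand-in for the mining/ranking layer: rank by naive keyword overlap.
        terms = set(query.lower().split())
        for ex in self.corpus:
            text = (ex.api_element + " " + ex.snippet).lower()
            ex.score = sum(t in text for t in terms)
        return sorted(self.corpus, key=lambda e: e.score, reverse=True)[:top_k]

service = ApiUsageService([
    CodeExample("java.nio.file.Files.readAllLines", "List<String> lines = Files.readAllLines(path);"),
    CodeExample("java.net.http.HttpClient.send", "HttpResponse<String> r = client.send(req, BodyHandlers.ofString());"),
])
for ex in service.recommend("read all lines from a file"):
    print(ex.api_element, ex.score)
```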
Pub Date: 2019-05-29. DOI: 10.1049/IET-SEN.2018.5207
R. Anbunathan, A. Basu
The Unified Modelling Language (UML) activity diagram (AD) is used to represent system behaviour abstractly and is used by testers to generate test cases and test data. During the design of test cases, an AD with concurrent activities may lead to a large number of paths, and it may not always be possible to test all execution paths. Research on deriving test cases from ADs with concurrent activities has focused on conventional search techniques such as breadth-first search and depth-first search, which have been found to be inefficient in such cases. To overcome this drawback, the authors propose a method using pairwise testing and a genetic algorithm to derive a reduced number of test cases from ADs with concurrent activities. Experiments conducted on various real-life concurrent systems show that the proposed technique generates a smaller number of test cases compared with existing methods.
"Combining genetic algorithm and pairwise testing for optimised test generation from UML ADs", R. Anbunathan, A. Basu. IET Software, pp. 423–433, 2019-05-29. DOI: 10.1049/IET-SEN.2018.5207.
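To illustrate the pairwise-testing half of the approach (the genetic-algorithm half, which would further search over interleavings, is omitted), the sketch below greedily selects combinations of concurrent branch choices so that every pair of branch values appears together in at least one selected test. The region and branch names are invented for the example and are not from the paper.

```python
from itertools import product, combinations

# Hypothetical concurrent regions of an activity diagram, each with its branch choices.
regions = {
    "payment": ["card", "cash", "voucher"],
    "shipping": ["standard", "express"],
    "notification": ["email", "sms"],
}

def pairwise_suite(regions):
    """Greedy pairwise (all-pairs) selection: keep a full-combination row only
    if it still covers at least one uncovered pair of (region, value) settings.
    A smarter construction (or a GA) can shrink the suite further."""
    names = list(regions)
    uncovered = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va in regions[a] for vb in regions[b]
    }
    suite = []
    for combo in product(*(regions[n] for n in names)):
        row = dict(zip(names, combo))
        pairs = {((a, row[a]), (b, row[b])) for a, b in combinations(names, 2)}
        if pairs & uncovered:
            suite.append(row)
            uncovered -= pairs
    return suite

tests = pairwise_suite(regions)
print(len(tests), "tests instead of", 3 * 2 * 2, "exhaustive combinations")
for t in tests:
    print(t)
```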
Pub Date: 2019-05-28. DOI: 10.1049/IET-SEN.2018.5409
M. Ozkaya
The Unified Modelling Language (UML) is essentially a de-facto standard for software modelling and is supported by many modelling tools. In this study, 58 UML tools have been analysed for modelling viewpoints, analysis, transformation & export, collaboration, tool integration, scripting, project management, and knowledge management. The analysis results reveal important findings: (i) 11 UML tools support multiple viewpoints, (ii) 17 tools support large-viewpoint management, (iii) Umple and Reactive Blocks support formal verification, (iv) 9 tools support the simulation of activity diagrams, (v) while 14 tools check pre-defined well-formedness rules, 8 of them support user-defined rules, (vi) 16 tools support scripting, (vii) 29 tools support code generation and 18 of them support round-trip engineering, (viii) Java is the most popular language, (ix) 38 tools export UML models as images, 32 tools export as HTML, and 32 tools export as XML/XMI, (x) 17 tools enable versioning and 13 of them support multi-user access, (xi) 15 tools support plug-in extensions and 12 tools support IDE integration, (xii) 6 tools support project management, and (xiii) while most tools provide user manuals, interactive guidance is rarely supported. The results will be helpful for practitioners in choosing the right tool(s) and for tool developers in identifying weaknesses and strengths.
"Are the UML modelling tools powerful enough for practitioners? A literature review", M. Ozkaya. IET Software, pp. 338–354, 2019-05-28. DOI: 10.1049/IET-SEN.2018.5409.