Verification of CTL_BDI Properties by Symbolic Model Checking
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00023
Ran Chen, Wenhui Zhang
The Belief-Desire-Intention (BDI) architecture is a framework for studying computational agents capable of rational behavior. The behaviors of such agents may be modeled by possible-world structures, and CTL_BDI may be used to specify these behaviors. As multi-agent systems grow increasingly complex, the problem of their verification is acquiring importance. This work develops a symbolic model checking approach for the verification of CTL_BDI properties within the BDI architecture. In addition, we develop a symbolic approach for checking whether a model satisfies the weak and strong realism constraints. Both the model checking and the realism checking approaches have been implemented, and the experimental data show that they are able to handle models with a fairly large number of possible worlds.
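To illustrate the fixpoint computation that symbolic model checkers build on, here is a minimal sketch that evaluates the CTL formula EF(goal) over an explicit set representation; real symbolic checkers operate on BDDs rather than Python sets, and the tiny model below is invented for illustration.

```python
def check_EF(transitions, goal):
    """Return the set of states satisfying EF(goal): goal reachable along some path."""
    sat = set(goal)
    while True:
        # Pre-image: states with at least one successor already satisfying EF(goal).
        new = sat | {s for (s, t) in transitions if t in sat}
        if new == sat:          # least fixpoint reached
            return sat
        sat = new

# Invented 4-state model: 0 -> 1 -> 2 (self-loop), 0 -> 3 (self-loop).
transitions = {(0, 1), (1, 2), (2, 2), (0, 3), (3, 3)}
print(sorted(check_EF(transitions, goal={2})))  # -> [0, 1, 2]
```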
{"title":"Verification of CTL_BDI Properties by Symbolic Model Checking","authors":"Ran Chen, Wenhui Zhang","doi":"10.1109/APSEC48747.2019.00023","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00023","url":null,"abstract":"The Belief-Desire-Intention (BDI) architecture is a framework for studying computational agents capable of rational behaviors. The behaviors of such agents may be modeled by possible world structures, for the specification of the behaviors, CTLBDI may be used. As multi-agent systems are increasingly complex, the problem of their verification is acquiring importance. This work develops a symbolic model checking approach for the verification of CTLBDI properties within the BDI-architecture. In addition, we develop a symbolic approach for checking whether a model satisfies the weak and strong realism constraints. The approaches for model checking and realism checking have been implemented, and the experimental data show that the approaches are able to handle models with a fairly large number of possible worlds.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130049705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Classifying Self-Admitted Technical Debt Using N-Gram IDF
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00050
Supatsara Wattanakriengkrai, Napat Srisermphoak, Sahawat Sintoplertchaikul, Morakot Choetkiertikul, Chaiyong Ragkhitwetsagul, T. Sunetnanta, Hideaki Hata, Ken-ichi Matsumoto
Technical Debt (TD) introduces quality problems and increases maintenance cost, since it may require improvements in the future. Several studies show that it is possible to automatically detect TD from source code comments that developers intentionally created, so-called self-admitted technical debt (SATD). Those studies proposed binary classification techniques to predict whether a comment contains SATD. However, SATD has different types (e.g., design SATD and requirement SATD). In this paper, we therefore propose an approach using N-gram Inverse Document Frequency (IDF) and employ a multi-class classification technique to build a model that can identify different types of SATD. In an empirical evaluation on 10 open-source projects, our approach outperforms alternative methods (e.g., those using BOW and TF-IDF) and improves prediction performance over the baseline benchmark by 33%.
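As a rough illustration of the multi-class setup, the sketch below classifies comments into SATD types using word n-gram features with IDF weighting, approximated via scikit-learn's TfidfVectorizer; the paper's N-gram IDF weighting scheme differs, and the toy comments and labels are invented.

```python
# Sketch: multi-class classification of comments into SATD types using
# n-gram features with IDF weighting (approximated here with TF-IDF;
# the paper's N-gram IDF implementation differs).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented toy examples; real data would be labeled code comments.
comments = [
    "TODO: refactor this ugly workaround later",
    "hack: temporary fix until the API is redesigned",
    "this method should validate its input as required by the spec",
    "nothing to see here, plain logging",
]
labels = ["design", "design", "requirement", "none"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 3)),   # word uni-, bi-, and tri-grams
    LogisticRegression(max_iter=1000),     # multi-class classifier
)
model.fit(comments, labels)
print(model.predict(["quick hack, clean this up later"]))
```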
{"title":"Automatic Classifying Self-Admitted Technical Debt Using N-Gram IDF","authors":"Supatsara Wattanakriengkrai, Napat Srisermphoak, Sahawat Sintoplertchaikul, Morakot Choetkiertikul, Chaiyong Ragkhitwetsagul, T. Sunetnanta, Hideaki Hata, Ken-ichi Matsumoto","doi":"10.1109/APSEC48747.2019.00050","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00050","url":null,"abstract":"Technical Debt (TD) introduces a quality problem and increases maintenance cost since it may require improvements in the future. Several studies show that it is possible to automatically detect TD from source code comments that developers intentionally created, so-called self-admitted technical debt (SATD). Those studies proposed to use binary classification technique to predict whether a comment shows SATD. However, SATD has different types (e.g. design SATD and requirement SATD). In this paper, we therefore propose an approach using N-gram Inverse Document Frequency (IDF) and employ a multi-class classification technique to build a model that can identify different types of SATD. From the empirical evaluation on 10 open-source projects, our approach outperforms alternative methods (e.g. using BOW and TF-IDF). Our approach also improves the prediction performance over the baseline benchmark by 33%.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"411 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134070011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards the Mechanized Semantics and Refinement of UML Class Diagrams
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00016
Feng Sheng, Huibiao Zhu, Zongyuang Yang
Model Driven Engineering (MDE) uses models to represent the core parts of software systems. The Unified Modeling Language (UML) is a widely accepted standard for modeling software systems. Although UML provides numerous concepts and diagrams for describing a system, an unsolved problem remains: the semantics and refinement relations of models are not formally defined. In this paper, we apply constructive type theory to formalize class diagrams and object diagrams. A suitable subset of UML static models is identified and formally defined. The proof assistant Coq is applied to encode the semantics of class diagrams. Moreover, the refinement relations are also formalized in Coq. The whole approach is supported by tools that do not constrain the expressiveness and flexibility of the semantic definition while making it machine-checkable. Our approach offers a novel way of giving UML a precise foundation and contributes to the goal of improving the trustworthiness of software systems by combining theoretical and practical techniques.
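The paper's formalization lives in Coq; as a loose Python analogue of the underlying idea, the sketch below represents a class diagram as data and checks a simple refinement relation (every abstract class is preserved with at least its attributes). All names are invented, and the real mechanized semantics is considerably richer.

```python
# Rough Python analogue of the idea: a class diagram as data, plus a simple
# refinement check. The paper formalizes this in Coq with constructive type
# theory; this sketch only mirrors the set-theoretic intuition.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ClassDef:
    name: str
    attrs: frozenset  # set of (attribute name, type name) pairs

@dataclass
class ClassDiagram:
    classes: dict = field(default_factory=dict)  # class name -> ClassDef

def refines(concrete: ClassDiagram, abstract: ClassDiagram) -> bool:
    """concrete refines abstract if every abstract class exists in concrete
    and keeps (at least) the abstract attributes."""
    return all(
        c.name in concrete.classes
        and c.attrs <= concrete.classes[c.name].attrs
        for c in abstract.classes.values()
    )

abstract = ClassDiagram({"Account": ClassDef("Account", frozenset({("id", "Int")}))})
concrete = ClassDiagram({"Account": ClassDef(
    "Account", frozenset({("id", "Int"), ("balance", "Real")}))})
print(refines(concrete, abstract))  # True
```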
{"title":"Towards the Mechanized Semantics and Refinement of UML Class Diagrams","authors":"Feng Sheng, Huibiao Zhu, Zongyuang Yang","doi":"10.1109/APSEC48747.2019.00016","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00016","url":null,"abstract":"Model Driven Engineering (MDE) uses models to represent the core part of the software systems. The Unified Model Language (UML) is a widely accepted standard for modeling software systems. Although UML provides numbers of concepts and diagrams to describe the system, there is still an unsolved problem that the semantics and refinement relations of models are not formally defined. In this paper, we apply the constructive type theory to formalize the class diagrams and object diagrams. A suitable subset of UML static models is identified and formally defined. The theorem assistant Coq is applied to encode the semantics of class diagrams. Moreover the refinement relations are also formalized in Coq. The whole approach is supported by tools that do not constrain the semantic definition's expressiveness and flexibility while making it machine-checkable. Our approach offers a novel way for giving a precise foundation in UML and contributes to the goal of improving the overall trustworthy software systems by combining theoretical and practical techniques.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132267299","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Framework for Internet of Things Search Engines Engineering
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00039
N. Tran, M. Babar, Quan Z. Sheng, J. Grundy
The content of the Internet of Things (IoT), notably sensor data and virtual representations of physical devices, is increasingly delivered via Web protocols and available on the World Wide Web (WWW). Internet of Things Search Engine (IoTSE) systems are catalysts for utilizing this influx of data: they enable users to discover and retrieve relevant IoT content. While a general IoTSE system – the next "Google" – is beyond the horizon due to the vast diversity of IoT content and of the types of queries over it, specific IoTSE systems that target subsets of query types and IoT infrastructure are feasible and beneficial. A component-based engineering approach, in which prior IoTSE systems and research prototypes are reassembled as building blocks for new IoTSE systems, could be a time- and cost-effective way to engineer IoTSE systems. This paper presents the design, implementation, and evaluation of a framework that facilitates such a component-based approach. As an evaluation, we developed eight IoTSE components and composed them into eight proof-of-concept IoTSE systems using a reference implementation of the proposed framework. An analysis of Source Lines of Code (SLOC) revealed that the complexity handled transparently by the IoTSE framework can account for over 90% of the code base of a simple IoTSE system.
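To make the component-based idea concrete, here is a hypothetical sketch in which crawler, indexer, and query components are wired into a proof-of-concept search engine; the interface and component names are invented and are not the paper's API.

```python
# Hypothetical sketch of the component-based idea: search-engine building
# blocks are composed into one proof-of-concept IoTSE pipeline.
# Component names and interfaces are invented, not the paper's API.

class Crawler:
    def run(self, seeds):
        # Pretend to fetch IoT content descriptions from seed endpoints.
        return [{"id": s, "payload": f"sensor data from {s}"} for s in seeds]

class Indexer:
    def run(self, docs):
        return {d["id"]: d["payload"] for d in docs}

class QueryEngine:
    def __init__(self, index):
        self.index = index
    def search(self, query):
        return [k for k, v in self.index.items() if query in v]

# Composition: reusable components wired into one IoTSE system.
crawler, indexer = Crawler(), Indexer()
index = indexer.run(crawler.run(["sensor-1", "sensor-2"]))
engine = QueryEngine(index)
print(engine.search("sensor data"))  # -> ['sensor-1', 'sensor-2']
```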
{"title":"A Framework for Internet of Things Search Engines Engineering","authors":"N. Tran, M. Babar, Quan Z. Sheng, J. Grundy","doi":"10.1109/APSEC48747.2019.00039","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00039","url":null,"abstract":"The content of the Internet of Things (IoT), notably sensor data and virtual representation of physical devices, has been increasingly delivered via Web protocols and available on the World Wide Web (WWW). Internet of Things Search Engine (IoTSE) systems are catalytic to utilize this influx of data. They enable users to discover and retrieve relevant IoT content. While a general IoTSE system – the next \"Google\" – is beyond the horizon due to the vast diversity of IoT content and types of queries for them, specific IoTSE systems that target subsets of query types and IoT infrastructure are feasible and beneficial. A component-based engineering approach, in which prior IoTSE systems and research prototypes are reassembled as building blocks for new IoTSE systems, could be a time-and cost-effective solution to engineering IoTSE systems. This paper presents the design, implementation, and evaluation of a framework to facilitate a component-based approach to engineering IoTSE systems. As an evaluation, we developed eight IoTSE components and composed them into eight proof-of-concept IoTSE systems, using a reference implementation of the proposed framework. An analysis on Source Line of Code (SLOC) revealed that the complexity handled transparently by the IoTSE framework could account for over 90% of the code base of a simple IoTSE system.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"132 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134115593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Understanding the Effect of Developer Sentiment on Fix-Inducing Changes: An Exploratory Study on GitHub Pull Requests
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00075
Syed Fatiul Huq, Ali Zafar Sadiq, K. Sakib
Developer emotion or sentiment in a software development environment has the potential to affect performance and, consequently, the software itself. Sentiment analysis, conducted on online collaborative artifacts, can reveal the effects of developer sentiment. This study aims to understand how developer sentiment is related to bugs by analyzing the difference in sentiment between regular changes and Fix-Inducing Changes (FICs) – changes to code that introduce bugs into the system. To do so, sentiment is extracted from Pull Requests of 6 well-known GitHub repositories, which contain both code and contributor discussion. Sentiment is calculated using SentiStrength-SE, a tool specialized for the software engineering domain. Next, FICs are detected from Commits by filtering those that fix bugs and tracking the origin of the code they remove. Commits are categorized based on FICs and assigned separate sentiment scores (-4 to +4) based on the different preceding artifacts: Commits, Comments, and Reviews from Pull Requests. The statistical results show that FICs, compared to regular Commits, contain more positive Comments and Reviews, while the Commits that precede an FIC have more negative messages. Similarly, all the Pull Request artifacts combined are more negative for FICs than for regular Commits.
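As a sketch of the kind of statistical comparison involved, the snippet below contrasts sentiment score distributions between artifacts preceding FICs and those preceding regular Commits using a Mann-Whitney U test; the scores are invented (the paper's SentiStrength-SE scores fall in the -4 to +4 range), and the paper's exact statistical procedure may differ.

```python
# Sketch of a distribution comparison between sentiment scores of artifacts
# preceding fix-inducing vs. regular commits. All score values are invented.
from scipy.stats import mannwhitneyu

fic_scores = [-2, -1, -3, 0, -2, -1]      # artifacts preceding FICs
regular_scores = [0, 1, -1, 2, 0, 1]      # artifacts preceding regular commits

# Two-sided test: do the two groups differ in sentiment?
stat, p = mannwhitneyu(fic_scores, regular_scores, alternative="two-sided")
print(f"U={stat}, p={p:.4f}")
```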
{"title":"Understanding the Effect of Developer Sentiment on Fix-Inducing Changes: An Exploratory Study on GitHub Pull Requests","authors":"Syed Fatiul Huq, Ali Zafar Sadiq, K. Sakib","doi":"10.1109/APSEC48747.2019.00075","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00075","url":null,"abstract":"Developer emotion or sentiment in a software development environment has the potential to affect performance, and consequently, the software itself. Sentiment analysis, conducted to analyze online collaborative artifacts, can derive effects of developer sentiment. This study aims to understand how developer sentiment is related to bugs, by analyzing the difference of sentiment between regular and Fix-Inducing Changes (FIC) - changes to code that introduce bugs in the system. To do so, sentiment is extracted from Pull Requests of 6 well known GitHub repositories, which contain both code and contributor discussion. Sentiment is calculated using a tool specializing in the software engineering domain: SentiStrength-SE. Next, FICs are detected from Commits by filtering the ones that fix bugs and tracking the origin of the code these remove. Commits are categorized based on FICs and assigned separate sentiment scores (-4 to +4) based on different preceding artifacts - Commits, Comments and Reviews from Pull Requests. The statistical result shows that FICs, compared to regular Commits, contain more positive Comments and Reviews. Commits that precede an FIC have more negative messages. Similarly, all the Pull Request artifacts combined are more negative for FICs than regular Commits.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124477944","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing Unsupervised Requirements Traceability with Sequential Semantics
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00013
Lei Chen, Dandan Wang, Junjie Wang, Qing Wang
Requirements traceability provides important support throughout the entire software life cycle; however, creating trace links manually is time-consuming and error-prone. Supervised automated solutions use machine learning or deep learning techniques to generate trace links, but require a large labeled dataset to train an effective model. Unsupervised solutions, such as word embedding approaches, can generate links by capturing the semantic meaning of artifacts and are gaining more attention. Nevertheless, our observation revealed that, besides semantic information, the sequential information of terms in the artifacts provides additional assistance for building accurate links. This paper proposes an unsupervised requirements traceability approach (named S2Trace) that learns the Sequential Semantics of software artifacts to generate trace links. Its core idea is to mine sequential patterns and use them to learn a document embedding representation. An evaluation conducted on five public datasets shows that our approach outperforms three typical baselines. The modeling of sequential information provides new insights into unsupervised traceability solutions, and the improvement in traceability accuracy further proves the usefulness of the sequential information.
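A toy sketch of the core idea follows: mine sequential patterns (reduced here to bigrams), represent each artifact over those patterns, and rank candidate trace links by cosine similarity. S2Trace's pattern mining and document-embedding learning are more sophisticated, and the artifacts below are invented.

```python
# Toy sketch: sequential patterns (bigrams) as features for ranking trace
# links by cosine similarity. Artifacts are invented.
from collections import Counter
from math import sqrt

def bigrams(text):
    toks = text.lower().split()
    return list(zip(toks, toks[1:]))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

req = "the system shall encrypt user data before storage"
code_docs = {
    "CryptoStore.java": "encrypt user data and write encrypted user data to disk",
    "Logger.java": "append log line to the rotating log file",
}
req_vec = Counter(bigrams(req))
ranked = sorted(code_docs,
                key=lambda d: cosine(req_vec, Counter(bigrams(code_docs[d]))),
                reverse=True)
print(ranked)  # CryptoStore.java should rank first
```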
{"title":"Enhancing Unsupervised Requirements Traceability with Sequential Semantics","authors":"Lei Chen, Dandan Wang, Junjie Wang, Qing Wang","doi":"10.1109/APSEC48747.2019.00013","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00013","url":null,"abstract":"Requirements traceability provides important support throughout all software life cycle; however, creating such links manually is time-consuming and error-prone. Supervised automated solutions use machine learning or deep learning techniques to generate trace links, but require large labeled dataset to train an effective model. Unsupervised solutions as word embedding approaches can generate links by capturing the semantic meaning of artifacts and are gaining more attention. Despite that, our observation revealed that, besides the semantic information, the sequential information of terms in the artifacts would provide additional assistance for building the accurate links. This paper proposes an unsupervised requirements traceability approach (named S2Trace) which learns the Sequential Semantics of software artifacts to generate the trace links. Its core idea is to mine the sequential patterns and use them to learn the document embedding representation. Evaluation is conducted on five public datasets, and results show that our approach outperforms three typical baselines. The modeling of sequential information in this paper provides new insights into the unsupervised traceability solutions, and the improvement in the traceability accuracy further proves the usefulness of the sequential information.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"44 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115816743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BLINKER: A Blockchain-Enabled Framework for Software Provenance
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00010
R.P. Jagadeesh Chandra Bose, Kanchanjot Kaur Phokela, Vikrant S. Kaulgud, Sanjay Podder
There has been a considerable shift in how software is built and delivered today. Most deployed software systems are now created by (autonomous) distributed teams in heterogeneous environments, making use of many artifacts, such as externally developed libraries, drawn from a variety of disparate sources. Stakeholders such as developers, managers, and clients across the software delivery value chain are interested in gaining insights such as how and why an artifact came to where it is, what other artifacts are related to it, and who else is using it. Software provenance encompasses the origins of artifacts, their evolution, and their usage, and is critical for comprehension, management, decision-making, and the analysis of software quality, processes, people, issues, etc. In this paper, we propose an extensible framework based on standard provenance model specifications and blockchain technology for capturing, storing, exploring, and analyzing software provenance data. Our framework (i) enhances the trustworthiness of provenance data, (ii) uncovers non-trivial insights through inference and reasoning, and (iii) enables interactive visualization of provenance insights. We demonstrate the utility of the proposed framework using open source project data.
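A minimal sketch of the tamper-evidence idea behind blockchain-backed provenance follows: PROV-style records appended to a hash-chained ledger whose integrity can be re-verified. Consensus, distribution, and the actual BLINKER framework are out of scope, and the record fields are illustrative.

```python
# Minimal sketch of tamper-evident provenance: PROV-style records appended
# to a hash-chained ledger. Consensus and distribution are out of scope.
import hashlib
import json
import time

class ProvenanceLedger:
    def __init__(self):
        self.chain = []

    def append(self, record: dict):
        prev = self.chain[-1]["hash"] if self.chain else "0" * 64
        body = {"record": record, "prev": prev, "ts": time.time()}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        self.chain.append({**body, "hash": digest})

    def verify(self) -> bool:
        for i, block in enumerate(self.chain):
            body = {k: block[k] for k in ("record", "prev", "ts")}
            digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if digest != block["hash"]:
                return False                       # block contents were altered
            if i and block["prev"] != self.chain[i - 1]["hash"]:
                return False                       # chain linkage was broken
        return True

ledger = ProvenanceLedger()
ledger.append({"entity": "libfoo-1.2.jar", "wasDerivedFrom": "libfoo-1.1.jar",
               "wasAttributedTo": "dev-team-a"})
print(ledger.verify())                             # True
ledger.chain[0]["record"]["wasAttributedTo"] = "mallory"  # tamper with history
print(ledger.verify())                             # False
```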
{"title":"BLINKER: A Blockchain-Enabled Framework for Software Provenance","authors":"R.P. Jagadeesh Chandra Bose, Kanchanjot Kaur Phokela, Vikrant S. Kaulgud, Sanjay Podder","doi":"10.1109/APSEC48747.2019.00010","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00010","url":null,"abstract":"There has been a considerable shift in the way how software is built and delivered today. Most deployed software systems in modern times are created by (autonomous) distributed teams in heterogeneous environments making use of many artifacts, such as externally developed libraries, drawn from a variety of disparate sources. Stakeholders such as developers, managers, and clients across the software delivery value chain are interested in gaining insights such as how and why an artifact came to where it is, what other artifacts are related to it, and who else is using this. Software provenance encompasses the origins of artifacts, their evolution, and usage and is critical for comprehending, managing, decision-making, and analyzing software quality, processes, people, issues etc. In this paper, we propose an extensible framework based on standard provenance model specifications and blockchain technology for capturing, storing, exploring, and analyzing software provenance data. Our framework (i) enhances trustworthiness of provenance data (ii) uncovers non-trivial insights through inferences and reasoning, and (iii) enables interactive visualization of provenance insights. We demonstrate the utility of the proposed framework using open source project data.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129798066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Identifying Interaction Components in Collaborative Cyber-Physical Systems
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00035
D. Horn, Nazakat Ali, Jang-Eui Hong
Due to the diverse set of heterogeneous computing devices communicating with one another and fusing with physical components in Cyber-Physical Systems, software engineers may use different tools and/or modeling languages to formally describe or verify system properties. As a result, the integration of these diverse constituents poses key challenges, such as identifying the interactions of components to be synthesized for a function in the system. Although existing approaches such as ontologies and semantic integration languages have been used for specifying the interactions of components in a Cyber-Physical System, they are still not applicable to discovering component interactions in collaborative Cyber-Physical Systems. This is because the functionalities of Cyber-Physical Systems are generally realized through interactions among multiple systems in a collaborative environment. This paper proposes a model interaction language, CyPhyML+, which can identify the component interactions of realized functions in collaborative Cyber-Physical Systems. We show the validity and applicability of the proposed approach via an Automatic Incident Detection System.
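As a toy illustration of the identification idea, the sketch below declares provided and required interfaces per component and derives candidate interactions by matching them across collaborating systems; the names are invented and do not reflect actual CyPhyML+ syntax.

```python
# Toy sketch: components declare provided/required interfaces, and candidate
# interactions are identified by matching them. Names are invented and do
# not reflect the actual CyPhyML+ language.

components = {
    "CameraUnit":       {"provides": {"video_stream"},   "requires": set()},
    "IncidentDetector": {"provides": {"incident_alert"}, "requires": {"video_stream"}},
    "TrafficControl":   {"provides": set(),              "requires": {"incident_alert"}},
}

def identify_interactions(components):
    links = []
    for src, s in components.items():
        for dst, d in components.items():
            for iface in s["provides"] & d["requires"]:
                links.append((src, iface, dst))
    return links

for src, iface, dst in identify_interactions(components):
    print(f"{src} --{iface}--> {dst}")
# CameraUnit --video_stream--> IncidentDetector
# IncidentDetector --incident_alert--> TrafficControl
```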
{"title":"Automatic Identifying Interaction Components in Collaborative Cyber-Physical Systems","authors":"D. Horn, Nazakat Ali, Jang-Eui Hong","doi":"10.1109/APSEC48747.2019.00035","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00035","url":null,"abstract":"Due to diverse set of heterogeneous computing devices communicating with one another and fusing with physical components in Cyber-Physical Systems, software engineers may use different tools and/or modeling languages to formally describe or verify the system properties. As a result, the integration of these diverse constituents poses key challenges such as task for identifying interactions of components to be synthesized for a function in the systems. Although existing studies such as ontology and integration semantic languages have been used for specifying interactions of components in a Cyber-Physical System, these are still not applicable to discover the component interactions in collaborative Cyber-Physical Systems. It is due to the fact that functionalities of Cyber-Physical Systems are generally realized through interactions among multiple systems in a collaborative environment. This paper proposes a model interaction language, CyPhyML+ which can identify component interactions of realized functions in collaborative Cyber-Physical Systems. We show the proposed approach validity and applicability via an Automatic Incident Detection System.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124295998","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
History Coupling Space: A New Model to Represent Evolutionary Relations
Pub Date: 2019-12-01 | DOI: 10.1109/APSEC48747.2019.00026
Ran Mo, Mengya Zhan
During software evolution, files are usually changed together to accommodate modifications. Although co-change analysis has been widely adopted in different studies, such as defect prediction, impact analysis, and architectural relation identification, there has been little work characterizing co-changed files as a group and modeling the evolution of these files to represent software maintenance and evolution. In this paper, we present the concept of history coupling dependency, based on which we propose a novel model, the history coupling space (HCSpace), to link co-changed files and represent how files are historically connected as a group. Our investigations of seven open source projects show that each HCSpace can be treated as a maintenance unit in which the involved files are more likely to evolve together. The results also show that HCSpaces have important impacts on a project's maintenance. In particular, we demonstrate that the identified HCSpaces consume a relatively large portion of the maintenance effort spent on a project, and that these HCSpaces are still actively evolving.
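A simplified sketch of grouping historically co-changed files follows: build a co-change graph from commit file sets and take its connected components as candidate groups. The paper's history coupling dependency and HCSpace construction are richer, and the commit history below is invented.

```python
# Simplified sketch: co-change graph from commit file sets, with connected
# components as candidate groups of historically coupled files.
from collections import defaultdict
from itertools import combinations

commits = [
    {"a.c", "a.h"},
    {"a.c", "parser.c"},
    {"ui.js", "ui.css"},
]

graph = defaultdict(set)
for files in commits:
    for f, g in combinations(sorted(files), 2):
        graph[f].add(g)
        graph[g].add(f)

def components(graph):
    seen, groups = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:                 # depth-first traversal of one component
            n = stack.pop()
            if n not in comp:
                comp.add(n)
                stack.extend(graph[n] - comp)
        seen |= comp
        groups.append(comp)
    return groups

# Two groups: {a.c, a.h, parser.c} and {ui.js, ui.css}
print(components(graph))
```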
{"title":"History Coupling Space: A New Model to Represent Evolutionary Relations","authors":"Ran Mo, Mengya Zhan","doi":"10.1109/APSEC48747.2019.00026","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00026","url":null,"abstract":"During software evolution, files are usually changed together for accommodating modifications. Although co-change analysis has been widely adopted for difference studies, such as defect prediction, impact analysis, architectural relations identification etc., there has been little work characterizing co-changed files as a group and modeling the evolution of these files to represent software maintenance and evolution. In this paper, we present the concept of history coupling dependency, based on which, we propose a novel model, history coupling space (HCSpace), to link the co-changed files and represent how files are historically connected as a group. Our investigations on seven open source projects show that each HCSpace could be treated as a maintenance unit where the involved files are more likely to evolve together. The results also show that the HCSpaces have important impacts on a project's maintenance. In particular, we demonstrate that identified HCSpaces consumes a relatively large portion of maintenance effort spent on a project, and these HCSpaces are still actively evolving.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"1854 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129878557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Generating Mock Skeletons for Lightweight Web-Service Testing
Pub Date: 2019-10-21 | DOI: 10.1109/APSEC48747.2019.00033
T. Bhagya, Jens Dietrich, H. Guesgen
Modern application development allows applications to be composed from lightweight HTTP services. Testing such an application requires the availability of the services it makes requests to. However, access to dependent services during testing may be restricted. Simulating the behaviour of such services is therefore useful for addressing their absence and moving application testing forward. This paper examines the appropriateness of Symbolic Machine Learning algorithms for automatically synthesising mock skeletons of HTTP services from network traffic recordings. These skeletons can then be customised to create mocks that generate service responses suitable for testing. The mock skeletons have human-readable logic for key aspects of service responses, such as headers and status codes, and are highly accurate.
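To illustrate the symbolic-learning idea, the sketch below fits a decision tree that maps recorded request features to response status codes and prints its human-readable rules; the traffic is invented, and the paper's feature set and learning algorithms differ.

```python
# Sketch: learn human-readable rules mapping request features to response
# status codes from (invented) recorded HTTP traffic.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

requests = [
    {"method": "GET",  "has_auth": 1, "path": "/users"},
    {"method": "GET",  "has_auth": 0, "path": "/users"},
    {"method": "POST", "has_auth": 1, "path": "/users"},
    {"method": "GET",  "has_auth": 1, "path": "/missing"},
]
status_codes = ["200", "401", "201", "404"]

vec = DictVectorizer(sparse=False)        # one-hot encode categorical features
X = vec.fit_transform(requests)
tree = DecisionTreeClassifier().fit(X, status_codes)

# Human-readable skeleton logic, ready to be customised into a mock.
print(export_text(tree, feature_names=vec.get_feature_names_out().tolist()))
```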
{"title":"Generating Mock Skeletons for Lightweight Web-Service Testing","authors":"T. Bhagya, Jens Dietrich, H. Guesgen","doi":"10.1109/APSEC48747.2019.00033","DOIUrl":"https://doi.org/10.1109/APSEC48747.2019.00033","url":null,"abstract":"Modern application development allows applications to be composed using lightweight HTTP services. Testing such an application requires the availability of services that the application makes requests to. However, access to dependent services during testing may be restrained. Simulating the behaviour of such services is, therefore, useful to address their absence and move on application testing. This paper examines the appropriateness of Symbolic Machine Learning algorithms to automatically synthesise HTTP services' mock skeletons from network traffic recordings. These skeletons can then be customised to create mocks that can generate service responses suitable for testing. The mock skeletons have human-readable logic for key aspects of service responses, such as headers and status codes, and are highly accurate.","PeriodicalId":325642,"journal":{"name":"2019 26th Asia-Pacific Software Engineering Conference (APSEC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132133476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}