A Research Landscape of Software Engineering Education
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00026
Xin Huang, He Zhang, Xin Zhou, Dong Shao, M. L. Jaccheri
Nowadays, software permeates almost every aspect of our lives. To produce complex and large-scale software products, a large number of software engineers are required. Accordingly, researchers and educators recognize the importance of Software Engineering Education (SEE), and many studies related to SEE have been published in recent years. To synthesize this large body of research, several Systematic Literature Reviews (SLRs) focusing on different areas of SEE have been conducted and reported. However, due to their limited focuses, none of these SLRs is able to depict the overall state of the art of SEE. To remedy this, we conducted a tertiary study on SEE, which identifies 26 relevant SLRs published between 2004 and 2019. By classifying and positioning these SLRs along two dimensions, i.e., the education methods/tools applied in SEE and the research topics related to SEE, we present a landscape of SEE that locates the SLRs and their research dimensions. Further, we collected the issues studied in the published research and those that still need to be addressed by instructors. This paper also discusses the challenges in the current SEE research landscape.
{"title":"A Research Landscape of Software Engineering Education","authors":"Xin Huang, He Zhang, Xin Zhou, Dong Shao, M. L. Jaccheri","doi":"10.1109/APSEC53868.2021.00026","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00026","url":null,"abstract":"Nowadays, software permeates almost every aspect of our lives. To produce complex and large-scale software products, a large number of software engineers are required. Accordingly, researchers and educators recognize the importance of Software Engineering Education (SEE), and many studies related to SEE have been published in recent years. To synthesize the large amount of research in SEE, some Systematic Literature Reviews (SLRs) focusing on different areas of SEE have been conducted and reported. However, due to their limited focuses, none of these SLRs is able to depict an overall state-of-the-art for SEE. To remedy this, we conducted a tertiary study on SEE, which identifies 26 relevant SLRs published between 2004 and 2019. By classifying and positioning these SLRs in two dimensions, i.e. the education methods/tools applied for SEE and the research topics related to SEE, we present a landscape of SEE, which locates the SLRs on SEE and their research dimensions. Further, we collected the issues studied in the published research and those that need to be addressed for instructors. This paper also discusses the challenges of the current SEE research landscape.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134458242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DeepRelease: Language-agnostic Release Notes Generation from Pull Requests of Open-source Software
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00018
Huaxi Jiang, Jie Zhu, Li Yang, Geng Liang, Chun Zuo
The release note is an essential software artifact of open-source software that documents crucial information about changes, such as new features and bug fixes. With the help of release notes, both developers and users can gain a general understanding of the latest version without browsing the source code. However, producing release notes is a daunting and time-consuming job for developers. Although prior studies have provided some automatic approaches, they generate release notes mainly by extracting information from code changes, which makes them language-specific and not general enough to be widely applicable. Therefore, helping developers produce release notes effectively remains an unsolved challenge. To address the problem, we first conduct a manual study on the release notes of 900 GitHub projects, which reveals that more than 54% of projects produce their release notes with pull requests. Based on this empirical finding, we propose a deep learning based approach named DeepRelease (Deep learning based Release notes generator) that generates release notes from pull requests. Release notes generation in DeepRelease includes generating the change entries and generating the change category (i.e., new features or bug fixes), which are formulated as a text summarization task and a multi-class classification problem, respectively. Since DeepRelease fully employs text information from pull requests to summarize changes and identify the change category, it is language-agnostic and can be used for projects in any language. We build a dataset with over 46K release notes and evaluate DeepRelease on it. The experimental results indicate that DeepRelease outperforms four baselines and can generate release notes similar to manually written ones in a fraction of the time.
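As an illustration of the two-stage pipeline described in the abstract, the following is a minimal, non-learning sketch in Python: a stand-in generate_change_entry function plays the role of the summarization stage, and a keyword-based classify_change stands in for the multi-class category classifier. All names, categories, and heuristics here are invented for illustration; the paper itself uses deep learning models trained on pull request text.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    title: str
    description: str

# Hypothetical category cues; the paper distinguishes classes such as new features and bug fixes.
CATEGORIES = {
    "fix": "Bug fixes",
    "bug": "Bug fixes",
    "add": "New features",
    "feature": "New features",
    "doc": "Documentation",
}

def generate_change_entry(pr: PullRequest) -> str:
    """Stand-in for the summarization stage: take the first sentence of the PR text."""
    text = (pr.title or pr.description).strip()
    return text.split(". ")[0].rstrip(".")

def classify_change(entry: str) -> str:
    """Stand-in for the multi-class classification stage, using keyword cues."""
    lowered = entry.lower()
    for keyword, category in CATEGORIES.items():
        if keyword in lowered:
            return category
    return "Other changes"

def release_notes(prs: list[PullRequest]) -> str:
    grouped: dict[str, list[str]] = {}
    for pr in prs:
        entry = generate_change_entry(pr)
        grouped.setdefault(classify_change(entry), []).append(entry)
    return "\n".join(
        f"## {cat}\n" + "\n".join(f"- {e}" for e in entries)
        for cat, entries in grouped.items()
    )

if __name__ == "__main__":
    prs = [
        PullRequest("Fix crash when config file is missing", ""),
        PullRequest("Add dark mode to the settings page", ""),
    ]
    print(release_notes(prs))
```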
{"title":"DeepRelease: Language-agnostic Release Notes Generation from Pull Requests of Open-source Software","authors":"Huaxi Jiang, Jie Zhu, Li Yang, Geng Liang, Chun Zuo","doi":"10.1109/APSEC53868.2021.00018","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00018","url":null,"abstract":"The release note is an essential software artifact of open-source software that documents crucial information about changes, such as new features and bug fixes. With the help of release notes, both developers and users could have a general understanding of the latest version without browsing the source code. However, it is a daunting and time-consuming job for developers to produce release notes. Although prior studies have provided some automatic approaches, they generate release notes mainly by extracting information from code changes. This will result in language-specific and not being general enough to be applicable. Therefore, helping developers produce release notes effectively remains an unsolved challenge. To address the problem, we first conduct a manual study on the release notes of 900 GitHub projects, which reveals that more than 54% of projects produce their release notes with pull requests. Based on the empirical finding, we propose a deep learning based approach named DeepRelease (Deep learning based Release notes generator) to generate release notes according to pull requests. The process of release notes generation in DeepRelease includes the change entries generation and the change category (i.e., new features or bug fixes) generation, which are formulated as a text summarization task and a multi-class classification problem, respectively. Since DeepRelease fully employs text information from pull requests to summarize changes and identify the change category, it is language-agnostic and can be used for projects in any language. We build a dataset with over 46K release notes and evaluate DeepRelease on the dataset. The experimental results indicate that DeepRelease outperforms four baselines and can generate release notes similar to those manually written ones in a fraction of the time.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130874089","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
NameChecker: Detecting Inconsistency between Method Names and Method Bodies
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00010
Kejun Li, Taiming Wang, Hui Liu
Methods are basic elements of functional organization in software applications. A high-quality method name should clearly express the method's function and help developers understand its usage quickly without reading through the lengthy and complex method body. However, in some cases, method names can be inconsistent with their functional implementations. The inconsistency in turn may result in inaccurate interpretation of methods and even buggy method invocations. To this end, in this paper we propose a deep learning-based approach, called NameChecker, to detect the inconsistency between method names and their corresponding method bodies. NameChecker extracts lexical and structural features of source code by static code analysis. Based on the extracted features, NameChecker employs deep learning techniques (i.e., an LSTM and an attention mechanism) to predict whether a given method name is consistent with its implementation. Unlike other deep learning based approaches to inconsistency detection, NameChecker avoids the generation (recommendation) of method names. Empirical studies suggest that generated method names are often incorrect, so avoiding method name generation may significantly improve the accuracy of NameChecker. We evaluate NameChecker on open-source applications, and our evaluation results suggest that NameChecker improves the state of the art by increasing the F1-score from 66.7% to 73.4%.
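The following is a rough sketch, assuming PyTorch, of the general model shape the abstract describes: the method-name tokens and method-body tokens are encoded separately with BiLSTMs, an attention step relates the two, and a single consistency probability is emitted instead of a generated name. The class name, dimensions, and pooling scheme are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ConsistencyChecker(nn.Module):
    def __init__(self, vocab_size=10_000, embed_dim=128, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.name_enc = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.body_enc = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Sequential(
            nn.Linear(4 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, name_ids, body_ids):
        name_h, _ = self.name_enc(self.embed(name_ids))   # (B, Ln, 2H)
        body_h, _ = self.body_enc(self.embed(body_ids))   # (B, Lb, 2H)
        # Dot-product attention from each name token over the body tokens.
        scores = torch.bmm(name_h, body_h.transpose(1, 2))          # (B, Ln, Lb)
        context = torch.bmm(torch.softmax(scores, dim=-1), body_h)  # (B, Ln, 2H)
        pooled = torch.cat([name_h.mean(dim=1), context.mean(dim=1)], dim=-1)
        return torch.sigmoid(self.classifier(pooled)).squeeze(-1)   # consistency score

if __name__ == "__main__":
    model = ConsistencyChecker()
    name = torch.randint(1, 10_000, (2, 5))   # e.g. sub-tokens of "getUserName"
    body = torch.randint(1, 10_000, (2, 60))  # lexical tokens of the method body
    print(model(name, body))  # scores near 1 would indicate consistent pairs
```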
{"title":"NameChecker: Detecting Inconsistency between Method Names and Method Bodies","authors":"Kejun Li, Taiming Wang, Hui Liu","doi":"10.1109/APSEC53868.2021.00010","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00010","url":null,"abstract":"Methods are basic elements for functional organization in software applications. A high-quality method name should clearly express its function, and help developers understand its usages quickly without reading through the lengthy and complex method body. However, in some cases, method names could be inconsistent with their functional implementations. The inconsistency in turn may result in inaccurate interpretation of methods, and even buggy method invocations. To this end, in this paper, we propose a deep learning-based approach, called NameChecker, to detecting the inconsistency between method names and their corresponding method bodies. NameChecker extracts lexical and structural features of source code by static code analysis. Based on the extracted features, NameChecker employs deep learning techniques (i.e., LSTM, and Attention mechanism) to predict whether the given method name is consistent with its implementation. Different from other deep learning based approaches to inconsistency detection, NameChecker avoids the generation (recommendation) of method names. Empirical studies suggested that generated method names are often incorrect, and thus avoiding method name generation may significantly improve the accuracy of NameChecker. We evaluate NameChecker on open-source applications, and our evaluation results suggest that NameChecker improves the state of the art by increasing the F1-score from 66.7% to 73.4%.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134049614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mining API Constraints from Library and Client to Detect API Misuses
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00024
Hushuang Zeng, Jingxin Chen, Beijun Shen, Hao Zhong
Calls to Application Programming Interfaces (APIs) must follow various constraints (e.g., call orders). If these constraints are violated, API misuses are introduced into code, and such misuses can cause severe bugs. To effectively detect API misuses, most prior approaches mine constraints from client code and assume that violations of these constraints are potential misuses. However, as client code only illustrates a small portion of API usages, constraints mined from client code are typically incomplete. As a result, when mined constraints are used to detect bugs, many violations turn out to be false positives. In this paper, our purpose is to find more misuses and to reduce false positives. As library code contains many details about APIs, we propose an approach that mines API constraints from both client and library code. From client code, our approach builds API usage graphs and uses a frequent subgraph mining algorithm to mine frequent usage patterns as API constraints. From library code, our approach derives various types of constraints with predefined strategies. With constraints from both sources, our graph matching algorithm can detect API misuses. As a result, our approach benefits from both the comprehensiveness and informativeness of library-based constraints and the accuracy of client-based patterns. We compared our approach with MuDetect on the MuBench dataset. Our results show that it significantly improves detection effectiveness on MuBench, increasing recall from 39.5% to 50.2% and precision from 30.6% to 41.7%.
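To make the mining-and-matching idea concrete, here is a deliberately simplified sketch that mines pairwise call-order constraints from client call sequences and flags violations; the paper itself works on API usage graphs with frequent subgraph mining and additionally derives constraints from library code. The function names, support threshold, and toy API names are illustrative.

```python
from collections import Counter
from itertools import combinations

def mine_order_constraints(client_sequences, min_support=0.8):
    """Return (a, b) pairs where a precedes b in at least min_support of their co-occurrences."""
    before, total = Counter(), Counter()
    for seq in client_sequences:
        for i, j in combinations(range(len(seq)), 2):
            a, b = seq[i], seq[j]
            if a == b:
                continue
            total[tuple(sorted((a, b)))] += 1
            before[(a, b)] += 1
    return {
        (a, b)
        for (a, b), n in before.items()
        if n / total[tuple(sorted((a, b)))] >= min_support
    }

def detect_misuses(seq, constraints):
    """Flag calls in seq that violate a mined ordering constraint."""
    positions = {api: i for i, api in enumerate(seq)}
    return [
        f"'{a}' should be called before '{b}'"
        for (a, b) in constraints
        if a in positions and b in positions and positions[a] > positions[b]
    ]

if __name__ == "__main__":
    clients = [
        ["lock", "read", "unlock"],
        ["lock", "write", "unlock"],
        ["lock", "read", "write", "unlock"],
    ]
    constraints = mine_order_constraints(clients)
    print(detect_misuses(["read", "lock", "unlock"], constraints))
```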
{"title":"Mining API Constraints from Library and Client to Detect API Misuses","authors":"Hushuang Zeng, Jingxin Chen, Beijun Shen, Hao Zhong","doi":"10.1109/APSEC53868.2021.00024","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00024","url":null,"abstract":"Calling Application Programming Interfaces (APIs) shall follow various constraints (e.g., call orders). If these con-straints are violated, API misuses are introduced to code, and such misuses can cause severe bugs. To effectively detect API misuses, most prior approaches mine constraints from client code, and assume that the violations of constraints are potential misuses. However, as client code only illustrates a small portion of API usages, constraints mined from client code are typically incomplete. As a result, when mined constraints are used to detect bugs, many violations of constraints turn out to be false positives. In this paper, our research purpose is to find more misuses and to reduce false positives. As library code contains many details on APIs, we propose an approach that mines API constraints from both client and library code. From client code, our approach builds API usage graphs and uses a frequent subgraph mining algorithm to mine frequent usage patterns as API constraints. From library code, our approach derives various types of constraints with our predefined strategies. With constraints from both sources, our graph matching algorithm can detect API misuses. As a result, our approach takes advantage from both the comprehensiveness and informativeness of library-based constraints and the accuracy of client-based patterns. We compared our approach with MuDetect on the MuBench dataset. Our results show that it significantly improves the detection effectiveness of MuBench from 39.5% to 50.2% of the recall, and from 30.6% to 41.7% of the precision.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115505458","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Novel Architectural Design for Solving Lost-Link Problems in UAV Collaboration
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00045
G. Airlangga, Alan Liu
Research on unmanned aerial vehicles (UAVs) has gained attention from various communities because of their potential to improve safety and efficiency in different applications. UAVs have shown promising results in dangerous conditions such as forest fires, search and rescue, medical deliveries, wildlife monitoring, and geophysical scanning. However, external conditions, such as areas with slow or no internet connection (e.g., rural areas, farms, forests, and oceans), may affect the performance of UAVs. These conditions can be regarded as lost-link problems. Several approaches have been proposed to resolve such issues by implementing robust on-board architectures, applying machine learning, and developing knowledge-based reasoning systems. However, much of the software architecture research has concentrated on UAV implementation under normal network conditions. Thus, we propose a model that considers lost-link problems in software architecture. In this paper, we describe two interconnected architectures for the client and the server. The UAV, as a client, is controlled by a microkernel-based architecture, and the server is developed using a microservice architecture. The two are connected by a synchronizer component used to collect, filter, analyze, predict, and mitigate when a lost-link problem occurs. Therefore, the UAV can still find an appropriate action to complete a mission as long as its sensors and actuators are not in a critical condition. Experiment results show that our approach yields a high percentage of mission accomplishment, fault tolerance, and performance in a lost-link situation.
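As a concrete illustration of the synchronizer role described above, here is a minimal Python sketch of a client-side component that detects a lost link via missing server acknowledgements and falls back to a locally chosen mitigation action. The timeout, telemetry fields, and action names are assumptions made for illustration, not the paper's design.

```python
import time
from dataclasses import dataclass, field

LOST_LINK_TIMEOUT_S = 5.0  # illustrative threshold

@dataclass
class Synchronizer:
    last_ack: float = field(default_factory=time.monotonic)

    def on_server_ack(self) -> None:
        """Called whenever the ground server acknowledges a telemetry message."""
        self.last_ack = time.monotonic()

    def link_lost(self) -> bool:
        return time.monotonic() - self.last_ack > LOST_LINK_TIMEOUT_S

    def decide(self, telemetry: dict) -> str:
        if not self.link_lost():
            return "FOLLOW_SERVER_PLAN"            # normal case: server-side services plan the mission
        if telemetry.get("battery", 1.0) < 0.2 or telemetry.get("sensor_fault", False):
            return "RETURN_TO_HOME"                # critical condition: abort safely
        return "CONTINUE_MISSION_AUTONOMOUSLY"     # non-critical: use the on-board plan

if __name__ == "__main__":
    sync = Synchronizer()
    sync.last_ack -= 10                            # simulate 10 s without a server ack
    print(sync.decide({"battery": 0.6}))           # -> CONTINUE_MISSION_AUTONOMOUSLY
```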
{"title":"A Novel Architectural Design for Solving Lost-Link Problems in UAV Collaboration","authors":"G. Airlangga, Alan Liu","doi":"10.1109/APSEC53868.2021.00045","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00045","url":null,"abstract":"Research in unmanned aerial vehicles (UAVs) has gained attention from various communities because of their potential usage in improving safety and efficiency in different applications. An UAV has shown promising results in dangerous conditions such as forest fires, search and rescue, medical deliveries, wildlife monitoring and geophysical scanning. Some external conditions like slow or no internet connection areas such as rural, farm, forest, ocean, etc. may affect the performance of the UAVs. These conditions can be considered as lost-link problems. Several approaches have been conducted to resolve such issues by implementing robust on-board architecture, machine learning approaches and developing knowledge based reasoning systems. However, much of software architecture research has concentrated on UAV implementation in normal network condition. Thus, we propose a model for considering lost-link problems in software architecture. In this paper, we describe two interconnected architectures for client and server. The UAV as a client is controlled by microkernel based architecture and the server is developed using microservice architecture. Both of them are connected using a synchronizer component to collect, filter, analyze, predict, and mitigate an UAV when a lost-link problem occurs. Therefore, the UAV can still find an appropriate action to complete a mission as far as the sensor and actuator are not in a critical condition. Experiment results show that our approach yields high percentage of mission accomplishment, fault tolerance and performance in a lost-link situation.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124013235","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Fault Tree Generation in Open-PSA from UML Models
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00076
Hasnaa EL Jihad, Morayo Adedjouma, M. Morelli
This paper presents the coupling between systems engineering (SE) and safety analysis (SA) by proposing an approach that integrates a safety analysis methodology into the 'Papyrus 4 robotics' (P4R) framework. We are especially interested in the automatic generation of fault trees in the Open-PSA format from UML models.
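As a rough illustration of the generation target, the sketch below emits a small fault tree as Open-PSA-style XML from a simple failure description. The element names (opsa-mef, define-fault-tree, define-gate, basic-event) follow the Open-PSA Model Exchange Format as commonly used, but they should be checked against the specification and against what the P4R tooling actually produces; the failure events are hypothetical.

```python
import xml.etree.ElementTree as ET

def fault_tree_xml(tree_name: str, top_gate: str, gate_type: str, events: list[str]) -> str:
    """Serialize a one-gate fault tree in an Open-PSA-style XML layout."""
    root = ET.Element("opsa-mef")
    ft = ET.SubElement(root, "define-fault-tree", name=tree_name)
    gate = ET.SubElement(ft, "define-gate", name=top_gate)
    op = ET.SubElement(gate, gate_type)           # e.g. "or" / "and"
    for ev in events:
        ET.SubElement(op, "basic-event", name=ev)
    ET.indent(root)
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # Hypothetical failures that might be extracted from a UML/robotic component model.
    print(fault_tree_xml("GripperFailure", "gripper_drops_object",
                         "or", ["motor_failure", "sensor_failure"]))
```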
{"title":"Automated Fault Tree generation in Open-PSA from UML Models","authors":"Hasnaa EL Jihad, Morayo Adedjouma, M. Morelli","doi":"10.1109/APSEC53868.2021.00076","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00076","url":null,"abstract":"This paper presents the coupling between systems engineering (SE) and safety analysis (SA), by proposing an approach that allows integrating a safety analysis methodology to the ‘Papyrus 4 robotics’ (P4R) framework. We are especially interesting by automatic generation of fault trees in Open-PSA format from a UML Models.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122792352","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Duplicate Questions in Stack Overflow via Semantic and Relevance Approaches
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00019
Zhifang Liao, Wen-Xiong Li, Yan Zhang, Song Yu
Stack Overflow is a popular online Q&A website related to programming. Although Stack Overflow has detailed questioning guidance, duplicate questions still appear frequently, and a large number of duplicate questions make the quality of the community degraded. To solve this problem, Stack Overflow allows users with high reputations to manually mark duplicate questions. However, this method is inefficient and causes many duplicate questions to remain undiscovered. Therefore, this paper proposes a duplicate questions detection model based on semantic and relevance. The model employs Siamese BiLSTM to encode question pairs and captures the semantic interaction information of title and body through soft align attention and inference composition. The soft term match captures the relevance information in the title. We evaluate the effectiveness of the model in six question groups on Stack Overflow. Compared with the latest deep learning model, the F1-Score and ACC of our model increased by 9.401% and 8.901%, respectively. Experimental results show that our model outperforms the baselines and achieves competitive performance.
{"title":"Detecting Duplicate Questions in Stack Overflow via Semantic and Relevance Approaches","authors":"Zhifang Liao, Wen-Xiong Li, Yan Zhang, Song Yu","doi":"10.1109/APSEC53868.2021.00019","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00019","url":null,"abstract":"Stack Overflow is a popular online Q&A website related to programming. Although Stack Overflow has detailed questioning guidance, duplicate questions still appear frequently, and a large number of duplicate questions make the quality of the community degraded. To solve this problem, Stack Overflow allows users with high reputations to manually mark duplicate questions. However, this method is inefficient and causes many duplicate questions to remain undiscovered. Therefore, this paper proposes a duplicate questions detection model based on semantic and relevance. The model employs Siamese BiLSTM to encode question pairs and captures the semantic interaction information of title and body through soft align attention and inference composition. The soft term match captures the relevance information in the title. We evaluate the effectiveness of the model in six question groups on Stack Overflow. Compared with the latest deep learning model, the F1-Score and ACC of our model increased by 9.401% and 8.901%, respectively. Experimental results show that our model outperforms the baselines and achieves competitive performance.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122988242","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated Construction of Continuous Delivery Pipelines from Architecture Models
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00038
Selin Aydin, Andreas Steffens, H. Lichter
Continuous Delivery (CD) aims at reducing the cycle time from change to software release while also increasing software quality. To automate CD, delivery process models, which define all delivery activities, need to be designed. Quality properties of delivery process models, such as maintainability, still pose challenges. Previous research indicates that the quality of such models can be improved by aligning them with the software architecture. However, software architecture knowledge is only incorporated implicitly, while deep technical and process knowledge is required. On this basis, this paper introduces a new kind of delivery process model that focuses mainly on software architecture knowledge. To this end, we discard the current activity-centric view and shift to an artifact-centric view. Moreover, we outsource the required process and technical knowledge to a transformation activities knowledge base. To make an artifact-based delivery process model executable, we provide a model-to-model transformation that constructs a CD pipeline from the artifact-based model with the help of the transformation activities knowledge base. We evaluated our approach in a small industrial qualitative user study, which showed that less experienced developers benefit from the reduced knowledge requirements of the artifact-based modeling approach.
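To illustrate the transformation idea, the sketch below takes a toy artifact-centric model (artifacts with types and dependencies), looks up each artifact type in a hypothetical transformation activities knowledge base, and emits pipeline stages in dependency order. The artifact types, commands, and output format are invented for illustration and are not the paper's metamodel.

```python
from graphlib import TopologicalSorter

# Hypothetical knowledge base: artifact type -> activities that produce/verify it.
KNOWLEDGE_BASE = {
    "jar":          ["mvn package", "mvn test"],
    "docker-image": ["docker build -t {name} .", "docker push {name}"],
    "deployment":   ["kubectl apply -f {name}.yaml"],
}

# Artifact-based delivery model: artifact name -> (type, dependencies).
ARTIFACT_MODEL = {
    "billing.jar":   ("jar", []),
    "billing-image": ("docker-image", ["billing.jar"]),
    "billing-prod":  ("deployment", ["billing-image"]),
}

def to_pipeline(model, kb):
    """Model-to-model transformation: order artifacts by dependency, expand each via the knowledge base."""
    order = TopologicalSorter({a: deps for a, (_, deps) in model.items()}).static_order()
    stages = []
    for artifact in order:
        a_type, _ = model[artifact]
        stages.append((artifact, [step.format(name=artifact) for step in kb[a_type]]))
    return stages

if __name__ == "__main__":
    for artifact, steps in to_pipeline(ARTIFACT_MODEL, KNOWLEDGE_BASE):
        print(f"stage {artifact}:")
        for s in steps:
            print(f"  - {s}")
```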
{"title":"Automated Construction of Continuous Delivery Pipelines from Architecture Models","authors":"Selin Aydin, Andreas Steffens, H. Lichter","doi":"10.1109/APSEC53868.2021.00038","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00038","url":null,"abstract":"Continuous Delivery (CD) aims at reducing the cycle time from changes to software release while also increasing the software quality. To automate CD, delivery process models, defining all delivery activities need to be designed. Quality properties of delivery process models, such as maintainability, still oppose challenges. Previous research indicates that the quality of such models can be improved by aligning them with the software architecture. While software architecture knowledge is only incorporated implicitly, deep technical and process knowledge is required. On this basis, this paper introduces a new kind of delivery process models that focus mainly on software architecture knowledge. Hereby, we discard the current activity-centric view and shift to an artifact-centric view. Moreover, we outsource the required process- and technical knowledge to a transformation activities knowledge base. In order to make an artifact-based delivery process model executable, we provide a model-to-model transformation which constructs a CD pipeline from an artifact-based model with the help of the transformation activities knowledge base. We evaluated our approach by conducting a small industrial qualitative user study. It showed that low-experienced developers benefit from the reduced knowledge requirements of the artifact-based modeling approach.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114886567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Program Verification Enhanced Precise Analysis of Interrupt-Driven Program Vulnerabilities
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00033
Xiang Du, Liangze Yin, Haining Feng, Wei Dong
Due to the non-deterministic occurrence of interrupt service routines, vulnerabilities of interrupt-driven programs, such as data races and atomicity violations, are usually hard to discover. Static analysis is an effective method for vulnerability analysis of interrupt-driven programs. However, existing techniques usually produce a large number of false alarms, which limits the application of static analysis in practice. To achieve high precision in vulnerability analysis of interrupt-driven programs, this paper proposes a program verification enhanced precise analysis method. For each potential vulnerability detected by static analysis, we apply a vulnerability validation approach that employs program verification to automatically verify its feasibility. We have implemented a prototype of our method on top of CBMC. Experimental results on both an academic benchmark and 24 real-world programs show that our method can successfully identify true vulnerabilities and achieve highly precise analysis.
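As a sketch of the validation step, assume each static-analysis alarm has been turned into a small C harness whose assertion fails exactly when the reported vulnerability is feasible; the snippet below then invokes CBMC on each harness and keeps only the alarms whose assertion CBMC can actually violate. The harness convention, file names, and unwind bound are assumptions for illustration, not the paper's implementation.

```python
import subprocess

def alarm_is_feasible(harness_path: str, unwind: int = 10) -> bool:
    """Run CBMC on the harness; a 'VERIFICATION FAILED' verdict means the alarm is feasible."""
    result = subprocess.run(
        ["cbmc", harness_path, "--unwind", str(unwind)],
        capture_output=True, text=True,
    )
    return "VERIFICATION FAILED" in result.stdout

def validate(alarms: dict[str, str]) -> list[str]:
    """Keep only the static-analysis alarms whose generated harness CBMC shows to be reachable."""
    return [name for name, harness in alarms.items() if alarm_is_feasible(harness)]

if __name__ == "__main__":
    # Hypothetical alarms reported by the static-analysis front end, with their harness files.
    alarms = {
        "race on shared counter": "harness_counter_race.c",
        "atomicity violation in ISR": "harness_isr_atomicity.c",
    }
    print(validate(alarms))
```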
{"title":"Program Verification Enhanced Precise Analysis of Interrupt-Driven Program Vulnerabilities","authors":"Xiang Du, Liangze Yin, Haining Feng, Wei Dong","doi":"10.1109/APSEC53868.2021.00033","DOIUrl":"https://doi.org/10.1109/APSEC53868.2021.00033","url":null,"abstract":"Due to the non-deterministic occurring of interrupt service routines, vulnerabilities of interrupt-driven programs, such as data race and atomicity violation, are usually hard to discover. Static analysis is an effective method for vulnerability analysis of interrupt-driven programs. However, existing techniques usually produce a large number of false alarms, which limits the application of static analysis in practice. To achieve high precision in vulnerability analysis of interrupt-driven programs, this paper proposes a program verification enhanced precise analysis method. For each potential vulnerability detected by static analysis, we propose a vulnerability validation approach which employs program verification to further automatically verify its feasibility. We have implemented a prototype of our method on top of CBMC. Experimental results on both an academic benchmark and 24 real-world programs show that our method can successfully identify true vulnerabilities and achieve a high precise analysis.","PeriodicalId":143800,"journal":{"name":"2021 28th Asia-Pacific Software Engineering Conference (APSEC)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133225189","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}