Autonomous driving is a safety-critical system whose performance depends mainly on recognizing the environment from large amounts of spatio-temporal data and on a driving policy that copes with complex traffic conditions. It is therefore important to build an abstract model of the environment data and to establish a safety assessment method for the autonomous driving policy. To address this problem, we propose a quantitative safety verification approach for the abstract decision-making model of autonomous driving. We extract the essential spatio-temporal features from both observation and estimation and preserve them in the abstract decision-making model. In the estimation, we describe the uncertain driving decisions of vehicles explicitly by means of probability distributions. These time-dependent spatial features enable the specification, reasoning, and verification of safety properties. To evaluate the safety of the driving policy, we propose an operational verification approach based on Stochastic Hybrid Automata (SHA). Given the environmental information and the driving decisions derived from the planned route under certain traffic laws, a single-lane roundabout scenario illustrates how quantitative safety properties are verified in our approach using UPPAAL SMC, which can validate stochastic real-time models.
{"title":"A Quantitative Safety Verification Approach for the Decision-making Process of Autonomous Driving","authors":"Bingqing Xu, Qin Li, Tong Guo, Yi Ao, Dehui Du","doi":"10.1109/TASE.2019.000-9","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-9","url":null,"abstract":"Autonomous driving is a safety critical system whose performance mainly depends on the recognition of the environment through a large amount of spatio-temporal data and driving policy based on the complex traffic conditions. Thus, it is important and necessary to build the abstract model of environment data and set the safety assessment method for autonomous driving policy. To address the problem, we propose a quantitative safety verification approach for the abstract decision-making model of autonomous driving. We extract the essential spatio-temporal features from both observation and estimation, and preserve them in the abstract model of decision-making. In the estimation, we adopt the explicit description of the uncertain driving decisions of vehicles by means of probability distributions. Based on these time-dependent spatial features, specification, reasoning, and verification of safety property are enabled. To evaluate the safety of the driving policy, we propose an operational verification approach based on Stochastic Hybrid Automata (SHA). Given the environmental information and the corresponding driving decisions according to the planned route on the basis of certain traffic laws, the single-lane roundabout scenario is introduced to illustrate how to verify quantitative safety property in our verification approach by using UPPAAL SMC which can validate the stochastic real-time model.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114081473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rongjie Yan, Anyu Cai, Hongyu Gao, Feifei Ma, Jun Yan
Network-on-Chip (NoC) is a promising interconnection paradigm for state-of-the-art multi-core architectures. Its communication network increases the capacity for parallel data transfer and thereby improves system performance. In the design of MPSoC-based applications, multiple objectives exist, such as minimizing time and energy consumption; these may conflict, and the trade-offs need to be evaluated. Heuristic methods such as evolutionary algorithms are commonly adopted to find near-optimal solutions for such applications, but it is hard to assess the accuracy of those solutions. As most of the constraints on the mapping and scheduling process of NoCs can be described as logic formulas, we apply SMT-based methods to the multi-objective optimization of NoC-based MPSoCs. Moreover, to improve the scalability of the optimization, we propose to reduce the search space by exploiting the symmetry of the NoC architecture and to decompose the search process according to the structure of non-dominated solutions. Extensive experimental results on random and real-case benchmarks demonstrate the accuracy of the SMT-based methods in finding all Pareto fronts and the efficiency of the proposed strategies.
{"title":"SMT-based Multi-objective Optimization for Scheduling of MPSoC Applications","authors":"Rongjie Yan, Anyu Cai, Hongyu Gao, Feifei Ma, Jun Yan","doi":"10.1109/TASE.2019.000-5","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-5","url":null,"abstract":"Network-on-Chip (NoC) is a promising interconnecting paradigm in the state-of-the-art multi-core architectures. Its communication network can increase the capacity of parallel data transfer such that system performance is improved. In the design of MPSoC-based applications, multiple objectives exist, such as minimizing time and energy consumption, which may conflict and certain trade-off needs to be evaluated. Heuristic-based methods such as evolutionary algorithms are always adopted to find near-optimal solutions for such applications. However, it is hard to evaluate the accuracy of those solutions. As most of the constraints on the mapping and scheduling process of NoCs can be described as logic formulas, we apply SMT-based methods for the multi-objective optimization of NoC-based MPSoCs. Moreover, to improve the scalability of the optimization problem, we propose to reduce the search space with respect to the symmetry feature of NoC architecture, and to decompose the search process according to the feature of non-dominated solutions. Extensive experimental results from random and real-case benchmarks demonstrate the accuracy of SMT-based methods in finding all the Pareto-fronts, and the efficiency of the proposed strategies.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132672156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategy Logic is an expressive multi-agent logic that extends ATL* (the multi-agent version of CTL*). It allows for powerful and interdependent branching, making it possible to express, for example, the existence of a dominant strategy or of a qualitative Nash equilibrium. However, its model checking is non-elementary. Mogavero et al. conjectured that restricting the information strategies have about one another would bring the complexity back to 2-EXPTIME, the same as LTL or ATL* on games. This conjecture spurred a series of papers covering different semantics and fragments of Strategy Logic, and so far all of them have supported it. However, until now a 2-EXPTIME model-checking procedure has only been obtained by restricting the interactions between the different goals (the interdependent branches in a formula), for example by allowing only a single goal or a conjunction of goals. This severely limits the properties one can express; for instance, neither admissible strategies nor Nash equilibria can be expressed under these restrictions. In this paper, we prove that 2-EXPTIME model checking can be obtained for the fragment SL[BG] under the timeline semantics without restricting the interactions between goals, greatly improving expressiveness over other known fragments with 2-EXPTIME model checking. This places SL[BG] under the timeline semantics as the largest extension of ATL* with similar complexity, yet capable of expressing global properties.
{"title":"Low complexity and large interactions are possible in Strategy logic","authors":"Patrick Gardy","doi":"10.1109/TASE.2019.000-1","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-1","url":null,"abstract":"Strategy logic is an expressive multi-agent logic which extends ATL* (the multi-agent version of CTL*). It allows for powerful and interdependent branching (making it possible to express the existence of a dominant strategy, or a qualitative Nash equilibrium). However its model-checking is Non-Elementary. In their paper, Mogavero and al. conjectured that restricting the information strategies have of one another would bring the complexity back to 2-EXPTIME, the same as LTL or ATL* on games. This spurs a bunch of papers ranging over different semantics and fragments of Strategy Logic. As of this instant all papers supported the conjecture. However, so far a 2-EXPTIME model-checking is always obtained by restricting the interactions between the different goals (the interdependent branches in a formula); for example only using a single goal or a conjunction of goals. This severely limits the properties one can create, for example neither admissible strategies nor Nash equilibria can be expressed in these restrictions. In this paper, we prove that a 2-EXPTIME model-checking can be obtained for the fragment SL[BG] in the timeline semantic without restricting the interactions between the goals, greatly improving the expressiveness over other known fragments with 2-EXPTIME model-checking. This places SL[BG] in the timeline semantic as the largest extension of ATL* with similar complexity, yet capable of expressing global properties.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"62 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134180093","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Sub-Reviewers","authors":"","doi":"10.1109/tase.2019.00-28","DOIUrl":"https://doi.org/10.1109/tase.2019.00-28","url":null,"abstract":"","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130090224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zuxing Gu, Min Zhou, Jiecheng Wu, Yu Jiang, Jiaxiang Liu, M. Gu
Application Programming Interfaces (APIs) usually come with usage constraints, such as call conditions or call orders. Violating these constraints, known as API misuse, leads to system crashes, bugs, and even security problems, so it is crucial to detect such misuses early in the development process. Although many approaches have been proposed over the last years, recent studies show that API misuses are still prevalent, especially those specific to individual projects. In this paper, we strive to improve API-misuse detection for large-scale C programs. First, we propose IMSpec, a lightweight domain-specific language that lets developers specify API usage constraints in three aspects (parameter validation, error handling, and causal calling), which account for the majority of API-misuse bugs. Then, we tailor a constraint-guided static analysis engine that automatically parses IMSpec rules and detects API-misuse bugs with rich semantics. We evaluate our approach on widely used benchmarks and real-world projects. The results show that our easily extensible approach performs better than state-of-the-art tools. We also discover 19 previously unknown bugs in real-world open-source projects, all of which have been confirmed by the corresponding developers.
{"title":"IMSpec: An Extensible Approach to Exploring the Incorrect Usage of APIs","authors":"Zuxing Gu, Min Zhou, Jiecheng Wu, Yu Jiang, Jiaxiang Liu, M. Gu","doi":"10.1109/TASE.2019.00006","DOIUrl":"https://doi.org/10.1109/TASE.2019.00006","url":null,"abstract":"Application Programming Interfaces (APIs) usually have usage constraints, such as call conditions or call orders. Incorrect usage of these constraints, called API misuse, will result in system crashes, bugs, and even security problems. It is crucial to detect such misuses early in the development process. Though many approaches have been proposed over the last years, recent studies show that API misuses are still prevalent, especially the ones specific to individual projects. In this paper, we strive to improve current API-misuse detection capability for large-scale C programs. First, We propose IMSpec, a lightweight domain-specific language enabling developers to specify API usage constraints in three different aspects (i.e., parameter validation, error handling, and causal calling), which are the majority of API-misuse bugs. Then, we have tailored a constraint guided static analysis engine to automatically parse IMSpec rules and detect API-misuse bugs with rich semantics. We evaluate our approach on widely used benchmarks and real-world projects. The results show that our easily extensible approach performs better than state-of-the-art tools. We also discover 19 previously unknown bugs in real-world open-source projects, all of which have been confirmed by the corresponding developers.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"222 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113977899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Although browser extensions give users a better experience, they also create a hidden risk of privacy leakage. A common detection method is to monitor the transmission of private data; however, only unintended transmissions count as privacy leaks, so the real challenge is to determine whether a transmission is user-intended. To address this problem, we check whether a private data transmission is reasonable by establishing a classification-based privacy model for each extension, which delimits the scope of private data that may be uploaded and the domains it may be sent to. Furthermore, we present BEDS (Browser Extension Detection System), a Chromium-based dynamic detection system for extensions. BEDS first builds a privacy model for each extension and then records the extension's network logs and browser API logs while it accesses specified pages. Finally, BEDS determines whether a privacy leak exists according to strict privacy-leakage judgment rules. We test our implementation at large scale on extensions for the browsers developed by China's three major Internet companies, with 15 months of continuous tracking. After examining a total of 14,487 extensions, 1,897 privacy leaks are identified; all results have been inspected manually, and the accuracy of BEDS is over 97%. A number of domains that illegally collect private user data are discovered and tracked. Our results show that about 47,000 Chinese IPs upload private information to suspicious servers every day.
{"title":"Large-scale Detection of Privacy Leaks for BAT Browsers Extensions in China","authors":"Yufei Zhao, Longtao He, Zhoujun Li, Liqun Yang, Haolong Dong, Chao Li, Yu Wang","doi":"10.1109/TASE.2019.00-19","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-19","url":null,"abstract":"Although browser extensions bring users a better experience, it creates a hidden danger of privacy leakage. A common privacy leakage detection method is realized through detecting private data transmission. However, only the unintended transmission is considered to be a privacy leak. Therefore, the real challenge is to determine whether or not the transmission is user intended. In order to address this problem, we check the rationality of private data transmission by establishing a privacy model based on classification for extensions to confirm the scope of private data that can be uploaded and domains that can be sent to. Furthermore, we present BEDS (Browser Extension Detection System), a Chromium based extension dynamic detection system. BEDS first builds a privacy model for each extension and then records the extension's network logs and browser API logs when accessing specified pages. Finally, BEDS determines whether there exists a privacy leak according to the strict privacy leakage judgment rules. We test our implementation in large scale on extensions in browsers developed by China's three major Internet companies and complete 15 months of continuous tracking. After examining a total of 14,487 extensions, 1,897 privacy leaks are identified, all results have been inspected by manual and the accuracy of BEDS is over 97%. A number of domains that illegally collect private user data are discovered and tracked. Our results show that about 47,000 Chinese IPs upload private information to suspicious servers every day.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129858524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Critical real-time systems must be verified to avoid the risk of dramatic consequences in case of failure. Thales developed the open formalism "Time4Sys" to model real-time systems, with expressive features such as periodic or sporadic tasks, task dependencies, distributed systems, etc. However, Time4Sys does not natively support formal reasoning. In this work, we present a translation from Time4Sys to (parametric) timed automata so as to enable formal verification.
{"title":"Formalizing Time4sys using parametric timed automata","authors":"É. André","doi":"10.1109/TASE.2019.000-3","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-3","url":null,"abstract":"Critical real-time systems must be verified to avoid the risk of dramatic consequences in case of failure. Thales developed an open formalism \"Time4Sys\" to model real-time systems, with expressive features such as periodic or sporadic tasks, task dependencies, distributed systems, etc. However, Time4Sys does not natively allow for a formal reasoning. In this work, we present a translation from Time4Sys to (parametric) timed automata, so as to allow for a formal verification.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115180202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yu Zhang, Haowei Deng, Quanxi Li, Haoze Song, Lei Nie
Quantum computing technology has experienced a second renaissance in the last decade. However, in the NISQ era pointed out by John Preskill in 2018, quantum noise and decoherence, which affect the accuracy and the execution results of quantum programs, can neither be ignored nor corrected by near-term NISQ computers. To let users write quantum programs more easily, the compiler and runtime system should take underlying hardware features such as decoherence into account. To address the challenges posed by decoherence, in this paper we propose and prototype QLifeReducer, which minimizes qubit lifetime in an input OpenQASM program by delaying qubits' entry into quantum superposition. QLifeReducer consists of three core modules: the parser, the parallelism analyzer, and the transformer. It introduces a layered bundle format to express the quantum program, in which a set of parallelizable quantum operations is packaged into a bundle. We evaluate quantum programs before and after transformation by QLifeReducer on both the real IBM Q 5 Tenerife device and a self-developed simulator. The experimental results show that QLifeReducer reduces the error rate of a quantum program executed on IBM Q 5 Tenerife by 11%, and reduces both the longest and the average qubit lifetime by more than 20% on most quantum workloads.
{"title":"Optimizing Quantum Programs Against Decoherence: Delaying Qubits into Quantum Superposition","authors":"Yu Zhang, Haowei Deng, Quanxi Li, Haoze Song, Lei Nie","doi":"10.1109/TASE.2019.000-2","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-2","url":null,"abstract":"Quantum computing technology has reached a second renaissance in the last decade. However, in the NISQ era pointed out by John Preskill in 2018, quantum noise and decoherence, which affect the accuracy and execution effect of quantum programs, cannot be ignored and corrected by the near future NISQ computers. In order to let users more easily write quantum programs, the compiler and runtime system should consider underlying quantum hardware features such as decoherence. To address the challenges posed by decoherence, in this paper, we propose and prototype QLifeReducer to minimize the qubit lifetime in the input OpenQASM program by delaying qubits into quantum superposition. QLifeReducer includes three core modules, i.e., the parser, parallelism analyzer and transformer. It introduces the layered bundle format to express the quantum program, where a set of parallelizable quantum operations is packaged into a bundle. We evaluate quantum programs before and after transformed by QLifeReducer on both real IBM Q 5 Tenerife and the self-developed simulator. The experimental results show that QLifeReducer reduces the error rate of a quantum program when executed on IBMQ 5 Tenerife by 11%; and can reduce the longest qubit lifetime as well as average qubit lifetime by more than 20% on most quantum workloads.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126491457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}