Cyber-physical systems (CPS) are generally defined as systems that integrate physical components with computational components. To simulate the heterogeneous components of CPS, the Functional Mock-up Interface (FMI) standard provides co-simulation technology for generating simulation traces, which plays a significant role in analyzing and verifying the behaviors of CPS. However, the FMI-based co-simulation algorithm called the Master Algorithm with Step Revision (SRMA) is inefficient in some common scenarios. To improve the efficiency of SRMA, we propose an optimized Partial Rollback Co-simulation approach, which effectively decreases the number of rollback operations. The novelty of our approach has two aspects. First, we propose the Key FMUs Extractor and input/output dependency classification rules, which help determine the minimum set of FMUs that must be rolled back to correct the simulation error. Second, we propose an optimized Master Algorithm with Partial Step Revision (PSRMA). To implement our approach, we also propose an extension to the FMI standard for checking whether an FMU implements the function of a threshold crossing detector. A formal definition of the Zero Crossing Detector (ZCD) is presented to guide the construction of ZCD FMUs and to evaluate the simulation error of the whole system. To illustrate the feasibility of our approach, two case studies are also discussed.
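The abstract describes rollback-based step revision around zero crossings only at a high level. As a rough, hypothetical illustration (not the paper's PSRMA), the Python sketch below rolls a macro step back and bisects it whenever a monitored signal changes sign, which is the essence of what a zero-crossing detector triggers in a co-simulation master; all names are invented, and `simulate_step` merely stands in for an FMU's doStep.

```python
def simulate_step(state, t, h, deriv):
    """Advance one macro step with explicit Euler (stand-in for an FMU doStep)."""
    return state + h * deriv(t, state)

def step_with_zcd(state, t, h, deriv, signal, tol=1e-6):
    """Advance from time t by step h; if `signal` crosses zero inside the step,
    roll the step back and retry with half the step size until the crossing is
    localized within `tol`. Returns (new_state, new_t)."""
    new_state = simulate_step(state, t, h, deriv)
    if signal(state) * signal(new_state) > 0 or h <= tol:
        return new_state, t + h          # no crossing detected: accept the step
    # crossing detected: roll back this step only and bisect (partial rollback)
    return step_with_zcd(state, t, h / 2, deriv, signal, tol)
```

For a signal x(t) = 1 - t, the crossing at t = 1 is approached with ever-smaller retried steps instead of re-running the full macro step for every FMU.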
{"title":"An Optimized Partial Rollback Co-simulation Approach for Heterogeneous FMUs","authors":"Dehui Du, Yao Wang, Yi Ao, Biao Chen","doi":"10.1109/TASE.2019.00013","DOIUrl":"https://doi.org/10.1109/TASE.2019.00013","url":null,"abstract":"Cyber-physical systems (CPS) are generally defined as systems that integrate physical components with computational components. To simulate the heterogeneous components of CPS, the Functional Mock-up Interface (FMI) standard provides co-simulation technology for generating simulation traces, which plays a significant role in analyzing and verifying the behaviors of CPS. However, the FMI-based co-simulation algorithm called the Master Algorithm with Step Revision (SRMA) is inefficient in some common scenarios. To improve the efficiency of SRMA, we propose an optimized Partial Rollback Co-simulation approach, which effectively decreases the number of rollback operations. The novelty of our approach has two aspects. First, we propose the Key FMUs Extractor and input/output dependency classification rules, which help determine the minimum set of FMUs that must be rolled back to correct the simulation error. Second, we propose an optimized Master Algorithm with Partial Step Revision (PSRMA). To implement our approach, we also propose an extension to the FMI standard for checking whether an FMU implements the function of a threshold crossing detector. A formal definition of the Zero Crossing Detector (ZCD) is presented to guide the construction of ZCD FMUs and to evaluate the simulation error of the whole system. To illustrate the feasibility of our approach, two case studies are also discussed.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116941097","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yi Xu, Weimin Ge, Xiaohong Li, Zhiyong Feng, Xiaofei Xie, Yude Bai
To guarantee software quality, specifying security requirements (SRs) is essential when developing systems, especially security-critical software systems. However, using security threats to determine detailed SRs according to the Common Criteria (CC) is quite difficult, as the CC is too confusing and technical for non-security specialists. In this paper, we propose a Co-occurrence Recommend Model (CoRM) to automatically recommend software SRs. In this model, the security threats of a product are extracted from the software's security target documents, in which the related security requirements are tagged. To establish relationships between software security threats and security requirements, semantic similarities between different security threats are calculated with the Skip-thoughts model. To evaluate our CoRM model, over 1,000 security target documents covering 9 types of software products are exploited. The results suggest that building a CoRM model via semantic similarity is feasible and reliable.
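The abstract leaves the similarity computation itself abstract. Below is a minimal sketch of the downstream step, assuming threat descriptions have already been embedded as plain float vectors; the Skip-thoughts encoder is out of scope here, and `recommend` is an invented helper, not the CoRM API.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend(threat_vec, tagged_threats, top_k=3):
    """Rank known (embedding, requirement) pairs by similarity to a new
    threat embedding and return the requirements of the top_k closest."""
    ranked = sorted(tagged_threats,
                    key=lambda tv: cosine_similarity(threat_vec, tv[0]),
                    reverse=True)
    return [req for _, req in ranked[:top_k]]
```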
{"title":"A Co-Occurrence Recommendation Model of Software Security Requirement","authors":"Yi Xu, Weimin Ge, Xiaohong Li, Zhiyong Feng, Xiaofei Xie, Yude Bai","doi":"10.1109/TASE.2019.00-21","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-21","url":null,"abstract":"To guarantee software quality, specifying security requirements (SRs) is essential when developing systems, especially security-critical software systems. However, using security threats to determine detailed SRs according to the Common Criteria (CC) is quite difficult, as the CC is too confusing and technical for non-security specialists. In this paper, we propose a Co-occurrence Recommend Model (CoRM) to automatically recommend software SRs. In this model, the security threats of a product are extracted from the software's security target documents, in which the related security requirements are tagged. To establish relationships between software security threats and security requirements, semantic similarities between different security threats are calculated with the Skip-thoughts model. To evaluate our CoRM model, over 1,000 security target documents covering 9 types of software products are exploited. The results suggest that building a CoRM model via semantic similarity is feasible and reliable.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127538499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software transactional memory (STM) provides programmers with a high-level abstraction for synchronizing parallel processes, allowing blocks of code that execute in an interleaved manner to be treated as atomic blocks. Python Software Transactional Memory (PSTM) is an STM implementation in the Python language; it fills the gap that Python lacks an applicable and reliable software transactional memory. PSTM satisfies the basic transaction properties; however, it does not satisfy opacity, which defines the conditions for serialising concurrent transactions. We present a formalization of opacity based on the history model of transactions, explain why PSTM does not satisfy opacity, and present a modified PSTM called PSTM-M. Finally, we give a machine-checked proof of the opacity of PSTM-M using the theorem prover Coq.
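PSTM's code is not reproduced here. To make the role of opacity concrete, here is a toy, single-threaded Python transaction with commit-time read-set validation, a TL2-style sketch rather than the actual PSTM or PSTM-M algorithm: a transaction whose read versions have changed by commit time must abort, which is the kind of behaviour opacity arguments reason about.

```python
class TVar:
    """A transactional variable carrying a value and a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

class Transaction:
    def __init__(self):
        self.reads = {}    # TVar -> version observed at first read
        self.writes = {}   # TVar -> pending value

    def read(self, var):
        if var in self.writes:             # read-your-own-writes
            return self.writes[var]
        self.reads.setdefault(var, var.version)
        return var.value

    def write(self, var, value):
        self.writes[var] = value

    def commit(self):
        # validate: every variable read must still carry the version we saw
        if any(var.version != ver for var, ver in self.reads.items()):
            return False                   # conflict -> caller would retry
        for var, value in self.writes.items():
            var.value, var.version = value, var.version + 1
        return True
```

A real STM would additionally need locking around validation and write-back; this sketch only shows why stale reads force an abort.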
{"title":"Verifying Opacity of a Modified PSTM","authors":"Yucheng Fang, Huibiao Zhu, Jiaqi Yin","doi":"10.1109/TASE.2019.00008","DOIUrl":"https://doi.org/10.1109/TASE.2019.00008","url":null,"abstract":"Software transactional memory (STM) provides programmers with a high-level abstraction for synchronizing parallel processes, allowing blocks of code that execute in an interleaved manner to be treated as atomic blocks. Python Software Transactional Memory (PSTM) is an STM implementation in the Python language; it fills the gap that Python lacks an applicable and reliable software transactional memory. PSTM satisfies the basic transaction properties; however, it does not satisfy opacity, which defines the conditions for serialising concurrent transactions. We present a formalization of opacity based on the history model of transactions, explain why PSTM does not satisfy opacity, and present a modified PSTM called PSTM-M. Finally, we give a machine-checked proof of the opacity of PSTM-M using the theorem prover Coq.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127518165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed systems have been widely used in various domains. However, their concurrent and asynchronous nature makes their safety and reliability hard to guarantee, especially in the design phase. In this paper, we extend Mediator and its semantics to capture the inherent real-time and asynchronous behavior of distributed systems. As a component-based language, Mediator provides a compositional modeling framework and corresponding precise formal semantics, making it possible to reuse reliable components in different contexts.
{"title":"Distributed Mediator","authors":"Yi Li, Meng Sun","doi":"10.1109/TASE.2019.00-24","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-24","url":null,"abstract":"Distributed systems have been widely used in various domains. However, their concurrent and asynchronous nature makes their safety and reliability hard to guarantee, especially in the design phase. In this paper, we extend Mediator and its semantics to capture the inherent real-time and asynchronous behavior of distributed systems. As a component-based language, Mediator provides a compositional modeling framework and corresponding precise formal semantics, making it possible to reuse reliable components in different contexts.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114240123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As the scale of software systems grows rapidly, software complexity is becoming one of the main problems in software engineering. Higher complexity increases the potential risks and defects of a software system, making it more difficult to analyze its correctness and improve its quality. In this paper, we present an automated refactoring schema to reduce the complexity of component-based software. The main idea of our approach is to search for a hierarchical structure with minimum hierarchical complexity and refactor the original software into it by reassembling several subcomponents into tightly coupled hierarchical ones. Moreover, our approach can easily be adjusted to handle new situations in which constraints on the partition of software components are given. Finally, we conduct a case study on a Battery Management System (BMS); the result demonstrates that our approach can automatically and effectively reduce the structural complexity of a software system.
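The search for a minimum-complexity hierarchy is described only abstractly. As a purely illustrative sketch, the following brute-force two-block partition search uses an invented cross-coupling count as a stand-in for the paper's hierarchical complexity metric; it is not the paper's algorithm.

```python
from itertools import combinations

def cross_coupling(partition, edges):
    """Count connections that cross block boundaries; an invented stand-in
    for a hierarchical complexity metric."""
    block = {c: i for i, blk in enumerate(partition) for c in blk}
    return sum(1 for a, b in edges if block[a] != block[b])

def best_bipartition(components, edges):
    """Exhaustively try every 2-block split of the components and keep
    the least-coupled one (exponential, so only for tiny examples)."""
    comps = list(components)
    best, best_cost = None, float('inf')
    for r in range(1, len(comps) // 2 + 1):
        for group in combinations(comps, r):
            part = [set(group), set(comps) - set(group)]
            cost = cross_coupling(part, edges)
            if cost < best_cost:
                best, best_cost = part, cost
    return best, best_cost
```

A realistic tool would use heuristics or constraint solving instead of enumeration, and would recurse to build a full hierarchy rather than a single split.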
{"title":"Software Complexity Reduction by Automated Refactoring Schema","authors":"Siteng Cao, Yongxin Zhao, Ling Shi","doi":"10.1109/TASE.2019.00005","DOIUrl":"https://doi.org/10.1109/TASE.2019.00005","url":null,"abstract":"As the scale of software systems grows rapidly, software complexity is becoming one of the main problems in software engineering. Higher complexity increases the potential risks and defects of a software system, making it more difficult to analyze its correctness and improve its quality. In this paper, we present an automated refactoring schema to reduce the complexity of component-based software. The main idea of our approach is to search for a hierarchical structure with minimum hierarchical complexity and refactor the original software into it by reassembling several subcomponents into tightly coupled hierarchical ones. Moreover, our approach can easily be adjusted to handle new situations in which constraints on the partition of software components are given. Finally, we conduct a case study on a Battery Management System (BMS); the result demonstrates that our approach can automatically and effectively reduce the structural complexity of a software system.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123234566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Junxiu Liu, Zhewei Liang, Yuling Luo, Jiadong Huang, Su Yang
Research has shown that the tripartite synapse has a self-repairing capability in spiking neural networks (SNNs), where the interactions between astrocyte, neuron and synapse underpin this mechanism. It has been used in hardware electronic systems to enhance fault tolerance, especially for critical-task applications. Due to the complexity of tripartite synapse models, an efficient and scalable hardware architecture remains a research challenge. In this paper, an efficient hardware tripartite synapse architecture based on the Stochastic Computing (SC) technique is proposed. SC is used to replace conventional computing components such as DSPs in hardware devices, and extended stochastic logics are used to scale the data range during calculation. Results show that the proposed hardware architecture has the same output behaviours as the software simulations and has low hardware resource consumption (a reduction of more than 85% compared to the state-of-the-art approach), which maintains system scalability for large SNNs.
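Stochastic computing encodes a value in [0, 1] as the density of 1s in a bitstream, so a single AND gate multiplies two independent streams. The following minimal sketch shows only that core idea, not the proposed architecture or the extended stochastic logics.

```python
import random

def to_stream(p, length, rng):
    """Encode a probability p in [0, 1] as a random bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(length)]

def sc_multiply(p, q, length=10000, seed=0):
    """Estimate p * q by AND-ing two independent stochastic bitstreams
    and measuring the density of 1s in the result."""
    rng = random.Random(seed)
    a = to_stream(p, length, rng)
    b = to_stream(q, length, rng)
    anded = [x & y for x, y in zip(a, b)]   # one AND gate per bit pair
    return sum(anded) / length
```

The accuracy improves with stream length at the usual Monte-Carlo rate, which is exactly the area/precision trade-off SC hardware exploits.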
{"title":"Hardware Tripartite Synapse Architecture based on Stochastic Computing","authors":"Junxiu Liu, Zhewei Liang, Yuling Luo, Jiadong Huang, Su Yang","doi":"10.1109/TASE.2019.00-16","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-16","url":null,"abstract":"Research has shown that the tripartite synapse has a self-repairing capability in spiking neural networks (SNNs), where the interactions between astrocyte, neuron and synapse underpin this mechanism. It has been used in hardware electronic systems to enhance fault tolerance, especially for critical-task applications. Due to the complexity of tripartite synapse models, an efficient and scalable hardware architecture remains a research challenge. In this paper, an efficient hardware tripartite synapse architecture based on the Stochastic Computing (SC) technique is proposed. SC is used to replace conventional computing components such as DSPs in hardware devices, and extended stochastic logics are used to scale the data range during calculation. Results show that the proposed hardware architecture has the same output behaviours as the software simulations and has low hardware resource consumption (a reduction of more than 85% compared to the state-of-the-art approach), which maintains system scalability for large SNNs.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114747168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstraction and refinement offer a stepwise development approach to managing complexity in system design. Building on our previous work, which extends Event-B models with high-level real-time trigger-response properties, this paper presents refinement semantics of timed systems using behavioral traces. Forward simulation, a proof technique for refinement, is used to verify the consistency between different refinement levels. To prove refinement of trace semantics, we construct intermediate traces from concrete traces with a mapping function and prove that the intermediate traces, with stuttering events and states removed, are abstract traces. Fairness assumptions, relative deadlock freedom, and conditional convergence are adopted in the refinement steps to eliminate Zeno behavior in timed models. Based on these semantics, we develop refinement rules and strategies to perform refinement on timed models and to refine real-time trigger-response properties into sequential or alternative sub-timing properties, with proofs.
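The construction of intermediate traces by erasing stuttering steps can be pictured with a small sketch, assuming a trace is a list of (event, state) pairs; `abstract_state`, the abstract alphabet, and `is_run` are invented for illustration and are not the paper's formalization.

```python
def to_abstract_trace(concrete_trace, abstract_state, abstract_events):
    """Project a concrete trace onto the abstract level: map every state
    through `abstract_state` and erase stuttering steps, i.e. events that
    are not in the abstract alphabet."""
    return [(event, abstract_state(state))
            for event, state in concrete_trace
            if event in abstract_events]

def is_run(trace, init, transitions):
    """Check that a trace is a run of an abstract machine given as a set of
    (pre_state, event, post_state) triples."""
    current = init
    for event, post in trace:
        if (current, event, post) not in transitions:
            return False
        current = post
    return True
```

In a forward-simulation proof the interesting obligation is exactly that every projected trace is such a run of the abstract model.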
{"title":"Towards Refinement Semantics of Real-Time Trigger-Response Properties in Event-B","authors":"Chenyang Zhu, M. Butler, C. Cîrstea","doi":"10.1109/TASE.2019.00-26","DOIUrl":"https://doi.org/10.1109/TASE.2019.00-26","url":null,"abstract":"Abstraction and refinement offer a stepwise development approach to managing complexity in system design. Building on our previous work, which extends Event-B models with high-level real-time trigger-response properties, this paper presents refinement semantics of timed systems using behavioral traces. Forward simulation, a proof technique for refinement, is used to verify the consistency between different refinement levels. To prove refinement of trace semantics, we construct intermediate traces from concrete traces with a mapping function and prove that the intermediate traces, with stuttering events and states removed, are abstract traces. Fairness assumptions, relative deadlock freedom, and conditional convergence are adopted in the refinement steps to eliminate Zeno behavior in timed models. Based on these semantics, we develop refinement rules and strategies to perform refinement on timed models and to refine real-time trigger-response properties into sequential or alternative sub-timing properties, with proofs.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"388 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122179794","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the rapid development of computer technologies and artificial intelligence, intelligent tutoring systems are increasingly applied in our daily lives. This paper proposes a common semantic scoring method for Chinese subjective questions based on dependencies, modifiers and HowNet. First, we use dependencies to construct question-classification predicate formulas for determining the question type and obtaining the characteristic words in the question. Then, we use dependency chains to extract multiple sets of score points from the answer according to the question type, and optimize the answer's score points according to the feature words in the question sentence. Finally, we use the common semantic dictionary HowNet to calculate the similarities between score points that have the same dependencies in the student answers and the standard answer, and combine the modifiers in the answer sentences to calculate the final score of the subjective question. Experimental results show that our proposed method is rapid, accurate and efficient, and surpasses many excellent subjective-question scoring methods.
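The final-score computation is described only in outline. Below is a hypothetical sketch of one plausible aggregation, where each standard score point is credited with the best similarity any student score point achieves; the character-overlap similarity is a crude stand-in for the HowNet-based measure, and none of these names come from the paper.

```python
def overlap_similarity(a, b):
    """Toy lexical similarity: Jaccard overlap of character sets (a crude
    stand-in for HowNet-based semantic similarity)."""
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def score_answer(student_points, standard_points, similarity, full_marks):
    """Credit each standard score point with the best similarity achieved by
    any student score point, then scale full_marks by the average credit."""
    if not standard_points:
        return 0.0
    per_point = [max((similarity(sp, st) for st in student_points), default=0.0)
                 for sp in standard_points]
    return full_marks * sum(per_point) / len(standard_points)
```

The paper additionally weights score points by the modifiers in the answer sentences, which this sketch omits.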
{"title":"A Common Semantic Scoring Method for Chinese Subjective Questions","authors":"Xin-hua Zhu, Qingting Xu, Lanfang Zhang, Hanjun Deng, Hongchao Chen","doi":"10.1109/TASE.2019.00011","DOIUrl":"https://doi.org/10.1109/TASE.2019.00011","url":null,"abstract":"With the rapid development of computer technologies and artificial intelligence, intelligent tutoring systems are increasingly applied in our daily lives. This paper proposes a common semantic scoring method for Chinese subjective questions based on dependencies, modifiers and HowNet. First, we use dependencies to construct question-classification predicate formulas for determining the question type and obtaining the characteristic words in the question. Then, we use dependency chains to extract multiple sets of score points from the answer according to the question type, and optimize the answer's score points according to the feature words in the question sentence. Finally, we use the common semantic dictionary HowNet to calculate the similarities between score points that have the same dependencies in the student answers and the standard answer, and combine the modifiers in the answer sentences to calculate the final score of the subjective question. Experimental results show that our proposed method is rapid, accurate and efficient, and surpasses many excellent subjective-question scoring methods.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127879748","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alexandra Halchin, Y. A. Ameur, N. Singh, Abderrahmane Feliachi, J. Ordioni
Checking the correctness of heterogeneous models of a complex critical system against certification standards is challenging. Such a guarantee can be provided by embedding the heterogeneous models into an integrated modelling framework. This work is carried out in the B-PERFect project of RATP (the Parisian public transport operator and maintainer); it aims to apply formal verification, using the PERF approach, to integrated safety-critical railway software expressed in a single modelling language: HLL. This paper presents a certified translation from the B formal language to HLL. The proposed approach uses HOL as a unified logical framework to describe the formal semantics of both languages and to formalize the translation relation. The developed Isabelle/HOL models are proved in order to guarantee the correctness of our translation process. Moreover, we use a weak-bisimulation relation to check the correctness of the translation steps. The overall approach is illustrated with a case study from a railway software system, the onboard localization function, and we discuss integrated verification at the system level.
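The paper checks translation steps with a weak-bisimulation relation. To make that notion concrete, here is a small Python sketch that verifies a candidate relation is a weak bisimulation between two labelled transition systems given as sets of (source, action, target) triples, with 'tau' as the silent action; this is a textbook definition check, not the paper's Isabelle/HOL development.

```python
def weak_successors(lts, state, action):
    """States reachable by tau* action tau* (or tau* alone if action=='tau')."""
    def tau_closure(states):
        seen, stack = set(states), list(states)
        while stack:
            s = stack.pop()
            for src, a, dst in lts:
                if src == s and a == 'tau' and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen

    pre = tau_closure({state})
    if action == 'tau':
        return pre
    mid = {dst for src, a, dst in lts if src in pre and a == action}
    return tau_closure(mid)

def is_weak_bisimulation(rel, lts1, lts2, actions):
    """Check that `rel` (a set of state pairs) is a weak bisimulation:
    every strong move on one side is matched by a weak move on the other."""
    for p, q in rel:
        for a in actions:
            for p2 in {dst for src, x, dst in lts1 if src == p and x == a}:
                if not any((p2, q2) in rel for q2 in weak_successors(lts2, q, a)):
                    return False
            for q2 in {dst for src, x, dst in lts2 if src == q and x == a}:
                if not any((p2, q2) in rel for p2 in weak_successors(lts1, p, a)):
                    return False
    return True
```

A translation step is then correct in this sense when such a relation links each source model state to its translated counterpart, with internal translation-introduced steps labelled tau.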
{"title":"Certified Embedding of B Models in an Integrated Verification Framework","authors":"Alexandra Halchin, Y. A. Ameur, N. Singh, Abderrahmane Feliachi, J. Ordioni","doi":"10.1109/TASE.2019.000-4","DOIUrl":"https://doi.org/10.1109/TASE.2019.000-4","url":null,"abstract":"Checking the correctness of heterogeneous models of a complex critical system against certification standards is challenging. Such a guarantee can be provided by embedding the heterogeneous models into an integrated modelling framework. This work is carried out in the B-PERFect project of RATP (the Parisian public transport operator and maintainer); it aims to apply formal verification, using the PERF approach, to integrated safety-critical railway software expressed in a single modelling language: HLL. This paper presents a certified translation from the B formal language to HLL. The proposed approach uses HOL as a unified logical framework to describe the formal semantics of both languages and to formalize the translation relation. The developed Isabelle/HOL models are proved in order to guarantee the correctness of our translation process. Moreover, we use a weak-bisimulation relation to check the correctness of the translation steps. The overall approach is illustrated with a case study from a railway software system, the onboard localization function, and we discuss integrated verification at the system level.","PeriodicalId":183749,"journal":{"name":"2019 International Symposium on Theoretical Aspects of Software Engineering (TASE)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134402285","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}