Applying Machine Learning with Chaos Engineering
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00057
Juan Hernández-Serrato, Alejandro Velasco, Yury Niño, M. Linares-Vásquez
With the advent of internet-scale systems and the need to assure their functional and non-functional quality, researchers and practitioners have been working on approaches and tools for monitoring, profiling, and testing such systems. One of those approaches is Chaos Engineering, which poses distinct challenges for the software reliability engineering community. In this paper, we propose future avenues for research and development aimed at improving chaos engineering capabilities through machine learning.
{"title":"Applying Machine Learning with Chaos Engineering","authors":"Juan Hernández-Serrato, Alejandro Velasco, Yury Nifio, M. Linares-Vásquez","doi":"10.1109/ISSREW51248.2020.00057","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00057","url":null,"abstract":"With the advent of internet-scale systems, and the need to assure a high functional and non-functional quality of those systems, researchers and practitioners have been working on approaches and tools for monitoring, profiling, and testing of internet-scale systems. One of those approaches is Chaos Engineering, which imposes different challenges for the software reliability engineering community. In this paper, we propose future avenues for research and development with the target of improving chaos engineering capabilities by using machine learning.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122336690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engineering Resilience: Predicting The Change Impact on Performance and Availability of Reconfigurable Systems
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00054
M. A. Hakamian
Modern distributed systems are expected to be resilient and to continue operating according to an agreed-on Quality of Service (QoS) despite the failure of a few services or variations in workload. Real-world incidents show that systems still undergo unacceptable QoS degradations or significant service outages. The main reasons are updates of the system or of infrastructural services and, subsequently, faulty recovery logic. Frequent updates and faulty recovery logic result in a correlated set of failure modes that impact the system’s QoS. Software architects need assurance that the system satisfies the agreed-on QoS despite updates to the system or to infrastructural services. In this research, we propose the systematic identification of the risk of a correlated set of failure modes, caused by updates, that lead to unacceptable performance degradation or service outage. Following the Architecture Tradeoff Analysis Method (ATAM), we propose to formulate the collected risks into a scenario structure for a precise characterization of resilience requirements. Furthermore, we propose model-based prediction methods for scenario-based resilience evaluation of the system. As a result, the software architect obtains a measurement-based evaluation of system resilience and can use it to further improve resilience or to specify a precise service level agreement.
{"title":"Engineering Resilience: Predicting The Change Impact on Performance and Availability of Reconfigurable Systems","authors":"M. A. Hakamian","doi":"10.1109/ISSREW51248.2020.00054","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00054","url":null,"abstract":"Modern distributed systems are supposed to be resilience and continue to operate according to agreed-on Quality of Service (QoS) despite the failure of few services or variations in workload. Real-world incidents show that systems still undergo unacceptable QoS degradations or significant service outages. The main reasons are updates of the system or infrastructural services, and subsequently, faulty recovery logic. Frequent updates and faulty recovery logic result in a correlated set of failure modes that impact the system’s QoS. Software architects need assurance that the system satisfies agreed-on QoS despite updates in the system or infrastructural services. In this research, we propose systematic identification of the risk of a correlated set of failure modes due to updates that cause unacceptable performance degradation or service outage. According to the Architecture Tradeoff Analysis Method (ATAM), we propose to formulate collected risks into a scenario structure for a precise resilience requirement characterization. Furthermore, we propose model-based prediction methods for scenario-based resilience evaluation of the system. Therefore, the software architect has a measurement-based evaluation of system resilience and can incorporate the evaluation result for further system resilience improvement or specifying a precise service level agreement.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127214769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Applying Modular Decomposition in Simulink
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00033
Monika Jaskolka, Stephen Scott, Vera Pantelic, Alan Wassyng, M. Lawford
Modular decomposition is widely used in software engineering to support the design, testing, and maintenance of software-intensive systems. Model-Based Development (MBD) is a paradigm for developing complex software systems using graphical approaches, with MathWorks’ Simulink being a popular choice. How to develop modular Simulink models with stable interfaces that facilitate understanding and testing and achieve low coupling and high cohesion is relatively understudied. This paper applies a new modular decomposition approach to Simulink case studies from the aerospace and nuclear domains. We evaluate how well it supports information hiding, and its impact on coupling and cohesion, interface complexity, cyclomatic complexity, testability, and performance.
{"title":"Applying Modular Decomposition in Simulink","authors":"Monika Jaskolka, Stephen Scott, Vera Pantelic, Alan Wassyng, M. Lawford","doi":"10.1109/ISSREW51248.2020.00033","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00033","url":null,"abstract":"Modular decomposition is widely used in software engineering to support design, testing and maintenance of software intensive systems. Model-Based Development (MBD) is a paradigm for developing complex software systems using graphical approaches, with MathWorks’ Simulink being a popular choice. How to develop modular Simulink models with stable interfaces, that facilitate understanding and testing, and achieve low coupling and high cohesion, is relatively understudied. This paper applies a new modular decomposition approach to Simulink case studies from the aerospace and nuclear domains. We evaluate how well it supports information hiding, and its impact on coupling and cohesion, interface complexity, cyclomatic complexity, testability, and performance.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132580839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Software Aging in Image Classification Systems on Cloud and Edge
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00099
E. Andrade, F. Machida, R. Pietrantuono, Domenico Cotroneo
Image classification systems based on machine learning are being rapidly adopted in many software application systems. Machine learning models built for image classification tasks are usually deployed either on cloud computing platforms or on edge computers close to the data sources, depending on performance and resource requirements. However, software reliability aspects during the operation of these systems have not been properly explored. In this paper, we experimentally investigate software aging phenomena in image classification systems running continuously in cloud and edge computing environments. By performing statistical analysis on the measurement data, we detected a suspected software aging phenomenon, induced by image classification workloads, in the memory usage of both the cloud and the edge systems. Contrary to expectations, our experimental results show that the edge system is less impacted by software aging than the cloud system, which has four times more allocated memory. We also release our software aging data set on our project website for further software aging and rejuvenation research.
{"title":"Software Aging in Image Classification Systems on Cloud and Edge","authors":"E. Andrade, F. Machida, R. Pietrantuono, Domenico Cotroneo","doi":"10.1109/ISSREW51248.2020.00099","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00099","url":null,"abstract":"Image classification systems using machine learning are rapidly adopted in many software application systems. Machine learning models built for image classification tasks are usually deployed on either cloud computing or edge computers close to data sources depending on the performance and resource requirements. However, software reliability aspects during the operation of these systems have not been properly explored. In this paper, we experimentally investigate the software aging phenomena in image classification systems that are continuously running on cloud or edge computing environments. By performing statistical analysis on the measurement data, we detected a suspicious phenomenon of software aging induced by image classification workloads in the memory usages for cloud and edge computing systems. Contrary to the expectation, our experimental results show that the edge system is less impacted by software aging than the cloud system that has four times larger allocated memory resources. We also disclose our software aging data set on our project web site for further exploration of software aging and rejuvenation research.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"110-111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132827829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modelling CI/CD Pipeline Through Agent-Based Simulation
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00059
Qianying Liao
The need for rapid and efficient software development pushes the demand for automation in the build, test, and release phases. This has given rise to the methodology of Continuous Integration and Continuous Deployment (CI/CD) and, in turn, to a set of CI/CD services such as Travis CI and Jenkins. Those services facilitate the automatic compilation, connection tracking, and packaging of new features. They not only provide playgrounds for testing and functionality verification but also enable the final delivery. Poor understanding and execution of CI/CD operations can slow or even halt the pace of a software project. Many bottlenecks in a CI/CD pipeline arise from incorrect configuration, e.g., an inadequate level of automation, unsuitable load capacity, or a suboptimal queueing strategy. However, understanding an actual CI/CD pipeline is hard, since its performance varies significantly with the hosting machines, technologies, and plugins involved. At the same time, analysing and improving the settings of a CI/CD pipeline brings substantial managerial and economic benefits, since an optimal configuration ultimately yields high efficiency. To that end, this study attempts to design a model that not only captures an abstraction of the pipeline but also provides a testing environment for the impersonal factors that influence CI/CD performance. The study therefore aims to contribute (1) a pipeline model based on the logic of a queueing system and enabled by agent-based simulation, and (2) an experimental environment that allows testing of different settings and operation scenarios.
{"title":"Modelling CI/CD Pipeline Through Agent-Based Simulation","authors":"Qianying Liao","doi":"10.1109/ISSREW51248.2020.00059","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00059","url":null,"abstract":"The need for rapid and efficient software development pushes the demand for automation in the phases of build, test, and release. Thereby, the methodology of Continuous Integration and Continuous Deployment (CI/CD) emerges, which then gives birth to a set of CI/CD enabling services, such as Travis CI and Jenkins. Those services facilitate the automatic compilation, connection tracking, and packaging of new features. They not only incorporate playgrounds for testing and functionality verification but also enable the final delivery.Poor understanding and execution in CI/CD operations can result in slowing and even halting the pace of a software project. Many bottlenecks of CI/CD pipeline might occur due to its incorrect configurations, i.e. the inadequate level of automation, the unsuitable load capacity and the suboptimal queueing strategy. However, understanding the actual CI/CD pipeline is hard since its performance varies significantly with different hosting machines, technologies and plugins. On the other hand, finding a way to analyse and improve the settings of CI/CD pipeline brings great managerial and economic benefits since an optimal configuration implies the eventual high efficiency. To that end, this study attempts to design a model that can not only capture the abstraction of the pipeline but also provides a testing environment for the impersonal influencers of CI/CD performance. The current study, therefore, aims to contribute (1) a pipeline model based on the logic of the queueing system and enabled by agent-based simulation, and (2) an experimental environment which allows the testing of different settings and operation scenarios.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128119751","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Language Web Vulnerability Detection
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00058
Alexandra Figueiredo, Tatjana Lide, M. Correia
Most web applications are compromised due to vulnerable source code [1]. Static code analysis tools that are often used to find security vulnerabilities in code have two main problems: they are language-specific, and they have to be programmed, or at least configured manually, to deal with new types of vulnerabilities.
{"title":"Multi-Language Web Vulnerability Detection","authors":"Alexandra Figueiredo, Tatjana Lide, M. Correia","doi":"10.1109/ISSREW51248.2020.00058","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00058","url":null,"abstract":"Most web applications are compromised due to vulnerable source code [1]. Static code analysis tools that are often used to find security vulnerabilities in code have two main problems: they are language-specific, and they have to be programmed, or at least configured manually, to deal with new types of vulnerabilities.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134381823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating the Accuracy of Password Strength Meters using Off-The-Shelf Guessing Attacks
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00079
David Pereira, J. Ferreira, A. Mendes
In this paper, we measure the accuracy of password strength meters (PSMs) using password guessing resistance against off-the-shelf guessing attacks. We consider 13 PSMs, 5 different attack tools, and a random selection of 60,000 passwords extracted from three different datasets of real-world password leaks. Our results show that a significant percentage of passwords classified as strong were cracked, suggesting that current password strength estimation methods can be improved.
{"title":"Evaluating the Accuracy of Password Strength Meters using Off-The-Shelf Guessing Attacks","authors":"David Pereira, J. Ferreira, A. Mendes","doi":"10.1109/ISSREW51248.2020.00079","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00079","url":null,"abstract":"In this paper we measure the accuracy of password strength meters (PSMs) using password guessing resistance against off-the-shelf guessing attacks. We consider 13 PSMs, 5 different attack tools, and a random selection of 60,000 passwords extracted from three different datasets of real-world password leaks. Our results show that a significant percentage of passwords classified as strong were cracked, thus suggesting that current password strength estimation methods can be improved.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"196 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121183900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Should we “safely” handle the uncertainties at runtime? - A rather seldom asked question
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00065
Nishanth Laxman, P. Liggesmeyer
The very fact that “uncertainty is certain” makes the design and development of Cyber-Physical Systems (CPS), especially for safety-critical scenarios, a challenging process. CPS are expected to function safely in unforeseen contexts, which are often characterized by the pervasive presence of uncertainty. There is a multitude of research and numerous approaches available for efficiently handling such uncertainties at runtime, but how many of them handle them from the viewpoint of safety assurance? Are approaches that handle the various possible uncertainties at runtime from a safety assurance perspective the need of the hour? This paper attempts to explore these issues and offers a rarely chosen but important perspective on handling uncertainties at runtime during the development of CPS. The paper is based on the initial outcomes of an ongoing Systematic Literature Review (SLR) and consequent research on the “safe” handling of uncertainties at runtime.
{"title":"Should we “safely” handle the uncertainties at runtime? - A rather seldom asked question","authors":"Nishanth Laxman, P. Liggesmeyer","doi":"10.1109/ISSREW51248.2020.00065","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00065","url":null,"abstract":"Ipso facto “Uncertainty is certain” makes design and development of Cyber Physical Systems (CPS), specifically for safety critical scenarios, a challenging process. CPS are expected to function safely in unforeseen contexts, which are often characterized by the pervasive presence of uncertainty. There is a multitude of research and numerous approaches available for efficiently handling such uncertainties at runtime, but how many of them handle it from the viewpoint of safety assurance? Are the approaches which handle various possible uncertainties at runtime from safety assurance perspective need of the hour? This paper attempts to explore these issues and offers a rarely chosen but important perspective on handling uncertainties at runtime during the development of CPS. This paper is based on initial outcomes of an ongoing Systematic Literature Review (SLR) and consequent research on ”safe” handling of uncertainties at runtime.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"574 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128769681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AHPCap: A Framework for Automated Hardware Profiling and Capture of Mobile Application States
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00069
Rodger William Byrd, Taniza Sultana, Kristen R. Walcott
Push notifications are increasingly prevalent for communication between devices and are vital to Internet of Things (IoT) components. It has been observed that delays in notification receipt vary even for devices that are on the same network and use the same hardware. A closer analysis is needed to understand what occurs in the hardware when a notification arrives from a cloud service or another application. In this paper, we describe and develop a framework, AHPCap, to better understand application behavior at the hardware level at the time of a notification. We explain the framework, its deployment, and its capabilities. We then show an example of a hardware profile that can be generated on mobile devices and analyze the time required to capture and record the profile data. Lastly, we discuss some of AHPCap’s potential applications.
{"title":"AHPCap: A Framework for Automated Hardware Profiling and Capture of Mobile Application States","authors":"Rodger William Byrd, Taniza Sultana, Kristen R. Walcott","doi":"10.1109/ISSREW51248.2020.00069","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00069","url":null,"abstract":"The prevalence of push notifications for communication between devices is increasing and is vital to Internet of Things (IoT) components. It has been observed that delays of notification receipt vary even for devices that are on the same network and using the same hardware. A closer analysis is needed to understand what is occurring in the hardware when a notification occurs from a cloud service or other application.In this paper, we describe and develop a framework, AHPCap, to better understand application behavior at the hardware level at the time of a notification. We explain the framework and its deployment and capabilities. We then show an example of a hardware profile that can be generated on mobile devices and analyze the time required to capture and record the profile data. Lastly, we discuss some of AHPCap’s potential applications.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115409486","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Techniques and Tools for Advanced Software Vulnerability Detection
Pub Date: 2020-10-01  DOI: 10.1109/ISSREW51248.2020.00049
José D’Abruzzo Pereira
Software is frequently deployed with vulnerabilities that may allow hackers to gain access to the system or its information, leading to financial or reputational losses. Although there are many techniques to detect software vulnerabilities, their effectiveness is far from acceptable, especially in large software projects, as shown by several research works. This Ph.D. aims to study the combination of different techniques to improve the effectiveness of vulnerability detection (increasing the detection rate and decreasing the number of false positives). Static Code Analysis (SCA) has a good detection rate and is the central technique of this work. However, as SCA tools report many false positives, we will study the combination of various SCA tools and their integration with other detection approaches (e.g., software metrics) to improve vulnerability detection capabilities. We will also study the use of such a combination to prioritize the reported vulnerabilities and thus guide development efforts and fixes in resource-constrained projects.
{"title":"Techniques and Tools for Advanced Software Vulnerability Detection","authors":"José D’Abruzzo Pereira","doi":"10.1109/ISSREW51248.2020.00049","DOIUrl":"https://doi.org/10.1109/ISSREW51248.2020.00049","url":null,"abstract":"Software is frequently deployed with vulnerabilities that may allow hackers to gain access to the system or information, leading to money or reputation losses. Although there are many techniques to detect software vulnerabilities, their effectiveness is far from acceptable, especially in large software projects, as shown by several research works. This Ph.D. aims to study the combination of different techniques to improve the effectiveness of vulnerability detection (increasing the detection rate and decreasing the number of false-positives). Static Code Analysis (SCA) has a good detection rate and is the central technique of this work. However, as SCA reports many false-positives, we will study the combination of various SCA tools and the integration with other detection approaches (e.g., software metrics) to improve vulnerability detection capabilities. We will also study the use of such combination to prioritize the reported vulnerabilities and thus guide the development efforts and fixes in resource-constrained projects.","PeriodicalId":202247,"journal":{"name":"2020 IEEE International Symposium on Software Reliability Engineering Workshops (ISSREW)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114537463","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}