Topic Trends in Issue Tracking System of Extended Reality Frameworks
Irving Rodriguez, Xiaoyin Wang
2021 28th Asia-Pacific Software Engineering Conference (APSEC)
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00073
Extended Reality (XR) is an emerging technique with many application domains. In this paper, we present an exploratory study of two XR frameworks, in particular investigating the trends of topics in their issue tracking systems over time.
Context-Aware Regression Test Selection
Yizhen Chen, N. Chaudhari, Mei-Hwa Chen
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00050
Most modern software systems are continuously evolving, with changes frequently taking place in the core components or the execution context. These changes can inadvertently introduce regression faults, causing previously working functions to fail. Regression testing is essential for maintaining the quality of evolving complex software, but it can be overly time-consuming when the test suite is large or the execution of the test cases takes a long time. There are extensive research studies on selective regression testing aiming at minimizing the size of the regression test suite while maximizing the detection of regression faults. However, most existing techniques focus on regression faults caused by code changes; the impact of context changes on non-modified software has barely been explored. This paper presents a context-aware regression test selection (CARTS) approach that accounts not only for modifications of code but also for changes in the execution context, including libraries, external APIs, and databases. After a change, CARTS uses the program invariants denoted in the pre- and postconditions of a function to determine whether the function is affected by the change, and selects all the test cases that executed the modified code as well as the non-modified functions whose preconditions are affected by the change. To evaluate the effectiveness of our approach, we conducted empirical studies on multi-release open-source software and case studies on real-world systems that have ongoing changes in code as well as in the execution context. The results of our controlled experiments show that with an average of 32.5% of the regression test cases, CARTS selected all the fault-revealing test cases. In the case studies, all the fault-revealing test cases were selected by using an average of 25.3% of the regression test suite. These results suggest that CARTS can be effective for selecting fault-revealing test cases for both code and execution context changes.
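The selection rule in the CARTS abstract (pick tests that executed modified code, plus tests covering non-modified functions whose preconditions are affected by a context change) can be sketched as follows. This is a minimal illustration, not the paper's implementation; all function and test names are invented.

```python
# Hypothetical sketch of CARTS-style test selection. A test is selected if
# it executed a modified function, or a non-modified function whose
# precondition is affected by a change in the execution context
# (e.g., a library upgrade or a database schema change).

def select_tests(coverage, modified, precondition_affected):
    """coverage: test name -> set of functions the test executes."""
    impacted = set(modified) | set(precondition_affected)
    return {t for t, funcs in coverage.items() if funcs & impacted}

coverage = {
    "test_login":  {"authenticate", "load_profile"},
    "test_search": {"query_index", "rank_results"},
    "test_export": {"load_profile", "write_csv"},
}
# Suppose authenticate() was edited, and a schema change invalidates
# the precondition of the unmodified write_csv().
selected = select_tests(coverage, modified={"authenticate"},
                        precondition_affected={"write_csv"})
print(sorted(selected))  # ['test_export', 'test_login']
```

Here test_search is safely skipped because none of its covered functions are impacted, which is how such an approach shrinks the regression suite.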
A Multi-factor Approach for Flaky Test Detection and Automated Root Cause Analysis
Azeem Ahmad, F. D. O. Neto, Zhixiang Shi, K. Sandahl, O. Leifler
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00041
Developers often spend time determining whether test case failures are real failures or flaky. Flaky tests, also known as non-deterministic tests, switch their outcomes without any modification in the codebase, reducing developers' confidence during maintenance as well as in the quality of a product. Re-running test cases to reveal flakiness is resource-consuming, unreliable, and does not reveal the root causes of test flakiness. Our paper evaluates a multi-factor approach to identifying flaky test executions, implemented in a tool named MDFlaker. The four factors are: trace-back coverage, flaky frequency, number of test smells, and test size. Based on the extracted factors, MDFlaker uses k-nearest neighbor (KNN) to determine whether failed test executions are flaky. We investigate MDFlaker in a case study with 2166 test executions from different open-source repositories. We evaluate the effectiveness of our flaky detection tool, illustrate how the multi-factor approach can be used to reveal root causes of flakiness, and conduct a qualitative comparison between MDFlaker and other tools proposed in the literature. Our results show that the combination of different factors can be used to identify flaky tests. Each factor has its own trade-off, e.g., trace-back leads to many true positives, while flaky frequency yields more true negatives. Therefore, specific combinations of factors enable classification for testers with limited information (e.g., not enough test history information).
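The KNN classification step that MDFlaker applies to its four factors can be sketched in a few lines. The feature values and labels below are fabricated for illustration; the paper's real features and dataset are not reproduced here.

```python
# Minimal k-nearest-neighbor sketch of classifying a failed test execution
# from a four-factor feature vector. All numbers are invented.
import math

def knn_predict(train, query, k=3):
    """train: list of (feature_vector, label); returns the majority label
    among the k training points closest to query (Euclidean distance)."""
    nearest = sorted(train, key=lambda tl: math.dist(tl[0], query))[:k]
    labels = [label for _, label in nearest]
    return max(set(labels), key=labels.count)

# Feature vector: (trace-back coverage, flaky frequency, #test smells, test size)
train = [
    ((0.9, 0.6, 3, 120), "flaky"),
    ((0.8, 0.5, 2, 200), "flaky"),
    ((0.1, 0.0, 0, 40),  "real-failure"),
    ((0.2, 0.1, 1, 60),  "real-failure"),
]
print(knn_predict(train, (0.85, 0.55, 2, 150)))  # flaky
```

A real pipeline would normalize the factors first, since test size otherwise dominates the distance; that trade-off between factors is exactly what the paper's evaluation examines.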
Behavioral classification of Android applications using system calls
Asma Razgallah, R. Khoury
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00012
The exponential growth in the number of Android applications on the market has been matched by a corresponding growth in malicious applications. Of particular concern is the risk of application repackaging, a process by which cybercriminals download, modify, and republish an application that already exists on the store with the addition of malicious code. Dynamic detection in system call traces, based on machine learning models, has emerged as a promising solution. In this paper, we introduce a novel abstraction process and demonstrate that it improves the classification process by replicating multiple malware detection techniques from the literature. We further propose a novel classification method, based on our observation that malware triggers specific system calls at different points than benign programs, and we make our dataset available for future researchers.
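One common way to turn a system-call trace into features for an ML classifier is sliding-window n-grams. The paper's own abstraction process differs and is not reproduced here; this sketch only shows the general featurization idea that such detection techniques build on.

```python
# Illustrative featurization of a system-call trace as contiguous n-grams.
from collections import Counter

def syscall_ngrams(trace, n=2):
    """Count contiguous n-grams of system-call names in a trace."""
    return Counter(tuple(trace[i:i + n]) for i in range(len(trace) - n + 1))

trace = ["open", "read", "read", "close", "open", "read"]
print(syscall_ngrams(trace))
# Counter({('open', 'read'): 2, ('read', 'read'): 1,
#          ('read', 'close'): 1, ('close', 'open'): 1})
```

Because n-grams keep local ordering, they can capture the paper's observation that malware issues specific system calls at different points than benign programs, which a plain frequency count would miss.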
Tree-based Mining of Fine-grained Code Changes to Detect Unknown Change Patterns
Yoshiki Higo, Junnosuke Matsumoto, S. Kusumoto
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00014
In software development, source code is repeatedly changed for various reasons. Similar code changes are called change patterns. Identifying change patterns is useful for supporting software development in a variety of ways; for example, change patterns can be used to collect ingredients for code completion or automated program repair. Many research studies have proposed techniques that detect change patterns. For example, Negara et al. proposed a technique that derives change patterns from edit scripts. Negara's technique can detect fine-grained change patterns, but we consider that there is room to improve it. We found that Negara's technique occasionally generates change patterns from structurally different changes, and we uncovered that such patterns arise because the technique performs text comparisons when matching changes. In this study, we propose a new change mining technique that detects change patterns only from structurally identical changes by taking into account the structure of the abstract syntax trees. We implemented the proposed technique as a tool, TC2P, and compared it with Negara's technique. As a result, we confirmed that TC2P was not only able to detect change patterns more adequately than the prior technique but also able to detect change patterns that the prior technique missed.
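The core distinction the abstract draws, i.e., text comparison versus structural comparison of changes, can be illustrated with Python's own ast module: two textually different edits can be structurally identical once identifiers are abstracted away. TC2P works on syntax trees in its own way; this is only a toy analogue.

```python
# Sketch: abstract identifier names out of an AST so that structurally
# identical code fragments compare equal even when their text differs.
import ast

def shape(src):
    """Parse source and erase variable names, keeping only structure."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, ast.Name):
            node.id = "_"
    return ast.dump(tree)

# Same structural change (increment a variable), different text:
print(shape("x = x + 1") == shape("count = count + 1"))  # True
# Different structure (Add vs Sub), so no match:
print(shape("x = x + 1") == shape("x = x - 1"))          # False
```

A text-based matcher would treat the first pair as different changes; a tree-based one groups them into the same pattern, which is the gap the paper targets.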
Framework for Recommending Data Residency Compliant Application Architecture
Kapil Singi, Kanchanjot Kaur Phokela, Sukhavasi Narendranath, Vikrant S. Kaulgud
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00065
Data is a critical asset for organizations. It helps them generate business insights, improves decision making, and creates a competitive advantage. Typically, organizations want exclusive control over data for their own advantage. To protect individual and national rights, governments frame data residency regulations. These laws govern the geographical constraints under which storage, transmission, and processing of data are allowed. Non-compliance with data regulations often leads to serious repercussions for organizations, ranging from hefty penalties to loss of brand value. The different variants of data residency constraints, such as first-copy-within-country storage, pose challenges in designing a regulation-compliant application deployment architecture. In this paper, we propose a framework and a multi-criteria decision technique for determining an optimal single-cloud or multi-cloud architecture. The framework is based on several criteria, including permitted data flows as per regulations, data sensitivity and type, and the availability of cloud providers. The framework helps cloud architects rapidly arrive at a set of deployment architecture options, which the architects can further optimize.
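A multi-criteria decision step of the kind the abstract describes might treat residency compliance as a hard constraint and rank the remaining candidates by a weighted score. The criteria names, weights, and scores below are invented for illustration and are not the paper's model.

```python
# Hedged sketch of multi-criteria ranking of deployment architectures:
# filter out non-compliant options, then score the rest.

def rank_architectures(candidates, weights):
    """candidates: name -> {'compliant': bool, criterion: score in [0, 1]}."""
    ranked = []
    for name, c in candidates.items():
        if not c["compliant"]:          # hard constraint: data residency
            continue
        score = sum(weights[k] * c[k] for k in weights)
        ranked.append((name, round(score, 2)))
    return sorted(ranked, key=lambda ns: ns[1], reverse=True)

weights = {"latency": 0.3, "cost": 0.3, "provider_availability": 0.4}
candidates = {
    "single-cloud-eu": {"compliant": True,  "latency": 0.9, "cost": 0.6,
                        "provider_availability": 0.8},
    "multi-cloud":     {"compliant": True,  "latency": 0.7, "cost": 0.8,
                        "provider_availability": 0.9},
    "single-cloud-us": {"compliant": False, "latency": 0.8, "cost": 0.9,
                        "provider_availability": 0.9},
}
print(rank_architectures(candidates, weights))
# [('multi-cloud', 0.81), ('single-cloud-eu', 0.77)]
```

Treating compliance as a filter rather than a weighted criterion reflects the point above: regulatory non-compliance is not a trade-off an architect can score away.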
Comparing Automated Reuse of Scripted Tests and Model-Based Tests for Configurable Software
Stefan Fischer, R. Ramler, L. Linsbauer
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00049
Highly configurable software gives developers more flexibility to meet different customer requirements and enables users to better tailor software to their needs. However, variability increases the complexity of software and complicates many development processes, such as testing. One major challenge in testing configurable software is adjusting tests to fit different configurations, which often has to be done manually. In our previous work, we evaluated the use of an automated reuse technique to support the reuse of existing tests for new configurations. Research on automated reuse of model variants and on applying model-based testing to configurable software encouraged us to also evaluate the automated reuse of model-based test variants. The goal is to investigate differences in applying automated reuse to the different testing paradigms. Our evaluation provides evidence for the usefulness of automated reuse for both testing paradigms. Nonetheless, we found some differences in the robustness of tests to small inaccuracies of the reuse approach.
The Role of User Reviews in App Updates: A Preliminary Investigation on App Release Notes
Chong Wang, Tianyang Liu, Peng Liang, M. Daneva, M. V. Sinderen
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00061
Release planning for mobile apps has recently become an area of active research. Prior research in this area concentrated on the analysis of release notes and on tracking user reviews to support app evolution with issue trackers. However, little is known about the impact of user reviews on the evolution of mobile apps. Our work explores the role of user reviews in app updates based on release notes. For this purpose, we collected user reviews and release notes of Spotify, the 'number one' app in the 'Music' category of the Apple App Store, as the research data. We then manually removed non-informative parts of each release note and manually determined the relevance of the app reviews with respect to the release notes, using Word2Vec-based similarity calculations on the top 80 app release notes with the highest similarities. Our empirical results show that more than 60% of the matched reviews are actually irrelevant to the corresponding release notes. Zooming in on the relevant user reviews, we found that around half of them were posted before the new release and referred to requests, suggestions, and complaints, whereas the other half were posted after updating the apps and concentrated more on bug reports and praise.
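The matching mechanism behind a Word2Vec-based relevance check is typically: average the word vectors of each text, then compare the averages by cosine similarity. The 3-dimensional vectors below are fabricated toy embeddings, not trained Word2Vec output, and the review/note texts are invented.

```python
# Toy sketch: match a user review against a release note via averaged
# word vectors and cosine similarity.
import math

VECS = {  # fabricated embeddings; a real pipeline trains these
    "crash":    (0.9, 0.1, 0.0),
    "fix":      (0.8, 0.2, 0.1),
    "fixed":    (0.8, 0.2, 0.2),
    "playlist": (0.1, 0.9, 0.1),
    "shuffle":  (0.1, 0.8, 0.3),
}

def avg_vec(words):
    """Average the vectors of the words that have an embedding."""
    vs = [VECS[w] for w in words if w in VECS]
    return tuple(sum(dim) / len(vs) for dim in zip(*vs))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

note = avg_vec("fixed crash on startup".split())
print(round(cosine(note, avg_vec("app crash needs fix".split())), 2))       # high: relevant
print(round(cosine(note, avg_vec("love the shuffle playlist".split())), 2)) # low: irrelevant
```

The study's finding that over 60% of matched reviews were still irrelevant is a reminder that a high cosine score is only a candidate match; the manual relevance check remains necessary.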
Automatic Performance Testing for Image Displaying in Android Apps
Wenjie Li, Yanyan Jiang, Jun Ma, Chang Xu
Pub Date: 2021-12-01 | DOI: 10.1109/APSEC53868.2021.00039
Image displaying in Android apps is resource-intensive. Improperly displayed images result in performance degradation or even more severe consequences such as app crashes. Existing static performance anti-pattern checkers are conservative and limited to a small set of bugs. This paper presents ImMut, the first test augmentation approach to performance testing for image displaying in Android apps, which complements these static checkers. Given a functional test case, ImMut mutates it towards a performance test case by either (1) injecting external-source images with large ones or (2) copy-pasting a repeatable fragment and slightly mutating the copies to display many (potentially distinct) images. Evaluation of our prototype implementation showed promising results: ImMut revealed 14 previously unknown performance bugs that are beyond the capability of state-of-the-art static checkers.