Permissive runtime information flow control in the presence of exceptions
Abhishek Bichhawat, Vineet Rajani, D. Garg, Christian Hammer
J. Comput. Secur., 2021-03-30. doi:10.3233/JCS-211385
Information flow control (IFC) has been extensively studied as an approach to mitigate information leaks in applications. A vast majority of existing work in this area is based on static analysis. However, some applications, especially on the Web, are developed using dynamic languages like JavaScript, where static analyses for IFC do not scale well. As a result, there has been growing interest in recent years in dynamic or runtime information flow analysis techniques. Despite advances in the field, runtime information flow analysis has not become the mainstay of information flow security; one reason is that the analysis techniques, and the security property underlying them (non-interference), over-approximate information flows (particularly implicit flows), generating many false positives. In this paper, we present a sound and precise approach for handling implicit leaks at runtime. In particular, we present an improvement and enhancement of the so-called permissive-upgrade strategy, which is widely used to tackle implicit leaks in dynamic information flow control. We improve the strategy’s permissiveness and generalize it. Building on it, we present an approach to handle implicit leaks when dealing with complex features like unstructured control flow and exceptions in higher-order languages. We explain how we address the challenge of handling unstructured control flow using immediate post-dominator analysis. We prove that our approach is sound and precise.
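To make the implicit-flow problem concrete, here is a minimal Python sketch of a dynamic IFC monitor using the two-point permissive-upgrade strategy (Austin and Flanagan) that the paper improves and generalizes. The label names and the toy three-label lattice (L public, H secret, P partially leaked) are illustrative, not the paper's exact formalism.

```python
# A toy dynamic IFC monitor with the permissive-upgrade strategy.
L, H, P = "L", "H", "P"

def lub(a, b):
    """Join on the ordering L < H < P (P acts as the top label here)."""
    order = {L: 0, H: 1, P: 2}
    return a if order[a] >= order[b] else b

class Monitor:
    def __init__(self):
        self.mem = {}      # variable -> (value, label)
        self.pc = [L]      # stack of control-context (pc) labels

    def assign(self, var, value, expr_label):
        pc = self.pc[-1]
        _, old_label = self.mem.get(var, (None, L))
        if pc == H and old_label != H:
            # No-sensitive-upgrade would halt on this assignment; the
            # permissive upgrade records the value as partially leaked.
            self.mem[var] = (value, P)
        else:
            self.mem[var] = (value, lub(expr_label, pc))

    def branch(self, var):
        value, label = self.mem[var]
        if label == P:
            raise RuntimeError("stop: branching on a partially leaked value")
        self.pc.append(lub(label, self.pc[-1]))
        return value

    def end_branch(self):
        self.pc.pop()

# Implicit flow: "l := 0; if h then l := 1; if l then ..." is allowed up to
# the point where the partially leaked l would itself influence control flow.
m = Monitor()
m.assign("h", True, H)
m.assign("l", 0, L)
if m.branch("h"):
    m.assign("l", 1, L)    # l is upgraded to P instead of being rejected
m.end_branch()
try:
    m.branch("l")
except RuntimeError as e:
    print(e)               # the monitor stops only when the leak matters
```

This is what makes the strategy more permissive than no-sensitive-upgrade while remaining sound: the questionable assignment is tolerated, and execution is stopped only if the partially leaked value is later used.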
{"title":"Permissive runtime information flow control in the presence of exceptions","authors":"Abhishek Bichhawat, Vineet Rajani, D. Garg, Christian Hammer","doi":"10.3233/JCS-211385","DOIUrl":"https://doi.org/10.3233/JCS-211385","url":null,"abstract":"Information flow control (IFC) has been extensively studied as an approach to mitigate information leaks in applications. A vast majority of existing work in this area is based on static analysis. However, some applications, especially on the Web, are developed using dynamic languages like JavaScript where static analyses for IFC do not scale well. As a result, there has been a growing interest in recent years to develop dynamic or runtime information flow analysis techniques. In spite of the advances in the field, runtime information flow analysis has not been at the helm of information flow security, one of the reasons being that the analysis techniques and the security property related to them (non-interference) over-approximate information flows (particularly implicit flows), generating many false positives. In this paper, we present a sound and precise approach for handling implicit leaks at runtime. In particular, we present an improvement and enhancement of the so-called permissive-upgrade strategy, which is widely used to tackle implicit leaks in dynamic information flow control. We improve the strategy’s permissiveness and generalize it. Building on top of it, we present an approach to handle implicit leaks when dealing with complex features like unstructured control flow and exceptions in higher-order languages. We explain how we address the challenge of handling unstructured control flow using immediate post-dominator analysis. We prove that our approach is sound and precise.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121427378","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Directed adversarial sampling attacks on phishing detection
H. Shirazi, Bruhadeshwar Bezawada, I. Ray, Charles Anderson
J. Comput. Secur., 2021-02-03. doi:10.3233/JCS-191411
Phishing websites trick honest users into believing that they are interacting with a legitimate website, and capture sensitive information such as user names, passwords, credit card numbers, and other personal information. Machine learning is a promising technique to distinguish between phishing and legitimate websites. However, machine learning approaches are susceptible to adversarial learning attacks, where a phishing sample can bypass classifiers. Our experiments on publicly available datasets reveal that phishing detection mechanisms are vulnerable to adversarial learning attacks. We investigate the robustness of machine learning-based phishing detection in the face of such attacks. We propose a practical approach to simulate them by generating adversarial samples through direct feature manipulation. To increase a sample’s success probability, we describe a clustering approach that guides an attacker in selecting the phishing samples most likely to bypass the classifier by appearing legitimate. We define the notion of a vulnerability level for each dataset, which measures the number of features that can be manipulated and the cost of such manipulation. Further, we cluster phishing samples and show that some clusters are more likely to exhibit higher vulnerability levels than others. This helps an adversary identify the best candidate phishing samples from which to generate adversarial samples at a lower cost. Our findings can be used to refine the dataset and develop better learning models that compensate for weak samples in the training dataset.
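As an illustration of the attack surface described above, the following Python sketch generates adversarial samples by direct manipulation of binary phishing features and ranks clusters of phishing samples by average evasion cost. The choice of greedy flipping order and k-means clustering are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch of directed adversarial sampling via direct feature manipulation:
# flip attacker-controllable binary features of a phishing sample until the
# classifier accepts it as legitimate, then rank clusters by evasion cost.
import numpy as np
from sklearn.cluster import KMeans

def adversarial_flip(model, x, mutable_idx, max_cost=5):
    """Greedily flip mutable binary features until x is classified legitimate (0)."""
    x = x.copy()
    cost = 0
    for i in mutable_idx:
        if model.predict(x.reshape(1, -1))[0] == 0:
            return x, cost                 # sample now evades the classifier
        x[i] = 1 - x[i]                    # direct manipulation of one feature
        cost += 1
        if cost > max_cost:
            break                          # too expensive to manipulate
    return None, cost

def rank_clusters(model, X_phish, mutable_idx, k=4):
    """Clusters whose members evade at low cost have a higher vulnerability level."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(X_phish)
    avg_cost = {}
    for c in range(k):
        costs = [adversarial_flip(model, x, mutable_idx)[1]
                 for x in X_phish[labels == c]]
        avg_cost[c] = float(np.mean(costs))
    return sorted(avg_cost.items(), key=lambda kv: kv[1])  # cheapest first
```

Here `model` is any fitted scikit-learn-style classifier; the cheapest clusters returned by `rank_clusters` correspond to the "best candidate" phishing samples the abstract mentions.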
{"title":"Directed adversarial sampling attacks on phishing detection","authors":"H. Shirazi, Bruhadeshwar Bezawada, I. Ray, Charles Anderson","doi":"10.3233/JCS-191411","DOIUrl":"https://doi.org/10.3233/JCS-191411","url":null,"abstract":"Phishing websites trick honest users into believing that they interact with a legitimate website and capture sensitive information, such as user names, passwords, credit card numbers, and other personal information. Machine learning is a promising technique to distinguish between phishing and legitimate websites. However, machine learning approaches are susceptible to adversarial learning attacks where a phishing sample can bypass classifiers. Our experiments on publicly available datasets reveal that the phishing detection mechanisms are vulnerable to adversarial learning attacks. We investigate the robustness of machine learning-based phishing detection in the face of adversarial learning attacks. We propose a practical approach to simulate such attacks by generating adversarial samples through direct feature manipulation. To enhance the sample’s success probability, we describe a clustering approach that guides an attacker to select the best possible phishing samples that can bypass the classifier by appearing as legitimate samples. We define the notion of vulnerability level for each dataset that measures the number of features that can be manipulated and the cost for such manipulation. Further, we clustered phishing samples and showed that some clusters of samples are more likely to exhibit higher vulnerability levels than others. This helps an adversary identify the best candidates of phishing samples to generate adversarial samples at a lower cost. Our finding can be used to refine the dataset and develop better learning models to compensate for the weak samples in the training dataset.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127470195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic review of security threats and countermeasures in SaaS
M. A. Guillén, Victor Morales-Rocha, Luis Felipe Fernández Martínez
J. Comput. Secur., 2020-11-27. doi:10.3233/jcs-200002
Among the service models provided by the cloud, the software as a service (SaaS) model has seen the greatest growth. This service model is an attractive option for organizations, as they can transfer part or all of their IT functions to a cloud service provider. However, there is still some uncertainty about migrating all data to the cloud, mainly due to security concerns. The SaaS model not only inherits the security problems of a traditional application; there are also attacks and vulnerabilities unique to a SaaS architecture. Additionally, some attacks in this environment are more devastating due to the nature of shared resources in the SaaS model. Some of these attacks and vulnerabilities are not yet well known to software designers and developers. This lack of knowledge has negative consequences, as it can expose sensitive data of users and organizations. This paper presents a rigorous systematic review, following the SALSA framework, of the threats and attacks that occur in a SaaS environment and the countermeasures that mitigate them. As part of the results of this review, a classification of threats, attacks and countermeasures in the SaaS environment is presented.
{"title":"A systematic review of security threats and countermeasures in SaaS","authors":"M. A. Guillén, Victor Morales-Rocha, Luis Felipe Fernández Martínez","doi":"10.3233/jcs-200002","DOIUrl":"https://doi.org/10.3233/jcs-200002","url":null,"abstract":"Among the service models provided by the cloud, the software as a service (SaaS) model has had the greatest growth. This service model is an attractive option for organizations, as they can transfer part or all of their IT functions to a cloud service provider. However, there is still some uncertainty about deciding to carry out a migration of all data to the cloud, mainly due to security concerns. The SaaS model not only inherits the security problems of a traditional application, but there are unique attacks and vulnerabilities for a SaaS architecture. Additionally, some of the attacks in this environment are more devastating due to nature of shared resources in the SaaS model. Some of these attacks and vulnerabilities are not yet well known to software designers and developers. This lack of knowledge has negative consequences as it can expose sensitive data of users and organizations. This paper presents a rigorous systematic review using the SALSA framework to know the threats, attacks and countermeasures to mitigate the security problems that occur in a SaaS environment. As part of the results of this review, a classification of threats, attacks and countermeasures in the SaaS environment is presented.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"2017 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121578081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic Generation of Sources Lemmas in Tamarin: Towards Automatic Proofs of Security Protocols
V. Cortier, S. Delaune, Jannik Dreier
2020-09-14. doi:10.1007/978-3-030-59013-0_1
{"title":"Automatic Generation of Sources Lemmas in Tamarin: Towards Automatic Proofs of Security Protocols","authors":"V. Cortier, S. Delaune, Jannik Dreier","doi":"10.1007/978-3-030-59013-0_1","DOIUrl":"https://doi.org/10.1007/978-3-030-59013-0_1","url":null,"abstract":"","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126931768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new framework for privacy-preserving biometric-based remote user authentication
Yangguang Tian, Yingjiu Li, R. Deng, Nan Li, Pengfei Wu, Anyi Liu
J. Comput. Secur., 2020-06-19. doi:10.3233/jcs-191336
In this paper, we introduce the first general framework for strong privacy-preserving biometric-based remote user authentication, based on an oblivious RAM (ORAM) protocol and computational fuzzy extractors. We define formal security models for the general framework and prove that it achieves user authenticity and strong privacy. In particular, the general framework ensures that: 1) strong privacy and log-linear time complexity are achieved using a new tree-based ORAM protocol; and 2) constant bandwidth cost is achieved by exploiting computational fuzzy extractors in the challenge-response phase of remote user authentication.
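The constant-bandwidth claim is easiest to see against the standard fuzzy extractor interface Gen/Rep (Dodis et al.). The Python sketch below is a toy: the Gen/Rep bodies are hash-based stubs that tolerate no noise, whereas a real computational fuzzy extractor reproduces the key from any reading close enough to the enrolled one. The message flow, however, shows why one nonce and one MAC per authentication suffice.

```python
# Challenge-response on top of the Gen/Rep fuzzy extractor interface.
# The hash-based bodies are placeholder stubs (exact-match only).
import hmac, hashlib, os

def Gen(biometric: bytes):
    """Enrollment: derive a key plus public helper data from a reading."""
    helper = os.urandom(16)                            # stub helper data
    key = hashlib.sha256(helper + biometric).digest()
    return key, helper

def Rep(biometric: bytes, helper: bytes) -> bytes:
    """Reproduce the key from a fresh reading and the helper data (stub)."""
    return hashlib.sha256(helper + biometric).digest()

def login(server_key: bytes, helper: bytes, fresh_reading: bytes) -> bool:
    """One nonce out, one MAC back: constant-size messages per login."""
    nonce = os.urandom(16)                             # server's challenge
    response = hmac.new(Rep(fresh_reading, helper), nonce, hashlib.sha256).digest()
    expected = hmac.new(server_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)

# Enrollment followed by a successful login with the same reading:
key, helper = Gen(b"iris-template")
assert login(key, helper, b"iris-template")
```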
{"title":"A new framework for privacy-preserving biometric-based remote user authentication","authors":"Yangguang Tian, Yingjiu Li, R. Deng, Nan Li, Pengfei Wu, Anyi Liu","doi":"10.3233/jcs-191336","DOIUrl":"https://doi.org/10.3233/jcs-191336","url":null,"abstract":"In this paper, we introduce the first general framework for strong privacy-preserving biometric-based remote user authentication based on oblivious RAM (ORAM) protocol and computational fuzzy extractors. We define formal security models for the general framework, and we prove that it can achieve user authenticity and strong privacy. In particular, the general framework ensures that: 1) a strong privacy and a log-linear time-complexity are achieved by using a new tree-based ORAM protocol; 2) a constant bandwidth cost is achieved by exploiting computational fuzzy extractors in the challenge-response phase of remote user authentications.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115196624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards adding verifiability to web-based Git repositories
Hammad Afzali, Santiago Torres-Arias, Reza Curtmola, Justin Cappos
J. Comput. Secur., 2020-06-19. doi:10.3233/jcs-191371
Web-based Git hosting services such as GitHub and GitLab are popular choices for managing and interacting with Git repositories. However, they lack an important security feature: the ability to sign Git commits. Users instruct the server to perform repository operations on their behalf and have to trust that the server will execute their requests faithfully. Such trust may be unwarranted, however, because a malicious or compromised server may execute the requested actions incorrectly, leading to a different repository state than the user intended. In this paper, we show a range of high-impact attacks that can be executed stealthily when developers use the web UI of a Git hosting service to perform common actions such as editing files or merging branches. We then propose le-git-imate, a defense against these attacks which enables users to protect their commits using Git’s standard commit signing mechanism. We implement le-git-imate as a Chrome browser extension. le-git-imate requires no changes on the server side and can thus be used immediately. It also preserves the current workflows used in GitHub/GitLab, does not require the user to leave the browser, and allows anyone to verify that the server’s actions faithfully follow the user’s requested actions. Moreover, experimental evaluation using the browser extension shows that le-git-imate has performance comparable to Git’s standard commit signature mechanism. With our solution in place, users can take advantage of GitHub/GitLab’s web-based features without sacrificing security, thus paving the way towards verifiable web-based Git repositories.
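The verifiability argument rests on the fact that a Git commit id is just the SHA-1 hash of the commit object's content, so a client can predict the id of the commit it asked the server to create and detect any deviation. The sketch below recomputes a commit id in Python; it illustrates the kind of client-side check involved, while the actual extension additionally constructs (and signs) the commit object inside the browser.

```python
# Recompute a Git commit id from the raw commit object and compare it to
# the id the server reports.
import hashlib

def git_commit_id(tree: str, parents: list, author: str,
                  committer: str, message: str) -> str:
    body = f"tree {tree}\n"
    for p in parents:
        body += f"parent {p}\n"
    body += f"author {author}\ncommitter {committer}\n\n{message}"
    raw = body.encode()
    header = f"commit {len(raw)}\0".encode()   # Git object header
    return hashlib.sha1(header + raw).hexdigest()

# If the id computed from the user's intended tree, parents, and message
# differs from the id of the commit the server published, the server did
# not faithfully execute the requested action (e.g., it merged other content).
```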
{"title":"Towards adding verifiability to web-based Git repositories","authors":"Hammad Afzali, Santiago Torres-Arias, Reza Curtmola, Justin Cappos","doi":"10.3233/jcs-191371","DOIUrl":"https://doi.org/10.3233/jcs-191371","url":null,"abstract":". Web-based Git hosting services such as GitHub and GitLab are popular choices to manage and interact with Git repositories. However, they lack an important security feature – the ability to sign Git commits. Users instruct the server to perform repository operations on their behalf and have to trust that the server will execute their requests faithfully. Such trust may be unwarranted though because a malicious or a compromised server may execute the requested actions in an incorrect manner, leading to a different state of the repository than what the user intended. In this paper, we show a range of high-impact attacks that can be executed stealthily when developers use the web UI of a Git hosting service to perform common actions such as editing files or merging branches. We then propose le-git-imate , a defense against these attacks, which enables users to protect their commits using Git’s standard commit signing mechanism. We implement le-git-imate as a Chrome browser extension. le-git-imate does not require changes on the server side and can thus be used immediately. It also preserves current workflows used in Github/GitLab and does not require the user to leave the browser, and it allows anyone to verify that the server’s actions faithfully follow the user’s requested actions. Moreover, experimental evaluation using the browser extension shows that le-git-imate has comparable performance with Git’s standard commit signature mechanism. With our solution in place, users can take advantage of GitHub/GitLab’s web-based features without sacrificing security, thus paving the way towards verifiable web-based Git repositories.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123822403","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Optimal Threshold Padlock Systems
Jannik Dreier, J. Dumas, P. Lafourcade, Léo Robert
J. Comput. Secur., 2020-04-23. doi:10.3233/jcs-210065
In 1968, Liu described the problem of securing documents in a shared secret project. In an example, at least six out of eleven participating scientists need to be present to open the lock securing the secret documents. Shamir proposed a mathematical solution to this physical problem in 1979, by designing an efficient k-out-of-n secret sharing scheme based on Lagrange’s interpolation. Liu and Shamir also claimed that the minimal solution using physical locks is clearly impractical and exponential in the number of participants. In this paper we relax some implicit assumptions in their claim and propose an optimal physical solution to the problem of Liu that uses physical padlocks, but where the number of padlocks is not greater than the number of participants. Then, we show that no device can do better for k-out-of-n threshold padlock systems as soon as k ⩾ √(2n), which holds true in particular for Liu’s example. More generally, we derive bounds required to implement any threshold system and prove a lower bound of O(log(n)) padlocks for any threshold larger than 2. For instance, we propose an optimal scheme reaching that bound for 2-out-of-n threshold systems, requiring fewer than 2·log₂(n) padlocks. We also discuss more complex access structures, a wrapping technique, and other sublinear realizations, such as an algorithm to generate 3-out-of-n systems with 2.5n padlocks. Finally, we give an algorithm building k-out-of-n threshold padlock systems with only O(log(n)^(k−1)) padlocks. Apart from the physical world, our results also show that it is possible to implement secret sharing over small fields.
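For concreteness, the "clearly impractical" baseline is the classic combinatorial scheme: put one padlock on the chain for every (k−1)-subset of participants and give its keys to everyone outside that subset, so no k−1 people can open all locks while any k can. The short computation below reproduces the numbers for Liu's 6-out-of-11 example.

```python
# Padlock and key counts for the classic combinatorial k-out-of-n scheme.
from math import comb

def classic_scheme(n, k):
    locks = comb(n, k - 1)            # one lock per (k-1)-subset
    keys_each = comb(n - 1, k - 1)    # locks whose blocking subset excludes you
    return locks, keys_each

print(classic_scheme(11, 6))  # Liu's 6-out-of-11 example: (462, 252)
```

That is 462 padlocks on the chain and 252 keys per scientist, against the at-most-n padlocks of the solution proposed in the paper.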
{"title":"Optimal Threshold Padlock Systems","authors":"Jannik Dreier, J. Dumas, P. Lafourcade, Léo Robert","doi":"10.3233/jcs-210065","DOIUrl":"https://doi.org/10.3233/jcs-210065","url":null,"abstract":"In 1968, Liu described the problem of securing documents in a shared secret project. In an example, at least six out of eleven participating scientists need to be present to open the lock securing the secret documents. Shamir proposed a mathematical solution to this physical problem in 1979, by designing an efficient k-out-of-n secret sharing scheme based on Lagrange’s interpolation. Liu and Shamir also claimed that the minimal solution using physical locks is clearly impractical and exponential in the number of participants. In this paper we relax some implicit assumptions in their claim and propose an optimal physical solution to the problem of Liu that uses physical padlocks, but the number of padlocks is not greater than the number of participants. Then, we show that no device can do better for k-out-of-n threshold padlock systems as soon as k ⩾ 2 n , which holds true in particular for Liu’s example. More generally, we derive bounds required to implement any threshold system and prove a lower bound of O ( log ( n ) ) padlocks for any threshold larger than 2. For instance we propose an optimal scheme reaching that bound for 2-out-of-n threshold systems and requiring less than 2 log 2 ( n ) padlocks. We also discuss more complex access structures, a wrapping technique, and other sublinear realizations like an algorithm to generate 3-out-of-n systems with 2.5 n padlocks. Finally we give an algorithm building k-out-of-n threshold padlock systems with only O ( log ( n ) k − 1 ) padlocks. Apart from the physical world, our results also show that it is possible to implement secret sharing over small fields.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134570546","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Verification of stateful cryptographic protocols with exclusive OR
Jannik Dreier, L. Hirschi, S. Radomirovic, R. Sasse
J. Comput. Secur., 2020-02-04. doi:10.3233/jcs-191358
In cryptographic protocols, in particular RFID protocols, exclusive-or (XOR) operations are common. Due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR. In this paper, we improve the TAMARIN prover and its underlying theory to deal with an equational theory modeling XOR operations. The XOR theory can be combined with all equational theories previously supported, including user-defined equational theories. This makes TAMARIN the first verification tool for cryptographic protocols in the symbolic model to support simultaneously this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID-protocols, where we can identify attacks as well as provide proofs.
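The equational theory in question is XOR's algebra: associativity, commutativity, nilpotence (x ⊕ x = 0), and the unit law (x ⊕ 0 = x). The Python normalizer below captures exactly these equations for variable-free terms; it is only a didactic stand-in for TAMARIN's actual reasoning, which performs unification modulo this theory.

```python
# Normalize ground XOR terms: equal subterms cancel in pairs.
from collections import Counter

ZERO = "zero"

def xor(*terms):
    counts = Counter()
    stack = list(terms)
    while stack:
        t = stack.pop()
        if isinstance(t, tuple) and t[0] == "xor":
            stack.extend(t[1:])        # flatten nested XORs (associativity)
        elif t != ZERO:
            counts[t] += 1             # drop the unit element 0
    odd = sorted(a for a, c in counts.items() if c % 2 == 1)  # x (+) x = 0
    if not odd:
        return ZERO
    return odd[0] if len(odd) == 1 else tuple(["xor"] + odd)

# These identities are what purely syntactic analyses miss, e.g. unblinding
# an RFID message: (k (+) nr) (+) k normalizes to nr.
assert xor(xor("k", "nr"), "k") == "nr"
assert xor("a", "b") == xor("b", "a")   # commutativity via sorting
assert xor("k", "k") == ZERO            # nilpotence
```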
{"title":"Verification of stateful cryptographic protocols with exclusive OR","authors":"Jannik Dreier, L. Hirschi, S. Radomirovic, R. Sasse","doi":"10.3233/jcs-191358","DOIUrl":"https://doi.org/10.3233/jcs-191358","url":null,"abstract":"In cryptographic protocols, in particular RFID protocols, exclusive-or (XOR) operations are common. Due to the inherent complexity of faithful models of XOR, there is only limited tool support for the verification of cryptographic protocols using XOR. In this paper, we improve the TAMARIN prover and its underlying theory to deal with an equational theory modeling XOR operations. The XOR theory can be combined with all equational theories previously supported, including user-defined equational theories. This makes TAMARIN the first verification tool for cryptographic protocols in the symbolic model to support simultaneously this large set of equational theories, protocols with global mutable state, an unbounded number of sessions, and complex security properties including observational equivalence. We demonstrate the effectiveness of our approach by analyzing several protocols that rely on XOR, in particular multiple RFID-protocols, where we can identify attacks as well as provide proofs.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122591183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning
Samuel Yeom, Irene Giacomelli, Alan Menaged, Matt Fredrikson, S. Jha
J. Comput. Secur., 2020-02-04. doi:10.3233/jcs-191362
Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models’ structure or their observable behavior. This article examines the factors that can allow a training set membership inference attacker or an attribute inference attacker to learn such information. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. We also explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks. We show that overfitting is not necessary for these attacks, demonstrating that other factors, such as robustness to norm-bounded input perturbations and malicious training algorithms, can also significantly increase the privacy risk. Notably, as robustness is intended to be a defense against attacks on the integrity of model predictions, these results suggest it may be difficult in some cases to simultaneously defend against privacy and integrity attacks.
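A simple instance of the membership inference adversary studied here is a loss-threshold attack: guess "member" whenever the model's loss on a record falls below a threshold calibrated to the training loss. The sketch below (assuming a scikit-learn-style classifier with predict_proba and integer-encoded class labels) measures the resulting advantage, which grows with the train/test generalization gap that overfitting produces.

```python
# Loss-threshold membership inference and its advantage (TPR - FPR).
import numpy as np

def per_example_loss(model, X, y):
    """Cross-entropy of the true class for each example."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

def membership_advantage(model, X_train, y_train, X_test, y_test):
    train_loss = per_example_loss(model, X_train, y_train)
    test_loss = per_example_loss(model, X_test, y_test)
    tau = train_loss.mean()                # threshold calibrated on members
    tpr = (train_loss < tau).mean()        # members correctly flagged
    fpr = (test_loss < tau).mean()         # non-members wrongly flagged
    # Advantage near 0 for a well-generalizing model; overfitting widens
    # the train/test loss gap and pushes the advantage up.
    return tpr - fpr
```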
{"title":"Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning","authors":"Samuel Yeom, Irene Giacomelli, Alan Menaged, Matt Fredrikson, S. Jha","doi":"10.3233/jcs-191362","DOIUrl":"https://doi.org/10.3233/jcs-191362","url":null,"abstract":". Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models’ structure or their observable behavior. This article examines the factors that can allow a training set membership inference attacker or an attribute inference attacker to learn such information. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms. We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. We also explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks. We show that overfitting is not necessary for these attacks, demonstrating that other factors, such as robustness to norm-bounded input perturbations and malicious training algorithms, can also significantly increase the privacy risk. Notably, as robustness is intended to be a defense against attacks on the integrity of model predictions, these results suggest it may be difficult in some cases to simultaneously defend against privacy and integrity attacks.","PeriodicalId":142580,"journal":{"name":"J. Comput. Secur.","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121303074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}