Software-Defined Dependable Computing for Spacecraft
C. Fuchs, N. Murillo, A. Plaat, E. V. D. Kouwe, D. Harsono, Peng Wang
In this contribution, we provide insights into the practical feasibility, effectiveness, and validation of a software-based fault-tolerance architecture we developed for use aboard small satellites. We exploit thread-level coarse-grain lockstep to facilitate forward error correction and assure computational correctness on an FPGA-based MPSoC. The architecture can be implemented using standard open-source and FPGA design tools, requires only standard COTS components, and is processor-architecture and operating-system agnostic.
{"title":"Software-Defined Dependable Computing for Spacecraft","authors":"C. Fuchs, N. Murillo, A. Plaat, E. V. D. Kouwe, D. Harsono, Peng Wang","doi":"10.1109/PRDC.2018.00043","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00043","url":null,"abstract":"In this contribution, we provide insights on the practical feasibility, effectiveness, and validation of a software-based fault-tolerance architecture we developed for use aboard small satellites. We exploit thread-level coarse-grain lockstep to facilitate forward-error-correction and assures computational correctness on an FPGA-based MPSoC. It can be implemented using standard open-source and FPGA design tools, requires only standard COTS components, and is processor architecture and operating system agnostic.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121983898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
InfoLeak: Scheduling-Based Information Leakage
Tsvetoslava Vateva-Gurova, Salman Manzoor, Yennun Huang, N. Suri
Covert- and side-channel attacks, typically enabled by the use of shared resources, pose a serious threat to complex systems such as the Cloud. While their exploitation in the real world depends on properties of the execution environment (e.g., scheduling), the explicit consideration of these factors is often neglected. This paper introduces InfoLeak, an information leakage model that establishes the crucial role of the scheduler in exploiting core-private caches as covert channels. We show, formally and empirically, how the availability of these channels and the corresponding attack feasibility are affected by scheduling. Moreover, our model allows security experts to assess the threat posed by core-private cache covert channels for a particular system by considering only the scheduling information. To validate the utility of InfoLeak, we deploy a covert-channel attack and correlate its success ratio with the scheduling of the attacker processes in the target system. We demonstrate the applicability of the InfoLeak model for analyzing scheduling information for possible information leakage and provide an example of its usage.
{"title":"InfoLeak: Scheduling-Based Information Leakage","authors":"Tsvetoslava Vateva-Gurova, Salman Manzoor, Yennun Huang, N. Suri","doi":"10.1109/PRDC.2018.00015","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00015","url":null,"abstract":"Covert-and side-channel attacks, typically enabled by the usage of shared resources, pose a serious threat to complex systems such as the Cloud. While their exploitation in the real world depends on properties of the execution environment (e.g., scheduling), the explicit consideration of these factors is often neglected. This paper introduces InfoLeak, an information leakage model that establishes the crucial role of the scheduler for exploiting core-private caches as covert channels. We show, formally and empirically, how the availability of these channels and the corresponding attack feasibility are affected by scheduling. Moreover, our model allows security experts to assess the related threat, posed by core-private cache covert channels for a particular system by considering solely the scheduling information. To validate the utility of InfoLeak, we deploy a covert-channel attack and correlate its success ratio to the scheduling of the attacker processes in the target system. We demonstrate the applicability of the InfoLeak model for analyzing the scheduling information for possible information leakage and also provide an example on its usage.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"432 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116707035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Approach for Trustworthiness Benchmarking Using Software Metrics
N. Medeiros, N. Ivaki, Pedro Costa, M. Vieira
Trustworthiness is a paramount concern for users and customers when selecting a software solution, especially in the context of complex and dynamic environments such as Cloud and IoT. However, assessing and benchmarking trustworthiness (the worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., business-critical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and, above all, the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project based on their trustworthiness score and conducted a survey among software security experts to validate the obtained ranking. Results show that our approach is able to provide a sound ranking of the benchmarked software.
{"title":"An Approach for Trustworthiness Benchmarking Using Software Metrics","authors":"N. Medeiros, N. Ivaki, Pedro Costa, M. Vieira","doi":"10.1109/PRDC.2018.00019","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00019","url":null,"abstract":"Trustworthiness is a paramount concern for users and customers in the selection of a software solution, specially in the context of complex and dynamic environments, such as Cloud and IoT. However, assessing and benchmarking trustworthiness (worthiness of software for being trusted) is a challenging task, mainly due to the variety of application scenarios (e.g., businesscritical, safety-critical), the large number of determinative quality attributes (e.g., security, performance), and last, but foremost, due to the subjective notion of trust and trustworthiness. In this paper, we present trustworthiness as a measurable notion in relative terms based on security attributes and propose an approach for the assessment and benchmarking of software. The main goal is to build a trustworthiness assessment model based on software metrics (e.g., Cyclomatic Complexity, CountLine, CBO) that can be used as indicators of software security. To demonstrate the proposed approach, we assessed and ranked several files and functions of the Mozilla Firefox project based on their trustworthiness score and conducted a survey among several software security experts in order to validate the obtained rank. Results show that our approach is able to provide a sound ranking of the benchmarked software.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"83 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115763408","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical Abnormal-Node Detection Using Fuzzy Logic for ECA Rule-Based Wireless Sensor Networks
Nesrine Berjab, Hieu Hanh Le, Chia-Mu Yu, S. Kuo, H. Yokota
The Internet of Things (IoT) is a distributed, networked system composed of many embedded sensor devices. Unfortunately, these devices are resource constrained and susceptible to malicious data-integrity attacks and failures, leading to unreliability and sometimes to major failures of parts of the entire system. Intrusion detection and failure handling are essential requirements for IoT security. Nevertheless, as far as we know, the area of data-integrity detection for IoT has yet to receive much attention. Most previous intrusion-detection methods proposed for IoT, particularly for wireless sensor networks (WSNs), focus only on specific types of network attacks. Moreover, these approaches usually rely on precise values to specify abnormality thresholds. However, sensor readings are often imprecise, and crisp threshold values are inappropriate. To guarantee a lightweight, dependable monitoring system, we propose a novel hierarchical framework for detecting abnormal nodes in WSNs. The proposed approach uses fuzzy logic in event-condition-action (ECA) rule-based WSNs to detect malicious nodes, while also considering failed nodes. The spatiotemporal semantics of heterogeneous sensor readings are considered in the decision process to distinguish malicious data from other anomalies. Based on our experiments with the proposed framework, we stress the significance of considering sensor correlations to achieve detection accuracy, which has been neglected in previous studies. Our experiments using real-world sensor data demonstrate that our approach can provide high detection accuracy with low false-alarm rates. We also show that our approach performs well compared to two well-known classification algorithms.
{"title":"Hierarchical Abnormal-Node Detection Using Fuzzy Logic for ECA Rule-Based Wireless Sensor Networks","authors":"Nesrine Berjab, Hieu Hanh Le, Chia-Mu Yu, S. Kuo, H. Yokota","doi":"10.1109/PRDC.2018.00051","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00051","url":null,"abstract":"The Internet of things (IoT) is a distributed, networked system composed of many embedded sensor devices. Unfortunately, these devices are resource constrained and susceptible to malicious data-integrity attacks and failures, leading to unreliability and sometimes to major failure of parts of the entire system. Intrusion detection and failure handling are essential requirements for IoT security. Nevertheless, as far as we know, the area of data-integrity detection for IoT has yet to receive much attention. Most previous intrusion-detection methods proposed for IoT, particularly for wireless sensor networks (WSNs), focus only on specific types of network attacks. Moreover, these approaches usually rely on using precise values to specify abnormality thresholds. However, sensor readings are often imprecise and crisp threshold values are inappropriate. To guarantee a lightweight, dependable monitoring system, we propose a novel hierarchical framework for detecting abnormal nodes in WSNs. The proposed approach uses fuzzy logic in event-condition-action (ECA) rule-based WSNs to detect malicious nodes, while also considering failed nodes. The spatiotemporal semantics of heterogeneous sensor readings are considered in the decision process to distinguish malicious data from other anomalies. Following our experiments with the proposed framework, we stress the significance of considering the sensor correlations to achieve detection accuracy, which has been neglected in previous studies. Our experiments using real-world sensor data demonstrate that our approach can provide high detection accuracy with low false-alarm rates. We also show that our approach performs well when compared to two well-known classification algorithms.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129929983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Specification and Formal Verification of Atomic Concurrent Real-Time Transactions
Simin Cai, B. Gallina, Dag Nyström, C. Seceleanu
Although atomicity, isolation, and temporal correctness are crucial to the dependability of many real-time database-centric systems, the assurance mechanism selected for one property may breach another. Trading off these properties requires specifying and analyzing their dependencies, together with the selected supporting mechanisms (abort recovery, concurrency control, and scheduling), which is still insufficiently supported. In this paper, we propose a UML profile, called UTRAN, for specifying atomic concurrent real-time transactions, with explicit support for all three properties and their supporting mechanisms. We also propose a pattern-based modeling framework, called UPPCART, to formalize the transactions and the mechanisms specified in UTRAN as UPPAAL timed automata. Various mechanisms can be modeled flexibly using our reusable patterns, after which the desired properties can be verified by the UPPAAL model checker. Our techniques facilitate systematic analysis of atomicity, isolation, and temporal correctness trade-offs with guarantees, thus contributing to dependable real-time database systems.
{"title":"Specification and Formal Verification of Atomic Concurrent Real-Time Transactions","authors":"Simin Cai, B. Gallina, Dag Nyström, C. Seceleanu","doi":"10.1109/PRDC.2018.00021","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00021","url":null,"abstract":"Although atomicity, isolation and temporal correctness are crucial to the dependability of many real-time database-centric systems, the selected assurance mechanism for one property may breach another. Trading off these properties requires to specify and analyze their dependencies, together with the selected supporting mechanisms (abort recovery, concurrency control, and scheduling), which is still insufficiently supported. In this paper, we propose a UML profile, called UTRAN, for specifying atomic concurrent real-time transactions, with explicit support for all three properties and their supporting mechanisms. We also propose a pattern-based modeling framework, called UPPCART, to formalize the transactions and the mechanisms specified in UTRAN, as UPPAAL timed automata. Various mechanisms can be modeled flexibly using our reusable patterns, after which the desired properties can be verified by the UPPAAL model checker. Our techniques facilitate systematic analysis of atomicity, isolation and temporal correctness trade-offs with guarantee, thus contributing to a dependable real-time database system.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123412833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On Algorithms Selection for Unsupervised Anomaly Detection
T. Zoppi, A. Ceccarelli, A. Bondavalli
Anomaly detection, which aims at identifying unexpected trends and data patterns, has been widely used to build error detectors, failure predictors, and intrusion detectors. Internal faults and malicious attacks have different impacts on the behavior of the system: they usually manifest as different observable deviations from the expected behavior, which may be identified by anomaly detection algorithms. Our study investigates the suitability of unsupervised algorithms, and their families, in detecting point, contextual, or collective anomalies. To provide a complete picture, we consider both sliding-window and non-sliding-window algorithms that operate in unsupervised mode. Along with qualitative analyses of each algorithm and family, we conduct an experimental campaign in which we run each algorithm on three state-of-the-art datasets into which we inject point, contextual, or collective anomalies. Results show that non-sliding algorithms can detect point and collective anomalies, but cannot effectively deal with contextual ones. Sliding-window algorithms, instead, require shorter training periods and naturally build a local context, which allows them to deal effectively with contextual anomalies. These observations are summarized to support the choice of the correct algorithm depending on the class(es) of anomaly under investigation.
{"title":"On Algorithms Selection for Unsupervised Anomaly Detection","authors":"T. Zoppi, A. Ceccarelli, A. Bondavalli","doi":"10.1109/PRDC.2018.00050","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00050","url":null,"abstract":"Anomaly detection, which aims at identifying unexpected trends and data patterns, has widely been used to build error detectors, failure predictors or intrusion detectors. Internal faults or malicious attacks have a different impact on the behavior of the system. They usually manifest as different observable deviations from the expected behavior, which may be identified by anomaly detection algorithms. Our study aims at investigating the suitability of unsupervised algorithms and their families in detecting either point, contextual or collective anomalies. To provide a complete picture, we consider both sliding and non-sliding window algorithms which operate in unsupervised mode. Along with qualitative analyses of each algorithm and family, we conduct an experimental campaign in which we run each algorithm on three state-of-the-art datasets in which we inject either point, contextual or collective anomalies. Results show that non-sliding algorithms are capable to detect point and collective anomalies, while they cannot effectively deal with contextual ones. Instead, sliding window algorithms require shorter periods of training and naturally build a local context, which allow them to effectively deal with contextual anomalies. Such observations are summarized to support the choice of the correct algorithm depending on the investigated class(es) of anomaly.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121634144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research on Convolutional Codes are Used in RAID
Tianyi Zhang, M. Kitakami
This paper proposes a new class of convolutional codes for use in RAID and compares their erasure-tolerating capabilities with those of existing MDS codes. The proposed codes can recover from erasure situations that existing MDS codes of the same rate fail to handle in RAID.
{"title":"Research on Convolutional Codes are Used in RAID","authors":"Tianyi Zhang, M. Kitakami","doi":"10.1109/PRDC.2018.00032","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00032","url":null,"abstract":"This paper proposes a new class of convolutional codes that are used in RAID, and compares their tolerating capabilities with existing MDS codes, they can solve erasure situations that the existing MDS codes of the same rate fail to solve for RAID.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"241 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133613125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Adaptive Repetition Scheme with Machine Learning for 3GPP NB-IoT
Li-Sheng Chen, W. Chung, Ing-Yi Chen, S. Kuo
In NB-IoT systems, UEs with poor signal quality employ more repetitions to compensate for additional signal attenuation. Excessively high coverage enhancement (CE) levels and repetition counts waste valuable wireless resources, whereas inadequate CE levels and repetitions result in data-retrieval failure at the receiving end. Therefore, a machine-learning-based adaptive repetition scheme for a 3GPP NB-IoT system is proposed in this work to effectively improve overall network transmission efficiency. The simulation results show the effect of the discount factor on the convergence behavior of the proposed scheme: a lower discount factor yields more myopic behavior, because the scheme then places more emphasis on immediate rewards. The proposed scheme effectively improves the average spectral efficiency.
{"title":"Adaptive Repetition Scheme with Machine Learning for 3GPP NB-IoT","authors":"Li-Sheng Chen, W. Chung, Ing-Yi Chen, S. Kuo","doi":"10.1109/PRDC.2018.00046","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00046","url":null,"abstract":"In NB-IoT systems, UEs with poor signal quality employ more repetitions to compensate for additional signal attenuation. Excessively high CE levels and repetitions of UEs lead to wastage of valuable wireless resources, whereas inadequate CE levels and repetitions result in data retrieval failure at the receiving end. Therefore, a machine learning-based adaptive repetition scheme for a 3GPP NB-IoT system is proposed in this work to effectively improve overall network transmission efficiency. The results of simulation show the effect of the discount factor? on the convergence behavior of the proposed scheme, with a lower discount factor value denoting the myopic behavior of the proposed scheme, which results from the fact that it places more emphasis on immediate rewards. And the propose scheme is capable of effectively improving the average spectral efficiency.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131160732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A SAT-Based Approach for SDN Rule Table Distribution
Ryota Ogasawara, Masayuki Arai
In Software-Defined Networking (SDN), it is important to efficiently partition the rule table into sub-tables and distribute them to the multiple switches in the network. In this paper, we propose an optimal rule-table distribution strategy based on Boolean satisfiability (SAT). The N-coloring problem underlying the partitioning is formulated in conjunctive normal form (CNF), and by repeatedly running a SAT solver we obtain the maximum number of partitions.
{"title":"A SAT-Based Approach for SDN Rule Table Distribution","authors":"Ryota Ogasawara, Masayuki Arai","doi":"10.1109/PRDC.2018.00034","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00034","url":null,"abstract":"In Software-Defined Networking (SDN) it is important to efficiently partition the rule table into sub-tables and distribute them to the multiple switches over the network. In this paper we proposed an optimal rule table distribution strategy by applying satisfiability (SAT)-based approach. N-coloring problem for partitioning is formulated as conjunctive normal form (CNF), and by repeatedly running SAT solver we can obtain maximum number of partitions.","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125394485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Restructuring Mesh-Connected Processor Arrays with Spares on Four Sides by Orthogonal Side Rotation
I. Takanami, Masaru Fukushi
An array with spares on four sides and a restructuring algorithm for it were proposed in [1]. However, the restructuring algorithm described in [1] is too complicated to be realized in hardware. Here, we propose a method to improve this situation. First, the array is treated as an (N+2) × (N+2) array once four PEs are added to the four corners and the spares are included. The (N+2) × (N+2) array is divided into four subarrays, each of size (N/2+1) × (N/2+1), and the orthogonal side rotation introduced here is applied individually to each subarray. For example, for N = 8 the augmented array is 10 × 10 and each subarray is 5 × 5. Reliabilities obtained by computer simulation increase considerably compared with those in [1].
{"title":"Restructuring Mesh-Connected Processor Arrays with Spares on Four Sides by Orthogonal Side Rotation","authors":"I. Takanami, Masaru Fukushi","doi":"10.1109/PRDC.2018.00029","DOIUrl":"https://doi.org/10.1109/PRDC.2018.00029","url":null,"abstract":"An array with spares on four sides and the restructuring algorithm for it were proposed in [1]. However, the restructuring algorithm described in [1] is too complicated to be realized in hardware. Here, we propose a method to improve such the situation. First, the array is considered to be an (N +2) (N +2) array if four PEs are added to the four corners of the array and the spares are included. The (N+2) (N+2) array is divided into four subarrays, each of which is of size (N=2 + 1)(N=2 + 1), and the orthogonal side rotation introduced here is individually applied to each subarray. The reliabilities are given by computer simulation. They fairly increase, comparing with those in [1]. :","PeriodicalId":409301,"journal":{"name":"2018 IEEE 23rd Pacific Rim International Symposium on Dependable Computing (PRDC)","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116783370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}