Title: Fixed Cost Maintenance for Information Dissemination in Wireless Sensor Networks
Authors: R. Panta, M. Vintila, S. Bagchi
DOI: https://doi.org/10.1109/SRDS.2010.15
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Because of transient wireless link failures, incremental node deployment, and node mobility, existing information dissemination protocols for wireless ad-hoc and sensor networks require nodes to periodically broadcast "advertisements" containing the version of their current data item even in the "steady state", when no dissemination is taking place. This ensures that all nodes in the network are up to date, but it also causes a continuous energy expenditure during the steady state, which is by far the dominant part of a network's lifetime. In this paper, we present a protocol called Varuna that incurs a constant energy cost, independent of the duration of the steady state. In Varuna, nodes monitor the traffic patterns of neighboring nodes to decide when an advertisement is necessary. Using testbed experiments and simulations, we show that Varuna achieves several orders of magnitude of energy savings compared to Trickle, the existing standard for dissemination in sensor networks, at the expense of a reasonable amount of memory for state maintenance.
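The suppression idea described in the abstract can be sketched as follows. This is a hypothetical simplification, not the authors' implementation: all names and the inconsistency test are illustrative. A node stays silent in the steady state and advertises only when overheard traffic implies that a neighbor holds an older data version.

```python
# Hypothetical sketch of Varuna's core idea: instead of advertising
# periodically (as Trickle does), a node remains silent in the steady state
# and advertises only when overheard traffic suggests a neighbor is stale.

class Node:
    def __init__(self, node_id, version):
        self.node_id = node_id
        self.version = version          # version of the current data item
        self.neighbor_versions = {}     # last version overheard per neighbor

    def overhear(self, neighbor_id, version):
        """Record the data version implied by a neighbor's transmission."""
        self.neighbor_versions[neighbor_id] = version

    def must_advertise(self):
        """Advertise only if some overheard neighbor appears out of date."""
        return any(v < self.version for v in self.neighbor_versions.values())


# Steady state: all neighbors are current, so no advertisements are sent
# and no energy is spent, regardless of how long the steady state lasts.
n = Node("A", version=3)
n.overhear("B", 3)
n.overhear("C", 3)
assert not n.must_advertise()

# A stale neighbor (e.g., newly deployed) triggers an advertisement.
n.overhear("D", 1)
assert n.must_advertise()
```

The constant-cost property follows because advertisements are driven by observed inconsistency rather than by a periodic timer; the memory cost the abstract mentions corresponds to the per-neighbor state kept here.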
Title: Thicket: A Protocol for Building and Maintaining Multiple Trees in a P2P Overlay
Authors: M. Ferreira, J. Leitao, L. Rodrigues
DOI: https://doi.org/10.1109/SRDS.2010.19
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: One way to efficiently disseminate information in a P2P overlay is to rely on a spanning tree. However, in a tree, interior nodes support a much higher load than leaf nodes, and the failure of a single node can break the tree, impairing the reliability of the dissemination protocol. These problems can be addressed by using multiple trees, such that each node is an interior node in just a few trees and a leaf node in the remaining ones. The multiple-trees approach achieves load distribution and also allows redundant information to be sent for fault tolerance. This paper proposes Thicket, a decentralized algorithm to efficiently build and maintain such multiple trees over a single unstructured overlay network. The algorithm has been implemented and is extensively evaluated using simulation in a P2P overlay with 10,000 nodes.
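The load-balancing property of the multiple-trees approach can be illustrated with a toy role assignment. This is not Thicket's actual (decentralized) construction; a simple centralized round-robin assignment is used here only to show the invariant the abstract describes: each node is interior in a few trees and a leaf everywhere else.

```python
# Illustrative sketch of the multiple-trees role invariant (not Thicket's
# algorithm): with k trees over n nodes, make each node interior in exactly
# one tree and a leaf in the other k - 1, spreading forwarding load.

def assign_roles(node_ids, k):
    """Return {tree_index: set of interior nodes} via round-robin."""
    interior = {t: set() for t in range(k)}
    for i, node in enumerate(node_ids):
        interior[i % k].add(node)   # this node is interior in one tree only
    return interior

roles = assign_roles([f"n{i}" for i in range(12)], k=3)

# Forwarding load is evenly spread: each tree has 12 / 3 = 4 interior nodes,
# and every node is interior in exactly one of the 3 trees.
for t in range(3):
    assert len(roles[t]) == 4
```

Fault tolerance then comes for free: a node's failure only disconnects the one tree in which it is interior, and the redundant copies sent over the other trees still reach its descendants.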
Title: A Study on Latent Vulnerabilities
Authors: Beng Heng Ng, Xin Hu, A. Prakash
DOI: https://doi.org/10.1109/SRDS.2010.47
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Software code reuse has long been touted as a reliable and efficient software development paradigm. Whilst this practice has numerous benefits, it is inherently susceptible to latent vulnerabilities. Source code that is reused without being patched, for various reasons, may result in vulnerable binaries, despite the vulnerabilities being publicly known. To aggravate matters, attackers have access to information on these vulnerabilities as well. Defenders need to ensure all loopholes are patched, while attackers need just one such loophole. In this work, we define latent vulnerabilities and study the prevalence of the problem. This provides us with the motivation, and an insight into the future work to be done, in solving the problem. Our results show that unpatched source files more than one year old are commonly used in the latest operating systems; in fact, several of these files are more than ten years old. We explore the premise of using symbols to identify binaries and conclude that they are insufficient for solving the problem. Additionally, we discuss two possible approaches to solve the problem.
Title: Lightweight Task Graph Inference for Distributed Applications
Authors: Bin Xin, P. Eugster, X. Zhang, Jinlin Yang
DOI: https://doi.org/10.1109/SRDS.2010.20
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Recent paradigm shifts in distributed computing, such as the advent of cloud computing, pose new challenges to the analysis of distributed executions. One important new characteristic is that the management staff of computing platforms and the developers of applications are separated by corporate boundaries. The net result is that once applications go wrong, the most readily available debugging aids for developers are the visible output of the application and any log files collected during their execution. In this paper, we propose the concept of task graphs as a foundation to represent distributed executions, and present a low-overhead algorithm to infer task graphs from event log files. Intuitively, a task represents an autonomous segment of computation inside a thread. Edges between tasks represent their interactions and preserve programmers' notion of data and control flows. Our technique leverages existing logging support where available, or otherwise augments it with aspect-based instrumentation to collect events of a set of predefined types. We show how task graphs can improve the precision of anomaly detection in a request-oriented analysis of field software and help programmers understand the running of the Hadoop Distributed File System (HDFS).
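The inference step can be sketched with a toy event model. The event schema and segmentation rule below are assumptions for illustration, not the paper's algorithm: each log event is a `(thread, kind, message_id)` tuple, a SEND ends the current task segment in its thread, and an edge links the sending task to the task that receives the same message id.

```python
# Minimal sketch of log-based task-graph inference under assumed event
# types: tasks are per-thread segments delimited by SEND events, and edges
# connect a message's sending task to its receiving task.

def infer_task_graph(events):
    current_task = {}   # thread -> id of its current task segment
    next_id = 0
    sends = {}          # message_id -> task that sent it
    edges = set()

    def task(thread):
        nonlocal next_id
        if thread not in current_task:
            current_task[thread] = next_id
            next_id += 1
        return current_task[thread]

    for thread, kind, msg in events:
        t = task(thread)
        if kind == "SEND":
            sends[msg] = t
            del current_task[thread]   # next event opens a new task segment
        elif kind == "RECV" and msg in sends:
            edges.add((sends[msg], t))
    return edges

# Thread T1's task 0 sends message m1; thread T2's task 1 receives it,
# yielding the inter-task edge (0, 1).
edges = infer_task_graph([
    ("T1", "SEND", "m1"),
    ("T2", "RECV", "m1"),
])
assert edges == {(0, 1)}
```

The resulting edge set is exactly the cross-thread data/control-flow structure that makes request-oriented anomaly detection possible: requests with an unusual task-graph shape stand out.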
Title: Monitoring Local Progress with Watchdog Timers Deduced from Global Properties
Authors: R. Barbosa
DOI: https://doi.org/10.1109/SRDS.2010.23
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Distributed systems are used in numerous applications where failures can be costly. Due to concerns that some of the nodes may become faulty, critical services are usually replicated across several nodes, which execute distributed algorithms to ensure correct service in spite of failures. To prevent replica exhaustion, it is fundamental to detect errors and trigger appropriate recovery actions. In particular, it is important to detect situations in which nodes cease to execute the intended algorithm, e.g., when a replica is compromised by an attacker or when a hardware fault causes the node to behave erratically. This paper proposes a method for monitoring the local execution of nodes using watchdog timers. The approach consists in deducing, from the global system properties, local states that must be visited periodically by nodes that execute the intended algorithm correctly. When a node fails to trigger a watchdog before the time limit, an appropriate response can be initiated. The approach is applied to a well-known Byzantine consensus algorithm. The algorithm is modeled in the Promela language and the Spin model checker is used to identify local states that must be visited periodically by correct nodes. Such states are suitable for online monitoring using watchdog timers.
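The runtime half of the approach can be sketched as a watchdog that is "kicked" whenever the node visits one of the local states that model checking identified as mandatory for correct executions. This is a hedged illustration with invented names (`round_start`, the timeout value) and an explicit simulated clock, not the paper's monitor.

```python
# Sketch of online monitoring with a deduced watchdog: the timer is reset
# only when the node visits a required local state; expiry means the node
# has stopped executing the intended algorithm and recovery should start.

class Watchdog:
    def __init__(self, timeout, required_states):
        self.timeout = timeout
        self.required = required_states   # states every correct run revisits
        self.last_kick = 0.0

    def observe(self, state, now):
        """Kick the timer whenever a required local state is visited."""
        if state in self.required:
            self.last_kick = now

    def expired(self, now):
        return now - self.last_kick > self.timeout

# "round_start" stands for a state Spin showed correct nodes must revisit.
wd = Watchdog(timeout=5.0, required_states={"round_start"})
wd.observe("round_start", now=1.0)
assert not wd.expired(now=4.0)   # progress observed recently: no alarm
assert wd.expired(now=7.0)       # no required state for > 5s: recover
```

The key point the abstract makes is where `required_states` comes from: it is deduced from global correctness properties via model checking, so the monitor catches nodes that deviate from the algorithm without needing any application-specific heartbeat.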
Title: Practical Aspects in Analyzing and Sharing the Results of Experimental Evaluation
Authors: F. Brancati, A. Bondavalli
DOI: https://doi.org/10.1109/SRDS.2010.46
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Dependability evaluation techniques, such as those based on testing or on the analysis of field data on computer faults, are a fundamental process in assessing complex and critical systems. Recently, a new approach [3] has been proposed that consists in collecting the raw data produced in the experimental evaluation and storing it in a multidimensional data structure. This paper reports on work-in-progress activities covering the entire process of collecting, storing, and analyzing the experimental data in order to perform a sound experimental evaluation. This is done by describing the various steps on a running example.
Title: Invariants Based Failure Diagnosis in Distributed Computing Systems
Authors: Haifeng Chen, Guofei Jiang, K. Yoshihira, Akhilesh Saxena
DOI: https://doi.org/10.1109/SRDS.2010.26
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: This paper presents an instance-based approach to diagnosing failures in computing systems. Since a large portion of failures are recurrences of earlier ones, our method takes advantage of past experience by storing historical failures in a database and retrieving similar instances when a failure occurs. We extract system 'invariants' by modeling consistent dependencies between system attributes during operation, and construct a network graph based on the learned invariants. When a failure happens, the status of the invariants network, i.e., whether each invariant link is broken or not, provides a view of the failure's characteristics. We use a high-dimensional binary vector to store this failure evidence, and develop a novel algorithm to efficiently retrieve failure signatures from the database. Experimental results in a web-based system demonstrate the effectiveness of our method in diagnosing injected failures.
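The retrieval step can be illustrated with a naive nearest-neighbor search over binary signatures. The paper develops a more efficient algorithm for high-dimensional vectors; the brute-force Hamming-distance scan below, with invented failure names, only demonstrates the representation: each bit records whether one invariant link was broken.

```python
# Illustrative sketch of failure-signature retrieval (simplified relative
# to the paper): each stored failure is a binary vector over invariant
# links; diagnosis returns the historical instance closest in Hamming
# distance to the new failure's broken-invariant pattern.

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def nearest_failure(signature, database):
    """database: {failure_name: binary tuple}; return the closest match."""
    return min(database, key=lambda name: hamming(signature, database[name]))

# Hypothetical database over 4 invariant links (1 = link broken).
db = {
    "db_overload":  (1, 1, 0, 0),
    "disk_failure": (0, 0, 1, 1),
}

# A new failure breaking invariants 0, 1, and 2 most resembles the stored
# overload instance (distance 1 vs. distance 3).
assert nearest_failure((1, 1, 1, 0), db) == "db_overload"
```

The instance-based framing means diagnosis quality grows with the database: each newly diagnosed failure becomes a retrievable signature for the next occurrence.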
Title: CloudRank: A QoS-Driven Component Ranking Framework for Cloud Computing
Authors: Zibin Zheng, Yilei Zhang, Michael R. Lyu
DOI: https://doi.org/10.1109/SRDS.2010.29
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: The rising popularity of cloud computing makes building high-quality cloud applications a critical and urgently required research problem. Component quality ranking approaches are crucial for making an optimal component selection from a set of functionally equivalent component candidates. Moreover, quality ranking of cloud components helps application designers detect poorly performing components in complex cloud applications, which usually include a huge number of distributed components. To provide personalized cloud component ranking for different designers of cloud applications, this paper proposes a QoS-driven component ranking framework for cloud applications that takes advantage of the past component usage experiences of different component users. Our approach requires no additional invocations of the cloud components on behalf of the application designers. Extensive experimental results show that our approach outperforms the competing approaches.
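The idea of ranking without new invocations can be shown with a toy example. The actual CloudRank algorithm is more sophisticated (it is personalized per designer); the sketch below, with invented component names and latency values, only illustrates reusing other users' QoS observations to order functionally equivalent candidates.

```python
# Toy sketch of QoS-driven ranking (not the CloudRank algorithm itself):
# order candidate components by the mean response time that *past* users
# observed, so the designer never has to invoke the components directly.

def rank_components(observations):
    """observations: {component: [response times reported by past users]}.
    Return component names ordered best-first (lowest mean latency)."""
    mean = {c: sum(ts) / len(ts) for c, ts in observations.items()}
    return sorted(mean, key=mean.get)

# Hypothetical QoS records (ms) for three functionally equivalent
# storage components, contributed by earlier users.
obs = {
    "storage-A": [120.0, 140.0],   # mean 130.0
    "storage-B": [80.0, 90.0],     # mean  85.0
    "storage-C": [200.0],          # mean 200.0
}
assert rank_components(obs) == ["storage-B", "storage-A", "storage-C"]
```

Personalization, which this sketch omits, would weight the observations of users whose QoS experience correlates with the current designer's more heavily, since network location and workload make QoS user-specific.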
Title: DKSM: Subverting Virtual Machine Introspection for Fun and Profit
Authors: Sina Bahram, Xuxian Jiang, Zhi Wang, Michael C. Grace, Jinku Li, D. Srinivasan, J. Rhee, Dongyan Xu
DOI: https://doi.org/10.1109/srds.2010.39
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: Virtual machine (VM) introspection is a powerful technique for determining the specific aspects of guest VM execution from outside the VM. Unfortunately, existing introspection solutions share a common questionable assumption. This assumption is embodied in the expectation that original kernel data structures are respected by the untrusted guest and thus can be directly used to bridge the well-known semantic gap. In this paper, we assume the perspective of the attacker, and exploit this questionable assumption to subvert VM introspection. In particular, we present an attack called DKSM (Direct Kernel Structure Manipulation), and show that it can effectively fool existing VM introspection solutions into providing false information. By assuming this perspective, we hope to better understand the challenges and opportunities for the development of future reliable VM introspection solutions that are not vulnerable to the proposed attack.
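The questionable assumption can be modeled in a few lines. This is an abstract illustration of the semantic gap, not the real attack (DKSM manipulates actual kernel structures in guest memory): the introspection tool reads memory using the original structure layout, so a guest that relocates its fields, and patches its own code accordingly, runs correctly while the outside view becomes false.

```python
# Toy model of the assumption DKSM exploits: out-of-VM introspection maps
# raw memory to fields using the *original* kernel struct layout.

ORIGINAL_LAYOUT = {"pid": 0, "name": 1}   # field -> slot in the structure

def introspect(memory, layout=ORIGINAL_LAYOUT):
    """Reconstruct a guest structure from raw slots via a known layout."""
    return {field: memory[slot] for field, slot in layout.items()}

# Honest guest: fields sit in the expected slots, introspection is correct.
honest_guest = [1234, "sshd"]
assert introspect(honest_guest) == {"pid": 1234, "name": "sshd"}

# DKSM-style manipulation: the guest swaps the slots and uses the new
# layout internally, so it still runs correctly from the inside...
malicious_guest = ["sshd", 1234]
# ...but the out-of-VM view, still trusting the original layout, now
# reports plausible-looking yet false information.
assert introspect(malicious_guest) == {"pid": "sshd", "name": 1234}
```

The takeaway matches the paper's conclusion: introspection that derives semantics purely from static structure definitions cannot be trusted against a guest that is free to rewrite those definitions.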
Title: Attack Injection to Support the Evaluation of Ad Hoc Networks
Authors: Jesus Friginal, D. Andrés, Juan-Carlos Ruiz-Garcia, P. Gil
DOI: https://doi.org/10.1109/SRDS.2010.11
Published in: 2010 29th IEEE Symposium on Reliable Distributed Systems, 2010-10-31

Abstract: The increasing emergence of mobile computing devices seamlessly providing wireless communication capabilities opens a wide range of new application domains for ad hoc networks. However, the sensitivity of ad hoc routing protocols to malicious faults (attacks) limits their confident use in commercial products in practice. This requires not only practical means to enforce the security of these protocols, but also approaches to evaluate their behaviour in the presence of attacks. Our previous contribution to the evaluation of ad hoc networks focused on the definition of an approach for injecting grey hole attacks in real (non-simulated) ad hoc networks. This paper relies on this methodology to evaluate (i) three different implementations of a proactive ad hoc routing protocol, OLSR, and (ii) two ad hoc routing protocols of a different nature, one proactive (OLSR) and one reactive (AODV). The reported results have proven useful in extending the applicability of attack injection methodologies beyond the mere assessment of the robustness of ad hoc routing protocols.
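The injected fault model can be sketched abstractly. The drop probability and packet schema below are illustrative assumptions, not parameters from the paper: a grey hole selectively drops a fraction of data packets while still forwarding routing control traffic, which is precisely what makes it harder to detect than a black hole that drops everything.

```python
# Hypothetical sketch of a grey-hole forwarding policy for attack
# injection: control (routing) traffic passes untouched, while data
# packets are dropped with a configurable probability.

import random

def grey_hole_forward(packet, drop_prob, rng):
    """Return True if the packet is forwarded, False if silently dropped."""
    if packet["type"] == "control":
        return True                      # keep participating in routing
    return rng.random() >= drop_prob     # selectively drop data traffic

rng = random.Random(42)                  # fixed seed for reproducible runs
data = [{"type": "data"} for _ in range(1000)]
delivered = sum(grey_hole_forward(p, drop_prob=0.5, rng=rng) for p in data)

# Control traffic always survives; roughly half the data gets through.
assert grey_hole_forward({"type": "control"}, drop_prob=0.5, rng=rng)
assert 400 < delivered < 600
```

Injecting such a policy at a chosen node, and measuring delivery ratio and route stability under it, is the kind of controlled experiment that lets the behaviour of different OLSR implementations, or of OLSR versus AODV, be compared under attack.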