Do we have the time for IRM?: service denial attacks and SDN-based defences
Ryan Shah, Shishir Nagaraja
DOI: 10.1145/3288599.3295582
Distributed sensor networks such as IoT deployments generate large quantities of measurement data. Often, the analytics that runs on this data is offered as a web service that can be purchased for a fee. A major concern in this analytics ecosystem is ensuring the security of the data. Companies often offer Information Rights Management (IRM) as a solution to the problem of managing the usage and access rights of data that transits administrative boundaries. IRM enables individuals and corporations to create restricted IoT data whose flow from organisation to individual can be controlled: copying and forwarding can be disabled, and timed expiry applied. We describe our investigations into this functionality and uncover a weak spot in the architecture: its dependence upon the accurate global availability of time. We present an amplified denial-of-service attack that targets time synchronisation and could prevent all the users in an organisation from reading any restricted data until their software has been re-installed and re-configured. We argue that IRM systems built on current technology will be too fragile for businesses to risk widespread use. We also present defences that leverage the capabilities of Software-Defined Networks to apply a simple filter-based approach to detect and isolate attack traffic.
Building an emulation environment for cyber security analyses of complex networked systems
F. D. Tanasache, Mara Sorella, Silvia Bonomi, Raniero Rapone, Davide Meacci
DOI: 10.1145/3288599.3288618
Computer networks are undergoing phenomenal growth, driven by the rapidly increasing number of nodes constituting them. At the same time, the number of security threats on Internet and intranet networks is constantly growing, and the testing and experimentation of cyber defense solutions requires separate test environments that emulate the complexity of a real system as closely as possible. Such environments support the deployment and monitoring of complex mission-driven network scenarios, thus enabling the study of cyber defense strategies under real and controllable traffic and attack scenarios. In this paper, we propose a methodology that combines network and security assessment techniques with cloud technologies to build an emulation environment whose degree of affinity to an actual reference network or planned system can be adjusted. As a byproduct, starting from a specific case study, we collected a dataset consisting of complete network traces comprising benign and malicious traffic, which is feature-rich and publicly available.
On the hardness of the strongly dependent decision problem
M. Biely, Peter Robinson
DOI: 10.1145/3288599.3288614
We present necessary and sufficient conditions for solving the strongly dependent decision (SDD) problem in various distributed systems. Our main contribution is a novel characterization of the SDD problem based on point-set topology. For partially synchronous systems, we show that any algorithm that solves the SDD problem induces a set of executions that is closed with respect to the point-set topology. We also show that the SDD problem is not solvable in the asynchronous system augmented with any arbitrarily strong failure detectors.
Entitling concurrency to smart contracts using optimistic transactional memory
Parwat Singh Anjana, S. Kumari, Sathya Peri, Sachin Rathor, Archit Somani
DOI: 10.1145/3288599.3299723
It is commonly believed that blockchain is a revolutionary technology for doing business on the Internet. Blockchain is a decentralized, distributed database or ledger of records. It ensures that the records are tamper-proof but publicly readable. Blockchain platforms such as Ethereum [3] and several others execute complex transactions in blocks through user-defined scripts known as smart contracts. Normally, a block of the chain consists of multiple smart-contract transactions, which are added by a miner. To append a correct block to the blockchain, miners execute these smart-contract transactions sequentially. Later, the validators serially re-execute the smart-contract transactions of the block. If the validators agree with the final state of the block as recorded by the miner, then the block is said to be valid and is added to the blockchain using a consensus protocol.
Session guarantees with raft and hybrid logical clocks
Mohammad Roohitavaf, Jung-Sang Ahn, Woon-Hak Kang, Kun Ren, Gene Zhang, S. Ben-Romdhane, S. Kulkarni
DOI: 10.1145/3288599.3288619
Eventual consistency is a popular consistency model for geo-replicated data stores. Although eventual consistency provides high performance and availability, it can cause anomalies that make programming complex for application developers. Session guarantees can remove some of these anomalies while causing much lower overhead compared with stronger consistency models. In this paper, we present a protocol that provides session guarantees for NuKV, a key-value store developed for services with very high availability and performance requirements at eBay. NuKV relies on the Raft protocol for replication inside datacenters, and uses eventual consistency for replication among datacenters. We provide modified versions of conventional session guarantees to avoid the problem of slowdown cascades in systems with large numbers of partitions. We also use Hybrid Logical Clocks to eliminate the need for delaying write operations to satisfy session guarantees. Our experiments show that our protocol provides session guarantees with negligible overhead compared with eventual consistency.
Benefit of self-stabilizing protocols in eventually consistent key-value stores: a case study
Duong N. Nguyen, S. Kulkarni, A. Datta
DOI: 10.1145/3288599.3288609
In this paper, we focus on the implementation of distributed programs using a key-value store, where the state of the nodes is stored in a replicated and partitioned data store to improve performance and reliability. Applications of such algorithms occur in weather monitoring, social media, etc. We argue that these applications should be designed to be stabilizing, so that they recover from an arbitrary state to a legitimate state. Specifically, if we use a stabilizing algorithm, then we can work with more efficient implementations that provide eventual consistency rather than sequential consistency, where the data store behaves as if there is just one copy of the data. We find that, although the use of eventual consistency results in consistency violation faults (cvf), where some node executes its action incorrectly because it relies on an older version of the data, the overall performance of the resulting protocol is better. We use experimental analysis to evaluate the expected improvement. We also identify other variations of stabilization that can provide additional guarantees in the presence of eventual consistency. Finally, we note that if the underlying algorithm is not stabilizing, even a single cvf may cause the algorithm to fail completely, thereby making it impossible to benefit from this approach.
Efficient dispersion of mobile robots on graphs
A. Kshemkalyani, Faizan Ali
DOI: 10.1145/3288599.3288610
The dispersion problem on graphs requires k robots placed arbitrarily at the n nodes of an anonymous graph, where k ≤ n, to coordinate with each other to reach a final configuration in which each robot is at a distinct node of the graph. The dispersion problem is important due to its relationship to graph exploration by mobile robots, scattering on a graph, and load balancing on a graph. In addition, an intrinsic application of dispersion has been shown to be the relocation of self-driven electric cars (robots) to recharge stations (nodes). We propose five algorithms to solve dispersion on graphs. The first three algorithms require O(k log Δ) bits at each robot and O(m) steps running time, where m is the number of edges and Δ is the degree of the graph. The algorithms differ in whether they address the synchronous or the asynchronous system model, and in what, where, and how data structures are maintained. The fourth algorithm, for the asynchronous model, has a space usage of O(D log Δ) bits at each robot and uses O(ΔD) steps, where D is the graph diameter. The fifth algorithm, for the asynchronous model, has a space usage of O(max(log k, log Δ)) bits at each robot and uses O((m - n)k) steps.
Distributed symmetry-breaking with improved vertex-averaged complexity
Leonid Barenboim, Y. Tzur
DOI: 10.1145/3288599.3288601
We study the distributed message-passing model in which a communication network is represented by a graph G = (V, E). Usually, the measure of complexity considered in this model is the worst-case complexity, which is the largest number of rounds performed by a vertex v ∈ V. While this is often a reasonable measure, on some occasions it does not express the actual performance of the algorithm sufficiently well. For example, an execution in which one processor performs r rounds, and all the rest perform significantly fewer rounds than r, has the same running time as an execution in which all processors perform the same number of rounds r. On the other hand, the latter execution is less efficient in several respects, such as energy efficiency, task execution efficiency, local-neighborhood efficiency and simulation efficiency. Consequently, a more appropriate measure is required in these cases. Recently, the vertex-averaged complexity was proposed by [13]. In this measure, the running time is the worst-case average of rounds over the number of vertices. Feuilloley [13] showed that leader-election admits an algorithm with significantly better vertex-averaged complexity than worst-case complexity. On the other hand, for O(1)-coloring of rings, the worst-case and vertex-averaged complexities are the same. This complexity is Θ(log* n) [13]. It remained open whether the vertex-averaged complexity of symmetry-breaking in general graphs can be better than the worst-case complexity. In this paper we devise symmetry-breaking algorithms with significantly improved vertex-averaged complexity for general graphs as well as for specific graph families. Some of our algorithms have significantly better vertex-averaged complexity than the best possible worst-case complexity. In particular, for general graphs, we devise an O(a)-forests-decomposition algorithm with a vertex-averaged complexity of O(1) rounds, where the arboricity a is the minimum number of forests into which the graph's edges can be partitioned. In the worst case, this requires Ω(log n) rounds [10]. In addition, for graphs with constant arboricity a, we compute (Δ + 1)-vertex-coloring, Maximal Independent Set, Maximal Matching and (2Δ - 1)-edge-coloring, deterministically, with O(log* n) vertex-averaged complexity. The best known deterministic algorithms for (Δ + 1)-coloring have time complexity [MATH HERE] in the worst case [3, 14], and the best known Maximal Independent Set and Maximal Matching algorithms on these graphs have worst-case complexity at least [MATH HERE] [10, 18]. In addition to deterministic algorithms, we devise randomized algorithms in which the vertex-averaged bounds hold with high probability. In particular, we show that (Δ + 1)-coloring of general graphs admits O(1) vertex-averaged complexity, with high probability. This is in contrast to the worst-case complexity, which is Ω(log* n) even on rings [19].
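The measure driving these results can be written out explicitly. The LaTeX below is our restatement of the vertex-averaged complexity as described in the abstract (the worst case, over inputs and executions, of the average number of rounds per vertex), shown next to the usual worst-case measure for comparison.

```latex
% r_v(A, G, x): the number of rounds vertex v performs when algorithm A runs on
% the graph G = (V, E) under input/execution x.
\[
  \mathrm{VA}(A, G) \;=\; \max_{x} \, \frac{1}{|V|} \sum_{v \in V} r_v(A, G, x),
  \qquad
  \mathrm{WC}(A, G) \;=\; \max_{x} \, \max_{v \in V} r_v(A, G, x).
\]
```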