Five properties of an architecture for secure access to web services are defined and two existing architectures are evaluated according to these criteria. References to these criteria in the literature and the evaluation of the security architectures are tabulated in the conclusion. Policy-sufficiency is defined as the requirement that any meaningful statement can be expressed in the policy definitions of the architecture. Protocol neutrality is the requirement that a protocol exchange which is logically equivalent to a valid protocol sequence is also valid. Predicate-boundedness is the constraint that a fixed, finite set of predicates (or language constructs) is sufficient for security policy definitions, i.e. the language does not need to be incrementally extended indefinitely. Protocol-closure requires that security protocols can be combined arbitrarily to make new protocols. Finally, processing complexity constrains algorithms for evaluating security rules to be of satisfactorily low complexity. No existing security architecture receives a tick for all five of these criteria. The RW architecture is more successful in this regard than the simpler XACML architecture.
R. Addie and A. Colman, "Five Criteria for Web-Services Security Architecture," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.100
The lack of a well-defined defense perimeter in MANETs prevents the use of traditional firewalls and requires security to be implemented in a distributed manner. We recently introduced a novel deny-by-default distributed security policy enforcement architecture for MANETs by harnessing and extending the concept of network capabilities. The deny-by-default principle allows compromised nodes to access only authorized services, limiting their ability to disrupt or even interfere with end-to-end connectivity and nodes beyond their local communication radius. The enforcement of policies is done hop-by-hop, in a distributed manner. In this paper, we present the implementation of this architecture, called DIPLOMA, on Linux. Our implementation works at the network layer and does not require any changes to existing applications. We identify the bottlenecks of the original architecture and propose improvements, including a signature optimization, so that it works well in practice. We present the results of evaluating the architecture in Orbit, a realistic MANET testbed. The results show that the architecture incurs minimal overhead in throughput, latency, and jitter. We also show that the system protects network bandwidth and the end hosts in the presence of attackers. To that end, we identify ways of creating multi-hop topologies in indoor environments so that a malicious node cannot interfere with every other node. We also show that existing applications are not impacted by the new architecture, achieving good performance.
M. Alicherry and A. Keromytis, "DIPLOMA: Distributed Policy Enforcement Architecture for MANETs," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.27
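The hop-by-hop, deny-by-default enforcement described above can be sketched minimally as follows. The keying is a deliberate simplification: a single shared HMAC key and a toy capability string are assumptions for illustration, not the paper's actual capability format.

```python
import hashlib
import hmac

NETWORK_KEY = b"shared-group-key"  # hypothetical shared key; real capabilities use richer crypto

def issue_capability(src, dst, service):
    # A capability authorizes one (source, destination, service) flow
    msg = f"{src}>{dst}:{service}".encode()
    tag = hmac.new(NETWORK_KEY, msg, hashlib.sha256).digest()
    return msg, tag

def forward(msg, tag):
    # Deny-by-default: an intermediate node relays a packet only if it
    # carries a valid capability; everything else is dropped
    expected = hmac.new(NETWORK_KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

msg, tag = issue_capability("nodeA", "nodeB", "http")
print(forward(msg, tag))         # True: authorized flow is relayed
print(forward(b"spoofed", tag))  # False: unauthorized traffic is dropped
```

Because every hop repeats the check, a compromised node cannot reach nodes beyond its radio range without a valid capability.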
Distributed Denial of Service (DDoS) attacks pose an increasing threat to the current Internet. The detection of such attacks plays an important role in maintaining the security of networks. In this paper, we propose a novel adaptive clustering method combined with feature ranking for DDoS attack detection. First, based on an analysis of network traffic, preliminary variables are selected. Second, the Modified Global K-means algorithm (MGKM) is used as the basic incremental clustering algorithm to identify the cluster structure of the target data. Third, the linear correlation coefficient is used for feature ranking. Lastly, the feature ranking result is used to inform and recalculate the clusters. This adaptive process can make worthwhile adjustments to the working feature vector according to different patterns of DDoS attacks, and can improve the quality of the clusters and the effectiveness of the clustering algorithm. The experimental results demonstrate that our method is effective and adaptive in detecting the separate phases of DDoS attacks.
Lifang Zi, J. Yearwood, and Xin-Wen Wu, "Adaptive Clustering with Feature Ranking for DDoS Attacks Detection," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.70
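The cluster-then-rank loop in the abstract above can be illustrated with a toy sketch. Plain k-means stands in for MGKM here, and the two features and synthetic data are invented for illustration only:

```python
import random
import statistics

def kmeans(points, k=2, iters=20, seed=0):
    """Plain k-means, a simplified stand-in for the paper's MGKM algorithm."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # assign each point to its nearest centre
        labels = [min(range(k),
                      key=lambda c: sum((p - q) ** 2 for p, q in zip(pt, centers[c])))
                  for pt in points]
        # recompute centres as the mean of their members
        for c in range(k):
            members = [pt for pt, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(statistics.fmean(d) for d in zip(*members))
    return labels

def pearson(xs, ys):
    # linear correlation coefficient, used here to rank features
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) * sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den if den else 0.0

# synthetic "traffic": feature 0 separates attack from normal, feature 1 is noise
rng = random.Random(1)
data = [(rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(50)] + \
       [(rng.gauss(8, 1), rng.gauss(0, 1)) for _ in range(50)]
labels = kmeans(data, k=2)
# rank features by |correlation| with the cluster assignment;
# the ranking would then feed back into the working feature vector
ranking = sorted(range(2), key=lambda f: -abs(pearson([p[f] for p in data], labels)))
print(ranking[0])  # the discriminative feature should rank first
```

The adaptive step in the paper repeats this cycle, recalculating clusters with the re-weighted feature vector.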
The use of approaches based on artificial intelligence, especially agents and multi-agent systems, makes it possible to clearly distinguish implementation aspects from the knowledge that underpins the system. Concerning implementation, there are traditional frameworks and languages such as AUML, Gaia, and JASON. However, these technologies are not well suited to acquiring and structuring knowledge. This paper presents a Petri net model developed to specify knowledge in agents and multi-agent systems independently of frameworks and knowledge-representation formalisms. The model makes it possible to map knowledge acquired and structured in any formalism and framework used to implement a computational system. In addition, Petri net tools make it possible to analyze and validate the elicited knowledge with respect to aspects such as redundancy, deadlocks, and conditions associated with agent tasks. The main contribution of this approach is to shift the project focus to the knowledge level of the system.
E. Gonçalves, "An Approach to Specify Knowledge in Multi-agent Systems Using Petri Nets," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.58
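The token-game semantics that such Petri net analysis relies on can be sketched in a few lines. The places, transition, and task names below are invented for illustration, not taken from the paper:

```python
# Minimal Petri net: places hold token counts, transitions have input/output arcs.
# A toy model of a single agent task becoming done; illustrative only.
marking = {"ready": 1, "done": 0}
transitions = {"do_task": {"in": ["ready"], "out": ["done"]}}

def enabled(t):
    # a transition is enabled when every input place holds at least one token
    return all(marking[p] > 0 for p in transitions[t]["in"])

def fire(t):
    # firing consumes one token per input place and produces one per output place
    assert enabled(t)
    for p in transitions[t]["in"]:
        marking[p] -= 1
    for p in transitions[t]["out"]:
        marking[p] += 1

fire("do_task")
print(marking)  # {'ready': 0, 'done': 1}
```

Reachability analysis over markings like this is what lets Petri net tools detect deadlocks and redundant conditions in the specified knowledge.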
HTTPS is designed to protect a connection against eavesdropping and man-in-the-middle attacks. HTTPS is, however, often compromised and voided when users accept invalid certificates or fail to notice whether HTTPS is being used. The current HTTPS deployment relies on unsophisticated users to safeguard themselves by judging certificate legitimacy. We propose HTTPS Lock, a simple and immediate approach to enforcing HTTPS security. HTTPS Lock can be deployed to a website with a valid certificate by simply including several Javascript and HTML files, which are cached in browsers. Similar to the trust-on-first-use model used by SSH, the trusted code cached on the client side can effectively enforce the use of HTTPS and forbid users from accepting invalid certificates on any compromised network subsequently encountered. Over 72% of major web browsers are supported, and further growth is expected. In any situation where the protection is unsupported or expired, the current security standard is gracefully maintained. As desired, deployment is not hindered by the standardization and browser-vendor collaboration that other proposals require.
Adonis P. H. Fung and K. Cheung, "HTTPSLock: Enforcing HTTPS in Unmodified Browsers with Cached Javascript," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.84
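HTTPS Lock itself ships as cached JavaScript, but the SSH-style trust-on-first-use idea it builds on can be sketched language-independently. The pin store and fingerprinting below are illustrative assumptions, not the paper's mechanism:

```python
import hashlib

pins = {}  # host -> certificate fingerprint remembered from first contact

def check(host, cert_der):
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fp       # trust on first use, like an SSH known_hosts entry
        return True
    return pins[host] == fp   # later visits must present the pinned certificate

print(check("example.com", b"cert-A"))  # True: first contact, pin it
print(check("example.com", b"cert-A"))  # True: same certificate as pinned
print(check("example.com", b"cert-B"))  # False: mismatch, possible MITM
```

The key property, as in the paper, is that the first (trusted) visit establishes state that later defeats invalid certificates on compromised networks.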
Distributed Denial of Service (DDoS) attacks are a persistent, current, and very real threat to networks. Expanding upon a flexible distributed framework for network remediation utilising multiple strategies, we examine a novel fusion of methods to maximise throughput from legitimate clients and minimise the impact from attackers. The basic approach is to build up a whitelist of likely legitimate clients by observing outgoing traffic, presenting a challenge through proof-of-work, and providing flow cookies. Traffic that does not match the expected profile is likely attack traffic, and can be heavily filtered during attack conditions. After incrementally developing this approach, we explore its positive and negative impacts upon the network and analyse potential countermeasures.
Steven Simpson, A. Lindsay, and D. Hutchison, "Identifying Legitimate Clients under Distributed Denial-of-Service Attacks," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.77
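Proof-of-work challenges of the kind mentioned above typically follow a hashcash-style pattern; the difficulty value and message layout below are assumptions for illustration:

```python
import hashlib
import os

DIFFICULTY = 12  # leading zero bits required; illustrative, not from the paper

def issue_challenge():
    # server sends a fresh random challenge to the unverified client
    return os.urandom(16)

def solve(challenge, difficulty=DIFFICULTY):
    # client burns CPU searching for a nonce whose hash has enough leading zero bits
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty) == 0:
            return nonce
        nonce += 1

def verify(challenge, nonce, difficulty=DIFFICULTY):
    # server-side check is a single hash, so verification stays cheap under load
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty) == 0

ch = issue_challenge()
n = solve(ch)
print(verify(ch, n))  # True
```

The asymmetry (expensive to solve, cheap to verify) is what lets a defending network rate-limit clients that will not invest work, while whitelisting those that do.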
In this paper we aim to enable security within SIP enterprise domains by providing monitoring capabilities at three levels: the network traffic, the server logs, and the billing records. We propose an anomaly detection approach based on appropriate feature extraction and one-class Support Vector Machines (SVM). We propose methods for anomaly/attack type classification and attack source identification. Our approach is validated through experiments on a controlled test-bed using a customized normal traffic generation model and synthesized attacks. The results show promising performance in terms of accuracy, efficiency and usability.
M. Nassar, R. State, and O. Festor, "A Framework for Monitoring SIP Enterprise Networks," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.79
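A one-class SVM detector of the kind the paper uses can be sketched with scikit-learn (our choice of library; the paper does not name one). The two-dimensional feature vectors below are invented stand-ins for the real features extracted from SIP traffic, server logs, and billing records:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
# hypothetical features (e.g. per-minute call rate, error rate) — not the paper's
normal = rng.normal(0.0, 1.0, size=(200, 2))   # baseline traffic only
attack = rng.normal(6.0, 1.0, size=(5, 2))     # synthesized anomalies

# train on normal traffic alone; nu bounds the fraction of training outliers
clf = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(normal)
print(clf.predict(attack))  # -1 marks anomalies, +1 marks inliers
```

Training only on normal behaviour is the point of the one-class formulation: no labelled attack data is needed, which suits monitoring a live enterprise domain.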
RFID has been widely used in today's commercial and supply chain industry, due to the significant advantages it offers and its relatively low production cost. However, this ubiquitous technology has inherent problems in security and privacy. This calls for the development of simple, efficient and cost-effective mechanisms against a variety of security threats. This paper proposes a two-step authentication protocol based on the randomized hash-lock scheme proposed by S. Weis in 2003. By introducing additional measures during the authentication process, the new protocol significantly enhances the security of RFID and protects passive tags from almost all major attacks, including tag cloning, replay, full disclosure, tracking, and eavesdropping. Furthermore, no significant changes to the tags are required to implement this protocol, and the low complexity of the randomized hash-lock algorithm is retained.
Kaleb Lee, "A Two-Step Mutual Authentication Protocol Based on Randomized Hash-Lock for Small RFID Networks," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.30
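The underlying randomized hash-lock exchange (Weis, 2003) can be sketched as follows. The paper's additional two-step measures are not reproduced here, and the tag IDs are toy values:

```python
import hashlib
import os

# back-end database of tag IDs shared with the reader (toy values)
KNOWN_IDS = [b"tag-0001", b"tag-0002", b"tag-0003"]

def tag_respond(tag_id):
    # the tag picks a fresh random r and answers with (r, H(ID || r)),
    # so each response looks different and cannot be used for tracking
    r = os.urandom(8)
    return r, hashlib.sha256(tag_id + r).digest()

def reader_identify(r, digest):
    # the reader brute-forces its ID list to find the matching tag;
    # this linear search is why the scheme suits *small* RFID networks
    for tid in KNOWN_IDS:
        if hashlib.sha256(tid + r).digest() == digest:
            return tid
    return None

r, d = tag_respond(b"tag-0002")
print(reader_identify(r, d))  # b'tag-0002'
```

An eavesdropper who captures (r, H(ID || r)) learns nothing about the ID without the back-end list, and replaying it is detectable once the reader challenges with fresh randomness.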
Intrusion Detection Systems (IDS) have been widely deployed in practice for detecting malicious behavior on network communication and hosts. False-positive alerts are a well-known problem for most IDS approaches, and correlation and clustering of alerts are the usual way to address it. To meet practical requirements, this process needs to finish as soon as possible, which is challenging because the number of alerts produced in large-scale deployments of distributed IDS is very high. We identify data storage and processing algorithms as the most important factors influencing the performance of clustering and correlation. We propose and implement memory-supported algorithms and a column-oriented database for correlation and clustering in an extensible IDS correlation platform. The combination of the column-oriented database, an In-Memory Alert Storage, and memory-based index tables leads to significant performance improvements. Different types of correlation modules can be integrated and compared on this platform. A plugin concept for Receivers provides flexible integration of various sensors and additional IDS management systems. The platform can be distributed over multiple processing units to share memory and processing power. A standardized interface is designed to provide a unified view of result reports for end users. The efficiency of the proposed platform is tested in practical experiments with several alert storage approaches, different simple algorithms, as well as local and distributed deployment.
S. Roschke, Feng Cheng, and C. Meinel, "A Flexible and Efficient Alert Correlation Platform for Distributed IDS," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.26
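The basic idea of collapsing related alerts into clusters can be sketched with an in-memory store. The alert attributes and clustering key below are illustrative assumptions, far simpler than the platform's correlation modules:

```python
from collections import defaultdict

# toy in-memory alert store — a stand-in for the platform's In-Memory Alert
# Storage; real alerts carry many more attributes (IDMEF-style fields, etc.)
alerts = [
    {"sig": "SQLi", "src": "10.0.0.5", "ts": 1},
    {"sig": "SQLi", "src": "10.0.0.5", "ts": 2},  # repeat of the same event
    {"sig": "Scan", "src": "10.0.0.9", "ts": 2},
]

# cluster by (signature, source): repeated alerts for one attack collapse
# into a single cluster, cutting the false-positive noise an analyst sees
clusters = defaultdict(list)
for a in alerts:
    clusters[(a["sig"], a["src"])].append(a)

print(len(clusters))  # 2
```

Keeping the index (the dictionary here) in memory rather than on disk is the essence of the paper's performance argument; the column-oriented database plays the same role at scale.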
In this research project, the enablers of implementing mobile KMS in Australian regional healthcare will be investigated, and a validated framework and guidelines to assist healthcare providers in implementing mobile KMS will be proposed using both qualitative and quantitative approaches. The outcomes of this study are expected to improve understanding of the enabling factors in implementing mobile KMS in Australian healthcare, as well as to provide better guidelines for this process.
Heng-Sheng Tsai and R. Gururajan, "The Enablers and Implementation Model for Mobile KMS in Australian Healthcare," in 2010 Fourth International Conference on Network and System Security, Sep. 2010. doi:10.1109/NSS.2010.22