Yee Wan Wong, K. Seng, L. Ang, Wan Yong Khor, Fui Liau
In this paper, a new multimodal biometric recognition system based on feature fusion is proposed to increase the robustness and resistance to circumvention of conventional multimodal recognition systems. The feature sets produced by the visual and audio feature extraction subsystems are fused and classified by an RBF neural network. In addition, 2DPCA is proposed to work in conjunction with LDA to further increase the recognition performance of the visual recognition subsystem. The experimental results show that the proposed system achieves a higher recognition rate than the conventional multimodal recognition system. We also show that 2DPCA+LDA achieves a higher recognition rate than PCA, PCA+LDA and 2DPCA.
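The 2DPCA stage described above can be illustrated with a minimal sketch (our own simplification, not the authors' implementation; the function name and the choice of `k` are ours): the image scatter matrix is built directly from the 2D images, without vectorising them, and each image is projected onto the leading eigenvectors before the flattened features are passed on to LDA.

```python
import numpy as np

def two_dpca_project(images, k):
    """2DPCA sketch: project each image onto the top-k eigenvectors of
    the image scatter matrix, keeping the 2D structure (no flattening)."""
    mean = np.mean(images, axis=0)
    # image scatter matrix G (d x d), accumulated over centred images
    G = sum((A - mean).T @ (A - mean) for A in images) / len(images)
    eigvals, eigvecs = np.linalg.eigh(G)   # eigenvalues in ascending order
    X = eigvecs[:, -k:]                    # top-k principal axes
    return [A @ X for A in images]         # each result is h x k
```

Each projected image (h x k instead of h x w) would then be flattened and fed to the LDA step.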
{"title":"Audio-Visual Recognition System with Intra-Modal Fusion","authors":"Yee Wan Wong, K. Seng, L. Ang, Wan Yong Khor, Fui Liau","doi":"10.1109/CIS.2007.196","DOIUrl":"https://doi.org/10.1109/CIS.2007.196","url":null,"abstract":"In this paper, a new multimodal biometric recognition system based on feature fusion is proposed to increase the robustness and circumvention of conventional multimodal recognition system. The feature sets originating from the output of the visual and audio feature extraction systems are fused and being classified by RBF neural network. Other than that, 2DPCA is proposed to work in conjunction with LDA to further increase the recognition performance of the visual recognition system. The experimental result shows that the proposed system achieves a higher recognition rate as compared to the conventional multimodal recognition system. Besides, we also show that the 2DPCA+LDA achieves a higher recognition rate as compared with PCA, PCA+LDA and 2DPCA.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"90 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121106563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Safaa O. Al-Mamory, Hongli Zhang. School of Computer Science, Harbin Institute of Technology, Harbin, China. Safaa_vb@yahoo.com, zhl@pact518.hit.edu.cn. Abstract: Intrusion alert correlation techniques group alerts into meaningful clusters or attack scenarios so that they are easier for human analysts to understand. These correlation techniques have different strengths and limitations; however, all of them depend heavily on the underlying network intrusion detection systems (NIDSs) and perform poorly when the NIDSs miss critical attacks. In this paper, a system is proposed that represents a set of alerts as subattacks, then correlates these subattacks and generates abstracted correlation graphs (CGs) that reflect attack scenarios. It also represents attack scenarios by classes of alerts instead of the alerts themselves, reducing the number of rules required and enabling detection of new variations of attacks. The experiments were conducted using Snort as the NIDS on different datasets containing multistep attacks. The resulting CGs show that our method can correlate related alerts, uncover attack strategies, and detect new variations of attacks.
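The correlation step can be sketched with a toy example (our own simplification, not the paper's rule language): each alert class carries prerequisite and consequence sets, and an edge is drawn between two alerts when an earlier alert's consequences satisfy a later alert's prerequisites.

```python
def build_correlation_graph(alerts, model):
    """alerts: list of (timestamp, alert_class); model maps an alert
    class to a (prerequisites, consequences) pair of sets.  An edge
    i -> j is added when some consequence of alert i's class satisfies
    a prerequisite of alert j's class and alert i precedes alert j."""
    edges = []
    for i, (ti, ci) in enumerate(alerts):
        for j, (tj, cj) in enumerate(alerts):
            if ti < tj and model[ci][1] & model[cj][0]:
                edges.append((i, j))
    return edges
```

For a scan/exploit/DDoS sequence this yields the chain scan -> exploit -> ddos, i.e. an abstracted attack scenario over alert classes rather than individual alerts.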
{"title":"Scenario Discovery Using Abstracted Correlation Graph","authors":"S. Al-Mamory, Hongli Zhang","doi":"10.1109/CIS.2007.21","DOIUrl":"https://doi.org/10.1109/CIS.2007.21","url":null,"abstract":"Safaa O. Al-Mamory Hong Li Zhang School of Computer Science, School of Computer Science, Harbin Institute of technology, Harbin Institute of technology, Harbin, China Harbin, China Safaa_vb@yahoo.com zhl@pact518.hit.edu.cn Abstract Intrusion alert correlation techniques correlate alerts into meaningful groups or attack scenarios for the ease to understand by human analysts. These correlation techniques have different strengths and limitations. However, all of them depend heavily on the underlying network intrusion detection systems (NIDSs) and perform poorly when the NIDSs miss critical attacks. In this paper, a system was proposed to represents a set of alerts as subattacks. Then correlates these subattacks and generates abstracted correlation graphs (CGs) which reflect attack scenarios. It also represents attack scenarios by classes of alerts instead of alerts themselves to reduce the rules required and to detect new variations of attacks. The experiments were conducted using Snort as NIDS with different datasets which contain multistep attacks. 
The resulted CGs imply that our method can correlate related alerts, uncover the attack strategies, and can detect new variations of attacks.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127251164","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper is concerned with finding refined bounds on the value of the fuzzy solution to a fuzzy programming problem. We first present the definitions of the sum of pairs expected value (SPEV), the expected value of the reference scenario (EVRS) and the expectation of pairs expected value (EPEV), and obtain the value of the fuzzy solution (VFS), defined as the difference between the recourse problem solution and the expected value of the reference solution. In addition, several numerical examples are given to illustrate the definitions. Finally, the properties of these concepts are studied, resulting in refined bounds on the value of the fuzzy solution.
{"title":"Bounds on the Value of Fuzzy Solution to Fuzzy Programming Problem","authors":"Mingfa Zheng, Yian-Kui Liu","doi":"10.1109/CIS.2007.96","DOIUrl":"https://doi.org/10.1109/CIS.2007.96","url":null,"abstract":"The paper is concerned with finding the refined bounds on the value of fuzzy solution to fuzzy programming prob- lem. In this paper we first present the definitions which are the sum of pairs expected value (SPEV), the expected value of the reference scenario (EVRS)and the expectation of pairs expected value (EPEV), and obtain the value of fuzzy solution (VFS) defined by difference between the re- course problem solution and the expected value of reference solution. In addition, several numerical examples are also given in order to explain the definitions specifically. Finally, the properties concerning the concepts are studied, which result in refined bounds on the value of fuzzy solution.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126737599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the vision of pervasive computing, environments offer a great diversity of services and host many nomadic users walking through them. Traditional proof-making mechanisms of access control are no longer appropriate in this new context, in which people expect to access information in a more flexible and non-intrusive way. Instead of asking people for certificates, this paper advocates a direct certifying mechanism, in which delegation relationships are verified directly by asking the delegator. A trust community is maintained by dynamically discovering potential credential chains. Existing credential chain discovery methods are not scalable enough, because they either need to collect all credentials in the system or must refer to all potential users. In our approach, each user keeps a trust list to reduce the fan-out of the search. Simulation shows that performance is greatly improved.
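How a per-user trust list prunes the chain search can be sketched as follows (a hypothetical illustration under our own assumptions; the paper's actual discovery protocol and data structures are not reproduced here): a breadth-first search over delegation edges that only follows neighbours the current user already trusts.

```python
from collections import deque

def find_chain(delegations, trust_list, source, target):
    """Search for a delegation chain from source to target.
    delegations: user -> list of users they delegate to.
    trust_list:  user -> set of users they trust (prunes the fan-out)."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        u = path[-1]
        if u == target:
            return path
        # ask only the neighbours the current user already trusts
        for v in delegations.get(u, []):
            if v in trust_list.get(u, set()) and v not in seen:
                seen.add(v)
                queue.append(path + [v])
    return None
```

Without the trust-list filter every neighbour would be queried at each step; with it, the branching factor of the search drops to the size of each user's trust list.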
{"title":"A Novel Trust Community Based on Direct Certifying for Pervasive Computing Systems","authors":"Zhiyu Peng, Shanping Li, Xin Lin","doi":"10.1109/CIS.2007.93","DOIUrl":"https://doi.org/10.1109/CIS.2007.93","url":null,"abstract":"In the vision of pervasive computing, there are a great diversity of services in the environment and a great deal of nomadic users walking through them. The traditional proof-making mechanisms of access control are no longer appropriate for the new context, in which people expect to access information in a more flexible and non-intrusive way. Instead of asking people for the certificates, this paper advocates a direct certifying mechanism, in which delegation relationships are directly verified by asking the delegator. By dynamically discovering potential credential chains a trust community is maintained. Existed credential chain discovery methods are not scalable enough for that they either need collecting all credentials in the system or need referring to all the potential users. In our approach, each user keeps a trust list to reduce the fan-out of searching steps. The simulation shows that the performance is greatly improved.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"217 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126110236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2007-12-15. DOI: 10.1142/S0218001409007065
S. Soltani, A. Barforoush
Transferring current Websites to Semantic Websites through ontology population is a research area in which classification plays the main role. Existing classification algorithms, executed at a single level, are insufficient for web data. Moreover, because of the variety in the context and structure of even common-domain Websites, training data are scarce. In this paper we report three experiments: 1) using information in the domain ontology about the layers of classes to train classifiers (layered classification), improving classification accuracy by up to 10%; 2) addressing the training dataset problem by using clustering as a preprocessing step; 3) using ensembles to benefit from both methods. Besides the accuracy improvements from these experiments, we found that with an ensemble we can dispense with complex classification algorithms: a simple classifier such as Naïve Bayes can reach the accuracy of complex algorithms such as SVM.
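The ensemble idea can be sketched minimally (our own toy illustration, not the authors' pipeline): train one simple classifier per ontology layer and combine their predictions by majority vote.

```python
from collections import Counter

def layered_ensemble_predict(sample, layer_classifiers):
    """Hypothetical sketch: one classifier per ontology layer; the
    ensemble label is the majority vote over the layer predictions."""
    votes = [clf(sample) for clf in layer_classifiers]
    return Counter(votes).most_common(1)[0][0]
```

Each `clf` here stands in for a simple per-layer model (e.g. Naïve Bayes trained on that layer's classes); the vote is what lets weak individual classifiers approach the accuracy of a single complex one.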
{"title":"Web pages Classification Using Domain Ontology and Clustering","authors":"S. Soltani, A. Barforoush","doi":"10.1142/S0218001409007065","DOIUrl":"https://doi.org/10.1142/S0218001409007065","url":null,"abstract":"Transferring the current Websites to Semantic Websites, using ontology population, is a research area within which classification has the main role. The existing classification algorithms and single level execution of them are insufficient on web data. Moreover, because of the variety in the context and structure of even common domain Websites, there is a lack of training data. In this paper we had three experiences: 1- using information in domain ontology about the layers of classes to train classifiers (layered classification) with improvement up to 10% on accuracy of classification. 2- experience on problem of training dataset and using clustering as a preprocess. 3- using ensembles to benefit from both two methods. Beside the improvement of accuracy from these experiences, we found out that with ensemble we can dispense with the algorithm of classification and use a simple classification like Naïve Bayes and have the accuracy of complex algorithms like SVM.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"121 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130478119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Current watermark models cannot reflect the conflicting relationship among cover fidelity, watermark robustness and watermark capacity, and there is no effective guidance for designing robust watermark algorithms in content security applications such as copyright protection. A robust watermark model based on a subliminal channel for content security applications is proposed. In this model, the half-symmetry of watermark communication is pointed out. Based on the model, the approaches to resolving the conflicting relationship are presented: increasing the entropy of the cover, decreasing the entropy of the watermark message, and increasing the mutual information between cover and watermark through cover transformation, watermark encoding, and public and subliminal channel encoding. The conditions and methods of cover transformation and watermark encoding are presented. This model and its approaches offer theoretical guidance for research on robust watermark algorithms in content security applications.
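The quantities the model trades off can be restated in standard information-theoretic notation (a generic restatement, not the paper's own formulas): writing $C$ for the cover and $W$ for the watermark message,

```latex
I(C;W) \;=\; H(C) - H(C \mid W) \;=\; H(W) - H(W \mid C),
\qquad I(C;W) \le \min\bigl(H(C),\, H(W)\bigr)
```

so the three levers the paper names, raising $H(C)$, lowering $H(W)$, and raising $I(C;W)$, are not independent: the standard bound ties the achievable mutual information to both entropies.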
{"title":"Robust Watermark Model Based on Subliminal Channel","authors":"Cheng Yang, Jianbo Liu, Yaqing Niu","doi":"10.1109/CIS.2007.63","DOIUrl":"https://doi.org/10.1109/CIS.2007.63","url":null,"abstract":"Current watermark models cannot reflect the conflicting relationship among cover fidelity, watermark robustness and watermark capacity. And there is no effective guidance for designing robust watermark algorithms in content security applications, such as the copyright protection. A robust watermark model based on subliminal channel for content security applications is proposed. In this model, the half- symmetry of watermark communication is pointed out. Based on the model, the approaches to solve the conflicting relationship are presented as to increase entropy of cover, to decrease entropy of watermark message and to increase mutual information between cover and watermark through cover transformation, watermark encoding, public and subliminal channel encoding. The conditions and methods of the cover transformation and watermark encoding are presented. This model and its approaches will offer theory guidance for researches on robust watermark algorithms in content security applications.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130760512","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, numerous multiobjective evolutionary algorithms (MOEAs) have been presented to solve real-life problems. However, a number of issues still remain with MOEAs, such as convergence to the true Pareto front and scalability to many-objective problems rather than just bi-objective problems. The performance of these algorithms may be augmented by incorporating the coevolutionary concept. Hence, in this paper, a new algorithm for multiobjective optimization called SPEA2-CC is presented. SPEA2-CC combines an MOEA, the Strength Pareto Evolutionary Algorithm 2 (SPEA2), with cooperative coevolution (CC). Scalability tests were conducted to evaluate and compare SPEA2-CC against the original SPEA2 on seven DTLZ test problems with 3 to 5 objectives. The results clearly show that the performance scalability of SPEA2-CC is significantly better than that of the original SPEA2 as the number of objectives grows.
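The cooperative-coevolution mechanism itself (independent of SPEA2) can be sketched as follows; this is a deliberately minimal single-objective toy under our own assumptions, not SPEA2-CC: the decision vector is split into groups, each group evolves its own subpopulation, and individuals are evaluated by inserting them into a context vector built from the other groups' current best parts.

```python
import random

def cc_optimize(fitness, dims, groups, gens=30, pop=20):
    """Minimal cooperative-coevolution sketch (single objective).
    groups: partition of variable indices, e.g. [[0, 1], [2, 3]].
    Each subpopulation is scored against a shared context vector."""
    best = [random.random() for _ in range(dims)]
    pops = [[[random.random() for _ in g] for _ in range(pop)] for g in groups]
    for _ in range(gens):
        for gi, g in enumerate(groups):
            def score(part):
                ctx = best[:]                 # collaborate with other groups
                for d, v in zip(g, part):
                    ctx[d] = v
                return fitness(ctx)
            pops[gi].sort(key=score)
            elite = pops[gi][0]
            for d, v in zip(g, elite):        # commit elite part to context
                best[d] = v
            # refill the worst half with mutated copies of the elite
            half = pop // 2
            pops[gi][half:] = [[v + random.gauss(0, 0.1) for v in elite]
                               for _ in range(pop - half)]
    return best
```

In SPEA2-CC the inner selection step would be SPEA2's strength-Pareto ranking over multiple objectives rather than this simple sort on a scalar fitness.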
{"title":"Performance Scalability of a Cooperative Coevolution Multiobjective Evolutionary Algorithm","authors":"Tse Guan Tan, J. Teo, H. Lau","doi":"10.1109/CIS.2007.181","DOIUrl":"https://doi.org/10.1109/CIS.2007.181","url":null,"abstract":"Recently, numerous Multiobjective Evolutionary Algorithms (MOEAs) have been presented to solve real life problems. However, a number of issues still remain with regards to MOEAs such as convergence to the true Pareto front as well as scalability to many objective problems rather than just bi-objective problems. The performance of these algorithms may be augmented by incorporating the coevolutionary concept. Hence, in this paper, a new algorithm for multiobjective optimization called SPEA2-CC is illustrated. SPEA2-CC combines an MOEA, Strength Pareto Evolutionary Algorithm 2 (SPEA2) with Cooperative Coevolution (CC). Scalability tests have been conducted to evaluate and compare the SPEA2- CC against the original SPEA2 for seven DTLZ test problems with a set of objectives (3 to 5 objectives). The results show clearly that the performance scalability of SPEA2-CC was significantly better compared to the original SPEA2 as the number of objectives becomes higher.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"180 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116704767","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel dimensionality reduction method for face representation and recognition is proposed in this paper. The technique attempts to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry, and is derived from ONPP. The main difference between ONPP and 2D-NPP is that the latter does not convert the input images to vectors, and works well in the undersampled situation. First, an "affinity" graph is built for the data in 2D-NPP, in a way similar to the method of LLE. Whereas LLE maps the input to the reduced space implicitly, 2D-NPP employs an explicit linear mapping between the two, so new data can be handled trivially by a simple linear transformation. We also show that the method is easy to apply in a supervised setting. Numerical experiments are reported to illustrate the performance of 2D-NPP and to compare it with a few competing methods.
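The first step, building the "affinity" graph, can be sketched as a plain k-nearest-neighbour graph (a simplified stand-in under our own assumptions: real LLE-style construction would also solve for reconstruction weights, and 2D-NPP operates on 2D images rather than the flattened rows used here).

```python
import numpy as np

def knn_affinity(X, k):
    """Build a binary k-nearest-neighbour affinity graph.
    X: one (flattened) sample per row.  W[i, j] = 1 iff j is among
    the k nearest neighbours of i (i itself excluded)."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(D, np.inf)          # never pick a point as its own neighbour
    idx = np.argsort(D, axis=1)[:, :k]
    W = np.zeros((len(X), len(X)))
    for i, nbrs in enumerate(idx):
        W[i, nbrs] = 1.0
    return W
```

The explicit linear mapping is then obtained by solving an eigenproblem that preserves this graph's neighbourhoods, which is what makes projecting new samples a single matrix multiplication.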
{"title":"2D-NPP: An Extension of Neighborhood Preserving Projection","authors":"Zirong Li, Minghui Du","doi":"10.1109/CIS.2007.23","DOIUrl":"https://doi.org/10.1109/CIS.2007.23","url":null,"abstract":"A novel method to reduce dimensionality for face representation and recognition was proposed in this paper. This technique attempts to preserve both the intrinsic neighborhood geometry of the data samples and the global geometry. It is derived from ONPP. The main difference between ONPP and 2d-NPP is that the latter does not change the input images to vectors, and works well under the undersampled size situation. First, an \"affinity\" graph was built for the data in 2D- NPP, in a way that is similar to the method of LLE. While the input was mapped to the reduced spaces implicitly in LLE, 2D-NPP employs an explicit linear mapping between the two. So it is trivial to handle the new data just by a simple linear transformation. We also show that is easy to apply the method in a supervised setting. Numerical experiments are reported to illustrate the performance of 2D-NPP and to compare it with a few competing methods.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131233675","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The paper presents a novel and robust video watermarking scheme for copyright protection based on chaos, the DWT and a just-noticeable-difference (JND) model of the human visual system (HVS). First, to ensure the security of the watermark and reduce the computation in the embedding process, we adopt a technique called chaotic selection to choose the embedding frames from the video. Each selected frame is transformed by the DWT, and the variation of the coefficients in the low-frequency subband is examined. Because the HVS is not very sensitive to moving content, we choose the coefficients with large variations as the embedding coefficients. Finally, the watermark signals are embedded and detected according to the JND model. The experimental results show that the proposed watermarking algorithm is robust to additive noise, MPEG compression, frame deletion and other attacks.
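Chaotic selection can be sketched with a logistic map (a hypothetical illustration; the paper does not specify which chaotic map or parameters it uses): the secret seed acts as the key, and iterating the map yields a keyed, reproducible set of frame indices.

```python
def chaotic_frame_selection(n_frames, n_select, x0=0.7, mu=3.99):
    """Hypothetical chaotic selection: iterate the logistic map
    x <- mu * x * (1 - x) from a secret seed x0 (the key) and map
    each chaotic value to a frame index, skipping repeats."""
    x, chosen = x0, []
    while len(chosen) < n_select:
        x = mu * x * (1.0 - x)
        idx = int(x * n_frames) % n_frames
        if idx not in chosen:
            chosen.append(idx)
    return chosen
```

Anyone holding the same seed regenerates the same frame set for detection, while the sequence looks unpredictable without it.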
{"title":"Research on Video Watermarking Scheme Based on Chaos, DWT and JND Model","authors":"Shuguo Yang, Shenghe Sun, Chunxia Li","doi":"10.1109/CIS.2007.143","DOIUrl":"https://doi.org/10.1109/CIS.2007.143","url":null,"abstract":"The paper presents a novel and robust video watermarking scheme for copyright protection based on chaos, DWT and JND model of HVS. Firstly, in order to ensure security of watermarking and reduce the quantity of computation in the embedding process, we adopt a technique called chaotic selection to select the embedding frames from the video. To every selected frame, we transform it by DWT and detect the variation status of the coefficient in low frequency domain. Because HVS isn't too sensitive to the motive things, we choose those coefficients whose variations are large as the embedding coefficients. Finally, the watermarking signals are embedded and detected according to the JND model. The experimental results show that the proposed watermarking algorithm is robust to additive noise, MPEG compression, frame deleting and so on.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"158 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131580690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel license plate locating approach based on color and texture features is presented. First, the input image is converted to the hue-saturation-intensity (HSI) color space. A target image is then obtained by applying a sequence of image processing techniques to the hue and saturation component images. After that, the space-pixel histogram of the target image is analyzed and mathematically modeled to extract the horizontal candidate. Finally, a discrete wavelet transform (DWT) is performed on the candidate, and the sum of the first-order differences of the DWT subimages highlights the texture of the license plate area, yielding the precise position of the plate. The proposed algorithm combines color features with texture features, improving locating reliability. Experiments were conducted on a database of 332 images taken under various illumination conditions; the license plate detection success rate is as high as 96.4%.
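The horizontal-candidate step can be sketched as a row-projection analysis (a simplified stand-in under our own assumptions; the paper's actual mathematical model of the space-pixel histogram is not reproduced): count plate-coloured pixels per row and take the widest contiguous band above a threshold.

```python
def horizontal_candidate(row_counts):
    """Given per-row counts of plate-coloured pixels, return the
    (start_row, end_row) of the widest contiguous run of rows whose
    count reaches half the peak, or None if the image is empty."""
    thresh = max(row_counts) * 0.5
    best, cur_start, best_span = None, None, 0
    for i, v in enumerate(row_counts + [0]):   # sentinel flushes the last run
        if v >= thresh and cur_start is None:
            cur_start = i
        elif v < thresh and cur_start is not None:
            if i - cur_start > best_span:
                best, best_span = (cur_start, i - 1), i - cur_start
            cur_start = None
    return best
```

The returned band is the horizontal candidate that the subsequent DWT texture analysis refines into the precise plate position.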
{"title":"A Color and Texture Feature Based Approach to License Plate Location","authors":"Jia Li, Mei Xie","doi":"10.1109/CIS.2007.71","DOIUrl":"https://doi.org/10.1109/CIS.2007.71","url":null,"abstract":"A novel license plate locating approach based on the color and texture features is presented. Firstly, the input image is converted to the hue-saturation-intensity (HSI) color space. Then a target image is obtained by applying a sequence of image processing techniques to the hue and saturation component images. After that, the space-pixel histogram of the target image is analyzed and mathematically modeled, so that the horizontal candidate is extracted. Finally, discrete wavelet transform is performed on the candidate, and the sum of the first order difference of the DWT subimages highlights the texture information of the LP area, telling the precise position of the license plate. The proposed algorithm focuses on combining the color features with the texture features, improving the locating reliability. Experiment was conducted on a database of 332 images taken from various illumination situations. The license plate detecting rate of success is as high as 96.4%.","PeriodicalId":127238,"journal":{"name":"2007 International Conference on Computational Intelligence and Security (CIS 2007)","volume":"120 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132428666","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}