Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00019
Lei Yang, Xiaotian Jia, Ganming Liu
This paper proposes a multi-objective ant colony algorithm based on pheromone weight for solving multi-objective optimization problems. The algorithm introduces a distance-related weight into the pheromone initialization, which helps the ants select paths more quickly and improves the efficiency of the ant search. It also introduces an adaptive variation operator that dynamically adjusts the number of ant neighbors with the iteration count, together with the weighted Tchebycheff aggregation method, both of which improve the convergence speed and solution quality of the algorithm. The algorithm was compared with related algorithms on the standard bi-objective Traveling Salesman Problem (TSP) using Hypervolume and other indicators, and the results show that the improved algorithm performs better.
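The abstract does not give the exact formulation it uses, but the weighted Tchebycheff aggregation it names is commonly defined as minimizing max_i w_i·|f_i(x) − z*_i| against an ideal point z*. A minimal sketch of that scalarization, with made-up tour values for a bi-objective TSP:

```python
def weighted_tchebycheff(objectives, weights, ideal):
    """Weighted Tchebycheff scalarization: max_i w_i * |f_i(x) - z*_i|,
    where z* is the ideal point (best value seen for each objective)."""
    return max(w * abs(f - z) for f, w, z in zip(objectives, weights, ideal))

# Two hypothetical tours scored on two objectives (e.g. length and cost).
ideal = (100.0, 80.0)      # assumed ideal point z*
weights = (0.5, 0.5)       # assumed equal objective weights
tour_a = (120.0, 95.0)     # scalarized: max(10.0, 7.5)  = 10.0
tour_b = (140.0, 82.0)     # scalarized: max(20.0, 1.0)  = 20.0

# An ant would keep the tour with the smaller scalarized value.
best = min((tour_a, tour_b),
           key=lambda f: weighted_tchebycheff(f, weights, ideal))
```

Here `best` is `tour_a`, since its worst weighted deviation from the ideal point is smaller.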
{"title":"Multi-objective Ant Colony Algorithm Based on Pheromone Weight","authors":"Lei Yang, Xiaotian Jia, Ganming Liu","doi":"10.1109/CIS52066.2020.00019","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00019","url":null,"abstract":"This paper proposed multi-objective ant colony algorithm based on pheromone weight, which is used to solve multi-objective optimization problems. The algorithm introduces the weight of distance-related in the initialization of pheromones, which is beneficial to the ant speed up the path selection, improving the efficiency of ant search. At the same time, the adaptive variation operator that dynamically adjusts the number of ant neighbors with the number of iterations and the weight Tchebycheff aggregation method are also introduced, which are beneficial to improve the convergence speed and the quality of the algorithm. The algorithm has been compared with other related algorithms using Hypervolume and other indicators in the standard dual Traveling Salesman Problem (TSP), and has been proven that the improved algorithm has better results.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121096305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00030
Jian Shi, Yi Xin, Benlian Xu, Mingli Lu, Jinliang Cong
Detection and tracking of multiple cells is critical in biomedical research and computer vision. Resolving lineage relationships between mitotic cells has recently been of fundamental interest in this field. Cells in microscopy images captured under poor imaging conditions are difficult to detect, and manual annotation still remains the standard procedure. This paper proposes a cell detection framework consisting of a convolutional neural network (CNN) cell detector and a convolutional long short-term memory (LSTM) model. The detector is a well-trained Faster R-CNN network that learns various cell features, and the convolutional LSTM network is employed to capture cell mitotic events, using both appearance and motion information from candidate sequences. Experimental results on realistic low-contrast cell images demonstrate the robustness and validity of the proposed method.
{"title":"A Deep Framework for Cell Mitosis Detection in Microscopy Images","authors":"Jian Shi, Yi Xin, Benlian Xu, Mingli Lu, Jinliang Cong","doi":"10.1109/CIS52066.2020.00030","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00030","url":null,"abstract":"Detection and tracking of multiple cells is critical in biomedical research and computer vision. Resolving lineage relationships between mitotic cells has been of fundamental interest in this filed recently. Microscopy images with cells at poor imagining conditions are difficult to detect and manual operation still remains standard procedure. This paper proposed a cell detection framework consisting of a convolution neural network (CNN) cell detector and a convolutional long short-term memory (LSTM) model. The detector is modeled by a well-trained Faster RCNN network to learn various cell features, and the convolutional LSTM network is employed to capture cell mitotic events, which utilizes both appearance and motion information from candidate sequences. Experimental results on realistic low contrast cell images are presented to demonstrate the robustness and validation of the proposed method.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115961064","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00082
Meng Wang, Yihong Long
SM9 is an identity-based cryptography algorithm published by the State Cryptography Administration of China. With SM9, a user's private signing key is generated by a central system called the key generation center (KGC). When the owner of the private key wants to shirk responsibility by denying that he generated a signature, he can claim that the operator of the KGC forged it using the generated private key. To address this issue, this paper proposes two SM9 digital signature schemes with non-repudiation. In the proposed schemes, the user's private signing key is collaboratively generated by two separate components, one deployed at the private key service provider's site and the other at the user's site. With the help of homomorphic encryption, the private key can only be computed at the user's site. Therefore, only the user can obtain the private key, and he cannot deny that he generated the signature. The proposed schemes thus achieve non-repudiation for SM9 digital signatures.
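The abstract does not describe the protocol itself, so the following is only a toy illustration of the split-key idea (not the paper's scheme, not SM9 arithmetic, and not secure): the signing key is a product of two shares, so neither the service provider nor an outside observer holding a single share knows the full key, and only the party holding both factors (the user's site) can assemble it.

```python
import secrets

# Placeholder group order; real SM9 works over a pairing-friendly curve.
q = 0xFFFFFFFEFFFFFFFFFFFFFFFFFFFFFFFF

d1 = secrets.randbelow(q - 1) + 1  # share held by the service provider
d2 = secrets.randbelow(q - 1) + 1  # share held only by the user

# Only the user's site, which holds d2 (and receives d1's contribution
# under encryption in the real schemes), can compute the full key.
d = (d1 * d2) % q
```

The non-repudiation argument in the paper rests on this asymmetry: since the provider never sees `d`, the user cannot plausibly claim the provider forged a signature with it.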
{"title":"SM9 Digital Signature with Non-repudiation","authors":"Meng Wang, Yihong Long","doi":"10.1109/CIS52066.2020.00082","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00082","url":null,"abstract":"SM9 is an identity-based cryptography algorithm published by the State Cryptography Administration of China. With SM9, a user's private key for signing is generated by a central system called key generation center (KGC). When the owner of the private key wants to shirk responsibility by denying that the signature was generated by himself, he can claim that the operator of KGC forged the signature using the generated private key. To address this issue, in this paper, two schemes of SM9 digital signature with non-repudiation are proposed. With the proposed schemes, the user's private key for signing is collaboratively generated by two separate components, one of which is deployed in the private key service provider's site while the other is deployed in the user's site. The private key can only be calculated in the user's site with the help of homomorphic encryption. Therefore, only the user can obtain the private key and he cannot deny that the signature was generated by himself. The proposed schemes can achieve the non-repudiation of SM9 digital signature.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127204292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00081
Zhendong Liu, Yurong Yang, Xinrong Lv, Dongyan Li, Xi Chen, Xiaofeng Li
The basin hopping graph provides a computing model for RNA folding structure prediction, including structures with pseudoknots. We investigate prediction algorithms for RNA folding structures based on extended structures and the basin hopping graph. This study presents a prediction algorithm based on extended structures and an improved computing algorithm based on the basin hopping graph; both are attractive approaches to RNA folding structure prediction. Experiments on the Rfam 14.2 and PseudoBase databases show that the two algorithms are more efficient and accurate than other existing algorithms.
{"title":"Predicting Algorithms and Complexity in RNA Structure Based on BHG","authors":"Zhendong Liu, Yurong Yang, Xinrong Lv, Dongyan Li, Xi Chen, Xiaofeng Li","doi":"10.1109/CIS52066.2020.00081","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00081","url":null,"abstract":"It is a computing mode of basin hopping graph in RNA folding structural prediction including pseudoknots. We investigate the computing algorithm in RNA folding structural prediction based on extended structure and basin hopping graph. This study presents predicting algorithm based on extended structure, also presents an improved computing algorithm based on basin hopping graph, they are attractive approachs in RNA folding structural prediction. Many experiments have been implemented in Rfam14.2 database and PseudoBase database, the experimental results show our two algorithms are efficiently and accurately than other existing algorithm.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114043975","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00027
Ping Cai, Xingyuan Chen, Hongjun Wang, Peng Jin
Although natural language generation (NLG) has achieved great success, the generated text still exhibits many problems when humans examine it carefully. To analyze these problems, we use manual evaluation to annotate and analyze text generated by NLG systems. The analysis results give an in-depth, comprehensive, and accurate understanding of the defects of NLG and provide cues for future improvement. In this paper, we first use a state-of-the-art Topic-to-Essay generation model to generate texts conditioned on topic words. Then, by analyzing the generated text, we propose an annotation framework and quantify the main drawbacks of current NLG, including poor semantic coherence, content duplication, logic errors, and repetition. The results show that text generated by current sequence-to-sequence models is still far from human expectations.
{"title":"The errors analysis of natural language generation — A case study of Topic-to-Essay generation","authors":"Ping Cai, Xingyuan Chen, Hongjun Wang, Peng Jin","doi":"10.1109/CIS52066.2020.00027","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00027","url":null,"abstract":"Although natural language generation (NLG) has achieved great success, there are still many problems with the generated text, if humans carefully examine it. To analyze the problems of NLG, we use manual evaluation methods to annotate and analyze the text generated by NLG. According to the analysis results, we can understand the defects of NLG in-depth, comprehensively, and accurately. Further, these provide cues for future improvement. In this paper, we first use a state-of-the-art Topic-to-Essay generation model to generate texts conditional on some topic words. Then, by analyzing the generated text, we propose an annotation framework, and then quantify the main drawbacks of current NLG, including poor semantic coherence, content duplication, logic errors, and repetition. It shows that the text generated by the current sequence-to-sequence model is still far from human expectation.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116812616","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00073
Jie Chen, X. Su, WeiSheng Wen, Hao-Tian Wu
To establish personal learning accounts for vocational education and achieve the traceability, query, and conversion of learning results, this paper proposes a credit platform for a vocational education chain. The platform is built on the technical characteristics of blockchain: decentralization, distributed bookkeeping, asymmetric encryption, smart contracts, and the consensus mechanism. In the system design and system logic design, we focus on how to evaluate students more objectively, completely, and accurately; how to make the platform more secure, open, and shared; and how to make colleges, students, and enterprises trust and prefer to use the credit platform. In view of the difficulties in building credit platform nodes in the polytechnic education chain, such as credit issuance and acquisition and credit query, this paper puts forward design ideas and carries out a prototype design. As a result, the credit platform is made more suitable for the development of educational alliances, collectivization, diversification, and "Internet +" open, shared, and lifelong education.
{"title":"Credit Platform Construction of Vocational Education Group Based on Blockchain","authors":"Jie Chen, X. Su, WeiSheng Wen, Hao-Tian Wu","doi":"10.1109/CIS52066.2020.00073","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00073","url":null,"abstract":"To establish the vocational education personal learning account and achieve the traceability, query and conversion of learning results, the paper proposes a credit platform of vocational education chain. The technical characteristics of block chain decentralization, bookkeeping, asymmetric encryption, distributed, smart contract and consensus mechanism are used to construct the platform. In system design and system logic design, we focus on how to evaluate students more objectively, completely and accurately, how to make the platform more secure, public and shared, and how to make colleges, students and enterprises more trustworthy and prefer to use credit platforms. In view of the difficulties in the construction of credit platform nodes in polytechnic education chain, such as credit issuance and acquisition, credit query and so on, this paper puts forward some ideas and carries out prototype design. As a result, the credit platform is made more suitable for the development of educational alliance, collectivization, diversification and Internet +, open, shared and lifelong education.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122875611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00075
Chunman Yan, Yuyao Zhang
For example-based image super-resolution reconstruction, a fast and efficient dictionary learning algorithm is essential to solving the problem of inconsistent mappings between low-resolution and high-resolution images. This paper adopts the online dictionary learning algorithm for image super-resolution. In the learning stage, the algorithm constructs high-resolution and corresponding low-resolution feature training sets, obtains a sparse coding matrix of the low-resolution training set using the online dictionary learning algorithm, and computes the high-resolution dictionary by sharing the sparse coding coefficients. In the reconstruction stage, the input low-resolution image is first interpolated to the size of the desired high-resolution image; the sparse coding matrix is obtained with the Orthogonal Matching Pursuit (OMP) method on the low-resolution test set; the high-resolution image blocks are then computed from the high-resolution dictionary and that sparse coding matrix; and finally the blocks are reordered and averaged to produce the reconstructed high-resolution image. The experimental results show that the proposed method achieves better super-resolution reconstruction quality than the traditional sparse coding method: detail and texture are reconstructed well, and the algorithm effectively suppresses edge artifacts.
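The OMP step named in the abstract is a standard greedy sparse-coding routine: repeatedly pick the dictionary atom most correlated with the residual, then least-squares refit on all atoms chosen so far. A minimal sketch on a toy dictionary (orthonormal atoms, chosen so a 2-sparse code is recovered exactly; the paper's dictionaries come from online dictionary learning instead):

```python
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedily select k atoms (columns of D)
    and least-squares fit y on the selected atoms."""
    residual, support = y.copy(), []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        # Re-fit y on all chosen atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x = np.zeros(D.shape[1])
    x[support] = coef
    return x

# Toy dictionary: 4 orthonormal atoms in R^8 (via QR of a random matrix).
rng = np.random.default_rng(0)
D, _ = np.linalg.qr(rng.normal(size=(8, 4)))
x_true = np.array([0.0, 1.5, 0.0, -0.7])   # 2-sparse code
y = D @ x_true                              # observed signal
x_hat = omp(D, y, k=2)                      # recovers x_true
```

In the paper's pipeline, `y` would be a low-resolution feature patch and `x_hat` the sparse code shared with the high-resolution dictionary.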
{"title":"Image Super-Resolution Reconstruction Based on Online dictionary learning Algorithm","authors":"Chunman Yan, Yuyao Zhang","doi":"10.1109/CIS52066.2020.00075","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00075","url":null,"abstract":"For the image super-resolution reconstruction method based on case-learning shows that a fast and efficient dictionary learning algorithm is very important to solve the problem of mapping inconsistency between low-resolution and high-resolution images. This paper adopts the online dictionary learning algorithm for the image super-resolution. In the learning stage, the algorithm constructs the high-resolution and the corresponding low-resolution feature training sets, then by using the online dictionary learning algorithm, obtains a sparse coding matrix of the low-resolution training sets, and computers the high-resolution dictionary by sharing the sparse coding coefficients; in the reconstruction stage, the input low-resolution image firstly is interpolated to the size of the desired high-resolution image, and obtains the sparse coding matrix through OMP ( Orthogonal Matching Pursuit ) method in the low-resolution test sets, then computers the high-resolution image blocks based on the above high-resolution dictionary and the later sparse coding matrix, finally reorders and averages the blocks to achieve the reconstructed high-resolution image. 
The experimental results show that the proposed method can achieve better quality for image super-resolution reconstruction than the traditional sparse coding method, the detail and texture of the reconstructed image are reconstructed well, and the algorithm can effectively inhibit the artifact of image edge phenomenon.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122516331","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00060
Haiyan Huang, Bizhong Wei, Jian Dai, Wenlong Ke
Data mining is the focus of big data applications in various fields, and data preprocessing is a crucial step in the data mining process. With the development of the information society and the application of databases, educational data has seen explosive growth, and data on students in poverty has become informative. However, the data on poor students collected by actual student financial aid management systems generally suffers from problems such as missing values, attribute redundancy, and noise. To solve this problem, we propose a novel preprocessing method called DPBP. The proposed DPBP approach consists of four stages: data preparation, characteristic scoping, characteristic combination, and filtering by the number of missing values. First, we prepare the dataset by extracting data. Next, the characteristic range is limited according to the experimental results of a feature selection algorithm. The third stage performs feature combination to obtain the feature decomposition sets. Finally, based on accuracy and the number of missing values, we obtain the optimal dataset. A series of experimental results shows that the proposed method significantly improves data quality and stability.
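The abstract does not spell out DPBP's internals, so the following only illustrates its final stage, screening records by their number of missing values, on made-up student rows with an assumed tolerance threshold:

```python
import numpy as np

# Toy student records (three numeric attributes); NaN marks a missing value.
records = np.array([
    [3200.0, 2.0,    1.0],
    [np.nan, 1.0,    0.0],
    [2800.0, np.nan, np.nan],
    [4100.0, 3.0,    1.0],
])

# Count missing values per record, then keep records under a tolerance
# threshold (the threshold here is an assumption, not from the paper).
missing_per_row = np.isnan(records).sum(axis=1)
max_missing = 1
kept = records[missing_per_row <= max_missing]
```

The third record (two missing values) is filtered out, leaving three usable records; DPBP couples such filtering with accuracy on downstream classification to pick the final dataset.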
{"title":"Data Preprocessing Method For The Analysis Of Incomplete Data On Students In Poverty","authors":"Haiyan Huang, Bizhong Wei, Jian Dai, Wenlong Ke","doi":"10.1109/CIS52066.2020.00060","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00060","url":null,"abstract":"Data mining is the focus of big data applications in various fields. Data pre-processing is a crucial step in the data mining process. With the development of the information society and the application of databases, the educational data has seen explosive growth, and the data on poor students has become informative. However, the actual student financial aid management system collects the data on poor students which generally has problems such as missing values, attributes redundancy, and noise. To solve this problem, we proposed a novel method called DPBP to preprocess data. The proposed DPBP approach consists of four stages: the preparation of data, the scoping of characteristics, the combination of characteristics, and the filtering of missing number. Firstly, we prepare the dataset by extracting data. Next, the characteristic range is limited by choosing experimental results of feature selection algorithm. Then, third stage performs feature combination to obtain the feature decomposition sets. Finally, based on accuracy and missing number, we gain the optimal dataset. 
Series of experiments result show that our proposed method significantly improves the data quality and stability.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126354319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 | DOI: 10.1109/CIS52066.2020.00056
Di Wang, Zuoquan Zhang
The financial crisis of 2008 inflicted heavy losses on the global economy, and enterprise credit risk has since attracted extensive concern. Enterprises produce all kinds of financial data, and credit risk models built on these data can judge credit risk accurately. However, such models still have many limitations, and high-dimensional data makes modeling difficult. Therefore, this paper puts forward a hybrid system based on a feature selection approach and ensemble learning. The first experiment evaluates the hybrid system HFES, based on the F-score and ensemble learning; the second evaluates the hybrid system HGIES, which combines the Gini index and ensemble learning. Both achieve good performance. The real dataset consists of 160 listed companies with 22 features in total. On this data, our experiments indicate that classification accuracy is significantly raised by the hybrid systems HFES and HGIES. They can be applied not only to credit risk assessment but also to other fields.
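The F-score that HFES builds on is a simple filter statistic for binary classification: between-class separation of a feature's means over its within-class variances. A minimal sketch on made-up data (the exact variant the paper uses is not stated in the abstract):

```python
import numpy as np

def f_score(X, y):
    """Per-feature F-score for a binary label vector y (0/1):
    squared deviations of the class means from the overall mean,
    divided by the sum of within-class sample variances."""
    Xp, Xn = X[y == 1], X[y == 0]
    mean_all, mean_p, mean_n = X.mean(0), Xp.mean(0), Xn.mean(0)
    num = (mean_p - mean_all) ** 2 + (mean_n - mean_all) ** 2
    den = Xp.var(0, ddof=1) + Xn.var(0, ddof=1)
    return num / den

# Feature 0 separates the two classes; feature 1 is noise.
X = np.array([[1.0, 5.0], [1.2, 3.0], [0.9, 4.0],
              [3.0, 4.5], [3.3, 3.5], [2.9, 5.5]])
y = np.array([0, 0, 0, 1, 1, 1])
scores = f_score(X, y)   # scores[0] >> scores[1]
```

Features are then ranked by `scores` and the low-ranked ones dropped before the ensemble classifier is trained.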
{"title":"Enterprise Credit Risk Assessment Using Feature Selection Approach and Ensemble Learning Technique","authors":"Di Wang, Zuoquan Zhang","doi":"10.1109/CIS52066.2020.00056","DOIUrl":"https://doi.org/10.1109/CIS52066.2020.00056","url":null,"abstract":"Financial crisis happened in 2008 has inflicted heavy losses on the global economy and enterprise credit risk has caused extensive concern. There are all kinds of financial data in an enterprise. By using these data, credit risk models can be used to judge credit risk accurately. However, there are still many limitations in these models and the high dimension data brings about difficulties for modeling. Therefore, this paper puts forward a hybrid system based on feature selection approach and ensemble learning. The first experiment is the hybrid system HFES based on F-score and ensemble learning; and the second one is the hybrid system HGIES combines the Gini index and ensemble learning. Both experiments achieve good performance. The real data set consists of 160 listed companies with total 22 features. By using this data, our experiment indicates that the accuracy of classification is signifiantly raised by hybrid system HFES and HGIES. Meanwhile, they not only can be applied to credit risk assessment, but also can be put into use in more fields.","PeriodicalId":106959,"journal":{"name":"2020 16th International Conference on Computational Intelligence and Security (CIS)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129370221","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}