A Robust Multi-Sphere SVC Algorithm Based on Parameter Estimation. Kexin Jia, Yuxia Xin, Ting Cheng. In Proceedings of the 3rd International Conference on Advanced Information Science and System (AISS 2021), November 26, 2021. doi:10.1145/3503047.3503112.
To improve robustness to noise, outliers, and arbitrary cluster boundaries, this paper proposes a robust multi-sphere support vector clustering (SVC) algorithm that automatically estimates a suitable kernel parameter and determines the number of clusters. The Gaussian kernel parameter is first estimated by a kernel parameter estimation algorithm based on support vector domain description (SVDD) and the original local variance (LV) algorithm. With the estimated kernel parameter, the SVC algorithm partitions the given data points into clusters, and the SVDD algorithm is then run several times on each cluster. Finally, memberships are computed and the final clustering result is obtained from them. Several simulations verify the effectiveness of the proposed algorithm.
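Both the SVC and SVDD steps above operate on a Gaussian kernel whose width parameter is what the paper estimates. As a minimal illustration (not the paper's estimation algorithm; the function name and the choice of q are ours), the kernel matrix that the estimated parameter feeds into can be computed as:

```python
import numpy as np

def gaussian_kernel_matrix(X, q):
    """K[i, j] = exp(-q * ||x_i - x_j||^2): the Gaussian kernel that both
    SVC and SVDD operate on; q is the parameter the paper estimates."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-q * sq_dists)

K = gaussian_kernel_matrix(np.array([[0.0, 0.0], [1.0, 0.0]]), q=1.0)
```

The matrix is symmetric with a unit diagonal; larger q shrinks the enclosing spheres and yields more clusters.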
Research on Tibetan-Chinese Neural Machine Translation Integrating Syntactic Information. Maoxian Zhou, Secha Jia, Rangjia Cai. doi:10.1145/3503047.3503120.
In recent years, neural networks have gradually replaced other methods in machine translation and become the mainstream, with excellent performance in many languages. However, the performance of neural machine translation relies mainly on large-scale parallel corpora, which are lacking for low-resource languages, especially Tibetan-Chinese machine translation. To obtain the best translation performance from a limited corpus with additional external information, this paper introduces syntactic information, namely part-of-speech (POS) tags added as input features during training. Experiments verify the effectiveness of this method, which improves translation performance to a certain extent.
A Framework of Darknet Forensics. Tao Leng, Aimin Yu. doi:10.1145/3503047.3503082.
The dark web market is full of illegal and criminal activities, such as the sale of sensitive personal information, guns, drugs, and terrorist videos. Cybercriminals use The Onion Router (Tor) browser to enter the dark web to publish information and to trade, and the browser is widely used because it provides privacy protection and anonymity. This privacy protection poses great challenges to network investigators. This article detects the use of the latest Tor browser and, through forensic experiments, compares and analyzes the evidence contained in the registry, memory images, hard disk files, and network packets. It also compares and analyzes the different access modes of the Tor browser. Collecting evidence of cybercrime committed with a device that uses the Tor browser is helpful to the development of electronic data forensics analysis.
Research on the Optimal Methods for Graph Edit Distance. Xuan Wang, Ziyang Chen. doi:10.1145/3503047.3503062.
Graph edit distance is an important measure of the similarity of pairwise graphs and has been widely applied in bioinformatics, chemistry, social networks, and other fields. However, its expensive computation poses serious algorithmic challenges. One recent methodology for computing graph edit distance is to search the vertex mapping. Existing methods use A-Star heuristic search and pruning to improve performance, but they still suffer from huge time and space consumption and loose lower bounds. In this paper, three optimization methods based on heuristic A-Star search are proposed to improve the mapping search strategy. First, a Symmetry-Breaking pruning strategy is proposed, which defines the concept of mapping equivalence and prunes equivalent mappings before they are extended. Second, a pruning strategy based on an upper bound, obtained with the Hungarian algorithm, filters invalid mappings out of the priority queue to speed up the search. Third, a dequeue order is specified for mappings in the priority queue that share the same lower bound on the edit cost. Experiments on real datasets show that our methods achieve significant savings in both time and space.
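The Hungarian-algorithm bound named in the second optimization solves a linear assignment problem between the two graphs' vertices. A minimal sketch of such an assignment-based bound (not the authors' code; the unit substitution/indel costs and the use of SciPy's linear_sum_assignment are our assumptions, and only vertex labels are considered here):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assignment_bound(labels_g, labels_h, sub_cost=1.0, indel_cost=1.0):
    """Cost of an optimal vertex assignment between two labeled graphs:
    substitutions in the top-left block, deletions/insertions on the
    diagonals of the off-diagonal blocks, dummies elsewhere."""
    n, m = len(labels_g), len(labels_h)
    big = 1e6  # stand-in for forbidden pairings (SciPy rejects inf)
    cost = np.zeros((n + m, n + m))
    cost[:n, :m] = [[0.0 if a == b else sub_cost for b in labels_h]
                    for a in labels_g]
    cost[:n, m:] = big
    np.fill_diagonal(cost[:n, m:], indel_cost)  # delete a vertex of G
    cost[n:, :m] = big
    np.fill_diagonal(cost[n:, :m], indel_cost)  # insert a vertex of H
    rows, cols = linear_sum_assignment(cost)    # Hungarian-style solver
    return cost[rows, cols].sum()
```

For labels ["A", "B", "C"] versus ["A", "C"], the optimal assignment keeps A and C and deletes B, giving a bound of 1.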
A Novel WiFi Gesture Recognition Method Based on CNN-LSTM and Channel Attention. Yu Gu, Jiang Li. doi:10.1145/3503047.3503148.
With the rapid development of wireless sensing, intelligent human-computer interaction, and other fields, WiFi-based gesture recognition has become an important research area. It is non-contact and privacy-preserving, and the ubiquity of home WiFi gives the technology broad application scenarios. At present, most WiFi-based gesture recognition models achieve good results only in a specific domain: when the environment or the orientation of the gesture changes, model performance degrades sharply. This paper proposes a gesture recognition system based on a channel attention mechanism and a CNN-LSTM fusion model. The channel attention mechanism weighs the importance of different channel features, while the CNN-LSTM fusion model extracts richer features in the time and space domains. The system achieves good classification results across multiple domains of the public Widar3.0 dataset.
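A channel attention module of the kind the abstract describes squeezes each channel to a scalar, passes it through a small bottleneck, and rescales the channels. This is an illustrative NumPy sketch, not the paper's model: the weight matrices w1 and w2 would be learned, and the function name is ours.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """x: (channels, time) feature map. Squeeze each channel to one
    number, excite through a bottleneck, rescale by sigmoid weights."""
    squeeze = x.mean(axis=1)                          # global average pooling
    hidden = np.maximum(0.0, w1 @ squeeze)            # ReLU bottleneck
    weights = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # sigmoid channel weights
    return x * weights[:, None]                       # reweight each channel
```

In the full system the reweighted map would then feed the CNN-LSTM trunk, letting the network emphasize informative CSI channels.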
SAR-GPA: SAR Generation Perturbation Algorithm. Zhe Liu, Weijie Xia, Yongzhen Lei. doi:10.1145/3503047.3503136.
Deep learning is widely used on optical and synthetic aperture radar (SAR) images. Current research shows that adversarial perturbations can effectively attack deep learning networks on optical images. However, for SAR target recognition networks, the existence of universal perturbations and approaches to generating them need further exploration. This article first proposes a systematic SAR generation perturbation algorithm (SAR-GPA) for target recognition networks. With state-of-the-art electromagnetic metasurface technology, the modulation phase sequences of the jamming points can be varied arbitrarily, so that the SAR deceptive jamming system can produce artificial, controllable perturbations. First, taking the imperceptible perturbations of universal adversarial perturbations (UAP) as a reference, we construct an unconstrained minimization problem to find the specific sequences. Then, we solve this problem with the adaptive moment estimation (Adam) optimizer, so that SAR adversarial examples can be generated quickly and flexibly by our system. Finally, we design a series of simulations and experiments to verify the effectiveness of the adversarial examples and the modulation sequences. The results show that, unlike traditional SAR blanket jamming methods, our approach quickly generates imperceptible jamming that can effectively attack three classical recognition models.
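The Adam optimizer named above is standard machinery rather than something specific to SAR-GPA; one update step of the generic algorithm looks like this (a textbook sketch, not the paper's solver):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient and
    its square, bias-corrected, then a scaled descent step."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)          # bias correction, step index t >= 1
    v_hat = v / (1 - b2 ** t)
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v
```

Iterating this step on the paper's unconstrained objective is what drives the search for the modulation phase sequences.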
Target detection of remote sensing images based on deep learning method and system. Su-jun Wang, Y. Ping, Gang Chen, Li Yang, Wei Wen, Changzhi Xu, Ying-zhao Shao. doi:10.1145/3503047.3503116.
With the rapid growth of remote sensing image data, it is very important to find a way to extract and recognize targets quickly and accurately from massive remote sensing data. In recent years, the development of deep learning has provided an effective approach to target detection in remote sensing images. This paper applies deep learning to target detection in remote sensing images and builds a target detection system that integrates sample labeling, dataset construction, training-sample preprocessing, the training algorithm, transfer learning, target recognition, and post-processing. It provides technical support for the classification, information extraction, and change detection of remote sensing images. Experimental results show that the system achieves high precision in scene classification and specific target detection on high-resolution remote sensing images.
UAV-enabled Edge Computing for Virtual Reality. Shengjie Ding, Juan Liu, Lingfu Xie. doi:10.1145/3503047.3503128.
5G communication promotes the development of virtual reality (VR) applications, providing users with immersive experiences. To accomplish VR tasks with heavy computation and low delay demands, an unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) method is proposed to assist VR devices in the rendering process. Under the constraints imposed by the VR characteristics and the device energy, the UAV flight trajectory and the VR rendering mode are jointly optimized to maximize the rendering completion rate of the VR tasks. The problem is modeled as a Markov decision process, and a UAV-aided rendering algorithm is proposed in the framework of deep reinforcement learning to find the optimal policy. Specifically, the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm is applied to schedule the UAV trajectory and the VR rendering mode so as to meet the requirements of the randomly arriving VR tasks as far as possible. Simulation results show that the proposed method outperforms baseline strategies in both rendering completion rate and convergence speed.
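The TD3 machinery the abstract relies on centers on clipped double-Q learning: the temporal-difference target takes the minimum of two target critics to curb overestimation. A one-function sketch of that target (generic TD3, not the paper's scheduler; the discount γ = 0.99 is a typical choice, not taken from the paper):

```python
import numpy as np

def td3_target(reward, done, next_q1, next_q2, gamma=0.99):
    """Clipped double-Q TD target: use the smaller of the two target
    critics' estimates; a terminal transition (done=1) gets no bootstrap."""
    return reward + gamma * (1.0 - done) * np.minimum(next_q1, next_q2)
```

Each critic is then regressed toward this target, while the actor (here, the UAV trajectory and rendering-mode policy) is updated less frequently against one critic.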
Life Expectancy Estimation based on Machine Learning and Structured Predictors. Khulood Faisal, Dareen Alomari, Hind Alasmari, Hanan Alghamdi, Kawther A. Saeedi. doi:10.1145/3503047.3503122.
ACM Reference Format: Khulood Faisal, Dareen Alomari, Hind Alasmari, Hanan Alghamdi, and Kawther A. Saeedi. 2021. Life Expectancy Estimation based on Machine Learning and Structured Predictors. In Proceedings of the 3rd International Conference on Advanced Information Science and System (AISS 2021), November 26–28, 2021, Sanya, China. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3503047.3503122
Survey of Trust Management on Mission-oriented Internet of Things. Youna Jung, Noah Goldsmith, John Barker. doi:10.1145/3503047.3503076.
The Internet of Things (IoT) enables us to use diverse sensing data and to control IoT devices, and sometimes entire networks, remotely. This technology makes our lives easier and more comfortable. However, the services currently provided by IoT systems are limited to pre-defined sets of devices, which hinders the development of the large-scale, complicated IoT services that could be provided through dynamic collaboration between IoT devices across networks. To realize mission-oriented IoT (MIoT) systems, we must address their security issues. Among the various security issues, this paper focuses on trust management in MIoT. We analyze existing work on trust management to see whether it is suitable for MIoT systems, then identify potential issues and discuss the challenges of trust management in MIoT.