"Learning to Fly with a Video Generator"
Chia-Chun Chung, Wen-Hsiao Peng, Teng-Hu Cheng, Chin-Feng Yu
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675414
This paper demonstrates a model-based reinforcement learning framework for training a self-flying drone. We implement the Dreamer proposed in prior work as an environment model that responds to the drone's actions by predicting the next video frame as the new state signal; the Dreamer is thus a conditional video sequence generator. This model-based environment avoids time-consuming interactions between the agent and the real environment, greatly speeding up training. This demonstration showcases, for the first time, the application of the Dreamer to train an agent that can finish the racing task in the AirSim simulator.
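The core idea — replacing real simulator steps with predictions from a learned video/world model — can be illustrated with a minimal imagination-rollout loop. The sketch below is not the authors' implementation; `WorldModel` and the toy policy are hypothetical stand-ins for the Dreamer's learned dynamics and actor.

```python
import numpy as np

class WorldModel:
    """Stand-in for the Dreamer-style generator: given the current latent
    state and an action, predict the next latent state and a reward."""
    def __init__(self, state_dim=32, action_dim=4, seed=0):
        rng = np.random.default_rng(seed)
        self.W_s = rng.normal(scale=0.1, size=(state_dim, state_dim))
        self.W_a = rng.normal(scale=0.1, size=(state_dim, action_dim))

    def step(self, state, action):
        next_state = np.tanh(self.W_s @ state + self.W_a @ action)
        reward = float(next_state.mean())   # placeholder reward head
        return next_state, reward

def imagine_rollout(model, policy, init_state, horizon=15):
    """Roll out a trajectory entirely inside the learned model: no calls to
    the real simulator, which is what makes training fast."""
    states, actions, rewards = [init_state], [], []
    state = init_state
    for _ in range(horizon):
        action = policy(state)
        state, reward = model.step(state, action)
        states.append(state)
        actions.append(action)
        rewards.append(reward)
    return states, actions, rewards

model = WorldModel()
policy = lambda s: np.tanh(s[:4])   # toy policy head
traj = imagine_rollout(model, policy, np.zeros(32))
```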
{"title":"Learning to Fly with a Video Generator","authors":"Chia-Chun Chung, Wen-Hsiao Peng, Teng-Hu Cheng, Chin-Feng Yu","doi":"10.1109/VCIP53242.2021.9675414","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675414","url":null,"abstract":"This paper demonstrates a model-based reinforcement learning framework for training a self-flying drone. We implement the Dreamer proposed in a prior work as an environment model that responds to the action taken by the drone by predicting the next video frame as a new state signal. The Dreamer is a conditional video sequence generator. This model-based environment avoids the time-consuming interactions between the agent and the environment, speeding up largely the training process. This demonstration showcases for the first time the application of the Dreamer to train an agent that can finish the racing task in the Airsim simulator.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122381129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Gradient Compression with a Variational Coding Scheme for Federated Learning"
B. Kathariya, Zhu Li, Jianle Chen, G. V. D. Auwera
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675436
Federated Learning (FL), a distributed machine learning architecture, emerged to enable intelligent analysis of the massive data generated at network edge devices. Under this paradigm, a model is jointly learned in parallel at the edge devices without the need to send voluminous data to a central FL server. This not only allows a model to be learned in a feasible duration by reducing network latency but also preserves data privacy. Nonetheless, when thousands of edge devices are attached to an FL framework, limited network resources inevitably impose intolerable training latency. In this work, we propose model-update compression to address this issue in a novel way. The proposed method learns multiple Gaussian distributions that best describe the high-dimensional gradient parameters. At the FL server, the high-dimensional gradients are repopulated from these Gaussian distributions using the likelihood-function parameters communicated to the server. Since the distribution parameters constitute a very small fraction of the values compared with the high-dimensional gradients themselves, the proposed method saves significant uplink bandwidth while preserving model accuracy. Experimental results validate our claim.
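As a rough illustration of the idea (not the paper's variational coding scheme), the sketch below summarizes a gradient tensor with a handful of Gaussian (mean, std, weight) triples on the device and redraws a tensor of the same shape from those parameters on the server. The quantile-bin fitting is a crude stand-in for proper likelihood estimation.

```python
import numpy as np

def compress_update(grad, n_components=4):
    """Device side: describe the gradient with a few Gaussian
    (mean, std, weight) triples instead of sending every value."""
    flat = grad.ravel()
    # quantile bins stand in for a proper EM / variational fit
    edges = np.quantile(flat, np.linspace(0.0, 1.0, n_components + 1))
    params = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        part = flat[(flat >= lo) & (flat <= hi)]
        params.append((part.mean(), part.std() + 1e-8, part.size / flat.size))
    return params, grad.shape

def repopulate(params, shape, seed=0):
    """Server side: redraw a gradient tensor from the communicated
    distribution parameters."""
    rng = np.random.default_rng(seed)
    n = int(np.prod(shape))
    parts = [rng.normal(mu, sd, size=max(1, int(round(w * n))))
             for mu, sd, w in params]
    return np.resize(np.concatenate(parts), n).reshape(shape)

grad = np.random.default_rng(0).normal(size=(64, 32))
params, shape = compress_update(grad)   # a few floats instead of 2048 values
approx = repopulate(params, shape)
```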
{"title":"Gradient Compression with a Variational Coding Scheme for Federated Learning","authors":"B. Kathariya, Zhu Li, Jianle Chen, G. V. D. Auwera","doi":"10.1109/VCIP53242.2021.9675436","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675436","url":null,"abstract":"Federated Learning (FL), a distributed machine learning architecture, emerged to solve the intelligent data analysis on massive data generated at network edge-devices. With this paradigm, a model is jointly learned in parallel at edge-devices without needing to send voluminous data to a central FL server. This not only allows a model to learn in a feasible duration by reducing network latency but also preserves data privacy. Nonetheless, when thousands of edge-devices are attached to an FL framework, limited network resources inevitably impose intolerable training latency. In this work, we propose model-update compression to solve this issue in a very novel way. The proposed method learns multiple Gaussian distributions that best describe the high dimensional gradient parameters. In the FL server, high dimensional gradients are repopulated from Gaussian distributions utilizing likelihood function parameters which are communicated to the server. Since the distribution information parameters constitute a very small percentage of values compared to the high dimensional gradients themselves, our proposed method is able to save significant uplink band-width while preserving the model accuracy. Experimental results validated our claim.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122423129","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Data Transformer for Anomalous Trajectory Detection"
Hsuan-Jen Psan, Wen-Jiin Tsai
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675322
Anomaly detection is an important task in many traffic applications. Methods based on deep learning networks reach high accuracy; however, they typically rely on supervised training with large amounts of annotated data. Considering that anomalous data are not easy to obtain, we present data transformation methods that convert data obtained from one intersection to other intersections, mitigating the effort of collecting training data. The proposed methods are demonstrated on the task of anomalous trajectory detection. A General model and a Universal model are proposed: the former focuses on saving data-collection effort; the latter further reduces the network-training effort. We evaluated the methods on a dataset with trajectories from four intersections in the GTA V virtual world. The experimental results show that, with a significant reduction in data-collection and network-training effort, the proposed anomalous trajectory detection still achieves state-of-the-art accuracy.
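A minimal sketch of the kind of geometric mapping involved: carrying a trajectory from a source intersection's frame into a target intersection's frame via translation, rotation, and scale. The similarity-transform form and its parameters are assumptions for illustration; the paper's actual transformation methods may differ.

```python
import numpy as np

def transform_trajectory(traj, src_center, dst_center, rotation_deg=0.0, scale=1.0):
    """Map an (N, 2) trajectory recorded at a source intersection into the
    frame of a target intersection: translate to the origin, rotate/scale to
    match the target geometry, then translate to the target center."""
    theta = np.deg2rad(rotation_deg)
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return (traj - src_center) @ R.T * scale + dst_center

# example: reuse a trajectory from intersection A as training data for B
traj_a = np.array([[10.0, 5.0], [12.0, 6.5], [14.0, 8.0]])
traj_b = transform_trajectory(traj_a,
                              src_center=np.array([12.0, 6.0]),
                              dst_center=np.array([40.0, 30.0]),
                              rotation_deg=90.0)
```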
{"title":"Data Transformer for Anomalous Trajectory Detection","authors":"Hsuan-Jen Psan, Wen-Jiin Tsai","doi":"10.1109/VCIP53242.2021.9675322","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675322","url":null,"abstract":"Anomaly detection is an important task in many traffic applications. Methods based on deep learning networks reach high accuracy; however, they typically rely on supervised training with large annotated data. Considering that anomalous data are not easy to obtain, we present data transformation methods which convert the data obtained from one intersection to other intersections to mitigate the effort of collecting training data. The proposed methods are demonstrated on the task of anomalous trajectory detection. A General model and a Universal model are proposed. The former focuses on saving data collection effort; the latter further reduces the network training effort. We evaluated the methods on the dataset with trajectories from four intersections in GTA V virtual world. The experimental results show that with significant reduction in data collecting and network training efforts, the proposed anomalous trajectory detection still achieves state-of-the-art accuracy.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"233 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122379890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Message from the General and Technical Program Chairs"
Pub Date: 2021-12-05 | DOI: 10.1109/vcip53242.2021.9675415
"Learn to Look Around: Deep Reinforcement Learning Agent for Video Saliency Prediction"
Yiran Tao, Yaosi Hu, Zhenzhong Chen
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675397
In video saliency prediction, a key issue is how to exploit the temporal contextual information around keyframes. In this paper, a deep reinforcement learning agent for video saliency prediction is proposed, designed to look around adjacent frames and adaptively generate a salient contextual window containing the information most correlated with the keyframe. More specifically, an action set decides step by step whether to expand the window, while a state set and a reward function evaluate the effectiveness of the current window. The deep Q-learning algorithm is used to train the agent to learn a policy that achieves this goal. The proposed agent is plug-and-play and compatible with generic video saliency prediction models. Experimental results on various datasets demonstrate that our method achieves competitive performance.
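A minimal sketch of the expand-or-stop decision loop, assuming a symmetric window around the keyframe, a two-action Q-network, and a toy three-feature state; all of these are illustrative stand-ins for the paper's state set, action set, and network.

```python
import numpy as np

def select_context_window(q_net, frames, key_idx, max_half_width=8, eps=0.1,
                          rng=np.random.default_rng(0)):
    """Grow a symmetric window around the keyframe one step at a time.
    Action 0 = stop, action 1 = expand; the Q-network scores both actions
    from a state summarizing the current window."""
    half = 0
    while half < max_half_width:
        window = frames[max(0, key_idx - half): key_idx + half + 1]
        state = np.array([half / max_half_width, window.mean(), window.std()])
        q = q_net(state)                     # shape (2,): [q_stop, q_expand]
        action = rng.integers(2) if rng.random() < eps else int(np.argmax(q))
        if action == 0:
            break
        half += 1
    return key_idx - half, key_idx + half

# toy Q-network: any callable mapping state -> 2 action values works here
q_net = lambda s: np.array([s[0], 1.0 - s[0]])   # expand early, stop late
frames = np.random.default_rng(1).random((30, 1))
lo, hi = select_context_window(q_net, frames, key_idx=15)
```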
{"title":"Learn to Look Around: Deep Reinforcement Learning Agent for Video Saliency Prediction","authors":"Yiran Tao, Yaosi Hu, Zhenzhong Chen","doi":"10.1109/VCIP53242.2021.9675397","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675397","url":null,"abstract":"In the video saliency prediction task, one of the key issues is the utilization of temporal contextual information of keyframes. In this paper, a deep reinforcement learning agent for video saliency prediction is proposed, designed to look around adjacent frames and adaptively generate a salient contextual window that contains the most correlated information of keyframe for saliency prediction. More specifically, an action set step by step decides whether to expand the window, meanwhile a state set and reward function evaluate the effectiveness of the current window. The deep Q-learning algorithm is followed to train the agent to learn a policy to achieve its goal. The proposed agent can be regarded as plug-and-play which is compatible with generic video saliency prediction models. Experimental results on various datasets demonstrate that our method can achieve an advanced performance.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116920216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Linear Regression Mode of Intra Prediction for Screen Content Coding"
Wei Peng, Hongkui Wang, Li Yu
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675382
High Efficiency Video Coding - Screen Content Coding (HEVC-SCC) inherits the traditional angular intra-prediction technique of HEVC. However, the Planar and DC modes are somewhat redundant for screen content video, which exhibits features such as the absence of sensor noise. Hence, this paper proposes a new intra-prediction mode called the linear regression (LR) mode, which merges the Planar and DC modes into a single mode. The LR mode improves the intra-prediction accuracy for fading regions in screen content video. In addition, by optimizing the construction of the most probable mode (MPM) list, the hit rate of the best mode in the MPM list is improved. Experimental results show that the proposed method achieves a 0.57% BD-rate reduction compared with HM 16.20 + SCM 8.8, while the coding time remains largely unchanged.
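As an illustration of how a single linear-regression mode can subsume both DC (a flat plane) and Planar-like behavior (a tilted plane), the sketch below fits p(x, y) ≈ a·x + b·y + c to the reconstructed reference samples by least squares and evaluates the plane inside the block. The choice of regression variables is an assumption for illustration, not the paper's exact formulation.

```python
import numpy as np

def lr_intra_predict(top_refs, left_refs, block_size):
    """Fit a plane to the reference samples above (row y = -1) and to the
    left (column x = -1) of the block, then evaluate it inside the block.
    Zero slopes degenerate to DC; nonzero slopes mimic Planar gradients."""
    xs, ys, vals = [], [], []
    for x, v in enumerate(top_refs):
        xs.append(x); ys.append(-1); vals.append(v)
    for y, v in enumerate(left_refs):
        xs.append(-1); ys.append(y); vals.append(v)
    A = np.stack([xs, ys, np.ones(len(xs))], axis=1).astype(float)
    coef, *_ = np.linalg.lstsq(A, np.array(vals, dtype=float), rcond=None)
    yy, xx = np.mgrid[0:block_size, 0:block_size]
    return coef[0] * xx + coef[1] * yy + coef[2]

pred = lr_intra_predict(top_refs=[100, 102, 104, 106],
                        left_refs=[100, 101, 102, 103], block_size=4)
```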
{"title":"Linear Regression Mode of Intra Prediction for Screen Content Coding","authors":"Wei Peng, Hongkui Wang, Li Yu","doi":"10.1109/VCIP53242.2021.9675382","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675382","url":null,"abstract":"High Efficiency Video Coding - Screen Content Coding (HEVC-SCC) follows the traditional angular intra prediction technique in HEVC. However, the Planar mode and the DC mode are somewhat repetitive for screen content video with features such as no senor noise. Hence, this paper proposes a new intra prediction mode called linear regression (LR) mode, which combines the Planar mode and the DC mode into one mode. The LR mode improves the prediction accuracy of intra prediction for fading regions in screen content video. Besides, by optimizing the most probable mode (MPM) construction, the hit rate of the best mode in the MPM list is improved. The experimental results show that the proposed method can achieve 0.57% BD-BR reduction compared with HM $16.20+text{SCM} 8.8$, while the coding time remains largely the same.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126094622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Learn A Compression for Objection Detection - VAE with a Bridge"
Yixin Mei, Fan Li, Li Li, Zhu Li
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675387
Recent advances in sensor technology and the wide deployment of visual sensors have led to new applications in which images are compressed not primarily for pixel recovery and human consumption, but for communication with cloud-side machine vision tasks such as classification, identification, detection, and tracking. This opens up a new research direction: learning-based compression that directly optimizes the vision-task loss, and therefore achieves better compression performance than recovering pixels and then running the vision task. In this work, we develop a learning-based compression scheme that learns a compact feature representation and the corresponding bitstream for visual object detection. A Variational Auto-Encoder (VAE) framework is adopted to learn the compact representation, while a bridge network is trained to drive the detection loss. Simulation results demonstrate that this approach achieves a new state of the art in task-driven compression efficiency compared with pixel-recovery approaches, including both learning-based and handcrafted solutions.
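A minimal PyTorch sketch of the architecture pattern, assuming a two-layer analysis transform and a convolutional bridge; the layer sizes, the KL rate proxy, and the way features feed a detector are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class CompressionForDetection(nn.Module):
    """VAE-style analysis transform plus a 'bridge' head that maps the
    compact latent to task features, so a rate term can be traded against
    the detection loss instead of pixel fidelity."""
    def __init__(self, latent_ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 2 * latent_ch, 5, stride=2, padding=2))
        self.bridge = nn.Sequential(   # adapts latents for the detector
            nn.Conv2d(latent_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, padding=1))

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        rate = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()  # KL proxy
        return self.bridge(z), rate

model = CompressionForDetection()
feats, rate = model(torch.randn(1, 3, 128, 128))
# training objective would be: detection_loss(feats, targets) + lambda * rate
```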
{"title":"Learn A Compression for Objection Detection - VAE with a Bridge","authors":"Yixin Mei, Fan Li, Li Li, Zhu Li","doi":"10.1109/VCIP53242.2021.9675387","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675387","url":null,"abstract":"Recent advances in sensor technology and wide deployment of visual sensors lead to a new application whereas compression of images are not mainly for pixel recovery for human consumption, instead it is for communication to cloud side machine vision tasks like classification, identification, detection and tracking. This opens up new research dimensions for a learning based compression that directly optimizes loss function in vision tasks, and therefore achieves better compression performance vis-a-vis the pixel recovery and then performing vision tasks computing. In this work, we developed a learning based compression scheme that learns a compact feature representation and appropriate bitstreams for the task of visual object detection. Variational Auto-Encoder (VAE) framework is adopted for learning a compact representation, while a bridge network is trained to drive the detection loss function. Simulation results demonstrate that this approach is achieving a new state-of-the-art in task driven compression efficiency, compared with pixel recovery approaches, including both learning based and handcrafted solutions.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124856899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Learn to overfit better: finding the important parameters for learned image compression"
Honglei Zhang, Francesco Cricri, H. R. Tavakoli, M. Santamaría, Y. Lam, M. Hannuksela
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675360
For most machine learning systems, overfitting is undesired behavior. However, overfitting a model to a test image or video at inference time is a favorable and effective technique for improving the coding efficiency of learning-based image and video codecs. At the encoding stage, one or more neural networks that are part of the codec are finetuned on the input image or video to achieve better coding performance. The encoder encodes the input content into a content bitstream. If the finetuned neural network is also part of the decoder, the encoder signals the weight update of the finetuned model to the decoder along with the content bitstream. At the decoding stage, the decoder first updates its neural network model according to the received weight update and then proceeds to decode the content bitstream. Since a neural network contains a large number of parameters, compressing the weight update is critical to reducing the bitrate overhead. In this paper, we propose learning-based methods to find the parameters that are most important to overfit, in terms of rate-distortion performance. Based on simple distribution models for the variables in the weight update, we derive two objective functions. By optimizing these objective functions, importance scores for the parameters can be calculated and the important parameters determined. Our experiments on a lossless image compression codec show that the proposed method significantly outperforms a prior-art method in which the overfitted parameters were selected heuristically. Furthermore, our technique improves the compression performance of the state-of-the-art lossless image compression codec by 0.1 bit per pixel.
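A toy sketch of rate-distortion-driven parameter selection: each element of the weight update is scored by a first-order estimate of the distortion it removes minus the rate it would cost to signal, and only positive-scoring updates are kept. The scoring rule and the `lam` trade-off weight are assumptions for illustration, not the paper's derived objective functions.

```python
import numpy as np

def select_overfit_params(weight_update, distortion_grad, bits_per_param=16,
                          lam=1e-4):
    """Keep only the update elements whose estimated distortion gain exceeds
    their rate cost; zero out the rest so the signaled update is sparse."""
    # first-order estimate of distortion reduction from keeping each update
    gain = np.abs(weight_update * distortion_grad)
    score = gain - lam * bits_per_param        # rate-distortion trade-off
    mask = score > 0
    return np.where(mask, weight_update, 0.0), mask

rng = np.random.default_rng(0)
dw = rng.normal(scale=0.01, size=1000)   # finetuning weight update
g = rng.normal(size=1000)                # gradient of distortion w.r.t. weights
sparse_dw, kept = select_overfit_params(dw, g)
print(f"kept {kept.mean():.1%} of updates")
```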
{"title":"Learn to overfit better: finding the important parameters for learned image compression","authors":"Honglei Zhang, Francesco Cricri, H. R. Tavakoli, M. Santamaría, Y. Lam, M. Hannuksela","doi":"10.1109/VCIP53242.2021.9675360","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675360","url":null,"abstract":"For most machine learning systems, overfitting is an undesired behavior. However, overfitting a model to a test image or a video at inference time is a favorable and effective technique to improve the coding efficiency of learning-based image and video codecs. At the encoding stage, one or more neural networks that are part of the codec are finetuned using the input image or video to achieve a better coding performance. The encoder en-codes the input content into a content bitstream. If the finetuned neural network is part (also) of the decoder, the encoder signals the weight update of the finetuned model to the decoder along with the content bitstream. At the decoding stage, the decoder first updates its neural network model according to the received weight update, and then proceeds with decoding the content bitstream. Since a neural network contains a large number of parameters, compressing the weight update is critical to reducing bitrate overhead. In this paper, we propose learning-based methods to find the important parameters to be overfitted, in terms of rate-distortion performance. Based on simple distribution models for variables in the weight update, we derive two objective functions. By optimizing the proposed objective functions, the importance scores of the parameters can be calculated and the important parameters can be determined. Our experiments on lossless image compression codec show that the proposed method significantly outperforms a prior-art method where overfitted parameters were selected based on heuristics. Furthermore, our technique improved the compression performance of the state-of-the-art lossless image compression codec by 0.1 bit per pixel.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"37 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130758778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Human Action Recognition on Raw Depth Maps"
Jacek Trelinski, B. Kwolek
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675349
We propose an effective framework for human action recognition on raw depth maps. We leverage a convolutional autoencoder to extract frame features from sequences of depth maps, which are then fed to a 1D-CNN responsible for embedding action features. A Siamese neural network, trained on a representative single depth map per sequence, extracts features that are then processed by a shapelets algorithm to obtain action features. These features are concatenated with features extracted by a BiLSTM with a TimeDistributed wrapper. Given the individual models learned on these features, we select a subset of the models. We demonstrate experimentally that, on the SYSU 3DHOI dataset, the proposed algorithm considerably outperforms all recent algorithms, including skeleton-based ones.
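The final model-selection step can be illustrated with a greedy forward search over classifiers, keeping a model only while majority-vote accuracy on a validation split improves; the voting rule and greedy strategy are assumptions, as the abstract does not detail the selection procedure.

```python
import numpy as np

def greedy_model_selection(per_model_preds, labels):
    """Greedily add classifiers to the ensemble while majority-vote
    accuracy on a validation split keeps improving."""
    def vote_acc(idx):
        votes = per_model_preds[idx]             # (k, n_samples) class ids
        maj = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, votes)
        return (maj == labels).mean()

    chosen, best = [], 0.0
    remaining = list(range(len(per_model_preds)))
    while remaining:
        acc, m = max((vote_acc(chosen + [m]), m) for m in remaining)
        if acc <= best:
            break
        chosen.append(m)
        remaining.remove(m)
        best = acc
    return chosen, best

preds = np.random.default_rng(0).integers(0, 12, size=(5, 40))  # 5 models
labels = np.random.default_rng(1).integers(0, 12, size=40)
subset, acc = greedy_model_selection(preds, labels)
```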
{"title":"Human Action Recognition on Raw Depth Maps","authors":"Jacek Trelinski, B. Kwolek","doi":"10.1109/VCIP53242.2021.9675349","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675349","url":null,"abstract":"We propose an effective framework for human action recognition on raw depth maps. We leverage a convolutional autoencoder to extract on sequences of deep maps the frame-features that are then fed to a 1D-CNN responsible for embedding action features. A Siamese neural network trained on repre-sentative single depth map for each sequence extracts features, which are then processed by shapelets algorithm to extract action features. These features are then concatenated with features extracted by a BiLSTM with TimeDistributed wrapper. Given the learned individual models on such features we perform a selection of a subset of models. We demonstrate experimentally that on SYSU 3DHOI dataset the proposed algorithm outperforms considerably all recent algorithms including skeleton-based ones.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131748864","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Plug-and-Play Deblurring for Robust Object Detection"
Gerald Xie, Zhu Li, S. Bhattacharyya, A. Mehmood
Pub Date: 2021-12-05 | DOI: 10.1109/VCIP53242.2021.9675437
Object detection is a classic computer vision task that learns the mapping from an image to object bounding boxes and class labels. Many applications of object detection involve images that are prone to degradation at capture time, notably motion blur from a moving camera (e.g., a UAV) or from the object itself. One approach to handling this blur is to apply common deblurring methods to recover clean pixel images and then run the vision task; however, deblurring is typically ill-posed. Moreover, applying these methods adds to the inference time of the vision network, which can hinder performance on video inputs. To address these issues, we propose a novel plug-and-play (PnP) solution that inserts deblurring features into the target vision-task network without the need to retrain the task network. The deblurring features are learned from a classification loss network on blur strength and direction, and the PnP scheme works well with the object detection network at minimal additional inference time, compared with state-of-the-art deblur-then-detect solutions.
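A minimal PyTorch sketch of the plug-and-play pattern: a small blur-feature branch is trained separately (per the abstract, on blur strength/direction classification), then its output is fused into the frozen detector's backbone features. The residual fusion and layer shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeblurAdapter(nn.Module):
    """Plug-and-play branch: trained on a blur strength/direction
    classification task, then fused into the detector's features so the
    detector itself never needs retraining."""
    def __init__(self, feat_ch=256):
        super().__init__()
        self.blur_features = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_ch, 3, stride=2, padding=1))

    def forward(self, image, backbone_feats):
        b = self.blur_features(image)
        # match the spatial size of the frozen backbone's feature map
        b = nn.functional.interpolate(b, size=backbone_feats.shape[-2:])
        return backbone_feats + b   # residual fusion

adapter = DeblurAdapter()
image = torch.randn(1, 3, 256, 256)
feats = torch.randn(1, 256, 32, 32)   # stand-in for frozen detector features
fused = adapter(image, feats)          # fed to the unchanged detection head
```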
{"title":"Plug-and-Play Deblurring for Robust Object Detection","authors":"Gerald Xie, Zhu Li, S. Bhattacharyya, A. Mehmood","doi":"10.1109/VCIP53242.2021.9675437","DOIUrl":"https://doi.org/10.1109/VCIP53242.2021.9675437","url":null,"abstract":"Object detection is a classic computer vision task, which learns the mapping between an image and object bounding boxes + class labels. Many applications of object detection involve images which are prone to degradation at capture time, notably motion blur from a moving camera like UAVs or object itself. One approach to handling this blur involves using common deblurring methods to recover the clean pixel images and then the apply vision task. This task is typically ill-posed. On top of this, application of these methods also add onto the inference time of the vision network, which can hinder performance of video inputs. To address the issues, we propose a novel plug-and-play (PnP) solution that insert deblurring features into the target vision task network without the need to retrain the task network. The deblur features are learned from a classification loss network on blur strength and directions, and the PnP scheme works well with the object detection network with minimum inference time complexity, compared with the state of the art deblur and then detection solution.","PeriodicalId":114062,"journal":{"name":"2021 International Conference on Visual Communications and Image Processing (VCIP)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132015696","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}