Title: An occlusion compensation model for improving the reconstruction quality of light field
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287094
Jinjie Bi, Weiyan Chen, Changjian Zhu, Hong Zhang, Min Tan
Occlusion lack compensation (OLC) is a data acquisition and novel-view rendering strategy that optimizes multiplexing gain for light field rendering (LFR). While the achievable OLC is much higher than previously thought possible, the improvement comes at the cost of requiring more scene information; learning-based methods can capture this more detailed scene information, including geometry, texture, and depth. In this paper, we develop an occlusion compensation (OCC) model based on a restricted Boltzmann machine (RBM) to compensate for the scene information lost to occlusion. We show that occlusion causes a loss of captured scene information, which in turn degrades view rendering quality. The OCC model learns to estimate and compensate for the missing information at occlusion edges. We present experimental results that demonstrate the performance of the OCC model with analog training, verify our theoretical analysis, and extend our conclusions on the optimal rendering quality of light fields.
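The paper's OCC model builds on the RBM. The sketch below shows the standard RBM building block trained with one step of contrastive divergence (CD-1), which learns to reconstruct its visible units and can therefore fill in missing (occluded) entries; the layer sizes, learning rate, and CD-1 choice are illustrative assumptions, not the authors' configuration.

```python
# Minimal Bernoulli RBM trained with one-step contrastive divergence (CD-1).
# A generic sketch of the building block, not the authors' OCC model.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.01):
        self.W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)  # visible bias
        self.b_h = np.zeros(n_hidden)   # hidden bias
        self.lr = lr

    def cd1_step(self, v0):
        """One CD-1 update on a batch v0 of shape (batch, n_visible)."""
        # Positive phase: hidden activations driven by the data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one Gibbs step back to a reconstruction.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # Approximate gradient of the log-likelihood and update.
        n = v0.shape[0]
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / n
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)
        return p_v1  # reconstruction; occluded pixels can be read from it
```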
{"title":"An occlusion compensation model for improving the reconstruction quality of light field","authors":"Jinjie Bi, Weiyan Chen, Changjian Zhu, Hong Zhang, Min Tan","doi":"10.1109/MMSP48831.2020.9287094","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287094","url":null,"abstract":"Occlusion lack compensation (OLC) is a multi-plexing gain optimization data acquisition and novel views rendering strategy for light field rendering (LFR). While the achieved OLC is much higher than previously thought possible, the improvement comes at the cost of requiring more scene information. This can capture more detailed scene information, including geometric information, texture information and depth information, by learning and training methods. In this paper, we develop an occlusion compensation (OCC) model based on restricted boltzmann machine (RBM) to compensate for lack scene information caused by occlusion. We show that occlusion will cause the lack of captured scene information, which will lead to the decline of view rendering quality. The OCC model can estimate and compensate the lack information of occlusion edge by learning. We present experimental results to demonstrate the performance of OCC model with analog training, verify our theoretical analysis, and extend our conclusions on optimal rendering quality of light field.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115574942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Auto-Encoder based Structured Dictionary Learning
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287153
Deyin Liu, Yuan Wu, Liangchen Liu, Qichang Hu, Lin Qi
Dictionary learning and deep learning are two popular representation learning paradigms that can be combined to boost classification. However, existing combination methods often learn multiple dictionaries embedded in a cascade of layers, along with a specialized classifier, which may inadvertently lead to overfitting and high computational cost. In this paper, we present a novel deep auto-encoding architecture that learns only a single dictionary for classification. To endow the dictionary with discriminative power, we construct it from class-specific sub-dictionaries and introduce supervision by imposing category constraints. The proposed framework is inspired by a sparse optimization method, the Iterative Shrinkage Thresholding Algorithm (ISTA), which characterizes the learning process as forward-propagation-based optimization with respect to the dictionary only, dramatically reducing the number of parameters to learn and the computational cost. Extensive experiments demonstrate the effectiveness of our method in image classification.
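For reference, the ISTA iteration such architectures unroll solves the sparse coding problem argmin_z 0.5*||x - Dz||^2 + lam*||z||_1 by alternating a gradient step with soft-thresholding. The dictionary, lam, and iteration count below are illustrative assumptions, not the paper's learned values.

```python
# Plain ISTA for sparse coding: argmin_z 0.5*||x - D@z||^2 + lam*||z||_1.
import numpy as np

def soft_threshold(u, t):
    return np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

def ista(x, D, lam=0.1, n_iter=50):
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the data term
    z = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ z - x)       # gradient of 0.5*||x - D@z||^2
        z = soft_threshold(z - grad / L, lam / L)
    return z
```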
{"title":"Auto-Encoder based Structured Dictinoary Learning","authors":"Deyin Liu, Yuan Wu, Liangchen Liu, Qichang Hu, Lin Qi","doi":"10.1109/MMSP48831.2020.9287153","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287153","url":null,"abstract":"Dictionary learning and deep learning are two popular representation learning paradigms, which can be combined to boost the classification task. However, existing combination methods often learn multiple dictionaries embedded in a cascade of layers, and a specialized classifier accordingly. This may inattentively lead to overfitting and high computational cost. In this paper, we present a novel deep auto-encoding architecture to learn only a dictionary for classification. To empower the dictionary with discrimination, we construct the dictionary with class-specific sub-dictionaries, and introduce supervision by imposing category constraints. The proposed framework is inspired by a sparse optimization method, namely Iterative Shrinkage Thresholding Algorithm, which characterizes the learning process by the forward-propagation based optimization w.r.t the dictionary only, reducing the number of parameters to learn and the computational cost dramatically. Extensive experiments demonstrate the effectiveness of our method in image classification.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"283 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120895950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Variational Bound of Mutual Information for Fairness in Classification
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287139
Zahir Alsulaimawi
Machine learning applications have emerged in many aspects of our lives, such as credit lending, insurance pricing, and employment screening. Consequently, such systems are required to be nondiscriminatory and fair with respect to users' sensitive features, e.g., race, sexual orientation, and religion. To address this issue, this paper develops a minimax adversarial framework, called the features protector (FP) framework, that achieves an information-theoretic trade-off between minimizing the distortion of target data and ensuring that sensitive features have similar distributions. We evaluate the performance of the proposed framework on two real-world datasets. Preliminary empirical evaluation shows that our framework provides decisions that are both accurate and fair.
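A minimax setup of this kind typically alternates between an adversary that tries to recover the sensitive attribute from the learned representation and a main model that predicts the target while fooling the adversary. The sketch below shows that generic training loop; the architectures, loss weight lam, and optimizer settings are assumptions, not the paper's FP instantiation.

```python
# Generic minimax loop: the adversary learns to predict the sensitive
# attribute s from the representation; the encoder/classifier learn to
# predict the target y while fooling the adversary.
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 8))
clf = nn.Linear(8, 2)   # target-label head
adv = nn.Linear(8, 2)   # sensitive-attribute head (the adversary)
opt_main = torch.optim.Adam(list(enc.parameters()) + list(clf.parameters()), lr=1e-3)
opt_adv = torch.optim.Adam(adv.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

def train_step(x, y, s, lam=1.0):
    # 1) Adversary step on a frozen representation.
    z = enc(x).detach()
    loss_adv = ce(adv(z), s)
    opt_adv.zero_grad(); loss_adv.backward(); opt_adv.step()
    # 2) Main step: good target prediction, bad sensitive prediction.
    z = enc(x)
    loss_main = ce(clf(z), y) - lam * ce(adv(z), s)
    opt_main.zero_grad(); loss_main.backward(); opt_main.step()
    return loss_main.item()
```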
{"title":"Variational Bound of Mutual Information for Fairness in Classification","authors":"Zahir Alsulaimawi","doi":"10.1109/MMSP48831.2020.9287139","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287139","url":null,"abstract":"Machine learning applications have emerged in many aspects of our lives, such as for credit lending, insurance rates, and employment applications. Consequently, it is required that such systems be nondiscriminatory and fair in sensitive features user, e.g., race, sexual orientation, and religion. To address this issue, this paper develops a minimax adversarial framework, called features protector (FP) framework, to achieve the information-theoretical trade-off between minimizing distortion of target data and ensuring that sensitive features have similar distributions. We evaluate the performance of the proposed framework on two real-world datasets. Preliminary empirical evaluation shows that our framework provides both accurate and fair decisions.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121542704","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Leveraging Active Perception for Improving Embedding-based Deep Face Recognition
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287085
N. Passalis, A. Tefas
Even though recent advances in deep learning (DL) have led to tremendous improvements on various computer and robotic vision tasks, existing DL approaches suffer from a significant limitation: they typically ignore that robots and cyber-physical systems are capable of interacting with the environment in order to better sense their surroundings. In this work we argue that perceiving the world through physical interaction, i.e., employing active perception, allows both for increasing the accuracy of DL models and for deploying smaller and faster ones. To this end, we propose an active perception-based face recognition approach that simultaneously extracts discriminative embeddings and predicts the direction in which the robot must move to obtain a more discriminative view. To the best of our knowledge, this is the first embedding-based active perception method for deep face recognition. As we experimentally demonstrate, the proposed method leads to significant improvements, increasing face recognition accuracy by up to 9% and allowing for overall smaller and faster models, reducing the number of parameters by over an order of magnitude.
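Conceptually, such a model pairs an embedding head with a head that scores candidate movement directions. The sketch below illustrates this two-headed structure only; the backbone, embedding size, and number of directions are assumptions, not the authors' architecture.

```python
# Two-headed sketch: a shared backbone feeds (a) an L2-normalized face
# embedding and (b) scores over candidate movement directions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ActivePerceptionNet(nn.Module):
    def __init__(self, emb_dim=128, n_directions=8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.embed = nn.Linear(32, emb_dim)            # identity embedding
        self.direction = nn.Linear(32, n_directions)   # where to move next

    def forward(self, x):
        f = self.backbone(x)
        return F.normalize(self.embed(f), dim=1), self.direction(f)

# At inference the robot moves along the argmax of the direction scores
# and re-embeds the face from the new, more discriminative view.
```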
{"title":"Leveraging Active Perception for Improving Embedding-based Deep Face Recognition","authors":"N. Passalis, A. Tefas","doi":"10.1109/MMSP48831.2020.9287085","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287085","url":null,"abstract":"Even though recent advances in deep learning (DL) led to tremendous improvements for various computer and robotic vision tasks, existing DL approaches suffer from a significant limitation: they typically ignore that robots and cyber-physical systems are capable of interacting with the environment in order to better sense their surroundings. In this work we argue that perceiving the world through physical interaction, i.e., employing active perception, allows for both increasing the accuracy of DL models, as well as for deploying smaller and faster models. To this end, we propose an active perception-based face recognition approach, which is capable of simultaneously extracting discriminative embeddings, as well as predicting in which direction the robot must move in order to get a more discriminative view. To the best of our knowledge, we provide the first embedding-based active perception method for deep face recognition. As we experimentally demonstrate, the proposed method can indeed lead to significant improvements, increasing the face recognition accuracy up to 9%, as well as allowing for using overall smaller and faster models, reducing the number of parameters by over one order of magnitude.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"2 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113961970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Triangulation-Based Backward Adaptive Motion Field Subsampling Scheme
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287064
Fabian Brand, Jürgen Seiler, E. Alshina, A. Kaup
Optical flow procedures generate dense motion fields that approximate the true motion. Such fields contain a large amount of data; if a field must be transmitted, its raw size usually exceeds that of the two images it was computed from. In many scenarios, however, it is of interest to transmit a dense motion field efficiently, most prominently in inter prediction for video coding. In this paper we propose a transmission scheme based on subsampling the motion field. Since a field subsampled with a regularly spaced pattern usually yields suboptimal results, we propose an adaptive subsampling algorithm that preferentially samples vectors at positions where changes in motion occur. The subsampling pattern is fully reconstructable without any signaling of position information. We show an average gain of 2.95 dB in end-point error compared to regular subsampling. Furthermore, we show that an additional prediction stage improves the results by another 0.43 dB, for a total gain of 3.38 dB.
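The core idea, sampling densely where the motion changes, can be illustrated as below: a coarse regular grid is augmented with extra samples at the strongest flow discontinuities, and the dense field is recovered by interpolation. Note that the paper derives its pattern backward-adaptively so that no positions need to be signaled; selecting positions directly from the flow gradient here is a simplification for illustration only.

```python
# Toy adaptive subsampling: coarse grid everywhere plus extra samples at
# the strongest flow discontinuities, then linear interpolation back to
# a dense field.
import numpy as np
from scipy.interpolate import griddata

def adaptive_subsample(flow, grid_step=16, n_extra=200):
    h, w, _ = flow.shape
    # Local change in motion: gradient magnitude of both flow components.
    gy0, gx0 = np.gradient(flow[..., 0])
    gy1, gx1 = np.gradient(flow[..., 1])
    change = np.hypot(gy0, gx0) + np.hypot(gy1, gx1)
    # Coarse regular grid as a fallback everywhere.
    ys, xs = np.mgrid[0:h:grid_step, 0:w:grid_step]
    pts = list(zip(ys.ravel(), xs.ravel()))
    # Extra samples where the motion changes most.
    for i in np.argsort(change, axis=None)[-n_extra:]:
        pts.append(np.unravel_index(i, (h, w)))
    pts = np.unique(np.array(pts), axis=0)
    vals = flow[pts[:, 0], pts[:, 1]]
    yy, xx = np.mgrid[0:h, 0:w]
    rec = np.stack([griddata(pts, vals[:, c], (yy, xx),
                             method="linear", fill_value=0.0)
                    for c in range(2)], axis=-1)
    return pts, rec   # sample positions and the dense reconstruction
```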
{"title":"A Triangulation-Based Backward Adaptive Motion Field Subsampling Scheme","authors":"Fabian Brand, Jürgen Seiler, E. Alshina, A. Kaup","doi":"10.1109/MMSP48831.2020.9287064","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287064","url":null,"abstract":"Optical flow procedures are used to generate dense motion fields which approximate true motion. Such fields contain a large amount of data and if we need to transmit such a field, the raw data usually exceeds the raw data of the two images it was computed from. In many scenarios, however, it is of interest to transmit a dense motion field efficiently. Most prominently this is the case in inter prediction for video coding. In this paper we propose a transmission scheme based on subsampling the motion field. Since a field which was subsampled with a regularly spaced pattern usually yields suboptimal results, we propose an adaptive subsampling algorithm that preferably samples vectors at positions where changes in motion occur. The subsampling pattern is fully reconstructable without the need for signaling of position information. We show an average gain of 2.95 dB in average end point error compared to regular subsampling. Furthermore we show that an additional prediction stage can improve the results by an additional 0.43 dB, gaining 3.38 dB in total.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127743370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: MMSP 2020 Author Information Page
Pub Date: 2020-09-21 | DOI: 10.1109/mmsp48831.2020.9287111
Title: Hybrid Motion Magnification based on Same-Frame Optical Flow Computations
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287152
J. A. Lima, C. Miosso, Mylène C. Q. Farias
Motion magnification refers to the amplification of small movements in a video in order to reveal important information about the observed scene. Several motion magnification methods have been proposed in the past, but most of them have the disadvantage of introducing annoying visual artifacts into the video. In this paper, we propose a method that analyzes the optical flow between each original frame and the corresponding motion-magnified frame and then synthesizes a new motion-magnified video by remapping the original video using the generated optical flow map. The proposed approach eliminates the artifacts that appear in Eulerian methods. It is also able to amplify the motion by an extra factor of 2 and to invert the motion direction.
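The synthesis step can be sketched as follows: estimate the flow between the original and magnified frames, scale it, and warp the original frame along the scaled flow. The Farneback flow estimator and the remap-based backward warp below are illustrative choices, not necessarily the authors' exact pipeline; note that a negative factor inverts the motion direction.

```python
# Sketch of the synthesis step: estimate flow between the original frame
# and its motion-magnified counterpart, scale it by alpha, and warp the
# original frame along the scaled flow (alpha < 0 inverts the motion).
import cv2
import numpy as np

def remagnify(frame_orig, frame_magnified, alpha=2.0):
    g0 = cv2.cvtColor(frame_orig, cv2.COLOR_BGR2GRAY)
    g1 = cv2.cvtColor(frame_magnified, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(g0, g1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = g0.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
    # Backward warp: each output pixel pulls from the original frame.
    map_x = xx - alpha * flow[..., 0]
    map_y = yy - alpha * flow[..., 1]
    return cv2.remap(frame_orig, map_x, map_y, cv2.INTER_LINEAR)
```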
{"title":"Hybrid Motion Magnification based on Same-Frame Optical Flow Computations","authors":"J. A. Lima, C. Miosso, Mylène C. Q. Farias","doi":"10.1109/MMSP48831.2020.9287152","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287152","url":null,"abstract":"Motion magnification refers to the ability of amplifying small movements in a video in order to reveal important information about the observed scene. In the past, several motion magnification methods have been proposed, but most of them have the disadvantage of introducing annoying visual artifacts in the video. In this paper, we propose a method that analyses the optical flow between each original frame and the corresponding motion-magnified frame and, then, synthesizes a new motion-magnified video by remapping the original video using the generated optical flow map. The proposed approach is able to eliminate the artifacts that appear in Eulerian methods. Also, it is able to amplify the motion by an extra factor of 2 and to invert the motion direction.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133487119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: A Comparative Analysis of the Time and Energy Demand of Versatile Video Coding and High Efficiency Video Coding Reference Decoders
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287098
Matthias Kränzler, Christian Herglotz, A. Kaup
This paper investigates the decoding energy and decoding time demand of VTM-7.0 in relation to HM-16.20. We present the first detailed comparison of two video codecs in terms of software decoder energy consumption. The evaluation shows that the energy demand of the VTM decoder is increased significantly compared to HM and that the increase depends on the coding configuration. For the random access configuration, we find that the decoding energy increases by over 80% while the decoding time increases by over 70%. Furthermore, results indicate that the energy demand increases by up to 207% when Single Instruction Multiple Data (SIMD) instructions are disabled, which corresponds to the HM implementation style. Measurements reveal that the coding tools MIP, AMVR, TPM, LFNST, and MTS increase the energy efficiency of the decoder. Based on our analysis, we propose a new coding configuration that reduces the energy demand of the VTM decoder by over 17% on average.
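As an illustration of how such software decoder energy measurements can be taken on Linux/Intel machines, the sketch below reads the CPU package energy counter (RAPL) around a decoder run. The decoder command line is a placeholder, not the VTM/HM invocation or measurement setup from the paper.

```python
# Read the CPU package energy counter (RAPL) around a decoder run.
# Counter wrap-around is ignored for brevity.
import subprocess
import time

RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"

def read_energy_uj():
    with open(RAPL) as f:
        return int(f.read())

def measure(cmd):
    e0, t0 = read_energy_uj(), time.perf_counter()
    subprocess.run(cmd, check=True, capture_output=True)
    e1, t1 = read_energy_uj(), time.perf_counter()
    return (e1 - e0) / 1e6, t1 - t0   # joules, seconds

# Hypothetical usage:
# energy_j, secs = measure(["./DecoderApp", "-b", "stream.bin"])
```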
{"title":"A Comparative Analysis of the Time and Energy Demand of Versatile Video Coding and High Efficiency Video Coding Reference Decoders","authors":"Matthias Kränzler, Christian Herglotz, A. Kaup","doi":"10.1109/MMSP48831.2020.9287098","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287098","url":null,"abstract":"This paper investigates the decoding energy and decoding time demand of VTM-7.0 in relation to HM-16.20. We present the first detailed comparison of two video codecs in terms of software decoder energy consumption. The evaluation shows that the energy demand of the VTM decoder is increased significantly compared to HM and that the increase depends on the coding configuration. For the coding configuration randomaccess, we find that the decoding energy is increased by over 80% at a decoding time increase of over 70%. Furthermore, results indicate that the energy demand increases by up to 207% when Single Instruction Multiple Data (SIMD) instructions are disabled, which corresponds to the HM implementation style. By measurements, it is revealed that the coding tools MIP, AMVR, TPM, LFNST, and MTS increase the energy efficiency of the decoder. Furthermore, we propose a new coding configuration based on our analysis, which reduces the energy demand of the VTM decoder by over 17% on average.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133309537","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: VMAF Based Rate-Distortion Optimization for Video Coding
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287114
Sai Deng, Jingning Han, Yaowu Xu
Video Multi-method Assessment Fusion (VMAF) is a machine-learning-based video quality metric. It has been experimentally shown to correlate better with the human visual system than conventional metrics such as peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) in many scenarios, and it has drawn considerable interest as an alternative metric for evaluating perceptual quality. This work proposes a systematic approach to improving video compression performance as measured by VMAF. It comprises multiple components, including a pre-processing stage with complementary automatic filter parameter selection and a modified rate-distortion optimization framework tailored to the VMAF metric. The proposed scheme achieves on average a 37% BD-rate reduction in VMAF compared to a conventional video codec optimized for PSNR.
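Rate-distortion optimization picks, among candidate codings, the one minimizing J = D + lam * R; swapping the distortion term from SSE to a VMAF-derived score is the gist of such a framework. The sketch below assumes a vmaf_score callable supplied by the caller (e.g., wrapping libvmaf); it is not an API from the paper.

```python
# Lagrangian mode decision with a VMAF-derived distortion term: choose
# the candidate minimizing J = D + lam * R.
def choose_mode(candidates, reference, lam, vmaf_score):
    """candidates: iterable of (reconstruction, rate_bits) pairs.
    vmaf_score: assumed hook returning a VMAF score in [0, 100]."""
    best, best_cost = None, float("inf")
    for recon, rate in candidates:
        dist = 100.0 - vmaf_score(reference, recon)  # VMAF: 100 is best
        cost = dist + lam * rate
        if cost < best_cost:
            best, best_cost = recon, cost
    return best
```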
{"title":"VMAF Based Rate-Distortion Optimization for Video Coding","authors":"Sai Deng, Jingning Han, Yaowu Xu","doi":"10.1109/MMSP48831.2020.9287114","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287114","url":null,"abstract":"Video Multi-method Assessment Fusion (VMAF) is a machine-learning based video quality metric. It is experimentally shown to provide higher correlation with human visual system as compared to conventional metrics like peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) in many scenarios and has drawn considerable interest as an alternative metric to evaluate the perceptual quality. This work proposes a systematic approach to improve the video compression performance in VMAF. It is composed of multiple components including a pre-processing stage with a complement automatic filter parameter selection, and a modified rate-distortion optimization framework tailored for VMAF metric. The proposed scheme achieves on average 37% BD-rate reduction in VMAF, as compared to conventional video codec optimized for PSNR.","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"88 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115062581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Use of a deep convolutional neural network to diagnose disease in the rose by means of a photographic image
Pub Date: 2020-09-21 | DOI: 10.1109/MMSP48831.2020.9287081
O. A. Miloserdov, N. S. Ovcharenko, A. Makarenko
The article presents the particulars of developing a plant disease detection system based on the analysis of photographic images by deep convolutional neural networks. An original lightweight neural network architecture is used (only 13,480 trainable parameters), tens to hundreds of times more compact than typical solutions. Real-life field data is used for training and testing, with photographs taken in adverse conditions: variation in hardware quality, angles, lighting conditions, and scales (from macro shots of individual leaf and stem fragments to several rose bushes in one picture), as well as complex, disorienting backgrounds. An adaptive decision-making rule based on Bayes' theorem and Wald's sequential probability ratio test is used to improve the reliability of the results. The following example is provided: detection of disease on the leaves and stems of roses from images taken in the visible spectrum. The authors attained a quality of 90.6% on real-life data (F1 score, single input image, test dataset).
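Wald's SPRT accumulates a log-likelihood ratio over successive observations and stops as soon as it crosses a confidence bound. The sketch below applies it to a stream of per-image CNN probabilities; the Bernoulli likelihood model and the error rates alpha/beta are illustrative assumptions, not the paper's decision rule.

```python
# Wald's sequential probability ratio test over per-image CNN outputs:
# accumulate a log-likelihood ratio until it crosses a bound.
import math

def sprt(prob_stream, p1=0.9, p0=0.1, alpha=0.05, beta=0.05):
    """prob_stream: iterable of per-image 'diseased' probabilities."""
    upper = math.log((1 - beta) / alpha)   # cross -> accept "diseased"
    lower = math.log(beta / (1 - alpha))   # cross -> accept "healthy"
    llr = 0.0
    for p in prob_stream:
        x = 1 if p >= 0.5 else 0           # binarize the CNN decision
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "diseased"
        if llr <= lower:
            return "healthy"
    return "undecided"                     # need more images
```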
{"title":"Use of a deep convolutional neural network to diagnose disease in the rose by means of a photographic image","authors":"O. A. Miloserdov, N. S. Ovcharenko, A. Makarenko","doi":"10.1109/MMSP48831.2020.9287081","DOIUrl":"https://doi.org/10.1109/MMSP48831.2020.9287081","url":null,"abstract":"The article presents particulars of developing a plant disease detection system based on analysis of photo-graphic images by deep convolutional neural networks. A original lightweight neural network architecture is used (only 13 480 trained parameters) that is tens and hundreds of times more compact than typical solutions. Real-life field data is used for training and testing, with photographs taken in adverse conditions: variation in hardware quality, angles, lighting conditions, scales (from macro shots of individual fragments of leaf and stem to several rose bushes in one picture), and complex disorienting backgrounds. An adaptive decision-making rule is used, based on the Bayes’ theorem and Wald’s sequential probability ratio test, in order to improve reliability of the results. A following example is provided: detection of disease on leaves and stems of rose from images taken in the visible spectrum. The authors were able attain the quality of 90.6% on real-life data (F1 score, one input image, test dataset).","PeriodicalId":188283,"journal":{"name":"2020 IEEE 22nd International Workshop on Multimedia Signal Processing (MMSP)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129463878","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}