NCC Feature Matching Optimized Algorithm Based on Constraint Fusion
J. Sun, Yuan Liu, Yu Ding, Xinglong Zhu, J. Xi
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492802 | 2018 IEEE 3rd International Conference on Image, Vision and Computing (ICIVC)
In this paper, a binocular stereo vision three-dimensional (3D) reconstruction algorithm is proposed. To reduce the computation of feature extraction, candidate corner points are selected first, and a search area is established around each of them. The scale-invariant feature transform (SIFT) algorithm is then used to extract corner points within these areas. During stereo matching, the rough matching point pairs obtained with the normalized cross-correlation (NCC) algorithm are refined by feature constraints to obtain precise matching point pairs, with which the final experiment realizes 3D reconstruction of objects.
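The abstract does not state the NCC formula; as background, a minimal sketch of the patch-similarity score that NCC-style rough matching relies on (the function name and patch shapes are illustrative, not the paper's code):

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Normalized cross-correlation between two equal-size patches.
    Returns a value in [-1, 1]; 1 means a perfect linear match."""
    a = patch_a.astype(float) - patch_a.mean()
    b = patch_b.astype(float) - patch_b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

In a rough-matching stage, this score is evaluated between a patch around a corner in the left image and candidate patches in the right image, keeping the highest-scoring pair.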
Overview on Moving Target Network Defense
Xuan Zhou, Yuliang Lu, Yongjie Wang, Xuehu Yan
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492800
Moving Target Defense (MTD) is a research hotspot in the field of network security, and Moving Target Network Defense (MTND) is the implementation of MTD at the network level. Numerous related works have been proposed in the field of MTND. In this paper, we focus on the scope and area of MTND and systematically present recent representative progress from four aspects: IP address and port mutation, route mutation, fingerprint mutation, and multiple mutation. We also put forward future development directions and render several new perspectives and elucidations on MTND.
Partitioned Logarithmic Tone Mapping Algorithm with Detail Compensation
Jialin Liu, Y. Liu, Weihua Liu, Tingge Zhu
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492770
We present a partitioned logarithmic tone mapping algorithm with detail compensation. First, the high dynamic range (HDR) image is divided into a base layer and a detail layer. Then, different adaptive logarithmic functions are proposed for the low-, medium-, and high-luminance regions of the base layer. Finally, a pixel-level fusion algorithm is used to eliminate the boundary effect between regions. We also propose an adaptive function to adjust the detail layer. Experimental results show that our algorithm effectively compresses the dynamic range while producing richer image detail and higher color saturation after tone mapping.
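The per-region adaptive logarithmic functions are not specified in the abstract; as a sketch of the family of curves involved, here is a Drago-style adaptive logarithmic mapping (the `bias` parameter and the exact formula are assumptions, not the paper's region-specific functions):

```python
import numpy as np

def drago_tonemap(lum, bias=0.85):
    """Adaptive logarithmic tone mapping (Drago-style sketch).
    lum: HDR luminance array (positive); returns display luminance in [0, 1].
    The bias term varies the log base per pixel, compressing highlights
    more strongly than shadows."""
    lwmax = lum.max()
    exponent = np.log(bias) / np.log(0.5)
    denom = np.log10(1.0 + lwmax)
    return (1.0 / denom) * np.log(1.0 + lum) / np.log(2.0 + 8.0 * (lum / lwmax) ** exponent)
```

A partitioned scheme in the spirit of the paper would apply differently parameterized curves to low-, medium-, and high-luminance regions and blend the results at region boundaries.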
An Extension of Temporal Mechanism for Graph Grammar
Zhan Shi, Jiande Zhang, Tingting Zhang, Xiaoqin Zeng, Dawei Li
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492900
Graph grammars are a rigorous but intuitive way to define and handle graph languages. To tackle time-related issues, this paper proposes a new temporal extension of the existing Edge-based Graph Grammar (EGG), covering grammatical specifications, productions, and operations. Formal definitions of the temporal mechanism are provided first. Then, a new parsing algorithm is presented to check the structural correctness of a given graph and, when needed, to analyze the timing of operations.
A Fast Iterative Shrinkage Thresholding Algorithm for Single Particle Reconstruction of Cryo-EM
Huan Pan, You Wei-wen, T. Zeng
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492730
Single particle reconstruction (SPR) from cryo-electron microscopy (cryo-EM) is an emerging technique for determining the three-dimensional (3D) structure of macromolecules. A major challenge in SPR is establishing a reliable ab initio 3D model from two-dimensional projection images. In this paper, we introduce a fast proximal gradient method, the fast iterative shrinkage-thresholding algorithm (FISTA), to solve the corresponding optimization problem of SPR. Numerical experiments with simulated images demonstrate that the proposed method significantly reduces the estimation error and improves reconstruction quality.
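FISTA itself is standard; a self-contained sketch for a generic l1-regularized least-squares problem (the paper's actual cryo-EM data term is not given in the abstract, so a dense matrix `A` stands in for the projection operator):

```python
import numpy as np

def fista(A, b, lam, n_iter=200):
    """FISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1 (generic sketch)."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    y = x.copy()
    t = 1.0
    for _ in range(n_iter):
        g = y - A.T @ (A @ y - b) / L        # gradient step on the smooth term
        x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step is what distinguishes FISTA from plain ISTA, improving the convergence rate from O(1/k) to O(1/k^2).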
Application of the Generalized Demodulation Time-Frequency Analysis Method to Vibration Signals Under Varying Speed
Yan Li, Liping Du
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492762
Instantaneous frequency is a primary parameter for analyzing vibration signals of rolling bearings under harsh working conditions, carrying rich information beyond the rotating speed of the shaft. Such vibration signals are non-stationary and multi-component. Generalized demodulation is a time-frequency analysis method well suited to such signals: its novelty lies in transforming a signal whose time-frequency distribution follows curved paths into one whose distribution follows linear paths parallel to the time axis. In this paper, to deal with vibration signals under variable-speed conditions, generalized demodulation together with a band-pass filter is used to separate the component of interest from the original vibration signal. The analysis results demonstrate the validity of the proposed approach.
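The core generalized-demodulation step can be illustrated on a synthetic chirp: multiplying by exp(-j2*pi*g(t)), with g(t) matching the curved part of the phase, maps a linearly rising instantaneous frequency onto a constant line that a band-pass filter could then isolate (the signal parameters below are invented for illustration):

```python
import numpy as np

# Chirp whose instantaneous frequency rises linearly: f(t) = f0 + k*t
fs, dur = 1000.0, 1.0
t = np.arange(0.0, dur, 1.0 / fs)
f0, k = 50.0, 40.0
sig = np.cos(2.0 * np.pi * (f0 * t + 0.5 * k * t ** 2))

# Generalized demodulation: multiply by exp(-j2*pi*g(t)) with g(t) = 0.5*k*t^2,
# flattening the curved time-frequency path to a constant line at f0.
demod = sig * np.exp(-2j * np.pi * (0.5 * k * t ** 2))

# The spectrum of the demodulated signal now peaks at the fixed frequency f0.
spec = np.fft.fft(demod)
freqs = np.fft.fftfreq(len(t), 1.0 / fs)
peak_hz = freqs[np.argmax(np.abs(spec))]
```

After isolating the flattened component with a band-pass filter, the inverse demodulation exp(+j2*pi*g(t)) restores its original time-frequency path.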
An Intelligent Composite Pose Estimation Algorithm Based on 3D Multi-View Templates
L. Yaxin, Teng Yiqian, Zhong Ming
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492773
For service robots, intelligent grasping is a core step in accomplishing many household tasks, and estimating the spatial pose of the target object is a prerequisite for calculating the grasping pose of the manipulator. This paper proposes a composite algorithm that estimates the pose of a target from templates obtained from multiple views. On the premise of successful grasping, household items are divided into two categories according to the pose accuracy they demand, and a different estimation algorithm is used for each. For objects demanding high pose accuracy, an improved pose estimation algorithm is proposed that combines a template-selection method based on the viewpoint feature histogram (VFH) with key-point-based point cloud registration. Finally, the whole pose estimation algorithm is evaluated in grasping experiments. The results indicate that with templates extracted from only 12 views, the grasping success rate exceeds 90%, and the average estimation times for the two categories of objects are 254.9 ms and 984.2 ms, respectively. In conclusion, the algorithm meets both the accuracy and the speed requirements of intelligent grasping based on sparse multi-view templates.
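The VFH-based template selection is not detailed in the abstract; a minimal sketch of the underlying idea is to pick the stored view template whose feature histogram best matches the query's (the chi-squared distance and all names here are assumptions; any histogram metric would serve):

```python
import numpy as np

def nearest_template(query_hist, template_hists):
    """Select the multi-view template whose (VFH-like) feature histogram
    is closest to the query object's histogram, by chi-squared distance.
    Returns the index of the best-matching template."""
    def chi2(p, q):
        denom = p + q
        mask = denom > 0                       # avoid division by zero on empty bins
        return 0.5 * np.sum((p[mask] - q[mask]) ** 2 / denom[mask])
    dists = [chi2(query_hist, h) for h in template_hists]
    return int(np.argmin(dists))
```

The selected template would then seed a key-point-based point cloud registration step to refine the pose.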
Finger Vein Recognition Based on Feature Point Distance
Jiacun Wu, Dongzhi He
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492806
In recent years, finger vein recognition has been favored by more and more researchers because of its high recognition accuracy, security, and convenience of collection, but rotation of the finger reduces recognition performance. This paper first corrects the rotation of the collected images using the smallest circumscribed rectangle, then extracts the region of interest according to the location of the finger joints, and extracts vein features with the Niblack algorithm. Finally, the intersection points and endpoints of the veins are identified, and a modified Hausdorff distance (MHD) algorithm is used for matching. Experiments show that the average rotation-correction time and vein-feature extraction time per image are 8 ms and 146 ms, respectively. Recognition accuracy is 94.12% without rotation correction and 97.21% with it, and the algorithm is robust to the rotation angle. It can be concluded that the algorithm offers clear advantages in running speed and matching precision.
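The modified Hausdorff distance (Dubuisson and Jain's formulation, which MHD commonly denotes) is compact enough to state directly; the point sets here stand in for the extracted vein endpoints and crossings:

```python
import numpy as np

def mhd(points_a, points_b):
    """Modified Hausdorff distance between two 2D point sets:
    the larger of the two directed mean nearest-neighbor distances.
    Using the mean instead of the max makes it robust to outlier points."""
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())
```

Matching then reduces to comparing the MHD between a probe's feature points and each enrolled template against a decision threshold.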
Gaussian Noise Filtering Using Pulse-Coupled Neural Networks
Ke Liu, Keming Long, Baozhen Ma, Jing Yang
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492782
Various kinds of noise are often introduced while collecting or transmitting images, especially in a multi-image-sensor network, and noise strongly affects subsequent image processing; Gaussian noise is common in such systems. To filter Gaussian noise better, a neighborhood gray-level difference weight matrix is proposed and applied to the pulse-coupled neural network (PCNN). The matrix serves as the coupling-connection matrix of the PCNN and is determined by the related constraint relationship. At the pixel level, the weight matrix adaptively adjusts the gray level of noisy pixels at the center of each neighborhood and improves the correlation of gray levels within the neighborhood. At the macro level, introducing the weight matrix converts image denoising into a two-dimensional convolution: once the initial conditions are determined, the operation can be parallelized, which greatly improves the efficiency of the algorithm. These advantages also allow the proposed algorithm to serve as a front-end denoising module for CNNs and other networks. Experiments show that the algorithm denoises effectively, especially under higher-variance Gaussian noise.
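The exact weight-matrix construction is not given in the abstract; a minimal sketch of the idea as described, weighting each 3x3 neighbor by its gray-level difference to the center pixel (the Gaussian weight form and `sigma` are assumptions, and the PCNN dynamics are omitted):

```python
import numpy as np

def gray_diff_filter(img, sigma=20.0):
    """3x3 neighborhood filter weighted by gray-level difference to the
    center pixel (a range-filter sketch of the paper's weight-matrix idea):
    neighbors with similar gray levels get large weights, outliers small ones."""
    img = img.astype(float)
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    norm = np.zeros((h, w))
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            nb = pad[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]  # shifted neighbor view
            wgt = np.exp(-((nb - img) ** 2) / (2.0 * sigma ** 2))
            acc += wgt * nb
            norm += wgt
    return acc / norm
```

Because every pixel's output depends only on its fixed 3x3 neighborhood, the whole pass is a windowed operation that parallelizes trivially, which is the macro-level point the abstract makes.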
An Automatic Liver Segmentation Algorithm for CT Images: U-Net with Separated Paths of Feature Extraction
Lu Zhang, Li Xu
Pub Date: 2018-06-01 | DOI: 10.1109/ICIVC.2018.8492721
In this paper, a fully convolutional neural network based on U-net is proposed to segment the liver in CT images. Two modifications are made to the original U-net structure. First, an extra path is added so that global features and detail features are extracted separately. Second, the number of convolutional channels in the original contraction path, the original expansion path, and the new path is reduced. These two modifications speed up training and improve the efficiency of feature extraction by the convolution kernels. The segmentation results before and after modification are then compared in terms of recall and precision to verify that the modified network reaches, and even exceeds, the precision of the original network. Finally, the paper analyzes why the network maintains a good segmentation effect and summarizes the application prospects of the modified network.
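The recall and precision rates used for evaluation can be computed per image from the predicted and ground-truth binary masks; a small helper (the function and argument names are illustrative):

```python
import numpy as np

def recall_precision(pred, truth):
    """Recall and precision of a binary segmentation mask.
    recall    = TP / (all positive ground-truth pixels)
    precision = TP / (all positive predicted pixels)"""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    recall = tp / truth.sum() if truth.sum() else 0.0
    precision = tp / pred.sum() if pred.sum() else 0.0
    return float(recall), float(precision)
```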