Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779940
A. Moeini, K. Faez, Hosein Moeini
In this paper, we propose an efficient method for reconstructing the 3D model of a human face from a single 2D face image, robust to a variety of facial expressions, using the Deformable Generic Elastic Model (D-GEM). We extend the Generic Elastic Model (GEM) approach by combining it with statistical information about the human face, deforming the generic depth model according to distances measured around the lips. In particular, we demonstrate that D-GEM approximates the 3D shape of the input face image more accurately, achieving higher-quality 3D face modeling and reconstruction that is more robust to facial expressions than the original GEM and the Gender and Ethnicity GEM (GE-GEM) approaches. The method has been tested on an available 3D face database, demonstrating its accuracy and robustness compared to GEM and GE-GEM under a variety of imaging conditions, including facial expression, gender, and ethnicity.
Title: "Facial expression invariant 3D face reconstruction from a single image using Deformable Generic Elastic Models". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779991
Maryam Shamqoli, H. Khosravi
Document images produced by scanners or digital cameras usually suffer from two main distortions, geometric and photometric, and both deteriorate the performance of OCR systems. In this paper, we present a novel method that compensates for undesirable geometric distortions with the aim of improving OCR results. Our methodology is based on a low-cost transformation that maps curved text lines onto a 2-D rectangular area, combined with text-line detection. Experimental results on several document images indicate the effectiveness of the proposed method.
Title: "Warped document restoration by recovering shape of the surface". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780001
M. Davoodianidaliki, M. Saadatseresht
Visual sensors, active or passive, play an important role in computer vision, and for such sensors calibration is of utmost importance. The Kinect, a recently developed sensor intended as a Natural User Interface, is being utilized in many fields, especially computer vision. Besides other sensors, this integrated system contains two visual sensors, one active and one passive, which demand calibration. Among calibration methods, image-based calibration for data-fusion purposes has the lowest computational cost and can be quite simple and precise. In this study, two methods are proposed: a physical interior distortion model and an eight-parameter registration equation. Besides the computed parameters and their precision, a table of distortion values is introduced that can be used at the registration level. Finally, to evaluate the proposed method, a simple registration of processed data is performed and the results are discussed.
Title: "Calibrate kinect to use in computer vision, simplified and precise". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
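The eight-parameter registration equation mentioned in the abstract is presumably the planar projective (homography) model, which has exactly eight free parameters. As an illustration only (the paper's exact formulation is not given here), a least-squares fit of such a model from matched point pairs can be sketched as:

```python
import numpy as np

def fit_projective_8param(src, dst):
    """Least-squares fit of the 8-parameter planar projective model
        x' = (a*x + b*y + c) / (g*x + h*y + 1)
        y' = (d*x + e*y + f) / (g*x + h*y + 1)
    from matched point pairs src -> dst (at least 4 correspondences)."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        # Each correspondence contributes two linear equations in (a..h).
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    params, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return params  # a, b, c, d, e, f, g, h

def apply_projective(params, pt):
    """Map a point through the fitted 8-parameter transform."""
    a, b, c, d, e, f, g, h = params
    x, y = pt
    w = g * x + h * y + 1.0
    return ((a * x + b * y + c) / w, (d * x + e * y + f) / w)
```

With exact (noise-free) correspondences the fit recovers the parameters exactly; with noisy sensor data the least-squares solution minimizes the algebraic residual.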
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780008
F. Ahmadi, M. Sigari, M. Shiri
This paper presents a color image classification method using a rank-based ensemble classifier. We use color histograms in different color spaces and Gabor wavelets to extract color and texture features, respectively. These features are classified by two classifiers: Nearest Neighbor (NN) and Multi-Layer Perceptron (MLP). In the proposed approach, each feature set is classified by each classifier to generate a rank list of length three, so there is one rank list per combination of feature set and classifier. Each rank list is an ordered list of the class labels to which the classifier believes the input image belongs, in order of priority. To combine the rank lists, simple and weighted majority votes are used. Experiments show that the proposed system with a weighted majority vote achieves a recall of 86.2% and a precision of 86.16%, outperforming comparable systems.
Title: "A rank based ensemble classifier for image classification using color and texture features". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
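The weighted majority vote over rank lists can be illustrated with a Borda-style count. The paper does not specify its exact scoring, so the rank-to-score mapping below (3/2/1 points for ranks 1-3) and the per-classifier weights are assumptions:

```python
from collections import defaultdict

def weighted_rank_vote(rank_lists, weights=None):
    """Fuse top-3 rank lists from several classifiers.
    Each rank list orders class labels by priority; rank 1 earns 3 points,
    rank 2 earns 2, rank 3 earns 1 (a Borda-style count), each scaled by
    an optional per-classifier weight. Returns the winning label."""
    if weights is None:
        weights = [1.0] * len(rank_lists)  # simple (unweighted) majority vote
    scores = defaultdict(float)
    for rlist, w in zip(rank_lists, weights):
        for rank, label in enumerate(rlist):
            scores[label] += w * (len(rlist) - rank)
    return max(scores, key=scores.get)
```

For example, three classifiers voting `["cat","dog","bird"]`, `["dog","cat","fish"]`, and `["cat","fish","dog"]` elect `cat` under equal weights, but down-weighting the first and third classifiers can flip the decision to `dog`.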
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779992
Najme Hadibarhaghtalab, Z. Azimifar
Successful approaches to human action recognition use local space-time features, either hand-designed or learned. However, these methods need a sound technique for encoding the local features into a global representation of the video. To this end, some methods use K-means vector quantization to histogram each video as a bag of words. Pooling is another way to build a global representation of an image: it aggregates local image features over image neighborhoods. In this paper, we extend pooling to video with a method called 3D pooling, which represents each video by concatenating the pooled feature vectors obtained from 8 equal regions of the video. We use stacked convolutional ISA as the local feature extractor. We evaluated our method on the KTH dataset and obtained our best result using max pooling, improving on the performance of widely used earlier methods.
Title: "3D pooling on local space-time features for human action recognition". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
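A minimal sketch of the 3D pooling step, assuming the 8 equal regions come from halving the video volume along time, height, and width, and that each cell of the volume holds a D-dimensional local feature vector (these layout details are assumptions, not taken from the paper):

```python
import numpy as np

def pool_3d(features, mode="max"):
    """Pool a (T, H, W, D) volume of local feature vectors over the 8
    regions obtained by halving time, height, and width, then concatenate
    the 8 pooled vectors into one global descriptor of length 8*D.
    Dimensions are assumed even so the regions are equal."""
    T, H, W, D = features.shape
    halves = lambda n: [(0, n // 2), (n // 2, n)]
    pooled = []
    for t0, t1 in halves(T):
        for h0, h1 in halves(H):
            for w0, w1 in halves(W):
                region = features[t0:t1, h0:h1, w0:w1].reshape(-1, D)
                pooled.append(region.max(0) if mode == "max" else region.mean(0))
    return np.concatenate(pooled)
```

Max pooling keeps the strongest local response per region, which matches the best-performing configuration reported in the abstract; mean pooling is the obvious alternative.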
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780020
Zahra Pakdaman, S. Saryazdi
This paper presents a reversible watermarking scheme based on the reversible Hadamard transform. In the proposed method, the watermark is embedded using the prediction error of the Hadamard coefficients. To achieve a more accurate prediction, a Gravitational Search Algorithm (GSA) is used to optimize the prediction coefficients. The proposed method does not need a location map, which increases both the capacity and the quality of the watermarked image. To evaluate the performance of the proposed method, a comparative experiment with several well-known reversible methods is performed. The obtained results confirm the efficiency of the proposed method.
Title: "An optimal prediction based reversible image watermarking in Hadamard domain". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
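The reversibility of such a scheme rests on the Hadamard transform being exactly invertible in integer arithmetic (since H·H = n·I for an n×n Hadamard matrix). A sketch of just this property, with the prediction-error embedding and GSA optimization steps omitted:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of the n x n Hadamard matrix (n a power of 2)."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def hadamard_2d(block):
    """Unnormalised 2-D Hadamard transform of a square integer block."""
    H = hadamard(block.shape[0])
    return H @ block @ H

def inverse_hadamard_2d(coeffs):
    """Exact integer inverse: H (H B H) H = n^2 * B, so divide by n^2."""
    n = coeffs.shape[0]
    H = hadamard(n)
    return (H @ coeffs @ H) // (n * n)
```

Because the round trip is bit-exact on integer pixel blocks, any modification of the coefficients that can later be undone (here, the prediction-error expansion) yields a fully reversible watermark.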
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6780000
H. Kaveh, M. Moin, F. Razzazi
Steganography is the science and art of communicating secret data in an appropriate multimedia cover. Three-dimensional (3D) meshes have seen growing use in industrial, medical, and entertainment applications over the last decades. In this paper, a high-capacity, very low-distortion 3D-mesh steganography scheme based on the novel directional N-dimensional Surfacelet Transform is proposed. Experimental results show that the distortion of the cover model is very small. This approach provides much higher hiding capacity and lower distortion than existing transform-domain approaches, while satisfying the main steganographic requirements on 3D models.
Title: "A novel steganography approach for 3D polygonal meshes using Surfacelet Transform". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779994
Ali Ghafari-Beranghar, E. Kabir, Kaveh Kangarloo
Stroke width is an important and stable feature for describing text in document images. In this paper, we propose a method for finding the variety of stroke widths in city map images. Since graphics lines and text labels usually overlap in city maps, finding the stroke width in such images is difficult; moreover, text is printed in a variety of widths. Knowing the major text stroke widths is useful prior knowledge for map processing tasks such as separating text from graphics lines. In the proposed method, we find candidate connected components that carry significant stroke-width information, then locally assign a minimum stroke width to each pixel. A stroke width is determined for each candidate component, and by clustering the component stroke widths we find the major ones. Experimental results on several varieties of city maps are reported and shown to be promising.
Title: "Finding text Stroke width variety in city maps". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
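A crude proxy for stroke width is the distribution of foreground run lengths in a binarized image. The paper's actual method assigns a local minimum stroke width per pixel and clusters per-component widths, but the following sketch (a simplification, not the authors' algorithm) conveys the idea of recovering dominant widths from width statistics:

```python
from collections import Counter

def stroke_widths(binary):
    """Histogram horizontal run lengths of foreground pixels in a binary
    image (a list of 0/1 rows). The most frequent short run lengths
    approximate the dominant text stroke widths."""
    counts = Counter()
    for row in binary:
        run = 0
        for px in list(row) + [0]:  # trailing sentinel flushes the last run
            if px:
                run += 1
            elif run:
                counts[run] += 1
                run = 0
    return counts.most_common()
```

On a real map image one would run this per connected component (and along both axes) before clustering the widths, so that long graphics lines do not swamp the text statistics.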
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779945
Maryam Karimi, Majid Mohrekesh, Shekoofeh Azizi, S. Samavi
The extreme growth in the use of digital media has created a need for techniques that protect the copyright of digital content. One approach is to embed an invisible signal, known as a digital watermark, in the image. One of the most important properties of an effective watermarking scheme is transparency: a good watermarking method should be invisible, such that the human eye cannot distinguish the watermarked image from the original. On the other hand, a watermarked image should be robust against intentional and unintentional attacks. There is an inherent tradeoff between transparency and robustness, and it is desirable to keep both as high as possible. In this paper, we propose the use of artificial neural networks (ANNs) to predict the most suitable areas of an image for embedding. The ANN is trained on a human visual system (HVS) model, and only blocks that produce the least perceivable change are selected. This block selection method can aid many existing embedding techniques. We have implemented our block selection method on top of a simple watermarking method, and our results show a noticeable improvement in imperceptibility compared to other methods.
Title: "Transparent watermarking based on psychovisual properties using neural networks". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
Pub Date: 2013-09-01 | DOI: 10.1109/IRANIANMVIP.2013.6779971
R. Amandi, Mitra Bayat, Kobra Minakhani, Hamidreza Mirloo, M. Bazarghan
In this paper we introduce an algorithm for analyzing the human iris; the long-range iris recognition software was developed to be more user-friendly and to provide an economical means of identification. Our algorithm centers on pupil detection, and by using estimated ranges we discard the other regions to create a more efficient search space. The final decision on the iris region is provided by the Hough transform. We use a Gaussian method to create a refined mask, which plays an important role in the matching process. To extract efficient features from the iris regions and match them, we use the SIFT algorithm. Results on the CASIA-V4 at-Distance dataset show a verification rate of 93%.
Title: "Long distance iris recognition". Venue: 2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP).
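Pupil and iris boundary detection via the Hough transform can be sketched as a circular Hough accumulator over edge pixels. The radii, angular sampling, and edge-point input below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def hough_circle(edge_points, shape, radii):
    """Vote in a (radius, y, x) accumulator for candidate circle centres.
    edge_points: iterable of (y, x) edge pixels; shape: image (H, W);
    radii: candidate radii. Returns the best (radius, (cy, cx))."""
    H, W = shape
    acc = np.zeros((len(radii), H, W), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 90, endpoint=False)
    for ri, r in enumerate(radii):
        # Offsets from an edge point back to all possible centres at radius r.
        dy = np.round(r * np.sin(thetas)).astype(int)
        dx = np.round(r * np.cos(thetas)).astype(int)
        for y, x in edge_points:
            cy, cx = y - dy, x - dx
            ok = (cy >= 0) & (cy < H) & (cx >= 0) & (cx < W)
            np.add.at(acc, (ri, cy[ok], cx[ok]), 1)
    ri, cy, cx = np.unravel_index(acc.argmax(), acc.shape)
    return radii[ri], (cy, cx)
```

In a full pipeline the edge points would come from an edge detector restricted to the estimated pupil region, which is exactly the search-space reduction the abstract describes.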