Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310121
Zhe Xin, Xiaoguang Cui, Jixiang Zhang, Yiping Yang, Yanqing Wang
Visual place recognition is one of the most challenging problems in computer vision, owing to the wide variety of appearances that real-world places can exhibit. Recently, visual place recognition has become a key component of loop closure detection and topological localization in long-term mobile robot autonomy. In this work, we build a novel visual place recognition pipeline composed of a filtering stage followed by a partial reranking process. In the filtering stage, image-wise features are used to find a small set of potential places. Stable region-wise landmarks are then extracted for more accurate matching in the partial reranking process. All global and partial image representations are derived from pre-trained Convolutional Neural Networks (CNNs), and the landmarks are extracted with object proposal techniques. Moreover, a new similarity measure is introduced that considers both the spatial and the scale distribution of landmarks. Compared with current methods that consider only scale distribution, the proposed measure effectively improves recognition precision and robustness. Experiments with varied viewpoints and environmental conditions demonstrate that the proposed method outperforms state-of-the-art methods.
Title: Visual place recognition with CNNs: From global to partial
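As a rough illustration of the filter-then-rerank idea (a generic sketch, not the authors' exact implementation), a global-descriptor filter followed by reranking of a small shortlist might look like this; `rerank_fn` is a hypothetical stand-in for the landmark-based partial matching:

```python
import numpy as np

def two_stage_retrieval(query_global, db_globals, rerank_fn, k=5):
    """Filter with global descriptors, then rerank a small shortlist.

    query_global : (D,) global CNN descriptor of the query image
    db_globals   : (N, D) global descriptors of the database places
    rerank_fn    : callable mapping a database index to a refined score
                   (e.g. landmark-based matching); hypothetical stand-in
    """
    # Stage 1: cosine similarity against every database descriptor.
    q = query_global / np.linalg.norm(query_global)
    db = db_globals / np.linalg.norm(db_globals, axis=1, keepdims=True)
    sims = db @ q
    shortlist = np.argsort(-sims)[:k]          # top-k candidate places

    # Stage 2: rerank only the shortlist with the more expensive partial score.
    return sorted(shortlist, key=rerank_fn, reverse=True)

# Toy usage: a database of random descriptors; the query is a slightly
# perturbed copy of place 42, so it should survive the filter.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 16))
query = db[42] + 0.01 * rng.normal(size=16)
result = two_stage_retrieval(query, db, rerank_fn=lambda i: -abs(i - 42), k=5)
```

The point of the two stages is cost: the cheap global comparison runs against every place, while the expensive landmark matching only touches the top-k shortlist.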
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310133
Robert D. Friedlander, A. Yezzi
Computer vision tasks often have the goal of inferring geometric and radiometric information about a 3D environment given limited sensing resources. It is helpful to develop relationships between these real-world properties and the actual measurements that are taken. To this end we propose a new relationship between object radiance and image irradiance based on power conservation and a thin lens imaging model. The relationship has a closed-form solution for in-focus points and can be solved via numerical integration for points that are not focused. It can be thought of as a generalization of Horn's irradiance equation. Through both numerical simulations and comparison with the intensity values of actual images, our equation is shown to provide better accuracy than Horn's equation. Improvement is most notable for near-focused images where the pinhole imaging model implicit in Horn's derivation breaks down. Outside of this regime, our model validates the use of Horn's approximation.
Title: A closed-form expression for thin lens image irradiance
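For context, Horn's classical image irradiance equation, which the abstract says this work generalizes, relates scene radiance $L$ to image irradiance $E$ under a pinhole-style model:

```latex
E = L \,\frac{\pi}{4}\left(\frac{d}{f}\right)^{2}\cos^{4}\alpha
```

where $d$ is the aperture diameter, $f$ the focal length, and $\alpha$ the off-axis angle of the viewing ray; the $\cos^{4}\alpha$ factor accounts for off-axis falloff. The paper's thin-lens expression agrees with this approximation away from the near-focused regime where the pinhole assumption breaks down.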
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310103
D. Tran, Muhammad-Adeel Waris, M. Gabbouj, Alexandros Iosifidis
In this paper, we propose a new regularization scheme for the well-known Support Vector Machine (SVM) classifier that operates at the training-sample level. The proposed approach is motivated by the fact that maximum-margin classification defines decision functions as a linear combination of selected training data, so variations in training sample selection directly affect generalization performance. We show that the proposed regularization scheme is well motivated and intuitive. Experimental results show that it outperforms the standard SVM on human action recognition tasks as well as classical recognition problems.
Title: Sample-based regularization for support vector machine classification
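One generic way to act on the training-sample level, sketched here purely for illustration (the paper's specific scheme is not reproduced), is to attach a per-sample weight to each hinge-loss term of a linear SVM:

```python
import numpy as np

def weighted_hinge_svm(X, y, sample_weights, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM with a per-sample weight on each hinge-loss term.

    A generic illustration of sample-level regularization, not the
    paper's exact scheme. y must be in {-1, +1}; sample_weights scales
    each sample's contribution to the hinge loss.
    """
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1          # samples violating the margin
        # Subgradient of  lam*||w||^2 + (1/n) * sum_i s_i * max(0, 1 - y_i f(x_i))
        grad_w = 2 * lam * w - (sample_weights[active] * y[active]) @ X[active] / n
        grad_b = -np.sum(sample_weights[active] * y[active]) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy usage: two well-separated clusters, uniform sample weights.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = weighted_hinge_svm(X, y, sample_weights=np.ones(40))
accuracy = np.mean(np.sign(X @ w + b) == y)
```

Raising a sample's weight pushes the decision boundary to respect that sample's margin more strongly, which is the lever a sample-level regularizer manipulates.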
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310154
João M. Santos, P. Assunção, L. Cruz, Luis M. N. Tavora, R. Fonseca-Pinto, S. Faria
Recent advances in Light Field acquisition and rendering are pushing research towards increasingly efficient methods for encoding this particular type of data. Light Field image compression is of the utmost importance, not only because of the large amount of data required for its representation but also because of the quality requirements of many applications and computational photography methods. This paper presents a study of the impact of reversible colour transformations and alternative data arrangements on Light Field lossless coding. The experimental results indicate that the reversible colour transform (RCT) consistently achieves the highest compression performance across all data arrangements and lossless encoders. In particular, the best results are obtained with MRP when encoding the stack of sub-aperture images in a spiral scan order, achieving 6.41 bpp on average.
Title: Lossless light-field compression using reversible colour transformations
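Assuming the RCT referred to is the JPEG 2000 reversible colour transform (the usual meaning of the acronym in lossless coding), its defining property is exact integer invertibility, which is what makes it usable in a lossless pipeline:

```python
def rct_forward(r, g, b):
    """JPEG 2000 reversible colour transform (integer, lossless)."""
    y = (r + 2 * g + b) // 4     # floor division, as the RCT specifies
    cb = b - g
    cr = r - g
    return y, cb, cr

def rct_inverse(y, cb, cr):
    """Exact inverse of rct_forward: no rounding error is ever introduced."""
    g = y - (cb + cr) // 4
    r = cr + g
    b = cb + g
    return r, g, b

# Round-trip check over a few pixels: the transform is exactly invertible.
for rgb in [(0, 0, 0), (255, 0, 128), (17, 200, 3), (255, 255, 255)]:
    assert rct_inverse(*rct_forward(*rgb)) == rgb
```

Decorrelating the colour channels this way concentrates energy in the luma-like channel, which is why the RCT tends to improve the compression ratio of the downstream lossless encoder.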
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310086
F. Kerouh, D. Ziou, A. Serir
The paper deals with assessing the amount of blur in images. Blur is a common artefact that attenuates the high-frequency components of an image. The main idea is to analyse the frequency response at transitions across resolutions. To this end, the histogram of the multiresolution DCT coefficients is modelled with an exponential probability density function (pdf), and the steepness of the pdf is used as a cue to characterize the blur effect. Faithful scores are obtained when testing the proposed approach on five image collections. The proposed measure is further validated on the JPEG 2000 lossy compression algorithm and the Lucy-Richardson iterative deblurring approach.
Title: A multiresolution DCT-based blind blur quality measure
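The core cue can be sketched in a few lines (single-scale only, as an illustration; the paper fits the model across multiple resolutions): the maximum-likelihood rate of an exponential pdf fitted to the magnitude histogram is simply the reciprocal of the sample mean, and a steeper fit signals stronger blur:

```python
import numpy as np

def exponential_steepness(abs_coeffs):
    """MLE of the rate of an exponential pdf fitted to |DCT| coefficients.

    A blurred image has attenuated high-frequency content, so its
    coefficient magnitudes decay faster and the fitted rate (steepness)
    is larger. Illustrative single-scale cue only.
    """
    abs_coeffs = np.asarray(abs_coeffs, dtype=float)
    return 1.0 / abs_coeffs.mean()          # lambda_hat = 1 / sample mean

# Toy usage: "sharp" coefficients have heavier tails than "blurred" ones.
rng = np.random.default_rng(2)
sharp = rng.exponential(scale=8.0, size=10_000)    # slow decay
blurred = rng.exponential(scale=2.0, size=10_000)  # fast decay
```

Because the estimator needs no reference image, only the statistics of the image's own coefficients, it fits the blind (no-reference) setting the paper targets.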
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310104
Vineet Kumar, R. Chouhan
No-reference image quality assessment is a challenging task because, in practical situations, no reference image is available against which to quantify quality. This paper proposes a new no-reference image quality metric for natural images using latent noise estimation, Gabor response, and contrast deviation. The algorithm extends gradient-based SSIM to the no-reference setting using SVD-based AWGN estimation, and defines attributes such as Gabor-based smoothness and contrast deviation. The proposed metric arrives at an overall quality score by computing a linear weighted summation of the three image attributes. The algorithm has been tested on several public databases (LIVE, TID2013, and CSIQ), and the overall results show a noteworthy correlation of nearly 80% with human visual perception.
Title: No-reference image quality assessment using Gabor-based smoothness and latent noise estimation
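The final pooling step is a plain linear combination of the three attributes; the weights below are placeholders, not the values tuned in the paper:

```python
def quality_score(noise_est, gabor_smoothness, contrast_dev,
                  weights=(0.4, 0.3, 0.3)):
    """Linear weighted summation of the three attributes into one score.

    The weight values are illustrative placeholders; the paper learns or
    tunes its own weights for the three attributes.
    """
    w1, w2, w3 = weights
    return w1 * noise_est + w2 * gabor_smoothness + w3 * contrast_dev
```

With the weights summing to one, the combined score stays on the same scale as the individual attributes.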
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310109
Mohamed Cheniti, Z. Akhtar, N. Boukezzoula, T. Falk
Multibiometric systems that fuse information from different sources can alleviate the limitations of unimodal biometric systems. In this paper, we propose a multibiometric framework to identify people using their left and right wrist vein patterns. The framework uses a fast and robust preprocessing and feature extraction method. A generic score-level fusion approach is proposed to integrate the scores from the left and right wrist vein patterns using the Dubois and Prade triangular norm (t-norm). Experiments on the publicly available PUT wrist vein dataset show that the proposed multibiometric framework outperforms the unimodal systems, their fusion with other t-norms, and existing wrist vein recognition methods.
Title: Combining left and right wrist vein images for personal verification
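The standard Dubois-Prade parametric t-norm (the usual reading of the family named in the abstract) makes the fusion rule concrete; the alpha value below is a placeholder, not the one selected in the paper:

```python
def dubois_prade_tnorm(a, b, alpha=0.5):
    """Dubois-Prade parametric t-norm: T(a, b) = a*b / max(a, b, alpha).

    For positive scores, alpha = 0 recovers the minimum t-norm and
    alpha = 1 recovers the product t-norm; intermediate alpha values
    interpolate between the two. Scores are assumed normalised to [0, 1].
    """
    return (a * b) / max(a, b, alpha)

def fuse_scores(left_score, right_score, alpha=0.5):
    """Score-level fusion of left- and right-wrist match scores (sketch)."""
    return dubois_prade_tnorm(left_score, right_score, alpha)

# A genuine pair with two strong scores fuses higher than a mixed pair.
assert fuse_scores(0.9, 0.8) > fuse_scores(0.9, 0.3)
```

Being a t-norm, the fused score can never exceed either input, so one weak modality always drags the combined score down, which is the conservative behaviour wanted in verification.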
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310136
Y. Guerbai, Y. Chibani, Bilal Hadjadji
Handwriting gender recognition has become a matter of considerable interest to the document analysis community because of its usefulness in practical applications. This paper addresses the problem of classifying handwriting data with respect to gender. To date, only a few studies have been carried out in this field. We therefore propose a new framework for classifying gender from a handwritten document using the curvelet transform and a classification method based on the One-Class Support Vector Machine (OC-SVM). To improve the robustness of the proposed system, multiple OC-SVM classifiers are combined according to the type of distance used in the kernel. Experimental results on the IAM dataset show the effectiveness of the OC-SVM for handwriting gender recognition compared to the state of the art.
Title: Handwriting gender recognition system based on the one-class support vector machines
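Combining several one-class classifiers typically reduces to a fixed rule over their decision scores; the sketch below shows mean/max/min rules as illustrative choices (the paper's exact combination rule for its kernel-distance variants is not reproduced here):

```python
import numpy as np

def combine_one_class_scores(score_matrix, rule="mean"):
    """Combine decision scores from several one-class classifiers.

    score_matrix : (n_classifiers, n_samples) decision scores, where a
                   higher score means the sample better fits the target
                   class (here, one gender's handwriting model).
    rule         : "mean", "max", or "min" combination rule.
    """
    ops = {"mean": np.mean, "max": np.max, "min": np.min}
    return ops[rule](score_matrix, axis=0)

# Two classifiers (e.g. OC-SVMs with different kernel distances), 3 samples.
scores = np.array([[0.9, -0.2,  0.1],
                   [0.7,  0.4, -0.3]])
fused = combine_one_class_scores(scores, "mean")
```

Averaging tends to smooth out individual classifiers' errors, while max/min trade false rejections against false acceptances.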
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310105
Xavier Soria Poma, A. Sappa, A. Akbarinia
Multispectral images captured with a single-sensor camera have become an attractive alternative for numerous computer vision applications. However, in order to fully exploit their potential, the color restoration problem (RGB representation) must be addressed. This problem is most evident in outdoor scenarios containing vegetation, living beings, or specular materials. The color distortion arises from the sensors' sensitivity to the overlap between the visible and near-infrared spectral bands. This paper empirically evaluates the variability of the near-infrared (NIR) information with respect to changes in light throughout the day. A tiny neural network is proposed to restore the RGB color representation from the given RGBN (Red, Green, Blue, NIR) images. To evaluate the proposed algorithm, experiments are conducted on an RGBN outdoor dataset that includes various challenging cases. The results show the difficulty and the importance of addressing color restoration in single-sensor multispectral images.
Title: Multispectral single-sensor RGB-NIR imaging: New challenges and opportunities
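A deliberately minimal stand-in for the paper's "tiny neural network" is a single linear layer fitted in closed form: if NIR leaks additively into the RGB channels, a per-pixel least-squares map from RGBN to RGB can undo it exactly. The leakage model and all values below are synthetic assumptions for illustration:

```python
import numpy as np

def fit_linear_restoration(rgbn_pixels, rgb_targets):
    """Least-squares per-pixel mapping from RGBN (N, 4) to RGB (N, 3).

    A minimal linear baseline, not the paper's network: one linear
    layer with bias, fitted in closed form via least squares.
    """
    X = np.hstack([rgbn_pixels, np.ones((len(rgbn_pixels), 1))])  # add bias
    W, *_ = np.linalg.lstsq(X, rgb_targets, rcond=None)           # (5, 3)
    return W

def restore(rgbn_pixels, W):
    X = np.hstack([rgbn_pixels, np.ones((len(rgbn_pixels), 1))])
    return X @ W

# Toy usage: synthesize additive NIR leakage, then learn to undo it.
rng = np.random.default_rng(3)
rgb = rng.uniform(0, 1, (500, 3))
nir = rng.uniform(0, 1, (500, 1))
rgbn = np.hstack([rgb + 0.3 * nir, nir])   # sensor mixes NIR into RGB
W = fit_linear_restoration(rgbn, rgb)
err = np.abs(restore(rgbn, W) - rgb).max()
```

Real sensor crosstalk is not purely linear and varies with illumination, which is exactly why the paper moves beyond such a baseline to a learned network.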
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310134
Jorge Calvo-Zaragoza, Gabriel Vigliensoni, Ichiro Fujinaga
Musical documents contain not only music symbols but also other elements such as staff lines, text, or frontispieces. Before attempting to automatically recognize the components in these layers, it is necessary to analyse the musical documents in order to detect and classify each of these constituent parts. The obstacle to this analysis is the high heterogeneity among music collections, especially ancient documents, which makes it difficult to devise methods that generalize to a broad range of sources. In this paper we propose a data-driven, machine-learning document analysis framework that classifies regions of interest at the pixel level. To that end, we use Convolutional Neural Networks trained to infer the category of each pixel. The main advantage of this approach is that it can be applied regardless of the type of document provided, as long as training data is available. Since this work represents a first effort in that direction, our experimentation focuses on reporting a baseline classification using our framework.
The experiments show promising performance, achieving an accuracy of around 90% on two corpora of early music documents.
Title: Pixelwise classification for music document analysis
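In a pixelwise-classification pipeline of this kind, a common setup is to feed the CNN a small context patch centred on each pixel and predict that pixel's layer label. The patch-extraction step (only; no trained model is shown, and the patch size is an arbitrary choice) can be sketched as:

```python
import numpy as np

def extract_patches(image, patch_size=5):
    """Extract a square context patch around every pixel (zero padding).

    Each patch would be fed to a CNN that predicts the layer label
    (staff line, text, symbol, background, ...) of its centre pixel.
    Returns an (H*W, patch_size, patch_size) array in row-major order.
    """
    pad = patch_size // 2
    padded = np.pad(image, pad, mode="constant")
    h, w = image.shape
    patches = np.empty((h * w, patch_size, patch_size), dtype=image.dtype)
    k = 0
    for i in range(h):
        for j in range(w):
            patches[k] = padded[i:i + patch_size, j:j + patch_size]
            k += 1
    return patches

# Toy usage on a 4x4 "image": one patch per pixel, centred on it.
img = np.arange(16, dtype=float).reshape(4, 4)
patches = extract_patches(img, patch_size=3)
```

Classifying every pixel independently is expensive but, as the abstract notes, makes the framework agnostic to document type: only the training data changes.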