Pulse Repetition Interval Modulation Recognition Using Symbolization
Kyu-Ha Song, Dong-Weon Lee, Jin-Woo Han, Byung-Koo Park
2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010). DOI: 10.1109/DICTA.2010.96

Information on the pulse repetition interval (PRI) modulation of a radar signal plays an important role in detecting and identifying each radar signal in an electronic warfare support (ES) system. In this paper, we present a new method for recognizing the PRI modulation type of a radar signal using symbolization. The proposed method uses three key feature parameters extracted from symbol sequences in order to discriminate each PRI modulation type. The recognition capability of the method presented is verified through extensive simulations.
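The abstract does not spell out the symbolization step itself. As a hedged illustration only, one common way to turn a PRI sequence into a symbol sequence is uniform amplitude quantization, after which feature parameters can be read off the resulting symbol string; the function name, bin count, and the staggered-PRI example below are assumptions, not details from the paper:

```python
import numpy as np

def symbolize_pri(pri, n_bins=8):
    """Map a PRI sequence to a symbol sequence by uniform quantization.

    Each PRI value is assigned the index of the amplitude bin it falls
    into, turning the continuous sequence into a discrete symbol string.
    """
    lo, hi = pri.min(), pri.max()
    # Guard against a constant sequence (e.g. a stable PRI).
    if hi == lo:
        return np.zeros(len(pri), dtype=int)
    symbols = np.floor((pri - lo) / (hi - lo) * n_bins).astype(int)
    return np.clip(symbols, 0, n_bins - 1)

# A staggered PRI train cycles through a few fixed levels, so its
# symbol sequence repeats a short pattern.
stagger = np.array([100.0, 150.0, 200.0] * 4)
print(symbolize_pri(stagger, n_bins=4))
```

Statistics of such a symbol string (e.g. symbol-transition counts or run lengths) are the kind of feature a modulation-type classifier could use.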
Bone Segmentation of Magnetic Resonance Images by Gradient Vector Flow Active Contour with Atlas Based Centroid Forces
T. K. Chuah, C. W. Lim, C. Poh, K. Sheah
DOI: 10.1109/DICTA.2010.31

This paper presents a segmentation technique that couples atlas-based centroid forces with a Gradient Vector Flow (GVF) parametric active contour for the segmentation of femoral cancellous bone. The atlas used in our study provides prior information to constrain contours at regions where edge-based forces are missing and to initialize the active contours. The GVF external force field is padded with the centroid force derived from the atlas. In our implementation, once the atlas is registered with the target image to be segmented, the segmentation process is fully automatic. Analysis of segmentation accuracy on twenty-one slices at the intercondylar location of sagittal slices yields a sensitivity of 97.4±1.9%, a specificity of 99.6±0.1%, and a Dice similarity coefficient of 96.7±1.1%. From the inspection of external force fields and the accuracy results, the study suggests that the centroid force formulation is effective in approximating missing boundaries in GVF and in facilitating automatic initialization.
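The three accuracy figures reported above are standard overlap measures between a predicted mask and a ground-truth mask. As a minimal sketch (the tiny masks are illustrative, not the paper's data):

```python
import numpy as np

def segmentation_scores(pred, truth):
    """Sensitivity, specificity and Dice coefficient for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.sum(pred & truth)    # voxels correctly labelled bone
    tn = np.sum(~pred & ~truth)  # voxels correctly labelled background
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dice = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, dice

truth = np.array([[1, 1, 0, 0]])
pred = np.array([[1, 0, 0, 0]])
sens, spec, dice = segmentation_scores(pred, truth)
```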
Web-Based Learning of Naturalized Color Models for Human-Machine Interaction
Boris Schauerte, G. Fink
DOI: 10.1109/DICTA.2010.90

In recent years, natural verbal and non-verbal human-robot interaction has attracted increasing interest. Models for robustly detecting and describing visual attributes of objects, such as colors, are therefore of great importance. However, learning robust models of visual attributes requires large data sets. Based on the idea of overcoming the shortage of annotated training data by acquiring images from the Internet, we propose a method for robustly learning natural color models. Its novel aspects with respect to prior art are: first, a randomized HSL transformation that reflects the slight variations and noise of colors observed in real-world imaging sensors; second, a probabilistic ranking and selection of the training samples, which removes a considerable number of outliers from the training data. These two techniques allow us to estimate robust color models that better resemble the variances seen in real-world images. Experimental evaluations confirm the advantages of the proposed method over the current state-of-the-art technique, which uses the training data without proper transformation and selection. In combination, for models learned with pLSA-bg and HSL, the proposed techniques reduce the number of mislabeled objects by 19.87% on the well-known E-Bay data set.
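The randomized HSL transformation described above can be sketched as random jitter applied in HSL space. The Gaussian noise model and the per-channel sigma values below are illustrative assumptions; the paper's exact noise distribution is not given in the abstract (note that Python's `colorsys` uses HLS channel order):

```python
import colorsys
import random

def jitter_hsl(rgb, sigma=(0.02, 0.05, 0.05), rng=None):
    """Randomly perturb an RGB color in HSL space.

    Hue, lightness and saturation get independent Gaussian jitter,
    mimicking the sensor noise and slight color variations seen in
    real-world images of the same nominal training color.
    """
    rng = rng or random.Random()
    r, g, b = (c / 255.0 for c in rgb)
    h, l, s = colorsys.rgb_to_hls(r, g, b)
    h = (h + rng.gauss(0, sigma[0])) % 1.0          # hue wraps around
    l = min(1.0, max(0.0, l + rng.gauss(0, sigma[1])))
    s = min(1.0, max(0.0, s + rng.gauss(0, sigma[2])))
    r2, g2, b2 = colorsys.hls_to_rgb(h, l, s)
    return tuple(round(c * 255) for c in (r2, g2, b2))

# Generate a handful of noisy variants of a pure red training pixel.
variants = [jitter_hsl((255, 0, 0), rng=random.Random(i)) for i in range(3)]
```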
Chromosome Classification Based on Wavelet Neural Network
Baharak Choudari Oskouei, J. Shanbehzadeh
DOI: 10.1109/DICTA.2010.107

Karyotyping, the manual classification of chromosomes, is a difficult and time-consuming process. Many automated classifiers have been developed to overcome this problem, but they typically offer either high classification accuracy or high training speed, not both. This paper proposes a classifier that performs well in both respects, based on a wavelet neural network (WNN) that combines wavelets with a neural network to classify the chromosomes in group E (chromosomes 16, 17 and 18). The nonlinear characteristic of the network, derived from the wavelet specification, improves the training speed and accuracy of the nonlinear chromosome classification. The network input is a nine-dimensional feature vector extracted from the chromosome images, and the output is one of three classes. Simulation on the chromosomes of the Laboratory of Biomedical Imaging shows a success rate of 0.93% for the WNN, compared with 0.85% for a traditional neural network (ANN). The number of training iterations needed to reach a 0.04% error rate is only 200, versus 3500 for the ANN. According to the experimental results, the WNN achieves high accuracy with minimal training time, which makes it suitable for real-time chromosome classification in the laboratory.
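A wavelet neural network replaces the usual sigmoid activations with translated and dilated wavelets. The abstract does not specify the wavelet or architecture, so the Mexican-hat (Ricker) activation, the radial form, and all shapes below are illustrative assumptions:

```python
import numpy as np

def mexican_hat(t):
    """Mexican hat (Ricker) wavelet, a common WNN activation."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def wnn_forward(x, centers, scales, w_out, b_out):
    """Forward pass of a minimal wavelet neural network.

    Each hidden unit applies a dilated/translated wavelet to the
    distance between the input vector and its centre; a linear output
    layer combines the responses. Training (e.g. gradient descent on
    centres, scales and weights) is omitted.
    """
    # x: (n_features,), centers: (n_hidden, n_features),
    # scales: (n_hidden,), w_out: (n_classes, n_hidden)
    t = np.linalg.norm(x - centers, axis=1) / scales
    hidden = mexican_hat(t)
    return w_out @ hidden + b_out
```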
Non-cooperative Object Detection in Sea Using Acoustic Sensors
E. Cheng, S. Challa, X. Tang, Xiaohu Liu
DOI: 10.1109/DICTA.2010.58

With the increase in illegal boat arrivals, border security is becoming more and more important for the Australian government. This paper explores early warning by detecting boat-generated signals received by a hydrophone. We focus on algorithm development and tests on real boat-generated signals. Our experiments show that the developed median constant false alarm rate (CFAR) and post-integration algorithms are robust across a variety of acoustic signals, achieving a high detection rate while keeping a low false alarm rate.
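A median CFAR detector estimates the local noise level from reference cells around each sample and flags samples that exceed a multiple of that estimate. The window, guard and scale values below are illustrative choices, not the paper's parameters:

```python
import numpy as np

def median_cfar(x, window=16, guard=2, scale=3.0):
    """Median-based CFAR detector on a 1-D signal envelope.

    For each cell under test, the noise level is the median of the
    surrounding reference cells (guard cells immediately around the
    test cell are excluded); a detection fires when the sample exceeds
    scale times that estimate.
    """
    n = len(x)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        left = x[max(0, i - guard - window): max(0, i - guard)]
        right = x[i + guard + 1: i + guard + 1 + window]
        ref = np.concatenate([left, right])
        if len(ref) == 0:
            continue
        hits[i] = x[i] > scale * np.median(ref)
    return hits

# Flat noise floor with one strong transient: only the spike is flagged.
signal = np.ones(100)
signal[50] = 10.0
detections = median_cfar(signal)
```

Using the median rather than the mean of the reference cells keeps the threshold stable when a second strong target sits inside the reference window.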
High Throughput Variable Size Non-square Gabor Engine with Feature Pooling Based on GPU
Ali Emami, A. Bigdeli, A. Postula
DOI: 10.1109/DICTA.2010.73

The increasing application of the Gabor feature space in computer vision tasks, together with its high computational demand, encourages the use of parallel computing technologies. In this work we have designed a high-throughput GPU-based Gabor kernel that mimics the function of the initial layers of the biological visual cortex, namely the ‘Simple’ and ‘Complex’ cells. The kernel is essentially a Gabor filter bank with an adjustable number of orientations and scales, supporting ‘Non-Square’ and ‘Variable Size’ filter masks on different channels. Consequently, our GPU-based Gabor kernel can be tuned to be more accurate and more flexible for different applications, at optimal computational cost with fewer resources. The second important task of our high-throughput engine is ‘Gabor Feature Pooling’ with Max and Histogram methods, similar to the biological ‘Complex’ cells. This part of our ‘Gabor Engine’ makes it very practical for computer vision applications, since in addition to massive Gabor features it also provides more abstract, spatially invariant orientational information based on image Gabor features. We have optimised the engine design to take maximum advantage of all GPU parallel resources and the maximum bandwidth of all memories.
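The building block of such a filter bank is a 2-D Gabor mask: a sinusoidal carrier modulated by an elliptical Gaussian envelope. This CPU-side sketch shows the non-square mask the engine supports; the parameter defaults are illustrative, not the paper's values:

```python
import numpy as np

def gabor_kernel(ksize, wavelength, theta, sigma, gamma=0.5):
    """Real part of a 2-D Gabor filter with a non-square mask.

    ksize is (height, width), so the mask need not be square; theta is
    the orientation in radians and gamma the spatial aspect ratio of
    the Gaussian envelope.
    """
    h, w = ksize
    ys = np.arange(h) - (h - 1) / 2.0
    xs = np.arange(w) - (w - 1) / 2.0
    x, y = np.meshgrid(xs, ys)
    # Rotate coordinates into the filter's orientation.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# A 7x11 (non-square) horizontal-orientation mask.
kernel = gabor_kernel((7, 11), wavelength=4.0, theta=0.0, sigma=2.0)
```

A bank is then the set of such masks over the chosen orientations and scales, convolved with the image; max pooling over orientations per neighbourhood approximates the ‘Complex’ cell response.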
Video Metrology without the Image-to-Ground Homography
T. Scoleri
DOI: 10.1109/DICTA.2010.64

The ability to measure metric information from videos is an essential functionality in security software packages that populate databases with human morphometric and gait descriptors. The configuration of such security systems often involves calibrating the cameras in the surveillance network. For this purpose, markers are placed on the ground in the vicinity of each camera location in order to compute a critical image-to-ground homography. This paper shows how the homography computation can be avoided by describing the key components of a system designed for view-independent video metrology. Experiments on measuring the height of subjects and ground distances from different camera views demonstrate the viability of the approach. In addition, it is shown that the calibration model of the proposed method yields more accurate measurements than the frequently used square pixel camera model.
Robust Image Hashing Using Higher Order Spectral Features
Brenden Chen, V. Chandran
DOI: 10.1109/DICTA.2010.26

Robust image hashing seeks to transform a given input image into a shorter hashed version using a key-dependent non-invertible transform. These image hashes can be used for watermarking, image integrity authentication or image indexing for fast retrieval. This paper introduces a new method of generating image hashes based on extracting Higher Order Spectral features from the Radon projection of an input image. The feature extraction process is non-invertible and non-linear, and different hashes can be produced from the same image through the use of random permutations of the input. We show that the transform is robust to typical image transformations such as JPEG compression, noise, scaling, rotation, smoothing and cropping. We evaluate our system using a verification-style framework based on calculating false match and false non-match likelihoods on the publicly available Uncompressed Colour Image Database (UCID) of 1320 images. We also compare our results to Swaminathan’s Fourier-Mellin based hashing method, achieving at least a 1% EER improvement under noise, scaling and sharpening.
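The verification-style evaluation mentioned above boils down to thresholding a distance between two hashes. A hedged sketch, assuming binary hash vectors and a normalised Hamming distance (the 0.2 threshold and the hash contents are illustrative, not the paper's):

```python
import numpy as np

def hash_distance(h1, h2):
    """Normalised Hamming distance between two binary hash vectors."""
    h1, h2 = np.asarray(h1, bool), np.asarray(h2, bool)
    return np.mean(h1 != h2)

def verify(h1, h2, threshold=0.2):
    """Declare a match when the hash distance falls below a threshold.

    Sweeping the threshold trades false matches against false
    non-matches; the equal error rate (EER) is where the two rates
    coincide.
    """
    return hash_distance(h1, h2) < threshold

a = [1, 0, 1, 1, 0, 0, 1, 0]
b = [1, 0, 1, 0, 0, 0, 1, 0]  # one bit flipped, e.g. after JPEG compression
```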
Camera Ego-Motion Estimation Using Phase Correlation under Planar Motion Constraint
S. Effendi, R. Jarvis
DOI: 10.1109/DICTA.2010.38

Intelligent robotic living-assistance systems have become a popular research topic in the last decade. One important problem in that research area is 3D object reconstruction from multiple views, a process that may depend on vision-based motion estimation. However, a domestic robot on an electric wheelchair often has to move through steep rotational angles, which makes vision-based motion estimation inaccurate. In addition, an oblique viewing angle introduces perspective distortion into the captured images, which further worsens the estimation result. Hence, in this paper we propose a new approach that recasts the motion estimation problem as a 2D image registration problem. Our method’s accuracy is very close to that of the Scale Invariant Feature Transform (SIFT) feature tracker, whereas the Kanade-Lucas-Tomasi (KLT) tracker’s accuracy drops as soon as the rotational angle reaches about 40°. Although our method is 2.7 times slower than the KLT tracker, it is 19 times faster than the SIFT tracker.
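The core of phase-correlation registration is the normalised cross-power spectrum: for a pure (circular) translation it is a complex exponential whose inverse FFT is a single sharp peak at the shift. A minimal sketch for integer translations (the paper additionally handles rotation under the planar-motion constraint, which this sketch omits):

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular (row, col) shift that maps image a onto b.

    The normalised cross-power spectrum of the two images has a single
    sharp peak at the translation offset.
    """
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = np.conj(fa) * fb
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint correspond to negative shifts.
    shifts = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, (5, -3), axis=(0, 1))
```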
Learn Concepts in Multiple-Instance Learning with Diverse Density Framework Using Supervised Mean Shift
Ruo Du, Sheng Wang, Qiang Wu, Xiangjian He
DOI: 10.1109/DICTA.2010.111

Many machine learning tasks can be addressed with multiple-instance learning (MIL) when the target features are ambiguous. As a general MIL framework, Diverse Density (DD) provides a way to learn such ambiguous features by maximising the DD estimator, and a maximum of the DD estimator is called a concept. However, modeling and finding multiple concepts is often difficult, especially without prior knowledge of the number of concepts, i.e., every positive bag may contain multiple coexistent and heterogeneous concepts, but we do not know how many concepts exist. In this work, we present a new approach that finds multiple DD concepts using a supervised mean shift algorithm. Unlike classic mean shift (an unsupervised clustering algorithm), our approach for the first time introduces class labels to the feature points, and each point contributes differently to the mean shift iterations according to its label and position. A feature point derives from an MIL instance and takes the corresponding bag label. Our supervised mean shift starts from positive points and converges to the local maxima that are close to the positive points and far away from the negative points. Experiments qualitatively indicate that our approach has better properties than other DD methods.
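One simple way to realise label-dependent contributions is to sign each point's kernel weight by its bag label, so positives attract the mode estimate and negatives repel it. This is a hedged stand-in for the paper's scheme, whose exact weighting is not given in the abstract; bandwidth and the toy data are illustrative:

```python
import numpy as np

def supervised_mean_shift(points, labels, start, bandwidth=1.0, iters=50):
    """Mean shift in which each point's pull is signed by its bag label.

    Positive points (+1) attract the mode estimate and negative points
    (-1) repel it, so the iteration settles near positive clusters that
    lie far from negatives.
    """
    points = np.asarray(points, float)
    labels = np.asarray(labels, float)
    x = np.asarray(start, float)
    for _ in range(iters):
        d2 = np.sum((points - x) ** 2, axis=1)
        w = labels * np.exp(-d2 / (2.0 * bandwidth**2))
        if np.sum(w) <= 0:          # no net positive pull; stop
            break
        x = np.sum(w[:, None] * points, axis=0) / np.sum(w)
    return x

# Three positive instances cluster near the origin, one negative sits
# far away: the iteration converges onto the positive cluster.
pts = np.array([[0.0, 0.0], [0.1, 0.0], [-0.1, 0.0], [5.0, 5.0]])
lbl = np.array([1.0, 1.0, 1.0, -1.0])
mode = supervised_mean_shift(pts, lbl, start=[0.5, 0.5])
```

Running this from several positive starting points and merging nearby modes would yield multiple concepts without fixing their number in advance, matching the motivation above.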