This paper proposes a novel non-parametric method to robustly embed conditional and posterior distributions into a reproducing kernel Hilbert space (RKHS). The robust embedding is obtained by eigenvalue decomposition in the RKHS: by retaining only the leading eigenvectors, noise in the data is systematically discarded. The non-parametric conditional and posterior distribution embeddings obtained by our method can be applied to a wide range of Bayesian inference problems. In this paper, we apply them to heterogeneous face recognition and zero-shot object recognition. Experimental validation shows that our method produces better results than the comparison algorithms.
M. Nawaz, Omar Arif, "Robust Kernel Embedding of Conditional and Posterior Distributions with Applications," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0016
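The denoising step the abstract describes — eigendecomposition of the Gram matrix in the RKHS, keeping only the leading eigenvectors — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the Gaussian kernel, bandwidth, and retained rank are hypothetical choices:

```python
import numpy as np

def gaussian_gram(X, sigma=1.0):
    """Gram matrix K[i, j] = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def low_rank_denoise(K, rank):
    """Reconstruct a symmetric PSD Gram matrix from its leading eigenpairs,
    discarding the trailing (noise) directions."""
    vals, vecs = np.linalg.eigh(K)           # eigh returns ascending order
    idx = np.argsort(vals)[::-1][:rank]      # pick the leading eigenpairs
    return (vecs[:, idx] * vals[idx]) @ vecs[:, idx].T

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
K = gaussian_gram(X)
K3 = low_rank_denoise(K, rank=3)
K10 = low_rank_denoise(K, rank=10)
err3 = np.linalg.norm(K - K3)                # Frobenius reconstruction error
err10 = np.linalg.norm(K - K10)              # shrinks as rank grows
```

Retaining more eigenvectors can only lower the reconstruction error, so the rank trades noise suppression against fidelity.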
Ankit Verma, Monika Sharma, R. Hebbalaguppe, Ehtesham Hassan, L. Vig
Container identification and recognition is still performed manually or semi-automatically in many ports globally, resulting in errors and inefficiencies in port operations. Automatic container identification and recognition is challenging because the ISO standard only prescribes the pattern of the code and does not specify other parameters such as foreground and background colors, font type and size, orientation of characters (horizontal or vertical), and so on. Additionally, the corrugated surface of the container body makes the two-dimensional projection of the text on three-dimensional containers slanted and jagged. We propose a solution in the form of an end-to-end pipeline that uses region proposals generated from connected components for text detection, in conjunction with spatial transformer networks for text recognition. Our experimental results demonstrate that the pipeline is reliable and robust even when the code characters are highly distorted, and that it outperforms state-of-the-art results for text detection and recognition on containers. We achieve a text coverage rate of 100% and a text recognition rate of 99.64%.
Ankit Verma, Monika Sharma, R. Hebbalaguppe, Ehtesham Hassan, L. Vig, "Automatic Container Code Recognition via Spatial Transformer Networks and Connected Component Region Proposals," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0130
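The connected-component stage of such a pipeline can be sketched as a labeling pass over a binarized image that emits bounding boxes as character region proposals. This is a toy illustration of the general technique, not the paper's proposal generator:

```python
from collections import deque

def connected_component_boxes(binary):
    """4-connected component labeling on a 0/1 grid; returns bounding boxes
    (top, left, bottom, right) as crude text region proposals."""
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not seen[i][j]:
                q = deque([(i, j)])          # BFS flood fill from this pixel
                seen[i][j] = True
                top, left, bottom, right = i, j, i, j
                while q:
                    y, x = q.popleft()
                    top, bottom = min(top, y), max(bottom, y)
                    left, right = min(left, x), max(right, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            q.append((ny, nx))
                boxes.append((top, left, bottom, right))
    return boxes

img = [[0, 1, 1, 0, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 0, 1]]
```

In a real system each proposed box would then be cropped and passed to the recognition network.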
Inpainting, originally designed in computer vision to reconstruct lost or deteriorated parts of images and videos, has been used for image tampering, including region filling and object removal to alter the truth. While several types of tampering, including copy-move and seam-carving forgery, can now be successfully exposed in image forensics, there has been very little study of inpainting forgery in JPEG images, the detection of which is extremely challenging due to post-recompression attacks performed to cover or compromise the original inpainting traces. To date, there is no effective way to detect inpainting forgery under combined recompression attacks. To fill this gap in image forensics and reveal inpainting forgery despite post-recompression attacks in JPEG images, we propose an approach that begins with large-scale feature mining in the discrete transform domain; ensemble learning is then applied to handle the high feature dimensionality and to prevent the overfitting that regular classifiers commonly suffer in high dimensions. Our study shows that the proposed approach effectively exposes inpainting forgery under post-recompression attacks; in particular, it noticeably improves detection accuracy when the recompression quality is lower than the original JPEG image quality, and thus bridges a gap in image forgery detection.
Qingzhong Liu, A. Sung, Bing Zhou, Mengyu Qiao, "Exposing Inpainting Forgery in JPEG Images under Recompression Attacks," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0035
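The general idea of using an ensemble to cope with high feature dimensionality can be sketched with a random-subspace ensemble: each base learner sees only a small random subset of the features, so no single model has to fit the full high-dimensional space. The nearest-centroid base learner here is a hypothetical stand-in, not the classifier used in the paper:

```python
import random

def train_subspace_ensemble(X, y, n_models=11, dim=4, seed=0):
    """Each base model gets a random subset of feature indices and stores
    per-class centroids restricted to that subspace."""
    rng = random.Random(seed)
    n_features = len(X[0])
    models = []
    for _ in range(n_models):
        idx = rng.sample(range(n_features), dim)
        cents = {}
        for c in set(y):
            rows = [x for x, label in zip(X, y) if label == c]
            cents[c] = [sum(r[i] for r in rows) / len(rows) for i in idx]
        models.append((idx, cents))
    return models

def predict(models, x):
    """Majority vote over the nearest-centroid decision of each base model."""
    votes = {}
    for idx, cents in models:
        sub = [x[i] for i in idx]
        c = min(cents, key=lambda k: sum((a - b) ** 2 for a, b in zip(sub, cents[k])))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```

Because every base model is cheap and low-dimensional, the ensemble scales to feature counts where a single regular classifier would overfit.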
Comparable corpora contain significant quantities of useful data for natural language processing tasks, especially machine translation, as they are a major source of parallel text fragments. This paper investigates how to effectively extract bilingual texts from comparable corpora using only a small parallel training corpus. We propose a new technique to filter non-parallel articles in Wikipedia based on the Zipfian frequency distribution, and use an SVM approach to find parallel chunks of text in a candidate comparable document. In our approach, a parallel corpus is used to generate the required features for the training step. Evaluations of the generated bilingual texts are promising.
Mahsa Mohaghegh, A. Sarrafzadeh, "Parallel Text Identification Using Lexical and Corpus Features for the English-Maori Language Pair," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0163
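One way to use a Zipfian frequency distribution for filtering can be sketched as comparing the rank-frequency profiles of two candidate documents. This is a hedged illustration of the idea only — the paper's actual filtering criterion and features are not reproduced here, and `top` is an arbitrary cutoff:

```python
from collections import Counter
from math import sqrt

def rank_freq_profile(text, top=20):
    """Relative frequencies of the most common tokens, in rank order —
    a crude fingerprint of the document's Zipfian curve."""
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    freqs = [n / total for _, n in counts.most_common(top)]
    return freqs + [0.0] * (top - len(freqs))   # pad to a fixed length

def zipf_similarity(a, b, top=20):
    """Cosine similarity between the two rank-frequency profiles; article
    pairs scoring too low would be filtered out as non-parallel."""
    p, q = rank_freq_profile(a, top), rank_freq_profile(b, top)
    dot = sum(x * y for x, y in zip(p, q))
    return dot / (sqrt(sum(x * x for x in p)) * sqrt(sum(x * x for x in q)))
```

A document compared against itself scores 1.0; documents with very different frequency shapes score lower.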
Classifying educational resources such as videos and articles can be challenging in low-resource languages due to the lack of appropriate tools and sufficient labeled data. To overcome this problem, we propose a cross-lingual classification method that utilizes resources created in one high-resource language, such as English, to perform classification in many low-resource languages. The data scarcity issue is addressed by transferring information from high-resource languages to low-resource ones. First, word embeddings are extracted using a previously proposed framework; then classifiers are trained on the high-resource-language documents. Two versions of the method, using different higher-level composition functions, are implemented and compared.
Gihad N. Sohsah, Onur Güzey, Zaina Tarmanini, "Classifying Educational Lectures in Low-Resource Languages," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0076
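The simplest higher-level composition function — averaging word embeddings into a document vector, then classifying against prototypes built in the shared space — can be sketched as below. The embeddings and class prototypes are toy values; the paper's actual composition functions and classifier are not specified here:

```python
from math import sqrt

def average_embedding(tokens, emb):
    """Compose a document vector by averaging its word embeddings —
    the simplest higher-level composition function."""
    vecs = [emb[t] for t in tokens if t in emb]
    dim = len(next(iter(emb.values())))
    if not vecs:
        return [0.0] * dim
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu, nv = sqrt(sum(a * a for a in u)), sqrt(sum(a * a for a in v))
    return dot / (nu * nv) if nu and nv else 0.0

def classify(tokens, emb, prototypes):
    """Assign the class whose prototype is nearest in the embedding space;
    the embedding space is shared across languages, so the prototypes can be
    built from high-resource-language documents only."""
    doc = average_embedding(tokens, emb)
    return max(prototypes, key=lambda c: cosine(doc, prototypes[c]))
```

Because only the embedding lookup is language-specific, the trained classifier transfers to any language mapped into the same space.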
A novel algorithm is proposed in this study for improving the accuracy and robustness of human biometric identification using electrocardiograms (ECG) from mobile devices. The algorithm combines the advantages of both fiducial and non-fiducial ECG features and implements a fully automated, two-stage cascaded classification system using wavelet analysis coupled with probabilistic random forest machine learning. The proposed algorithm achieves a high identification accuracy of 99.43% for the MIT-BIH Arrhythmia database, 99.98% for the MIT-BIH Normal Sinus Rhythm database, 100% for the ECG data acquired from an ECG sensor integrated into a mobile phone, and 98.79% for the PhysioNet Human-ID database acquired from multiple tests within a 6-month span. These results demonstrate the effectiveness and robustness of the proposed algorithm for biometric identification, hence supporting its practicality in applications such as remote healthcare and cloud data security.
Robin Tan, M. Perkowski, "ECG Biometric Identification Using Wavelet Analysis Coupled with Probabilistic Random Forest," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0038
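A final aggregation step of such a system might average per-heartbeat class-probability vectors (as a probabilistic random forest would emit) and identify the subject as the most likely class. This is a hypothetical sketch of that aggregation only, not the paper's two-stage cascade:

```python
def identify_subject(beat_probs):
    """Average per-heartbeat class-probability vectors and pick the most
    likely subject; beat_probs is a list of {subject: probability} dicts."""
    n = len(beat_probs)
    classes = beat_probs[0].keys()
    avg = {c: sum(p[c] for p in beat_probs) / n for c in classes}
    return max(avg, key=avg.get), avg
```

Averaging over many beats makes the decision robust to individual noisy or arrhythmic beats.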
A. Raghunath, K. T. Sreekumar, C. S. Kumar, K. I. Ramachandran
High-accuracy fault diagnosis systems are extremely important for effective condition-based maintenance (CBM) of rotating machines. In this work, we develop a fault diagnosis system using time- and frequency-domain statistical features as input to a backend support vector machine (SVM) classifier, and evaluate the baseline system under both speed-dependent and speed-independent conditions. We show how feature mapping and feature normalization can enhance the speed-independent performance of machine fault diagnosis systems. We first perform feature mapping using locality-constrained linear coding (LLC), which maps the input features to a higher-dimensional feature space used as input to an SVM classifier (LLC-SVM). This significantly improves the speed-independent performance of the fault identification system: we obtain absolute improvements of 11.81% and 10.53% for the time- and frequency-domain LLC-SVM systems over their respective baselines. We then explore variance normalization, treating speed-specific variations as noise, to further improve performance, and obtain absolute improvements of 8.20% and 6.71% over the time- and frequency-domain LLC-SVM systems respectively. The variance-normalized LLC-SVM system thus performs best among all the systems considered.
A. Raghunath, K. T. Sreekumar, C. S. Kumar, K. I. Ramachandran, "Improving Speed Independent Performance of Fault Diagnosis Systems through Feature Mapping and Normalization," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0136
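The variance-normalization step — rescaling each feature dimension so that the speed-specific spread is equalized — can be sketched in a few lines. This is a generic per-dimension normalization, shown only to make the idea concrete:

```python
def variance_normalize(X, eps=1e-12):
    """Center each feature dimension and scale it to unit variance, so that
    dimensions dominated by speed-specific spread no longer dominate the
    SVM's distance computations."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    var = [sum((row[j] - means[j]) ** 2 for row in X) / n for j in range(d)]
    scale = [(v + eps) ** 0.5 for v in var]   # eps guards constant dimensions
    return [[(row[j] - means[j]) / scale[j] for j in range(d)] for row in X]

X = [[1.0, 10.0], [2.0, 30.0], [3.0, 50.0]]
Z = variance_normalize(X)   # each column now has mean 0 and variance 1
```

In practice the normalization statistics would be estimated on training data and reused at test time.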
This paper studies the computational burden of a reference-modified PID control with neural network prediction for dc-dc converters. Flexible control methods are required to achieve a superior transient response, since the converter has nonlinear behavior; however, the computational burden is an obstacle to implementing such control on computing devices. In this paper, a neural network is adopted to improve the transient response of the dc-dc converter's output voltage while accounting for its computational cost. The neural network part runs with a longer computation period than the PID main control part. This is feasible because, in each of its periods, the neural network produces the multiple predictions required for reference modification in the intervening main control periods; the reference modification can therefore still be applied in every main control period. The results confirm that the proposed method improves the transient response effectively while reducing the computational burden of the neural network control.
H. Maruta, Hironobu Taniguchi, F. Kurokawa, "A Study on Effects of Different Control Period of Neural Network Based Reference Modified PID Control for DC-DC Converters," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0081
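The two-rate scheduling described above — a slow predictor that emits several reference modifications at once, consumed one per fast control period — can be sketched as a loop skeleton. The predictor and PID step here are hypothetical stand-ins, not the paper's converter model:

```python
def run_control(steps, n_pred, predictor, pid_step):
    """Main PID loop runs every period; the slower neural-network part runs
    once every n_pred periods but returns n_pred reference modifications,
    so a modification is still applied in every main control period."""
    pending = []
    log = []
    for t in range(steps):
        if not pending:                      # NN fires on its longer period
            pending = list(predictor(t))     # n_pred modifications at once
        mod = pending.pop(0)                 # one modification per PID period
        log.append(pid_step(t, mod))
    return log

# Hypothetical stand-ins: a predictor emitting a fixed batch of modifications
# and a PID step that just records the modified reference value.
log = run_control(
    steps=6, n_pred=3,
    predictor=lambda t: [0.1, 0.2, 0.3],
    pid_step=lambda t, mod: 1.0 + mod,       # nominal reference 1.0, modified
)
```

The expensive prediction thus runs at one third of the control rate without leaving any main control period unmodified.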
Solar power penetration has made site-specific energy ratings an essential requirement for utilities, independent system operators, and regional transmission organizations, since such ratings lead to reliable and efficient energy production as levels of solar power integration increase. This study concentrates on the partitional clustering analysis of monthly average insolation period data for the 75 provinces of Turkey. Together with the k-means clustering algorithm, we use Pearson correlation, cosine, squared Euclidean, and city-block distance measures for high-dimensional neighborhood measurement, and use the silhouette width to validate the resulting clusterings. By comparing star glyph plots with the k-means clustering results, the most productive and the most unfavorable provinces are identified on the basis of the monthly average insolation period.
M. Yesilbudak, I. Colak, R. Bayindir, "k-Means Partition of Monthly Average Insolation Period Data for Turkey," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0077
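The combination of k-means partitioning and silhouette-width validation can be sketched compactly. This is a generic Euclidean-distance version for illustration — the study also uses Pearson correlation, cosine, and city-block distances, and its initialization is not reproduced here (the first k points seed the centroids, a simplification):

```python
def dist2(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b))

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm with deterministic (first-k-points) seeding."""
    cents = [list(X[i]) for i in range(k)]
    labels = [0] * len(X)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(x, cents[c])) for x in X]
        for c in range(k):
            members = [x for x, l in zip(X, labels) if l == c]
            if members:
                cents[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, cents

def mean_silhouette(X, labels):
    """Average silhouette width s(i) = (b(i) - a(i)) / max(a(i), b(i)),
    where a is the mean intra-cluster distance and b the mean distance to
    the nearest other cluster."""
    vals = []
    for i, x in enumerate(X):
        own = labels[i]
        same = [dist2(x, y) ** 0.5 for j, y in enumerate(X)
                if labels[j] == own and j != i]
        a = sum(same) / len(same) if same else 0.0
        b = min(
            sum(dist2(x, y) ** 0.5 for j, y in enumerate(X) if labels[j] == c)
            / max(1, sum(1 for l in labels if l == c))
            for c in set(labels) if c != own
        )
        vals.append((b - a) / max(a, b) if max(a, b) > 0 else 0.0)
    return sum(vals) / len(vals)
```

A mean silhouette near 1 indicates well-separated clusters, which is how the achieved partitions are validated.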
Once a software project has been developed and delivered, any modification to it corresponds to maintenance. Software maintenance (SM) involves modifications to keep a software project usable in a changed or changing environment, reactive modifications to correct discovered faults, and modifications to improve performance or maintainability. Since the duration of SM should be predicted, in this study, after a statistical analysis of projects maintained on several platforms and programming language generations, data sets were selected for training and testing multilayer feedforward neural networks (i.e., the multilayer perceptron, MLP). These data sets were obtained from the International Software Benchmarking Standards Group. Results based on Wilcoxon statistical tests show that prediction accuracy with the MLP is statistically better than with statistical regression models when software projects were maintained on (1) the Mid-Range platform and coded in third-generation programming languages, and (2) the Multi platform and coded in fourth-generation programming languages.
C. López-Martín, "Feedforward Neural Networks for Predicting the Duration of Maintained Software Projects," in 2016 15th IEEE International Conference on Machine Learning and Applications (ICMLA), Dec. 2016. doi: 10.1109/ICMLA.2016.0093
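A multilayer feedforward network for a regression target like maintenance duration can be sketched from scratch. The single input feature, network size, learning rate, and synthetic data below are all hypothetical — the paper's models are trained on ISBSG project features, not on this toy function:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for maintenance-duration data: one input feature and a noisy
# nonlinear target.
X = rng.uniform(-1, 1, size=(64, 1))
y = np.sin(2 * X) + 0.05 * rng.normal(size=(64, 1))

# One hidden layer of tanh units, trained by full-batch gradient descent.
W1 = rng.normal(scale=0.5, size=(1, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(X):
    H = np.tanh(X @ W1 + b1)       # hidden activations
    return H, H @ W2 + b2          # linear output for regression

losses = []
lr = 0.1
for _ in range(300):
    H, pred = forward(X)
    err = pred - y                       # dL/dpred for mean squared error
    losses.append(float((err ** 2).mean()))
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The training loss falls steadily, which is all this sketch is meant to show; the paper's comparison against regression models uses Wilcoxon tests on held-out accuracy, not training loss.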