Pub Date: 2017-11-28 | DOI: 10.1109/IPTA.2017.8310116
J. Benois-Pineau, Mihai Mitrea
Rather than settling the theoretical, methodological and practical expectations, the impressive number of state-of-the-art saliency-oriented studies raises new fundamental questions about the very nature of this psycho-cognitive process. Such questions range from basic modeling aspects, such as the dependency of saliency on the representation format, to its potential relationship to other fundamental research areas, such as information theory. The present survey, structured according to three main application fields of saliency (visual quality evaluation, watermarking and task-oriented computer vision), is meant to identify the latest research trends.
Title: "Extraction of saliency in images and video: Problems, methods and applications. A survey"
Published in: 2017 Seventh International Conference on Image Processing Theory, Tools and Applications (IPTA)
Pub Date: 2017-11-27 | DOI: 10.1109/IPTA.2017.8310150
Qianyi Jiang, S. Moussaoui, J. Idier, G. Collewet, Mai Xu
This paper addresses maximum likelihood estimation of images corrupted by Rician noise, with the aim of proposing an efficient optimization method. The application example is the restoration of magnetic resonance images. Starting from the fact that the criterion to minimize is non-convex but unimodal, the main contribution of this work is an optimization scheme based on the majorization-minimization (MM) framework, after introducing a change of variable that yields a strictly convex criterion. The resulting descent algorithm is compared to the classical MM descent algorithm, and its performance is assessed on synthetic and real MR images. Finally, by combining these two MM algorithms, two optimization strategies are proposed to improve the numerical efficiency of the image restoration at any signal-to-noise ratio.
Title: "Majorization-minimization algorithms for maximum likelihood estimation of magnetic resonance images"
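The abstract does not give the update rule, but for Rician maximum likelihood with known noise level the classical MM/EM scheme is a fixed-point iteration on the ratio of modified Bessel functions. The sketch below illustrates that classical iteration, not necessarily the variable-change scheme the paper proposes; the starting point and iteration count are assumptions.

```python
import numpy as np
from scipy.special import i0e, i1e

def mm_rician_ml(m, sigma, a0=None, iters=50):
    """Fixed-point MM/EM iteration for the ML estimate of the noise-free
    intensity a from Rician-distributed magnitudes m, assuming a known
    noise level sigma. Exponentially scaled Bessel functions keep the
    ratio I1(z)/I0(z) numerically stable for large z."""
    a = np.mean(m) if a0 is None else a0
    for _ in range(iters):
        z = a * m / sigma**2
        # I1(z)/I0(z) == i1e(z)/i0e(z): the exp(-z) scaling cancels.
        a = np.mean(m * i1e(z) / i0e(z))
    return a

# Simulate Rician data: magnitude of a complex signal in Gaussian noise.
rng = np.random.default_rng(1)
a_true, sigma, n = 50.0, 5.0, 10000
m = np.abs(a_true + rng.normal(0, sigma, n)
           + 1j * rng.normal(0, sigma, n))
print(round(mm_rician_ml(m, sigma), 1))  # close to a_true = 50
```

At high SNR the iteration converges in a few steps because I1/I0 approaches 1; at low SNR many more iterations are needed, which is the regime the paper's convexified criterion targets.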
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310146
Said Yacine Boulahia, É. Anquetil, F. Multon, R. Kulpa
Over the past few years, advances in commercial 3D sensors have substantially promoted research on dynamic hand gesture recognition. At the same time, whole-body gesture recognition has attracted increasing attention since the emergence of Kinect-like sensors. Both research topics deal with human-made motions and are likely to face similar challenges. In this paper, our aim is thus to evaluate the applicability of an action-recognition feature set to modeling dynamic hand gestures using skeleton data. Furthermore, existing datasets are often composed of pre-segmented gestures performed with a single hand only. We therefore collected a more challenging dataset, which contains unsegmented streams of 13 hand gesture classes performed with either one or two hands. Our approach is first evaluated on an existing dataset, namely the DHG dataset, and then on our collected dataset. Better results than previous approaches are reported.
Title: "Dynamic hand gesture recognition based on 3D pattern assembled trajectories"
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310120
Julien Valognes, Maria A. Amer, Niloufar Salehi Dastjerdi
The rapid increase in digital video content demands effective summarization techniques, especially with the advent of RGBD videos. Keyframe extraction significantly reduces the amount of raw data in a video sequence. In this paper, we present a two-stage (histogram and filtering) keyframe extraction algorithm applicable to RGB and RGBD videos. In the first stage, RGB and depth histogram similarities of consecutive frames are computed and candidate keyframes are extracted. In the second stage, we filter neighboring candidate keyframes based on the MAD of their Euclidean distance and their MSE. Subjective and objective experimental results show that our algorithm effectively extracts keyframes from both RGB and RGBD videos.
Title: "Effective keyframe extraction from RGB and RGB-D video sequences"
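The two-stage idea can be sketched in a few lines. This is a minimal grayscale-only illustration, not the authors' implementation: the similarity measure (histogram intersection), the thresholds, and the use of MSE alone in stage two are assumptions for the sake of a runnable example.

```python
import numpy as np

def hist_similarity(a, b, bins=64):
    """Normalised histogram intersection between two grayscale frames."""
    ha, _ = np.histogram(a, bins=bins, range=(0, 256))
    hb, _ = np.histogram(b, bins=bins, range=(0, 256))
    return np.minimum(ha, hb).sum() / max(ha.sum(), 1)

def candidate_keyframes(frames, sim_threshold=0.8):
    """Stage 1: a frame whose histogram differs enough from its
    predecessor becomes a candidate keyframe."""
    keep = [0]
    for i in range(1, len(frames)):
        if hist_similarity(frames[i - 1], frames[i]) < sim_threshold:
            keep.append(i)
    return keep

def filter_neighbours(frames, candidates, mse_threshold=25.0):
    """Stage 2: drop a candidate that is nearly identical (low MSE)
    to the previously kept keyframe."""
    kept = [candidates[0]]
    for idx in candidates[1:]:
        mse = float(np.mean((frames[kept[-1]].astype(float)
                             - frames[idx].astype(float)) ** 2))
        if mse > mse_threshold:
            kept.append(idx)
    return kept

# Toy sequence: two static shots with a hard cut in the middle.
dark = np.full((32, 32), 20, dtype=np.uint8)
bright = np.full((32, 32), 200, dtype=np.uint8)
frames = [dark] * 5 + [bright] * 5
cands = candidate_keyframes(frames)
print(filter_neighbours(frames, cands))  # → [0, 5]
```

For RGBD input, the same stage-one test would be applied to the depth histogram as well, combining the two similarity scores.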
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310132
E. Balster, David B. Mundy, Andrew M. Kordik, Kerry L. Hill
In this paper, a synthetic aperture radar (SAR) image formation simulator is used to objectively evaluate parameter selection within the digital spotlighting process. Specifically, recommendations for the filter type and filter order of the low-pass filters used in the range and azimuth decimation processes within the digital spotlighting algorithm are determined to maximize image quality and minimize computational cost. Results show that an FIR low-pass filter with a Taylor (n = 5) window provides the highest image quality over a wide range of filter orders and decimation factors. Additionally, a linear relationship between filter length and decimation factor is found.
Title: "Digital spotlighting parameter evaluation for SAR imaging"
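A Taylor-windowed FIR low-pass prototype of the kind evaluated here can be built by windowing a sinc. This is a generic sketch, not the simulator's filter design: the tap count, the cutoff placement at half the decimated Nyquist rate, and the unity-DC-gain normalisation are assumptions (requires SciPy ≥ 1.5 for `windows.taylor`).

```python
import numpy as np
from scipy.signal import windows

def taylor_lowpass(numtaps, cutoff, nbar=5):
    """Windowed-sinc FIR low-pass: an ideal low-pass impulse response
    truncated by a Taylor (nbar) window, then normalised to unity DC
    gain. cutoff is a fraction of the sample rate (0 < cutoff < 0.5)."""
    n = np.arange(numtaps) - (numtaps - 1) / 2
    h = 2 * cutoff * np.sinc(2 * cutoff * n)   # ideal low-pass
    h *= windows.taylor(numtaps, nbar=nbar)    # sidelobe control
    return h / h.sum()

# Decimation by D must keep only the band below 1/(2D) of the rate.
D = 4
h = taylor_lowpass(numtaps=8 * D + 1, cutoff=1.0 / (2 * D))
print(len(h), round(h.sum(), 6))  # → 33 1.0
```

The paper's finding of a linear filter-length/decimation-factor relationship is reflected in the `numtaps = 8 * D + 1` choice above, where the proportionality constant is hypothetical.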
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310114
Yan Zhang, Georg Layher, H. Neumann
In application domains such as human-robot interaction and ambient intelligence, an intelligent agent is expected to respond to a person's actions efficiently, or to make predictions while the activity is still ongoing. In this paper, we investigate the problem of continuous activity understanding, based on a visual pattern extraction mechanism that fuses decomposed body pose features from estimated 2D skeletons (based on deep-learning skeleton inference) and localized appearance-motion features around spatiotemporal interest points (STIPs). Since human activities are observed and inferred gradually, we partition the video into snippets, extract the visual patterns accumulatively and infer the activities in an online fashion. We evaluated the proposed method on two benchmark datasets and achieved 92.6% on the KTH dataset and 92.7% on the Rochester Assisted Daily Living dataset in the equilibrated inference states. In parallel, we find that context information, mainly contributed by STIPs, is probably more favourable to activity recognition than pose information, especially in scenarios of daily living activities. In addition, incorporating the visual patterns of activities from early stages to train the classifier can improve the performance of early recognition; however, it can degrade the recognition rate at later times. To overcome this issue, we propose a mixture model, where the classifier trained with early visual patterns is used in early stages while the classifier trained without them is used in later stages. The experimental results show that this straightforward approach can improve early recognition while retaining the recognition correctness at later times.
Title: "Continuous activity understanding based on accumulative pose-context visual patterns"
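The mixture model described above amounts to stage-dependent routing between two trained classifiers. A minimal sketch, in which the switch point, the classifier interfaces, and the toy label logic are all hypothetical:

```python
def mixture_predict(progress, early_clf, late_clf, features, switch=0.5):
    """Stage-dependent routing: early in the activity, use the classifier
    trained with early-stage visual patterns; later, switch to the one
    trained without them. 'progress' is the observed fraction of the
    activity; the switch point is a hypothetical parameter."""
    clf = early_clf if progress < switch else late_clf
    return clf(features)

# Toy stand-ins for the two trained classifiers.
early_clf = lambda f: "boxing" if f[0] > 0.5 else "walking"
late_clf = lambda f: "boxing" if sum(f) > 1.5 else "walking"

print(mixture_predict(0.2, early_clf, late_clf, [0.9, 0.1]))  # early stage
print(mixture_predict(0.9, early_clf, late_clf, [0.9, 0.1]))  # late stage
```

A hard switch is the simplest choice; a soft blend of the two classifiers' scores around the switch point would be a natural variant.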
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310097
Dona Valy, M. Verleysen, Kimheng Sok
Text line segmentation is one of the most essential pre-processing steps in character recognition and document analysis. In ancient documents, a variety of deformations caused by aging produce noise that makes the binarization process very challenging. Moreover, due to irregular layout, such as skewness and fluctuation of text lines, segmenting an ancient manuscript page into lines remains an open problem. In this paper, we propose a novel line segmentation scheme for grayscale images of ancient Khmer documents. First, a stroke width transform is applied to extract connected components from the document page. The number and medial positions of text lines are estimated using a modified piece-wise projection profile technique. Those positions are then adapted to the curvature of the actual text lines. Finally, a path-finding approach is used to separate touching components and to mark the boundaries of the text lines. Experiments are conducted on a dataset of 110 pages of Khmer palm leaf manuscript images, comparing the robustness of the proposed approach with existing methods from the literature.
Title: "Line segmentation for grayscale text images of Khmer palm leaf manuscripts"
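The projection-profile step can be illustrated on a binarized page: summing ink per row gives a 1D profile whose peaks mark line cores. This is a simplified global version of the paper's piece-wise technique, with hypothetical smoothing and threshold parameters.

```python
import numpy as np

def line_positions(binary_page, smooth=5):
    """Estimate medial positions of text lines from the horizontal
    projection profile: rows of high ink density are line cores."""
    profile = binary_page.sum(axis=1).astype(float)
    kernel = np.ones(smooth) / smooth
    profile = np.convolve(profile, kernel, mode="same")
    threshold = 0.5 * profile.max()
    rows = profile > threshold
    # Collapse each contiguous run of high-density rows to its centre.
    centres, start = [], None
    for r, hit in enumerate(rows):
        if hit and start is None:
            start = r
        elif not hit and start is not None:
            centres.append((start + r - 1) // 2)
            start = None
    if start is not None:
        centres.append((start + len(rows) - 1) // 2)
    return centres

# Synthetic page: two "text lines" of ink at rows 10-14 and 30-34.
page = np.zeros((50, 200), dtype=np.uint8)
page[10:15, :] = 1
page[30:35, :] = 1
print(line_positions(page))  # → [12, 32]
```

The piece-wise variant in the paper computes such profiles on vertical strips of the page, so that skewed or fluctuating lines are tracked locally rather than assumed horizontal.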
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310117
Marwa Ammar, M. Mitrea, Ismail Boujelbane
The present paper studies the potential synergies between a popular approach to saliency extraction, the feature integration theory (FIT), and source coding principles. By combining these two approaches, a new saliency model, extracted directly at the level of the HEVC stream syntax elements, is defined. The experiments compare the new model with human saliency captured by eye-tracking devices. They consider a reference corpus of fixation density maps, two objective criteria, two objective measures and seven state-of-the-art saliency models (three acting in the pixel domain and four in the compressed domain).
Title: "HEVC stream saliency extraction: Synergies between FIT and information theory principles"
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310078
Irida Shallari, Qaiser Anwar, Muhammad Imran, M. O’nils
Background subtraction is one of the fundamental steps in the image-processing pipeline for distinguishing foreground from background. Most methods have been investigated on visible-spectrum images, whose challenges differ from those of thermal images. Thermal sensors are invariant to illumination changes and raise fewer privacy concerns. We propose using a low-pass IIR filter for background modelling in thermographic imagery, due to its better performance than algorithms such as Mixture of Gaussians and K-nearest neighbours, while reducing the memory requirements for implementation on embedded architectures. Based on the analysis of four image datasets, both indoor and outdoor, with and without people present, the learning rate of the filter is set to 3×10⁻³ Hz, and the proposed model is implemented on an Artix-7 FPGA.
Title: "Background modelling, analysis and implementation for thermographic images"
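A first-order low-pass IIR background model needs only one stored frame, which is what makes it attractive on an FPGA. The sketch below shows the recursion in floating point; the foreground threshold and the simulated data are illustrative assumptions, and a hardware version would use fixed-point arithmetic.

```python
import numpy as np

def update_background(background, frame, alpha=0.003):
    """First-order low-pass IIR update: the background tracks slow
    thermal drift while fast-moving foreground is attenuated. Only
    the previous background frame must be stored."""
    return alpha * frame + (1.0 - alpha) * background

def foreground_mask(background, frame, threshold=10.0):
    """Pixels deviating strongly from the background model are
    labelled foreground (threshold is a hypothetical parameter)."""
    return np.abs(frame - background) > threshold

# Simulated thermal stream: a static scene plus sensor noise.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 100.0)
model = scene.copy()
for _ in range(50):
    frame = scene + rng.normal(0, 1.0, scene.shape)
    model = update_background(model, frame)
# The model stays close to the true static scene despite the noise.
print(float(np.abs(model - scene).mean()) < 2.0)  # → True
```

The learning rate `alpha` plays the role of the paper's 3×10⁻³ Hz setting after scaling by the frame rate; a smaller value absorbs foreground more slowly but adapts less to genuine scene changes.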
Pub Date: 2017-11-01 | DOI: 10.1109/IPTA.2017.8310140
Mohamed Mhiri, Sherif Abuelwafa, Christian Desrosiers, M. Cheriet
Classifying historical document images is a challenging task due to the high variability of their content and the frequent degradation of these documents. For scholars, footnotes are essential for analyzing and investigating historical documents. In this work, a novel classification method is proposed for detecting and segmenting footnotes in document images. Our method uses horizontal histograms of text lines as inputs to a 1D convolutional neural network (CNN). Experiments on a dataset of historical documents show the proposed method to be effective in dealing with the high variability of footnotes, even with a small training set. Our method yielded an overall F-measure of 56.36% and a precision of 89.76%, significantly outperforming existing approaches for this task.
Title: "Footnote-based document image classification using 1D convolutional neural networks and histograms"
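The input representation and the first layer of such a network can be sketched without a deep-learning framework. This is an illustrative forward pass only, with a hand-set edge kernel standing in for a learned one; the network architecture and the footnote heuristic are assumptions, not the paper's trained model.

```python
import numpy as np

def line_histogram(line_image):
    """Horizontal (per-column) ink histogram of a text-line image --
    the 1D signal the classifier operates on."""
    return line_image.sum(axis=0).astype(float)

def conv1d(x, kernels, bias):
    """Minimal valid-mode 1D convolution layer with ReLU, standing in
    for the first layer of the CNN."""
    k = kernels.shape[1]
    out = np.empty((kernels.shape[0], len(x) - k + 1))
    for c, w in enumerate(kernels):
        for i in range(out.shape[1]):
            out[c, i] = x[i:i + k] @ w + bias[c]
    return np.maximum(out, 0.0)

# A footnote line is typically set in smaller type, so its histogram
# is lower than a body-text line's; an edge-detecting kernel reacts
# more strongly to the taller ink profile.
body = np.zeros((12, 40), dtype=np.uint8); body[2:10, 5:35] = 1
foot = np.zeros((12, 40), dtype=np.uint8); foot[4:8, 5:35] = 1
edge = np.array([[-1.0, 0.0, 1.0]])       # hypothetical learned kernel
h_body = conv1d(line_histogram(body), edge, np.zeros(1))
h_foot = conv1d(line_histogram(foot), edge, np.zeros(1))
print(h_body.max() > h_foot.max())  # → True
```

In the actual method, several such layers followed by a classifier head would map each line's histogram to a footnote/non-footnote decision.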