A Practical Review on Medical Image Registration: From Rigid to Deep Learning Based Approaches
Natan Andrade, F. Faria, F. Cappabianco
2018 31st SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI)
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00066
The large variety of medical image modalities (e.g., Computed Tomography, Magnetic Resonance Imaging, and Positron Emission Tomography) acquired from the same body region of a patient, together with recent advances in computer architectures with faster and larger CPUs and GPUs, opens a new, exciting, and largely unexplored world for the image registration field. Precise and accurate image registration makes it possible to understand the etiology of diseases, improve surgery planning and execution, detect otherwise unnoticed signs of health problems, and map brain function. The goal of this paper is to present a review of the state of the art in medical image registration, starting from the preprocessing steps, covering the most popular methodologies in the literature, and finishing with the more recent advances and perspectives arising from the application of Deep Learning architectures.
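As a toy illustration of the rigid end of the registration spectrum the review covers, the sketch below aligns two images by optimizing a 2D translation against a mean-squared-error metric. The images, the metric, and the Powell optimizer are illustrative assumptions, not the paper's method.

```python
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def mse(params, fixed, moving):
    # Translate the moving image and compare it to the fixed one.
    dy, dx = params
    warped = shift(moving, (dy, dx), order=1, mode="nearest")
    return float(np.mean((fixed - warped) ** 2))

def register_translation(fixed, moving):
    # Minimize MSE over a 2D translation (the simplest rigid transform).
    res = minimize(mse, x0=[0.0, 0.0], args=(fixed, moving),
                   method="Powell")
    return res.x

# Toy example: a bright square shifted by (3, -2) pixels.
fixed = np.zeros((64, 64))
fixed[20:40, 20:40] = 1.0
moving = shift(fixed, (-3.0, 2.0), order=1)
dy, dx = register_translation(fixed, moving)
print(round(dy), round(dx))  # recovered translation, close to (3, -2)
```

Real registration pipelines replace the MSE with multimodal metrics such as mutual information and the translation with affine or deformable transforms, but the optimize-a-metric loop is the same.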
Graph Spectral Filtering for Network Simplification
Markus Diego Dias, Fabiano Petronetto, Paola Valdivia, L. G. Nonato
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00051
Visualization is an important tool in the analysis and understanding of networks and their content. However, visualization tools face major challenges when dealing with large networks, mainly due to visual clutter. In this context, network simplification has been a main alternative for handling massive networks, reducing complexity while preserving relevant patterns of the network's structure and content. In this paper we propose a methodology that relies on Graph Signal Processing theory to filter multivariate data associated with network nodes, assisting and enhancing network simplification and visualization tasks. The simplification process takes into account both the topology and the multivariate data associated with network nodes to create a hierarchical representation of the network. The effectiveness of the proposed methodology is assessed through a comprehensive set of quantitative evaluations and comparisons, which gauge the impact of the proposed filtering process on the simplification and visualization tasks.
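The core Graph Signal Processing operation behind such filtering can be sketched as a graph Fourier low-pass filter: expand the node signal in the Laplacian eigenbasis, discard the high graph frequencies, and transform back. The path graph and the `keep` cut-off below are illustrative assumptions, not the authors' filter design.

```python
import numpy as np

def graph_lowpass(adj, signal, keep):
    # Combinatorial Laplacian L = D - A of an undirected graph.
    deg = np.diag(adj.sum(axis=1))
    lap = deg - adj
    # Graph Fourier basis: eigenvectors of L, ordered by frequency.
    freqs, basis = np.linalg.eigh(lap)
    # Graph Fourier transform of the node signal.
    coeffs = basis.T @ signal
    # Low-pass filter: keep only the `keep` lowest graph frequencies.
    coeffs[keep:] = 0.0
    return basis @ coeffs

# Path graph on 5 nodes carrying a noisy ramp signal.
adj = np.zeros((5, 5))
for i in range(4):
    adj[i, i + 1] = adj[i + 1, i] = 1.0
signal = np.array([0.0, 1.1, 1.9, 3.2, 4.0])
smooth = graph_lowpass(adj, signal, keep=2)
```

For multivariate node data, the same filter is applied to each attribute column independently.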
A Method for Opinion Classification in Video Combining Facial Expressions and Gestures
Airton Gaio Junior, E. Santos
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00011
Most research dealing with video-based opinion recognition combines data from three different sources: video, audio, and text. As a consequence, the resulting solutions are based on complex, language-dependent models. Beyond this complexity, such solutions often attain low performance in practical applications. To overcome these drawbacks, this work presents a method for opinion classification that uses only video as its data source: facial expression and body gesture information are extracted from online videos and combined to achieve higher classification rates. The proposed method uses feature encoding strategies to improve data representation and to facilitate the classification task, aiming to predict the user's opinion with high accuracy and independently of the language used in the videos. Experiments were carried out using three public databases and three baselines. The results show that, even performing only visual analysis of the videos, the proposed method achieves accuracy and precision rates up to 16% higher than baselines that analyze visual, audio, and textual video data. Moreover, the proposed method can identify emotions in videos whose language differs from the language used for training.
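The combination of two visual modalities by feature encoding can be sketched as per-modality normalization followed by concatenation and a stand-in classifier. The synthetic features, the nearest-centroid classifier, and all names below are hypothetical, not the paper's pipeline.

```python
import numpy as np

def l2_normalize(x):
    # Normalize each feature vector so both modalities share one scale.
    return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-12)

def fuse(face_feats, gesture_feats):
    # Feature-level fusion: concatenate the normalized encodings.
    return np.hstack([l2_normalize(face_feats), l2_normalize(gesture_feats)])

def nearest_centroid_fit(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def nearest_centroid_predict(centroids, X):
    labels = np.array(sorted(centroids))
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in labels])
    return labels[d.argmin(axis=0)]

# Synthetic "negative"/"positive" opinion features for two modalities.
rng = np.random.default_rng(1)
y_train = np.repeat([0, 1], 20)
loc = (2 * y_train - 1)[:, None]        # class means at -1 and +1
face = rng.normal(loc=loc, scale=0.3, size=(40, 8))
gest = rng.normal(loc=loc, scale=0.3, size=(40, 6))
X = fuse(face, gest)
model = nearest_centroid_fit(X, y_train)
pred = nearest_centroid_predict(model, X)
acc = float((pred == y_train).mean())
```

The point of the encoding step is that a classifier trained on the fused space never sees words, which is what makes the approach language-independent.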
3D Medical Objects Retrieval Approach Using SPHARMs Descriptor and Network Flow as Similarity Measure
L. Bergamasco, K. Lima, C. Rochitte, Fátima L. S. Nunes
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00049
Processing data to obtain useful information is a trending topic in computing, given the high demand from society for efficient techniques to perform this activity. Spherical Harmonics (SPHARMs) have been widely used in three-dimensional (3D) object processing, and the harmonic coefficients generated by this mathematical theory are considered a robust source of information about 3D objects. Ford-Fulkerson, in turn, is a classical method in graph theory that solves network flow problems. In this work we demonstrate the potential of using SPHARMs together with the Ford-Fulkerson method, as descriptor and similarity measure respectively. This article also shows how we adapted the latter to turn it into a similarity measure. Our approach was validated on a 3D medical dataset composed of 3D left-ventricle surfaces, some of them presenting Congestive Heart Failure (CHF). The results indicated an average precision of 90%. In addition, the execution time was 65% lower than that of a previously tested descriptor. From these results we conclude that our approach, particularly the proposed Ford-Fulkerson adaptation, has great potential for retrieving 3D medical objects.
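The Ford-Fulkerson method the authors adapt computes a maximum flow by repeatedly augmenting along paths in the residual graph. A minimal BFS-based variant (Edmonds-Karp) looks like this; the example network is the classic textbook instance, not data from the paper.

```python
from collections import deque

def max_flow(capacity, source, sink):
    # Ford-Fulkerson with BFS augmenting paths (the Edmonds-Karp variant).
    n = len(capacity)
    residual = [row[:] for row in capacity]
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph.
        parent = [-1] * n
        parent[source] = source
        queue = deque([source])
        while queue and parent[sink] == -1:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 0:
                    parent[v] = u
                    queue.append(v)
        if parent[sink] == -1:          # no augmenting path left
            return flow
        # Find the bottleneck capacity along the path, then augment.
        bottleneck = float("inf")
        v = sink
        while v != source:
            u = parent[v]
            bottleneck = min(bottleneck, residual[u][v])
            v = u
        v = sink
        while v != source:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Classic small network: the maximum flow from node 0 to node 5 is 23.
cap = [[0, 16, 13, 0, 0, 0],
       [0, 0, 10, 12, 0, 0],
       [0, 4, 0, 0, 14, 0],
       [0, 0, 9, 0, 0, 20],
       [0, 0, 0, 7, 0, 4],
       [0, 0, 0, 0, 0, 0]]
print(max_flow(cap, 0, 5))  # → 23
```

Turning this into a similarity measure, as the paper does, amounts to building a flow network from two descriptors and reading the optimal flow value as their matching score.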
Image-Based Visualization of Classifier Decision Boundaries
F. C. M. Rodrigues, R. Hirata, A. Telea
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00052
Understanding how a classifier partitions a high-dimensional input space and assigns labels to the parts is an important task in machine learning. Current methods for this task mainly use color-coded sample scatterplots, which do not explicitly show the actual decision boundaries or confusion zones. We propose an image-based technique to improve such visualizations. The method samples the 2D space of a dimensionality-reduction projection and color-codes relevant classifier outputs, such as the majority class label, the confusion, and the sample density, to render a dense depiction of the high-dimensional decision boundaries. Our technique is simple to implement, handles any classifier, and has only two simple-to-control free parameters. We demonstrate our proposal on several real-world high-dimensional datasets, classifiers, and two different dimensionality reduction methods.
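The dense-map idea can be sketched in a few lines: sample a regular grid over the projected 2D space, query the classifier at every grid point, and reshape the labels into an image whose color discontinuities trace the decision boundaries. The grid resolution and the stand-in centroid classifier below are illustrative assumptions.

```python
import numpy as np

def decision_map(classify, xlim, ylim, resolution=200):
    # Densely sample the 2D projected space and label every pixel,
    # yielding an image whose color discontinuities are the boundaries.
    xs = np.linspace(*xlim, resolution)
    ys = np.linspace(*ylim, resolution)
    gx, gy = np.meshgrid(xs, ys)
    grid = np.column_stack([gx.ravel(), gy.ravel()])
    return classify(grid).reshape(resolution, resolution)

# Stand-in classifier: nearest of two class centroids, so the
# decision boundary is the vertical line x = 0.
centroids = np.array([[-1.0, 0.0], [1.0, 0.0]])
def classify(points):
    d = np.linalg.norm(points[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1)

img = decision_map(classify, xlim=(-2, 2), ylim=(-2, 2))
```

In the full technique, each 2D grid point is first mapped back to the high-dimensional space before being classified, which is what makes the image depict high-dimensional boundaries.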
Extracting Visual Encodings from Map Chart Images with Color-Encoded Scalar Values
Angela Mayhua, Erick Gomez Nieto, Jeffrey Heer, Jorge Poco
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00025
Map charts are used in diverse domains to show geographic data (e.g., climate research, oceanography, and business analysis). These charts can be found in news articles, scientific papers, and on the Web. However, many map charts are available only as bitmap images, hindering machine interpretation of the visualized data for indexing and reuse. We propose a pipeline to recover both the visual encodings and the underlying data from bitmap images of geographic maps with color-encoded scalar values. We evaluate our results using map images from scientific documents, achieving high accuracy at each step of the pipeline. In addition, we present two applications: data extraction and map reprojection to enable improved visual representations of map charts.
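One building block of such a pipeline, recovering scalar values from pixel colors, can be sketched as a nearest-neighbor lookup against colors sampled from the chart's color legend. The toy legend and values below are illustrative assumptions, not the paper's (more elaborate) recovery step.

```python
import numpy as np

def invert_colormap(pixels, legend_colors, legend_values):
    # Map each pixel back to a data value via its nearest legend color.
    # pixels: (N, 3) RGB; legend_colors: (K, 3) RGB sampled from the
    # chart's color bar; legend_values: (K,) scalars those colors encode.
    d = np.linalg.norm(pixels[:, None, :].astype(float)
                       - legend_colors[None].astype(float), axis=2)
    return legend_values[d.argmin(axis=1)]

# Toy legend: blue, green and red encoding 0, 50 and 100.
legend_colors = np.array([[0, 0, 255], [0, 255, 0], [255, 0, 0]])
legend_values = np.array([0.0, 50.0, 100.0])
pixels = np.array([[10, 5, 240], [250, 10, 10], [5, 250, 5]])
values = invert_colormap(pixels, legend_colors, legend_values)
```

A perceptual color space such as CIELAB would give more robust matches than raw RGB distance on real, compressed bitmap images.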
Single-Shot Person Re-Identification Combining Similarity Metrics and Support Vectors
Anderson Luis Cavalcanti Sales, R. H. Vareto, W. R. Schwartz, Guillermo Cámara Chávez
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00039
Person re-identification is the task of determining a person's entire course as they walk around camera-equipped zones; more precisely, it is the problem of matching human identities captured by non-overlapping surveillance cameras. In this work, we propose an approach that learns a new low-dimensional metric space in an attempt to cut down multi-camera matching errors. We represent the training and test samples by concatenating handcrafted features. The method then performs a two-step ranking using elementary distance metrics, followed by an ensemble of weighted binary classifiers. We validate our approach on the CUHK01 and PRID450s datasets, providing only one sample per class for the probe set and one for the gallery (single-shot). According to the experiments, our method achieves CMC Rank-1 results of up to 61.1 and 75.4, following leading literature protocols, for CUHK01 and PRID450s, respectively.
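The Rank-1 numbers reported come from a Cumulative Matching Characteristic (CMC) curve, which can be computed from a probe-gallery distance matrix as follows. The tiny distance matrix is illustrative, not the paper's data.

```python
import numpy as np

def cmc(distances, probe_ids, gallery_ids, max_rank=5):
    # Cumulative Matching Characteristic: fraction of probes whose true
    # match appears within the top-k ranked gallery entries.
    order = distances.argsort(axis=1)        # gallery sorted per probe
    ranked_ids = gallery_ids[order]
    hits = ranked_ids == probe_ids[:, None]
    first_hit = hits.argmax(axis=1)          # rank of the true match
    return np.array([(first_hit < k).mean() for k in range(1, max_rank + 1)])

# Toy single-shot setup: 3 probes against a 3-identity gallery.
gallery_ids = np.array([0, 1, 2])
probe_ids = np.array([0, 1, 2])
distances = np.array([[0.1, 0.9, 0.8],   # probe 0: correct at rank 1
                      [0.7, 0.6, 0.2],   # probe 1: correct at rank 2
                      [0.9, 0.8, 0.1]])  # probe 2: correct at rank 1
curve = cmc(distances, probe_ids, gallery_ids, max_rank=3)
```

Here `curve[0]` is the Rank-1 rate (two of three probes matched at the top position); in single-shot evaluation each identity contributes exactly one gallery row, as in the paper's protocol.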
Inverse Projection of Vector Fields
Paula Ceccon Ribeiro, H. Lopes
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00050
Vector fields play an essential role in a large range of scientific applications. They are commonly generated through computer simulations, which may be costly since they usually require intensive computation time. When researchers want to quantify the uncertainty in such applications, an ensemble of vector field realizations is usually generated, making the process much more expensive. The main contribution of this paper is a new method, based on the inverse projection technique, to quickly and consistently generate 2D vector fields similar to the ones in the ensemble; after evaluation by a specialist, these could enlarge the ensemble to better represent the uncertainty. Through the Helmholtz-Hodge Decomposition, we obtain the divergence-free, curl-free, and harmonic components of a vector field. With those components and the original ensemble in hand, it is possible to derive new realizations from their projections into a 2-dimensional space. To do so, we propose applying an inverse projection technique individually in each component's projected space. Results are obtained in real time through an interactive interface. A set of multi-method wind forecast realizations is used to demonstrate the results obtained with this approach.
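The Helmholtz-Hodge Decomposition the method builds on can be sketched for a periodic 2D field using the FFT: each Fourier mode is projected onto its wave vector (the curl-free part), the remainder is divergence-free, and the constant mean flow stands in for the harmonic component. Square periodic grids are an assumption of this sketch, not a requirement of the paper's method.

```python
import numpy as np

def hodge_decompose(u, v):
    # Split a periodic 2D vector field into harmonic (constant mean),
    # curl-free and divergence-free components via the FFT.
    n = u.shape[0]
    fu, fv = np.fft.fft2(u), np.fft.fft2(v)
    kx = np.fft.fftfreq(n)[None, :]
    ky = np.fft.fftfreq(n)[:, None]
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                       # avoid dividing the zero mode
    proj = (fu * kx + fv * ky) / k2      # longitudinal (curl-free) modes
    gu, gv = proj * kx, proj * ky
    gu[0, 0] = gv[0, 0] = 0.0
    ru, rv = fu - gu, fv - gv            # transverse (div-free) modes
    ru[0, 0] = rv[0, 0] = 0.0            # mean goes to the harmonic part
    hu = np.full(u.shape, fu[0, 0].real / n ** 2)
    hv = np.full(v.shape, fv[0, 0].real / n ** 2)
    back = lambda a: np.fft.ifft2(a).real
    return (hu, hv), (back(gu), back(gv)), (back(ru), back(rv))

# Demo: a pure gradient (curl-free) field lands in the curl-free part.
n = 16
x = np.arange(n)
u = np.tile(np.cos(2 * np.pi * x / n), (n, 1))
v = np.zeros((n, n))
(hu, hv), (cu, cv), (du, dv) = hodge_decompose(u, v)
```

The paper then projects each component's ensemble separately and inverts those projections to synthesize new candidate realizations.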
Unsupervised Representation Learning Using Convolutional and Stacked Auto-Encoders: A Domain and Cross-Domain Feature Space Analysis
G. B. Cavallari, Leo Sampaio Ferraz Ribeiro, M. Ponti
Pub Date: 2018-10-01 | DOI: 10.1109/SIBGRAPI.2018.00063
A feature learning task involves training models that are capable of inferring good representations (transformations of the original space) from input data alone. When working with limited or unlabelled data, and also when multiple visual domains are considered, methods that rely on large annotated datasets, such as supervised Convolutional Neural Networks (CNNs), cannot be employed. In this paper we investigate different auto-encoder (AE) architectures, which require no labels, and explore training strategies to learn representations from images. The models are evaluated considering both the reconstruction error of the images and the discriminative power of the resulting feature spaces. We study the role of dense and convolutional layers, as well as the depth and capacity of the networks, since these are shown to affect both the dimensionality reduction and the capability of generalising across different visual domains. Classification results with AE features were as discriminative as those with pre-trained CNN features. Our findings can be used as guidelines for the design of unsupervised representation learning methods within and across domains.
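A minimal dense auto-encoder of the kind compared in the paper can be sketched in NumPy: two linear layers trained by gradient descent on the reconstruction error, with the hidden activations serving as the learned features. The toy low-rank data and all hyperparameters are illustrative assumptions.

```python
import numpy as np

def train_autoencoder(X, hidden, epochs=200, lr=0.1):
    # Minimal dense (linear) auto-encoder trained by full-batch
    # gradient descent on the reconstruction MSE.
    rng = np.random.default_rng(0)
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))   # encoder weights
    W2 = rng.normal(scale=0.1, size=(hidden, d))   # decoder weights
    losses = []
    for _ in range(epochs):
        Z = X @ W1                  # latent codes (the learned features)
        R = Z @ W2                  # reconstruction
        err = R - X
        losses.append(float(np.mean(err ** 2)))
        # Backpropagation through the two linear layers.
        gW2 = Z.T @ err * (2 / (n * d))
        gW1 = X.T @ (err @ W2.T) * (2 / (n * d))
        W1 -= lr * gW1
        W2 -= lr * gW2
    return W1, W2, losses

# Toy data living near a 2D subspace of a 10-D space.
rng = np.random.default_rng(1)
codes = rng.normal(size=(100, 2))
basis = rng.normal(size=(2, 10))
X = codes @ basis + 0.01 * rng.normal(size=(100, 10))
W1, W2, losses = train_autoencoder(X, hidden=2)
```

After training, `X @ W1` gives the low-dimensional features that, in the paper's analysis, are evaluated for discriminative power within and across visual domains; the convolutional and stacked variants replace the linear layers but keep the same encode-reconstruct objective.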