Anatomically Guided Registration for Multimodal Images
M. Datar, Girish Gopalakrishnan, S. Ranjan, R. Mullick
doi:10.1109/AIPR.2006.14

With the increase in full-body scans and longitudinal acquisitions to track disease progression, it has become important to establish correspondence between multiple images. One example is monitoring the size and location of tumors in PET images during chemotherapy to assess treatment progress. While there is a need to go beyond a single parametric transform to recover misalignments, fully deformable solutions are complex, time-consuming, and at times unnecessary. A simple anatomically guided approach to whole-body image registration offers enhanced alignment for large-coverage inter-scan studies. In this work, we apply anatomy-specific transformations to capture the independent motion of each region. The solution consists of an automatic segmentation of regions in the image, followed by a custom per-region registration and volume stitching. We have tested this algorithm on phantom images as well as clinical longitudinal datasets, and show that decoupling the transformations improves overall registration quality.

Data Level Fusion of Multilook Inverse Synthetic Aperture Radar (ISAR) Images
Zhixi Li, R. Narayanan
doi:10.1109/AIPR.2006.21

Although techniques for resolution enhancement in single-aspect radar imaging have made rapid progress in recent years, enhanced images do not necessarily improve target identification or recognition. When multiple looks at the same target from different aspects are obtained, however, the available knowledge base grows, allowing more useful target information to be extracted. Physics-based image fusion techniques can be developed by processing the raw data collected from multiple ISAR sensors, even when the individual images are at different resolutions. We derive a data fusion rule that generates a composite image containing enhanced target shape characteristics for improved target recognition. The rule maps data sets collected by multiple radars with different system parameters onto the same spatial-frequency space. The composite image is then reconstructed by an inverse 2-D Fourier transform over the separated integration areas. An algorithm called the matrix Fourier transform is introduced to evaluate this complicated integral; it can be regarded as an exact interpolation, so data fusion introduces no information loss. The rotation centers must be carefully selected to properly register the multiple images before fusion. A comparison of the Image Attribute Rating (IAR) curves of the fused and spatially averaged images quantifies the improvement in detected target features. The technique shows considerable improvement over simple spatial averaging and thereby enhances target recognition.

An Adaptive and Non Linear Technique for Enhancement of Extremely High Contrast Images
Saibabu Arigela, V. Asari
doi:10.1109/AIPR.2006.11

In nighttime surveillance, some image frames of a video sequence may contain both extremely bright and extremely dark regions. This paper proposes a novel nonlinear image enhancement algorithm for digital images captured under such extremely non-uniform lighting conditions. The technique comprises three processes: adaptive intensity enhancement, contrast enhancement, and color restoration. Adaptive intensity enhancement uses a specifically designed nonlinear transfer function that reduces the intensity of bright regions while simultaneously raising the intensity of dark regions. Contrast enhancement tunes the magnitude of each pixel based on its surrounding pixels. Finally, a linear color restoration process based on the chromatic information of the input frame converts the enhanced intensity image back to a color image.

A Visualization Tool to convey Quantitative in vivo, 3D Knee Joint Kinematics
A. Seisler, F. Sheehan
doi:10.1109/AIPR.2006.8

The overall goal of the virtual functional anatomy (VFA) project is to fill the important knowledge gap in the relationship between functional movement limitations and impaired joint structure or function. To this end, a set of imaging-based post-processing tools is under development to enable dynamic and static magnetic resonance image (MRI) data to be merged. These tools will provide accurate quantification and visualization of the 3D static and dynamic properties of musculoskeletal anatomy (i.e., skeletal kinematics, tendon and ligament strain, muscle force, and cartilage contact). The current focus is to apply the six-degree-of-freedom joint kinematics to subject-specific models and to quantify dynamic musculoskeletal properties such as tendon and muscle moment arms, joint cartilage contact, and tendon strain. To date, these tools have been used to study the function of healthy and impaired (e.g., cerebral palsy, ACL rupture, and patellar tracking syndrome) joint structures under simulated conditions experienced during activities of daily living.

Viewpoint-Invariant and Illumination-Invariant Classification of Natural Surfaces Using General-Purpose Color and Texture Features with the ALISA dCRC Classifier
Teddy Ko, P. Bock
doi:10.1109/AIPR.2006.40

This paper reports the development of a classifier that can accurately and reliably discriminate among a large number of natural surfaces in canonical and natural color images, regardless of viewpoint and illumination conditions. To achieve this, a set of general-purpose color and texture features was identified as input to an ALISA statistical learning engine; these are the features that exhibit the least sensitivity to illumination and viewpoint variation across a broad range of applications. To overcome Bayesian confusion when a large number of test classes is involved, an ALISA deltaCRC classification method is developed. The classifier selects the trained class whose known reclassification distribution histogram, computed on a training image patch, most closely matches the classification distribution of the test image patch. Preliminary results on the CUReT color texture dataset, with test images excluded from the training set, yield average classification accuracies well above 95% with no significant added computation time.

3D shape estimation and texture generation using texture foreshortening cues
J. Colombe
doi:10.1109/AIPR.2006.6

The surfaces of 3D objects may be represented as a connected distribution of surface patches that point in various directions with respect to the observer. Viewpoint-normal patches are those whose tangent plane is perpendicular to the line of sight. Foreshortening of surface patches results from their obliquity, which produces a directional wavelength compression and an accompanying one-dimensional stretching of the spatial frequency distribution. This stretching was used to generate plausible depth illusions via local foreshortening of surface textures rendered from a stretched spatial frequency envelope. Texture foreshortening cues were then exploited by a multi-stage image analysis method that recovers local dominant orientation, degree of orientation dominance, relative power in spatial frequencies at a given orientation, and a measure of local surface obliquity, which provides incomplete but useful information in a multi-cue depth estimation framework.

3D Image Reconstruction and Range-Doppler Tracking with Chirped AM Ladar Data
J. Dammann, B. Redman, W. Ruff
doi:10.1109/AIPR.2006.5

The Army Research Laboratory (ARL) has been developing its patented chirped amplitude modulation (AM) ladar technique for high-resolution 3D imaging and range-Doppler tracking. The concept of operation, hardware configurations, and test results for this technique have been presented in detail elsewhere. Heretofore, the signal and image processing techniques used at ARL to reconstruct and display 3D imagery and range-Doppler plots have been published only partially, and only in internal reports. In this paper we present the multiple-return range and range-Doppler signal processing algorithms, the model-based "superresolution" processing algorithm for range precision enhancement, and the 3D image reconstruction, processing, and display algorithms, along with representative examples from laboratory and field test data.

Modeling of Target Shadows for SAR Image Classification
S. Papson, R. Narayanan
doi:10.1109/AIPR.2006.27

A recent thrust of non-cooperative target recognition (NCTR) using synthetic aperture radar (SAR) has been to complement the extraction of scattering centers by incorporating information contained in the target shadow. When classifying targets based on the shadow region alone, it is essential that an image be well clustered into its respective shadow, highlight, and background regions. To obtain the segmentation, the intensity and spatial location of each pixel are modeled as a mixture of Gaussian distributions. Expectation-maximization (EM) is used to obtain the corresponding distributions for the three regions within a given image. Anisotropic smoothing is applied to smooth the input image as well as the posterior probabilities. A representation of the shadow boundary is developed in conjunction with a Hidden Markov Model (HMM) ensemble to obtain target classification. A variety of targets from the MSTAR database are used to test the performance of both the segmentation algorithm and the classification structure.

Segmentation and Classification of Human Forms using LADAR Data
J. Albus, T. Hong, Tommy Chang
doi:10.1109/AIPR.2006.35

High-resolution LADAR (laser detection and ranging) images of scenes containing human forms have been automatically segmented, and simple algorithms have been developed for recognizing human forms in various positions in both cluttered and uncluttered scenes. Registration of LADAR and color CCD images is suggested as a method to enhance the ability to segment both types of images.

Nonlinear 3D and 2D Transforms for Image Processing and Surveillance
Y. Tirat-Gefen
doi:10.1109/AIPR.2006.28

Linear transforms such as two-dimensional and three-dimensional spatial Fourier transforms have limitations for image applications due to the uncertainty principle. Fourier transforms also allow negative luminance, which is not physically possible. Wavelet transforms alleviate this through the use of a non-negative wavelet function basis, but they still lead to wide-spectrum representations. This paper discusses the deployment of newer nonlinear methods such as the Hilbert-Huang transform for low-cost embedded applications using microprocessors and field-programmable gate arrays. In essence, we extract a set of intrinsic mode functions (IMFs) that represent the spectrum of the 3D or 2D scene, using these functions as a Hilbert basis. Immediate applications of our low-cost, high-performance, hardware-oriented architecture include image processing for biomedical applications (e.g., pattern recognition and image compression for telemedicine) and surveillance.