We present a novel method to retrieve the positions of multiple point lights in real indoor scenes based on a 3D reconstruction. The method takes advantage of the illumination over planes detected by segmenting the reconstructed mesh of the scene. The estimation does not suffer from the presence of specular highlights; instead, the specular component is used to refine the final estimate. This allows consistent relighting throughout the entire scene for augmented reality purposes.
Paul-Emile Buteau and H. Saito, "[POSTER] Retrieving Lights Positions Using Plane Segmentation with Diffuse Illumination Reinforced with Specular Component," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.65
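To make the underlying principle concrete, here is a minimal sketch (not the authors' implementation) of fitting one point-light position to diffuse intensities sampled on a segmented plane, assuming a Lambertian model with known albedo; the function names and the residual form are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_point_light(points, normal, intensities, albedo=1.0):
    """Fit a point-light position to diffuse intensities observed on a plane.

    points      : (N, 3) sample positions on the segmented plane
    normal      : (3,) unit normal of the plane
    intensities : (N,) observed diffuse intensities at those points
    Returns the estimated light position (3,) and its power.
    """
    def residuals(params):
        light, power = params[:3], params[3]
        to_light = light - points                        # (N, 3) vectors to the light
        dist = np.linalg.norm(to_light, axis=1)
        cos_theta = np.clip(to_light @ normal / dist, 0.0, None)
        predicted = power * albedo * cos_theta / dist**2 # inverse-square falloff
        return predicted - intensities

    # Initialize one unit above the centroid of the plane samples.
    x0 = np.concatenate([points.mean(axis=0) + normal, [1.0]])
    result = least_squares(residuals, x0)
    return result.x[:3], result.x[3]
```

With several segmented planes, the per-plane residuals would simply be stacked into one least-squares problem, and multiple lights would add one position/power block per light.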
Effective interaction in augmented reality (AR) requires the use of multiple modalities. In this study, we investigated orienting the user in bimodal AR. Using auditory perception to support visual perception is a useful approach for orienting the user toward directions outside the visual field of view (FOV). This is particularly important in path-finding, where points of interest (POIs) can be all around the user. However, the ability to perceive audio POIs is affected by the ventriloquism effect (VE), in which audio POIs are perceptually captured by visual POIs. We measured the spatial limits of the VE in AR using a video see-through head-worn display. The results showed that the extent of the VE in AR was approximately 5°–15° greater than in a real environment. In AR, the spatial disparity between an audio POI and a visual POI should be at least 30° of azimuth angle for them to be perceived as separate. The limit was affected by the azimuth angle of the visual POI and the magnitude of head rotations. These results provide guidelines for designing bimodal AR systems.
Mikko Kytö, Kenta Kusumoto, and P. Oittinen, "The Ventriloquist Effect in Augmented Reality," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.18
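The 30° guideline translates directly into a design check. A minimal sketch follows; the function name and interface are illustrative, not from the paper:

```python
def audio_poi_is_separable(audio_azimuth_deg, visual_azimuth_deg, threshold_deg=30.0):
    """Check whether an audio POI is likely perceived as separate from a
    visual POI, using the ~30 degree azimuth disparity reported in the paper.

    Azimuths are in degrees; the wrap-around difference keeps the
    disparity in [0, 180].
    """
    diff = abs(audio_azimuth_deg - visual_azimuth_deg) % 360.0
    disparity = min(diff, 360.0 - diff)
    return disparity >= threshold_deg
```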
Usability evaluations are important to the development of augmented reality systems. However, conducting large-scale longitudinal studies remains challenging because of the lack of inexpensive yet appropriate methods. In response, we propose a method for implicitly estimating usability ratings from readily available sensor logs. To demonstrate the idea, we explored the use of features of accelerometer data for estimating usability ratings in an annotation task. Results show that our implicit estimates correspond with explicit usability ratings at rates of 79% and 84%. These results should be investigated further in other use cases and with other sensor logs.
M. Santos, Takafumi Taketomi, Goshiro Yamamoto, G. Klinker, C. Sandor, and H. Kato, "[POSTER] Towards Estimating Usability Ratings of Handheld Augmented Reality Using Accelerometer Data," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.62
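The abstract does not list the accelerometer features used, so the sketch below makes illustrative choices (magnitude statistics and mean jerk) just to show the general shape of such an implicit-estimation pipeline:

```python
import numpy as np

def accelerometer_features(samples, dt):
    """Summary features from one window of an accelerometer log.

    samples : (N, 3) raw accelerations; dt : sampling interval in seconds.
    The features (magnitude statistics and mean jerk) are illustrative
    choices, not necessarily those used in the paper.
    """
    mag = np.linalg.norm(samples, axis=1)
    jerk = np.diff(samples, axis=0) / dt   # rate of change of acceleration
    return np.array([
        mag.mean(), mag.std(), mag.max(),
        np.linalg.norm(jerk, axis=1).mean(),
    ])
```

A regressor trained on labeled sessions (e.g. ridge regression over such feature vectors) would then map sensor logs to explicit rating scales.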
Augmented reality does not make any sense for fixed cameras. Or does it? In this work, we deal with static cameras and their usability for interactive augmented reality applications. Knowing that the camera does not move makes camera pose estimation both less and more difficult: one does not have to deal with pose changes over time, but on the other hand, obtaining some level of understanding of the scene from a single viewpoint is challenging. We propose several ways to take advantage of the camera being static, together with a system pipeline for broadcasting a video stream enriched with the information needed for its interactive visual augmentation: Interactive Camera Streams (INCAST). We present a proof-of-concept system showing the usability of INCAST on several use cases: non-interactive demos and simple AR games.
Istvan Szentandrasi, Michael Zachariá, Rudolf Kajan, J. Tinka, Markéta Dubská, Jakub Sochor, and A. Herout, "[POSTER] INCAST: Interactive Camera Streams for Surveillance Cams AR," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.26
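One concrete advantage of a static camera is that its pose can be estimated once, offline, and then reused for every frame of the stream. A minimal sketch using OpenCV's standard PnP solver; the correspondences and interface are assumptions, not the INCAST pipeline itself:

```python
import numpy as np
import cv2

def estimate_static_camera_pose(object_points, image_points, camera_matrix):
    """Estimate the pose of a fixed camera once, offline.

    object_points : (N, 3) known 3D scene points (e.g. surveyed corners)
    image_points  : (N, 2) their pixel positions, clicked or detected once
    camera_matrix : (3, 3) intrinsic matrix
    The result can be cached and reused for every broadcast frame,
    since the camera never moves.
    """
    ok, rvec, tvec = cv2.solvePnP(
        object_points.astype(np.float64),
        image_points.astype(np.float64),
        camera_matrix.astype(np.float64),
        None,  # no distortion coefficients in this sketch
    )
    if not ok:
        raise RuntimeError("pose estimation failed")
    R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation matrix from rotation vector
    return R, tvec
```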
We present a movable spatial augmented reality (SAR) system that can be easily installed in a user workspace. The proposed system dynamically covers a wider projection area using a portable projector attached to a simple robotic device. This is a clear advantage over a conventional SAR scenario in which, for example, a projector is installed with a fixed projection area in the workspace. In previous research [1], we proposed a data-driven kinematic control method for a movable SAR system. The method targets a SAR system integrated with a user-created robotic (UCR) device for which an explicit kinematic configuration, such as a CAD model, is unavailable. Our contribution in this paper is to show the feasibility of the data-driven control method by developing a practical application in which dynamic change of the projection area matters. We outline the control method and demonstrate an assembly-guide example using a casually installed movable SAR system.
Ahyun Lee, Joo-Haeng Lee, and Jaehong Kim, "[POSTER] Movable Spatial AR On-The-Go," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.55
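As a rough illustration of data-driven control without a kinematic model, the sketch below inverts a recorded input-to-projection mapping by nearest-neighbor lookup; this is a simplification for illustration, not the method of [1]:

```python
import numpy as np

class DataDrivenAim:
    """Aim a projector on an uncalibrated pan-tilt device by lookup.

    Instead of an explicit kinematic (CAD) model, record where the
    projection lands for sampled control inputs, then invert the mapping
    by nearest-neighbor search over the recorded samples.
    """
    def __init__(self, control_samples, observed_centers):
        self.controls = np.asarray(control_samples)  # (N, 2) pan/tilt commands
        self.centers = np.asarray(observed_centers)  # (N, 2) projection centers seen by a camera

    def control_for(self, target_xy):
        """Return the recorded command whose projection fell closest to target_xy."""
        d = np.linalg.norm(self.centers - np.asarray(target_xy), axis=1)
        return self.controls[np.argmin(d)]
```

Denser sampling, or interpolating between the nearest samples, would trade calibration time for aiming accuracy.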
The successful application of augmented reality as a guidance tool for procedural tasks such as maintenance or repair requires an easily usable way of modeling support processes. Even though some approaches have already been suggested to address this problem, they still have shortcomings and do not provide all the required features. Thus, as a first step, the requirements that a solution has to meet are collected and presented. Based on these, the augmented reality process modeling language (ARPML) is developed, which consists of four building blocks: (i) templates, (ii) sensors, (iii) work steps, and (iv) tasks. In contrast to existing approaches, it facilitates the creation of multiple views on a single process. This makes it possible to select exactly the instructions and information needed in specific work contexts. It also allows multiple variants of one process to be combined into a single model with a minimum of redundancy. The application of ARPML is shown with a practical example.
T. Muller and Tim Rieger, "[POSTER] ARPML: The Augmented Reality Process Modeling Language," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.46
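A minimal sketch of how the sensor, work-step, and task building blocks with per-view instruction variants could be modeled; the structure is an assumption based on the description above, and templates are omitted for brevity:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Sensor:
    """A condition source, e.g. a tracking event confirming step completion."""
    name: str
    is_satisfied: Callable[[], bool]

@dataclass
class WorkStep:
    """One atomic instruction; per-view overrides avoid duplicating steps."""
    instruction: str
    view_overrides: Dict[str, str] = field(default_factory=dict)  # view name -> variant text
    completion_sensor: Optional[Sensor] = None

    def instruction_for(self, view: str) -> str:
        return self.view_overrides.get(view, self.instruction)

@dataclass
class Task:
    """An ordered group of work steps forming one support procedure."""
    name: str
    steps: List[WorkStep]
```

The per-view override is what lets one model serve several work contexts with minimal redundancy: shared steps are written once, and only context-specific wording is duplicated.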
We introduce a novel finger-worn ring interface that enables complex spatial interactions through 3D hand movement in a virtual reality environment. Users receive physical feedback in the form of vibrations from the wearable ring interface as their finger reaches a certain 3D position. The positions of the fingertip are extracted, linked, and then reconstructed as a trajectory. This system allows the wearer to write characters in midair as if using an imaginary whiteboard. Users can freely write Korean characters, English letters (both upper and lower case), and digits in the air in real time with an accuracy rate of over 92%. Thus, it is now conceivable that anything people can do on contemporary touch-based devices, they could do in midair with a pseudo-contact interface.
Ki-Won Yeom, J. R. Kwon, Ju-Hyuck Maeng, and Bum-Jae You, "[POSTER] Haptic Ring Interface Enabling Air-Writing in Virtual Reality Environment," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.37
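One way to turn the linked fingertip trajectory into 2D strokes for character recognition is to project it onto the imaginary whiteboard plane. A minimal sketch; the paper's actual recognition pipeline is not detailed here, so this projection step is an assumption:

```python
import numpy as np

def project_trajectory_to_whiteboard(points, plane_origin, plane_u, plane_v):
    """Map a linked 3D fingertip trajectory onto an imaginary whiteboard.

    points       : (N, 3) fingertip positions over time
    plane_origin : (3,) a point on the virtual writing plane
    plane_u/v    : (3,) orthonormal in-plane axes of the whiteboard
    Returns (N, 2) stroke coordinates usable by a 2D character recognizer.
    """
    rel = np.asarray(points) - plane_origin
    return np.stack([rel @ plane_u, rel @ plane_v], axis=1)
```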
An important yet unsolved problem in computer vision and Augmented Reality (AR) is to compute the 3D shape of nonrigid objects from live 2D videos. When the object's shape is provided in a rest pose, this is the Shape-from-Template (SfT) problem. Previous realtime SfT methods require simple, smooth templates, such as densely textured flat sheets of paper, which deform in simple, smooth ways. We present a realtime SfT framework that handles generic template meshes, complex deformations, and most of the difficulties present in real imaging conditions. Achieving this has required new, fast solutions to the two core sub-problems: robust registration and 3D shape inference. Registration is achieved with what we call Deformable Render-based Block Matching (DRBM): a highly parallel solution that densely matches a time-varying render of the object to each video frame. We then combine matches from DRBM with physical deformation priors and perform shape inference, which is done by quickly solving a sparse linear system with a Geometric Multi-Grid (GMG) based method. On a standard PC we achieve up to 21 fps depending on the object. Source code will be released.
T. Collins and A. Bartoli, "[POSTER] Realtime Shape-from-Template: System and Applications," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.35
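To illustrate the shape-inference step, here is a minimal sparse linear solve combining match constraints with a Laplacian smoothness prior; the paper's GMG solver is replaced by SciPy's direct sparse solver, and the exact energy is an assumption:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def infer_shape(laplacian, vertices, match_ids, match_targets, w_match=10.0):
    """Infer deformed vertex positions from sparse registration matches.

    laplacian     : (V, V) sparse mesh Laplacian (smoothness prior)
    vertices      : (V, 3) rest-pose vertex positions
    match_ids     : (M,) indices of matched vertices
    match_targets : (M, 3) target 3D positions of those vertices
    Minimizes ||L x - L x0||^2 + w ||x[match_ids] - targets||^2 per
    coordinate via the normal equations.
    """
    V, M = vertices.shape[0], len(match_ids)
    # Selector matrix picking out the matched vertices.
    S = sp.csr_matrix((np.ones(M), (np.arange(M), match_ids)), shape=(M, V))
    A = (laplacian.T @ laplacian + w_match * (S.T @ S)).tocsc()
    B = laplacian.T @ (laplacian @ vertices) + w_match * (S.T @ match_targets)
    solve = spla.factorized(A)   # factorize once for this system
    return np.column_stack([solve(B[:, k]) for k in range(3)])
```

Because the system matrix changes little between frames, a multigrid or pre-factorized solver amortizes well, which is what makes realtime rates plausible.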
Fusing intraoperative X-ray data with real-time video in a common reference frame is not trivial, since both modalities have to be acquired from the same viewpoint. The goal of this work is to design a flexible system comprising two RGBD sensors that can be attached to any mobile C-arm, with the objective of synthesizing projective color images from the X-ray source viewpoint. To achieve this, we calibrate the RGBD sensors to the X-ray source with a 3D calibration object. Then, we synthesize the projective color image from the X-ray viewpoint by applying a volumetric rendering method. Finally, the X-ray image is overlaid on the projective image without any further registration, offering a multimodal visualization of X-ray and color images. In this paper we present the different steps of development (i.e., hardware setup, calibration, and rendering algorithm) and discuss clinical applications for the new video-augmented C-arm. By placing X-ray markers on a patient's hand and a spine model, we show that the overlay accuracy between the X-ray image and the synthesized image is on average 1.7 mm.
S. Habert, Meng Ma, Wadim Kehl, X. Wang, Federico Tombari, P. Fallavollita, and N. Navab, "[POSTER] Augmenting Mobile C-arm Fluoroscopes via Stereo-RGBD Sensors for Multimodal Visualization," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.24
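Once the RGBD-to-X-ray extrinsics are calibrated, a projective color image can be synthesized by transforming the colored point cloud into the X-ray source frame and projecting it with a pinhole model. The sketch below uses naive z-buffered point splatting rather than the paper's volumetric rendering:

```python
import numpy as np

def synthesize_xray_view(points, colors, R, t, K, width, height):
    """Render a colored point cloud from the X-ray source viewpoint.

    points : (N, 3) points in the RGBD sensor frame
    colors : (N, 3) uint8 colors of those points
    R, t   : calibrated sensor-to-X-ray rotation (3, 3) and translation (3,)
    K      : (3, 3) pinhole intrinsics standing in for the X-ray projection
    """
    cam = points @ R.T + t            # into the X-ray source frame
    valid = cam[:, 2] > 1e-6          # keep points in front of the source
    proj = cam[valid] @ K.T
    uv = (proj[:, :2] / proj[:, 2:3]).astype(int)
    depth = cam[valid, 2]
    image = np.zeros((height, width, 3), dtype=np.uint8)
    zbuf = np.full((height, width), np.inf)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < width) & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    for (u, v), z, c in zip(uv[inside], depth[inside], colors[valid][inside]):
        if z < zbuf[v, u]:            # nearest point wins the pixel
            zbuf[v, u] = z
            image[v, u] = c
    return image
```

The X-ray image can then be alpha-blended over the synthesized view directly, since both now share the same projection geometry.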
In this work, we advocate the use of photogrammetric targets for object tracking in Industrial Augmented Reality (IAR). Photogrammetric targets, especially uncoded circular targets, are widely used in industry to perform 3D surface measurements. An AR solution based on uncoded circular targets can therefore improve workflow integration by reusing existing targets and saving time. These circular targets have no coded patterns with which to establish unique 2D-3D correspondences between the targets on the model and their image projections. We solve this particular problem of 2D-3D correspondence of non-coplanar circular targets from a single image. We introduce a conic pair descriptor, which computes Euclidean invariants from circular targets in the model space and in the image space. A three-stage method is used to compare the descriptors and compute the correspondences with up to 100% precision and 89% recall. We achieve tracking performance of 3 FPS (2560×1920 px) to 8 FPS (640×480 px), depending on the camera resolution and the targets present in the scene.
Hemal Naik, Y. Oyamada, P. Keitler, and Nassir Navab, "[POSTER] Exploiting Photogrammetric Targets for Industrial AR," 2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). doi:10.1109/ISMAR.2015.42
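The descriptor itself is not specified in the abstract; as an example of the kind of pairwise conic invariant such a method can exploit, the sketch below computes a classical projective invariant of a conic pair (the eigenvalues of C1^-1 C2 after determinant normalization), which can be compared between model space and image space. This is one standard choice, not necessarily the paper's descriptor:

```python
import numpy as np

def conic_pair_invariants(C1, C2):
    """Projective invariants of a conic pair.

    C1, C2 : (3, 3) symmetric conic matrices (e.g. fitted ellipses).
    Under a homography H, a conic transforms as C -> H^-T C H^-1, so
    C1^-1 C2 transforms by similarity (H ... H^-1) and its eigenvalues
    are preserved once the scale ambiguity is fixed by normalizing each
    conic to unit determinant.
    """
    C1 = C1 / np.cbrt(np.linalg.det(C1))   # scale so det(C1) = 1
    C2 = C2 / np.cbrt(np.linalg.det(C2))   # scale so det(C2) = 1
    eigvals = np.linalg.eigvals(np.linalg.inv(C1) @ C2)
    return np.sort(eigvals.real)
```

Matching then reduces to comparing sorted invariant vectors between model-space and image-space conic pairs, followed by geometric verification of the candidate correspondences.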