[POSTER] A Step Closer To Reality: Closed Loop Dynamic Registration Correction in SAR
Hemal Naik, Federico Tombari, Christoph Resch, P. Keitler, Nassir Navab
In Spatial Augmented Reality (SAR) applications, real-world objects are augmented with virtual content by means of a calibrated camera-projector system. A computer-aided design (CAD) model of the real object is used to plan the positions where the virtual content is to be projected. The real object often deviates from its CAD model, resulting in misregistered augmentations. We propose a new method to dynamically correct the planned augmentation by accounting for unknown deviations in the object geometry. We use a closed-loop approach in which the projected features are detected in the camera image and used as feedback. As a result, the registration misalignment is identified and the augmentations are corrected in the areas affected by the deviation. Our work focuses on SAR applications in the industrial domain, where this problem is omnipresent. We show that our method is effective and beneficial for multiple industrial applications.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.34
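
As a rough illustration of the closed-loop idea (our sketch, not the authors' implementation): detect where the projected features actually appear in the camera image and warp the planned overlay accordingly. A single homography stands in here for the paper's local, per-region correction.

    import cv2
    import numpy as np

    def correct_overlay(expected_pts, detected_pts, planned_overlay):
        # Estimate a 2D correction from where the projected features should
        # appear versus where the camera actually detected them; RANSAC
        # rejects spurious detections.
        H, _ = cv2.findHomography(np.float32(expected_pts),
                                  np.float32(detected_pts), cv2.RANSAC, 3.0)
        if H is None:
            return planned_overlay  # detection failed: keep previous overlay
        h, w = planned_overlay.shape[:2]
        return cv2.warpPerspective(planned_overlay, H, (w, h))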

[POSTER] Deformation Estimation of Elastic Bodies Using Multiple Silhouette Images for Endoscopic Image Augmentation
Akira Saito, M. Nakao, Yuuki Uranishi, T. Matsuda
This study proposes a method to estimate elastic deformation using silhouettes obtained from multiple endoscopic images. Our method can estimate the intraoperative deformation of organs using a volumetric mesh model reconstructed from preoperative CT data. We use the silhouette information of the elastic bodies not to model the shape but to estimate local displacements. The model shape is updated to satisfy the silhouette constraint while preserving the shape as much as possible. Experimental results showed that the proposed method could estimate the deformation with root-mean-square (RMS) errors of 5.0–10 mm.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.49
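
A toy reading of the silhouette-constrained update (our interpretation of the abstract, not the authors' solver): vertices with silhouette-derived targets are pulled toward them, while a uniform Laplacian term keeps the mesh close to its rest shape.

    import numpy as np

    def deform_to_silhouette(V_rest, neighbors, constrained, targets,
                             lam=10.0, step=0.05, iters=200):
        # V_rest: (N,3) rest-shape vertices from the preoperative CT mesh.
        # neighbors: list of neighbor-index arrays per vertex; constrained:
        # vertex ids with silhouette-derived 3D targets (len(constrained),3).
        def laplacian(V):
            return np.stack([V[n].mean(axis=0) - V[i]
                             for i, n in enumerate(neighbors)])
        V = V_rest.copy()
        lap_rest = laplacian(V_rest)
        for _ in range(iters):
            # Approximate gradient descent: restore the rest-shape Laplacian
            # (shape preservation) while pulling constrained vertices toward
            # their silhouette targets.
            force = lam * (laplacian(V) - lap_rest)
            force[constrained] += targets - V[constrained]
            V += step * force
        return V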

Auditory and Visio-Temporal Distance Coding for 3-Dimensional Perception in Medical Augmented Reality
F. Bork, B. Fuerst, Anja-Katharina Schneider, Francisco Pinto, C. Graumann, Nassir Navab
Image-guided medical interventions increasingly rely on Augmented Reality (AR) visualization for surgical navigation. Current systems use 2-D monitors to present the view from external cameras, which does not provide an ideal perception of the 3-D position of the region of interest. Despite this problem, most research targets the direct overlay of diagnostic imaging data, and only a few studies attempt to improve the perception of occluded structures in external camera views. This paper focuses on improving the 3-D perception of an augmented external camera view by combining auditory and visual stimuli in a dynamic multi-sensory AR environment for medical applications. Our approach is based on Temporal Distance Coding (TDC) and an active surgical tool for interacting with occluded virtual objects of interest in the scene, in order to gain an improved perception of their 3-D location. Users performed a simulated needle biopsy by targeting virtual lesions rendered inside a patient phantom. Experimental results demonstrate that our TDC-based visualization technique significantly improves localization accuracy, while the addition of auditory feedback increases intuitiveness and speeds up task completion.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.16
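
The abstract does not give the TDC mapping itself; a plausible minimal sketch is to pulse a visual or auditory cue faster as the tracked tool tip approaches the occluded target. All constants below are invented for illustration.

    def tdc_pulse_period(distance_mm, d_near=5.0, d_far=150.0,
                         t_near=0.1, t_far=1.0):
        # Clamp the tool-to-target distance and map it linearly onto a pulse
        # period in seconds: short period (fast pulsing) when close, long
        # period when far away.
        d = min(max(distance_mm, d_near), d_far)
        frac = (d - d_near) / (d_far - d_near)
        return t_near + frac * (t_far - t_near)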

[POSTER] Improved SPAAM Robustness through Stereo Calibration
Kenneth R. Moser, J. Swan
We are investigating methods for improving the robustness and consistency of the Single Point Active Alignment Method (SPAAM) optical see-through (OST) head-mounted display (HMD) calibration procedure. Our investigation focuses on two variants of SPAAM. The first uses a standard monocular alignment strategy to calibrate the left and right eye separately, while the second leverages stereoscopic cues available from binocular HMDs to calibrate both eyes simultaneously. We compare results from repeated calibrations between methods using eye location estimates and interpupillary distance (IPD) measures. Our findings indicate that the stereo SPAAM method produces more accurate and consistent results than the monocular variant.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.64
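
SPAAM reduces to a direct linear transform (DLT): each user alignment pairs a 3D world point with a 2D screen point, and the stacked equations are solved for a 3x4 projection. A minimal monocular version might look like the sketch below (the stereo variant adds constraints coupling both eyes, which is not shown here).

    import numpy as np

    def spaam_dlt(world_pts, screen_pts):
        # Each alignment (X,Y,Z) <-> (u,v) contributes two rows of the DLT
        # system; at least six alignments are needed for the 11 DoF.
        A = []
        for (X, Y, Z), (u, v) in zip(world_pts, screen_pts):
            A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
            A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
        _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
        return Vt[-1].reshape(3, 4)  # right null vector = projection, up to scale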

[POSTER] Tracking and Mapping with a Swarm of Heterogeneous Clients
Philipp Fleck, Clemens Arth, Christian Pirchheim, D. Schmalstieg
In this work, we propose a multi-user system for tracking and mapping that accommodates mobile clients with different capabilities, mediated by a server capable of providing real-time structure from motion. Clients share their observations of the scene according to their individual capabilities: this can range from keyframe tracking alone to mapping and map densification when more computational resources are available. Our contribution is a system architecture that lets heterogeneous clients contribute to a collaborative mapping effort without prescribing fixed capabilities for the client devices. We investigate the implications that the clients' capabilities have on the collaborative reconstruction effort and its use for AR applications.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.40
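
One way to picture the architecture (all type and role names below are invented, not from the paper): each client declares what it can contribute, and the server assigns work accordingly instead of assuming a fixed client profile.

    from dataclasses import dataclass

    @dataclass
    class ClientCapabilities:
        tracking: bool = True        # every client at least localizes itself
        mapping: bool = False        # can contribute keyframes/landmarks
        densification: bool = False  # can contribute dense depth data

    def assign_roles(clients):
        # clients: dict mapping client id -> ClientCapabilities
        roles = {}
        for cid, caps in clients.items():
            if caps.densification:
                roles[cid] = "track + map + densify"
            elif caps.mapping:
                roles[cid] = "track + map"
            else:
                roles[cid] = "track only (consume server map)"
        return roles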

[POSTER] Pseudo Printed Fabrics through Projection Mapping
Yuichiro Fujimoto, Goshiro Yamamoto, Takafumi Taketomi, C. Sandor, H. Kato
Projection-based Augmented Reality commonly projects on rigid objects, while only a few systems project on deformable objects. In this paper, we present Pseudo Printed Fabrics (PPF), which enables projection onto a deforming piece of cloth. This can be applied to previewing a cloth design while manipulating its shape. We support challenging manipulations, including heavy occlusions and stretching of the cloth. In previous work, we developed a similar system based on a novel marker pattern; PPF extends it in two important aspects. First, we improved performance by two orders of magnitude to achieve interactive frame rates. Second, we developed a new interpolation algorithm that keeps registration during challenging manipulations. We believe that PPF can be applied to domains including virtual try-on and fashion design.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.51
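
The abstract leaves the interpolation algorithm unspecified; a simple stand-in is scattered-data interpolation of the occluded marker positions from the detected ones, for example:

    import numpy as np
    from scipy.interpolate import griddata

    def fill_occluded_markers(grid_uv, detected_xy):
        # grid_uv: (N,2) marker coordinates on the flat cloth pattern;
        # detected_xy: (N,2) image positions, NaN where a marker is occluded.
        known = ~np.isnan(detected_xy[:, 0])
        filled = np.column_stack([
            griddata(grid_uv[known], detected_xy[known, k], grid_uv,
                     method="linear")
            for k in (0, 1)
        ])
        # Linear interpolation is undefined outside the convex hull of the
        # detections; fall back to nearest-neighbor there.
        holes = np.isnan(filled[:, 0])
        if holes.any():
            filled[holes] = np.column_stack([
                griddata(grid_uv[known], detected_xy[known, k],
                         grid_uv[holes], method="nearest")
                for k in (0, 1)
            ])
        filled[known] = detected_xy[known]  # keep actual detections untouched
        return filled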

[POSTER] Remote Welding Robot Manipulation Using Multi-view Images
Yuichi Hiroi, Kei Obata, Katsuhiro Suzuki, Naoto Ienaga, M. Sugimoto, H. Saito, Tadashi Takamaru
This paper proposes a system for manipulating a remote welding robot using multi-view images. After an operator specifies a two-dimensional path on the images, the system transforms it into a three-dimensional path and displays the movement of the robot by overlaying graphics on the images. The accuracy of our system is sufficient for welding when combined with a sensor on the robot. The system allows a non-expert operator to weld objects remotely and intuitively, without the need to create a 3D model of the processed object beforehand.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.38
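
The 2D-to-3D lifting step can be sketched as plain two-view triangulation, assuming the operator traces corresponding paths in two calibrated views (this two-view simplification is ours; the paper uses multi-view images):

    import cv2
    import numpy as np

    def lift_path_to_3d(P1, P2, path1, path2):
        # P1, P2: 3x4 camera projection matrices of two calibrated views;
        # path1, path2: corresponding Nx2 pixel paths traced by the operator.
        pts1 = np.asarray(path1, dtype=float).T  # shape (2, N)
        pts2 = np.asarray(path2, dtype=float).T
        X = cv2.triangulatePoints(P1, P2, pts1, pts2)  # (4, N) homogeneous
        return (X[:3] / X[3]).T  # (N, 3) Euclidean welding path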

[POSTER] Geometric Mapping for Color Compensation Using Scene Adaptive Patches
Jong Hun Lee, Yong Hwi Kim, Yong Yi Lee, Kwan H. Lee
The SAR technique using a projector-camera system allows us to create various effects on a real scene without physically altering it. In order to project content on a textured scene without color imperfections, geometric and radiometric compensation of the projection image should be conducted as preprocessing. In this paper, we present a new geometric mapping method for color compensation in a projector-camera system. We capture the scene and segment it into adaptive patches according to the scene structure using SLIC segmentation. A piecewise polynomial function is evaluated for each patch to find pixel-to-pixel correspondences between the measured and projection images. Finally, color compensation is performed using a color mixing matrix. Experimental results show that our geometric mapping method establishes accurate correspondences and that the color compensation alleviates the color imperfections caused by the texture of a general scene.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.67
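
The color-mixing-matrix step admits a compact sketch: if a camera pixel responds as c = V p + f to projector input p (V the 3x3 mixing matrix, f the ambient/offset term), compensation inverts that model per pixel. This is the standard model from the radiometric-compensation literature; the paper's exact fitting procedure may differ.

    import numpy as np

    def compensate_pixel(V, f, desired_rgb):
        # Invert the per-pixel radiometric model c = V @ p + f to find the
        # projector input p that makes the camera observe desired_rgb.
        p = np.linalg.solve(V, np.asarray(desired_rgb, dtype=float) - f)
        # Clip to the projector's displayable range; out-of-gamut targets
        # cannot be reproduced exactly on strongly textured surfaces.
        return np.clip(p, 0.0, 1.0)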

[POSTER] Avatar-Mediated Contact Interaction between Remote Users for Social Telepresence
Jihye Oh, Yeonjoon Kim, Taeil Jin, Sukwon Lee, Youjin Lee, Sung-Hee Lee
Social touch, such as a handshake, increases the sense of coexistence and closeness between remote users in a social telepresence environment, but creating such coordinated contact movements with a distant person is extremely difficult given only visual feedback, without haptic feedback. This paper presents a method to enable hand-contact interaction between remote users in an avatar-mediated telepresence environment. The key idea is that, while the avatar directly follows its owner's motion under normal conditions, it adjusts its pose to maintain contact with the other user when the two users attempt contact interaction. To this end, we develop classifiers to recognize the users' intention for contact interaction. The contact classifier identifies whether the users are trying to initiate contact when they are not in contact, and the separation classifier identifies whether the two in contact are attempting to break contact. The classifiers are trained on a set of geometric distance features. During the contact phase, inverse kinematics is solved to determine the pose of the avatar's arm so as to initiate and maintain natural contact with the other user's hand. Our system is unique in that two remote users can perform real-time hand-contact interaction in a social telepresence environment.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.61
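
The abstract does not enumerate the geometric distance features, so the per-frame feature vector below is only our guess at the kind of input such a contact-intention classifier could use.

    import numpy as np

    def contact_features(pos_a, pos_b, vel_a, vel_b):
        # Hand positions and velocities of the two users as 3-vectors.
        rel = pos_b - pos_a
        dist = np.linalg.norm(rel)
        # Closing speed: positive when the two hands approach each other.
        closing = -float((vel_b - vel_a) @ rel) / (dist + 1e-9)
        # Alignment of user A's hand motion with the direction to B's hand.
        align = float(vel_a @ rel) / ((np.linalg.norm(vel_a) + 1e-9) * (dist + 1e-9))
        return np.array([dist, closing, align])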

[POSTER] Augmented Reality for Radiation Awareness
Nicola Leucht, S. Habert, P. Wucherer, S. Weidert, Nassir Navab, P. Fallavollita
C-arm fluoroscopes are frequently used for intraoperative guidance during surgery. Unfortunately, due to X-ray emission and scattering, increased radiation exposure occurs in the operating theatre. The objective of this work is to sensitize surgeons to their radiation exposure, enable them to check their exposure over time, and help them choose the best position relative to the C-arm gantry during surgery. First, we simulate the amount of radiation that reaches the surgeon using Geant4, a toolkit developed at CERN. Using a flexible setup in which two RGB-D cameras are mounted on the mobile C-arm, the scene is captured and modeled. After simulating particles with specific energies, the dose at the surgeon's position, determined by the depth cameras, can be measured. Validation was performed by comparing the simulation results to theoretical values from the C-arm's user manual and to real measurements made with a QUART didoSVM dosimeter; the average errors were 16.46% and 16.39%, respectively. The proposed flexible setup, achieving high simulation precision without calibration against measured dosimeter values, has great potential to be used and integrated intraoperatively for dose measurement.
2015 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). DOI: 10.1109/ISMAR.2015.21
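
The paper relies on full Monte Carlo particle transport in Geant4 (a C++ toolkit); as a deliberately crude point of comparison, scattered dose at the surgeon's position is often approximated with an inverse-square falloff from the scattering site, for example:

    import numpy as np

    def scatter_dose_estimate(dose_rate_at_1m, surgeon_pos, scatter_pos):
        # dose_rate_at_1m: scattered dose rate assumed at 1 m from the
        # patient entrance point, which dominates the scatter in theatre.
        r = np.linalg.norm(np.asarray(surgeon_pos) - np.asarray(scatter_pos))
        return dose_rate_at_1m / max(r, 0.1) ** 2  # inverse-square falloff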