Accurate and robust planar tracking based on a model of image sampling and reconstruction process
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092364
Eisuke Ito, Takayuki Okatani, K. Deguchi
Tracking a planar object that moves relative to a camera, accurately and robustly, is a central problem in augmented reality and computer vision. Previous studies have identified several factors that make such tracking difficult, such as illumination change and motion blur, and have proposed effective solutions for them. In this paper, we point out that degradation of effective image resolution can also hurt tracking performance; it typically occurs when the tracked plane has an oblique pose with respect to the viewing direction or when it moves far from the camera. The degradation tends to be especially severe in extreme configurations, e.g., when the planar object is nearly at a right angle to the viewing direction. Such configurations occur frequently in AR applications aimed at ordinary users. To cope with this problem, we model the sampling and reconstruction process of images and present a tracking algorithm that incorporates this model to handle these configurations correctly. Several experiments show that the proposed method outperforms conventional methods.
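The resolution loss described in this abstract can be made concrete with a short calculation: under a homography, the local area scale factor at a template point is det(H) divided by the cube of the point's projective denominator, so an oblique or distant pose pushes the factor far below 1 and the template becomes undersampled. The following numpy sketch illustrates the geometric effect only; it is not the authors' algorithm, and the example homography is made up.

import numpy as np

def local_scale(H, x, y):
    """Area scale factor |det J| of the homography at template point (x, y).

    For p' = H p (homogeneous coordinates), the Jacobian determinant of the
    induced 2D map is det(H) / w^3, where w is the third homogeneous
    coordinate of H p. Values well below 1 mean the warped template is
    undersampled in the camera image.
    """
    w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
    return abs(np.linalg.det(H)) / abs(w) ** 3

# Hypothetical homography for a strongly oblique, distant plane.
H = np.array([[0.6,  0.05,  120.0],
              [0.02, 0.25,   80.0],
              [0.0,  1.5e-3,  1.0]])

xs, ys = np.meshgrid(np.linspace(0, 256, 9), np.linspace(0, 256, 9))
scales = local_scale(H, xs, ys)
print("min/max area scale over the template:", scales.min(), scales.max())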
{"title":"Accurate and robust planar tracking based on a model of image sampling and reconstruction process","authors":"Eisuke Ito, Takayuki Okatani, K. Deguchi","doi":"10.1109/ISMAR.2011.6092364","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092364","url":null,"abstract":"It is one of the central issues in augmented reality and computer vision to track a planar object moving relatively to a camera in an accurate and robust manner. In previous studies, it was pointed out that there are several factors making the tracking difficult, such as illumination change and motion blur, and effective solutions were proposed for them. In this paper, we point out that degradation in effective image resolution can also deteriorate tracking performance, which typically occurs when the plane being tracked has an oblique pose with respect to the viewing direction, or when it moves to a distant location from the camera. The deterioration tends to become significantly large for extreme configurations, e.g., when the planar object has nearly a right angle with the viewing direction. Such configurations can frequently occur in AR applications targeted at ordinary users. To cope with this problem, we model the sampling and reconstruction process of images, and present a tracking algorithm that incorporates the model to correctly handle these configurations. We show through several experiments that the proposed method shows better performance than conventional methods.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121733158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deformable random dot markers
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092394
Hideaki Uchiyama, É. Marchand
We extend planar fiducial markers based on random dots [8] to nonrigidly deformable markers. Because recognition and tracking of random dot markers are based on keypoint matching, we can estimate marker deformation through nonrigid surface detection from the keypoint correspondences. First, the initial pose of each marker is computed from a homography estimated with RANSAC, treating the marker as planar. Second, deformations are estimated by minimizing a cost function for deformable surface fitting. We show augmentation results of 2D surface deformation recovery with several markers.
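The planar detection step mentioned in the abstract, a homography estimated with RANSAC from keypoint correspondences, can be sketched with OpenCV as follows. The point sets, marker size, and reprojection threshold are placeholders, not values from the paper.

import cv2
import numpy as np

# pts_marker: Nx2 dot positions on the flat marker; pts_image: Nx2 matched
# keypoint positions in the camera image (both are hypothetical inputs here).
pts_marker = (np.random.rand(50, 2) * 100).astype(np.float32)
pts_image = pts_marker + np.float32([200, 150])  # stand-in "observations"

# Robust planar fit: RANSAC rejects mismatched correspondences.
H, inlier_mask = cv2.findHomography(pts_marker, pts_image, cv2.RANSAC, 3.0)

# Project the marker outline into the image to initialize the rigid pose
# before a deformable surface fitting stage would refine it.
corners = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]]).reshape(-1, 1, 2)
outline = cv2.perspectiveTransform(corners, H)
print(outline.reshape(-1, 2))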
{"title":"Deformable random dot markers","authors":"Hideaki Uchiyama, É. Marchand","doi":"10.1109/ISMAR.2011.6092394","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092394","url":null,"abstract":"We extend planar fiducial markers using random dots [8] to nonrigidly deformable markers. Because the recognition and tracking of random dot markers are based on keypoint matching, we can estimate the deformation of the markers with nonrigid surface detection from keypoint correspondences. First, the initial pose of the markers is computed from a homography with RANSAC as a planar detection. Second, deformations are estimated from the minimization of a cost function for deformable surface fitting. We show augmentation results of 2D surface deformation recovery with several markers.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121108051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image-based clothes transfer
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092383
Stefan Hauswiesner, M. Straka, Gerhard Reitmayr
Virtual dressing rooms for the fashion industry and digital entertainment applications aim at creating an image or a video of a user in which he or she wears different garments than in the real world. Such images can be displayed, for example, in a magic-mirror shopping application or in games and movies. Current solutions involve the error-prone task of body pose tracking. We suggest an approach that allows users who are captured by a set of cameras to be virtually dressed with previously recorded garments in 3D. By using image-based algorithms, we can bypass critical components of other systems, in particular tracking based on skeleton models. Instead, we transfer the appearance of a garment from one user to another through image processing and image-based rendering. Using images of real garments allows for photo-realistic rendering quality at high performance.
{"title":"Image-based clothes transfer","authors":"Stefan Hauswiesner, M. Straka, Gerhard Reitmayr","doi":"10.1109/ISMAR.2011.6092383","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092383","url":null,"abstract":"Virtual dressing rooms for the fashion industry and digital entertainment applications aim at creating an image or a video of a user in which he or she wears different garments than in the real world. Such images can be displayed, for example, in a magic mirror shopping application or in games and movies. Current solutions involve the error-prone task of body pose tracking. We suggest an approach that allows users who are captured by a set of cameras to be virtually dressed with previously recorded garments in 3D. By using image-based algorithms, we can bypass critical components of other systems, especially tracking based on skeleton models. We rather transfer the appearance of a garment from one user to another by image processing and image-based rendering. Using images of real garments allows for photo-realistic rendering quality with high performance.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115793437","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual transparency: Introducing parallax view into video see-through AR
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092395
A. Hill, Jacob Schiefer, Jeff Wilson, Brian Davidson, Maribeth Gandy Coleman, B. MacIntyre
In this poster, we present the idea of “virtual transparency” for video see-through AR. In fully synthetic 3D graphics, head-tracked motion parallax has been shown to be a powerful depth cue for understanding the structure of the virtual world. To leverage head-tracked motion parallax in video see-through AR, the view of the virtual and physical world must change together in response to head motion. We present a system for accomplishing this, and discuss the benefits and limitations of our approach.
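Head-coupled motion parallax of the kind described here is commonly realized by recomputing an asymmetric (off-axis) view frustum from the tracked eye position relative to the display each frame. The sketch below shows that standard construction for a screen centered at the origin of its own plane; it illustrates the general technique only and is not the authors' system.

def off_axis_frustum(eye, half_w, half_h, near, far):
    """Asymmetric frustum (left, right, bottom, top, near, far) for a screen
    centered at the origin in the z = 0 plane, viewed from eye = (ex, ey, ez)
    with ez > 0 (eye in front of the screen). Feed the result to a
    glFrustum-style projection; as the head moves, the frustum skews and the
    rendered scene shows motion parallax consistent with the screen.
    """
    ex, ey, ez = eye
    scale = near / ez  # similar triangles: screen plane -> near plane
    left = (-half_w - ex) * scale
    right = (half_w - ex) * scale
    bottom = (-half_h - ey) * scale
    top = (half_h - ey) * scale
    return left, right, bottom, top, near, far

# Head tracked 10 cm to the right of, and 60 cm away from, a 40x30 cm screen.
print(off_axis_frustum((0.10, 0.0, 0.60), 0.20, 0.15, 0.05, 100.0))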
{"title":"Virtual transparency: Introducing parallax view into video see-through AR","authors":"A. Hill, Jacob Schiefer, Jeff Wilson, Brian Davidson, Maribeth Gandy Coleman, B. MacIntyre","doi":"10.1109/ISMAR.2011.6092395","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092395","url":null,"abstract":"In this poster, we present the idea of “virtual transparency” for video see-through AR. In fully synthetic 3D graphics, head-tracked motion parallax has been shown to be a powerful depth cue for understanding the structure of the virtual world. To leverage head-tracked motion parallax in video see-through AR, the view of the virtual and physical world must change together in response to head motion. We present a system for accomplishing this, and discuss the benefits and limitations of our approach.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126164732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Out of reach? — A novel AR interface approach for motor rehabilitation
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092389
H. Regenbrecht, G. McGregor, Claudia Ott, S. Hoermann, Thomas W. Schubert, L. Hale, Julia Hoermann, Brian Dixon, E. Franz
Mixed reality rehabilitation systems and games are demonstrating potential as innovative adjunctive therapies for health professionals in their treatment of various hand and upper limb motor impairments. Unilateral motor deficits of the arm, for example, are commonly experienced post stroke. Our TheraMem system provides an augmented reality game environment that contributes to this increasingly rich area of research. We present a prototype system which “fools the brain” by visually amplifying users' hand movements — small actual hand movements lead to perceived larger movements. We validate the usability of our system in an empirical study with forty-five non-clinical participants. In addition, we present early qualitative evidence for the utility of our approach and system for stroke recovery and motor rehabilitation. Future uses of the system are considered by way of conclusion.
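The visual amplification described above can be expressed as a simple gain applied to the hand displacement around an anchor point. The sketch below illustrates the general idea; the gain value and anchor are hypothetical, not parameters from the paper.

import numpy as np

def amplify(hand_pos, anchor, gain=2.5):
    """Map a tracked hand position to a displayed one.

    The displayed hand moves `gain` times farther from `anchor` than the
    real hand does, so small real movements are perceived as larger ones.
    The gain value here is hypothetical.
    """
    hand_pos = np.asarray(hand_pos, dtype=float)
    anchor = np.asarray(anchor, dtype=float)
    return anchor + gain * (hand_pos - anchor)

# Real hand moved 4 cm right of the anchor; the virtual hand shows 10 cm.
print(amplify([0.04, 0.0, 0.0], [0.0, 0.0, 0.0]))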
{"title":"Out of reach? — A novel AR interface approach for motor rehabilitation","authors":"H. Regenbrecht, G. McGregor, Claudia Ott, S. Hoermann, Thomas W. Schubert, L. Hale, Julia Hoermann, Brian Dixon, E. Franz","doi":"10.1109/ISMAR.2011.6092389","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092389","url":null,"abstract":"Mixed reality rehabilitation systems and games are demonstrating potential as innovative adjunctive therapies for health professionals in their treatment of various hand and upper limb motor impairments. Unilateral motor deficits of the arm, for example, are commonly experienced post stroke. Our TheraMem system provides an augmented reality game environment that contributes to this increasingly rich area of research. We present a prototype system which “fools the brain” by visually amplifying users' hand movements — small actual hand movements lead to perceived larger movements. We validate the usability of our system in an empirical study with forty-five non-clinical participants. In addition, we present early qualitative evidence for the utility of our approach and system for stroke recovery and motor rehabilitation. Future uses of the system are considered by way of conclusion.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"2014 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121066318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Providing guidance for maintenance operations using automatic markerless Augmented Reality system
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092385
H. Álvarez, I. Aguinaga, D. Borro
This paper proposes a new real-time Augmented Reality tool that supports disassembly in maintenance operations. The tool provides workers with augmented instructions so that they can perform maintenance tasks more efficiently. Our prototype is a complete framework whose distinguishing capability is to generate all necessary data automatically from untextured 3D triangle meshes, without additional user intervention. An automatic offline stage extracts the basic geometric features, which are used during the online stage to compute the camera pose from a monocular image. Thus, we can handle the textureless 3D models commonly used in industrial applications. A self-contained, robust markerless tracking system that combines an edge tracker, a point-based tracker, and a 3D particle filter has also been designed to continuously update the camera pose. Our framework further incorporates an automatic path-planning module: during the offline stage, the assembly/disassembly sequence is deduced automatically from the 3D model geometry, and this information is used to generate the disassembly instructions for workers.
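A particle filter of the kind mentioned in the abstract maintains a set of pose hypotheses, perturbs them with a motion model, reweights them by how well each hypothesis explains the current image, and resamples when the weights degenerate. The sketch below is a generic skeleton of that loop with a placeholder likelihood; it is not the paper's tracker, which fuses edge and point cues.

import numpy as np

def particle_filter_step(particles, weights, likelihood, motion_noise=0.01):
    """One predict-update-resample step over an array of pose particles.

    particles: (N, D) array of pose hypotheses (e.g. D = 6 for a small-angle
    rotation plus translation parameterization). likelihood(p) scores how
    well hypothesis p explains the current image; it is a placeholder here.
    """
    # Predict: diffuse the hypotheses with a random-walk motion model.
    particles = particles + np.random.normal(0.0, motion_noise, particles.shape)

    # Update: reweight by the measurement likelihood and normalize.
    weights = weights * np.array([likelihood(p) for p in particles])
    weights = weights / (weights.sum() + 1e-12)

    # Resample (multinomial) when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))

    return particles, weights

# Toy usage: 200 particles, likelihood peaked at the all-zero pose.
N, D = 200, 6
parts = np.random.normal(0.0, 0.05, (N, D))
w = np.full(N, 1.0 / N)
toy_likelihood = lambda p: np.exp(-50.0 * np.dot(p, p))
parts, w = particle_filter_step(parts, w, toy_likelihood)
print("weighted mean pose estimate:", (w[:, None] * parts).sum(axis=0))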
{"title":"Providing guidance for maintenance operations using automatic markerless Augmented Reality system","authors":"H. Álvarez, I. Aguinaga, D. Borro","doi":"10.1109/ISMAR.2011.6092385","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092385","url":null,"abstract":"This paper proposes a new real-time Augmented Reality based tool to help in disassembly for maintenance operations. This tool provides workers with augmented instructions to perform maintenance tasks more efficiently. Our prototype is a complete framework characterized by its capability to automatically generate all the necessary data from input based on untextured 3D triangle meshes, without requiring additional user intervention. An automatic offline stage extracts the basic geometric features. These are used during the online stage to compute the camera pose from a monocular image. Thus, we can handle the usual textureless 3D models used in industrial applications. A self-supplied and robust markerless tracking system that combines an edge tracker, a point based tracker and a 3D particle filter has also been designed to continuously update the camera pose. Our framework incorporates an automatic path-planning module. During the offline stage, the assembly/disassembly sequence is automatically deduced from the 3D model geometry. This information is used to generate the disassembly instructions for workers.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"21 11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128535502","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive visualization technique for truthful color reproduction in spatial augmented reality applications
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092381
Christoffer Menk, R. Koch
Spatial augmented reality is especially interesting for the design process of a car, because it combines a large amount of virtual content with corresponding real objects. One important requirement in such a process is that the designer can trust the colors visualized on the real object, because design decisions are made on the basis of the projection. In this article, we present an interactive visualization technique that computes the exact RGB values for the projected image, so that the resulting colors on the real object are perceived as equal to the desired colors. Our approach computes the influence of the ambient light, the material, and the pose and color model of the projector on the colors that result from the projected RGB values, using a physically based computation. This information allows us to adjust the RGB values for varying projector positions at interactive rates. Since the range of projectable colors depends not only on the material and the ambient light but also on the pose of the projector, our method can be used to interactively adjust this range by moving the projector to arbitrary positions around the real object. The proposed method is evaluated in a number of experiments.
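The physically based computation described here is often written as a linear radiometric model: the color C observed on the surface satisfies C = V p + a, where p is the projector RGB input, V is a 3x3 mixing matrix folding together the projector color model and the surface material, and a is the ambient contribution; inverting the model yields the projector input for a desired color, and clipping reveals colors outside the projectable range. The numpy sketch below implements this textbook formulation with made-up numbers; the paper's model additionally accounts for the projector pose.

import numpy as np

def compensate(desired_rgb, V, ambient):
    """Projector input p for a desired surface color under C = V p + ambient.

    V (3x3) folds the projector color model and surface reflectance together;
    ambient is the light reaching the surface with the projector off. Values
    clipped to [0, 1] indicate that the desired color lies outside the
    projectable gamut for this material/pose/ambient combination.
    """
    p = np.linalg.solve(V, np.asarray(desired_rgb, dtype=float) - ambient)
    return np.clip(p, 0.0, 1.0)

# Made-up measurements for a reddish surface patch under weak ambient light.
V = np.array([[0.55, 0.08, 0.05],
              [0.06, 0.40, 0.07],
              [0.04, 0.05, 0.30]])
ambient = np.array([0.10, 0.08, 0.06])
print(compensate([0.5, 0.4, 0.3], V, ambient))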
{"title":"Interactive visualization technique for truthful color reproduction in spatial augmented reality applications","authors":"Christoffer Menk, R. Koch","doi":"10.1109/ISMAR.2011.6092381","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092381","url":null,"abstract":"Spatial augmented reality is especially interesting for the design process of a car, because a lot of virtual content and corresponding real objects are used. One important issue in such a process is that the designer can trust the visualized colors on the real object, because design decisions are made on basis of the projection. In this article, we present an interactive visualization technique which is able to exactly compute the RGB values for the projected image, so that the resulting colors on the real object are equally perceived as the real desired colors. Our approach computes the influences of the ambient light, the material, the pose and the color model of the projector to the resulting colors of the projected RGB values by using a physically-based computation. This information allows us to compute the adjustment for the RGB values for varying projector positions at interactive rates. Since the amount of projectable colors does not only depend on the material and the ambient light, but also on the pose of the projector, our method can be used to interactively adjust the range of projectable colors by moving the projector to arbitrary positions around the real object. The proposed method is evaluated in a number of experiments.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117056953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Real-time self-localization from panoramic images on mobile devices
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092368
Clemens Arth, Manfred Klopschitz, Gerhard Reitmayr, D. Schmalstieg
Self-localization in large environments is a vital task for accurately registered information visualization in outdoor Augmented Reality (AR) applications. In this work, we present a system for self-localization on mobile phones that uses a GPS prior and an online-generated panoramic view of the user's environment. The approach is suitable for execution entirely on current-generation mobile devices such as smartphones. Parallel execution of online incremental panorama generation and accurate 6DOF pose estimation from 3D point reconstructions allows real-time self-localization and registration in large-scale environments. The power of our approach is demonstrated in several experimental evaluations.
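The accurate 6DOF pose estimation from 2D-3D correspondences mentioned above is typically posed as a PnP problem. The OpenCV sketch below shows that generic step on synthetic data with an assumed pinhole intrinsic matrix; it does not reproduce the paper's panorama-based pipeline.

import cv2
import numpy as np

# Hypothetical calibrated pinhole camera (fx = fy = 800, 640x480 image).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Synthetic 3D points (standing in for an offline reconstruction) and their
# projections under a known ground-truth pose, used here as "matches".
pts3d = np.random.uniform([-2, -2, 4], [2, 2, 8], size=(60, 3)).astype(np.float32)
rvec_gt = np.float32([[0.05], [-0.10], [0.02]])
tvec_gt = np.float32([[0.20], [-0.10], [0.50]])
pts2d, _ = cv2.projectPoints(pts3d, rvec_gt, tvec_gt, K, np.zeros(5))
pts2d = pts2d.astype(np.float32)

# Robust 6DOF pose from the 2D-3D correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(pts3d, pts2d, K, None)
print(ok, rvec.ravel(), tvec.ravel())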
{"title":"Real-time self-localization from panoramic images on mobile devices","authors":"Clemens Arth, Manfred Klopschitz, Gerhard Reitmayr, D. Schmalstieg","doi":"10.1109/ISMAR.2011.6092368","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092368","url":null,"abstract":"Self-localization in large environments is a vital task for accurately registered information visualization in outdoor Augmented Reality (AR) applications. In this work, we present a system for self-localization on mobile phones using a GPS prior and an online-generated panoramic view of the user's environment. The approach is suitable for executing entirely on current generation mobile devices, such as smartphones. Parallel execution of online incremental panorama generation and accurate 6DOF pose estimation using 3D point reconstructions allows for real-time self-localization and registration in large-scale environments. The power of our approach is demonstrated in several experimental evaluations.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130535086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality in the psychomotor phase of a procedural task
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092386
S. Henderson, Steven K. Feiner
Procedural tasks are common to many domains, ranging from maintenance and repair, to medicine, to the arts. We describe and evaluate a prototype augmented reality (AR) user interface designed to assist users in the relatively under-explored psychomotor phase of procedural tasks. In this phase, the user begins physical manipulations, and thus alters aspects of the underlying task environment. Our prototype tracks the user and multiple components in a typical maintenance assembly task, and provides dynamic, prescriptive, overlaid instructions on a see-through head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete psychomotor aspects of the assembly task significantly faster and with significantly greater accuracy than when using 3D-graphics-based assistance presented on a stationary LCD. Qualitative questionnaire results indicate that participants overwhelmingly preferred the AR condition, and ranked it as more intuitive than the LCD condition.
{"title":"Augmented reality in the psychomotor phase of a procedural task","authors":"S. Henderson, Steven K. Feiner","doi":"10.1109/ISMAR.2011.6092386","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092386","url":null,"abstract":"Procedural tasks are common to many domains, ranging from maintenance and repair, to medicine, to the arts. We describe and evaluate a prototype augmented reality (AR) user interface designed to assist users in the relatively under-explored psychomotor phase of procedural tasks. In this phase, the user begins physical manipulations, and thus alters aspects of the underlying task environment. Our prototype tracks the user and multiple components in a typical maintenance assembly task, and provides dynamic, prescriptive, overlaid instructions on a see-through head-worn display in response to the user's ongoing activity. A user study shows participants were able to complete psychomotor aspects of the assembly task significantly faster and with significantly greater accuracy than when using 3D-graphics-based assistance presented on a stationary LCD. Qualitative questionnaire results indicate that participants overwhelmingly preferred the AR condition, and ranked it as more intuitive than the LCD condition.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128494012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MR in OR: First analysis of AR/VR visualization in 100 intra-operative Freehand SPECT acquisitions
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092388
A. Okur, Seyed-Ahmad Ahmadi, A. Bigdelou, T. Wendler, Nassir Navab
For the past two decades, medical Augmented Reality visualization has been researched and prototype systems have been tested in laboratory setups and limited clinical trials. To our knowledge, no commercial system incorporating Augmented Reality visualization had until now been developed and used routinely in a real-life surgical environment. In this paper, we report observations and analysis concerning the usage of a commercially developed and clinically approved Freehand SPECT system, which incorporates monitor-based Mixed Reality visualization, during real-life surgeries. The workflow-based analysis we present focuses on an atomic sub-task of sentinel lymph node biopsy. We analyzed how the surgical team used the Augmented and Virtual Reality visualization modes, while leaving the staff completely uninfluenced and unbiased in order to capture natural interaction with the system. We report on our observations in over 100 Freehand SPECT acquisitions within different phases of 52 surgeries.
{"title":"MR in OR: First analysis of AR/VR visualization in 100 intra-operative Freehand SPECT acquisitions","authors":"A. Okur, Seyed-Ahmad Ahmadi, A. Bigdelou, T. Wendler, Nassir Navab","doi":"10.1109/ISMAR.2011.6092388","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092388","url":null,"abstract":"For the past two decades, medical Augmented Reality visualization has been researched and prototype systems have been tested in laboratory setups and limited clinical trials. Up to our knowledge, until now, no commercial system incorporating Augmented Reality visualization has been developed and used routinely within the real-life surgical environment. In this paper, we are reporting on observations and analysis concerning the usage of a commercially developed and clinically approved Freehand SPECT system, which incorporates monitor-based Mixed Reality visualization, during real-life surgeries. The workflow-based analysis we present is focused on an atomic sub-task of sentinel lymph node biopsy. We analyzed the usage of the Augmented and Virtual Reality visualization modes by the surgical team, while leaving the staff completely uninfluenced and unbiased in order to capture the natural interaction with the system. We report on our observations in over 100 Freehand SPECT acquisitions within different phases of 52 surgeries.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"9 Suppl 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123669300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}