We propose to demonstrate a ubiquitous immersive virtual reality system that is highly scalable and accessible to a broad audience. With the advent of handheld and wearable devices, virtual reality has gained considerable popularity with the general public. We present a practical design of such a system that offers the core affordances of immersive virtual reality in a portable, untethered configuration. In addition, we have developed an extensive immersive virtual experience that engages users both visually and aurally. This is an effort towards integrating VR into the space and time of users' workflows.
Aryabrata Basu and K. Johnsen, "Ubiquitous virtual reality ‘To-Go’," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802101
Takuto Nakamura, Narihiro Nishimura, Michi Sato, H. Kajimoto
When a wire hanger is placed sideways on the head so that it squeezes the temporal region, the head rotates unexpectedly. This phenomenon has been named the “Hanger Reflex”. Although it is a simple method for producing a pseudo-force sensation, its use has so far been limited to the head. Here we report a new finding: when the wrist or waist is equipped with a device of correspondingly larger circumference, the arm or the body rotates involuntarily. This suggests that the Hanger Reflex principle may be applicable to parts of the body other than the head, opening the way to a compact whole-body force display. This paper documents the development and testing of the devices and suggests that they can present the rotational force stably.
Takuto Nakamura, Narihiro Nishimura, Michi Sato, and H. Kajimoto, "Application of Hanger Reflex to wrist and waist," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802111
We conducted an experiment to generate baseline accuracy and precision values for optical see-through (OST) head-mounted display (HMD) calibration in the absence of human postural sway error. This preliminary work will act as a control condition for future studies of postural error reduction. An experimental apparatus was constructed to allow a SPAAM calibration to be performed using 25 alignments taken with one of three distance distribution patterns: static, sequential, and magic square. The accuracy of the calibrations was determined by calculating the extrinsic X, Y, Z translation values from the resulting projection matrix. The standard deviation of each translation component was also calculated. The results show that the magic square distribution produced the most accurate parameter estimation and the smallest standard deviation for each extrinsic translation component.
Kenneth R. Moser, Magnus Axholt, and J. Swan, "Baseline SPAAM calibration accuracy and precision in the absence of human postural sway error," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802070
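For readers unfamiliar with the last analysis step, the sketch below shows one common way to recover the extrinsic translation from a 3x4 projection matrix P = K[R | t], such as the one produced by a SPAAM calibration, using an RQ decomposition. It is a minimal illustration under that standard decomposition, with our own function names; it is not the authors' code.

```python
import numpy as np
from scipy.linalg import rq

def extract_extrinsic_translation(P):
    """Split a 3x4 projection matrix P = K [R | t] into intrinsics and
    extrinsics, and return the translation component t = (X, Y, Z)."""
    M, p4 = P[:, :3], P[:, 3]
    K, R = rq(M)                       # RQ decomposition: K upper-triangular, R orthonormal
    S = np.diag(np.sign(np.diag(K)))   # force a positive diagonal on K so the split is unique
    K, R = K @ S, S @ R
    t = np.linalg.solve(K, p4)         # t = K^-1 * (4th column of P)
    return t

def summarize(calibrations):
    """Mean (accuracy) and standard deviation (precision) of each
    translation component over a set of repeated calibrations."""
    T = np.array([extract_extrinsic_translation(P) for P in calibrations])
    return T.mean(axis=0), T.std(axis=0)
```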
Alessandro Febretti, Arthur Nishimoto, V. Mateevitsi, L. Renambot, Andrew E. Johnson, J. Leigh
In the domain of large-scale visualization instruments, hybrid reality environments (HREs) are a recent innovation that combines the best-in-class capabilities of immersive environments with those of ultra-high-resolution display walls. HREs create a seamless 2D/3D environment that supports both information-rich analysis and virtual reality simulation exploration at a resolution matching human visual acuity. Co-located research groups in HREs tend to work on a variety of tasks during a research session (sometimes in parallel), and these tasks require 2D data views, 3D views, linking between them, and the ability to bring in (or hide) data quickly as needed. In this paper we present Omegalib, a software framework that facilitates application development on HREs. Omegalib is designed to support dynamic reconfigurability of the display environment, so that areas of the display can be interactively allocated to 2D or 3D workspaces as needed. Compared to existing frameworks and toolkits, Omegalib makes it possible to have multiple immersive applications running on a cluster-controlled display system, have different input sources dynamically routed to applications, and have rendering results optionally redirected to a distributed compositing manager. Omegalib supports pluggable front-ends to simplify the integration of third-party libraries like OpenGL, OpenSceneGraph, and the Visualization Toolkit (VTK). We present examples of applications developed with Omegalib for the 74-megapixel, 72-tile CAVE2™ system, and show how a hybrid reality environment proved effective in supporting work for a co-located research group in the environmental sciences.
Alessandro Febretti, Arthur Nishimoto, V. Mateevitsi, L. Renambot, Andrew E. Johnson, and J. Leigh, "Omegalib: A multi-view application framework for hybrid reality display environments," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802043
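As a rough illustration of the kind of dynamic workspace allocation described above, and emphatically not Omegalib's actual API, the following sketch models a tiled display wall whose tiles can be claimed by 2D or 3D workspaces at runtime; every class, method, and parameter name here is hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Workspace:
    """A rectangular region of the tiled wall allocated to one application."""
    name: str
    mode: str                                  # "2D" (desktop-style view) or "3D" (immersive view)
    tiles: set = field(default_factory=set)    # (column, row) indices of owned tiles

class TiledWall:
    """Toy model of allocating wall tiles to 2D/3D workspaces on demand."""
    def __init__(self, cols, rows):
        self.free = {(c, r) for c in range(cols) for r in range(rows)}
        self.workspaces = []

    def allocate(self, name, mode, tile_rect):
        c0, r0, c1, r1 = tile_rect
        wanted = {(c, r) for c in range(c0, c1 + 1) for r in range(r0, r1 + 1)}
        if not wanted <= self.free:
            raise ValueError("requested tiles already in use")
        self.free -= wanted
        ws = Workspace(name, mode, wanted)
        self.workspaces.append(ws)
        return ws

    def release(self, ws):
        self.free |= ws.tiles
        self.workspaces.remove(ws)

# Example: reserve the left half of an 18x4 wall for an immersive 3D view
# and one column for a 2D document viewer.
wall = TiledWall(cols=18, rows=4)
wall.allocate("flow-sim", "3D", (0, 0, 8, 3))
wall.allocate("notes", "2D", (9, 0, 9, 3))
```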
Mahdi Azmandian, Rhys Yahata, M. Bolas, Evan A. Suma
Redirected walking techniques enable natural locomotion through immersive virtual environments that are considerably larger than the available real-world walking space. However, the most effective strategy for steering the user remains an open question, as most previously presented algorithms simply redirect toward the center of the physical space. In this work, we present a theoretical framework that plans a walking path through a virtual environment and calculates the parameters for combining translation, rotation, and curvature gains such that the user can traverse a series of defined waypoints efficiently, based on a utility function that minimizes the number of overt reorientations in order to avoid introducing potential breaks in presence. A notable advantage of this approach is that it leverages knowledge of the layout of both the physical and virtual environments to enhance the steering strategy.
Mahdi Azmandian, Rhys Yahata, M. Bolas, and Evan A. Suma, "An enhanced steering algorithm for redirected walking in virtual environments," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802053
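The waypoint planning and the utility function are the paper's contribution and are not reproduced here, but the sketch below illustrates how the three standard redirection gains (translation, rotation, curvature) are commonly applied each frame; the function name and conventions are ours, under the usual definition that a curvature radius r injects one radian of virtual yaw per r metres walked.

```python
import numpy as np

def redirect_step(virtual_pos, virtual_heading, real_step, real_turn,
                  g_t=1.0, g_r=1.0, curvature_radius=np.inf):
    """Apply one frame of redirected walking.

    real_step        -- distance walked in the physical space this frame (metres)
    real_turn        -- head yaw change in the physical space this frame (radians)
    g_t, g_r         -- translation and rotation gains
    curvature_radius -- radius (metres) of the arc injected while walking;
                        np.inf disables the curvature gain
    """
    # Curvature gain: bend the virtual path by an extra yaw proportional
    # to the distance walked.
    curvature_turn = real_step / curvature_radius if np.isfinite(curvature_radius) else 0.0
    virtual_heading += g_r * real_turn + curvature_turn
    # Translation gain: scale the walked distance along the virtual heading.
    direction = np.array([np.cos(virtual_heading), np.sin(virtual_heading)])
    virtual_pos = virtual_pos + g_t * real_step * direction
    return virtual_pos, virtual_heading
```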
D. Krum, Thai-Binh Phan, Lauren Cairco, Peter Wang, M. Bolas
Our demo addresses the need in immersive virtual reality for devices that support expressive and adaptive interaction in a low-cost, eyes-free manner. Leveraging rapid prototyping techniques for fabrication, we have developed a variety of panels that can be overlaid on multi-touch tablets and smartphones. The panels are coupled with an app running on the multi-touch device that exchanges commands and state information over a wireless network with the virtual reality application. Sculpted features of the panels provide tactile disambiguation of control widgets, and an onscreen heads-up display provides interaction-state information. A variety of interaction mappings can be provided through software to support several classes of interaction techniques in virtual environments. We foresee additional uses in applications where eyes-free operation and adaptable interaction interfaces are beneficial.
D. Krum, Thai-Binh Phan, Lauren Cairco, Peter Wang, and M. Bolas, "A demonstration of tablet-based interaction panels for immersive environments," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802108
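The abstract does not specify the wire protocol, so the following is only a hypothetical sketch of the kind of command/state exchange such a panel app might use, here as JSON messages over UDP; the message fields and the address of the VR application are assumptions made for illustration.

```python
import json
import socket

# Assumed address of the VR application on the wireless network.
VR_HOST, VR_PORT = "192.168.0.10", 9000

def send_widget_event(sock, widget_id, value):
    """Tablet side: report that a tactile control widget changed state."""
    msg = {"type": "widget", "id": widget_id, "value": value}
    sock.sendto(json.dumps(msg).encode("utf-8"), (VR_HOST, VR_PORT))

def receive_state(sock):
    """Tablet side: receive an interaction-state update for the heads-up display."""
    data, _ = sock.recvfrom(4096)
    return json.loads(data.decode("utf-8"))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_widget_event(sock, widget_id="slider_scale", value=0.75)
```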
E. Burns, David Easter, Rob Chadwick, David A. Smith, Carl Rosengrant
Software distribution and installation pose a logistical problem for large enterprises. Web applications are often a good solution because users can instantly receive application updates on any device without needing special permissions to install them on their hardware. Until recently, it was not possible to create 3D multiuser virtual environment-based web applications that did not require installing a browser plugin; recent web standards have made this possible. We present the Virtual World Framework (VWF), a software framework for creating 3D multiuser web applications. We are using VWF to create applications for team training and collaboration. VWF can be downloaded at http://virtual.wf.
E. Burns, David Easter, Rob Chadwick, David A. Smith, and Carl Rosengrant, "The Virtual World Framework: Collaborative virtual environments on the web," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802103
A large body of literature has analyzed differences between perception in the real world and in virtual environments (VEs) in terms of space, distance, and speed perception. To our knowledge, however, no empirical data has been collected on time misperception in immersive VEs. There is evidence that time perception can deviate from veridical judgments, for instance due to visual or auditory stimulation related to motion misperception. In this work we evaluate time perception during walking with a pilot study in an immersive head-mounted display (HMD) environment. We did not observe significant differences between time judgments in the real and the virtual environment.
G. Bruder and Frank Steinicke, "Time perception during walking in virtual environments," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802054
Ana-Despina Tudor, Ilinca Mustatea, Sandra Poeschl, N. Döring
Presentation skills that involve public speaking are an asset that many recognize as important for their careers or studies. One way to learn how to maintain eye contact and address an audience clearly as a speaker is to use virtual audiences (VAs) that simulate the reactions of a live public. A mixed-methods exploratory study was conducted to inform the design of such a VA. The purpose was to investigate how the nonverbal cues of live audiences vary depending on a speaker's gaze patterns (gazing towards the audience vs. gazing towards the presentation slides or notes) and vocal loudness (low vs. normal). Thirty-six students (listeners) were videotaped during a public-speaking situation. The analysis shows that the speaker's gaze patterns and vocal loudness influenced several nonverbal cues displayed by the audience. The results could be applied to the design of VAs by making them respond in real time to variations in the gaze patterns and vocal loudness of speakers (trainees).
Ana-Despina Tudor, Ilinca Mustatea, Sandra Poeschl, and N. Döring, "Responsive audiences — Nonverbal cues as reactions to a speaker's behavior," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802080
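One possible way to act on such findings, sketched below purely as an illustration (the thresholds and behaviours are invented, not taken from the study), is a simple real-time mapping from estimated speaker gaze and loudness to audience nonverbal cues.

```python
import random

def audience_reaction(gaze_on_audience: float, loudness_db: float) -> str:
    """Pick a nonverbal cue for one virtual listener given speaker behaviour.

    gaze_on_audience -- fraction of recent time the speaker looked at the audience (0..1)
    loudness_db      -- estimated speech level in dB
    """
    if gaze_on_audience < 0.3:
        # Speaker mostly reads slides or notes: listeners disengage.
        return random.choice(["look_away", "shift_posture", "check_phone"])
    if loudness_db < 55:
        # Quiet delivery: listeners lean in or frown with the effort to hear.
        return random.choice(["lean_forward", "frown"])
    # Engaged delivery: listeners show attentive cues.
    return random.choice(["nod", "maintain_eye_contact", "smile"])
```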
Bui Minh Khuong, K. Kiyokawa, Andrew Miller, Joseph J. La Viola, T. Mashita, H. Takemura
This study evaluates the effectiveness of an AR-based context-aware assembly support system using two proposed AR visualization modes for object assembly. Although many AR-based assembly support systems have been proposed, few keep track of the assembly status in real time and automatically recognize error and completion states at each step. Consequently, the effectiveness of such context-aware systems remains unexplored. Our test-bed system displays guidance information and error-detection information corresponding to the recognized assembly status in the context of building-block (LEGO) assembly. A user wearing a head-mounted display (HMD) can intuitively build a block structure on a table by visually confirming correct and incorrect blocks and locating where to attach new blocks. We propose two AR visualization modes: one displays guidance information directly overlaid on the physical model, while the other renders guidance information on a virtual model adjacent to the real model. An evaluation was conducted to compare these AR visualization modes and to determine the effectiveness of context-aware error detection. Our experimental results indicate that the visualization mode showing the target status next to the real objects of concern outperforms the traditional direct overlay under moderate registration accuracy with marker-based tracking.
Bui Minh Khuong, K. Kiyokawa, Andrew Miller, Joseph J. La Viola, T. Mashita, and H. Takemura, "The effectiveness of an AR-based context-aware assembly support system in object assembly," 2014 IEEE Virtual Reality (VR). doi:10.1109/VR.2014.6802051
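As a minimal sketch of the kind of context-aware error detection described (not the authors' implementation, and assuming an external tracker that reports block placements each frame), the state of one assembly step can be classified by comparing the detected placements against the target model.

```python
def check_step(detected, target_step):
    """Classify the assembly state for one instruction step.

    detected    -- set of (x, y, z, colour) block placements observed by the tracker
    target_step -- set of (x, y, z, colour) placements that should be present
    Returns a status string and the set of blocks that need attention.
    """
    missing = target_step - detected   # blocks still to be attached
    wrong = detected - target_step     # blocks placed incorrectly
    if wrong:
        return "error", wrong
    if missing:
        return "in_progress", missing
    return "complete", set()

# The renderer can then show 'missing' as ghost blocks and highlight 'wrong'
# blocks, either directly overlaid on the physical model or on a virtual copy
# placed next to it (the two visualization modes compared in the paper).
```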