Alexandre Pereira, Gun A. Lee, Edson Almeida, M. Billinghurst
Augmented Reality (AR) is a technology that overlays virtual elements on the real world in real time. This research studies how different AR elements can help forklift operators locate pallets as quickly as possible in a warehouse environment. We developed a simulated AR environment to test egocentric and exocentric virtual navigation cues. The virtual elements were displayed either on a HUD (head-up display), fixed on the forklift windshield in front of the operator, or on an HMD (head-mounted display), where the virtual cues are attached to the user's head. A user study found that the egocentric AR view was preferred over the exocentric condition and performed better, while the HUD and HMD viewing methods produced no difference in performance.
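As an illustration of how an egocentric cue of this kind is typically computed (a hypothetical sketch, not the authors' implementation; all names are assumed), the snippet below derives the heading of a 2D guidance arrow from the forklift pose and the target pallet position on the warehouse floor.

```python
import math

def egocentric_arrow_angle(vehicle_pos, vehicle_heading_rad, target_pos):
    """Angle (radians) of a guidance arrow relative to the driver's forward axis.

    vehicle_pos, target_pos: (x, y) in warehouse floor coordinates.
    vehicle_heading_rad: vehicle yaw in the same frame.
    Returns 0 when the pallet is straight ahead, positive to the left.
    """
    dx = target_pos[0] - vehicle_pos[0]
    dy = target_pos[1] - vehicle_pos[1]
    bearing = math.atan2(dy, dx)              # world-frame direction to the pallet
    relative = bearing - vehicle_heading_rad  # express it in the vehicle frame
    # wrap to (-pi, pi] so the arrow always points the short way around
    return (relative + math.pi) % (2 * math.pi) - math.pi

# Example: pallet 10 m ahead and 5 m to the left of a forklift facing +x
print(math.degrees(egocentric_arrow_angle((0, 0), 0.0, (10, 5))))  # ~26.6 degrees
```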
{"title":"A Study in Virtual Navigation Cues for Forklift Operators","authors":"Alexandre Pereira, Gun A. Lee, Edson Almeida, M. Billinghurst","doi":"10.1109/SVR.2016.25","DOIUrl":"https://doi.org/10.1109/SVR.2016.25","url":null,"abstract":"Augmented Reality (AR) is a technology that can overlap virtual elements over the real world in real time. This research focuses on studying how different AR elements can help forklift operators locate pallets as quickly as possible in a warehouse environment. We have developed a simulated AR environment to test Egocentric or Exocentric virtual navigation cues. The virtual elements were displayed to the user in a HUD (head-up display) on the forklift windshield, fixed place in front of the user operator, or in a HMD (head-mounted display), where the virtual cues are attached to the head of the user. A user study found that the Egocentric AR view was preferred over the Exocentric condition and performed better while the HUD and HMD viewing methods produced no difference in performance.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128940236","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Francisco Simões, Mariana Bezerra, J. M. Teixeira, W. Correia, V. Teichrieb
Although the use of Augmented Reality in the rapid prototyping of digital artifacts is already a reality, its use in the design of physical artifacts has not yet reached its full potential and faces an important competitor in the diffusion of 3D printing. This work presents and applies a detailed method to understand and compare the applicability of Augmented Reality and physical (3D printed) prototypes during a product's design. The proposed method was developed from the user's perspective and was exercised through the development of an alarm clock radio, with both expert and non-expert users in the field. Based on the experiments and their analysis, this research discusses the qualities and drawbacks of Augmented Reality and 3D printing, exposing and validating different aspects of these tools for rapid prototyping.
{"title":"A User Perspective Analysis on Augmented vs 3D Printed Prototypes for Product's Project Design","authors":"Francisco Simões, Mariana Bezerra, J. M. Teixeira, W. Correia, V. Teichrieb","doi":"10.1109/SVR.2016.22","DOIUrl":"https://doi.org/10.1109/SVR.2016.22","url":null,"abstract":"Although Augmented Reality use in rapid prototyping of digital artifacts is already a reality, its usage in the design of physical artifacts did not reach its full potential and faces an important competitor with the diffusion of 3D printing. This work presents and applies a detailed method to understand and compare the applicability of Augmented Reality and physical prototypes (3D printed) during product's project design. The proposed method was developed based on user's perspective and it was simulated with the development of an alarm clock radio, using expert and non-expert users in the field. Based on the experiments and their analysis, this research discusses the many qualities and drawbacks from Augmented Reality and 3D printing, exposing and validating different aspects of these tools for rapid prototyping.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132493123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tiago Guimarães, J. B. C. Neto, C. Vidal
This work describes a technique for generating parametric surface meshes using parallel computing on distributed-memory processors. The technique can be used in many areas, such as Computer Graphics and Virtual Reality. The input to the algorithm is a set of parametric patches that model the surface of a given object. A spatial partitioning structure is proposed to decompose the domain into as many subdomains as there are processes in the parallel system. Each subdomain consists of a set of patches, and the division of its load is guided by an estimate; the decomposition attempts to balance the amount of work across all subdomains. The amount of work, or load, of any mesh generator is usually given as a function of its output size, i.e., the size of the generated mesh, so a technique to estimate this size, the total load of the domain, is needed beforehand. This work uses an analytical mean curvature computed for each patch as the input for this load estimate, and the decomposition is made from it. Once the domain is decomposed, each process generates the mesh of its subdomain, or set of patches, using a quadtree technique for inner regions and an advancing-front technique for border regions; finally, a mesh improvement step is applied. The technique showed good speed-up results, keeping the quality of the mesh comparable to that of the serially generated mesh.
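The decomposition step can be illustrated with the hypothetical sketch below (the exact load formula and names are assumptions, not the paper's code): each patch's load is estimated from its area and analytical mean curvature, and patches are then assigned greedily so that every process receives a roughly equal share of the estimated work.

```python
import heapq

def estimate_load(patch):
    """Estimated number of mesh elements a patch will produce.

    Illustrative assumption: load grows with patch area and with its
    analytical mean curvature (more curved regions need finer elements).
    """
    return patch["area"] * (1.0 + patch["mean_curvature"]) ** 2

def decompose(patches, n_processes):
    """Greedy longest-processing-time partition of patches into subdomains."""
    # min-heap of (accumulated_load, process_rank)
    heap = [(0.0, rank) for rank in range(n_processes)]
    heapq.heapify(heap)
    subdomains = [[] for _ in range(n_processes)]
    # assign the heaviest patches first, always to the least-loaded process
    for patch in sorted(patches, key=estimate_load, reverse=True):
        load, rank = heapq.heappop(heap)
        subdomains[rank].append(patch)
        heapq.heappush(heap, (load + estimate_load(patch), rank))
    return subdomains

patches = [{"area": a, "mean_curvature": c}
           for a, c in [(4.0, 0.1), (1.0, 2.0), (2.5, 0.5), (3.0, 1.2)]]
for rank, sub in enumerate(decompose(patches, 2)):
    print(rank, sum(estimate_load(p) for p in sub))  # roughly balanced loads
```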
{"title":"An Adaptive Mesh Generation Technique in Parallel for Virtual Reality Applications","authors":"Tiago Guimarães, J. B. C. Neto, C. Vidal","doi":"10.1109/SVR.2016.43","DOIUrl":"https://doi.org/10.1109/SVR.2016.43","url":null,"abstract":"This work describes a technique for generating parametric surfaces meshes using parallel computing, with distributed memory processors. This technique can be used in many areas, such as Computer Graphics and Virtual Reality, among others. The input for the algorithm is a set of parametric patches that model the surface of a given object. A structure for spatial partitioning is proposed to decompose the domain in as many subdomains as processes in the parallel system. Each subdomain consists of a set of patches and the division of its load is guided following an estimate. This decomposition attempts to balance the amount of work in all the subdomains. The amount of work, known as load, of any mesh generator is usually given as a function of its output size, i.e., the size of the generated mesh. Therefore, a technique to estimate the size of this mesh, the total load of the domain, is needed beforehand. This work makes use of an analytical average curvature calculated for each patch, which in turn is input data to estimate this load and the decomposition is made from this analytical mean curvature. Once the domain is decomposed, each process generates the mesh on that subdomain or set of patches by a quadtree technique for inner regions, advancing front technique for border regions and is finally applied an improvement to mesh generated. This technique presented good speed-up results, keeping the quality of the mesh comparable to the quality of the serially generated mesh.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134080812","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kleber A. Sousa, J. Neto, Edimo Sousa Silva, M. A. Rodrigues
In this paper, we present a gesture control system to support rehabilitation exercises using the Microsoft Kinect device. The graphical application consists of a static avatar (which represents the pose registered by a professional and to be performed by the user) and a dynamic avatar (which corresponds to the user's pose in real time). The avatars are modeled from the user's skeletal segments. To verify the match between the poses of the two avatars, we use the Cohen-Sutherland clipping algorithm, which quickly identifies whether or not a dynamic segment (straight line) is totally contained in a static segment (rectangular region). During the exercises, real-time, per-segment visual feedback is displayed in the system's interface, helping the user correct their posture and, consequently, complete the physical exercises successfully. Initial tests with participants showed that the application is effective, simple to use, low cost, and flexible, since it automatically adjusts itself to anatomical structures whose segments vary in size.
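The containment test described above can be sketched as follows (a simplified illustration under assumed data structures, not the authors' code): the static segment defines a rectangular tolerance region, Cohen-Sutherland outcodes are computed for both endpoints of the dynamic segment, and the segment is fully contained exactly when both outcodes are zero.

```python
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Cohen-Sutherland region code of a point with respect to a rectangle."""
    code = INSIDE
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def segment_inside_region(p0, p1, rect):
    """True if the whole dynamic segment lies inside the rectangular region.

    rect = (xmin, ymin, xmax, ymax), built around the static (target) segment
    with some tolerance. Both endpoints having outcode 0 means containment.
    """
    return outcode(*p0, *rect) == INSIDE and outcode(*p1, *rect) == INSIDE

# Example: the user's forearm segment vs. a tolerance box around the target pose
target_box = (0.0, 0.0, 1.0, 1.0)
print(segment_inside_region((0.2, 0.3), (0.8, 0.9), target_box))  # True
print(segment_inside_region((0.2, 0.3), (1.4, 0.9), target_box))  # False
```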
{"title":"A Gesture Control System to Support Rehabilitation Exercises","authors":"Kleber A. Sousa, J. Neto, Edimo Sousa Silva, M. A. Rodrigues","doi":"10.1109/SVR.2016.37","DOIUrl":"https://doi.org/10.1109/SVR.2016.37","url":null,"abstract":"In this paper, we present a gestural control system to support rehabilitation exercises using the Microsoft Kinect device. More specifically, the graphical application consists of a static avatar (which represents the pose to be registered by a professional, that will be performed by the user) and a dynamic avatar (which corresponds to the user's pose in real time). The avatars are modeled from the skeletal segments of the user. For verifying the matching between the poses of the two avatars we have used the Cohen-Sutherland clipping algorithm, which quickly identifies whether or not a dynamic segment (straight line) is totally contained in a static segment (rectangular region). During the exercises a visual feedback in real-time and by segment is displayed in the system's interface, helping the user to correct the posture and, consequently, finalize the physical exercises successfully. Initial tests with participants were conducted, showing that the application is effective, simple to use, of low cost, and flexible, since automatically adjusts itself according to anatomical structures that contain segments of varying sizes.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134319079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sergio Henriques M. B. B. Antunes, C. Rodrigues, C. Werner
This work proposes a tool named VMAG (Visualização de Modelos de sistemas Assistido por Gestos, Visualization of System Models Assisted by Gestures) to support the visualization of system models by controlling it with gestures captured by a Kinect sensor. Inspired by the VisAr3D (Software Architecture Visualization in 3D) approach, the tool aims to support the teaching of System Modeling, using motion controls as an attraction for users and to promote immersion. A prototype of the tool was developed, incorporating some of its functions, and a study was conducted to validate its viability, with overall positive results.
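A hypothetical sketch of how a tracked hand gesture could be mapped to camera control in a model-visualization tool of this kind is shown below; the mapping, names, and constants are illustrative assumptions and are not taken from VMAG.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    yaw: float = 0.0      # degrees, orbit around the model
    pitch: float = 0.0    # degrees, tilt up/down
    distance: float = 5.0

def apply_hand_delta(camera, dx, dy, grabbing):
    """Map a tracked hand displacement (metres, sensor frame) to camera motion.

    Illustrative mapping: while a 'grab' gesture is held, horizontal motion
    orbits the model and vertical motion tilts it; otherwise the camera is idle.
    """
    if not grabbing:
        return camera
    camera.yaw = (camera.yaw + dx * 120.0) % 360.0              # 1 m sweep ~ 120 deg
    camera.pitch = max(-80.0, min(80.0, camera.pitch + dy * 90.0))
    return camera

cam = Camera()
print(apply_hand_delta(cam, dx=0.5, dy=-0.1, grabbing=True))
```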
{"title":"Supporting System Modeling Learning Using Gestures for Visualization Control as Method of Immersion","authors":"Sergio Henriques M. B. B. Antunes, C. Rodrigues, C. Werner","doi":"10.1109/SVR.2016.19","DOIUrl":"https://doi.org/10.1109/SVR.2016.19","url":null,"abstract":"This work proposes a tool named VMAG -- Visualização de Modelos de sistemas Assistido por Gestos (Visualization of system Models Assisted by Gestures) - for supporting the visualization of system models by controlling it with gestures captured by a Kinect sensor. Inspired by the VisAr3D (Software Architecture Visualization in 3D) approach, this tool aims to support the teaching of System Modeling, using motion controls as an attraction to users to promote immersion. A prototype of this tool was developed, incorporating some of its functions, and a study was made to validate its viability, returning overall positive results.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"134 1-3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116707612","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Juliano Franz, Aline Menin, L. Nedel
Desktop operating systems allow many applications to be used concurrently, but frequent switching between two or more applications distracts users and prevents them from staying focused on the main task. In this work we introduce an augmented mouse, which supports regular 2D movements and clicks as well as 3D gestures performed above it. While conventional keyboard and mouse operation is used for the main task, 3D gestures let the user control secondary tasks. As a proof of concept, we embedded a Leap Motion Controller inside a regular mouse. User tests were conducted, first to help select the supported gestures and then to evaluate the device's effectiveness and usability. Results showed that using the augmented mouse as a strategy to keep the user focused reduces task completion time.
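A hypothetical sketch of the routing idea follows (illustrative names, not the paper's implementation): conventional 2D mouse events are forwarded to the focused application, while 3D gestures recognized above the device are dispatched to secondary-task handlers without ever switching focus.

```python
from typing import Callable, Dict

def forward_to_focused_app(event):
    # stand-in for the OS delivering the event to the active window
    print("main task receives:", event)

class AugmentedMouseRouter:
    """Routes 2D mouse events to the main task and 3D gestures to side tasks."""

    def __init__(self):
        self._gesture_actions: Dict[str, Callable[[], None]] = {}

    def bind_gesture(self, name: str, action: Callable[[], None]):
        self._gesture_actions[name] = action

    def on_mouse_event(self, event):
        # ordinary clicks/moves: forwarded untouched, focus is unaffected
        forward_to_focused_app(event)

    def on_gesture(self, name: str):
        # 3D gestures: handled here, so the user never leaves the main task
        action = self._gesture_actions.get(name)
        if action:
            action()

router = AugmentedMouseRouter()
router.bind_gesture("swipe_right", lambda: print("secondary task: next track"))
router.on_mouse_event({"type": "click", "x": 100, "y": 200})
router.on_gesture("swipe_right")
```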
{"title":"Lossless Multitasking: Using 3D Gestures Embedded in Mouse Devices","authors":"Juliano Franz, Aline Menin, L. Nedel","doi":"10.1109/SVR.2016.27","DOIUrl":"https://doi.org/10.1109/SVR.2016.27","url":null,"abstract":"Desktop-based operating systems allow the use of many applications concurrently, but the frequent switching between two or more applications distracts the user, preventing him to keep focused in the main task. In this work we introduce an augmented mouse, which supports the regular 2D movements and clicks, as well as 3D gestures performed over it. While the keyboard and mouse conventional operation are used for the main task, with 3D gestures the user can control secondary tasks. As a proof of concept, we embedded a Leap Motion Controller device inside a regular mouse. User tests have been conducted firstly to help in the selection of the gestures supported, and then to evaluate the device effectiveness and usability. Results shown that the use of the augmented mouse as a strategy to keep the user focused reduces the task completion time.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134157038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rodrigo Marques Almeida da Silva, Pablo B. Gomes, B. Feijó
This paper presents an application of collaborative virtual reality visualization focused on virtual backlots. The proposed solution was designed for the requirements of film production and TV broadcasting. We developed a robust system capable of sharing huge models and managing interactions among participants, including those using mobile devices. To this end, we adopted two sharing methodologies: local processing and remote processing using video streaming. Moreover, interaction and collaborative editing tools were incorporated, as well as auditing mechanisms and event reporting. Finally, we tested the system with several user groups, evaluated the results, and outline future work.
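The choice between the two sharing methodologies can be illustrated with the hypothetical sketch below (the selection rule, names, and thresholds are assumptions, not the paper's logic): clients able to hold the shared model render it locally, while constrained clients such as mobile devices receive a server-rendered video stream.

```python
from dataclasses import dataclass

@dataclass
class Client:
    name: str
    gpu_memory_mb: int

def choose_sharing_mode(client, scene_size_mb):
    """Pick local rendering or remote video streaming for a session participant.

    Illustrative rule: render locally only when the client can comfortably hold
    the shared model in GPU memory; otherwise fall back to server-side rendering
    delivered as a video stream.
    """
    if client.gpu_memory_mb >= 2 * scene_size_mb:
        return "local"        # ship the (huge) model once, render on the client
    return "video_stream"     # server renders, client decodes a video stream

for c in [Client("workstation", 8192), Client("tablet", 512)]:
    print(c.name, "->", choose_sharing_mode(c, scene_size_mb=1500))
```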
{"title":"Distributed Virtual Reality for Collaborative Backlot Visualization","authors":"Rodrigo Marques Almeida da Silva, Pablo B. Gomes, B. Feijó","doi":"10.1109/SVR.2016.44","DOIUrl":"https://doi.org/10.1109/SVR.2016.44","url":null,"abstract":"This paper presents an application of collaborative virtual reality visualization focused on virtual backlots. The proposed solution was developed and designed for the film production and TV broadcast segment requirements. Thus, we develop a very robust system capable of sharing huge models and managing interactions among participants, even those who use mobile devices. For such, we adapt two sharing methodologies, a local processing and a remote processing using video streaming. Moreover, interaction and collaborative editing tools have been incorporated, as well as auditing mechanisms and events report. Finally, we tested the system with several user groups and evaluate the results and future work.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122170565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Bernardo F. V. Pedras, A. Raposo, I. Santos
Offshore Engineering visualization applications are, in most cases, very complex and must display large amounts of data coming from computationally intensive numerical simulations. To help analyze and better visualize the results, 3D visualization can be used in conjunction with a VR environment. The main idea for this work arose from two demands that engineering applications have when running on VR setups: first, a demand for visualization support in the form of better navigation and better data analysis capabilities; second, a demand for collaboration, due to the difficulty of coordinating a team when one member is using VR. To meet these demands, we developed a Service-Oriented Architecture (SOA) capable of adding external communications to any application. Using the added communications, we built an external collaboration layer. We discuss the architecture of our solution and how it could be implemented for any application. Furthermore, we study the impact of our solution when running an Offshore Engineering application on VR setups with the support of mobile devices. Such devices can be used to help navigate the virtual world or as a second screen, helping to visualize and manipulate large sets of data in the form of tables or graphs. As our test application, we used Environ, a VR application for the visualization of 3D models and simulations.
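A minimal, hypothetical sketch of the kind of service endpoint such a collaboration layer could expose is shown below (endpoint names and payloads are assumptions, not Environ's or EnvironRC's actual API): the desktop VR application publishes its camera state over HTTP, and mobile second-screen clients poll it to stay in sync.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Shared state published by the VR application (camera pose, selection, ...)
STATE = {"camera": {"pos": [0, 0, 10], "look_at": [0, 0, 0]}, "selection": None}

class CollaborationHandler(BaseHTTPRequestHandler):
    """Tiny service layer: the desktop app POSTs state, mobile clients GET it."""

    def do_GET(self):
        if self.path == "/state":
            body = json.dumps(STATE).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        if self.path == "/state":
            length = int(self.headers.get("Content-Length", 0))
            STATE.update(json.loads(self.rfile.read(length)))
            self.send_response(204)
            self.end_headers()
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Mobile second-screen clients would poll GET /state to mirror the VR view.
    HTTPServer(("0.0.0.0", 8080), CollaborationHandler).serve_forever()
```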
{"title":"EnvironRC: Integrating Mobile Communication and Collaboration to Offshore Engineering Virtual Reality Applications","authors":"Bernardo F. V. Pedras, A. Raposo, I. Santos","doi":"10.1109/SVR.2016.17","DOIUrl":"https://doi.org/10.1109/SVR.2016.17","url":null,"abstract":"Offshore Engineering visualization applications are, in most cases, very complex and should display a lot of data coming from very computational intensive numerical simulations. To help analyze and better visualize the results, 3D visualization can be used in conjunction with a VR environment. The main idea for this work began as we realized two different demands that engineering applications had when running on VR setups: firstly, a demand for visualization support in the form of better navigation and better data analysis capabilities. Secondly, a demand for collaboration, due to the difficulties of coordinating a team with one member using VR. To meet these demands, we developed a Service Oriented Architecture (SOA) capable of adding external communications to any application. Using the added communications, we built an external collaboration layer. We study the architecture of our solution and how it could be implemented for any application. Furthermore, we study the impact of our solution when running an Offshore Engineering application on VR setups with the support of mobile devices. Such devices can be used to help navigate the virtual world or be used as a second screen, helping visualize and manipulate large sets of data in the form of tables or graphs. As our test application, we used Environ, which is a VR application for visualization of 3D models and simulations.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131450148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Thiago V. V. Batista, L. Machado, A. Valença
Computer systems such as virtual environments and serious games are being used as tools to enhance the process of user rehabilitation. These systems can help motivate the user and provide means to assess their performance during an exercise session. To do so, they incorporate motion tracking and gesture recognition devices, such as natural interaction devices like the Kinect and the Nintendo Wii. These devices, originally developed for the games market, have enabled low-cost and minimally invasive rehabilitation systems, allowing treatment to be taken to the patient's residence. With the advent of natural interaction based on electromyography, devices that capture electromyographic signals can also be used to build such systems. The aim of this work is to show how electromyographic signals can be used to capture user gestures and be incorporated into home-based rehabilitation systems, by adopting a low-cost device to capture these gestures. The process of creating a serious game to demonstrate some of these concepts is also presented.
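A common way of turning a raw surface EMG signal into a game input is sketched below as an illustration (window sizes and thresholds are assumptions, not the paper's values): the zero-mean signal is smoothed into an RMS envelope over a sliding window, and a grip event is reported when the envelope rises above a calibrated threshold.

```python
import math

def rms_envelope(samples, window=50):
    """Sliding-window RMS envelope of a raw (zero-mean) sEMG signal."""
    env = []
    for i in range(len(samples)):
        w = samples[max(0, i - window + 1): i + 1]
        env.append(math.sqrt(sum(x * x for x in w) / len(w)))
    return env

def detect_grip(envelope, threshold):
    """Return sample indices where muscle activation rises above the threshold."""
    events, active = [], False
    for i, v in enumerate(envelope):
        if v >= threshold and not active:
            events.append(i)   # rising edge: the player closed the hand
            active = True
        elif v < threshold:
            active = False
    return events

# Synthetic example: quiet baseline followed by a burst of muscle activity
signal = [0.02 * ((-1) ** i) for i in range(100)] + \
         [0.60 * ((-1) ** i) for i in range(100)]
print(detect_grip(rms_envelope(signal), threshold=0.3))  # one event shortly after sample 100
```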
{"title":"Surface Electromyography for Game-Based Hand Motor Rehabilitation","authors":"Thiago V. V. Batista, L. Machado, A. Valença","doi":"10.1109/SVR.2016.32","DOIUrl":"https://doi.org/10.1109/SVR.2016.32","url":null,"abstract":"Computer systems such as virtual environments and serious games are being used as a tool to enhance the process of user rehabilitation. These systems can help motivate and provide means to assess the user's performance undertaking an exercise session. To do that, these systems incorporate motion tracking and gesture recognition devices, such as natural interaction devices like Kinect and Nintendo Wii. These devices, originally developed for the games market, allowed the development of low cost and minimally invasive rehabilitation systems, allowing the treatment to be taken to the patient's residence. With the advent of natural interaction based on electromyography, devices that use electromyographic signals can also be used to construct these systems. The aim of this work is to show how electromyographic signals could be used as a tool to capture user gestures and incorporated into home-based rehabilitation systems by adopting a low-cost device to capture these gestures. The process of creation of a serious game to show some of these concepts is also present.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124334930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alan Brito dos Santos, Juliel Bronzati Dourado, A. Bezerra
Given the trend toward technologies that allow greater interaction and user immersion with computer systems and mobile applications, Augmented Reality technology is increasingly present in popular systems across various application domains. To assist in the development of these kinds of systems, various graphics libraries with AR features have been created. This work presents a discussion and an analytical comparison, from several perspectives, of two such libraries: Qualcomm Vuforia and ARToolKit.
{"title":"ARToolkit and Qualcomm Vuforia: An Analytical Collation","authors":"Alan Brito dos Santos, Juliel Bronzati Dourado, A. Bezerra","doi":"10.1109/SVR.2016.46","DOIUrl":"https://doi.org/10.1109/SVR.2016.46","url":null,"abstract":"Given the trend toward the use of technologies that allow greater interaction and user immersion with computer systems and mobile applications, augmented reality technology is increasingly present in popular systems in various application domains. To assist in the development of these types of systems using ar technology, various graphic libraries have been developed. This work brings a discussion and an analytical collation in some perspectives of two graphics libraries with ra features: qualcomm vuforia and artoolkit.","PeriodicalId":444488,"journal":{"name":"2016 XVIII Symposium on Virtual and Augmented Reality (SVR)","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2016-06-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115602799","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}