Haptic interaction with virtual objects is a major concern in the virtual reality field. Many efficient physically-based models enable the simulation of a specific type of medium, e.g. fluid volumes, deformable bodies or rigid bodies. However, combining these often heterogeneous algorithms in the same virtual world in order to simulate and interact with different types of media can be a complex task. In this paper, we propose the first haptic rendering technique for the simulation of, and interaction with, multistate media, namely fluids, deformable bodies and rigid bodies, in real time and with 6DoF haptic feedback. By relying on the Smoothed-Particle Hydrodynamics (SPH) physical model for all three types of media, our method avoids the complexity of dealing with different algorithms and their coupling. We achieve high update rates while simulating a physically-based virtual world governed by fluid and elasticity theories, and show how to render interaction forces and torques through a 6DoF haptic device.
{"title":"“Tap, squeeze and stir” the virtual world: Touching the different states of matter through 6DoF haptic interaction","authors":"G. Cirio, M. Marchal, A. Gentil, A. Lécuyer","doi":"10.1109/VR.2011.5759449","DOIUrl":"https://doi.org/10.1109/VR.2011.5759449","url":null,"abstract":"Haptic interaction with virtual objects is a major concern in the virtual reality field. There are many physically-based efficient models that enable the simulation of a specific type of media, e.g. fluid volumes, deformable and rigid bodies. However, combining these often heterogeneous algorithms in the same virtual world in order to simulate and interact with different types of media can be a complex task. In this paper, we propose the first haptic rendering technique for the simulation and the interaction with multistate media, namely fluids, deformable bodies and rigid bodies, in real-time and with 6DoF haptic feedback. Based on the Smoothed-Particle Hydrodynamics (SPH) physical model for all three types of media, our method avoids the complexity of dealing with different algorithms and their coupling. We achieve high update rates while simulating a physically-based virtual world governed by fluid and elasticity theories, and show how to render interaction forces and torques through a 6DoF haptic device.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114414828","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a pseudo-gustatory display based on the cross-modal interactions that underlie the perception of flavor. Although several studies on visual, auditory, haptic, and olfactory displays have been conducted, gustatory displays have seldom been studied. This scarcity has been attributed to the fact that synthesizing arbitrary tastes from basic tastants is difficult. On the other hand, it has been noted that the perception of taste is influenced by visual cues, auditory cues, smell, the trigeminal system, and touch. In our research, we aim to exploit these cross-modal influences to realize a "pseudo-gustatory" system that enables the user to experience various tastes without changing the chemical composition of the food. Based on this concept, we built a "Meta Cookie" system that changes the perceived taste of a cookie by overlaying visual and olfactory information onto a real cookie on which an AR marker pattern is printed. We performed an experiment investigating how people experience the flavor of a plain cookie when using our system. The results suggest that our system can change the perceived taste through the cross-modal interaction of vision, olfaction and gustation.
{"title":"Pseudo-gustatory display system based on cross-modal integration of vision, olfaction and gustation","authors":"Takuji Narumi, Takashi Kajinami, Shinya Nishizaka, T. Tanikawa, M. Hirose","doi":"10.1109/VR.2011.5759450","DOIUrl":"https://doi.org/10.1109/VR.2011.5759450","url":null,"abstract":"In this paper, we propose a pseudo-gustatory display based on the cross-modal interactions that underlie the perception of flavor. Although several studies on visual, auditory, haptic, and olfactory displays have been conducted, gustatory displays have seldom been studied. This scarcity has been attributed to the fact that synthesizing arbitrary taste from basic tastants is difficult. On the other hand, it has been noted that the perception of taste is influenced by visual cues, auditory cues, smell, the trigeminal system, and touch. In our research, we aim at utilizing this influence between modalities for realizing a \"pseudo-gustatory\" system that enables the user to experience various tastes without changing the chemical composition of foods. Based on this concept, we built a \"Meta Cookie\" system to change the perceived taste of a cookie by overlaying visual and olfactory information onto a real cookie which an AR marker pattern. We performed an experiment that investigates how people experience the flavor of a plain cookie by using our system. The result suggests that our system can change the perceived taste based on the effect of the cross-modal interaction of vision, olfaction and gustation.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116367576","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This demonstration presents an adaptive visual marker optimised to improve tracking performance in Spatial Augmented Reality (SAR) environments. The adaptive marker uses a color light sensor to capture the color of the light projected by the SAR system. This color information is used to select the optimal tracking color, which is displayed on a diffused red, green, blue light-emitting diode marker attached to the user's finger. We chose the visible light spectrum for the marker because, in addition to supporting the tracking system, it can be leveraged to present visual feedback for user interface interactions. Our initial results show a performance improvement compared to a fixed-color passive marker.
{"title":"An adaptive color marker for Spatial Augmented Reality environments and visual feedback","authors":"Ross T. Smith, M. Marner, B. Thomas","doi":"10.1109/VR.2011.5759502","DOIUrl":"https://doi.org/10.1109/VR.2011.5759502","url":null,"abstract":"This demonstration presents an adaptive visual marker optimised to improve tracking performance in Spatial Augmented Reality environments. The adaptive marker uses a color light sensor to capture the projected light color from a SAR system. The color information is used to select the optimal tracking color that is displayed on a diffused Red, Green, Blue Light Emitting Diode marker attached to a user's finger. We have selected to use the visible light spectrum for the marker since it can be leveraged to present visual feedback to support user interface interactions in addition to the tracking system operation. Our initial results have shown a performance improvement compared to a fixed color passive marker.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132112459","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Recently, research on olfactory displays for incorporating the sense of smell into virtual reality has gradually expanded. To present odors with a vivid sense of smell, the ability to generate a variety of odors quickly is indispensable. In this paper, we developed a new type of olfactory display, composed of a miniaturized electroosmotic pump and a surface acoustic wave (SAW) device, for presenting low-volatile scents that are conventionally difficult to generate at an acceptable speed. We used an odor sensing system to evaluate the capability of the proposed display to present low-volatile scents and to blend smells.
{"title":"Olfactory display using a miniaturized pump and a SAW atomizer for presenting low-volatile scents","authors":"Y. Ariyakul, T. Nakamoto","doi":"10.1109/VR.2011.5759464","DOIUrl":"https://doi.org/10.1109/VR.2011.5759464","url":null,"abstract":"Recently, the research on olfactory display for incorporating the sense of smell into virtual reality has gradually expanded. To present odors with a vivid sense of smell, an ability to generate a variety of odors with efficient speed is indispensible. In this paper, we developed a new type of olfactory display, composed of a miniaturized electroosmotic pump and a surface acoustic wave device for presenting the low-volatile scents that are conventionally difficult to be generated with acceptable speed. We used an odor sensing system to evaluate the capability to present the low volatile scent and the ability to blend smells of our proposed new type olfactory display.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"5 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131748898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using virtual reality to study sports performance allows the systematic investigation of human sensorimotor capabilities and, at the same time, promotes the design and comparison of realistic immersive platforms. In this paper, we propose a virtual-reality-based experimental design for studying the human ability to intercept spinning balls deflected by the Magnus effect. Compared to previous approaches, we focused on a tight perception-action coupling. Experienced and novice subjects immersed in a 3D soccer stadium were asked to head realistically simulated balls, free kicked with and without sidespin. Consistent with former studies, qualitative results show that interception performance systematically relates to both the ball's sidespin direction and its arrival position for all subjects, whether experienced or not. However, contrary to those former studies, where subjects responded only pseudo-verbally, the experienced and novice groups differ in quantitative performance, supporting the idea that expertise emerges when perception is coupled to action. Further analyses will be needed to extract the different information-movement relationships governing the behaviors of experienced subjects and novices.
{"title":"Performances of experienced and novice sportball players in heading virtual spinning soccer balls","authors":"Thierry Hoinville, Abdeldjallil Naceri, J. Ortiz, Emmanuel Bernier, R. Chellali","doi":"10.1109/VR.2011.5759441","DOIUrl":"https://doi.org/10.1109/VR.2011.5759441","url":null,"abstract":"Using virtual reality for understanding sports performance allows for systematic investigation of human sensorimotor capabilities and meanwhile promotes the design and comparison of realistic immersive platforms. In this paper, we propose a virtual reality-based experimental design for studying the human ability to intercept spinning balls deflected by the Magnus effect. Compared to the previous approaches, we focused on a tight perception-action coupling. Experienced and novice subjects immersed in a 3D soccer stadium were asked to head realistically simulated balls, free kicked with and without sidespin. Consistent with the former studies, qualitative results show that the interception performance systematically relates to both the ball sidespin direction and arrival position for all the subjects, either experienced or not. However, contrary to those former studies where subjects answered only pseudo-verbally, experienced and novice groups differentiate in quantitative performances, supporting that expertise likely appears when perception is coupled to action. Further analyses will be needed to extract the different information-movement relationships governing the behaviors of experienced subjects and novices.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"136 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117318573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This interactive application allows visitors to play with garments in three dimensions, transforming them into creative, customizable and experimental objects. Based on touch screen technology and through a simple and attractive interface, visitors can dress and customize a 3-dimensional virtual fashion model. The model poses to show off the physically simulated garments in real time.
{"title":"An interactive virtual try on","authors":"N. Magnenat-Thalmann, Pascal Volino, Bart Kevelham, M. Kasap, Qui Tran, M. Arévalo, Ghana Priya, Nedjma Cadi-Yazli","doi":"10.1109/VR.2011.5759499","DOIUrl":"https://doi.org/10.1109/VR.2011.5759499","url":null,"abstract":"This interactive application will allow visitors to play with garments in three dimensions, transforming them into creative, customizable and experimental objects. Based on touch screen technology and through a simple and attractive interface, visitors will be able to dress and customize a 3 dimensional virtual fashion model. The model will pose for you to show of the physically simulated garments in real time.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132875712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Models of interaction tasks are quantitative descriptions of the relationships between human temporal performance and the spatial characteristics of interactive tasks. Examples include Fitts' law for the pointing task and Accot and Zhai's steering law for the path steering task. Such models can be used as guidelines to design efficient user interfaces and to quantitatively evaluate interaction techniques and input devices. In this paper, we introduce a 3D object pursuit interaction task, in which users must continuously track a moving target in a virtual environment. The movement is broken into a tracking phase and a correction phase, and for each phase we propose a model verified by two experiments. The experimental results show that the time for the tracking phase is fixed once a task has been established, while the time for the correction phase varies with the task's spatial characteristics and can be modeled as a function of path length, target width and target velocity. The proposed model can be used to quantitatively evaluate the efficiency of user interfaces that involve interaction with moving objects.
{"title":"Modeling object pursuit for 3D interactive tasks in virtual reality","authors":"Lei Liu, R. V. Liere","doi":"10.1109/VR.2011.5759416","DOIUrl":"https://doi.org/10.1109/VR.2011.5759416","url":null,"abstract":"Models of interaction tasks are quantitative descriptions of relationships between human temporal performance and the spatial characteristics of the interactive tasks. Examples include Fitts' law for modeling the pointing task and Accot and Zhai's steering law for the path steering task, etc. Models can be used as guidelines to design efficient user interfaces and quantitatively evaluate interaction techniques and input devices. In this paper, we introduce a 3D object pursuit interaction task, in which users are required to continuously track a moving target in a virtual environment. The entire movement of the task is broken into a tracking phase and a correction phase. For each phase, we propose a model that has been verified by two experiments. As the experimental results show, the time for the tracking phase is fixed once a task has been established, while the time for the correction phase usually varies according to some characteristics of the task. It can be modeled as a function of path length, target width and the velocity with which the target moves. The proposed model can be used to quantitatively evaluate the efficiency of user interfaces that involve the interaction with moving objects.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114687695","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study, we propose a divided planar-object detection method for augmented reality (AR) applications. There are two main types of camera-registration methods for AR applications: marker-based methods and natural-feature-based methods. The latter can be further classified into visual SLAM and object detection methods. Among object detection methods, particularly for planar objects such as paper, approaches for dealing with bending, folding, and occlusion have been proposed. However, the division of objects has not been studied. Once an object is divided, a conventional object detection method cannot identify each of the pieces, because only the feature points of a single piece are recognized as the target object and the remaining feature points are regarded as outliers. The proposed system prepares a database of the target object's natural features and applies progressive sample consensus (PROSAC), a robust estimation method, in an iterative homography calculation to achieve multiple planar-object detection. Moreover, the proposed method can detect the shapes of the pieces by simultaneously using an occlusion detection method. We demonstrate that our method makes it possible to interact with an arbitrarily divided planar object in real time, enabling several AR applications.
{"title":"Detection of divided planar object for augmented reality applications","authors":"Shinya Nishizaka, Takuji Narumi, T. Tanikawa, M. Hirose","doi":"10.1109/VR.2011.5759483","DOIUrl":"https://doi.org/10.1109/VR.2011.5759483","url":null,"abstract":"In this research study, we propose a divided planar-object detection method for augmented reality(AR) applications. There are mainly two types of camera-registration methods for AR applications: marker-based methods, and natural-feature-based methods. In addition, the latter methods are classified into visual SLAM and object detection methods. With respect to object detection methods, particularly for planar objects such as paper, methods for dealing with bending, folding, and occlusion are proposed. However, the division of objects has not been studied. Once an object is divided, a conventional object detection method cannot identify each of the pieces because the feature points of only a single piece are recognized as the target object, and the other feature points are regarded as outliers. The proposed system prepares a database of the target object's natural features, and applies progressive sample consensus(PROSAC), which is a robust estimation method, for iterative homography calculation to achieve the multiple planar-object detection. Moreover, the proposed method can detect shapes of pieces by simultaneously using an occlusion detection method. We demonstrate that it is possible to interact with an arbitrarily divided planar object in real time by our method to implement some AR applications.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116466074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of Touch Screen Interaction (TSI) for 3-D interaction entails both the addition of new haptic cues and the separation of the manipulation of the Degrees of Freedom (DoF) of the task: a 3-DoF task must be transformed into a 2-D+1-D task to be completed using a touch screen. In this paper, we investigate the impact of these two factors in the context of a 3-D positioning task. Our goal is to identify their respective influence on subjective preferences and performance measurements. To that end, we conducted an experimental comparison of five positioning techniques, isolating the influence of each factor. The results suggest that the addition of haptic cues does not influence user precision. However, the decomposition of the task has a strong influence on accuracy: separating the manipulation of the depth dimension leads to increased precision, while isolating the other dimensions does not influence the results. To explain this result, we carried out a behavioural analysis of the data. This analysis suggests that the differences in performance may be linked to the perceptual structure of the techniques: a technique isolating the manipulation of depth seems to have a more suitable perceptual structure than a technique separating the height, even though both dimensions are equally involved in completing the task.
{"title":"An experimental analysis of the impact of Touch Screen Interaction techniques for 3-D positioning tasks","authors":"Manuel Veit, Antonio Capobianco, D. Bechmann","doi":"10.1109/VR.2011.5759440","DOIUrl":"https://doi.org/10.1109/VR.2011.5759440","url":null,"abstract":"The use of Touch Screen Interaction (TSI) for 3-D interaction entails both the addition of new haptic cues and the separation of the manipulation of the Degrees of Freedom (DoF) of the task: a 3 DoF task must be transformed into a 2-D+1-D task to be completed using a touch screen. In this paper, we investigate the impact of these two factors in the context of a 3-D positioning task. Our goal is to identify their respective influence on subjective preferences and performance measurements. To that purpose, we conducted an experimental comparison of five positioning techniques, isolating the influence of each of these two factors. The results we obtained suggest that the addition of haptic cues does not influence the user precision. However, the decomposition of the task has a strong influence on accuracy. More precisely, separating the manipulation of the depth dimension leads to an increased precision while isolating other dimensions does not influence the results. To explain this result, we realised a behavioural analysis of the data. This study suggests that the differences in the performance may be linked to the perceptual structure of the techniques. A technique isolating the manipulation of the depth seems to have a more adapted perceptual structure than a technique separating the height, even if those two dimensions are equally involved in the realisation of the task.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"532 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123451996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This work introduces techniques to facilitate large-scale Augmented Reality (AR) experiences in unprepared outdoor environments. We develop a shape-based object detection framework that works with limited texture and can robustly handle extreme illumination and occlusion issues. The contribution of this work is a purely geometric approach for detecting marker-like objects under difficult and realistic outdoor conditions. We demonstrate these techniques for mobile AR experiences by detecting and tracking star-shaped pentagrams embedded in the Hollywood Walk of Fame at 30 Hz on a Nokia N900 phone.
{"title":"Mobile Augmented Reality at the Hollywood Walk of Fame","authors":"Thommen Korah, Jason Wither, Yun-Ta Tsai, Ronald T. Azuma","doi":"10.1109/VR.2011.5759460","DOIUrl":"https://doi.org/10.1109/VR.2011.5759460","url":null,"abstract":"This work introduces techniques to facilitate large-scale Augmented Reality (AR) experiences in unprepared outdoor environments. We develop a shape-based object detection framework that works with limited texture and can robustly handle extreme illumination and occlusion issues. The contribution of this work is a purely geometric approach for detecting marker-like objects under difficult and realistic outdoor conditions. We demonstrate these techniques for mobile AR experiences by detecting and tracking star-shaped pentagrams embedded in the Hollywood Walk of Fame at 30Hz on a Nokia N900 phone.","PeriodicalId":346701,"journal":{"name":"2011 IEEE Virtual Reality Conference","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124082232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}