SOM-based hand gesture recognition for virtual interactions
Shuai Jin, Yi Li, Guangming Lu, Jian-xun Luo, Weidong Chen, Xiaoxiang Zheng
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759659
Nowadays, hand gestures can serve as a more natural and convenient means of human-computer interaction. The direct interface of hand gestures provides a new way of communicating with the virtual environment. In this paper, we propose a new hand gesture recognition method using a self-organizing map (SOM) with datagloves. The SOM is a type of machine learning algorithm: it takes the raw data sampled from the datagloves as input vectors and builds a mapping between these uncalibrated data and gesture commands. The results show the average recognition rate and time efficiency when using a SOM for dataglove-based hand gesture recognition, and a series of tasks in a virtual house illustrates the performance of our interaction method based on hand gesture recognition.
{"title":"SOM-based hand gesture recognition for virtual interactions","authors":"Shuai Jin, Yi Li, Guangming Lu, Jian-xun Luo, Weidong Chen, Xiaoxiang Zheng","doi":"10.1109/ISVRI.2011.5759659","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759659","url":null,"abstract":"In nowadays, hand gestures can be used as a more natural and convenient way for human computer interaction. The direct interface of hand gestures provides us a new way for communicating with the virtual environment. In this paper, we propose a new hand gesture recognition method using self-organizing map (SOM) with datagloves. The SOM method is a type of machine learning algorithm. It deals with the raw data sampled from datagloves as input vectors, and builds a mapping between these uncalibrated data and gesture commands. The results show the average recognition rate and time efficiency when using SOM for dataglove-based hand gesture recognition. A series of tasks in virtual house illustrate the performance of our interaction method based on hand gesture recognition.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130550214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR-Ghost Hunter: An augmented reality gun application
Dongdong Weng, Xin Liu, Yongtian Wang, Yue Liu
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759617
This paper presents an augmented reality gun application named AR-Ghost Hunter. AR-Ghost Hunter extends the traditional first-person game, adopting an innovative infrared marker system and a portable computer to form a complete mobile AR system. In this system, players can fight virtual ghosts with special gun-like devices in a real environment. Basic issues of the system, such as infrared marker identification, pose estimation, and the user's devices, are discussed.
{"title":"AR-Ghost Hunter: An augmented reality gun application","authors":"Dongdong Weng, Xin Liu, Yongtian Wang, Yue Liu","doi":"10.1109/ISVRI.2011.5759617","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759617","url":null,"abstract":"This paper presents an augmented reality gun application named AR-Ghost Hunter. AR-Ghost Hunter is an extension of the traditional first person game which adopts an innovative infrared marker system and portable computer to form a complete mobile AR system. In this system, players are able to fight with virtual ghost through special gun like devices in real environment. The basic issue of the system such as infrared marker indentify, pose estimation, and user's devices are discussed.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131733904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds
T. Kayahara, Hiroki Abe
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759644
The crowd sound effect (“Gaya”, a Japanese technical term) plays an important role in creating and perceiving the atmosphere of a crowded movie scene, but the technique for authoring “Gaya” sound has not been scientifically described so far.
{"title":"Synthesis of footstep sounds of crowd from single step sound based on cognitive property of footstep sounds","authors":"T. Kayahara, Hiroki Abe","doi":"10.1109/ISVRI.2011.5759644","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759644","url":null,"abstract":"The crowd sound effect (“Gaya” in Japanese technical word) plays important role to create and perceive the atmosphere of the crowded scene of a movie, but the technique for authoring “Gaya” sound has not been scientifically described so far.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133454665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic adaptation of broad phase collision detection algorithms
Quentin Avril, V. Gouranton, B. Arnaldi
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759599
In this paper we present a new technique for dynamically adapting the first step (broad phase) of the collision detection process to the hardware architecture during simulation. Our approach handles the unpredictable evolution of the simulation scenario (addition of complex objects, deletion, splitting into several objects, …). Our dynamic adaptation technique runs on sequential CPU, multi-core, single-GPU, and multi-GPU architectures. We propose to use off-line simulations to determine fields of optimal performance for broad-phase algorithms and to use them during in-line simulation. This is achieved by a feature analysis of algorithmic performance on different architectures. In this way we ensure real-time adaptation of the broad-phase algorithm during the simulation, switching it to a more appropriate candidate. We also present a study of how graphics hardware parameters (number of cores, bandwidth, …) influence algorithmic performance; the goal of this analysis is to determine whether a link can be found between variations in algorithm performance and hardware parameters. We test and compare our model on 1-, 2-, 4-, and 8-core architectures, and also on one Quadro FX 3600M, two Quadro FX 4600s, and four Quadro FX 5800s. Our results show that using this technique during the collision detection process provides better performance throughout the simulation and handles unpredictable scenario evolution in large-scale virtual environments.
{"title":"Dynamic adaptation of broad phase collision detection algorithms","authors":"Quentin Avril, V. Gouranton, B. Arnaldi","doi":"10.1109/ISVRI.2011.5759599","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759599","url":null,"abstract":"In this paper we present a new technique to dynamically adapt the first step (broad phase) of the collision detection process on hardware architecture during simulation. Our approach enables to face the unpredictable evolution of the simulation scenario (this includes addition of complex objects, deletion, split into several objects, …). Our technique of dynamic adaptation is performed on sequential CPU, multi-core, single GPU and multi-GPU architectures. We propose to use off-line simulations to determine fields of optimal performance for broad phase algorithms and use them during in-line simulation. This is achieved by a features analysis of algorithmic performances on different architectures. In this way we ensure the real time adaptation of the broad-phase algorithm during the simulation, switching it to a more appropriate candidate. We also present a study on how graphics hardware parameters (number of cores, bandwidth, …) can influence algorithmic performance. The goal of this analysis is to know if it is possible to find a link between variations of algorithms performances and hardware parameters. We test and compare our model on 1, 2, 4 and 8 cores architectures and also on 1 Quadro FX 3600M, 2 Quadro FX 4600 and 4 Quadro FX 5800. Our results show that using this technique during the collision detection process provides better performance throughout the simulation and enables to face unpredictable scenarios evolution in large-scale virtual environments.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116730552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of hand feedback fidelity on near space pointing performance and user acceptance
A. Pusch, O. Martin, S. Coquillart
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759609
In this paper, we report on an experiment conducted to test the effects of different hand representations on near-space pointing performance and user preference. Subjects were presented with varying levels of hand realism, including real hand video, a high- and a low-detail 3D hand model, and an ordinary 3D pointer arrow. Behavioural data revealed that an abstract hand substitute like a 3D pointer arrow leads to significantly larger position estimation errors, in terms of lateral target overshooting, when touching virtual surfaces with only visual hand movement constraints. Further, questionnaire results show that a higher-fidelity hand is preferred over lower-fidelity representations for different aspects of the task, but we cannot conclude that real-time video feedback of one's own hand is rated better than a detailed static 3D hand model. Overall, these results, which largely confirm previous research, suggest that, although higher-fidelity feedback of the hand is desirable from a user acceptance point of view, motor performance seems unaffected by varying degrees of limb realism, as long as a hand-like shape is provided.
{"title":"Effects of hand feedback fidelity on near space pointing performance and user acceptance","authors":"A. Pusch, O. Martin, S. Coquillart","doi":"10.1109/ISVRI.2011.5759609","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759609","url":null,"abstract":"In this paper, we report on an experiment conducted to test the effects of different hand representations on near space pointing performance and user preference. Subjects were presented with varying levels of hand realism, including real hand video, a high and a low level 3D hand model and an ordinary 3D pointer arrow. Behavioural data revealed that an abstract hand substitute like a 3D pointer arrow leads to significantly larger position estimation errors in terms of lateral target overshooting when touching virtual surfaces with only visual hand movement constraints. Further, questionnaire results show that a higher fidelity hand is preferred over lower fidelity representations for different aspects of the task. But we cannot conclude that realtime video feedback of the own hand is better rated than a high level static 3D hand model. Overall, these results, which largely confirm previous research, suggest that, although a higher fidelity feedback of the hand is desirable from an user acceptance point of view, motor performance seems not to be affected by varying degrees of limb realism - as long as a hand-like shape is provided.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"329 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117069261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
MRStudio: A mixed reality display system for aircraft cockpit
Huagen Wan, Song Zou, Zilong Dong, Hai Lin, H. Bao
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759615
Mixed reality techniques are boosting aviation progress. In this paper, we present MRStudio, a mixed reality display system for the aircraft cockpit. The system architecture is given, with special attention paid to technical issues such as three-dimensional map construction for the aircraft cockpit, computer-vision-based 6-DOF head tracking, virtual cockpit panel construction and registration, and mixed reality display for the cockpit using a flexible client-server architecture. A testing scenario on a full-scale mockup of COMAC's ARJ21 cockpit is described.
{"title":"MRStudio: A mixed reality display system for aircraft cockpit","authors":"Huagen Wan, Song Zou, Zilong Dong, Hai Lin, H. Bao","doi":"10.1109/ISVRI.2011.5759615","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759615","url":null,"abstract":"Mixed reality techniques are boosting for aviation progress. In this paper, we present MRStudio, a mixed reality display system for aircraft cockpit. The system architecture is given, with special attention paid upon such technical issues as three-dimensional map construction for aircraft cockpit, computer vision based 6-DOF head tracking, virtual aircraft cockpit panel construction and registration, and mixed reality display for aircraft cockpit using a flexible client-server architecture. A testing scenario on a full scale mockup of the COMAC's ARJ21 cockpit is described.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"50 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123190101","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A vibro-tactile system for image contour display
Juan Wu, Zhenzhong Song, Wei-Zun Wu, Aiguo Song, D. Constantinescu
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759619
This paper presents the design and testing of an image contour display system with a vibrotactile array. The tactile image display system is attached to the user's back. It produces a non-visual image and permits subjects to determine the position, size, and shape of visible objects through vibration stimuli. The system comprises three parts: 1) a USB camera; 2) 48 (6×8) vibrating motors; 3) an ARM microcontroller system. An image is captured with the camera, and its 2D contour is extracted and transformed into vibrotactile stimuli using a “contour following” (time-spatial dynamic coding) pattern. With this system, subjects could identify the shape of an object without special training, while relatively few vibrotactile actuators are needed. Preliminary experiments were carried out, and the results demonstrated that the prototype is a satisfactory and efficient seeing aid and environment-perception tool for the visually impaired.
{"title":"A vibro-tactile system for image contour display","authors":"Juan Wu, Zhenzhong Song, Wei-Zun Wu, Aiguo Song, D. Constantinescu","doi":"10.1109/ISVRI.2011.5759619","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759619","url":null,"abstract":"This paper presents the design and testing of an image contour display system with vibrotactile array. The tactile image display system is attached on the user's back. It produces non-visual image and permits subjects to determine the position, size, shape of visible objects through vibration stimulus. The system comprises three parts: 1) a USB camera; 2) 48 (6×8) vibrating motors; 3) ARM micro-controlled system. Image is captured with the camera and the 2D contour is extracted and transformed into vibrotactile stimulus with a “contour following” (time-spatial dynamic coding) pattern. With this system subjects could identify the shape of object without special training; meanwhile fewer vibrotactile actuators are adopted. Preliminary experiments were carried out and the results demonstrated that the prototype was satisfactory and efficient for the visually impaired in seeing aid and environment perception.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123491024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Roommates in multiple shared spaces
A. Sherstyuk, M. Gavrilova
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759607
Augmented Reality applications have already become a part of everyday life, bringing virtual 3D objects into real-life scenes. In this paper, we introduce “Virtual Roommates”, a system that employs AR techniques to share people's presence, projected from remote locations. Virtual Roommates is a feature-based mapping between loosely linked spaces. It allows multiple physical and virtual scenes to be overlaid and populated with physical or virtual characters. As the name implies, the Virtual Roommates concept provides continuous ambient presence for multiple disparate groups, similar to people sharing living quarters, but without the boundaries of real space.
{"title":"Virtual Roommates in multiple shared spaces","authors":"A. Sherstyuk, M. Gavrilova","doi":"10.1109/ISVRI.2011.5759607","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759607","url":null,"abstract":"Augmented Reality applications have already become a part of everyday life, bringing virtual 3D objects into real life scenes. In this paper, we introduce “Virtual Roommates”, a system that employs AR techniques to share people's presence, projected from remote locations. Virtual Roommates is a feature-based mapping between loosely linked spaces. It allows to overlay multiple physical and virtual scenes and populate them with physical or virtual characters. As the name implies, the Virtual Roommates concept provides continuous ambient presence for multiple disparate groups, similar to people sharing living conditions, but without the boundaries of real space.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"4613 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122836680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Research of hand positioning and gesture recognition based on binocular vision
Tong-de Tan, Zhijie Guo
Pub Date: 2011-03-19 · DOI: 10.1109/ISVRI.2011.5759657
This paper proposes a new method of extracting feature points of the hand. The method uses the hand's center of mass as the match point to calculate the location of the target, based on a mathematical model of binocular visual positioning. The convex hull points of the hand contour, obtained by image segmentation, are used to identify different gestures. Furthermore, a system is designed that both locates the three-dimensional position of the hand and identifies the corresponding gestures; it can serve as an interface that drives a virtual hand to grasp, move, and release virtual objects.
{"title":"Research of hand positioning and gesture recognition based on binocular vision","authors":"Tong-de Tan, Zhijie Guo","doi":"10.1109/ISVRI.2011.5759657","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759657","url":null,"abstract":"This paper proposes a new method of extracting feature points of hand. The method uses the center of mass of the hand as the match point to calculate the location information of the target based on Mathematical model of binocular visual positioning. The convex hull points of hand contour obtained by image segmentation can be used to identify the different gestures. Furthermore, a system with both functions of locating the three-dimensional position of hand and identifying the appropriate gestures is designed, which can serve as the interface to drive virtual hand to complete manipulation of grasping, moving and releasing virtual objects.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123939114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SAMSON: Simulation supported traffic training environment
Adrian Steinemann, Yves Kellenberger, Pascal Peikert, A. Kunz
Pub Date: 2011-03-19 · DOI: 10.3929/ETHZ-A-006397882
Since the publication of the first FTIR multi-touch interaction system [1], public attention to the field of Single Display Groupware (SDG) has risen steadily. Lately, earlier SDG systems with multi-interaction capabilities like DiamondTouch [2], reacTable [3], or Microsoft Surface [4] have been followed by promising systems such as ThinSight [5] and MightyTrace [6]. The latter integrate their tracking technology into commercial liquid crystal displays (LCDs) and thus drastically reduce the space requirements. Some recently published work [7] also shows a trend toward supporting industrially oriented tasks, as systems like BUILDIT [8] did some time ago. We present a traffic training simulator concept based on discrete event simulation to ensure realistic traffic behavior, adequate visualization, and a user-centered interaction concept on an SDG system to support training activities for policemen. Within this environment, policemen can train their behavior and decision-making in different traffic situations, much like other professionals train their standard procedures, e.g. pilots in a flight simulator. Such a training environment makes it possible to learn offline about important characteristics of intersections based on historical data, such as system stability, incident handling, or additional improvement potential.
{"title":"SAMSON: Simulation supported traffic training environment","authors":"Adrian Steinemann, Yves Kellenberger, Pascal Peikert, A. Kunz","doi":"10.3929/ETHZ-A-006397882","DOIUrl":"https://doi.org/10.3929/ETHZ-A-006397882","url":null,"abstract":"Since the publication of the first FTIR multi-touch interaction system [1], the public attention for the field of Single Display Groupware (SDG) has been rising constantly. Lately, former SDG systems with multi-interaction capabilities like DiamondTouch [2], reacTable [3], or Microsoft Surface [4] have been followed by promising systems, i.e. ThinSight [5] and MightyTrace [6]. The latter integrate their tracking technology into commercial liquid crystal displays (LCD), and thus drastically reduce the space requirements. Some recently published work [7] also conveys a trend to support industrial-oriented tasks as systems like BUILDIT [8] did some time ago. We present a traffic training simulator concept based on discrete event simulation to ensure realistic traffic behavior, adequate visualization, and a user centered interaction concept on an SDG system to support training activities for policemen. Within this environment, policemen are able to train their behavior and the adequate choices under different traffic situations much like other professionals train their standard procedures, e.g. pilots in a flight simulator. Such a training environment will give the possibility to learn offline about important characteristics of intersections based on historical data like system stability, incident handling, or additional improvement potential.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125014798","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}