"A Benchmark of Four Methods for Generating 360° Saliency Maps from Eye Tracking Data"
Brendan David-John, Pallavi Raiturkar, O. Le Meur, Eakta Jain
Modeling and visualization of user attention in Virtual Reality are important for many applications, such as gaze prediction, robotics, retargeting, video compression, and rendering. Several methods have been proposed to model eye tracking data as saliency maps. We benchmark the performance of four such methods for 360° images. We provide a comprehensive analysis and implementations of these methods to assist researchers and practitioners. Finally, we make recommendations based on our benchmark analyses and the ease of implementation.
{"title":"A Benchmark of Four Methods for Generating 360° Saliency Maps from Eye Tracking Data","authors":"Brendan David-John, Pallavi Raiturkar, O. Meur, Eakta Jain","doi":"10.1109/aivr.2018.00028","DOIUrl":"https://doi.org/10.1109/aivr.2018.00028","url":null,"abstract":"Modeling and visualization of user attention in Virtual Reality is important for many applications, such as gaze prediction, robotics, retargeting, video compression, and rendering. Several methods have been proposed to model eye tracking data as saliency maps. We benchmark the performance of four such methods for 360° images. We provide a comprehensive analysis and implementations of these methods to assist researchers and practitioners. Finally, we make recommendations based on our benchmark analyses and the ease of implementation.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124047174","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Combining Leap Motion with Unity for Virtual Glove Puppets"
Chi-Yen Lin, Zhang-Hao Yang, Heng-Wei Zhou, Tsai-Ni Yang, Hong-Nien Chen, T. Shih
Glove puppetry is an important traditional art in Taiwan whose performance relies on players controlling the puppets by hand. In the past it was enormously popular: televised glove puppetry took Taiwan by storm, and on June 25th, 1974, one program was even shut down for "retarding the work schedule of agricultural workers", which speaks for itself about the art's popularity at the time. Recently, however, the high cost of this traditional performance (a professional stage, expensive puppets, and long preparation time) has made it uncompetitive. In this paper, we present a method that lets users control virtual puppets through a computer: a Leap Motion captures the user's hand gestures, and the puppets' motion is shown on the computer screen. Users can choose different puppets, scenes, and music according to their own preferences through the system we built. The virtual glove puppetry system aims to let more users enjoy this traditional art at a lower cost.
{"title":"Combining Leap Motion with Unity for Virtual Glove Puppets","authors":"Chi-Yen Lin, Zhang-Hao Yang, Heng-Wei Zhou, Tsai-Ni Yang, Hong-Nien Chen, T. Shih","doi":"10.1109/AIVR.2018.00059","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00059","url":null,"abstract":"The performance of glove puppetry relies on the players to control the puppet. Glove puppetry is an important traditional art in Taiwan. In the past, it even became very popular and took the whole Taiwan by storm after it broadcasted. On June 25th, 1974, it even be closed down for the reason of \"retarding the work schedule of agricultural workers\". Obviously, the degree of the popularity in Taiwan went without saying then. Recently, because of the expensive cost of this traditional performance (such as professional stage, high cost puppet, and long preparing time for performance), it becomes uncompetitive. For the following paper, we provide a method which can let users control the virtual puppets through the computer. The Leap Motion will catch the hand gesture of the user and show the motion of the puppets on the computer screen. User can choose different puppets, scene and music according to your own preferences though the system that we construct. Virtual glove puppetry system is aimed to make more users to enjoy the pleasure of this traditional art in a lower cost.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122101325","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Graphic Code: A New Machine Readable Approach"
Leandro Cruz, Bruno Patrão, Nuno Gonçalves
Machine Readable Codes (MRCs) have been used for several purposes; approaches like UPC and QR Code can be seen everywhere. Recently, a new MRC has emerged that combines the communication power of classical methods with a meaningful improvement in aesthetics and data capacity. This method is named Graphic Code. Although it has been used in previous research, the name was first used publicly at the Security Document World Conference and Exhibition, 2018. Graphic Code has two major advantages over classical MRCs: aesthetics and larger coding capacity. It opens new possibilities for several purposes, such as identification, tracking (using a specific border), and transferring content to the application. This paper focuses on presenting how Graphic Code can be used in industry applications, emphasizing its uses in Augmented Reality (AR). In the first context, it is already being used for creating labels and validation stamps. In the second, it can be used as a marker (for identification and tracking) and to encode parameters for an AR application, such as large texts, the mesh of a 3D model, an image, a drawing, or other complex controls.
{"title":"Graphic Code: A New Machine Readable Approach","authors":"Leandro Cruz, Bruno Patrão, Nuno Gonçalves","doi":"10.1109/AIVR.2018.00036","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00036","url":null,"abstract":"Machine Readable Codes have been used for severalpurposes. Approaches like UPC and QR Code can beseen everywhere. Recently, it has emerged a new MRC able to combine the communication power of classical methodsto a meaningful improvement on aesthetics and data capacity.This method is named Graphic Code1. Although it has beenused in previous researches, this name was firstly used publiclyat Security Document World Conference and Exhibition, 2018. Graphic Code has two major advantages over classical MRCs: aesthetics and larger coding capacity. It opens new possibilitiesfor several purposes such as identification, tracking (using aspecific border), and transferring of content to the application.This paper focuses on presenting how graphic code can be usedfor industry applications, emphasizing its uses on Augmented Reality (AR). In the first context, it is still being used for creatinglabels and validation stamps. In the second one, it can be used asa marker (identification and tracking), and to code parametersfor an AR application such as large texts, meshes of a 3D model, an image, a drawing, or other complex controls.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129785485","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Towards a Music Visualization on Robot (MVR) Prototype"
Pei-Chun Lin, D. Mettrick, P. Hung, Farkhund Iqbal
This paper presents a Music Visualization on Robot (MVR) prototype system that automatically links a robot's flashing lights, color, and emotion through music. The MVR system has three parts. First, the system computes the timing of the light flashes through beat tracking. Second, it estimates the emotion correlated with the music's mood. Third, it links color with emotion. To illustrate the prototype on a robot, the implementation is based on a programmable robot called Zenbo, because Zenbo offers 8 LED light colors on its 2 wheels and 24 facial expressions, supporting various compositions.
{"title":"Towards a Music Visualization on Robot (MVR) Prototype","authors":"Pei-Chun Lin, D. Mettrick, P. Hung, Farkhund Iqbal","doi":"10.1109/AIVR.2018.00060","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00060","url":null,"abstract":"This paper presents a Music Visualization on Robot (MVR) prototype system which automatically links the flashlight, color and emotion of a robot through music. The MVR system is divided into three portions. Firstly, the system calculates the waiting time for a flashlight by beat tracking. Secondly, the system calculates the emotion correlated with music mood. Thirdly, the system links the color with emotion. To illustrate the prototype on a robot, the prototype implementation is based on a programmable robot called Zenbo because Zenbo has 8 LED light colors on 2 wheels and 24 face emotions to support various compositions.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125316322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Objective Assessment of Line Distortions in Viewport Rendering of 360º Images"
Falah Rahim, Tiago Rosa Maria Paula Queluz, João Ascenso
Since displays are planar and have a limited field of view, visualizing 360º (spherical) content requires a projection that maps pixels on the sphere to a 2D plane segment. This 2D plane is called the viewport and is created with a limited field of view, usually much less than 360º. To create the viewport, 3D points on the sphere are projected onto the 2D plane, usually with a perspective projection. This process leads to geometric distortions in the viewport, such as objects that appear stretched or image structures that are bent. This paper proposes a content-dependent objective quality assessment procedure that evaluates the line distortions occurring during viewport creation, in order to identify which projection center minimizes their subjective impact. To this end, features that characterize the amount of line distortion in the viewport image are extracted and fed to a Support Vector Machine (SVM) classifier to obtain the viewport quality. To train the classifier, a subjective evaluation of rendered viewport images was conducted to obtain perceptual scores for different types of content and projection centers. The experimental results show that the proposed metric predicts viewport quality with an average accuracy of 91.2%.
{"title":"Objective Assessment of Line Distortions in Viewport Rendering of 360º Images","authors":"Falah Rahim, Tiago Rosa Maria Paula Queluz, João Ascenso","doi":"10.1109/AIVR.2018.00017","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00017","url":null,"abstract":"Since displays are planar and with a limited field of view, to visualize 360º (spherical) content, it is necessary to employ a projection to map pixels on the sphere to a 2D plane segment. This 2D plane is called viewport and is created with some limited field of view, usually much less than 360º. To create the viewport, 3D points on the sphere are projected to the 2D plane usually with a perspective projection. This process leads to geometric distortions in the viewport, such as objects that appear stretched or image structures that are bent. This paper proposes a content-dependent objective quality assessment procedure to evaluate line distortions that occur during the viewport creation process, to identify which projection center minimizes the subjective impact of these distortions. To achieve this objective, features that characterize the amount of line distortion in the viewport image are extracted and used by a Support Vector Machine (SVM) classifier, to obtain the viewport quality. To train the classifier, a subjective evaluation of rendered viewport images was conducted to obtain the perceptual scores for different types of content and projection centers. The experimental results show that the proposed metric is able to predict the viewport quality with an average accuracy of 91.2%","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115890791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Visual Augmentation of Printed Materials with Intelligent See-Through Glass Displays: A Prototype Based on Smartphone and Pepper's Ghost"
F. Sandnes, Evelyn Eika
Augmented reality technologies have been applied in educational contexts to enhance traditional textbooks with so-called mixed reality books, where static printed content is augmented with dynamic content. These techniques typically rely on mobile devices held in mid-air or displays placed behind the textbook. This paper presents a design case outlining a concept for augmenting printed material that operates in the 2D plane by superimposing images on top of the printed material: the contents of a smartphone display are reflected over the printed surface through a see-through glass. Several use cases are discussed. The method holds potential for both education and accessibility.
{"title":"Visual Augmentation of Printed Materials with Intelligent See-Through Glass Displays: A Prototype Based on Smartphone and Pepper's Ghost","authors":"F. Sandnes, Evelyn Eika","doi":"10.1109/AIVR.2018.00063","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00063","url":null,"abstract":"Augmented reality technologies have been applied in educational contexts to enhance traditional textbooks with so-called mixed reality books where static printed context is augmented with dynamic content. These techniques sometimes exploit mobile devices held in mid-air or displays placed behind the textbook. This paper presents a design case outlining a concept for augmenting printed material which operated in the 2D plane by superimposing images on top of the printed material. The contents of a smartphone display are reflected via the printed surface. Several use cases are discussed. The method holds potential for both education and accessibility.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124690966","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"AI for Toggling the Linearity of Interactions in AR"
Jing Qian, Laurent Denoue, Jacob T. Biehl, David A. Shamma
Interaction in augmented reality (AR) or mixed reality environments is generally classified into two modalities: linear (relative to the object) or non-linear (relative to the camera). Switching between these modes tailors the AR experience to different scenarios, but such switching can be arduous when on-board touch interaction is limited or restricted, as is often the case in medical or industrial applications that require sterility. To solve this, we present Sound-to-Experience, in which the modality is toggled by a noise or sound detected using a modern deep-network Artificial Intelligence classifier.
{"title":"AI for Toggling the Linearity of Interactions in AR","authors":"Jing Qian, Laurent Denoue, Jacob T. Biehl, David A. Shamma","doi":"10.1109/AIVR.2018.00040","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00040","url":null,"abstract":"Interaction in augmented reality (AR) or mixed reality environments is generally classified into two modalities: linear (relative to object) or non-linear (relative to camera). Switching between these modes tailors the AR experience to different scenarios. Such interactions can be arduous in cases when on-board touch interaction is limited or restricted as is often the case in medical or industrial applications that require sterility. To solve this, we present Sound-to-Experience where the modality can be effectively toggled by noise or sound which is detected using a modern Artificial Intelligence deep-network classifier.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124034338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"An Augmented Reality Application Using Graphic Code Markers"
Bruno Patrão, Leandro Cruz, Nuno Gonçalves
This paper presents applications of the Graphic Code that exploit its large-scale information coding capability in Augmented Reality. Machine Readable Codes (MRCs) are widely used for many purposes, such as product tagging or holding URLs. The recently introduced Graphic Code differs from classical MRCs in that it is fully integrated with images for aesthetic control. Furthermore, it can encode a large amount of information and can therefore store types of models that are unusual for classical MRCs. The main advantage of using our approach as an Augmented Reality marker is the possibility of creating generic applications that read and decode Graphic Code markers which may contain 3D models and complex scenes encoded in them. Additionally, the resulting marker has strong aesthetic characteristics.
{"title":"An Augmented Reality Application Using Graphic Code Markers","authors":"Bruno Patrão, Leandro Cruz, Nuno Gonçalves","doi":"10.1109/AIVR.2018.00044","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00044","url":null,"abstract":"This paper lies on presenting applications of the Graphic Code exploiting its large-scale information coding capabilities applied to Augmented Reality. Machine Readable Codes (MRCs) are largely used for many reasons, such as, product tagging or to hold URLs. The recently introduced Graphic Code differs from classical MRCs because it is fully integrated with images for aesthetic control. Furthermore, it is able to code large amount of information and, for that reason, it can store different types of models for applications that are unusual for classical MRCs. The main advantage of using our approach as an Augmented Reality marker is the possibility of creating generic applications that can read and decode these Graphic Code markers which may contain 3D models and complex scenes encoded in it. Additionally, the resulting marker has strong aesthetic characteristics associated to it.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130689737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Comparative Reality: Measuring User Experience and Emotion in Immersive Virtual Environments"
Adam Greenfeld, A. Lugmayr, Wesley Lamont
In this work we present the results of a large-scale comparative study measuring the user experience and emotions of a test group exposed to five immersive environments: an HMD, a large immersive display, mobile VR, AR (HoloLens), and a tablet PC. We critically analyze current Mixed Reality (MR) design guidelines typically used in Virtual Reality (VR) and Augmented Reality (AR) to support developers in creating engaging and accessible content. Through a mixed-method approach ranging from questionnaires and mental-effort measurements to biofeedback and interviews, we obtained new insights into participants' attitudes, interaction patterns, behavior, emotional state, and mental effort. Overall, we redefined existing design and usability guidelines based on these insights, enabling developers to create more intuitive and immersive content. A UX design approach is pivotal for the enhancement of immersive applications.
{"title":"Comparative Reality: Measuring User Experience and Emotion in Immersive Virtual Environments","authors":"Adam Greenfeld, A. Lugmayr, Wesley Lamont","doi":"10.1109/AIVR.2018.00048","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00048","url":null,"abstract":"In this work we present the results of a large-scale comparative user-experience study measuring the user-experience and emotions of a test-group exposed to five immersive virtual reality environments: HMD, large immersive display, mobile VR, AR (HoloLens), and tablet PC. We critically analyze current Mixed Reality (MR) design guidelines typically used in Virtual Reality (VR) and Augmented Reality (AR) to support developers in creating engaging and accessible content. Through a mixed method approach - ranging from questionnaires, measuring mental effort, biofeedback, and interviews - we obtained new insights into participant's attitudes, interaction patterns, behavior, emotional state, and mental effort. Overall, we redefined existing design and usability guidelines to enable developers to create more intuitive and immersive content based on our insights. A UX design approach is pivotal for the enhancement of immersive applications.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129949947","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
"Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication"
M. Joachimczak, Juan Liu, H. Ando
We study how mixed-reality (MR) telepresence can enhance long-distance human interaction, and how altering the three-dimensional (3D) representation of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and Microsoft's HoloLens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over a network for rendering at the remote location on the HoloLens. In this pilot study, we used a mock job interview paradigm to induce stress in human subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three real-time reconstructed representations of the interviewer: a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR), and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires while their biophysical responses were recorded. We found that the size of the 3D representation of the remote interviewer had a significant effect on participants' stress levels and their sense of presence: the NR condition induced more stress and presence than the SR condition and differed significantly from the LCD condition.
{"title":"Downsizing: The Effect of Mixed-Reality Person Representations on Stress and Presence in Telecommunication","authors":"M. Joachimczak, Juan Liu, H. Ando","doi":"10.1109/AIVR.2018.00029","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00029","url":null,"abstract":"We study how mixed-reality (MR) telepresence can enhance long-distance human interaction and how altering three-dimensional (3D) representations of a remote person can be used to modulate stress and anxiety during social interactions. To do so, we developed an MR telepresence system employing commodity depth sensors and Microsoft's Hololens. A textured, polygonal 3D model of a person was reconstructed in real time and transmitted over network for rendering in remote location using Hololens. In this pilot study, we used mock job interview paradigm to induce stress in human-subjects interacting with an interviewer presented as an MR hologram. Participants were exposed to three different types of real-time reconstructed virtual holograms of the interviewer, a natural-sized 3D reconstruction (NR), a miniature 3D reconstruction (SR) and a 2D-display representation (LCD). Participants reported their subjective experience through questionnaires, while their biophysical responses were recorded. We found that the size of 3D representation of a remote interviewer had a significant effect on participants' stress levels and their sense of presence. NR condition induced more stress and presence than SR condition and was significantly different from LCD condition.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130303788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}