Surya Soujanya Kodavalla, M. Goel, Priyanka Srivastava
The current work assesses physiological and psychological responses to 360° emotional videos selected from the Stanford virtual reality (VR) affective database [Li et al., 2017], presented using a VR head-mounted display (HMD). Participants were asked to report their valence and arousal levels after watching each video. Electrodermal activity (EDA) was recorded while they watched the videos. The current pilot study shows no significant difference in skin-conductance response (SCR) between the high- and low-arousal experiences. Similar trends were observed for high and low valence. The self-report pilot data on valence and arousal show no statistically significant difference between the Stanford VR affective responses and the corresponding psychological responses of the Indian population. Despite the positive result of no significant difference in self-reports across cultures, we are limited in generalizing the result because of the small sample size.
{"title":"Indian Virtual reality affective database with self-report measures and EDA","authors":"Surya Soujanya Kodavalla, M. Goel, Priyanka Srivastava","doi":"10.1145/3359996.3364698","DOIUrl":"https://doi.org/10.1145/3359996.3364698","url":null,"abstract":"The current work assesses the physiological and psychological responses to the 360° emotional videos selected from Stanford virtual reality (VR) affective database [Li et al., 2017], presented using VR head-mounted display (HMD). Participants were asked to report valence and arousal level after watching each video. The electro-dermal activity (EDA) was recorded while watching the videos. The current pilot study shows no significant difference in skin-conductance response (SCR) between the high and low arousal experience. Similar trends were observed during high and low valence. The self-report pilot data on valence and arousal shows no statistically significant difference between Stanford VR affective responses and the corresponding Indian population psychological responses. Despite positive result of no-significant difference in self-report across cultures, we are limited to generalize the result because of small sample size.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116880017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present “AHMED”, a mixed-reality toolset that allows visitors to experience mixed-reality museum or art exhibitions created ad hoc at locations such as event venues, private parties, or a living room. The system democratizes access to exhibitions for populations that cannot visit them in person for reasons of disability, time constraints, travel restrictions, or socio-economic status.
{"title":"AHMED: Toolset for Ad-Hoc Mixed-reality Exhibition Design","authors":"Krzysztof Pietroszek, Carl Moore","doi":"10.1145/3359996.3364729","DOIUrl":"https://doi.org/10.1145/3359996.3364729","url":null,"abstract":"We present “AHMED”, a mixed-reality toolset that allows visitors to experience mixed-reality museum or art exhibitions created ad-hoc at locations such as event venues, private parties,or a living room. The system democratizes access to exhibitions for populations that cannot visit these exhibitions in person for reasons of disability, time-constraints, travel restrictions, or socio-economic status.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116956679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Annika Wohlan, N. Hochgeschwender, Nadine Meissler
Software systems and components are increasingly based on machine learning methods such as Convolutional Neural Networks (CNNs). Thus, there is a growing need for general programmers and machine learning newcomers to understand how these algorithms work. However, as neural networks are complex in nature, novel means of presentation are required to enable rapid access to their functionality. To that end, this paper examines how CNNs can be visualized in Virtual Reality. A first exploratory study confirmed that our visualization approach is both intuitive to use and conducive to learning.
{"title":"Visualizing Convolutional Neural Networks with Virtual Reality","authors":"Annika Wohlan, N. Hochgeschwender, Nadine Meissler","doi":"10.1145/3359996.3364817","DOIUrl":"https://doi.org/10.1145/3359996.3364817","url":null,"abstract":"Software systems and components are increasingly based on machine learning methods, such as Convolutional Neural Networks (CNNs). Thus, there is a growing need for common programmers and machine learning newcomers to understand the general functioning of these algorithms. However, as neural networks are complex in nature, novel presentation means are required to enable rapid access to the functionality. For that purpose, this paper examines how CNNs can be visualized in Virtual Reality. A first exploratory study has confirmed that our visualization approach is both intuitive to use and conductive to learning.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121167202","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evgeny V. Tsykunov, R. Ibrahimov, Derek Vasquez, D. Tsetserukou
We propose SlingDrone, a novel Mixed Reality interaction paradigm that utilizes a micro-quadrotor as both a pointing controller and an interactive robot with a slingshot motion type. The drone attempts to hover at a given position while the human pulls it in the desired direction using a hand grip and a leash. Based on the displacement, a virtual trajectory is defined. To allow for intuitive and simple control, we use virtual reality (VR) technology to trace the path of the drone based on the displacement input. The user receives force feedback propagated through the leash. Force feedback from SlingDrone, coupled with the visualized trajectory in VR, creates an intuitive and user-friendly pointing device. When the drone is released, it follows the trajectory that was shown in VR. An onboard payload (e.g. a magnetic gripper) can enable various scenarios for real interaction with the surroundings, e.g. manipulation or sensing. Unlike the HTC Vive controller, SlingDrone does not require handheld devices, so it can be used as a standalone pointing technology in VR.
{"title":"SlingDrone: Mixed Reality System for Pointing and Interaction Using a Single Drone","authors":"Evgeny V. Tsykunov, R. Ibrahimov, Derek Vasquez, D. Tsetserukou","doi":"10.1145/3359996.3364271","DOIUrl":"https://doi.org/10.1145/3359996.3364271","url":null,"abstract":"We propose SlingDrone, a novel Mixed Reality interaction paradigm that utilizes a micro-quadrotor as both pointing controller and interactive robot with a slingshot motion type. The drone attempts to hover at a given position while the human pulls it in desired direction using a hand grip and a leash. Based on the displacement, a virtual trajectory is defined. To allow for intuitive and simple control, we use virtual reality (VR) technology to trace the path of the drone based on the displacement input. The user receives force feedback propagated through the leash. Force feedback from SlingDrone coupled with visualized trajectory in VR creates an intuitive and user friendly pointing device. When the drone is released, it follows the trajectory that was shown in VR. Onboard payload (e.g. magnetic gripper) can perform various scenarios for real interaction with the surroundings, e.g. manipulation or sensing. Unlike HTC Vive controller, SlingDrone does not require handheld devices, thus it can be used as a standalone pointing technology in VR.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126535456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The estimation of distances and spatial relations between surgical instruments and surrounding anatomical structures is a challenging task for clinicians in image-guided surgery. Using augmented reality (AR), navigation aids can be displayed directly at the intervention site to support the assessment of distances and reduce the risk of damage to healthy tissue. To this end, four distance-encoding visualisation concepts were developed using a head-mounted optical see-through AR setup and evaluated in a comparison study. Results suggest a general advantage of the proposed methods over a blank visualisation providing no additional information. A Distance Sensor concept signalising the proximity of nearby structures resulted in the least time the instrument spent below 5 mm to surrounding risk structures and yielded the fewest collisions with them.
{"title":"Augmented Reality Visualisation Concepts to Support Intraoperative Distance Estimation","authors":"F. Heinrich, G. Schmidt, F. Jungmann, C. Hansen","doi":"10.1145/3359996.3364818","DOIUrl":"https://doi.org/10.1145/3359996.3364818","url":null,"abstract":"The estimation of distances and spatial relations between surgical instruments and surrounding anatomical structures is a challenging task for clinicians in image-guided surgery. Using augmented reality (AR), navigation aids can be displayed directly at the intervention site to support the assessment of distances and reduce the risk of damage to healthy tissue. To this end, four distance-encoding visualisation concepts were developed using a head-mounted optical see-through AR setup and evaluated by conducting a comparison study. Results suggest the general advantage of the proposed methods compared to a blank visualisation providing no additional information. Using a Distance Sensor concept signalising the proximity of nearby structures resulted in the least time the instrument was located below 5mm to surrounding risk structures and yielded the least amount of collisions with them.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126207652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Nathan Moore, Soojeong Yoo, N. Ahmadpour, Russel Tommy, Martin Brown, P. Poronnik
The delivery of ongoing training and support to Advanced Life Support (ALS) teams poses significant resourcing and logistical challenges. Reduced exposure to cardiac arrests and mandated re-accreditation pose further challenges for educators to overcome. This work presents the ALS-SimVR (Advanced Life Support Simulation in VR) application, intended for use as a supplementary training and refresher asset for ALS team leaders. Its purpose is to allow critical care clinicians to rehearse the role of ALS team leader at a time and location of their choice. The application was developed for the Oculus Go and ported to the Oculus Quest; it is also available as a desktop and server-based streaming release.
{"title":"ALS-SimVR: Advanced Life Support Virtual Reality Training Application","authors":"Nathan Moore, Soojeong Yoo, N. Ahmadpour, Russel Tommy, Martin Brown, P. Poronnik","doi":"10.1145/3359996.3365051","DOIUrl":"https://doi.org/10.1145/3359996.3365051","url":null,"abstract":"The delivery of ongoing training and support to Advanced Life Support (ALS) teams poses significant resourcing and logistical challenges. A reduced exposure to cardiac arrests and mandated re-accreditation pose further challenges for educators to overcome. This work presents the ALS-SimVR (Advanced Life Support Simulation in VR) application. The application is intended for use as a supplementary training and refresher asset for ALS team leaders. The purpose of the application is to allow critical care clinicians to rehearse the role of ALS Team leader in their own time and location of choice. The application was developed for the Oculus-Go and ported to the Oculus-Quest. The application is also supported for a desktop and server based streaming release.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126679209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose using deep learning to estimate and predict the torso direction from head movements alone. The prediction allows the walk-in-place navigation interface to be implemented without additional sensing of the torso direction, thereby improving convenience and usability. We created a small dataset and tested our idea by training an LSTM model, obtaining a 3-class prediction rate of about 90%, a figure higher than that of other conventional machine learning techniques. While preliminary, the results show a possible inter-dependence between the viewing and torso directions, and with a richer dataset and more parameters, a more accurate level of prediction seems possible.
{"title":"Predicting the Torso Direction from HMD Movements for Walk-in-Place Navigation through Deep Learning","authors":"Juyoung Lee, Andréas Pastor, Jae-In Hwang, G. Kim","doi":"10.1145/3359996.3364709","DOIUrl":"https://doi.org/10.1145/3359996.3364709","url":null,"abstract":"In this paper, we propose to use the deep learning technique to estimate and predict the torso direction from the head movements alone. The prediction allows to implement the walk-in-place navigation interface without additional sensing of the torso direction, and thereby improves the convenience and usability. We created a small dataset and tested our idea by training an LSTM model and obtained a 3-class prediction rate of about 90%, a figure higher than using other conventional machine learning techniques. While preliminary, the results show the possible inter-dependence between the viewing and torso directions, and with richer dataset and more parameters, a more accurate level of prediction seems possible.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128093891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Augmented reality (AR) is a promising tool to convey useful information at the place where it is needed. However, perceptual issues with augmented reality visualizations affect the estimation of distances and depth and thus can lead to critically wrong assumptions. These issues have been successfully investigated for video see-through modalities. Moreover, advanced visualization methods encoding depth information by displaying additional depth cues were developed. In this work, state-of-the-art visualization concepts were adopted for a projective AR setup. We conducted a user study to assess the concepts’ suitability to convey depth information. Participants were asked to sort virtual cubes by using the provided depth cues. The investigated visualization concepts consisted of conventional Phong shading, a virtual mirror, depth-encoding silhouettes, pseudo-chromadepth rendering and an illustrative visualization using supporting line depth cues. Besides different concepts, we alternated between a monoscopic and a stereoscopic display mode to examine the effects of stereopsis. Consistent results across variables show a clear ranking of the examined concepts. The supporting lines approach and the pseudo-chromadepth rendering performed best. Stereopsis was shown to provide significant advantages for depth perception, while the particular visualization technique had only little effect on the investigated measures in this condition. However, similar results were achieved using the supporting lines and the pseudo-chromadepth concepts in a monoscopic setup. Our study showed the suitability of advanced visualization concepts for the rendering of virtual content in projective AR. Specific depth estimation results contribute to the future design and development of applications for these systems.
{"title":"Depth Perception in Projective Augmented Reality: An Evaluation of Advanced Visualization Techniques","authors":"F. Heinrich, Kai Bornemann, K. Lawonn, C. Hansen","doi":"10.1145/3359996.3364245","DOIUrl":"https://doi.org/10.1145/3359996.3364245","url":null,"abstract":"Augmented reality (AR) is a promising tool to convey useful information at the place where it is needed. However, perceptual issues with augmented reality visualizations affect the estimation of distances and depth and thus can lead to critically wrong assumptions. These issues have been successfully investigated for video see-through modalities. Moreover, advanced visualization methods encoding depth information by displaying additional depth cues were developed. In this work, state-of-the-art visualization concepts were adopted for a projective AR setup. We conducted a user study to assess the concepts’ suitability to convey depth information. Participants were asked to sort virtual cubes by using the provided depth cues. The investigated visualization concepts consisted of conventional Phong shading, a virtual mirror, depth-encoding silhouettes, pseudo-chromadepth rendering and an illustrative visualization using supporting line depth cues. Besides different concepts, we altered between a monoscopic and a stereoscopic display mode to examine the effects of stereopsis. Consistent results across variables show a clear ranking of examined concepts. The supporting lines approach and the pseudo-chromadepth rendering performed best. Stereopsis was shown to provide significant advantages for depth perception, while the current visualization technique had only little effect on investigated measures in this condition. However, similar results were achieved using the supporting lines and the pseudo-chromadepth concepts in a monoscopic setup. Our study showed the suitability of advanced visualization concepts for the rendering of virtual content in projective AR. 
Specific depth estimation results contribute to the future design and development of applications for these systems.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128494151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We introduce POL360: the first universal VR motion controller that leverages the principle of light polarization. POL360 enables a user who holds it and wears a VR headset to see their hand motion in a virtual world via its accurate 6-DOF position tracking. Compared to other techniques for VR positioning, POL360 has several advantages. (1) Mobile compatibility: Neither additional computing resources such as a PC/console nor any complicated pre-installation is required in the environment. The only necessary device is a VR headset with an IR LED module, to which a thin-film linear polarizer is attached, as a light source. (2) On-device computing: POL360’s positioning computation is completed on the microprocessor in the device. Thus, it does not require the additional computing resources of a VR headset. (3) Competitive accuracy and update rate: Despite its superior mobile compatibility and affordability, POL360 attains competitive accuracy and fast update rates: it achieves subcentimeter positioning accuracy and a tracking rate higher than 60 Hz. In this paper, we derive the mathematical formulation of 6-DOF positioning using light polarization for the first time and implement a POL360 prototype that can operate directly with any commercial VR headset system.
{"title":"POL360: A Universal Mobile VR Motion Controller using Polarized Light","authors":"Hyouk Jang, Juheon Choi, Gunhee Kim","doi":"10.1145/3359996.3364262","DOIUrl":"https://doi.org/10.1145/3359996.3364262","url":null,"abstract":"We introduce POL360: the first universal VR motion controller that leverages the principle of light polarization. POL360 enables a user who holds it and wears a VR headset to see their hand motion in a virtual world via its accurate 6-DOF position tracking. Compared to other techniques for VR positioning, POL360 has several advantages as follows. (1) Mobile compatibility: Neither additional computing resource like a PC/console nor any complicated pre-installation is required in the environment. Only necessary device is a VR headset with an IR LED module as a light source to which a thin-film linear polarizer is attached. (2) On-device computing: Our POL360’s computation for positioning is completed on the microprocessor in the device. Thus, it does not require additional computing resource of a VR headset. (3) Competitive accuracy and update rate: In spite of POL360’s superior mobile compatibility and affordability, POL360 attains competitive performance of accuracy and fast update rates. That is, it achieves the subcentimeter accuracy of positioning and the tracking rate higher than 60 Hz. In this paper, we derive the mathematical formulation of 6-DOF positioning using light polarization for the first time and implement a POL360 prototype that can directly operate with any commercial VR headset systems. 
In order to demonstrate POL360’s performance and usability, we carry out thorough quantitative evaluation and a user study and develop three game demos as use cases.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134065264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Waseem Safi, Fabrice Maurel, J. Routoure, P. Beust, Michèle Molina, Coralie Sann, Jessica Guilbert
We present the results of an empirical study examining the performance of sighted and blind individuals in discriminating the structures of web pages through vibro-tactile feedback.
{"title":"Blind Navigation of Web Pages through Vibro-tactile Feedbacks","authors":"Waseem Safi, Fabrice Maurel, J. Routoure, P. Beust, Michèle Molina, Coralie Sann, Jessica Guilbert","doi":"10.1145/3359996.3364758","DOIUrl":"https://doi.org/10.1145/3359996.3364758","url":null,"abstract":"We present results of an empirical study for examining the performance of sighted and blind individuals in discriminating structures of web pages through vibro-tactile feedbacks.","PeriodicalId":393864,"journal":{"name":"Proceedings of the 25th ACM Symposium on Virtual Reality Software and Technology","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115201531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}