Conventional methods for real-time sound effects in 3D graphical and virtual environments rely on preparing all the needed samples ahead of time and simply replaying them as needed, or on parametrically modifying a basic set of samples using physically based techniques such as spring-damper simulation and modal analysis/synthesis. In this work, we propose to apply the generative adversarial network (GAN) approach to this problem, in which a single generator is trained to produce the needed sounds quickly and with perceptually indistinguishable quality. With the conventional methods, by contrast, separate approximate models would be needed to handle different material properties and contact types while maintaining real-time performance. We demonstrate our claim by training a GAN (specifically, WaveGAN) with sounds of different drums and synthesizing the sounds on the fly for a virtual drum-playing environment. A perceptual test revealed that subjects could neither discern the synthesized sounds from the ground truth nor perceive any noticeable delay after the corresponding physical event.
Minwook Chang, Y. Kim, G. Kim. "A Perceptual Evaluation of Generative Adversarial Network Real-Time Synthesized Drum Sounds in a Virtual Environment." 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), December 2018. DOI: 10.1109/AIVR.2018.00030.
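The appeal of the GAN approach above is that a single trained generator maps a latent vector to a full waveform in one fast forward pass. As a purely illustrative sketch (random weights, numpy only — not the paper's trained WaveGAN), the inference path can look like this: a dense projection followed by repeated upsampling stages until the target waveform length is reached.

```python
import numpy as np

def toy_generator(z, out_len=16384, seed=0):
    """Toy WaveGAN-style inference sketch: expand a latent vector
    through a dense layer and repeated 2x upsampling into a raw
    waveform. Weights are random placeholders; a real WaveGAN
    would load trained transposed-convolution weights."""
    rng = np.random.default_rng(seed)
    # dense projection to a short feature vector
    W = rng.standard_normal((z.size, 256)) * 0.1
    x = np.tanh(z @ W)                      # shape (256,)
    # repeated 2x upsampling with a small smoothing kernel
    kernel = np.array([0.25, 0.5, 0.25])
    while x.size < out_len:
        x = np.repeat(x, 2)                 # nearest-neighbour upsample
        x = np.convolve(x, kernel, mode="same")
    return np.tanh(x[:out_len])             # waveform in [-1, 1]

z = np.random.default_rng(42).standard_normal(100)
wave = toy_generator(z)                     # one forward pass per drum hit
```

Because inference is a single feed-forward pass, each drum hit costs the same fixed amount of compute, which is what makes the no-noticeable-delay result plausible at interactive rates.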
Virtual Reality (VR) can induce unprecedented immersion and a feeling of presence, yet many users experience motion sickness when moving through a virtual environment. Rollercoaster rides are popular in VR but must be carefully designed to limit the nausea a user may feel. This paper describes a novel framework for obtaining automated motion-sickness ratings using neural networks. An application that lets users create rollercoasters directly in VR, share them with other users, and ride and rate them is used to gather real-time data on the in-game behaviour of the player, the track itself, and users' ratings based on a Simulator Sickness Questionnaire (SSQ) integrated into the application. Machine-learning architectures based on deep neural networks are trained on this data to predict motion-sickness levels. While this paper focuses on rollercoasters, the framework could help rate any VR application involving camera movement for motion sickness and intensity. A new, well-defined dataset is provided, and the performance of the proposed architectures is evaluated in a comparative study.
Stefan Hell, V. Argyriou. "Machine Learning Architectures to Predict Motion Sickness Using a Virtual Reality Rollercoaster Simulation Tool." 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), November 2018. DOI: 10.1109/AIVR.2018.00032.
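The core of such a framework is a regression network mapping ride features to a sickness score. As a minimal hedged sketch (random weights, hypothetical feature names, a hypothetical 0-100 score scale — not the paper's actual architectures), a small fully connected network's forward pass might look like this:

```python
import numpy as np

def mlp_forward(x, weights):
    """Forward pass of a small fully connected regressor:
    one ReLU hidden layer, then a sigmoid output rescaled to a
    hypothetical 0-100 sickness-score range."""
    (W1, b1), (W2, b2) = weights
    h = np.maximum(0.0, x @ W1 + b1)           # ReLU hidden layer
    y = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output
    return 100.0 * y.item()                    # predicted score

rng = np.random.default_rng(0)
n_features = 6  # e.g. mean speed, track curvature, angular velocity (assumed)
weights = [
    (rng.standard_normal((n_features, 16)) * 0.3, np.zeros(16)),
    (rng.standard_normal((16, 1)) * 0.3, np.zeros(1)),
]
features = rng.standard_normal(n_features)
score = mlp_forward(features, weights)
```

In a real pipeline the weights would be fit against the SSQ ratings collected in the application, and the feature vector would summarize both the track geometry and the player's in-game behaviour.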
This paper proposes BRIEF, a backward reduction algorithm that explores compact CNN model designs from an information-flow perspective. The algorithm can remove a substantial number of non-zero weight parameters (redundant neural channels) from a network by considering its dynamic behavior, which traditional model-compaction techniques cannot achieve. With the proposed algorithm, we achieve significant model reduction on ResNet-34 at ImageNet scale (32.3% reduction), 3x better than the previous result (10.8%). Even for highly optimized models such as SqueezeNet and MobileNet, we achieve an additional 10.81% and 37.56% reduction, respectively, with negligible performance degradation.
Yu-Hsun Lin, Chun-Nan Chou, Edward Y. Chang. "BRIEF: Backward Reduction of CNNs with Information Flow Analysis." 2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR), July 2018. DOI: 10.1109/AIVR.2018.00014.
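To make the notion of channel reduction concrete, here is a sketch of the traditional static baseline that BRIEF improves upon: magnitude-based channel pruning, which ranks a convolution's output channels by L1 norm and keeps the strongest fraction. BRIEF instead ranks channels via information-flow analysis of the network's dynamic behaviour; this static criterion is only a stand-in to show what "removing redundant channels" means at the tensor level.

```python
import numpy as np

def prune_channels(weight, keep_ratio):
    """Magnitude-based channel pruning baseline. `weight` is a conv
    weight tensor of shape (out_ch, in_ch, kH, kW); output channels
    with the largest L1 norms are kept, in their original order."""
    l1 = np.abs(weight).reshape(weight.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weight.shape[0])))
    keep = np.sort(np.argsort(l1)[::-1][:n_keep])
    return weight[keep], keep

rng = np.random.default_rng(1)
w = rng.standard_normal((64, 32, 3, 3))  # a ResNet-style 3x3 conv layer
# keep ~67.7% of channels, i.e. ~32.3% fewer parameters in this layer
# (the paper's 32.3% figure is model-wide, not per-layer)
pruned, kept = prune_channels(w, keep_ratio=0.677)
reduction = 1.0 - pruned.size / w.size
```

Note that pruning an output channel here would also require dropping the matching input channel of the next layer; BRIEF's contribution is deciding *which* channels to drop using the network's dynamic behaviour rather than a static weight statistic.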