Automatic transfer of musical mood into virtual environments
Sangyoon Han, Amit Bhardwaj, Seungmoon Choi
doi:10.1145/3281505.3281534
This paper presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music, then select images exhibiting that mood and transfer their styles, photorealistically or artistically, to the textures of objects in the VE. Our user study results indicate that our method is effective in transferring valence-related aspects of mood, but not arousal-related ones. Even so, our method can provide novel experiences in virtual reality and, by automating the transformation procedure, speed up the production of VEs.
Tap-tap menu: body touching for virtual interactive menus
Takumi Azai, Syunsuke Ushiro, Junlin Li, M. Otsuki, F. Shibata, Asako Kimura
doi:10.1145/3281505.3281561
Virtual and mixed reality make it possible to view and interact with virtual objects in 3D space. However, where to position menus in 3D space, and how to interact with them, are open problems. Existing studies developed methods of displaying a menu on the hand or arm. In this study, we propose a menu system whose menus appear on various body parts. Placing a menu on the body enables the user to operate it comfortably through kinesthesia and to perceive tactile feedback. Furthermore, by displaying menus not only on the hands and arms but also on the upper legs and the abdomen, the menu display area can be expanded. We developed a modeling application and introduce a menu design for that application.
{"title":"Tap-tap menu: body touching for virtual interactive menus","authors":"Takumi Azai, Syunsuke Ushiro, Junlin Li, M. Otsuki, F. Shibata, Asako Kimura","doi":"10.1145/3281505.3281561","DOIUrl":"https://doi.org/10.1145/3281505.3281561","url":null,"abstract":"Virtual and mixed realities make it possible to view and interact with virtual objects in 3D space. However, where to position menus in 3D space and how to interact with them are often problems. Existing studies developed methods of displaying a menu on the hand or arm. In this study, we proposed a menu system that appears at various body parts. By placing the menu on the body, it enables the user to operate the menus comfortably through kinesthesia, and perceive tactile feedback. Furthermore, displaying the menu not only in the hands and arms but also in the upper legs and the abdomen, the menu display area can be expanded. In this study, we developed a modeling application and introduced a proposed menu design for that application.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126027950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High color-fidelity display using a modified projector
Yuqi Li
doi:10.1145/3281505.3281665
A high color-fidelity display provides accurate spectral reproduction, which reduces observer metamerism. In this poster, we implement a multispectral projection display using a modified projector. The modification only requires adding a lens array on the projection optical path to create multiple copies of the image, and color filters to create new primaries. To produce new primaries with high throughput and low correlation, we propose a volume-maximization-based filter selection approach. We also present an efficient multispectral rendering algorithm that computes the input values of each primary. Experiments show that our multispectral display can accurately approximate desired multispectral images and effectively reduce observer metamerism compared with the original three-primary projection display.
Multi-view augmented reality with a drone
Aaron Hitchcock, Kelvin Sung
doi:10.1145/3281505.3283397
This paper presents early results from an exploration of Augmented Reality (AR) applications in which users have access to controllable alternative viewing positions provided by a camera-mounted unmanned aerial vehicle (UAV). The results include a system specification that defines and identifies the requirements of multi-view AR, and a demo application in which the user can switch between the traditional first-person view and a third-person view. While only an initial step in the investigation, these results illustrate practical applications of multi-view AR functionality. The paper concludes with a discussion of the next steps for the investigation.
Can we perceive changes in our moving speed: a comparison between directly and indirectly powering the locomotion in virtual environments
T. Q. Tran, Thanh Dat Ngoc Tran, Duy-Tam Nguyen, H. Regenbrecht, Minh-Triet Tran
doi:10.1145/3281505.3281510
Many categories of the illusion of self-motion have been widely studied with the support of virtual reality. However, the effects of directly versus indirectly powering one's movement on the ability to perceive changes in moving speed, and the relationship of these effects to sensory feedback, have not been investigated before. In this paper, we present the results of a user study on the difference in perceiving changes in moving speed between two movement techniques: "pedaling" and "throttling". We also explore the effects of different velocity gains, accelerations, and airflow speeds, and their interactions with the movement techniques, on users' perception of speed changes, as well as on user performance. We built a bike simulator that supports both movement techniques and provides sensory feedback. In general, "pedaling" allowed users to perceive changes in moving speed better than "throttling" did.
{"title":"Can we perceive changes in our moving speed: a comparison between directly and indirectly powering the locomotion in virtual environments","authors":"T. Q. Tran, Thanh Dat Ngoc Tran, Duy-Tam Nguyen, H. Regenbrecht, Minh-Triet Tran","doi":"10.1145/3281505.3281510","DOIUrl":"https://doi.org/10.1145/3281505.3281510","url":null,"abstract":"Many categories of the illusion of self-motion have been widely studied with the potential support of virtual reality. However, the effects of directly and indirectly powering the movement on the possibility of perceiving changes in moving speed and their relationship with sensory feedback on users' speed change perception have not been investigated before. In this paper, we present the results of our user study on the difference in perceiving changes in moving speed between two different movement techniques: \"pedaling\" and \"throttling\". We also explore the effects of different velocity gains, accelerations and speeds of airflow, and their interactions with the movement techniques on users' perception of speed changes in addition to user performance and perception. We built a bike simulator that supports both of the movement techniques and provides sensory feedback. In general, \"pedaling\" gave users more possibility to perceive changes in moving velocity than \"throttling\".","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128907778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Perceptual model optimized efficient foveated rendering
Zipeng Zheng, Zhuo Yang, Yinwei Zhan, Yuqing Li, Wenxin Yu
doi:10.1145/3281505.3281588
Higher resolutions, wider fields of view, and increasing frame rates of head-mounted displays (HMDs) demand ever more VR computing resources. Foveated rendering is a key solution to these challenges. This paper introduces foveated rendering optimized by a perceptual model: tessellation levels and culling areas are adaptively adjusted based on visual sensitivity. We improve rendering performance while still satisfying visual perception.
Balloonygen
Soichiro Toyohara, Toshiki Sato, H. Koike
doi:10.1145/3281505.3281532
Balloonygen is a tabletop display extended with a balloon-like deformable spherical screen: it can seamlessly present a spherical screen for three-dimensional content, such as omnidirectional images, within a conventional flat display. By continuously morphing between a two-dimensional (tabletop) shape and a three-dimensional (sphere) shape, it allows the benefits of a flat display and a spherical display to coexist, offering a smoother approach to information sharing. Balloonygen dynamically chooses a suitable way to display content by inflating a rubber membrane installed at the center of the tabletop display and morphing between the two shapes. In this study, through prototyping and the design of an application scenario, we discuss the advantages and disadvantages of this display and the interactions it enables.
Acquiring short range 4D light transport with synchronized projector camera system
Takafumi Iwaguchi, Hiroyuki Kubo, Takuya Funatomi, Y. Mukaigawa, S. Narasimhan
doi:10.1145/3281505.3283377
Light interacts with a scene in various ways. The light transport is useful for scene understanding because it describes the relationship between an incident light ray and the result of its interactions. Our goal is to acquire the 4D light transport between a projector and a camera, focusing on direct and short-range transport, which includes the effects of diffuse reflection, subsurface scattering, and inter-reflection. Acquiring the full 4D light transport is challenging because it requires a very large number of measurements. We propose an efficient method to acquire short-range light transport, which is dominant in general scenes, using a synchronized projector-camera system. We show transport profiles of various materials, including uniform and heterogeneous subsurface scattering.
{"title":"Acquiring short range 4D light transport with synchronized projector camera system","authors":"Takafumi Iwaguchi, Hiroyuki Kubo, Takuya Funatomi, Y. Mukaigawa, S. Narasimhan","doi":"10.1145/3281505.3283377","DOIUrl":"https://doi.org/10.1145/3281505.3283377","url":null,"abstract":"Light interacts with a scene in various ways. For scene understanding, a light transport is useful because it describes a relationship between the incident light ray and the result of the interaction. Our goal is to acquire the 4D light transport between the projector and the camera, focusing on direct and short-range transport that include the effect of the diffuse reflections, subsurface scattering, and inter-reflections. The acquisition of the light transport is challenging since the acquisition of the full 4D light transport requires a large number of measurement. We propose an efficient method to acquire short range light transport, which is dominant in the general scene, using synchronized projector-camera system. We show the transport profile of various materials, including uniform or heterogeneous subsurface scattering.","PeriodicalId":138249,"journal":{"name":"Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133676816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
System of delivering virtual object to user in remote place by handing gestures
M. Irie, T. Shibata
doi:10.1145/3281505.3281594
There are many ways to communicate with a person in a remote place, such as sending text, making a phone call, or chatting by video. Virtual reality systems turn contact with a distant person into communication through an avatar, which can feel like a barrier to reality. We therefore built a system that delivers virtual objects to a user in a remote place through the gesture of handing them over. Views of the remote and local spaces are projected on a wall using video chat, and each virtual object is handed over by using an Augmented Reality (AR) marker. The system promotes communication by conveying a feeling of connection with the remote space.
Deep face rotation in the wild
Shohei Morikawa, S. Saito
doi:10.1145/3281505.3281606
Generating face images at various orientations from a single image is useful for creating avatars in VR. In this paper, we introduce a new deep generative model that generates turnaround face images from a single image via a latent code space with a pose parameter. The model was trained on a large-scale image dataset annotated with attributes but containing no exact target images.