Tactile hand motion and pose guidance for 3D interaction
Alexander Marquardt, Jens Maiero, E. Kruijff, Christina Trepkowski, A. Schwandt, André Hinkenjann, Johannes Schöning, W. Stuerzlinger
We present a novel forearm-and-glove tactile interface that can enhance 3D interaction by guiding hand motor planning and coordination. In particular, we aim to improve hand motion and pose actions related to selection and manipulation tasks. Through our user studies, we illustrate how tactile patterns can guide the user by triggering hand pose and motion changes, for example to grasp (select) and manipulate (move) an object. We discuss the potential and limitations of the interface and outline future work.
DOI: 10.1145/3281505.3281526
Automatic transfer of musical mood into virtual environments
Sangyoon Han, Amit Bhardwaj, Seungmoon Choi
This paper presents a method that automatically transforms a virtual environment (VE) according to the mood of input music. We use machine learning to extract a mood from the music. We then select images exhibiting that mood and transfer their styles to the textures of objects in the VE, either photorealistically or artistically. Our user study results indicate that our method is effective in transferring valence-related aspects of mood, but not arousal-related ones. Our method can nevertheless provide novel experiences in virtual reality and speed up the production of VEs by automating the process.
DOI: 10.1145/3281505.3281534
Multi-view augmented reality with a drone
Aaron Hitchcock, Kelvin Sung
This paper presents early results from an exploration of Augmented Reality (AR) applications in which users have access to controllable alternative viewing positions provided by a camera-mounted unmanned aerial vehicle (UAV). These results include a system specification that defines and identifies the requirements of multi-view AR, and a demo application in which the user can switch between the traditional first-person view and a third-person view. While only an initial step in the investigation, the results illustrate practical applications of multi-view AR functionality. The paper concludes with a discussion of the next steps for the investigation.
DOI: 10.1145/3281505.3283397
Tap-tap menu: body touching for virtual interactive menus
Takumi Azai, Syunsuke Ushiro, Junlin Li, M. Otsuki, F. Shibata, Asako Kimura
Virtual and mixed reality make it possible to view and interact with virtual objects in 3D space. However, where to position menus in 3D space and how to interact with them remain open problems. Existing studies have developed methods for displaying a menu on the hand or arm. In this study, we propose a menu system whose menus can appear on various body parts. Placing a menu on the body enables the user to operate it comfortably through kinesthesia and to perceive tactile feedback. Furthermore, by displaying menus not only on the hands and arms but also on the upper legs and abdomen, the available menu display area is expanded. We developed a modeling application and introduce the menu design we propose for it.
DOI: 10.1145/3281505.3281561
GravityCup
Chih-Hao Cheng, Chia-Chi Chang, Ying-Hsuan Chen, Ying-Li Lin, Jing-Yuan Huang, Ping-Hsuan Han, Ju-Chun Ko, Lai-Chung Lee
During interaction in a virtual environment, haptic displays provide users with sensations such as vibration, texture simulation, and electrical muscle stimulation. However, as humans perceive object weights naturally in daily life, objects picked up in virtual reality feel unrealistically light. To create an immersive experience in virtual reality that includes weight sensation, we propose GravityCup, a liquid-based haptic feedback device that simulates realistic object weights and inertia when moving virtual handheld objects. In different scenarios, GravityCup uses liquid to provide users with a dynamic weight sensation experience that enhances interaction with handheld objects in virtual reality.
DOI: 10.1145/3281505.3281569
Acquiring short range 4D light transport with synchronized projector camera system
Takafumi Iwaguchi, Hiroyuki Kubo, Takuya Funatomi, Y. Mukaigawa, S. Narasimhan
Light interacts with a scene in various ways. For scene understanding, the light transport is useful because it describes the relationship between an incident light ray and the result of its interaction with the scene. Our goal is to acquire the 4D light transport between a projector and a camera, focusing on direct and short-range transport, which includes the effects of diffuse reflection, subsurface scattering, and inter-reflections. This acquisition is challenging because measuring the full 4D light transport requires a large number of measurements. We propose an efficient method to acquire short-range light transport, which is dominant in general scenes, using a synchronized projector-camera system. We show the transport profiles of various materials, including uniform and heterogeneous subsurface scattering.
DOI: 10.1145/3281505.3283377
Hybrid orbiting-to-photos in 3D reconstructed visual reality
B. Nuernberger, Tobias Höllerer, M. Turk
Virtually navigating through photos from a 3D image-based reconstruction has recently become very popular in many applications. In this paper, we consider a particular virtual travel maneuver that is important for this type of virtual navigation: orbiting to photos that can see a point-of-interest (POI). The main challenge with this particular type of orbiting is how to give appropriate feedback to the user regarding the existence and information of each photo in 3D while allowing the user to manipulate three degrees-of-freedom (DoF) for orbiting around the POI. We present a hybrid approach that combines features from two baselines: the proxy plane and thumbnail approaches. Experimental results indicate that users rated our hybrid approach more favorably for several qualitative questionnaire statements, and that the hybrid approach is preferred over both baselines for outdoor scenes.
DOI: 10.1145/3281505.3281528
Walking into ancient paintings with virtual candles
Wei Ma, Qiuyuan Wang, Danqing Shi, Shuo Liu, Congxin Cheng, Q. Shi, Qiong Li, Tong Li, Ying-Qing Xu
Taking a famous Chinese painting as a case study, this paper presents a virtual exhibition platform. Through the platform, users can walk into the scenes of the painting with virtual candles in hand, experience scenes brought to life by attached actor performances, and see every detail of the artwork. The scenes change their light, shading, and shadows in real time with the candles, just as real scenes would. To support real-time candle-moving and light-changing interaction, we render the lighting effects at densely sampled user positions offline and extract the light, shading, and shadows as masks; during online processing, the system merges the artwork with the masks chosen according to the candle positions. The system, novel in both design and technique, has been partially deployed in the Palace Museum (Beijing).
DOI: 10.1145/3281505.3283378
Can we perceive changes in our moving speed: a comparison between directly and indirectly powering the locomotion in virtual environments
T. Q. Tran, Thanh Dat Ngoc Tran, Duy-Tam Nguyen, H. Regenbrecht, Minh-Triet Tran
Many categories of the illusion of self-motion have been widely studied, often with the support of virtual reality. However, the effect of directly versus indirectly powering one's movement on the ability to perceive changes in moving speed, and how that ability relates to sensory feedback, have not been investigated before. In this paper, we present the results of a user study on the difference in perceiving changes in moving speed between two movement techniques: "pedaling" and "throttling". We also explore the effects of different velocity gains, accelerations, and airflow speeds, and their interactions with the movement techniques, on users' perception of speed changes, in addition to user performance and perception. We built a bike simulator that supports both movement techniques and provides sensory feedback. In general, "pedaling" made users more likely to perceive changes in moving velocity than "throttling" did.
DOI: 10.1145/3281505.3281510
System of delivering virtual object to user in remote place by handing gestures
M. Irie, T. Shibata
There are many ways to communicate with a person in a remote place, such as sending text, making a phone call, or video chatting. With virtual reality systems, contact with a distant person becomes communication through an avatar, which can feel like a barrier to reality. We therefore built a system that delivers virtual objects to a user in a remote place through natural handing gestures, as if physically handing the objects over. Views of the remote and local spaces are projected on a wall using video chat, and each virtual object is handed over using an Augmented Reality (AR) marker. The system promotes communication by making users feel that the two spaces are connected despite the distance.
DOI: 10.1145/3281505.3281594