"Perceiving mass in mixed reality through pseudo-haptic rendering of Newton's third law" — Paul Issartel, F. Guéniat, S. Coquillart, M. Ammi. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223322

In mixed reality, real objects can be used to interact with virtual objects. However, unlike in the real world, real objects encounter no opposing reaction force when pushing against virtual objects. The lack of reaction force during manipulation prevents users from perceiving the mass of virtual objects. Although this could be addressed by equipping real objects with force-feedback devices, such a solution remains complex and impractical. In this work, we present a technique to produce an illusion of mass without any active force-feedback mechanism. This is achieved by simulating the effects of this reaction force in a purely visual way. A first study demonstrates that our technique indeed allows users to differentiate light virtual objects from heavy ones. It also shows that the illusion is immediately effective, with no prior training. In a second study, we measure the just-noticeable difference (JND), i.e. the smallest mass difference that can be perceived with this technique. The effectiveness and ease of implementation of our solution provide an opportunity to enhance mixed reality interaction at no additional cost.
"A GPU-based adaptive algorithm for non-rigid surface registration" — A. Souza, Márcio C. F. Macedo, A. Apolinario. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223409

Non-rigid surface registration is fundamental when accurate tracking or reconstruction of deformable 3D shapes is required. However, most non-rigid registration methods are not as fast as those developed for rigid registration. Fast non-rigid surface registration is particularly interesting for markerless augmented reality applications, in which the object used as a marker can support non-rigid user interaction. In this paper, we present an adaptive algorithm for non-rigid surface registration. Taking advantage of this adaptivity and the parallelism of the GPU, we show that the proposed algorithm achieves near real-time performance while remaining as accurate as approaches proposed in the literature.
"Connected touch tables for remote collaboration" — Jean-Baptiste de la Rivière, Julien Castet. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223461

In most of today's projects, the digital data, knowledge and expertise required are so extensive that no single person can reach the best decisions alone. Many critical processes now require several people to share project-specific information as well as their own views in order to collectively converge toward the right conclusion. Touch tables are promising collaborative work interfaces, but they require the development of specific functionalities suited to their horizontal format. An especially important element is remote collaboration, which should ensure that distant teams can work together as efficiently as if they were working side by side. To bring such interfaces out of the lab and into real-world industrial processes, we have developed several solutions that ease the integration of these tables into existing data and systems.
"Transparent cockpit using telexistence" — Takura Yanagi, C. Fernando, M. Y. Saraiji, K. Minamizawa, S. Tachi, N. Kishi. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223420

We propose an indirect-vision, video-see-through augmented reality (AR) cockpit that uses telexistence technology to provide an AR-enriched, virtually transparent view of the surroundings through monitors instead of windows. Such a virtual view has the potential to improve driving performance and experience beyond both conventional glass cockpits and cockpits equipped with head-up displays, by combining AR overlays with images from future image sensors that are superior to human eyes. As a proof of concept, we replaced the front windshield of an experimental car with a large stereoscopic monitor. A robotic stereo camera pair that mimics the driver's head motions provides stereoscopic images with seamless motion parallax to the monitor. Initial driving tests at moderate speeds on roads within our research facility confirmed the illusion of transparency. We will conduct human factors evaluations after implementing the AR functions, to determine whether an overall benefit over conventional cockpits can be achieved in spite of possible conceptual issues such as latency, the shifted viewpoint, and the short distance between driver and display.
"What can we feel on the back of the tablet? — A thin mechanism to display two-dimensional motion on the back and its characteristics" — I. Kumazawa, M. Takao, Y. Sasaki, Shunsuke Ono. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223371

The front surface of a tablet-style smartphone or computer is dominated by a touch screen. Because a finger operating the touch screen obstructs its visibility, fingers are expected to touch the screen only briefly. Under this restriction, using the rear surface of the tablet as a tactile display is promising, since the fingers holding the tablet touch it constantly and can feel feedback steadily. In this paper, we present a slim tactile feedback mechanism that can easily be installed on the back of existing tablets, and we evaluate its mechanical performance in terms of power consumption, latency and force. We also evaluate how well humans can perceive the tactile information presented on the display.
"Binocular interface: Interaction techniques considering binocular parallax for a large display" — Keigo Yoshimura, T. Ogawa. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223422

There have been many studies on intuitive user interfaces for large displays based on pointing movements. However, when a user cannot physically reach the display, manipulating objects on it is difficult because the user sees duplicate fingers due to binocular parallax. We propose the Binocular Interface, which enables interaction with an object through these two pseudo-fingers. In a prototype, pointing positions on the display are estimated from the positions of the eyes and the finger, detected by an RGB-D camera. We implemented three basic operations (select, move, and resize) using the duplicate fingers and evaluated each operation.
"Using interactive virtual characters in social neuroscience" — Joanna Hale, Xueni Pan, A. Hamilton. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223359

In recent years, the use of virtual characters in experimental studies has opened new research avenues in social neuroscience. In this paper, we present the design, implementation, and preliminary results of two case studies exploring different types of social cognition using interactive virtual characters animated with motion-captured data.
"Collaborative telepresence workspaces for space operation and science" — D. Roberts, Arturo S. García, J. Dodiya, R. Wolff, Allen J. Fairchild, T. Fernando. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223402

We introduce the collaborative telepresence workspaces for space operation and science under development in the European research project CROSS DRIVE. The vision is to give space mission controllers and scientists the impression of "beaming" to the surface of Mars, along with simulations of the environment and equipment, so that they can step out together wherever a robot has moved or may move. We briefly present the design and describe the current state of the demonstrator. The contribution of this publication is to give an example of how collaborative virtual reality research is being taken up in space science.
"Location, location, location. An exercise in cost and waste reduction using Augmented Reality in composite layup manufacturing" — C. Freeman, Rab Scott, R. Krain. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223458

An Augmented Reality (AR) composite layup tool was created using low-cost, off-the-shelf components and software to prove and demonstrate the application of AR in a manufacturing environment. The project tested different tracking technologies to ascertain their practicality in an industrial environment. By developing an understanding of the challenges involved in implementing such an application, the project further demonstrates the potential for cost and waste reduction. The experimental setup comes at a lower price point than existing all-in-one solutions, thus increasing access to the technology. The research leading to these results has received funding from the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement n° [283336] - REFORM.
"I'm There! The influence of virtual reality and mixed reality environments combined with two different navigation methods on presence" — Mario Lorenz, Marc Busch, Loukas Rentzos, M. Tscheligi, Philipp Klimant, Peter Fröhlich. 2015 IEEE Virtual Reality (VR). doi:10.1109/VR.2015.7223376

For various VR/MR/AR applications, such as virtual usability studies, it is very important that participants feel they are really in the environment. This feeling of "being" in a mediated environment is described as presence. Two important factors that influence presence are the level of immersion and the navigation method. We developed two navigation methods that simulate natural walking using a Wii Balance Board and a Kinect sensor. In this preliminary study we examined the effects of these navigation methods and of the level of immersion on participants' perceived presence, in a 2×2 factorial between-subjects study with 32 participants in two different VEs (a Powerwall and mixed-reality see-through glasses). The results indicate that, for some facets of presence, reported presence is higher with the Kinect navigation and with the Powerwall.