For several decades, researchers have studied human-avatar interaction using virtual reality (VR). However, the relationship between a human's recognition memory and the type or method of interaction has not been sufficiently considered. In the current study, we designed a VR interaction paradigm with two types of human-avatar interaction, initiating and responding, and two interaction methods, head-gazing and hand-pointing. The results indicated significant differences in recognition memory between the initiating and responding interactions. However, we found no significant effect of interaction method in the current study. These results suggest that human-avatar interaction may follow patterns similar to human-human interaction with respect to recognition memory, and that methodological advances are also required.
{"title":"Human-avatar interaction and recognition memory according to interaction types and methods","authors":"Mingyu Kim, Woncheol Jang, K. Kim","doi":"10.1109/VR.2015.7223369","DOIUrl":"https://doi.org/10.1109/VR.2015.7223369","url":null,"abstract":"For the several decades, researchers studied human-avatar interactions using a virtual reality (VR). However, speculation on the interaction between a human's recognition memory and interaction types/methods has not enough considered yet. In the current study, we designed a VR interaction paradigm with two different types of human-avatar interaction including initiating and responding interactions, and we also included two interaction methods including a head-gazing and a hand-pointing. The result indicated that there are significant differences in the recognition memory between the initiating and responding interactions. However, we couldn't find any significant effects of interaction methods in the current study. These results suggest that the human-avatar interaction may have similar patterns with the human-human interaction in the recognition memory, and methodological advances also required.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1247 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131732986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Papadopoulos, Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Kaloian Petkov, A. Kaufman, B. Laha
We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution, tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits on substantially large, room-shaped displays. Using the Reality Deck, a gigapixel resolution immersive display, as its apparatus, our study spans four display form-factors, starting at 100 megapixels arranged planarly and up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form-factor changes to 600 megapixels. At the 600 megapixel to 1 gigapixel transition, we observe no tangible performance improvements and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as the display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed “overview” and “detail”) through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.
{"title":"Scalability limits of large immersive high-resolution displays","authors":"C. Papadopoulos, Seyedkoosha Mirhosseini, Ievgeniia Gutenko, Kaloian Petkov, A. Kaufman, B. Laha","doi":"10.1109/VR.2015.7223318","DOIUrl":"https://doi.org/10.1109/VR.2015.7223318","url":null,"abstract":"We present the results of a variable information space experiment, targeted at exploring the scalability limits of immersive high-resolution, tiled-display walls under physical navigation. Our work is motivated by a lack of evidence supporting the extension of previously established benefits on substantially large, room-shaped displays. Using the Reality Deck, a gigapixel resolution immersive display, as its apparatus, our study spans four display form-factors, starting at 100 megapixels arranged planarly and up to one gigapixel in a horizontally immersive setting. We focus on four core tasks: visual search, attribute search, comparisons and pattern finding. We present a quantitative analysis of per-task user performance across the various display conditions. Our results demonstrate improvements in user performance as the display form-factor changes to 600 megapixels. At the 600 megapixel to 1 gigapixel transition, we observe no tangible performance improvements and the visual search task regressed substantially. Additionally, our analysis of subjective mental effort questionnaire responses indicates that subjective user effort grows as the display size increases, validating previous studies on smaller displays. Our analysis of the participants' physical navigation during the study sessions shows an increase in user movement as the display grew. Finally, by visualizing the participants' movement within the display apparatus space, we discover two main approaches (termed “overview” and “detail”) through which users chose to tackle the various data exploration tasks. The results of our study can inform the design of immersive high-resolution display systems and provide insight into how users navigate within these room-sized visualization spaces.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125279083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
EVE (Exercise in Virtual Environments) is an operational VR system designed for space, polar, and submarine crews. The system allows crewmembers, living and working in artificial habitats, to explore immersive natural landscapes during their daily physical exercise and to experience presence in a variety of alternate environments. Using recent hardware and software, this innovative psychological countermeasure aims at reducing the adverse effects of confinement and monotony in long-duration missions, while maintaining motivation for physical exercise. Initial testing with a proof-of-concept prototype was conducted near the South magnetic pole, as well as in transient microgravity.
{"title":"EVE: Exercise in Virtual Environments","authors":"A. Solignac, Sebastien Kuntz","doi":"10.1109/VR.2015.7223408","DOIUrl":"https://doi.org/10.1109/VR.2015.7223408","url":null,"abstract":"EVE (Exercise in Virtual Environments) is an operational VR system designed for space, polar and submarine crews. This system allows crewmembers - living and working in artificial habitats- to explore immersive natural landscapes during their daily physical exercise, and experience presence in a variety of alternate environments. Using recent hardware and software, this innovative psychological counter-measure aims at reducing the adverse effects of confinement and monotony in long duration missions, while maintaining motivation for physical exercise. Initial testing with a proof-of-concept prototype was conducted near the South magnetic pole, as well as in transient microgravity.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133842754","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A novel integral photography (IP) system in which the amount of popping out is more than three times larger than usual is demonstrated in this study. If an autostereoscopic display is to be introduced into virtual reality, IP is an ideal candidate because not only horizontal but also vertical parallax can be obtained. However, the amount of popping out obtained by IP is generally far less than that obtained by a head-mounted display, because the ray density decreases when the viewer is distant from the fly's eye lens. Although one solution is to extend the focal length of the fly's eye lens, such a lens is difficult to manufacture. We address this problem by simply immersing the fly's eye lens in water to extend its effective focal length.
{"title":"Underwater integral photography","authors":"Nahomi Maki, K. Yanaka","doi":"10.1109/VR.2015.7223436","DOIUrl":"https://doi.org/10.1109/VR.2015.7223436","url":null,"abstract":"A novel integral photography (IP) system in which the amount of popping out is more than three times larger than usual is demonstrated in this study. If autostereoscopic display is introduced into virtual reality, IP is an ideal candidate because not only the horizontal but also the vertical parallax can be obtained. However, the amount of popping out obtained by IP is generally far less than that obtained by head-mounted display because the ray density decreases when the viewer is distant from the fly's eye lens. Although a solution is to extend the focal length of the fly's eye lens, this lens is difficult to manufacture. We address this problem by simply immersing the fly's eye lens into water to extend the effective focal length.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115649525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
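The roughly threefold gain reported above follows directly from the lensmaker's equation: immersing a lens in a medium with a refractive index closer to that of the lens stretches its focal length. A minimal sketch, assuming an acrylic fly's eye lens (n ≈ 1.49) and water (n ≈ 1.33); the specific indices are illustrative assumptions, not values from the paper:

```python
# Effective focal length of a thin lens immersed in a medium, via the
# lensmaker's equation: 1/f = (n_lens/n_medium - 1) * K, where
# K = (1/R1 - 1/R2) depends only on the lens geometry and cancels in the ratio.

def focal_length_ratio(n_lens: float, n_medium: float) -> float:
    """Ratio f_medium / f_air for the same lens geometry."""
    # In air (n = 1):  1/f_air    = (n_lens - 1) * K
    # In medium:       1/f_medium = (n_lens/n_medium - 1) * K
    return (n_lens - 1.0) / (n_lens / n_medium - 1.0)

if __name__ == "__main__":
    n_acrylic = 1.49  # assumed index of an acrylic fly's eye lens
    n_water = 1.33
    print(f"focal length grows by {focal_length_ratio(n_acrylic, n_water):.2f}x")
```

With these assumed indices the ratio comes out around 4x, consistent with the "more than three times larger" pop-out the abstract claims.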
I. Kumazawa, Kyohei Sugiyama, Tsukasa Hayashi, Yasuhiro Takatori, Shunsuke Ono
The front face of a tablet-style smartphone or computer is dominated by a touch screen. Because a finger operating the touch screen obscures its visibility, fingers are assumed to touch the screen only momentarily. Under this restriction, using the rear surface of the tablet as a tactile display is promising, since the fingers constantly touch the back face and can feel tactile information there. In our presentation, various tactile feedback mechanisms implemented on the back face are demonstrated, and the latency of the feedback and its effect on usability are evaluated for different communication means used to control the actuators, such as wireless LAN, Bluetooth, and audio signals. We show that the audio signal is promising for generating quick tactile feedback.
{"title":"Various forms of tactile feedback displayed on the back of the tablet: Latency minimized by using audio signal to control actuators","authors":"I. Kumazawa, Kyohei Sugiyama, Tsukasa Hayashi, Yasuhiro Takatori, Shunsuke Ono","doi":"10.1109/VR.2015.7223432","DOIUrl":"https://doi.org/10.1109/VR.2015.7223432","url":null,"abstract":"The front face of the tablet style smartphone or computer is dominated by a touch screen. As a finger operation on the touch screen disturbs its visibility, it is assumed a finger touches the screen instantly. Under such restriction, use of the rear surface of the tablet for tactile display is promising as the fingers constantly touch the back face and feel the tactile information. In our presentation, various tactile feedback mechanisms implemented on the back face are demonstrated and the latency of the feedback and its effect on the usability are evaluated for different communication means to control actuators such as wireless LAN, Bluetooth and audio signals. It is shown that the audio signal is promising to generate quick tactile feedback.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114086470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
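The audio-signal approach above works because an actuator driven from the audio output inherits the audio path's small, fixed buffering delay instead of a wireless round trip. A minimal sketch, with illustrative parameters (the 200 Hz burst and 128-frame buffer are assumptions, not the authors' values):

```python
import math

SAMPLE_RATE = 44100  # Hz, a standard audio output rate

def vibration_burst(freq_hz=200.0, duration_s=0.02, amplitude=0.8):
    """Synthesize a short sine burst that an audio channel could feed
    directly into a vibration actuator (sample values in [-1, 1])."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE)
            for i in range(n)]

def buffer_latency_ms(frames_per_buffer=128):
    """Worst-case queueing delay contributed by one audio output buffer."""
    return 1000.0 * frames_per_buffer / SAMPLE_RATE

if __name__ == "__main__":
    burst = vibration_burst()
    print(f"{len(burst)} samples, ~{buffer_latency_ms():.1f} ms buffer delay")
```

A 128-frame buffer at 44.1 kHz adds only about 3 ms, which is well below typical Bluetooth link latencies and helps explain why the audio channel gives the quickest feedback.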
Dynamic Projection Mapping, projection-based AR that keeps a moving object aligned by means of a high-speed optical axis controller built from rotational mirrors, faces a trade-off between the stability of high-speed tracking and high visibility for a variety of projection content. In this paper, we realize a system that provides robust, markerless high-speed tracking against illumination changes, including the projected images themselves, by introducing a retroreflective background combined with the optical axis controller for Dynamic Projection Mapping. Low-intensity episcopic light is projected together with the Projection Mapping content; the light reflected from the background is sufficient for high-speed cameras but nearly invisible to observers. In addition, we introduce adaptive windows and peripheral weighted erosion to maintain accurate high-speed tracking. Under low-light conditions, we examined the visual performance of diffuse reflection and retroreflection from both the camera and observer viewpoints, and we evaluated tracking stability with respect to illumination and disturbance from non-target objects. Our proposed system thus realizes Dynamic Projection Mapping with partially well-lit content in a low-intensity light environment.
{"title":"Robust high-speed tracking against illumination changes for dynamic projection mapping","authors":"Tomohiro Sueishi, H. Oku, M. Ishikawa","doi":"10.1109/VR.2015.7223330","DOIUrl":"https://doi.org/10.1109/VR.2015.7223330","url":null,"abstract":"Dynamic Projection Mapping, projection-based AR for a moving object without misalignment by a high-speed optical axis controller by rotational mirrors, has a trade-off between stability of highspeed tracking and high visibility for a variety of projection content. In this paper, a system that will provide robust high-speed tracking without any markers on objects against illumination changes, including projected images, is realized by introducing a retroreflective background with the optical axis controller for Dynamic Projection Mapping. Low-intensity episcopic light is projected with Projection Mapping content, and the light reflected from the background is sufficient for high-speed cameras but is nearly invisible to observers. In addition, we introduce adaptive windows and peripheral weighted erosion to maintain accurate high-speed tracking. Under low light conditions, we examined the visual performance of diffuse reflection and retroreflection from both camera and observer viewpoints. We evaluated stability relative to illumination and disturbance caused by non-target objects. Dynamic Projection Mapping with partially well-lit content in a low-intensity light environment is realized by our proposed system.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117085520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
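The adaptive-window tracking idea above can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): threshold the bright retroreflected pixels inside the current search window, take their centroid, and re-center and resize the window around the target for the next frame. The threshold and sizing heuristic here are assumptions:

```python
import numpy as np

def track_in_window(frame, window, thresh=200):
    """One tracking step on a 2-D uint8 image. `window` is
    (top, left, height, width); returns (centroid, next_window)."""
    t, l, h, w = window
    roi = frame[t:t + h, l:l + w]
    ys, xs = np.nonzero(roi >= thresh)          # bright retroreflected pixels
    if len(xs) == 0:
        return None, window                     # target lost: keep window
    cy, cx = t + ys.mean(), l + xs.mean()       # centroid in full-frame coords
    # Adapt the window: re-center on the target, ~3x its current extent.
    extent = max(ys.max() - ys.min() + 1, xs.max() - xs.min() + 1)
    half = max(8, int(1.5 * extent))
    top, left = max(0, int(cy) - half), max(0, int(cx) - half)
    return (cy, cx), (top, left, 2 * half, 2 * half)

if __name__ == "__main__":
    frame = np.zeros((240, 320), dtype=np.uint8)
    frame[100:110, 150:160] = 255               # synthetic bright target
    centroid, window = track_in_window(frame, (80, 120, 80, 80))
    print(centroid, window)
```

Restricting each step to a small window is what keeps per-frame cost low enough for the high-speed cameras the paper relies on; the retroreflective background makes the simple fixed threshold robust to the projected content.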
H. Regenbrecht, Mansoor Alghamdi, S. Hoermann, T. Langlotz, Mike Goodwin, Colin Aldridge
Collaborative Virtual Environments (CVEs) with co-located or remote video communication functionality require a continuous experience of social presence. If, at any stage during the experience, the communication interrupts presence, then the CVE experience as a whole is affected: spatial presence becomes decoupled from social presence. We present a solution to this problem by introducing the concept of a virtualized version of Google Glass™, called Virtual Glass. Virtual Glass is integrated into the CVE as a real-world metaphor for a communication device, one particularly suited to collaborative instructor-performer systems. In a study with 65 participants, we demonstrated that the concept of Virtual Glass is effective, that it supports a high level of social presence, and that this social presence is rated higher than a standard picture-in-picture videoconferencing approach for certain tasks.
{"title":"Social presence with virtual glass","authors":"H. Regenbrecht, Mansoor Alghamdi, S. Hoermann, T. Langlotz, Mike Goodwin, Colin Aldridge","doi":"10.1109/VR.2015.7223399","DOIUrl":"https://doi.org/10.1109/VR.2015.7223399","url":null,"abstract":"Collaborative Virtual Environments (CVE) with co-located or remote video communication functionality require a continuous experience of social presence. If, at any stage during the experience the communication interrupts presence, then the CVE experience as a whole is affected - spatial presence is then decoupled from social presence. We present a solution to this problem by introducing the concept of a virtualized version of Google Glass™ called Virtual Glass. Virtual Glass is integrated into the CVE as a real-world metaphor for a communication device, one particularly suited for collaborative instructor-performer systems. In a study with 65 participants we demonstrated that the concept of Virtual Glass is effective, that it supports a high level of social presence and that the social presence is rated higher than a standard picture-in-picture videoconferencing approach for certain tasks.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121607540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technology has made some of the once unimaginable a reality. The invention of the computer has inspired some of the most useful inventions that we heavily rely on today. Computers were first tasked with solving complex calculations and usually offered only a simple command-line interface. Over time, people started to see the potential that computers could have in our personal lives, which led to the more user-friendly graphical user interface (GUI). The GUI was one of many interfaces that came from this growth in technology and the use of computers for personal reasons. Access to personal computers brought forth many possibilities and applications. Today, staying connected is one of the most influential and heavily used applications of the personal computer. People rely on social networking sites such as Twitter and Facebook to keep ties with one another, whether far away or close by. Research has shown that this constant cyber connectivity has some negative side effects on our social lives and interactions. A more recent invention has gained popularity as the possibilities for social interaction within it are being explored: virtual reality (VR). I am interested in researching how VR could affect people if used similarly to the social networking platforms we already use, and how it could influence society and possibly change our social interactions. We live so much in a cyber world that we have even invented an acronym, IRL, to distinguish between online/cyber life and real life. Throughout this paper I will refer to real life using this acronym, IRL.
{"title":"Can living in virtual environments alter reality?","authors":"Melanie Buset","doi":"10.1109/VR.2015.7223467","DOIUrl":"https://doi.org/10.1109/VR.2015.7223467","url":null,"abstract":"Technology has made some of the once unimaginable a reality. The invention of the computer has inspired some of the most useful inventions that we heavily rely on today. Computers were first given tasks to solve complex calculations and usually consisted of a simple command line interface. Overtime people started to see the potential that computers can have on our personal lives which lead to the more user friendly, graphical user interface (GUI). The GUI was one of many interfaces that came from this growth in technology and the use of computers for personal reasons. Having access to personal computers brought forth many possibilities and applications. Today, staying connected is one of the most influential and heavily used applications of the personal computer. People rely on social networking sites such as Twitter and Facebook to keep ties with one another when far away or even close by. Research has shown that there are some negative side effects of this constant cyber connectivity in regards to our social lives and interactions. A more recent invention has received some popularity as the possibility for social interaction within it is currently being explored. This invention is known as virtual reality (VR). I am interested in researching how VR could affect people if used similar to social networking platforms we already use and how it could influence society and possibly change our social interactions. We live so much in a cyber world that we have even invented an acronym (IRL) to decipher between online/cyber life and real life. Throughout this paper I will refer to real life by using the newly invented acronym, IRL.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121872457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We are developing a novel device for measuring hand power grip using frustrated total internal reflection of light in acrylic. Our method uses a force-sensitive resistor to calibrate the force of a power grip as a function of contact area and light intensity. This research is a work in progress, but results so far augur well for its applicability in medical and other application areas. The grip measurement device allows the patient and doctor to see the change in grip over time, and projects this information directly onto the back of the patient's hand.
{"title":"Towards a high resolution grip measurement device for orthopaedics","authors":"Marc R. Edwards, P. Vangorp, N. John","doi":"10.1109/VR.2015.7223427","DOIUrl":"https://doi.org/10.1109/VR.2015.7223427","url":null,"abstract":"We are developing a novel device for measuring hand power grip using frustrated total internal reflection of light in acrylic. Our method uses a force sensitive resistor to calibrate the force of a power grip as a function of the area and light intensity. This research is work in progress but results so far augur well for its applicability in medical and other application areas. The grip measurement device allows the patient and doctor to see the change in grip over time and projects this information directly onto the back of the patient's hand.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"43 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126329466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
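The calibration step described above could be sketched as a least-squares fit that maps the optically measured quantities (contact area and escaped light intensity) to the force reported by the force-sensitive resistor. The linear model, the units, and the synthetic data below are illustrative assumptions, not the authors' actual calibration:

```python
import numpy as np

def fit_grip_model(areas, intensities, fsr_forces):
    """Least-squares fit of force ~ b0 + b1*area + b2*intensity,
    using force-sensitive-resistor readings as ground truth."""
    X = np.column_stack([np.ones(len(areas)), areas, intensities])
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(fsr_forces), rcond=None)
    return coeffs

def predict_force(coeffs, area, intensity):
    """Estimate grip force from the optical measurements alone."""
    return coeffs[0] + coeffs[1] * area + coeffs[2] * intensity

if __name__ == "__main__":
    # Synthetic calibration data: force grows with contact area and with
    # the intensity of frustrated-TIR light escaping the acrylic.
    rng = np.random.default_rng(0)
    areas = rng.uniform(2.0, 12.0, 50)      # cm^2 of finger contact (assumed)
    intens = rng.uniform(0.1, 1.0, 50)      # normalized brightness (assumed)
    forces = 3.0 * areas + 20.0 * intens + rng.normal(0, 0.1, 50)
    c = fit_grip_model(areas, intens, forces)
    print(predict_force(c, 8.0, 0.5))
```

Once fitted, the model lets the device report force continuously from the camera image alone, which is what makes the per-region "high resolution" grip map possible without covering the hand in resistive sensors.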
Zhong Zhou, Tao Yu, Xiaofeng Qiu, Ruigang Yang, Qinping Zhao
We propose a novel approach to generating a 4D light field in the physical world for lighting reproduction. The light field is generated by projecting lighting images onto a lens array, which turns the projected images into a controlled, anisotropic point-light-source array that can simulate the light field of a real scene. For acquisition, we capture an array of light probe images from a real scene, from which an incident light field is generated. The lens array and the projectors are geometrically and photometrically calibrated, and an efficient resampling algorithm is developed to turn the incident light field into the images projected onto the lens array. The reproduced illumination, which allows per-ray lighting control, can produce realistic lighting results on real objects, avoiding the complex process of geometric and material modeling. We demonstrate the effectiveness of our approach with a prototype setup.
{"title":"Light field projection for lighting reproduction","authors":"Zhong Zhou, Tao Yu, Xiaofeng Qiu, Ruigang Yang, Qinping Zhao","doi":"10.1109/VR.2015.7223335","DOIUrl":"https://doi.org/10.1109/VR.2015.7223335","url":null,"abstract":"We propose a novel approach to generate 4D light field in the physical world for lighting reproduction. The light field is generated by projecting lighting images on a lens array. The lens array turns the projected images into a controlled anisotropic point light source array which can simulate the light field of a real scene. In terms of acquisition, we capture an array of light probe images from a real scene, based on which an incident light field is generated. The lens array and the projectors are geometric and photometrically calibrated, and an efficient resampling algorithm is developed to turn the incident light field into the images projected onto the lens array. The reproduced illumination, which allows per-ray lighting control, can produce realistic lighting result on real objects, avoiding the complex process of geometric and material modeling. We demonstrate the effectiveness of our approach with a prototype setup.","PeriodicalId":231501,"journal":{"name":"2015 IEEE Virtual Reality (VR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129296657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
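The resampling step above can be pictured with a toy sketch: each lenslet emits one ray per projector pixel behind it, so each angular sample of the incident light field is written to the pixel whose offset from the lenslet axis produces that exit direction. The direction-to-pixel mapping below, including the inversion through the lenslet, is an illustrative assumption and not the paper's calibrated algorithm:

```python
import numpy as np

def resample_to_projection(light_field, px_per_lens):
    """Toy resampling: light_field[ly, lx, vy, vx] holds the radiance
    arriving at lenslet (ly, lx) from direction bin (vy, vx). Returns the
    projector image, laid out as one px_per_lens x px_per_lens block of
    pixels behind each lenslet."""
    n_ly, n_lx, n_vy, n_vx = light_field.shape
    image = np.zeros((n_ly * px_per_lens, n_lx * px_per_lens))
    for ly in range(n_ly):
        for lx in range(n_lx):
            for py in range(px_per_lens):
                for px in range(px_per_lens):
                    # The pixel's offset from the lenslet axis fixes the exit
                    # direction (tan(theta) = offset / focal length); mirror
                    # the index because the lenslet inverts directions.
                    vy = n_vy - 1 - int(py * n_vy / px_per_lens)
                    vx = n_vx - 1 - int(px * n_vx / px_per_lens)
                    image[ly * px_per_lens + py, lx * px_per_lens + px] = \
                        light_field[ly, lx, vy, vx]
    return image
```

In the actual system this lookup would be driven by the geometric and photometric calibration the abstract mentions, rather than the idealized on-axis geometry assumed here.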