Nowadays, handheld devices are capable of displaying augmented environments in which virtual content is overlaid on reality. To interact with these environments, a manipulation technique is needed; its objective is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input, creating both an opportunity and a design challenge. In this paper, we compare three manipulation techniques that rely on multi-touch input, on device movement, and on a combination of both, respectively. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only device movement and orientation is more intuitive and performs worse only for large rotations.
{"title":"Combining multi-touch input and device movement for 3D manipulations in mobile augmented reality environments","authors":"A. Pérez, Benoît Bossavit, M. Hachet","doi":"10.1145/2659766.2659775","DOIUrl":"https://doi.org/10.1145/2659766.2659775","url":null,"abstract":"Nowadays, handheld devices are capable of displaying augmented environments in which virtual content overlaps reality. To interact with these environments it is necessary to use a manipulation technique. The objective of a manipulation technique is to define how the input data modify the properties of the virtual objects. Current devices have multi-touch screens that can serve as input. Additionally, the position and rotation of the device can also be used as input creating both an opportunity and a design challenge. In this paper we compared three manipulation techniques which namely employ multi-touch, device position and a combination of both. A user evaluation on a docking task revealed that combining multi-touch and device movement yields the best task completion time and efficiency. Nevertheless, using only the device movement and orientation is more intuitive and performs worse only in large rotations.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128189013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Spatial pointing and touching","authors":"M. Hachet","doi":"10.1145/3247435","DOIUrl":"https://doi.org/10.1145/3247435","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133596984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sometime in the coming years -- whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants or some technology as yet unknown -- we will live in an eccescopic world, where everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are just now starting to enter the Age of Computer Graphics. As children are born into this brave new world, what will their experience be? Face-to-face communication, both in person and over great distances, will become visually enhanced, and any tangible object can become an interface to digital information [1]. Hand gestures will be able to produce visual artifacts. After these things come to pass, how will future generations of children evolve natural language itself [2]? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native-born to it? We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now -- without needing to wait for those smart contact lenses.
{"title":"The coming age of computer graphics and the evolution of language","authors":"K. Perlin","doi":"10.1145/2659766.2661116","DOIUrl":"https://doi.org/10.1145/2659766.2661116","url":null,"abstract":"Sometime in the coming years -- whether through ubiquitous projection, AR glasses, smart contact lenses, retinal implants or some technology as yet unknown -- we will live in an eccescopic world, where everything we see around us will be augmented by computer graphics, including our own appearance. In a sense, we are just now starting to enter the Age of Computer Graphics. As children are born into this brave new world, what will their experience be? Face to face communication, both in-person and over great distances, will become visually enhanced, and any tangible object can become an interface to digital information [1]. Hand gestures will be able to produce visual artifacts. After these things come to pass, how will future generations of children evolve natural language itself [2]? How might they think and speak differently about the world around them? What will life in such a world be like for those who are native born to it? We will present some possibilities, and some suggestions for empirical ways to explore those possibilities now -- without needing to wait for those smart contact lenses","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133686650","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ryan P. Spicer, Rhys Yahata, Evan A. Suma, M. Bolas
We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands.
{"title":"A raycast approach to hybrid touch / motion capturevirtual reality user experience","authors":"Ryan P. Spicer, Rhys Yahata, Evan A. Suma, M. Bolas","doi":"10.1145/2659766.2661226","DOIUrl":"https://doi.org/10.1145/2659766.2661226","url":null,"abstract":"We present a novel approach to integrating a touch screen device into the experience of a user wearing a Head Mounted Display (HMD) in an immersive virtual reality (VR) environment with tracked head and hands.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"192 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115432126","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Spatial gestures","authors":"H. Ishii","doi":"10.1145/3247432","DOIUrl":"https://doi.org/10.1145/3247432","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122839309","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jarkko Polvi, Takafumi Taketomi, Goshiro Yamamoto, M. Billinghurst, C. Sandor, H. Kato
In this poster we present the design and evaluation of a Handheld Augmented Reality (HAR) prototype system for guidance.
{"title":"Evaluating a SLAM-based handheld augmented reality guidance system","authors":"Jarkko Polvi, Takafumi Taketomi, Goshiro Yamamoto, M. Billinghurst, C. Sandor, H. Kato","doi":"10.1145/2659766.2661212","DOIUrl":"https://doi.org/10.1145/2659766.2661212","url":null,"abstract":"In this poster we present the design and evaluation of a Handheld Augmented Reality (HAR) prototype system for guidance.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123830610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present VideoHandles, a novel interaction technique to support rapid review of wearable video camera data by re-performing gestures as a search query. The availability of wearable video capture devices has led to a significant increase in activity logging across a range of domains. However, searching through and reviewing footage for data curation can be a laborious and painstaking process. In this paper we showcase the use of gestures as search queries to support review and navigation of video data. By exploring example self-captured footage across a range of activities, we propose two video data navigation styles using gestures: prospective gesture tagging and retrospective gesture searching. We describe VideoHandles' interaction design and motivation, and report the results of a pilot study.
{"title":"VideoHandles: replicating gestures to search through action-camera video","authors":"Jarrod Knibbe, S. A. Seah, Mike Fraser","doi":"10.1145/2659766.2659784","DOIUrl":"https://doi.org/10.1145/2659766.2659784","url":null,"abstract":"We present VideoHandles, a novel interaction technique to support rapid review of wearable video camera data by re-performing gestures as a search query. The availability of wearable video capture devices has led to a significant increase in activity logging across a range of domains. However, searching through and reviewing footage for data curation can be a laborious and painstaking process. In this paper we showcase the use of gestures as search queries to support review and navigation of video data. By exploring example self-captured footage across a range of activities, we propose two video data navigation styles using gestures: prospective gesture tagging and retrospective gesture searching. We describe VideoHandles' interaction design, motivation and results of a pilot study.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127197008","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Keiko Yamamoto, I. Kanaya, M. Bordegoni, U. Cugini
Our goal is to allow creators to focus on their creative activity, developing their ideas for physical products in an intuitive way. We propose a new CAD system that allows users to draw virtual lines on the surface of a physical object using see-through AR, and also to import 3D data and produce the corresponding real object through 3D printing.
{"title":"Re:form: rapid designing system based on fusion and illusion of digital/physical models","authors":"Keiko Yamamoto, I. Kanaya, M. Bordegoni, U. Cugini","doi":"10.1145/2659766.2661205","DOIUrl":"https://doi.org/10.1145/2659766.2661205","url":null,"abstract":"Our goal is to allow the creators to focus on their creative activity, developing their ideas for physical products in an intuitive way. We propose a new CAD system allows users to draw virtual lines on the surface of the physical object using see-through AR, and also allows users to import 3D data and make its real object through 3D printing.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133873604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Hybrid interaction spaces","authors":"B. Fröhlich","doi":"10.1145/3247434","DOIUrl":"https://doi.org/10.1145/3247434","url":null,"abstract":"","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"48 10","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114039154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Orlosky, Qifan Wu, K. Kiyokawa, H. Takemura, Christian Nitschke
A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real-world augmented reality applications since peripheral vision is severely limited. Existing wide-field-of-view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand the user's peripheral vision inside a video see-through head-mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment are preserved. The peripheral areas of the display show content captured through the curvature of each of the two fisheye lenses using a modified compression algorithm, so that objects outside the display's inherent viewing angle become visible. We first test an initial prototype with 180° field-of-view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision with the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that the overall detection rate is 62.2% for the display versus 89.7% for the naked eye.
{"title":"Fisheye vision: peripheral spatial compression for improved field of view in head mounted displays","authors":"J. Orlosky, Qifan Wu, K. Kiyokawa, H. Takemura, Christian Nitschke","doi":"10.1145/2659766.2659771","DOIUrl":"https://doi.org/10.1145/2659766.2659771","url":null,"abstract":"A current problem with many video see-through displays is the lack of a wide field of view, which can make them dangerous to use in real world augmented reality applications since peripheral vision is severely limited. Existing wide field of view displays are often bulky, lack stereoscopy, or require complex setups. To solve this problem, we introduce a prototype that utilizes fisheye lenses to expand a user's peripheral vision inside a video see-through head mounted display. Our system provides an undistorted central field of view, so that natural stereoscopy and depth judgment can occur. The peripheral areas of the display show content through the curvature of each of two fisheye lenses using a modified compression algorithm so that objects outside of the inherent viewing angle of the display become visible. We first test an initial prototype with 180° field of view lenses, and then build an improved version with 238° lenses. We also describe solutions to several problems associated with aligning undistorted binocular vision and the compressed periphery, and finally compare our prototype to natural human vision in a series of visual acuity experiments. Results show that users can effectively see objects up to 180°, and that overall detection rate is 62.2% for the display versus 89.7% for the naked eye.","PeriodicalId":274675,"journal":{"name":"Proceedings of the 2nd ACM symposium on Spatial user interaction","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123644433","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}