Paul Lubos, Rüdiger Beimler, Markus Lammers, Frank Steinicke
{"title":"Touching the Cloud: Bimanual annotation of immersive point clouds","authors":"Paul Lubos, Rüdiger Beimler, Markus Lammers, Frank Steinicke","doi":"10.1109/3DUI.2014.6798885","DOIUrl":null,"url":null,"abstract":"In this paper we present “Touching the Cloud”, a bi-manual user interface for the interaction, selection and annotation of immersive point cloud data. With minimal instrumentation, the setup allows a user in an immersive head-mounted display (HMD) environment to naturally interact with point clouds. By tracking the user's hands using an OpenNI sensor and displaying them in the virtual environment (VE), the user can touch the virtual 3D point cloud in midair and transform it with pinch gestures inspired by smartphone-based interaction. In addition, by triggering voice- or button-press-activated commands, the user can select, segment and annotate the immersive point cloud, thereby creating hierarchical exploded view models.","PeriodicalId":90698,"journal":{"name":"Proceedings. IEEE Symposium on 3D User Interfaces","volume":"11 1","pages":"191-192"},"PeriodicalIF":0.0000,"publicationDate":"2014-03-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"17","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings. IEEE Symposium on 3D User Interfaces","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/3DUI.2014.6798885","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 17
Abstract
In this paper we present “Touching the Cloud”, a bimanual user interface for interacting with, selecting, and annotating immersive point cloud data. With minimal instrumentation, the setup allows a user wearing an immersive head-mounted display (HMD) to interact naturally with point clouds. The user's hands are tracked with an OpenNI sensor and rendered in the virtual environment (VE), so the user can touch the virtual 3D point cloud in midair and transform it with pinch gestures inspired by smartphone-based interaction. In addition, by triggering voice- or button-press-activated commands, the user can select, segment, and annotate the immersive point cloud, thereby creating hierarchical exploded-view models.
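The smartphone-style pinch transformation described in the abstract can be sketched in a few lines. The following Python snippet is a minimal, hypothetical illustration, not the paper's implementation: it assumes per-frame 3D hand positions (such as those an OpenNI skeleton tracker can provide) and derives a uniform scale from the change in hand separation plus a translation from the motion of the midpoint between the hands. The function name pinch_transform and its parameters are invented for this example.

import numpy as np

def pinch_transform(points, prev_hands, curr_hands):
    # Hypothetical sketch of a two-handed pinch transform; not the authors' code.
    # points:     (N, 3) array holding the point cloud.
    # prev_hands: (2, 3) array with both hand positions at the previous frame.
    # curr_hands: (2, 3) array with both hand positions at the current frame.
    prev_center = prev_hands.mean(axis=0)  # midpoint between the hands, last frame
    curr_center = curr_hands.mean(axis=0)  # midpoint between the hands, this frame

    # Uniform scale factor from the change in hand separation, as in
    # pinch-to-zoom on a touchscreen, extended to 3D.
    prev_span = np.linalg.norm(prev_hands[0] - prev_hands[1])
    curr_span = np.linalg.norm(curr_hands[0] - curr_hands[1])
    scale = curr_span / max(prev_span, 1e-6)  # guard against a zero hand span

    # Scale about the previous gesture midpoint, then translate so the
    # cloud follows the motion of the hands.
    return (points - prev_center) * scale + curr_center

# Example: moving the hands twice as far apart doubles the cloud's scale
# about the gesture midpoint.
cloud = np.random.rand(1000, 3)
prev = np.array([[-0.1, 0.0, 0.5], [0.1, 0.0, 0.5]])
curr = np.array([[-0.2, 0.0, 0.5], [0.2, 0.0, 0.5]])
cloud = pinch_transform(cloud, prev, curr)

Scaling about the gesture midpoint keeps the touched region under the user's hands; a rotation derived from the change in the inter-hand direction could be composed in the same way, but is omitted here for brevity.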