Blended UI Controls for Situated Analytics
Neven A. M. ElSayed, Ross T. Smith, K. Marriott, B. Thomas
2016 Big Data Visual Analytics (BDVA), November 2016. DOI: 10.1109/BDVA.2016.7787043
This paper presents a context-aware model for situated analytics, supporting a blended user interface. Our approach is a state-based model, allowing seamless transition between the physical space and the information space during use. We designed the model to allow common user interface controls to work in tandem with the printed information on a physical object by adapting their operation and presentation based on a semantic matrix. We demonstrate the use of the model with a set of blended controls, including pinch zoom, menus, and details-on-demand. We analyze each control to highlight how the physical and virtual information spaces work in tandem to provide a rich interaction environment in augmented reality.
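The abstract does not give implementation details, so the following Python sketch is only a rough illustration of the state-based idea it describes: a control transitions between a physical space and an information space, and a hypothetical "semantic matrix" maps each (control, space) pair to a context-appropriate operation. All class names, control names, and matrix entries are assumptions for illustration, not the authors' actual model or API.

```python
# Hypothetical sketch of a state-based blended-control model; names and
# mappings are illustrative assumptions, not the paper's implementation.
from enum import Enum, auto


class Space(Enum):
    """The two interaction spaces the model transitions between."""
    PHYSICAL = auto()      # printed information on the physical object
    INFORMATION = auto()   # virtual, AR-overlaid information


# Assumed "semantic matrix": maps (control, current space) to how the
# control should operate and present itself in that context.
SEMANTIC_MATRIX = {
    ("pinch_zoom", Space.PHYSICAL): "magnify the printed label in place",
    ("pinch_zoom", Space.INFORMATION): "scale the virtual overlay",
    ("menu", Space.PHYSICAL): "anchor menu items to regions of the object",
    ("menu", Space.INFORMATION): "show a free-floating virtual menu",
    ("details_on_demand", Space.PHYSICAL): "highlight matching printed text",
    ("details_on_demand", Space.INFORMATION): "expand a virtual detail panel",
}


class BlendedControl:
    """A UI control whose behaviour adapts to the current interaction space."""

    def __init__(self, name: str):
        self.name = name
        self.space = Space.PHYSICAL  # start anchored to the physical object

    def transition(self, space: Space) -> None:
        """Switch the control between spaces during use."""
        self.space = space

    def activate(self) -> str:
        """Look up the context-appropriate operation in the semantic matrix."""
        return SEMANTIC_MATRIX[(self.name, self.space)]


if __name__ == "__main__":
    zoom = BlendedControl("pinch_zoom")
    print(zoom.activate())              # magnify the printed label in place
    zoom.transition(Space.INFORMATION)
    print(zoom.activate())              # scale the virtual overlay
```

The point of the sketch is simply that the same control keeps one identity while its behaviour is re-resolved against the current space, which is one plausible reading of how blended controls could adapt operation and presentation from a shared semantic mapping.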