{"title":"Exploring Cultural Heritage in Augmented Reality with GoFind!","authors":"Loris Sauter, Luca Rossetto, H. Schuldt","doi":"10.1109/AIVR.2018.00041","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00041","url":null,"abstract":"Historic photo collections are important instruments for documenting the development of cityscapes over time. However, in most cases, such historic photos are buried in archives that are not easily accessible. But even when cultural heritage archives are opened and exposed to the public, for instance by specialized digital libraries, the value of the individual images is limited, as they can only be used within the digital library's retrieval engine, independently of the actual location they depict. With GoFind!, we bring the retrieval engine of historic multimedia collections to mobile devices. The system provides location-based querying in historic multimedia collections and adds an augmented reality-based user interface that enables the overlay of historic images onto the current view. GoFind! can be used by historians and tourists and provides a virtual view into the past of a city.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122704805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
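As a note on the record above: the location-based querying that GoFind! describes can be sketched as a great-circle distance filter over geotagged photo records. The collection, coordinates, and function names below are hypothetical illustrations under stated assumptions, not GoFind!'s actual API.

```python
from math import radians, sin, cos, asin, sqrt

# Hypothetical in-memory photo collection; in a real system these records
# would come from the digital library's retrieval engine.
PHOTOS = [
    {"id": "basel-1910", "lat": 47.5596, "lon": 7.5886, "year": 1910},
    {"id": "basel-1955", "lat": 47.5580, "lon": 7.5905, "year": 1955},
    {"id": "zurich-1930", "lat": 47.3769, "lon": 8.5417, "year": 1930},
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in metres."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + \
        cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))  # mean Earth radius ~6371 km

def query_nearby(lat, lon, radius_m=500):
    """Return all photos taken within radius_m of the device position."""
    return [p for p in PHOTOS
            if haversine_m(lat, lon, p["lat"], p["lon"]) <= radius_m]
```

A query from Basel's old town, e.g. `query_nearby(47.5590, 7.5890)`, would return the two Basel photos but not the Zurich one; the AR overlay would then render the matches on top of the live camera view.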
{"title":"Encompassing English Language Learners in Virtual Reality","authors":"Eric Nersesian, Adam Spryszynski, Ulysee Thompson, M. Lee","doi":"10.1109/AIVR.2018.00047","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00047","url":null,"abstract":"Virtual reality (VR) has the potential to drastically alter the future landscape of education. Immersion can be a powerful educational tool, yet it can create isolation issues if user needs are not thoroughly considered. For this reason, designers, educators, and researchers will need to address accessibility issues for the technology to be adopted into mainstream classroom use. English language learners (ELLs) are a relevant user group to study in this regard, as they are largely underserved within the educational technology space, and their usage of these immersive VR tools can highlight both positive and negative aspects of the experience that developers can use to improve their applications.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130157681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Trajectory-Based Viewport Prediction for 360-Degree Virtual Reality Videos","authors":"Stefano Petrangeli, G. Simon, Viswanathan Swaminathan","doi":"10.1109/AIVR.2018.00033","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00033","url":null,"abstract":"Viewport-based adaptive streaming has emerged as the main technique to efficiently stream bandwidth-intensive 360° videos over the best-effort Internet. In viewport-based streaming, only the portion of the video watched by the user is usually streamed at the highest quality, using video tiling, foveated encoding, or similar approaches. To unlock the full potential of these approaches, though, the future position of the user's viewport has to be predicted. Indeed, accurate viewport prediction is necessary to minimize quality transitions while the user moves. Current solutions mainly focus on short-term prediction horizons (e.g., less than 2 s), while long-term viewport prediction has received less attention. This paper presents a novel algorithm for long-term prediction of the user's viewport. In the proposed algorithm, the viewport evolution over time of a given user is modeled as a trajectory in the roll, pitch, and yaw angle domain. For a given video, a function is extrapolated to model the evolution of the three aforementioned angles over time, based on the viewing patterns of past users in the system. Moreover, trajectories that exhibit similar viewing behaviors are clustered together, and a different function is calculated for each cluster. The pre-computed functions are subsequently used at run-time to predict the future viewport position of a new user in the system, for the specific video. Preliminary results on a public dataset of 16 videos, each watched by 61 users on average, show that the proposed algorithm can increase the predicted viewport area by 13% on average compared to several benchmarking heuristics, for prediction horizons up to 10 seconds.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116154952","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
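The pipeline this abstract describes (cluster past users' trajectories, fit a trend function per cluster, extrapolate for a new user) can be sketched in a few lines of Python. This is a deliberately simplified, yaw-only reconstruction of the idea, assuming 1-D k-means over mean yaw and linear trends; it is not the authors' algorithm.

```python
def fit_line(ts, ys):
    """Least-squares fit y = a + b*t; returns (intercept, slope)."""
    n, mt, my = len(ts), sum(ts) / len(ts), sum(ys) / len(ys)
    b = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
        sum((t - mt) ** 2 for t in ts)
    return my - b * mt, b

def cluster_and_fit(trajectories, k=2, iters=20):
    """Group yaw trajectories by mean yaw (toy 1-D k-means, k=2),
    then fit one linear trend per cluster from all its samples."""
    means = [sum(tr) / len(tr) for tr in trajectories]
    centers = [min(means), max(means)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for tr, m in zip(trajectories, means):
            groups[min(range(k), key=lambda j: abs(m - centers[j]))].append(tr)
        for j in range(k):
            if groups[j]:
                centers[j] = sum(sum(tr) / len(tr) for tr in groups[j]) / len(groups[j])
    models = []
    for g in groups:
        ts = [t for tr in g for t in range(len(tr))]   # sample index as time
        ys = [y for tr in g for y in tr]
        models.append(fit_line(ts, ys))
    return centers, models

def predict_yaw(observed, t_future, centers, models):
    """Assign a new user to the nearest cluster, extrapolate its trend."""
    m = sum(observed) / len(observed)
    a, b = models[min(range(len(centers)), key=lambda j: abs(m - centers[j]))]
    return a + b * t_future
```

With two "panning" trajectories and two "static" ones, a new user who starts panning is matched to the panning cluster and their yaw is extrapolated along that cluster's trend, which is the essence of the long-horizon prediction described above.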
{"title":"Uncertainty-Based Deep Learning Networks for Limited Data Wetland User Models","authors":"Andrew Hoblitzell, M. Babbar‐Sebens, S. Mukhopadhyay","doi":"10.1109/AIVR.2018.00011","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00011","url":null,"abstract":"This paper discusses a method for dealing with limited data in deep networks based on calculating the uncertainty associated with remaining training data. The method was developed for the Watershed REstoration using Spatio-Temporal Optimization of REsources (WRESTORE) system, an interactive decision support system designed for performing multi-criteria decision analysis with a distributed system of conservation practices on the Eagle Creek Watershed in Indiana, USA. Our results show faster and more stable convergence when using an uncertainty-based incremental sampling method than when using a standard random incremental sampling method. This work describes the existing WRESTORE system, provides details about the implementation of our uncertainty-based incremental sampling method, and provides a discussion of our results and future work. The primary contribution of the paper is an uncertainty-based incremental sampling method which can be applied to limited data watershed design problems.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125219574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
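The uncertainty-based incremental sampling this abstract names can be illustrated with a query-by-committee sketch: the paper's deep networks are replaced here by a bootstrap committee of simple linear models, and the next training sample is the candidate input where committee predictions disagree most. All names and data below are illustrative assumptions, not the WRESTORE implementation.

```python
import random

def fit_line(xs, ys):
    """Least-squares fit y = a + b*x; returns (intercept, slope)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def committee_variance(train, candidate_x, n_models=10, rng=None):
    """Variance of bootstrap-committee predictions at candidate_x."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(train) for _ in train]  # bootstrap resample
        xs, ys = zip(*sample)
        if len(set(xs)) < 2:   # degenerate resample: slope undefined, skip
            continue
        a, b = fit_line(xs, ys)
        preds.append(a + b * candidate_x)
    m = sum(preds) / len(preds)
    return sum((p - m) ** 2 for p in preds) / len(preds)

def next_sample(train, pool):
    """Pick the unlabeled input where the committee disagrees most."""
    return max(pool, key=lambda x: committee_variance(train, x))
```

Given noisy training data concentrated in one region, the committee disagrees far more about a distant extrapolation point than about an interpolation point, so the distant point is queried first, mirroring the uncertainty-driven ordering described above.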
{"title":"Web-Based Virtual Reality Development in Classroom: From Learner's Perspectives","authors":"V. Nguyen, R. Hite, Tommy Dang","doi":"10.1109/AIVR.2018.00010","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00010","url":null,"abstract":"Virtual Reality (VR) content development tools are in continuous production by both enthusiastic researchers and software development companies. Yet, learners could benefit from participating in this development, not only by learning vital programming skills but also by building skills in creativity and collaboration. Web-based VR (WebVR) has emerged as a platform-independent framework that permits individuals (with little to no prior programming experience) to create immersive and interactive VR applications. Yet, the success of WebVR relies on students' technological acceptance, the intersection of perceived utility and ease of use. To determine the effectiveness of the emerging tool for learners of varied experience levels, this paper presents a case study of 38 students who were tasked with developing WebVR 'dream' houses. Results showed that students accepted the technology, not only by learning and implementing WebVR in a short time (one month) but also by demonstrating creativity and problem-solving skills with classroom supports (i.e., pre-project presentations, online discussions, exemplary projects, and TA support). Results, recommendations, lessons learned, and directions for further research are discussed.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"68 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125884729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Virtual Factory: Hologram-Enabled Control and Monitoring of Industrial IoT Devices","authors":"Vittorio Cozzolino, O. Moroz, A. Ding","doi":"10.1109/AIVR.2018.00024","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00024","url":null,"abstract":"Augmented reality (AR) has been exploited in manifold fields but is yet to be used at its full potential. With the massive diffusion of smart devices, opportunities to build immersive human-computer interfaces are continually expanding. In this study, we conceptualize a virtual factory: an interactive, dynamic, holographic abstraction of the physical machines deployed in a factory. Through our prototype implementation, we conducted a user-study-driven evaluation of holographic interfaces compared to traditional interfaces, highlighting their pros and cons. Our study shows that the majority of the participants found holographic manipulation more attractive and natural to interact with. However, the current performance characteristics of head-mounted displays must improve before they can be applied in production.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131886136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Combination of Feedback Control and Vision-Based Deep Learning Mechanism for Guiding Self-Driving Cars","authors":"Wen-Yen Lin, Wang-Hsin Hsu, Yi-Yuan Chiang","doi":"10.1109/AIVR.2018.00062","DOIUrl":"https://doi.org/10.1109/AIVR.2018.00062","url":null,"abstract":"The purpose of this paper is to develop an agent that can imitate the behavior of a human driving a car. When driving, a human relies mainly on vision to recognize the state of the car, including its position and velocity, and the surrounding environment. In this paper, we implemented a self-driving car that can drive itself on the track of a simulator. The self-driving car uses a deep neural network as a computational framework to \"learn\" the position of the car relative to the road. Once the car knows its position relative to the track, it can use this information as a basis for feedback control.","PeriodicalId":371868,"journal":{"name":"2018 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114901010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
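The feedback-control half of this abstract can be sketched with a proportional controller acting on the perceived lateral offset. Here `perceived_offset()` is a hypothetical stand-in for the vision network's position estimate, and the kinematics are a deliberately simplistic assumption, not the paper's simulator.

```python
def perceived_offset(true_offset, noise=0.0):
    """Stand-in for the vision network: estimated lateral offset (metres)
    of the car from the lane centre; `noise` models estimation error."""
    return true_offset + noise

def drive(initial_offset, kp=0.5, steps=50, dt=0.1, speed=10.0):
    """P-controller: each timestep, steer against the perceived offset.
    Lateral dynamics use a small-angle kinematic approximation."""
    offset = initial_offset
    for _ in range(steps):
        steering = -kp * perceived_offset(offset)  # steering angle (rad)
        offset += speed * steering * dt            # lateral position update
    return offset
```

With these gains the closed loop contracts the offset by a factor of `1 - kp*speed*dt = 0.5` per step, so the car converges to the lane centre; the same structure holds when the stand-in estimator is replaced by a learned one.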