In this work, we evaluate the applicability of Augmented Reality applications for enhanced learning experiences for children from less privileged backgrounds, with a focus on the autistic population. Such an intervention can prove very useful to children with reduced cognitive development. In our evaluation, we compare the AR mode of instruction for procedural task training, using tangram puzzles, with live demonstration and a desktop-based application. First, we performed a within-subjects user study on neurotypical children aged 9–12 years. We asked the children to independently solve a tangram puzzle after being trained through different modes of instruction. Second, we used the same instruction modes to train autistic participants. Our findings indicate that during training, children took the longest time to interact with the desktop-based instruction and the shortest time with the live demonstration. Children also took the longest time to independently solve the tangram puzzle in the desktop mode. We also found that autistic participants could use AR-based instructions but required more time to go through the training.
Katyayani Singh, Ayushi Shrivastava, K. Achary, Arindam Dey, and Ojaswa Sharma. "Augmented Reality-Based Procedural Task Training Application for Less Privileged Children and Autistic Individuals." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365703
Chinmay Rajguru, Christos Mousas, and L. Yu. "Computational Design and Fabrication of Customized Gamepads." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365695
The machinery used in industrial applications such as agriculture, construction, and forestry is increasingly equipped with digital tools that aim to aid the operator in completing tasks, improving productivity, and enhancing safety. In addition, as machines become increasingly connected, there are even more opportunities to integrate external information sources. This situation poses a challenge in mediating the information to the operator. One approach that could address this challenge is augmented reality, which enables system-generated information to be combined with the user's perception of the environment. It has the potential to enhance operators' awareness of the machine, the surroundings, and the operation to be performed. In this paper, we review the current literature to present the state of the art and discuss the possible benefits of using augmented reality in heavy machinery.
T. Sitompul and Markus Wallmyr. "Using Augmented Reality to Improve Productivity and Safety for Heavy Machinery Operators: State of the Art." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365689
The transdisciplinary exchange between art and technology has grown over the last decade. The application of augmented reality and virtual reality to other areas has opened doors for hybrid projects and, consequently, new experimental ideas. Taking this as motivation, we propose a new application concept that allows someone to walk through a three-dimensional space and see their body movement within it. Currently, a user can choose which visual effect is used to draw the resulting movement (e.g. a continuous or dashed line). The model has been extended so that the visual effect and shape are automatically generated according to movement type, speed, amplitude, and intention. Our technological process includes real-time human body detection, real-time movement visualization, and movement tracking history. This project has a core focus on dance and performance, though we consider the framework suitable for anyone interested in body movement and artwork. The proposed application tracks body movement inside a three-dimensional physical space using only a smartphone camera. Our main objective is to record the sequence of movements of a dance, or of someone moving in space, to further analyze those movements and the way the performer moved through space. To this end, we have created an application that records the movement of a user and represents that record as a visual composition of simple elements. The possibility for the user to see the visual tracking of the choreography or performance allows a clear observation of the space traveled by the dancer, the range of motion, and the accuracy of the symmetry that the body should or should not have in each step.
In this article, we present the main concepts of the project as well as multiple applications to real-life scenarios.

Maria Rita Nogueira, P. Menezes, and Bruno Patrão. "Painting with Movement." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365750
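The abstract above describes automatically generating the visual effect from movement properties such as speed. A minimal sketch of that idea is shown below, mapping the speed of a tracked body joint to stroke parameters; the function names, thresholds, and the speed-to-style rule are illustrative assumptions, not the authors' code.

```python
# Hypothetical mapping from tracked-movement properties to drawing-style
# parameters, as a toy illustration of the concept in the abstract above.

from dataclasses import dataclass
from math import dist

@dataclass
class Stroke:
    style: str    # "continuous" or "dashed"
    width: float  # stroke width in pixels

def stroke_for_segment(p0, p1, dt):
    """Choose a stroke style from the speed of a tracked point.

    p0, p1 -- consecutive positions of a body joint (metres)
    dt     -- time between the two samples (seconds)
    """
    speed = dist(p0, p1) / dt  # metres per second
    # Assumed rule: fast movements get thin dashed strokes,
    # slow deliberate movements get thick continuous ones.
    if speed > 1.0:
        return Stroke(style="dashed", width=2.0)
    return Stroke(style="continuous", width=6.0)

print(stroke_for_segment((0.0, 0.0), (0.1, 0.0), dt=0.5))  # slow movement
```

A real system would feed `p0` and `p1` from a pose-estimation pipeline rather than hard-coded points.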
Virtual Reality movies, with their possibility of immersing the viewer in 360° spaces, carry both inherent challenges and advantages in creating and experiencing them. While the grammar of storytelling in traditional media is well established, filmmakers cannot apply it effectively in VR because of the medium's immersive nature: viewers may end up looking elsewhere and miss important parts of the story. Taking this into account, our framework Cinévoqué leverages the unique features of this immersive medium to create seamless movie experiences in which the narrative alters itself in response to the viewer's passive interactions, without making them aware of the changes. In our demo, we present Till We Meet Again, a VR film that uses our framework to provide different storylines that evolve seamlessly for each viewer.
Jayesh S. Pillai, Amal Dev, and Amarnath Murugan. "Till We Meet Again: A Cinévoqué Experience." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365726
To be prepared for flooding events, disaster response personnel have to be trained to execute developed action plans. We present a collaborative operator-trainee setup for a flood response training system that connects an interactive flood simulation with a VR client, allowing the trainee to steer the remote simulation from within the virtual environment, deploy protection measures, and evaluate the results of different simulation runs. An operator supervises and assists the trainee from a linked desktop application.
Katharina Krösl, H. Steinlechner, Johanna Donabauer, Daniel Cornel, and J. Waser. "Master of Disaster." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365729
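The operator-trainee pattern described above boils down to two clients sending steering commands to one shared remote simulation. A minimal sketch of that command loop follows; the command names and the toy water model are assumptions for illustration, not the authors' system.

```python
# Toy remote flood simulation that a VR client (trainee) and a desktop
# client (operator) could both steer through a small command protocol.

class FloodSimulation:
    def __init__(self, water_level=0.0):
        self.water_level = water_level
        self.barriers = []

    def handle(self, command, **params):
        if command == "deploy_barrier":
            self.barriers.append(params["position"])
        elif command == "step":
            # Assumed toy dynamics: each barrier slows the rise of the water.
            rise = params.get("inflow", 1.0) - 0.2 * len(self.barriers)
            self.water_level = max(0.0, self.water_level + rise)
        else:
            raise ValueError(f"unknown command: {command}")
        return self.water_level

sim = FloodSimulation()
sim.handle("deploy_barrier", position=(10, 4))  # trainee acts inside VR
sim.handle("deploy_barrier", position=(11, 4))
print(sim.handle("step", inflow=1.0))           # -> 0.6
```

In a networked version, `handle` would sit behind an RPC endpoint so both the VR client and the linked desktop application observe the same run.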
We show how inter-device location tracking latency can be reduced in an Augmented Reality (AR) service that uses Microsoft’s HoloLens (HL) devices for multi-user collaboration. Specifically, we have built a collaborative AR system for a research greenhouse that allows multiple users to work collaboratively to process and record information about individual plants. In this system, we combine the HL “world tracking” functionality with marker-based tracking to develop a one-for-all-shared-experience (OFALL-SE) dynamic object localization service. We compare OFALL-SE with the traditional Local Anchor Transfer (LAT) method for managing shared experiences and show that the latency of data transmission between the server and the users can be dramatically reduced. Our results indicate that OFALL-SE can support near-real-time collaboration when sharing the physical locations of plants among users in a greenhouse.
Wennan He, Ben Swift, H. Gardner, Mingze Xi, and Matt Adcock. "Reducing Latency in a Collaborative Augmented Reality Service." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365699
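The core idea behind a marker-anchored shared experience like the one above can be sketched very simply: instead of transferring whole spatial anchors between devices (as in Local Anchor Transfer), each device expresses object positions relative to one shared marker and exchanges only those small coordinates. The translation-only model below is a simplifying assumption; real devices would also exchange rotations.

```python
# Minimal shared-coordinate conversion: positions are exchanged relative to
# a marker that every device can see, so no anchor payload is transferred.

def to_shared(local_pos, marker_local):
    """Express a locally tracked position relative to the shared marker."""
    return tuple(p - m for p, m in zip(local_pos, marker_local))

def to_local(shared_pos, marker_local):
    """Convert a marker-relative position into this device's local frame."""
    return tuple(s + m for s, m in zip(shared_pos, marker_local))

# Device A sees the marker at (1, 0, 2) and a plant at (4, 0, 5).
shared = to_shared((4, 0, 5), (1, 0, 2))  # (3, 0, 3): a few bytes to send
# Device B sees the same marker at (0, 1, 0) in its own frame.
print(to_local(shared, (0, 1, 0)))        # -> (3, 1, 3)
```

Because only a small tuple crosses the network per update, the per-object latency is bounded by transmission time rather than by anchor serialization and import.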
Collaborative Immersive Analytics (IA) enables multiple people to explore the same dataset using immersive technologies, such as Augmented Reality (AR) or Virtual Reality (VR). In this poster, we describe a system that uses AR to provide situated 3D visualisations in a practical, agile collaborative setting. Through a preliminary user study, we found that our system helps users accept the concept of IA while enhancing engagement and interactivity during AR collaboration.
Allison Jing, Chenyang Xiang, Seungwon Kim, M. Billinghurst, and A. Quigley. "SnapChart: an Augmented Reality Analytics Toolkit to Enhance Interactivity in a Collaborative Environment." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365725
This research presents an application for visualizing real-world cityscapes and massive transport network performance data sets in Augmented Reality (AR) using the Microsoft HoloLens or any equivalent hardware. It runs in tandem with numerous emerging applications in the growing worldwide Smart Cities movement and industry. Specifically, the application addresses visualization of both real-time and aggregated city data feeds, such as weather, traffic, and social media. The software is developed in extensible ways and is able to overlay various historic and live data sets coming from multiple sources. Advances in computer graphics, data processing, and visualization now allow us to tie these visual tools to much more detailed, longitudinal, massive performance data sets, supporting comprehensive and useful forms of visual analytics for city planners, decision makers, and citizens. Further, they allow us to present these data in new interfaces such as the HoloLens and other head-mounted displays, enabling collaboration and more natural mappings with the real world. This visualization technology offers a novel approach to exploring hundreds of millions of data points in order to find insights, trends, and patterns over significant periods of time and geographic space. Our development focuses on open data sets, which maximizes its applicability to assessing the performance of networks of cities worldwide.
The city of Sydney, Australia is used as our initial application, showcasing a real-world example that enables analysis of transport network performance over the past twelve months.

Oliver Lock, T. Bednarz, and C. Pettit. "HoloCity – exploring the use of augmented reality cityscapes for collaborative understanding of high-volume urban sensor data." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365734
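Rendering hundreds of millions of records on a head-mounted display, as the abstract above describes, generally requires aggregating raw points into coarse spatio-temporal bins first. The sketch below is a generic illustration of that preprocessing step, not the HoloCity pipeline; the record shape and grid size are assumptions.

```python
# Aggregate raw (lat, lon, count) sensor records into grid-cell totals so a
# renderer only has to draw one glyph per cell instead of one per record.

from collections import defaultdict
from math import floor

def bin_records(records, cell_size=0.01):
    """Sum passenger counts per (lat, lon) grid cell of side cell_size degrees."""
    cells = defaultdict(int)
    for lat, lon, count in records:
        key = (floor(lat / cell_size), floor(lon / cell_size))
        cells[key] += count
    return dict(cells)

records = [
    (-33.8675, 151.2070, 120),  # two readings near central Sydney
    (-33.8689, 151.2093, 80),
    (-33.7000, 151.1000, 40),   # one reading further north
]
binned = bin_records(records)
print(len(binned), "cells instead of", len(records), "points")
```

The same aggregation can be keyed by time window as well, turning a raw feed into something small enough to stream to the headset.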
Augmented reality (AR) is being rapidly adopted by industries such as logistics, manufacturing, and the military. However, one extremely under-explored yet significant area is the primary production industry; as a major source of food and nutrition, seafood production has always been a priority for many countries. Aquaculture farming is a highly dynamic, unpredictable, and labour-intensive process. In this paper, we discuss the challenges in aquaculture farm operation based on our field studies with leading Australian fisheries. We also propose an "AR + Cloud" system design to tackle delayed in-situ water quality data collection and query, as well as aquaculture pond stress monitoring and analysis.
Mingze Xi, Matt Adcock, and John McCulloch. "An End-to-End Augmented Reality Solution to Support Aquaculture Farmers with Data Collection, Storage, and Analysis." In Proceedings of the 17th International Conference on Virtual-Reality Continuum and its Applications in Industry (VRCAI '19). ACM, November 2019. https://doi.org/10.1145/3359997.3365721
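An "AR + Cloud" collection loop of the kind proposed above typically has to tolerate patchy connectivity at the pond. One common way to handle that is an offline-first queue: readings are captured locally and flushed to cloud storage whenever the network allows. The sketch below illustrates that pattern under stated assumptions; the class, field names, and reading schema are hypothetical, not the authors' API.

```python
# Offline-first capture of in-situ water quality readings: field work never
# blocks on the network, and failed uploads are retried on the next flush.

import time

class ReadingQueue:
    def __init__(self, upload):
        self.upload = upload  # callable that sends one reading to the cloud
        self.pending = []

    def record(self, pond_id, dissolved_oxygen, temperature):
        self.pending.append({
            "pond": pond_id,
            "do_mg_l": dissolved_oxygen,
            "temp_c": temperature,
            "captured_at": time.time(),
        })

    def flush(self):
        """Try to upload everything; keep whatever fails for the next attempt."""
        still_pending = []
        for reading in self.pending:
            try:
                self.upload(reading)
            except OSError:  # e.g. no connectivity at the pond
                still_pending.append(reading)
        self.pending = still_pending

uploaded = []
q = ReadingQueue(upload=uploaded.append)
q.record("pond-7", dissolved_oxygen=6.2, temperature=24.5)
q.flush()
print(len(uploaded), len(q.pending))  # -> 1 0
```

With the queue in place, the AR headset can also answer queries from the local `pending` list before the cloud copy is consistent, which addresses the delayed-query problem the abstract mentions.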