Immersive Captioning: Developing a framework for evaluating user needs
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00063
Chris J. Hughes, Marta B. Zapata, Matthew Johnston, P. Orero
This article focuses on captioning for immersive environments; the research aims to identify how captions should be displayed for an optimal viewing experience. The work began four years ago and produced partial findings. This second stage of research, built on the lessons learnt, focuses on a cornerstone of design requirements: prototyping. A tool has been developed for quick and realistic prototyping and testing. The framework integrates methods used in existing solutions, and the ease of contrasting and comparing them made the need to extend the first framework obvious. A second, improved solution was developed, serving as a showcase of how ideas can quickly be implemented for user testing. After an overview of captions in immersive environments, the article describes the implementation, which is based on web technologies and therefore runs on any device with a web browser, including desktop computers, mobile devices and head-mounted displays. The article finishes with a description of the new caption modes and methods, offered as a useful tool towards testing and standardisation.
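The article's web implementation is not reproduced here; the following Python sketch is only an assumed illustration (hypothetical names and field-of-view value) of two of the caption behaviours such a prototyping tool could test: a head-locked caption versus a caption fixed near the speaker that falls back to a directional arrow when the speaker leaves the viewer's field of view.

```python
FOV_DEG = 90.0  # assumed horizontal field of view of the headset

def angular_offset(head_yaw_deg: float, caption_yaw_deg: float) -> float:
    """Smallest signed angle (degrees) from the viewing direction to the caption anchor."""
    return (caption_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def place_caption(mode: str, head_yaw_deg: float, caption_yaw_deg: float) -> dict:
    """Decide where a caption should be drawn for one frame.

    'head-locked'   : always rendered at the centre-bottom of the view.
    'fixed-in-scene': anchored near the speaker; if the speaker is outside the
                      field of view, show an arrow hinting where to look instead.
    """
    if mode == "head-locked":
        return {"render": "caption", "screen_pos": (0.0, -0.35)}

    offset = angular_offset(head_yaw_deg, caption_yaw_deg)
    if abs(offset) <= FOV_DEG / 2:
        # Speaker visible: map the angular offset to a horizontal screen position.
        x = offset / (FOV_DEG / 2)
        return {"render": "caption", "screen_pos": (x, -0.35)}
    # Speaker off-screen: show a directional indicator at the edge of the view.
    return {"render": "arrow", "direction": "right" if offset > 0 else "left"}

if __name__ == "__main__":
    print(place_caption("fixed-in-scene", head_yaw_deg=10.0, caption_yaw_deg=120.0))
```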
{"title":"Immersive Captioning: Developing a framework for evaluating user needs","authors":"Chris J. Hughes, Marta B. Zapata, Matthew Johnston, P. Orero","doi":"10.1109/AIVR50618.2020.00063","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00063","url":null,"abstract":"This article focuses on captioning for immersive environments and the research aims to identify how to display them for an optimal viewing experience. This work began four years ago with some partial findings. This second stage of research, built from the lessons learnt, focuses on the design requirements cornerstone: prototyping. A tool has been developed towards quick and realistic prototyping and testing. The framework integrates methods used in existing solutions. Given how easy it is to contrast and compare, the need to further the first framework was obvious. A second improved solution was developed, almost as a showcase on how ideas can quickly be implemented for user testing. After an overview on captions in immersive environments, the article describes its implementation, based on web technologies opening for any device with a web browser. This includes desktop computers, mobile devices and head mounted displays. The article finishes with a description of the new caption modes and methods, hoping to be a useful tool towards testing and standardisation.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129583633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Investigating learners’ motivation towards a virtual reality learning environment: a pilot study in vehicle painting
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00081
Miriam Mulders
The HandleVR project is developing Virtual Reality (VR) training based on the 4C/ID model [1] to teach vocational competencies in the field of vehicle painting. The paper presents the results of a pilot study in which fourteen aspiring vehicle painters tested two prototypical tasks in VR and evaluated their suitability, among other things with regard to learning motivation. The results indicate that the VR training is highly motivating and that some aspects (e.g., a virtual trainer) particularly promote motivation. Further research is needed to exploit these positive motivational effects to support meaningful learning.
{"title":"Investigating learners’ motivation towards a virtual reality learning environment: a pilot study in vehicle painting","authors":"Miriam Mulders","doi":"10.1109/AIVR50618.2020.00081","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00081","url":null,"abstract":"The HandleVR project develops a Virtual Reality (VR) training based on the 4C/ID model [1] to train vocational competencies in the field of vehicle painting. The paper presents the results of a pilot study with fourteen aspirant vehicle painters who tested two prototypical tasks in VR and evaluated its suitability, i.a. regarding their learning motivation. The results indicate that VR training is highly motivating and some aspects (e.g., a virtual trainer) in particular promote motivation. Further research is needed to take advantage of these positive motivational effects to support meaningful learning.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129922814","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SnapMove: Movement Projection Mapping in Virtual Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00024
B. Cohn, A. Maselli, E. Ofek, Mar González-Franco
We present SnapMove, a technique for reprojecting reaching movements in Virtual Reality. SnapMove can be used to reduce the need for large, fatiguing or difficult motions. We designed multiple reprojection techniques (linear or planar; uni-manual, bi-manual or head snap) that can be used for reaching, throwing and virtual tool manipulation. In a user study (n=21) we explore whether the self-avatar follower effect can be modulated depending on the cost of the motion introduced by the remapping. SnapMove was successful in reprojecting the user's hand position from, for example, a lower area to a higher avatar-hand position, a mapping that can be ideal for limiting fatigue. It was also successful in preserving avatar embodiment while gradually bringing users to perform movements with higher energy cost, which is of most interest for rehabilitation scenarios. We implemented applications for menu interaction, climbing, rowing, and throwing darts. Overall, SnapMove can make interactions in virtual environments easier. We discuss the potential impact of SnapMove for applications in gaming, accessibility and therapy.
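The exact remapping functions behind SnapMove are not given in the abstract; the sketch below is a hypothetical, minimal example of the general idea of a linear reprojection, scaling the offset of the tracked hand from a comfortable rest position and re-applying it at a higher avatar-hand origin. All names and values are assumptions.

```python
import numpy as np

def linear_reprojection(real_hand: np.ndarray,
                        real_origin: np.ndarray,
                        virtual_origin: np.ndarray,
                        gain: float = 1.5) -> np.ndarray:
    """Map a tracked hand position into a displaced, amplified avatar-hand position.

    The offset of the real hand from a comfortable rest position is scaled by
    `gain` and re-applied at a different origin, so small, low-effort motions
    near the body can drive larger avatar motions higher up in the scene.
    """
    offset = real_hand - real_origin
    return virtual_origin + gain * offset

if __name__ == "__main__":
    real_origin = np.array([0.0, 0.9, 0.3])      # rest position near the waist (metres)
    virtual_origin = np.array([0.0, 1.5, 0.3])   # avatar hand operates at shoulder height
    real_hand = np.array([0.1, 1.0, 0.4])        # a small reach upward and forward
    print(linear_reprojection(real_hand, real_origin, virtual_origin))
```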
{"title":"SnapMove: Movement Projection Mapping in Virtual Reality","authors":"B. Cohn, A. Maselli, E. Ofek, Mar González-Franco","doi":"10.1109/AIVR50618.2020.00024","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00024","url":null,"abstract":"We present SnapMove a technique to reproject reaching movements inside Virtual Reality. SnapMove can be used to reduce the need of large, fatiguing or difficult motions. We designed multiple reprojection techniques, linear or planar, uni-manual, bi-manual or head snap, that can be used for reaching, throwing and virtual tool manipulation. In a user study (n=21) we explore if the self-avatar follower effect can be modulated depending on the cost of the motion introduced by remapping. SnapMove was successful in re-projecting user’s hand position from e.g. a lower area, to a higher avatar-hand position–a mapping which can be ideal for limiting fatigue. It was also successful in preserving avatar embodiment and gradually bring users to perform movements with higher cost energies, which have most interest for rehabilitation scenarios. We implemented applications for menu interaction, climbing, rowing, and throwing darts. Overall, SnapMove can make interactions in virtual environments easier. We discuss the potential impact of SnapMove for application in gaming, accessibility and therapy.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123730562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Rainbow Learner: Lighting Environment Estimation from a Structural-color based AR Marker
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00074
Yuji Tsukagoshi, Yuuki Uranishi, J. Orlosky, Kiyomi Ito, H. Takemura
This paper proposes a method for estimating lighting environments from an AR marker combined with the structural color patterns inherent to a compact disc (CD) form factor. To achieve photometric consistency, these patterns are used as input to a Conditional Generative Adversarial Network (CGAN), which allows us to quickly and efficiently generate estimates of an environment map. We construct a dataset from pairs of images of the structural color pattern and the environment map captured in multiple scenes, and the CGAN is then trained on this dataset. Experiments show that this method can generate visually accurate reconstructions for certain scenes, and that the environment map can be estimated in real time.
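The paper's architecture and training details are not reproduced here; as a rough, assumed illustration of the kind of image-to-image CGAN described (a pix2pix-style setup), the PyTorch sketch below pairs a small encoder-decoder generator with a conditional patch discriminator and a single adversarial-plus-L1 training step. Layer sizes and loss weights are placeholders, not the authors' values.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder: structural-color marker image -> environment map."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

class Discriminator(nn.Module):
    """PatchGAN-style conditional discriminator over (marker, environment map) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=1))

def train_step(gen, disc, opt_g, opt_d, marker, env_map, l1_weight=100.0):
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    # Discriminator: real pairs vs. generated pairs.
    fake = gen(marker)
    d_real = disc(marker, env_map)
    d_fake = disc(marker, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator while staying close to the ground-truth map.
    d_fake = disc(marker, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + l1_weight * l1(fake, env_map)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = Generator(), Discriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    marker = torch.randn(4, 3, 64, 64)   # placeholder batch of marker crops
    env = torch.randn(4, 3, 64, 64)      # placeholder environment maps
    print(train_step(gen, disc, opt_g, opt_d, marker, env))
```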
{"title":"Rainbow Learner: Lighting Environment Estimation from a Structural-color based AR Marker","authors":"Yuji Tsukagoshi, Yuuki Uranishi, J. Orlosky, Kiyomi Ito, H. Takemura","doi":"10.1109/AIVR50618.2020.00074","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00074","url":null,"abstract":"This paper proposes a method for estimating lighting environments from an AR marker coupled with the structural color patterns inherent to a compact disc (CD) form-factor. To achieve photometric consistency, these patterns are used as input to a Conditional Generative Adversarial Network (CGAN), which allows us to efficiently and quickly generate estimations of an environment map. We construct a dataset from pairs of images of the structural color pattern and environment map captured in multiple scenes, and the CGAN is then trained with this dataset. Experiments show that we can generate visually accurate reconstructions with this method for certain scenes, and that the environment map can be estimated in real time.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121262412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Immersive Visualization of Dengue Vector Breeding Sites Extracted from Street View Images
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00016
Mores Prachyabrued, P. Haddawy, Krittayoch Tengputtipong, Myat Su Yin, D. Bicout, Yongjua Laosiritaworn
Dengue is considered one of the most serious global health burdens. The primary vector of dengue is the Aedes aegypti mosquito, which has adapted to human habitats and breeds primarily in artificial containers that can hold water. Control of dengue relies on effective mosquito vector control, for which detection and mapping of potential breeding sites is essential. The two traditional approaches to this have been the use of satellite images, which do not provide sufficient resolution to detect a large proportion of the breeding sites, and manual counting, which is too labor intensive to be used on a routine basis over large areas. Our recent work has addressed this problem by applying convolutional neural nets to detect outdoor containers representing potential breeding sites in Google Street View images. The challenge now is not a paucity of data, but rather transforming the large volumes of data produced into meaningful information. In this paper, we present the design of an immersive visualization using a tiled-display wall that supports an early but crucial stage of dengue investigation by enabling researchers to interactively explore and discover patterns in the datasets, which can help in forming hypotheses that drive quantitative analyses. The tool is also useful in uncovering patterns that may be too sparse to be discovered by correlational analyses and in identifying outliers that may justify further study. We demonstrate the usefulness of our approach with two usage scenarios that lead to insights into the relationship between dengue incidence and container counts.
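The visualization pipeline cannot be reconstructed from the abstract; the snippet below is a purely hypothetical sketch of the kind of preprocessing it implies, binning per-image container detections into a geographic grid so that container counts can be set against dengue incidence per cell. Field names and the cell size are assumptions.

```python
from collections import defaultdict

CELL_DEG = 0.01  # grid cell size in degrees of latitude/longitude (assumed)

def grid_cell(lat: float, lon: float) -> tuple:
    """Snap a coordinate to the index of its containing grid cell."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

def container_counts(detections):
    """detections: iterable of dicts like {'lat': .., 'lon': .., 'containers': ..}
    (a hypothetical output format for the street-view container detector)."""
    counts = defaultdict(int)
    for d in detections:
        counts[grid_cell(d["lat"], d["lon"])] += d["containers"]
    return counts

def join_with_incidence(counts, incidence):
    """incidence: {cell: reported dengue cases}. Returns (cell, containers, cases)
    rows ready to be rendered as a map layer on the display wall."""
    return [(cell, counts.get(cell, 0), cases) for cell, cases in incidence.items()]

if __name__ == "__main__":
    detections = [{"lat": 13.7563, "lon": 100.5018, "containers": 4},
                  {"lat": 13.7570, "lon": 100.5021, "containers": 2}]
    counts = container_counts(detections)
    cell = next(iter(counts))
    print(join_with_incidence(counts, {cell: 7}))
```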
{"title":"Immersive Visualization of Dengue Vector Breeding Sites Extracted from Street View Images","authors":"Mores Prachyabrued, P. Haddawy, Krittayoch Tengputtipong, Myat Su Yin, D. Bicout, Yongjua Laosiritaworn","doi":"10.1109/AIVR50618.2020.00016","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00016","url":null,"abstract":"Dengue is considered one of the most serious global health burdens. The primary vector of dengue is the Aedes aegypti mosquito, which has adapted to human habitats and breeds primarily in artificial containers that can contain water. Control of dengue relies on effective mosquito vector control, for which detection and mapping of potential breeding sites is essential. The two traditional approaches to this have been to use satellite images, which do not provide sufficient resolution to detect a large proportion of the breeding sites, and manual counting, which is too labor intensive to be used on a routine basis over large areas. Our recent work has addressed this problem by applying convolutional neural nets to detect outdoor containers representing potential breeding sites in Google street view images. The challenge is now not a paucity of data, but rather transforming the large volumes of data produced into meaningful information. In this paper, we present the design of an immersive visualization using a tiled-display wall that supports an early but crucial stage of dengue investigation, by enabling researchers to interactively explore and discover patterns in the datasets, which can help in forming hypotheses that can drive quantitative analyses. The tool is also useful in uncovering patterns that may be too sparse to be discovered by correlational analyses and in identifying outliers that may justify further study. We demonstrate the usefulness of our approach with two usage scenarios that lead to insights into the relationship between dengue incidence and container counts.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"38 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133478488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Smartphone Thermal Temperature Analysis for Virtual and Augmented Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00061
Xiaoyang Zhang, Harshit Vadodaria, Na Li, K. Kang, Yao Liu
Emerging virtual and augmented reality applications are envisioned to significantly enhance user experiences. An important issue related to user experience is thermal management in the smartphones widely adopted for virtual and augmented reality applications. Although smartphone overheating has been reported many times, systematic measurement and analysis of thermal behavior is relatively scarce, especially for virtual and augmented reality applications. To address this, we build a temperature measurement and analysis framework for virtual and augmented reality applications using a robot, infrared cameras, and smartphones. Using the framework, we analyze a comprehensive set of data including the battery power consumption, the smartphone surface temperature, and the temperature of key hardware components, such as the battery, CPU, GPU, and WiFi module. When a 360° virtual reality video is streamed to a smartphone, the phone surface temperature reaches nearly 39 °C. Also, the temperature of the phone surface and its main hardware components generally increases until the end of our 20-minute experiments despite the thermal control undertaken by smartphones, such as CPU/GPU frequency scaling. Our thermal analysis results for a popular AR game are even more serious: the battery power consumption frequently exceeds the thermal design power by 20-80%, while the peak battery, CPU, GPU, and WiFi module temperatures exceed 45, 70, 70, and 65 °C respectively.
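The authors' framework measures temperature externally with a robot and infrared cameras; as a complementary, assumed software-side sketch (not part of their setup), the snippet below polls the on-device thermal sensors that Android typically exposes under /sys/class/thermal via adb. Zone names, units and paths vary by device.

```python
import subprocess

def adb_read(path: str) -> str:
    """Read a file on a connected Android device via adb."""
    out = subprocess.run(["adb", "shell", "cat", path],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def read_thermal_zones(max_zones: int = 20):
    """Return {sensor_name: temperature_in_C} from /sys/class/thermal.

    Most devices report millidegrees Celsius; some report degrees directly,
    so values below 1000 are taken as already being in degrees C.
    """
    readings = {}
    for i in range(max_zones):
        base = f"/sys/class/thermal/thermal_zone{i}"
        try:
            name = adb_read(f"{base}/type")
            raw = float(adb_read(f"{base}/temp"))
        except subprocess.CalledProcessError:
            break  # no more zones on this device
        readings[name] = raw / 1000.0 if raw >= 1000 else raw
    return readings

if __name__ == "__main__":
    for sensor, temp_c in read_thermal_zones().items():
        print(f"{sensor}: {temp_c:.1f} C")
```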
{"title":"A Smartphone Thermal Temperature Analysis for Virtual and Augmented Reality","authors":"Xiaoyang Zhang, Harshit Vadodaria, Na Li, K. Kang, Yao Liu","doi":"10.1109/AIVR50618.2020.00061","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00061","url":null,"abstract":"Emerging virtual and augmented reality applications are envisioned to significantly enhance user experiences. An important issue related to user experience is thermal management in smartphones widely adopted for virtual and augmented reality applications. Although smartphone overheating has been reported many times, a systematic measurement and analysis of their thermal behaviors is relatively scarce, especially for virtual and augmented reality applications. To address the issue, we build a temperature measurement and analysis framework for virtual and augmented reality applications using a robot, infrared cameras, and smartphones. Using the framework, we analyze a comprehensive set of data including the battery power consumption, smartphone surface temperature, and temperature of key hardware components, such as the battery, CPU, GPU,and WiFi module. When a 360° virtual reality video is streamed to a smartphone, the phone surface temperature reaches near $39^{circ} mathrm{C}$. Also, the temperature of the phone surface and its main hardware components generally increases till the end of our 20 -minute experiments despite thermal control undertaken by smartphones, such as CPU/GPU frequency scaling. Our thermal analysis results of a popular AR game are even more serious: the battery power consumption frequently exceeds the thermal design power by 20-80 %, while the peak battery, CPU, GPU, and WiFi module temperature exceeds $45,70,70$, and $65^{circ} mathrm{C}$ respectively.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131858813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Under The (Plastic) Sea - Sensitizing People Toward Ecological Behavior Using Virtual Reality Controlled by Users’ Physical Activity
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00036
Carolin Straßmann, Alexander Arntz, S. Eimler
As environmental pollution continues to expand, new ways of raising awareness of its consequences need to be explored. Virtual reality has emerged as an effective tool for behavioral change. This paper investigates whether virtual reality applications controlled through physical activity can produce an even stronger effect, since physical activity enhances attention and recall performance by stimulating working memory through motor functions. This was tested in an experimental study using a virtual reality head-mounted display in combination with the ICAROS fitness device, enabling participants to explore either a plastic-polluted or a non-polluted sea. Results indicated that using a regular controller elicits more presence and a more intense Flow experience than the ICAROS condition, which participants controlled via their physical activity. Moreover, the plastic-polluted stimulus was more effective in inducing attitude change than the non-polluted sea.
{"title":"Under The (Plastic) Sea - Sensitizing People Toward Ecological Behavior Using Virtual Reality Controlled by Users’ Physical Activity","authors":"Carolin Straßmann, Alexander Arntz, S. Eimler","doi":"10.1109/AIVR50618.2020.00036","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00036","url":null,"abstract":"As environmental pollution continues to expand, new ways for raising awareness for the consequences need to be explored. Virtual reality has emerged as an effective tool for behavioral change. This paper investigates if virtual reality applications controlled through physical activity can support an even stronger effect, because it enhances the attention and recall performance by stimulating the working memory through motor functions. This was tested in an experimental study using a virtual reality head-mounted display in combination with the ICAROS fitness device enabling participants to explore either a plastic-polluted or non-polluted sea. Results indicated that using a regular controller elicits more presence and a more intense Flow experience than the ICAROS condition, which people controlled via their physical activity. Moreover, the plastic-polluted stimulus was more effective in inducing attitude change than a nonpolluted sea.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114782337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Exploring the possibilities of Extended Reality in the world of firefighting
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00055
Janne Heirman, S. Selleri, Tom De Vleeschauwer, Charles Hamesse, Michel Bellemans, Evarest Schoofs, R. Haelterman
Firefighting is a crucial part of the Navy’s training program, as the Navy must ensure safety on board. This training is dangerous, expensive and environmentally unfriendly. Therefore, the Navy is looking for a safer form of training that can enhance the current one. Extended Reality technology offers new ways of training, with the promise of alleviating issues related to training danger, cost and environmental pollution. In this work, we develop and evaluate a Virtual Reality simulator and a proof of concept of a Mixed Reality simulator, together with a firehose controller adapted to the needs of the Navy’s firefighting training program.
{"title":"Exploring the possibilities of Extended Reality in the world of firefighting","authors":"Janne Heirman, S. Selleri, Tom De Vleeschauwer, Charles Hamesse, Michel Bellemans, Evarest Schoofs, R. Haelterman","doi":"10.1109/AIVR50618.2020.00055","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00055","url":null,"abstract":"Firefighting is a crucial part of the Navy’s training program, as it must ensure the safety on board. This training is dangerous, expensive and environmentally unfriendly. Therefore, the Navy is looking for a safer form of training that can enhance the current one. Extended Reality technology offers new ways of training, with the promise to alleviate issues related to training danger, costs and environmental pollution. In this work, we develop and evaluate a Virtual Reality simulator and a proof of concept of a Mixed Reality simulator, together with a firehose controller adapted to the needs of the Navy’s firefighting training program.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"5 6","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120864127","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Efficacy of a Virtual Reality-Based Mindfulness Intervention
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00035
Caglar Yildirim, Tara O'Grady
Mindfulness can be defined as increased awareness of and sustained attentiveness to the present moment. Recently, there has been growing interest in the application of mindfulness to empirical research on wellbeing, and in the use of virtual reality (VR) environments and 3D interfaces as a conduit for mindfulness training. Accordingly, the current experiment investigated whether a brief VR-based mindfulness intervention could induce a greater level of state mindfulness when compared to an audio-based intervention and a control group. Results indicated that both mindfulness interventions, VR-based and audio-based, induced a greater state of mindfulness compared to the control group. Participants in the VR-based mindfulness intervention group reported a greater state of mindfulness than those in the guided audio group, indicating that the immersive mindfulness intervention was more robust. Collectively, these results provide empirical support for the efficacy of a brief VR-based mindfulness intervention in inducing a robust state of mindfulness in laboratory settings.
{"title":"The Efficacy of a Virtual Reality-Based Mindfulness Intervention","authors":"Caglar Yildirim, Tara O'Grady","doi":"10.1109/AIVR50618.2020.00035","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00035","url":null,"abstract":"Mindfulness can be defined as increased awareness of and sustained attentiveness to the present moment. Recently, there has been a growing interest in the applications of mindfulness for empirical research in wellbeing and the use of virtual reality (VR) environments and 3D interfaces as a conduit for mindfulness training. Accordingly, the current experiment investigated whether a brief VR-based mindfulness intervention could induce a greater level of state mindfulness, when compared to an audio-based intervention and control group. Results indicated two mindfulness interventions, VRbased and audio-based, induced a greater state of mindfulness, compared to the control group. Participants in the VR-based mindfulness intervention group reported a greater state of mindfulness than those in the guided audio group, indicating the immersive mindfulness intervention was more robust. Collectively, these results provide empirical support for the efficaciousness of a brief VR-based mindfulness intervention in inducing a robust state of mindfulness in laboratory settings.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"54 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126684446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
CrowdAR Table: An AR system for Real-time Interactive Crowd Simulation
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00021
Noud Savenije, Roland Geraerts, Wolfgang Hürst
Spatial augmented reality, where virtual information is projected into a user’s real environment, provides tremendous opportunities for immersive analytics. In this demonstration, we focus on real-time interactive crowd simulation, that is, the illustration of how crowds move under certain circumstances. Our augmented reality system, called CrowdAR, allows users to study a crowd’s motion behavior by projecting the output of our simulation software onto an augmented reality table and objects on this table. Our prototype system is currently being revised and extended to serve as a museum exhibit. Using real-time interaction, it can teach scientific principles about simulations and illustrate how these, in combination with augmented reality, can be used for crowd behavior analysis.
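The simulation software behind CrowdAR is not detailed in the abstract; the sketch below is a generic, assumed illustration of the kind of per-frame update a real-time crowd simulation performs, moving each agent toward its goal while pushing apart agents that get too close. All parameters are arbitrary placeholders.

```python
import numpy as np

def step(positions: np.ndarray, goals: np.ndarray,
         speed: float = 1.4, avoid_radius: float = 0.5, dt: float = 1 / 30) -> np.ndarray:
    """One simulation frame: goal seeking plus simple neighbour repulsion.

    positions, goals: (N, 2) arrays of agent and goal coordinates in metres.
    """
    to_goal = goals - positions
    dist = np.linalg.norm(to_goal, axis=1, keepdims=True) + 1e-9
    velocity = speed * to_goal / dist                           # desired walking velocity

    # Push agents apart when they get closer than avoid_radius.
    diff = positions[:, None, :] - positions[None, :, :]        # pairwise offsets (N, N, 2)
    d = np.linalg.norm(diff, axis=2) + np.eye(len(positions))   # avoid division by zero on the diagonal
    push = (diff / d[..., None]) * np.maximum(avoid_radius - d, 0.0)[..., None]
    velocity += push.sum(axis=1) * 4.0                          # repulsion gain (arbitrary)

    return positions + velocity * dt

if __name__ == "__main__":
    pos = np.array([[0.0, 0.0], [0.3, 0.0]])
    goal = np.array([[5.0, 0.0], [5.0, 1.0]])
    for _ in range(3):
        pos = step(pos, goal)
    print(pos)
```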
{"title":"CrowdAR Table An AR system for Real-time Interactive Crowd Simulation","authors":"Noud Savenije, Roland Geraerts, Wolfgang Hürst","doi":"10.1109/AIVR50618.2020.00021","DOIUrl":"https://doi.org/10.1109/AIVR50618.2020.00021","url":null,"abstract":"Spatial augmented reality, where virtual information is projected into a user’s real environment, provides tremendous opportunities for immersive analytics. In this demonstration, we focus on real-time interactive crowd simulation, that is, the illustration of how crowds move under certain circumstances. Our augmented reality system, called CrowdAR, allows users to study a crowd’s motion behavior by projecting the output of our simulation software onto an augmented reality table and objects on this table. Our prototype system is currently being revised and extended to serve as a museum exhibit. Using real-time interaction, it can teach scientific principles about simulations and illustrate how these, in combination with augmented reality, can be used for crowd behavior analysis.","PeriodicalId":348199,"journal":{"name":"2020 IEEE International Conference on Artificial Intelligence and Virtual Reality (AIVR)","volume":"145 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123778225","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}