Exploring Visualisations for Financial Statements in Virtual Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00018
Tanja Kojić, Sandra Ashipala, S. Möller, Jan-Niklas Voigt-Antons
Recent developments in Virtual Reality (VR) provide vast opportunities for data visualization. Individuals must be motivated to understand the data in their personal financial statements, as better understanding brings more efficient money management and a healthier, future-minded focus on personal financial planning. This paper investigates how a 3D VR visualization scenario influences an immersed user's capability to understand and engage with personal financial statement data, compared to a scenario in which the user interacts with 2D personal financial statement data on paper. This was achieved by creating a 2D paper prototype and a 3D VR prototype visualization. Both prototypes consisted of chart representations of personal financial statement data and were tested in a user study. Participants (N=23) were given a set of tasks (such as "Find the quarter in which the least amount of expenses was spent") to solve using the 2D or 3D financial statements. Results show that participants rated interaction with 2D data on paper as significantly more natural than interaction with 3D data in VR. Participants also reported perceiving a significantly longer delay between their actions and the corresponding outcomes with 3D data in VR. However, in terms of effectiveness, measured as the percentage of correctly solved tasks, both versions enabled participants to answer almost all given tasks successfully. These results show that VR could be used as a medium for analyzing financial statements. Future developments could take several additional directions: exploring such data visualization is essential not only for the financial sector but also for other domains such as tax or legal reports.
Augmented Reality and Autism Spectrum Disorder Rehabilitation: Scoping review
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00051
Hanin A. Almurashi, Rahma Bouaziz
Augmented reality (AR) is currently integrated into several areas (medical, therapeutic, educational, entertainment). One important area in which AR has been incorporated is the rehabilitation of children with autism spectrum disorder (ASD), but it does not yet serve this field well. This report provides an overview of how AR technology can be used for children with ASD, from an entertainment perspective, to develop their communication skills. Solutions to problems raised in previous research are discussed and developed toward new hypotheses on how AR can better support the behavioral rehabilitation of autistic children. The report proposes a future hypothesis and a research plan to address the problem.
Interactive Design of Gallery Walls via Mixed Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00013
Haikun Huang, Yuxuan Zhang, Tomer Weiss, Rebecca W. Perry, L. Yu
We present a novel interactive design tool that allows users to create and visualize gallery walls via a mixed reality device. To use our tool, a user selects a wall to decorate and chooses a focal art item. Our tool then helps the user complete their design by optionally recommending additional art items or automatically completing both the selection and placement of additional art items. Our tool holistically considers common design criteria such as alignment, color, and style compatibility in the synthesis of a gallery wall. Through a mixed reality device, such as a Magic Leap One headset, the user can instantly visualize the gallery wall design in situ and can interactively modify the design in collaboration with our tool's suggestion engine. We describe the suggestion engine and its adaptability to users with different design goals. We also evaluate our mixed-reality-based tool for creating gallery wall designs and compare it with a 2D interface, providing insights for devising mixed reality interior design applications.
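As a rough illustration of how such design criteria might be combined into a single objective, the toy scoring function below penalizes misaligned vertical centers and abrupt color changes between neighboring items. The field names and weights are hypothetical and are not taken from the paper's suggestion engine.

```python
import numpy as np

def wall_cost(items, w_align=1.0, w_color=0.5):
    """Toy layout cost: lower is better.

    Each item is a dict with 'y' (vertical center in meters) and
    'color' (mean RGB in [0, 1]); both fields are hypothetical.
    """
    ys = np.array([it["y"] for it in items])
    colors = np.array([it["color"] for it in items])
    # Alignment term: variance of the vertical centers.
    align = ys.var()
    # Color-compatibility term: total RGB jump between neighbors.
    color = np.linalg.norm(np.diff(colors, axis=0), axis=1).sum()
    return w_align * align + w_color * color

# Example: three frames, the middle one hung slightly low.
layout = [
    {"y": 1.50, "color": [0.8, 0.7, 0.6]},
    {"y": 1.35, "color": [0.7, 0.7, 0.6]},
    {"y": 1.50, "color": [0.6, 0.6, 0.5]},
]
print(wall_cost(layout))
```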
Implementations of the Gesture-based Immersive Interaction and ML-driven Sound Generation for Haptic Companion Doll
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00078
Lee Jen Tun, R. J. Rajapakse, Y. Hung, K. Miyata
The main purpose of virtual reality is to immerse a user in a virtual environment. However, the most common criticism of companion robots is that they cannot provide effective tactile feedback. In this research, we propose an interactive companion doll with a tail mechanism controlled by the user's hand gestures, which also generates sound feedback using OSC and Wekinator. For the VR content, we designed six emotions for the companion doll so that it can evoke different feelings in the user while they interact with it in the virtual and physical worlds.
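For readers unfamiliar with the OSC/Wekinator pipeline the abstract mentions, the sketch below shows how gesture features could be streamed to Wekinator over OSC using the python-osc package. The port and address are Wekinator's documented defaults; the feature vector is a made-up example, not the paper's actual gesture encoding.

```python
# pip install python-osc
from pythonosc.udp_client import SimpleUDPClient

# Wekinator's documented defaults: it listens for feature vectors
# on port 6448 at the address /wek/inputs (both are configurable).
client = SimpleUDPClient("127.0.0.1", 6448)

def send_hand_gesture(features):
    """Send one frame of hand-gesture features (a list of floats)."""
    client.send_message("/wek/inputs", [float(f) for f in features])

# Hypothetical example: five normalized finger-bend values.
send_hand_gesture([0.1, 0.8, 0.7, 0.6, 0.2])
```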
Mirrorlabs - creating accessible Digital Twins of robotic production environment with Mixed Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00017
D. Aschenbrenner, Jonas S. I. Rieder, D. V. Tol, J. Dam, Z. Rusák, J. Blech, Mohammad Azangoo, Salo Panu, Karl Kruusamäe, Houman Masnavi, Igor Rybalskii, A. Aabloo, M. Petry, Gustavo Teixeira, B. Thiede, P. Pedrazzoli, Andrea Ferrario, Michele Foletti, Matteo Confalonieri, Daniele Bertaggia, Thodoris Togias, S. Makris
How can recorded production data be visualized in Virtual Reality? How can state-of-the-art Augmented Reality displays be used to show robot data? This paper introduces an open-source ICT framework approach for combining Unity-based Mixed Reality applications with robotic production equipment using ROS Industrial. This publication gives details on the implementation and demonstrates its use as a data analysis tool in the context of scientific exchange within the area of Mixed Reality enabled human-robot co-production.
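As a minimal sketch of the ROS side of such a framework (assuming ROS 1 with rospy; the actual Mirrorlabs bridge to Unity is not shown), the node below subscribes to the conventional /joint_states topic that a Unity-based digital twin would mirror:

```python
#!/usr/bin/env python
# Minimal listener for a robot's joint states -- the kind of stream a
# Unity-based digital twin would mirror. /joint_states is the usual
# ROS convention; the Unity transport itself is outside this sketch.
import rospy
from sensor_msgs.msg import JointState

def on_joint_state(msg):
    # msg.name and msg.position are parallel lists of joint names/angles.
    rospy.loginfo("joints: %s", dict(zip(msg.name, msg.position)))

if __name__ == "__main__":
    rospy.init_node("twin_joint_listener")
    rospy.Subscriber("/joint_states", JointState, on_joint_state)
    rospy.spin()
```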
Plane-Based Local Behaviors for Multi-Agent 3D Simulations with Position-Based Dynamics
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00044
R. Sharma, Tomer Weiss, Marcelo Kallmann
Position-Based Dynamics (PBD) has been shown to provide a flexible framework for modeling per-agent collision avoidance behavior in crowd and multi-agent simulations in planar scenarios. In this work, we propose to extend the approach such that collision avoidance reactions can utilize, in a controlled way, the volumetric 3D space around each agent when deciding how to avoid collisions with other agents. We propose to use separation planes for collision avoidance, either preferred or automatically determined ones. Our results demonstrate the ability to control the spatial 3D behavior of simulated agents by constraining the produced movements according to the separation planes. Our method is generic and can be integrated with different crowd simulation techniques. We also compare our results with a 3D collision avoidance method based on Reciprocal Velocity Obstacles (RVOs).
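To make the idea concrete, here is a minimal sketch, in the spirit of PBD but not the authors' implementation, of a pairwise collision constraint whose position correction is restricted to a given separation plane:

```python
import numpy as np

def project_collision(p_i, p_j, radius, plane_normal):
    """PBD-style pairwise collision constraint C = |p_i - p_j| - 2r >= 0.

    The position correction is projected onto a separation plane
    (given by its unit normal), so agents resolve the collision by
    moving within that plane instead of along the full 3D gradient.
    """
    n = plane_normal / np.linalg.norm(plane_normal)
    d = p_i - p_j
    dist = np.linalg.norm(d)
    c = dist - 2.0 * radius
    if c >= 0.0:                    # constraint already satisfied
        return p_i, p_j
    grad = d / max(dist, 1e-9)      # constraint gradient for agent i
    grad -= np.dot(grad, n) * n     # keep only the in-plane component
    corr = -0.5 * c * grad          # split the correction between agents
    return p_i + corr, p_j - corr

# Two overlapping agents separated within the horizontal (z-normal) plane:
a, b = project_collision(np.zeros(3), np.array([1.0, 0, 0]),
                         radius=0.6, plane_normal=np.array([0, 0, 1.0]))
print(a, b)  # pushed apart to a distance of 2 * 0.6
```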
Malicious Design in AIVR, Falsehood and Cybersecurity-oriented Immersive Defenses
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00031
Nadisha-Marie Aliman, L. Kester
Advancements in the AI field unfold tremendous opportunities for society. Simultaneously, it becomes increasingly important to address emerging ramifications. The focus is often set on ethical and safe design that forestalls unintentional failures. However, cybersecurity-oriented approaches to AI safety additionally consider instantiations of intentional malice, including unethical malevolent AI design. Recently, an analogous emphasis on malicious actors has been expressed regarding security and safety for virtual reality (VR). In this vein, while the intersection of AI and VR (AIVR) offers a wide array of beneficial cross-fertilization possibilities, it is responsible to anticipate future malicious AIVR design from the onset, given the potential socio-psycho-technological impacts. For a simplified illustration, this paper analyzes the conceivable use case of generative AI (here, deepfake techniques) utilized for disinformation in immersive journalism. In our view, defenses against such future AIVR safety risks related to falsehood in immersive settings should be conceived transdisciplinarily from an immersive co-creation stance. As a first step, we motivate a cybersecurity-oriented procedure to generate defenses via immersive design fictions. Overall, there may be no panacea, but updatable transdisciplinary tools, including AIVR itself, could be used to incrementally defend against malicious actors in AIVR.
3D Model Retrieval Using Constructive Solid Geometry in Virtual Reality
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00077
Samuel Börlin, Ralph Gasser, Florian Spiess, H. Schuldt
3D models play an increasingly important role in various areas, ranging from engineering to the cultural heritage domain. Therefore, tools to effectively and efficiently manage, explore, and search large 3D model collections have become more important over the years. Most solutions so far use conventional 2D user interfaces, and interaction relies on mouse and keyboard input. In this paper, we present vitrivr-VR, a 3D model retrieval system featuring a virtual reality (VR) user interface based on the multimedia search system vitrivr. Query formulation and result presentation take place in a VR environment in which users immerse themselves. vitrivr-VR enables the sculpting of 3D models through constructive solid geometry (CSG) using a VR controller and the use of these sculpted objects as query objects. To the best of our knowledge, vitrivr-VR is the first system that combines CSG and VR to enable 3D model retrieval.
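As background on the CSG operations involved, the self-contained sketch below expresses union, intersection, and subtraction on signed distance functions, one common way to implement CSG; it illustrates the operations only and is unrelated to vitrivr-VR's actual sculpting code.

```python
import numpy as np

def sphere_sdf(center, radius):
    """Signed distance function of a sphere (negative inside)."""
    c = np.asarray(center, dtype=float)
    return lambda p: np.linalg.norm(p - c, axis=-1) - radius

# Classic CSG operations on signed distance functions:
def union(a, b):      return lambda p: np.minimum(a(p), b(p))
def intersect(a, b):  return lambda p: np.maximum(a(p), b(p))
def subtract(a, b):   return lambda p: np.maximum(a(p), -b(p))

# A sphere with a bite taken out of it:
shape = subtract(sphere_sdf([0, 0, 0], 1.0),
                 sphere_sdf([0.8, 0, 0], 0.6))
print(shape(np.array([0.0, 0.0, 0.0])))  # negative -> inside the result
```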
Photorealistic avatars to enhance the efficacy of Self-attachment psychotherapy
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00022
I. Ghaznavi, Duncan Gillies, D. Nicholls, A. Edalat
We have designed, developed, and tested an immersive virtual reality (VR) platform for practicing the protocols of Self-attachment psychotherapy. We made use of customized photorealistic avatars for the implementation of both the high-end version (based on Facebook's Oculus) and the low-end version (based on Google Cardboard) of our platform. Under the Self-attachment therapeutic framework, the causes of mental disorders such as chronic anxiety and depression are traced back to the individual's insecure attachment to their primary caregiver during childhood and their subsequent problems in affect regulation. The conventional approach to Self-attachment (without VR) requires that individuals use their childhood photographs to recall childhood memories and then imagine that the child they were is present with them. They thus establish a compassionate relationship with their childhood self and then, using love songs and dancing, create an affectional bond with them. Their adult self subsequently role-plays a good parent and interacts with their imagined childhood self to perform various developmental and re-parenting activities. The goal is to enhance their capacity for self-regulation of emotion, which can lead them toward earned secure attachment. We hypothesize that our immersive VR platform, which enables users to interact with a customized 3D photorealistic avatar of their childhood self, offers either a better alternative or at least a complementary visual tool to the conventional imaginal approach to Self-attachment. The platform was developed in Unity 3D, a cross-platform game engine, and takes advantage of the itSeez3D Avatar SDK to generate a customized photorealistic 3D avatar head from a 2D childhood image of the user. The platform also offers facial and body animations for some basic emotional states, such as happy, sad, scared, and joyful, and it allows modifications to the avatar's body (height/width) and clothing color. A study comparing the avatar-based (VR) approach to Self-attachment with the conventional photo-based approach showed promising results. Almost 85% of the participants reported that their photorealistic childhood avatar in VR was more relatable than their childhood photos. Both the low-end and high-end VR-based approaches were unanimously reported to be more effective than the conventional imaginal approach. Participants reported that the high-end version of the VR platform was more realistic and immersive than the low-end mobile VR version.
Learning Kinematic Machine Models from Videos
Pub Date: 2020-12-01 | DOI: 10.1109/AIVR50618.2020.00028
Lucas Thies, M. Stamminger, F. Bauer
VR/AR applications, such as virtual training or coaching, often require a digital twin of a machine. Such a virtual twin must also include a kinematic model that defines its motion behavior. This behavior is usually expressed by constraints in a physics engine. In this paper, we present a system that automatically derives the kinematic model of a machine from RGB video with an optional depth channel. Our system records a live session while a user performs all typical machine movements. It then searches for trajectories and converts them into linear, circular and helical constraints. Our system can also detect kinematic chains and coupled constraints, for example, when a crank moves a toothed rod.
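As an illustration of the kind of geometric fitting such a system performs (a standard PCA line fit, not the authors' algorithm), the sketch below classifies a recorded 3D trajectory as a linear, i.e. prismatic, constraint when a best-fit line explains it within tolerance:

```python
import numpy as np

def fit_linear_constraint(points, tol=1e-3):
    """Fit a line to a recorded 3D trajectory via PCA.

    Returns (origin, direction, is_linear): the motion is classified
    as a linear (prismatic) constraint when the RMS residual of the
    best-fit line stays below `tol`.
    """
    pts = np.asarray(points, dtype=float)
    origin = pts.mean(axis=0)
    centered = pts - origin
    # The dominant right-singular vector is the best-fit direction.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]
    # Residual: RMS distance of the points from the fitted line.
    proj = centered @ direction
    offsets = centered - np.outer(proj, direction)
    residual = np.sqrt(np.mean(np.sum(offsets**2, axis=1)))
    return origin, direction, residual < tol

# A noiseless sliding motion along the x-axis is detected as linear:
track = [[t, 0.0, 0.0] for t in np.linspace(0.0, 0.5, 20)]
print(fit_linear_constraint(track))
```

Circular and helical constraints could be detected analogously by fitting a circle or helix and comparing residuals, with the lowest-residual model winning.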