
Proceedings of the 24th ACM Symposium on Virtual Reality Software and Technology: latest publications

The physical-virtual table: exploring the effects of a virtual human's physical influence on social interaction
Myungho Lee, Nahal Norouzi, G. Bruder, P. Wisniewski, G. Welch
In this paper, we investigate the effects of the physical influence of a virtual human (VH) in the context of face-to-face interaction in augmented reality (AR). In our study, participants played a tabletop game with a VH, in which each player takes a turn and moves their own token along the designated spots on the shared table. We compared two conditions as follows: the VH in the virtual condition moves a virtual token that can only be seen through AR glasses, while the VH in the physical condition moves a physical token as the participants do; therefore the VH's token can be seen even in the periphery of the AR glasses. For the physical condition, we designed an actuator system underneath the table. The actuator moves a magnet under the table which then moves the VH's physical token over the surface of the table. Our results indicate that participants felt higher co-presence with the VH in the physical condition, and participants assessed the VH as a more physical entity compared to the VH in the virtual condition. We further observed transference effects when participants attributed the VH's ability to move physical objects to other elements in the real world. Also, the VH's physical influence improved participants' overall experience with the VH. We discuss potential explanations for the findings and implications for future shared AR tabletop setups.
DOI: 10.1145/3281505.3281533 (published 2018-11-28)
Citations: 27
VR safari park: a concept-based world building interface using blocks and world tree
Shotaro Ichikawa, Kazuki Takashima, Anthony Tang, Y. Kitamura
We present a concept-based world building approach, realized in a system called VR Safari Park, which allows users to rapidly create and manipulate a world simulation. Conventional world building tools focus on the manipulation and arrangement of entities to set up the simulation, which is time consuming as it requires frequent view and entity manipulations. Our approach focuses on a far simpler mechanic, where users add virtual blocks which represent world entities (e.g. animals, terrain, weather, etc.) to a World Tree, which represents the simulation. In so doing, the World Tree provides a quick overview of the simulation, and users can easily set up scenarios in the simulation without having to manually perform fine-grain manipulations on world entities. A preliminary user study found that the proposed interface is effective and usable for novice users without prior immersive VR experience.
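The abstract does not describe how the World Tree is implemented, but the core idea — entity blocks attached to a tree that doubles as a quick overview of the simulation — can be sketched as follows. All names here (`WorldNode`, `add_block`, `overview`) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the World Tree idea: entity "blocks" are attached
# to a tree whose structure defines the running simulation, and the tree
# itself serves as the at-a-glance overview the abstract mentions.

class WorldNode:
    def __init__(self, label):
        self.label = label          # e.g. "terrain", "lion", "rain"
        self.children = []

    def add_block(self, label):
        """Attach a new entity block and return it for further nesting."""
        child = WorldNode(label)
        self.children.append(child)
        return child

    def overview(self, depth=0):
        """Render the quick textual overview the tree affords."""
        lines = ["  " * depth + self.label]
        for child in self.children:
            lines.extend(child.overview(depth + 1))
        return lines

world = WorldNode("world")
savanna = world.add_block("savanna terrain")
savanna.add_block("lion")
world.add_block("rain")
print("\n".join(world.overview()))
```

Because adding a block is a single tree operation, a scenario can be set up without the per-entity view and placement manipulations that conventional world-building tools require.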
DOI: 10.1145/3281505.3281517 (published 2018-11-28)
Citations: 3
TransFork
Ying-Li Lin, Tsai-Yi Chou, Yu-Cheng Lieo, Yu-Cheng Huang, Ping-Hsuan Han
When people eat, taste perception is complex and easily influenced by the other senses. Visual, olfactory, and haptic cues, and even past experiences, can affect human perception, which in turn creates more taste possibilities. We present TransFork, an eating tool with olfactory feedback that augments the tasting experience with a video see-through head-mounted display. Additionally, we design a recipe via preliminary experiments to find a taste-conversion formula that can enhance the flavor of foods and change how users perceive and recognize them. In this demonstration, we prepare a mini feast of bite-sized fruit; participants use TransFork to eat food A while smelling the scent of food B, stored in an aromatic box, via airflow guiding. Before they deliver the food to their mouths, the head-mounted display superimposes the color of food B on food A, triggered by the QR code on the aromatic box. With these augmented reality techniques and the recipe, the tasting experience can be augmented or enhanced, a promising and playful approach to eating.
DOI: 10.1145/3281505.3281560 (published 2018-11-28)
Citations: 11
A low-cost omni-directional VR walking platform by thigh supporting and motion estimation
Wataru Wakita, Tomoyuki Takano, Toshiyuki Hadama
We propose a low-cost omni-directional VR walking platform based on thigh support and motion estimation. Specifically, the platform supports the user's thighs in the walking direction, and the user makes stepping motions while leaning in that direction, making it possible to shift the center of gravity of the foot sole as in actual walking. Moreover, our platform estimates the foot movement, which is constrained by the thigh-supporting part, using load cells around the user's thighs, and renders to the HMD according to the estimated foot movement. As a result, our platform makes the walking sensation more realistic at low cost.
DOI: 10.1145/3281505.3281570 (published 2018-11-28)
Citations: 1
Automatic 3D modeling of artwork and visualizing audio in an augmented reality environment
Elijah Schwelling, Kyungjin Yoo
In recent years, traditional art museums have begun to use AR/VR technology to make visits more engaging and interactive. This paper details an application which provides features designed to be immediately engaging and educational to museum visitors within an AR view. The application superimposes an automatically generated 3D representation over a scanned artwork, along with the work's authorship, title, and date of creation. A GUI allows the user to exaggerate or decrease the depth scale of the 3D representation, as well as to search for related works of music. Given this music as audio input, the generated 3D model will act as an audio visualizer by changing depth scale based on input frequency.
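The abstract says the depth scale of the 3D model changes with the input frequency but does not give the transfer function, so the following is only a minimal sketch of the idea, assuming a clamped linear mapping and illustrative frequency and scale ranges.

```python
# Illustrative sketch of the audio-driven depth scaling described above:
# map the dominant frequency of the current audio frame to a depth-scale
# factor for the generated 3D relief. The ranges and the linear mapping
# are assumptions; the paper does not specify the exact transfer function.

def depth_scale(freq_hz, f_min=20.0, f_max=2000.0,
                scale_min=0.5, scale_max=3.0):
    """Linearly map a frequency (Hz) to a depth-scale factor, clamped."""
    t = (freq_hz - f_min) / (f_max - f_min)
    t = min(1.0, max(0.0, t))           # clamp to [0, 1]
    return scale_min + t * (scale_max - scale_min)

# A bass-heavy frame flattens the relief; a bright frame exaggerates it.
print(depth_scale(20.0))    # 0.5
print(depth_scale(2000.0))  # 3.0
```

The same function could also serve the GUI slider the abstract mentions, since both paths reduce to choosing a depth-scale factor for the generated model.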
DOI: 10.1145/3281505.3281617 (published 2018-11-28)
Citations: 2
VR sickness measurement with EEG using DNN algorithm
D. Jeong, Sangbong Yoo, Yun Jang
Recently, VR technology has been developing rapidly and attracting public attention. However, VR sickness remains an unsolved problem in the VR experience. VR sickness is presumed to be caused by crosstalk between the sensory and cognitive systems [1]. However, since there is no objective way to measure the sensory and cognitive systems, VR sickness is difficult to measure. In this paper, we collect EEG data while participants experience VR videos. We propose a Deep Neural Network (DNN) deep-learning algorithm that measures VR sickness from electroencephalogram (EEG) data. Experiments were conducted to find an appropriate EEG data preprocessing method and a DNN structure suitable for deep learning, and an accuracy of 99.12% was obtained in our study.
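The abstract does not disclose the network architecture or the preprocessing, so the following is only a hypothetical minimal forward pass of the kind of model described: an MLP mapping a preprocessed EEG feature vector to a sickness probability. The layer sizes, feature count, and random weights are assumptions for illustration; a real system would train these weights on labeled EEG recordings.

```python
import numpy as np

# Hypothetical sketch of a DNN over preprocessed EEG features:
# EEG feature vector -> hidden layer -> P(VR sickness).
rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_features = 32                       # e.g. band powers per EEG channel
W1, b1 = rng.normal(size=(16, n_features)) * 0.1, np.zeros(16)
W2, b2 = rng.normal(size=(1, 16)) * 0.1, np.zeros(1)

def predict_sickness(eeg_features):
    """Return P(VR sickness) for one preprocessed EEG feature vector."""
    h = relu(W1 @ eeg_features + b1)
    return float(sigmoid(W2 @ h + b2)[0])

p = predict_sickness(rng.normal(size=n_features))
print(0.0 <= p <= 1.0)
```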
DOI: 10.1145/3281505.3283387 (published 2018-11-28)
Citations: 11
A real-time golf-swing training system using sonification and sound image localization
Yuka Tanaka, Homare Kon, H. Koike
There are real-time training systems that teach the correct golf-swing form by providing visual feedback to the user. However, real-time visual feedback requires users to watch the display during their motion, which leads to incorrect posture. This paper proposes a real-time golf-swing training system using sonification and sound image localization. The system provides real-time audio feedback based on the difference between pre-recorded model data and real-time user data, consisting of the roll, pitch, and yaw angles of the golf club shaft. The system also uses sound image localization so that the user hears the audio feedback from the direction of the club head. The user can thus recognize the current posture of the club without shifting their gaze.
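As a rough illustration of such a sonification mapping: the paper's actual error metric and transfer function are not given in the abstract, so `swing_error` (RMS over the three shaft angles) and `feedback_tone` (linear error-to-frequency mapping) are assumptions chosen for clarity.

```python
import math

# Hedged sketch of the feedback mapping described above: the angular error
# between the model swing and the user's swing drives the feedback tone.
# Sound image localization would then place this tone at the club head's
# direction; only the error-to-pitch half is sketched here.

def swing_error(model, user):
    """RMS difference over (roll, pitch, yaw) shaft angles, in degrees."""
    return math.sqrt(sum((m - u) ** 2 for m, u in zip(model, user)) / 3)

def feedback_tone(error_deg, base_hz=440.0, hz_per_deg=10.0, max_hz=1760.0):
    """Larger form error -> higher pitch, capped at max_hz."""
    return min(max_hz, base_hz + hz_per_deg * error_deg)

model_frame = (10.0, 45.0, 5.0)     # roll, pitch, yaw of the model swing
user_frame = (12.0, 40.0, 9.0)      # same angles from the user's club
err = swing_error(model_frame, user_frame)
print(round(feedback_tone(err), 1))
```

Run per frame, a mapping like this lets the user steer their swing toward the model by listening for the tone to fall back to the base pitch.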
DOI: 10.1145/3281505.3281604 (published 2018-11-28)
Citations: 13
Designing dynamic aware interiors
Y. Kitamura, Kazuki Takashima, Kazuyuki Fujita
We are pursuing a vision of reactive interior spaces that are aware of people's actions and transform according to changing needs. We envision furniture and walls that act as interactive displays and shapeshift into the correct physical form, with the appropriate interactive visual content and modality. This paper briefly describes our proposal, based on our recent efforts toward realizing this vision.
DOI: 10.1145/3281505.3281603 (published 2018-11-28)
Citations: 0
An AR system for artistic creativity education
Jiajia Tan, Boyang Gao, Xiaobo Lu
Creativity and innovation training is the core of art education, and modern technology provides more effective tools to help students develop artistic creativity. In this paper, we propose employing augmented reality technology to assist artistic-creativity education. We first analyze the inefficiency of traditional artistic-creation training. We then introduce our AR-based smartphone app in technical detail and explain how it can accelerate artistic-creativity training. Finally, we show three examples created with our AR app to demonstrate the effectiveness of the proposed method.
DOI: 10.1145/3281505.3283396 (published 2018-11-28)
Citations: 1
Using mixed reality for promoting brand perception
Kelvin Cheng, Ichiro Furusawa
Mixed reality offers an immersive and interactive experience through the use of head-mounted displays and in-air gestures. Visitors can discover additional content virtually, on top of existing physical items. For a small-scale exhibition at a cafe, we developed a Microsoft HoloLens application to create an interactive experience on top of a collection of historic physical items. Through the public's experience of this exhibition, we received positive feedback on our system and found that it also helped to promote brand perception. In this demo, visitors can experience a mixed reality experience similar to the one shown at the exhibition.
DOI: 10.1145/3281505.3281574 (published 2018-11-28)
Citations: 2