
ImmersiveMe '14: Latest Publications

Integration of a Precise Indoor Position Tracking Algorithm with an HMD-Based Virtual Reality System
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660582
Jongkyu Shin, G. An, Kyogu Lee
In this paper, we present a new system for a highly immersive virtual reality experience utilizing head-mounted displays (HMDs) with accurate indoor-position tracking abilities. The system is designed to have six degrees of freedom in the virtual world, which allows users to physically move around in the real world while wearing the wireless system with an HMD. Three-dimensional X, Y, and Z coordinate data are estimated in real time using ultrasonic sensors. In addition, the pitch, roll, and yaw values are also measured. Unlike previously developed systems which require external input devices to move in the virtual environment, our system provides a natural virtual reality experience by precisely matching physical movements in the real world to those in the virtual environment. Results show that the system is able to accurately estimate the user's position and delivers a highly immersive virtual/mixed reality experience.
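The abstract does not detail how the ultrasonic measurements are converted into coordinates. Purely as a hedged illustration, the sketch below shows one common approach: least-squares trilateration of time-of-flight ranges against fixed beacons. The beacon layout, speed of sound, and all names in the code are assumptions for the example, not details taken from the paper.

```python
import numpy as np

# Assumed beacon positions in meters (not from the paper); the beacons are
# deliberately non-coplanar so the height (Z) is observable.
BEACONS = np.array([
    [0.0, 0.0, 2.5],
    [4.0, 0.0, 2.3],
    [4.0, 3.0, 2.5],
    [0.0, 3.0, 2.2],
])
SPEED_OF_SOUND = 343.0  # m/s, roughly at room temperature

def ranges_from_tof(tof_seconds):
    """Convert ultrasonic time-of-flight measurements into distances (m)."""
    return SPEED_OF_SOUND * np.asarray(tof_seconds, dtype=float)

def trilaterate(distances, beacons=BEACONS):
    """Least-squares trilateration.

    Subtracting the first sphere equation |x - p_0|^2 = d_0^2 from each of
    the others linearizes the problem into A x = b with x = (X, Y, Z).
    """
    p0, d0 = beacons[0], distances[0]
    A = 2.0 * (beacons[1:] - p0)
    b = (d0 ** 2 - distances[1:] ** 2
         + np.sum(beacons[1:] ** 2, axis=1) - np.sum(p0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos  # estimated (X, Y, Z) in meters

# Example: four time-of-flight readings -> one position estimate.
print(trilaterate(ranges_from_tof([0.0105, 0.0112, 0.0131, 0.0122])))
```

In a full system such per-frame estimates would typically be smoothed (e.g. with a Kalman filter) before being fused with the HMD's pitch, roll, and yaw readings.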
{"title":"Integration of a Precise Indoor Position Tracking Algorithm with an HMD-Based Virtual Reality System","authors":"Jongkyu Shin, G. An, Kyogu Lee","doi":"10.1145/2660579.2660582","DOIUrl":"https://doi.org/10.1145/2660579.2660582","url":null,"abstract":"In this paper, we present a new system for a highly immersive virtual reality experience utilizing head-mounted displays (HMDs) with accurate indoor-position tracking abilities. The system is designed to have six degrees of freedom in the virtual world, which allows users to physically move around in the real world while wearing the wireless system with an HMD. Three-dimensional X, Y, and Z coordinate data are estimated in real time using ultrasonic sensors. In addition, the pitch, roll, and yaw values are also measured. Unlike previously developed systems which require external input devices to move in the virtual environment, our system provides a natural virtual reality experience by precisely matching the physical movements in a real world to those in a virtual environment. Results show that the system is able to estimate accurate positions of a user, and delivers a highly immersive virtual/mixed reality experience.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123243519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Immersion, Imagination & Innovation: Media Immersion Matching the power of the Imagination to Innovate the Future
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660580
Christopher B. Stapleton
The application of immersion to awaken human potential through innovative experience design is dependent upon one's ability to spark the imagination. Our keynote speaker shares his journey of exploring how the diverse application of interactive entertainment techniques can define new innovations in life-transformative simulations. Using the interplay of story, play and game, his design research showcases examples of how stimulating the imagination can enhance military training for the US Army, informal education for NASA, medical imaging for ER surgeons, teacher training in urban classrooms, experiential marketing in shopping malls, as well as cognitive rehabilitation in therapy clinics. As technological advancements in simulation catch up with the science fiction of our parents, what kind of creative leaps will our children be making into a future where reality, virtuality and imagination work as one world? The future of immersive media will transfer from theme parks to our living rooms, and all new design paradigms will emerge to transform our homes into a school, museum, theme park, training facility, shopping center, as well as a medical and rehabilitation clinic. This talk presents a vision with all new challenges for the future.
{"title":"Immersion, Imagination & Innovation: Media Immersion Matching the power of the Imagination to Innovate the Future","authors":"Christopher B. Stapleton","doi":"10.1145/2660579.2660580","DOIUrl":"https://doi.org/10.1145/2660579.2660580","url":null,"abstract":"The application of immersion to awaken human potential through innovative experience design is dependent upon one's ability to spark the imagination. Our keynote speaker shares his journey of exploring how the diverse application of interactive entertainment techniques can define new innovations in life transformative simulations. Using the interplay of story, play and game, his design research showcases examples of how stimulating the imagination can enhance military training for the US Army, informal education for NASA, medical imaging for ER surgeons, teacher training in Urban classrooms, experiential marketing in shopping malls as well as cognitive rehabilitation in therapy clinics.\u0000 As technological advancements in simulation catch up with the science-fiction of our parents, what kind of creative leaps will our children will be making into a future where reality, virtuality and imagination work as one world? The future of immersive media will transfer from theme parks to our living rooms, and all new design paradigms will emerge to transform our homes into a school, museum, theme park, training facility, shopping center, as well as a medical and rehabilitation clinic. This talk presents a vision with all new challenges for the future.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116898542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On-Display Spatial Audio for Multiple Applications on Large Displays
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660581
S. Deshpande
We describe a spatial audio system design for multiple applications on large displays. Our system provides spatial audio based on an application window's location on the display screen. Our approach allows conveying application audio height information without special content encoding. Our approach supports spatial audio from multiple concurrent on-display AV windows. The on-display audio location moves and resizes when the AV application window is moved and resized. Our design can handle any X.Y format content and spatialize it with a limited number of discrete loudspeakers. We have implemented this spatial audio system without requiring any special hardware, by reusing existing surround sound cards and developing special software to utilize the discrete individual audio channels supported by them. Subjective listening tests confirm the spatial location perceived by listeners matches the AV window location in our system.
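The abstract does not specify the gain law or loudspeaker layout, so the sketch below is only one plausible reading of the idea: constant-power amplitude panning between a left/right pair, driven by the horizontal position of each AV window's center; conveying height information would need an additional, vertically separated pair. Function names and the pan law are assumptions.

```python
import numpy as np

def window_center_norm(win_x, win_w, display_w):
    """Normalized horizontal center of an application window, in [0, 1]."""
    return (win_x + win_w / 2.0) / display_w

def constant_power_gains(pan):
    """Constant-power pan law for a left/right pair: pan=0 is hard left,
    pan=1 is hard right, and the summed power stays constant."""
    theta = pan * np.pi / 2.0
    return np.cos(theta), np.sin(theta)

def mix_windows(windows, display_w):
    """Mix several concurrent AV windows into a stereo pair.

    `windows` is a list of (win_x, win_w, mono_samples) tuples; gains are
    recomputed whenever a window is moved or resized.
    """
    left, right = None, None
    for win_x, win_w, samples in windows:
        g_l, g_r = constant_power_gains(window_center_norm(win_x, win_w, display_w))
        l, r = g_l * samples, g_r * samples
        left = l if left is None else left + l
        right = r if right is None else right + r
    return left, right

# Example: two windows on a 3840-pixel-wide display, one toward each side.
tone = np.sin(2 * np.pi * 440 * np.arange(48000) / 48000)
stereo = mix_windows([(100, 800, tone), (2800, 900, 0.5 * tone)], 3840)
```

Because the gains depend only on the window geometry, moving or resizing a window simply means recomputing them for the next audio block.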
{"title":"On-Display Spatial Audio for Multiple Applications on Large Displays","authors":"S. Deshpande","doi":"10.1145/2660579.2660581","DOIUrl":"https://doi.org/10.1145/2660579.2660581","url":null,"abstract":"We describe a spatial audio system design for multiple applications on large displays. Our system provides spatial audio based on an application window's location on the display screen. Our approach allows conveying application audio height information without special content encoding. Our approach supports spatial audio from multiple concurrent on-display AV windows. The on-display audio location moves and resizes when the AV application window is moved and resized. Our design can handle any X.Y format content and spatialize it with a limited number of discrete loudspeakers. We have implemented this spatial audio system without requiring any special hardware, by reusing existing surround sound cards and developing special software to utilize the discrete individual audio channels supported by them. Subjective listening tests confirm the spatial location perceived by listeners matches the AV window location in our system.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123853298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Development of a Simple and Low-Cost Olfactory Display for Immersive Media Experiences
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660584
Nicolas S. Herrera, Ryan P. McMahan
Olfaction is an important perceptual function that is often neglected in immersive media (IM) and virtual reality (VR) applications. Because the effects of olfaction have not been researched as much as those of visual, auditory, or haptic senses, the effects of olfactory stimuli on IM experiences are mainly unexplored, largely unknown, and debatable in many examples. A major factor limiting olfaction research is the lack of olfactory display options. Commercial solutions are often inadequate and expensive. Prior research on olfactory displays is helpful, but pertinent details are normally missing, and the devices are often too complex to replicate. To address this issue, we have developed a simple, low-cost olfactory display by using inexpensive components and leveraging airflow for vaporization and scent delivery. In this paper, we detail the development of our display and describe an informal study evaluating its effectiveness.
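The paper's contribution is the hardware design; on the control side, scent delivery can be as simple as pulsing a fan over the chosen reservoir. The sketch below is illustrative only: drive_fan, the channel mapping, and the burst duration are hypothetical stand-ins for whatever GPIO or serial interface the actual build uses.

```python
import time

# Hypothetical scent-to-fan-channel mapping; the real device's channels
# depend on how the reservoirs are wired.
SCENT_CHANNELS = {"coffee": 0, "citrus": 1}

def drive_fan(channel, on):
    """Placeholder for the real actuation call (GPIO pin, serial command, ...)."""
    print(f"fan {channel} -> {'ON' if on else 'OFF'}")

def deliver_scent(scent, burst_s=2.0):
    """Blow air across the selected reservoir for a short burst, so the
    airflow both vaporizes and carries the scent toward the user."""
    channel = SCENT_CHANNELS[scent]
    drive_fan(channel, True)
    time.sleep(burst_s)
    drive_fan(channel, False)

if __name__ == "__main__":
    deliver_scent("citrus", burst_s=1.5)
```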
{"title":"Development of a Simple and Low-Cost Olfactory Display for Immersive Media Experiences","authors":"Nicolas S. Herrera, Ryan P. McMahan","doi":"10.1145/2660579.2660584","DOIUrl":"https://doi.org/10.1145/2660579.2660584","url":null,"abstract":"Olfaction is an important perceptual function that is often neglected in immersive media (IM) and virtual reality (VR) applications. Because the effects of olfaction have not been researched as much as those of visual, auditory, or haptic senses, the effects of olfactory stimuli on IM experiences are mainly unexplored, largely unknown, and debatable in many examples. A major factor limiting olfaction research is the lack of olfactory display options. Commercial solutions are often inadequate and expensive. Prior research on olfactory displays is helpful, but pertinent details are normally missing, and the devices are often too complex to replicate. To address this issue, we have developed a simple, low-cost olfactory display by using inexpensive components and leveraging airflow for vaporization and scent delivery. In this paper, we detail the development of our display and describe an informal study evaluating its effectiveness.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132829248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 34
Mindful: A Platform for Large-Scale Affective Field Research
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660583
Guy Feigenblat, Jonathan Herzig, Michal Shmueli-Scheuer, D. Konopnicki
In this work we present Mindful, a platform for defining, configuring, executing and distributing affective experiments to a large-scale audience. This type of experiment measures the emotional reaction of participants to media content selected by experimenters. Furthermore, the platform manages profiles of registered users who have agreed to participate in an experiment, as well as data collection and analysis mechanisms. The analyzed data is then used to enrich users' profiles and to better understand their emotional behavior. Throughout the paper we describe the platform in detail and present a use case of how the platform is being used in practice.
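As a rough sketch of what an experiment definition on such a platform could look like (the field names and defaults below are assumptions, not the platform's actual schema):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Stimulus:
    """One piece of media content shown to participants."""
    media_url: str
    duration_s: float

@dataclass
class ExperimentConfig:
    """Illustrative experiment definition: the media stimuli, the affective
    measures collected, and a simple recruitment constraint."""
    name: str
    stimuli: List[Stimulus] = field(default_factory=list)
    measures: List[str] = field(default_factory=lambda: ["self_report_valence"])
    min_participants: int = 30

demo = ExperimentConfig(
    name="trailer_reactions",
    stimuli=[Stimulus("https://example.com/clip1.mp4", duration_s=30.0)],
    measures=["self_report_valence", "self_report_arousal"],
)
```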
{"title":"Mindful: A Platform for Large-Scale Affective Field Research","authors":"Guy Feigenblat, Jonathan Herzig, Michal Shmueli-Scheuer, D. Konopnicki","doi":"10.1145/2660579.2660583","DOIUrl":"https://doi.org/10.1145/2660579.2660583","url":null,"abstract":"In this work we present Mindful, a platform for defining, configuring, executing and distributing affective experiments to a large scale audience. This type of experiments measure the emotional reaction of participants to media content selected by experimenters. Furthermore, the platform manages profiles of registered users who have agreed to participate in an experiment as well as a data collection and analysis mechanisms. The analyzed data is then used to enrich users' profile and to better understand their emotional behavior. Throughout the paper we describe the platform in details and present a use case of how the platform is being used in practice.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131487367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
GACE: Gesture and Appearance Cutout Embedding for Gaming Applications
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660587
Tam V. Nguyen, Y. H. Tan, Jose Sepulveda
This paper presents a lightweight game framework that provides real-time integration of human appearance and gesture-guided control within the game. It offers a new immersive experience, since it allows game users to see their own appearance interacting in real time with other computer-graphics characters in the game. With the goal of making the system easily realizable, we address the challenges in the whole pipeline of video processing, gesture recognition, and communication. To this end, we introduce the game framework, Gesture and Appearance Cutout Embedding (GACE), which runs the human appearance cutout algorithm and connects with game components by using memory-mapped files. We also introduce gesture-based support to enhance immersion. Extensive experiments have shown that the proposed system runs reliably and comfortably in real time with a commodity setup.
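The abstract states that GACE connects the appearance-cutout module to the game through memory-mapped files. The sketch below shows that exchange pattern using Python's mmap module; the frame geometry, file name, and RGBA layout are assumptions, and a real integration would add a sequence counter or lock to avoid torn reads.

```python
import mmap
import numpy as np

# Assumed frame geometry (RGBA cutout with an alpha mask); not from the paper.
WIDTH, HEIGHT, CHANNELS = 640, 480, 4
FRAME_BYTES = WIDTH * HEIGHT * CHANNELS
SHARED_FILE = "gace_frame.bin"  # hypothetical shared file name

def create_shared_file():
    """One-time setup: allocate the shared file at the full frame size."""
    with open(SHARED_FILE, "wb") as f:
        f.truncate(FRAME_BYTES)

def publish_frame(frame):
    """Vision side: write the latest cutout frame into the shared mapping."""
    with open(SHARED_FILE, "r+b") as f, mmap.mmap(f.fileno(), FRAME_BYTES) as m:
        m.seek(0)
        m.write(np.ascontiguousarray(frame, dtype=np.uint8).tobytes())

def read_frame():
    """Game side: read the most recent cutout frame from the shared mapping."""
    with open(SHARED_FILE, "rb") as f, mmap.mmap(
            f.fileno(), FRAME_BYTES, access=mmap.ACCESS_READ) as m:
        buf = m.read(FRAME_BYTES)
    return np.frombuffer(buf, dtype=np.uint8).reshape(HEIGHT, WIDTH, CHANNELS)

if __name__ == "__main__":
    create_shared_file()
    publish_frame(np.zeros((HEIGHT, WIDTH, CHANNELS), dtype=np.uint8))
    print(read_frame().shape)
```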
{"title":"GACE: Gesture and Appearance Cutout Embedding for Gaming Applications","authors":"Tam V. Nguyen, Y. H. Tan, Jose Sepulveda","doi":"10.1145/2660579.2660587","DOIUrl":"https://doi.org/10.1145/2660579.2660587","url":null,"abstract":"This paper presents a lightweight game framework that provides real-time integration of human appearance and gesture-guided control within the game. It augments a new immersive experience since it allows game users to see their personal appearance interacting in real-time with other computer graphical characters in the game. With the goal to make the system easily realizable, we address the challenges in the whole pipeline of video processing, gesture recognition, and communication. To this end, we introduce the game framework, Gesture and Appearance Cutout Embedding (GACE), which runs the human appearance cutout algorithm and connects with game components by using memory mapped files. We also introduce the gesture-based support to enhance the immersion. Extensive experiments have shown that the proposed system runs reliably and comfortably in real-time with a commodity setting.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125071670","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
ENF Signal Induced by Power Grid: A New Modality for Video Synchronization
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660588
Hui Su, Adi Hajj-Ahmad, Chau-Wai Wong, Ravi Garg, Min Wu
Multiple videos capturing the same scene from possibly different viewing angles may be synthesized for novel immersive experience. Synchronization is an important task for such applications involving multiple pieces of audio-visual data. In this work, we exploit the electric network frequency (ENF) signal inherently embedded in the soundtrack and/or image sequence of video to temporally align video recordings. ENF is the supply frequency of power distribution networks in a power grid. Its value fluctuates slightly from its nominal value of 50 Hz or 60 Hz, and the fluctuation trends stay consistent within the same grid. Audio and video recordings that are created in areas of electric activities may capture the ENF signal due to electromagnetic interferences and other physical phenomena. We propose to synchronize video recordings by aligning the embedded ENF signals. Without major constraints on viewing angle and camera calibration as many existing methods impose, the proposed approach emerges as a new synchronization modality.
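As a rough illustration of the alignment idea (the nominal frequency, bandwidth, frame length, and peak-interpolation step below are assumptions, not the authors' settings): bandpass the soundtrack around the nominal mains frequency, estimate the instantaneous frequency frame by frame, and pick the lag that maximizes the correlation between two recordings' ENF traces. The sketch uses NumPy and SciPy.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def enf_trace(audio, fs, nominal=60.0, frame_s=1.0):
    """Estimate an ENF time series from mono audio: bandpass around the
    nominal mains frequency, then take a parabolic-interpolated FFT peak
    for each frame."""
    sos = butter(4, [nominal - 1.0, nominal + 1.0], btype="bandpass",
                 fs=fs, output="sos")
    x = sosfiltfilt(sos, np.asarray(audio, dtype=float))
    n = int(frame_s * fs)
    trace = []
    for start in range(0, len(x) - n + 1, n):
        spectrum = np.abs(np.fft.rfft(x[start:start + n]))
        k = int(np.argmax(spectrum))
        # Parabolic interpolation around the peak bin for sub-bin resolution.
        if 0 < k < len(spectrum) - 1:
            a, b, c = spectrum[k - 1], spectrum[k], spectrum[k + 1]
            delta = 0.5 * (a - c) / (a - 2 * b + c)
        else:
            delta = 0.0
        trace.append((k + delta) * fs / n)
    return np.array(trace)

def best_lag(trace_a, trace_b):
    """Lag, in frames, at which the two mean-removed ENF traces correlate
    best; shifting trace_b forward by this many frames lines it up with
    trace_a."""
    a = trace_a - trace_a.mean()
    b = trace_b - trace_b.mean()
    corr = np.correlate(a, b, mode="full")
    return int(np.argmax(corr)) - (len(b) - 1)
```

The frame-level lag would then be refined to finer resolution, and the matching is only meaningful for recordings made within the same interconnected grid.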
{"title":"ENF Signal Induced by Power Grid: A New Modality for Video Synchronization","authors":"Hui Su, Adi Hajj-Ahmad, Chau-Wai Wong, Ravi Garg, Min Wu","doi":"10.1145/2660579.2660588","DOIUrl":"https://doi.org/10.1145/2660579.2660588","url":null,"abstract":"Multiple videos capturing the same scene from possibly different viewing angles may be synthesized for novel immersive experience. Synchronization is an important task for such applications involving multiple pieces of audio-visual data. In this work, we exploit the electric network frequency (ENF) signal inherently embedded in the soundtrack and/or image sequence of video to temporally align video recordings. ENF is the supply frequency of power distribution networks in a power grid. Its value fluctuates slightly from its nominal value of 50 Hz or 60 Hz, and the fluctuation trends stay consistent within the same grid. Audio and video recordings that are created in areas of electric activities may capture the ENF signal due to electromagnetic interferences and other physical phenomena. We propose to synchronize video recordings by aligning the embedded ENF signals. Without major constraints on viewing angle and camera calibration as many existing methods impose, the proposed approach emerges as a new synchronization modality.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133997507","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 29
The Sensation of Taste in the Future of Immersive Media
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660586
Nimesha Ranasinghe, Kuan-Yi Lee, Gajan Suthokumar, E. Do
To create a truly immersive virtual experience, perceiving information through multiple human senses is important. Therefore, new forms of media are required that deeply involve various human senses (not only sight, sound, and touch, but also nontraditional senses like taste and smell) to create a perception of presence in a non-physical environment. However, at present, the sensation of taste is considered one of the final frontiers of immersive media yet to be achieved. This paper discusses key aspects and opportunities of including the sensation of taste in the future of immersive media technologies. As a solution, we then present 'Taste+' utensils, which digitally enhance the taste sensations of food and beverages without additional flavoring ingredients. Finally, we envision several future usage scenarios and challenges of this technology to facilitate future immersive digital experiences.
{"title":"The Sensation of Taste in the Future of Immersive Media","authors":"Nimesha Ranasinghe, Kuan-Yi Lee, Gajan Suthokumar, E. Do","doi":"10.1145/2660579.2660586","DOIUrl":"https://doi.org/10.1145/2660579.2660586","url":null,"abstract":"To create a truly immersive virtual experience, perceiving information through multiple human senses is important. Therefore, new forms of media are required that deeply involve various human senses -not only sight, sound, and touch, but also nontraditional senses like taste and smell- to create a perception of presence in a non-physical environment. However, at present, the sensation of taste is considered as one of the final frontiers of immersive media to be achieved. This paper discusses key aspects and opportunities of including the sensation of taste in the future of immersive media technologies. As a solution, we then present 'Taste+' utensils that digitally enhance the taste sensations of food and beverages without additional flavoring ingredients. Finally, we envision several future usage scenarios and challenges of the indicated technology to facilitate future immersive digital experiences.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126876659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 20
Sensory Fiction: A Design Fiction of Emotional Computation
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660585
Felix Heibeck, Alexis Hope, J. Legault
This paper is situated in the emergent field of "Design Fiction" and describes how this approach can be applied to explorations in the field of immersive media experiences. We present Sensory Fiction -- an exploration in augmenting the emotions of a reader via a modular, multi-sensory system. The science on the nature of emotions is still inconclusive, and direct ways of controlling them computationally are yet to be discovered. However, this project creates a Design Fiction that highlights the opportunities and challenges that the availability of such technology might bring. We leveraged existing scientific insights to build a functional prototype that aims to induce and evoke emotions by simulating the physiological system. Combining it with conceptual, non-functional modules (i.e., modules that do not function physically but introduce the idea of a physical actuation), we created an artifact that sparks discussion about the future of immersive emotional experiences and that can also be experienced by the audience. Lastly, we show how presenting the project in appropriate contexts and analyzing the audience's reaction is a useful strategy for evaluating Design Fiction projects.
{"title":"Sensory Fiction: A Design Fiction of Emotional Computation","authors":"Felix Heibeck, Alexis Hope, J. Legault","doi":"10.1145/2660579.2660585","DOIUrl":"https://doi.org/10.1145/2660579.2660585","url":null,"abstract":"This paper is situated in the emergent field of \"Design Fiction\" and describes how this approach can be applied to explorations in the field of immersive media experiences. We present Sensory Fiction -- an exploration in augmenting the emotions of a reader via a modular, multi-sensory system. The science on the nature of emotions is still inconclusive and direct ways of controlling them computationally are yet to be discovered. However, this project creates a Design Fiction that highlights the opportunities and challenges that the availability of such technology might bring. We leveraged existing scientific insights to build a functional prototype that aims to induce and evoke emotions by simulating the physiological system. Used in combination with conceptual, non-functional modules (i.e. modules that do not function physically but that introduce the idea of a physical actuation), we created an artifact to spark discussion about the future of immersive emotional experiences but that can also be experienced by the audience. Lastly, we show how presenting the project in appropriate contexts and analyzing the audience's reaction is a useful strategy to evaluate Design Fiction projects.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129385130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 16
Transforming Lives Through Story Immersion: Innovation of Aphasia Rehabilitation Therapy through Storytelling Learning Landscapes
Pub Date : 2014-11-07 DOI: 10.1145/2660579.2660590
Christopher B. Stapleton, J. Whiteside, J. Davies, Dana S. Mott, Jennifer Vick
Aphasia is a disease that renders its victims unable to use language effectively. Evidence supports the efficacy of treatment for aphasia, yet the effectiveness or transferability of learned communicative abilities to everyday conversation continues to be investigated. In this paper we explore an alternative approach to aphasia treatment based on the art and science of storytelling. Inherent in storytelling are the motivations to share an experience, the cognitive abilities to organize a story, and the language system to convey the experience. This approach is based on decades of research in aphasia therapy and immersive storytelling (in other fields) and has been used to engage a subject's creativity and emotions to produce transformative results in real life. We report on early, promising results that could radically innovate the rehabilitative practice of aphasia.
{"title":"Transforming Lives Through Story Immersion: Innovation of Aphasia Rehabilitation Therapy through Storytelling Learning Landscapes","authors":"Christopher B. Stapleton, J. Whiteside, J. Davies, Dana S. Mott, Jennifer Vick","doi":"10.1145/2660579.2660590","DOIUrl":"https://doi.org/10.1145/2660579.2660590","url":null,"abstract":"Aphasia is a disease that renders its victims unable to effectively use language. Evidence supports the efficacy of treatment for aphasia yet the effectiveness or transferability of learned communicative abilities to everyday conversation continues to be investigated. In this paper we explore an alternative approach to aphasia treatment based on the art and science of storytelling. Inherent in storytelling are the motivations to share an experience, the cognitive abilities to organize story, and the language system to convey the experience. This approach is based on decades of research in aphasia therapy and immersive storytelling (in other fields) and has been used to engage a subject's creativity and emotions to produce transformative results in real life. We report on early, promising results that could radically innovate the rehabilitative practice of aphasia.","PeriodicalId":391229,"journal":{"name":"ImmersiveMe '14","volume":"223 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126999847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4