
Proceedings of the 10th Augmented Human International Conference 2019 — Latest Publications

Build your Own!: Open-Source VR Shoes for Unity3D
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311852
J. Reinhardt, E. Lewandowski, Katrin Wolf
Hand-held controllers enable many kinds of interaction in Virtual Reality (VR), such as object manipulation as well as locomotion. VR shoes allow the hands to be used exclusively for naturally manual tasks, such as object manipulation, while locomotion is realized through foot input -- just like in the physical world. While hand-held VR controllers have become standard input devices for consumer VR products, VR shoes are barely available, and research on that input modality still leaves open questions. We contribute open-source VR shoes and describe how to build and implement them as a Unity3D input device. We hope to support researchers in VR and practitioners in VR product design in increasing usability and natural interaction in VR.
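The paper itself targets Unity3D (C#) as the host engine; the sketch below is only a language-agnostic illustration, in Python, of how foot-sensor packets streamed from such shoes might be parsed and mapped to a locomotion vector. The serial port name, packet format, and gain constant are assumptions for illustration, not the authors' implementation.

```python
import serial  # pyserial

# Assumed packet format: "left_pressure,right_pressure,heading_deg\n"
PORT = "/dev/ttyUSB0"      # hypothetical serial port of the shoe controller
SPEED_GAIN = 0.02          # assumed mapping from summed pressure to m/s

def read_locomotion(ser: serial.Serial):
    """Parse one sensor packet and return a (forward_speed, heading) pair."""
    line = ser.readline().decode("ascii", errors="ignore").strip()
    try:
        left, right, heading = (float(v) for v in line.split(","))
    except ValueError:
        return 0.0, 0.0  # malformed packet: stand still
    # Foot pressure approximates stepping effort; use the sum as the drive term.
    forward_speed = SPEED_GAIN * (left + right)
    return forward_speed, heading

if __name__ == "__main__":
    with serial.Serial(PORT, 115200, timeout=0.1) as ser:
        speed, heading = read_locomotion(ser)
        print(f"forward speed {speed:.2f} m/s, heading {heading:.1f} deg")
```

In a game engine, the returned speed and heading would then drive the camera rig or character controller each frame, keeping both hands free for manual tasks.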
Citations: 5
Evaluation of a device reproducing the pseudo-force sensation caused by a clothespin
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311837
Masahiro Miyakami, Takuto Nakamura, H. Kajimoto
A pseudo-force sensation can be elicited by pinching a finger with a clothespin. When the clothespin is used to pinch the finger from the palm side, a pseudo-force is felt in the direction towards the palm side, and when it is used to pinch the finger from the back side of the hand, the pseudo-force is felt in the extension direction. Here, as a first step to utilizing this phenomenon in human-machine interfaces, we developed a device that reproduces the clothespin phenomenon and confirmed the occurrence rate of the pseudo-force sensation.
Citations: 1
Estimation of Fingertip Contact Force by Measuring Skin Deformation and Posture with Photo-reflective Sensors
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311824
Ayane Saito, W. Kuno, Wataru Kawai, N. Miyata, Yuta Sugiura
A wearable device for measuring skin deformation of the fingertip---to obtain the contact force when the finger touches an object---was prototyped and experimentally evaluated. The device is attached to the fingertip and uses multiple photo-reflective sensors (PRSs) to measure the distance from the PRSs to the side surface of the fingertip. The sensors do not touch the contact surface between the fingertip and the object; as a result, the contact force is obtained without changing the user's tactile sensation. In addition, the accuracy of the estimated contact force was improved by determining the posture of the fingertip from the measured distance between the fingertip and the contact surface. Based on the prototyped device, a system for estimating the three-dimensional contact force on the fingertip was implemented.
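A common way to realize the mapping this abstract describes — from photo-reflective distance readings to a 3D contact force — is a supervised regression calibrated against a reference force sensor. The sketch below is a minimal illustration of that idea with ridge regression; the sensor count, the synthetic calibration data, and the model choice are assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Assumed calibration data: each row holds readings from 4 photo-reflective
# sensors (distance to the skin surface, arbitrary units); targets are the
# reference 3D contact forces (N) measured simultaneously with a force sensor.
rng = np.random.default_rng(0)
prs_readings = rng.uniform(0.0, 1.0, size=(200, 4))            # placeholder data
true_weights = rng.normal(size=(4, 3))
contact_forces = prs_readings @ true_weights + 0.01 * rng.normal(size=(200, 3))

# One linear mapping to the three force axes (fx, fy, fz); sklearn's Ridge
# handles the multi-output case directly.
model = Ridge(alpha=1.0).fit(prs_readings, contact_forces)

# At run time, a new sensor frame is mapped to an estimated 3D force.
new_frame = np.array([[0.4, 0.6, 0.3, 0.7]])
print("estimated force (N):", model.predict(new_frame)[0])
```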
Citations: 10
TongueBoard: An Oral Interface for Subtle Input
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311831
Richard Li, Jason Wu, Thad Starner
We present TongueBoard, a retainer form-factor device for recognizing non-vocalized speech. TongueBoard enables absolute position tracking of the tongue by placing capacitive touch sensors on the roof of the mouth. We collect a dataset of 21 common words from four user study participants (two native American English speakers and two non-native speakers with severe hearing loss). We train a classifier that is able to recognize the words with 91.01% accuracy for the native speakers and 77.76% accuracy for the non-native speakers in a user dependent, offline setting. The native English speakers then participate in a user study involving operating a calculator application with 15 non-vocalized words and two tongue gestures at a desktop and with a mobile phone while walking. TongueBoard consistently maintains an information transfer rate of 3.78 bits per decision (number of choices = 17, accuracy = 97.1%) and 2.18 bits per second across stationary and mobile contexts, which is comparable to our control conditions of mouse (desktop) and touchpad (mobile) input.
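The reported 3.78 bits per decision is consistent with the standard Wolpaw information transfer rate formula applied to 17 choices at 97.1% accuracy; the short check below reproduces that figure. The choice of formula is an assumption, since the abstract does not name it explicitly.

```python
from math import log2

def wolpaw_itr(n_choices: int, accuracy: float) -> float:
    """Wolpaw information transfer rate in bits per decision."""
    p = accuracy
    return (log2(n_choices)
            + p * log2(p)
            + (1 - p) * log2((1 - p) / (n_choices - 1)))

# 17 choices (15 non-vocalized words + 2 tongue gestures) at 97.1% accuracy.
print(f"{wolpaw_itr(17, 0.971):.2f} bits per decision")  # ~3.78
```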
Citations: 53
Detection Threshold of the Height Difference between a Visual and Physical Step
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311857
Masatora Kobayashi, Yuki Kon, H. Kajimoto
In recent years, virtual reality (VR) applications that involve real-space walking have become popular. In these applications, the expression of steps, such as a stairway, is a technical challenge. Preparing a real step with the same scale as the step in the VR space is one alternative; however, it is costly and impractical. We propose using a single physical step to express various virtual steps by manipulating the viewpoint and foot position when ascending and descending the real step. The hypothesis is that the height of a step can be complemented to some extent visually, even if the heights of the real step and the step in the VR space differ. In this paper, we first propose a viewpoint and foot position manipulation algorithm. Then we measure the detection threshold of the height difference between the visual and physical step when ascending and descending the physical step using our manipulation algorithm. As a result, we found that the difference can be detected if there is a difference of approximately 1.0 cm between the VR space and the real space, irrespective of the height of the physical step.
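The core idea of such a manipulation — rendering a virtual step of a different height on top of a single physical step by scaling the vertical offset of the viewpoint gained while climbing — can be sketched as a simple gain applied during the ascent. This is an illustrative reconstruction under stated assumptions, not the authors' exact algorithm.

```python
def manipulated_view_height(real_head_height: float,
                            baseline_head_height: float,
                            physical_step: float,
                            virtual_step: float) -> float:
    """Scale the vertical viewpoint offset gained on the physical step so the
    rendered viewpoint matches a taller or shorter virtual step (heights in m)."""
    rise = max(0.0, real_head_height - baseline_head_height)  # height gained so far
    climbed = min(rise, physical_step)                        # portion due to the step
    gain = virtual_step / physical_step                       # e.g. 0.11 m -> 0.13 m
    return baseline_head_height + climbed * gain + (rise - climbed)

# Example: a 0.11 m physical step rendered as a 0.13 m virtual step,
# with the head currently 0.11 m above its ground-level baseline of 1.60 m.
print(manipulated_view_height(1.71, 1.60, 0.11, 0.13))  # 1.73
```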
Citations: 1
Orochi: Investigating Requirements and Expectations for Multipurpose Daily Used Supernumerary Robotic Limbs
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311850
Mohammed Al Sada, Thomas Höglund, M. Khamis, Jaryd Urbani, T. Nakajima
Supernumerary robotic limbs (SRLs) present many opportunities for daily use. However, their obtrusiveness and limitations in interaction genericity hinder their daily use. To address challenges of daily use, we extracted three design considerations from previous literature and embodied them in a wearable we call Orochi. The considerations include the following: 1) multipurpose use, 2) wearability by context, and 3) unobtrusiveness in public. We implemented Orochi as a snake-shaped robot with 25 DoFs and two end effectors, and demonstrated several novel interactions enabled by its limber design. Using Orochi, we conducted hands-on focus groups to explore how multipurpose SRLs are used daily and we conducted a survey to explore how they are perceived when used in public. Participants approved Orochi's design and proposed different use cases and postures in which it could be worn. Orochi's unobtrusive design was generally well received, yet novel interactions raise several challenges for social acceptance. We discuss the significance of our results by highlighting future research opportunities based on the design, implementation, and evaluation of Orochi.
Citations: 36
Automatic Smile and Frown Recognition with Kinetic Earables
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311869
Seungchul Lee, Chulhong Min, A. Montanari, Akhil Mathur, Youngjae Chang, Junehwa Song, F. Kawsar
In this paper, we introduce inertial signals obtained from an earable placed in the ear canal as a new compelling sensing modality for recognising two key facial expressions: smile and frown. Borrowing principles from Facial Action Coding Systems, we first demonstrate that an inertial measurement unit of an earable can capture facial muscle deformation activated by a set of temporal micro-expressions. Building on these observations, we then present three different learning schemes - shallow models with statistical features, hidden Markov model, and deep neural networks to automatically recognise smile and frown expressions from inertial signals. The experimental results show that in controlled non-conversational settings, we can identify smile and frown with high accuracy (F1 score: 0.85).
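Of the three learning schemes, the "shallow model with statistical features" is the simplest to illustrate: windowed IMU signals from the earable are summarized by per-axis statistics and fed to a conventional classifier. The sketch below shows that pipeline with an assumed window length, feature set, and classifier; it is a sketch of the general technique, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(imu_window: np.ndarray) -> np.ndarray:
    """Statistical features per axis for one window of shape (samples, 6):
    3-axis accelerometer + 3-axis gyroscope from the in-ear IMU."""
    return np.concatenate([imu_window.mean(axis=0),
                           imu_window.std(axis=0),
                           imu_window.min(axis=0),
                           imu_window.max(axis=0)])

# Placeholder training data: 100 one-second windows at an assumed 50 Hz,
# labels 0 = neutral, 1 = smile, 2 = frown.
rng = np.random.default_rng(0)
windows = rng.normal(size=(100, 50, 6))
labels = rng.integers(0, 3, size=100)

X = np.stack([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("predicted class:", clf.predict(window_features(windows[0])[None, :]))
```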
Citations: 23
Hearing Is Believing: Synthesizing Spatial Audio from Everyday Objects to Users
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311872
J. Yang, Yves Frank, Gábor Sörös
The ubiquity of wearable audio devices and the importance of the auditory sense imply great potential for audio augmented reality. In this work, we propose a concept and a prototype of synthesizing spatial sounds from arbitrary real objects to users in everyday interactions, whereby all sounds are rendered directly by the user's own ear pods instead of loudspeakers on the objects. The proposed system tracks the user and the objects in real time, creates a simplified model of the environment, and generates realistic 3D audio effects. We thoroughly evaluate the usability and the usefulness of such a system based on a user study with 21 participants. We also investigate how an acoustic environment model improves the sense of engagement of the rendered 3D sounds.
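A minimal way to approximate the effect this abstract describes — a sound anchored to a tracked object but rendered in the user's own ear pods — is distance attenuation plus an interaural level difference derived from the object's azimuth relative to the head. The sketch below shows that simplification in Python; a full system would use HRTF-based rendering, so this is an illustrative assumption rather than the authors' pipeline.

```python
import numpy as np

def render_stereo(mono: np.ndarray, head_pos, head_yaw: float, obj_pos) -> np.ndarray:
    """Pan and attenuate a mono signal for a listener at head_pos facing head_yaw
    (radians, counter-clockwise), with the sound source at obj_pos (2D, metres).
    Positive azimuth (counter-clockwise from the facing direction) is the left side."""
    to_obj = np.asarray(obj_pos, float) - np.asarray(head_pos, float)
    dist = max(np.linalg.norm(to_obj), 0.1)             # avoid division by zero
    azimuth = np.arctan2(to_obj[1], to_obj[0]) - head_yaw
    left_gain = np.sqrt(0.5 * (1.0 + np.sin(azimuth))) / dist   # equal-power panning
    right_gain = np.sqrt(0.5 * (1.0 - np.sin(azimuth))) / dist  # plus 1/r attenuation
    return np.stack([mono * left_gain, mono * right_gain], axis=1)

# Example: a 440 Hz tone from an object 2 m to the listener's left.
t = np.linspace(0, 1, 44100, endpoint=False)
stereo = render_stereo(np.sin(2 * np.pi * 440 * t), (0.0, 0.0), 0.0, (0.0, 2.0))
print(stereo.shape)  # (44100, 2)
```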
Citations: 15
Social Activity Measurement by Counting Faces Captured in First-Person View Lifelogging Video
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311846
Akane Okuno, Y. Sumi
This paper proposes a method to measure the daily face-to-face social activity of a camera wearer by detecting faces captured in first-person view lifelogging videos. This study was inspired by pedometers, which estimate the amount of physical activity by counting the number of steps detected by accelerometers and are effective for reflecting individual health and facilitating behavior change. We investigated whether we can estimate the amount of social activity by counting the number of faces captured in first-person view videos, like a pedometer. Our system counts not only the number of faces but also weights the count by the size of each face (corresponding to the face's closeness) and by the amount of time it appears in the video. By doing so, we confirmed that we can measure the amount of social activity based on the quality of each interaction. For example, if we simply count the number of faces, we overestimate social activity while passing through a crowd of people. Our system, on the other hand, gives a higher score to a social activity even when the wearer speaks with a single person for a long time, which was also positively evaluated by experiment participants who viewed the lifelogging videos. Through evaluation experiments, many evaluators rated the social activity highly when the camera wearer speaks. An interesting feature of the proposed system is that it correctly scores such scenes higher when the camera wearer actively engages in conversations with others, even though the system does not measure the camera wearer's utterances. This is because conversation partners tend to turn their faces towards the camera wearer, which increases the number of detected faces. However, the present system fails to correctly estimate the depth of social activity compared with what the camera wearer recalls, especially when the conversation partners stand outside the camera's field of view. The paper briefly describes how the results can be improved by widening the camera's field of view.
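The scoring rule described above — counting detected faces but weighting each by its apparent size (closeness) and by how long it stays in view — can be sketched as a per-frame accumulation. The face-size normalization and sampling interval below are illustrative assumptions, not the paper's exact parameters.

```python
def social_activity_score(frames, frame_interval_s: float = 1.0,
                          frame_width_px: int = 1920) -> float:
    """frames: list of sampled frames, each a list of detected face widths in pixels.
    Each detection contributes (face width / frame width) * frame duration, so
    close, long-lasting faces count more than distant, fleeting ones."""
    score = 0.0
    for faces in frames:
        for face_width_px in faces:
            closeness = face_width_px / frame_width_px   # 0 (far) .. 1 (very close)
            score += closeness * frame_interval_s
    return score

# Example: three sampled frames — one empty, one with a distant face,
# and one with two nearby faces.
print(social_activity_score([[], [120], [480, 640]]))  # ~0.65
```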
Citations: 3
Augmented taste of wine by artificial climate room: Influence of temperature and humidity on taste evaluation
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311871
Toshiharu Igarashi, Tatsuya Minagawa, Yoichi Ochiai
In previous research, taste-augmenting devices have had limited influence on taste because of their limited contact with utensils. However, in situations such as enjoying wine while talking with other people or matching cheese with wine, a solution that restricts human behavior would not be acceptable. We therefore focused on changing the temperature and humidity when drinking wine. To study the influence of temperature and humidity on the ingredients and subjective taste of wine, we conducted wine tasting experiments with 16 subjects using an artificial climate room. For the environmental settings, three conditions were evaluated: a room temperature of 14°C with 35% humidity, 17°C with 40% humidity, and 26°C with 40% humidity. In one of the two wines used in the experiment, significant differences in [Color intensity], [Smell development], and [Body] were detected among conditions (p < 0.05). We further investigated changes in the components of the two wines at different temperatures (14°C, 17°C, 23°C, and 26°C). Malic acid, protocatechuic acid, gallic acid, and epicatechin were related to temperature in the former wine only. In conclusion, we confirmed that the taste evaluation of wine can be changed by adjusting temperature and humidity using the artificial climate room, without attaching any device to the drinker. This suggests the possibility of serving wine in a more optimal environment if the type of wine and the person's preference can be identified.
Citations: 2