
Latest publications: Proceedings of the 10th Augmented Human International Conference 2019

Let Your World Open: CAVE-based Visualization Methods of Public Virtual Reality towards a Shareable VR Experience
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311860
Akira Ishii, M. Tsuruta, Ippei Suzuki, Shuta Nakamae, Junichi Suzuki, Yoichi Ochiai
Virtual reality (VR) games are becoming part of public-space entertainment (e.g., VR amusement parks). Therefore, VR games should be attractive to bystanders as well as to players. Current VR systems still mostly focus on enhancing the experience of head-mounted display (HMD) users; thus, bystanders without an HMD cannot share the experience with HMD users. We propose the "ReverseCAVE": a proof-of-concept prototype for public VR visualization that uses CAVE-based projection onto translucent screens to give bystanders a shareable VR experience. The screens surround the HMD user, and the VR environment is projected onto them, enabling bystanders to see the HMD user and the VR environment simultaneously. We designed and implemented the ReverseCAVE and, assuming use in a public space, evaluated it in terms of attention, attractiveness, enjoyment, and shareability. With the ReverseCAVE, we can make the VR world more accessible and enhance the public VR experience of bystanders.
Citations: 21
BitoBody
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311855
Erwin Wu, Mistki Piekenbrock, Hideki Koike
In this research, we propose a novel human-body contact detection and projection system with a dynamic mesh collider. We use a motion-capture camera and generated 3D human models to detect contact between users' bodies. Since it is difficult to update a human mesh collider every frame, we developed an algorithm that divides the body meshes into small groups of polygons for collision detection; detected hit information is projected dynamically according to the magnitude of the damage. The maximum deviation of the damage projection is about 7.9 cm under a 240-fps OptiTrack motion-capture system and 12.0 cm under a 30-fps Kinect camera. The proposed system can be used in various sports where bodies come in contact, and it allows the audience and players to understand the context more easily.
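The per-frame test described in the abstract, splitting each body mesh into small groups of polygons and checking the pieces pairwise, can be sketched as follows. This is our own illustrative sketch, not the authors' implementation; the piece size and the axis-aligned bounding-box (AABB) test are assumptions.

```python
# Illustrative sketch: split two body meshes into fixed-size triangle groups,
# precompute each group's AABB, and report which pieces of the two bodies
# overlap, instead of rebuilding one monolithic mesh collider every frame.
import numpy as np

def split_mesh(vertices, triangles, piece_size=64):
    """Group triangles into pieces and return each piece's (aabb_min, aabb_max)."""
    pieces = []
    for start in range(0, len(triangles), piece_size):
        tris = triangles[start:start + piece_size]
        pts = vertices[np.unique(tris)]
        pieces.append((pts.min(axis=0), pts.max(axis=0)))
    return pieces

def aabb_overlap(a, b):
    # Two boxes overlap iff they overlap on every axis.
    return bool(np.all(a[0] <= b[1]) and np.all(b[0] <= a[1]))

def detect_contacts(pieces_a, pieces_b):
    """Return index pairs of overlapping pieces between two bodies."""
    return [(i, j) for i, a in enumerate(pieces_a)
                   for j, b in enumerate(pieces_b) if aabb_overlap(a, b)]
```

A real system would refine the AABB hits with exact triangle tests and feed the contact positions to the projector; the sketch only shows the broad-phase idea.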
Citations: 0
Investigating Universal Appliance Control through Wearable Augmented Reality
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311853
Vincent Becker, Felix Rauchenstein, Gábor Sörös
The number of interconnected devices around us is constantly growing. However, controlling all of these devices becomes challenging when their control interfaces are distributed over mechanical elements, apps, and configuration webpages. We investigate interaction methods for smart devices in augmented reality. Physical objects are augmented with interaction widgets, which are generated on demand and represent the connected devices along with their adjustable parameters. For example, a loudspeaker can be overlaid with a controller widget for its volume. We explore three ways of manipulating the virtual widgets: (a) in-air finger pinching and sliding, (b) whole-arm gestures such as rotating and waving, and (c) incorporating physical objects from the surroundings and mapping their movements to the interaction primitives. We compare these methods in a user study with 25 participants and find significant differences in user preference, the speed of executing commands, and the granularity of control.
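Interaction style (c) maps a tracked physical object's movement onto a device parameter, for example turning an object to set a loudspeaker's volume. A minimal sketch of such a mapping, with function name and ranges of our own choosing (the paper does not specify them), might be:

```python
# Hypothetical mapping for style (c): a tracked object's rotation angle is
# mapped linearly onto a clamped device parameter such as volume in [0, 1].
def rotation_to_parameter(angle_deg, lo=0.0, hi=1.0, full_turn=180.0):
    """Map a rotation in [-full_turn/2, +full_turn/2] degrees to [lo, hi]."""
    t = (angle_deg + full_turn / 2) / full_turn
    t = min(max(t, 0.0), 1.0)  # clamp rotations outside the usable range
    return lo + (hi - lo) * t
```

In a full system the angle would come from the AR headset's object tracker and the resulting value would be sent to the appliance's API.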
Citations: 7
Design of Enhanced Flashcards for Second Language Vocabulary Learning with Emotional Binaural Narration
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311867
S. Fukushima
In this paper, we report on the design of a flashcard application with which learners experience the meaning of written words through emotional binaural voice narrations, to enhance second-language vocabulary learning. Typically, the voice used in English vocabulary learning is recorded by a native speaker without an accent and aims for accurate pronunciation and clarity. However, such a voice can also be flat and monotonous, making it difficult for learners to retain new vocabulary in semantic memory. Enhancing textual flashcards with emotional narration in the learner's native language helps new second-language vocabulary items be retained in episodic memory instead of semantic memory. Further, greater emotionality in the narration reinforces the retention of episodic memory.
Citations: 0
MusiArm
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311873
Kaito Hatakeyama, M. Y. Saraiji, K. Minamizawa
Prosthetic limbs have traditionally focused solely on substituting the missing limb with an artificial one so that people with limb loss can manage their daily lives independently. Past research on prosthetic hands has mainly addressed the prosthesis's function and performance; few proposals have focused on the entertainment aspect of prosthetic hands. In this research, we regard the missing part as a potential margin for freely designing our bodies and for devising new use cases beyond the original function of the limb. Thus, we do not aim to create anthropomorphic designs or functions for the limbs. By fusing prosthetic hands and musical instruments, we propose a new prosthetic hand called "MusiArm" that extends the body part's function to become an instrument. The MusiArm concept was developed through dialogue among people with limb loss, engineers, and prosthetists, treating the physical characteristics of people with limb loss as a "new value" that only they can possess. We asked people with limb loss who cannot play musical instruments, as well as people who do not usually play instruments, to use the prototypes we made. The usability tests showed that MusiArm made a part of the body function as a musical instrument, drew out individuals' unique methods of expression, and suggested that participants enjoyed performing and could develop an interest in it.
Citations: 1
Brain Computer Interface for Neuro-rehabilitation With Deep Learning Classification and Virtual Reality Feedback
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311864
Tamás Karácsony, J. P. Hansen, H. Iversen, S. Puthusserypady
Though Motor Imagery (MI) stroke rehabilitation effectively promotes neural reorganization, current therapeutic methods are difficult to quantify and their repetitiveness can be demotivating. In this work, a real-time electroencephalogram (EEG) based MI-BCI (Brain Computer Interface) system with a virtual reality (VR) game as motivational feedback was developed for stroke rehabilitation. If the subject successfully hits one of the targets, it explodes, thus providing feedback on a successfully imagined and virtually executed movement of the hands or feet. Novel classification algorithms using deep learning (DL) with a convolutional neural network (CNN) architecture and a unique trial-onset detection technique were used. Our classifiers performed better than previous architectures on datasets from the PhysioNet offline database and provided fine classification in the real-time game setting using a 0.5-second, 16-channel input to the CNN. Ten participants reported the training to be interesting, fun, and immersive. "It is a bit weird, because it feels like it would be my hands" was one of the comments from a test person. The VR system induced slight discomfort, and a moderate effort for MI activations was reported. We conclude that MI-BCI-VR systems with DL-based classifiers for real-time game applications should be considered for motivating MI stroke rehabilitation.
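The shape of the classification problem, a 0.5-second, 16-channel EEG window mapped to MI class scores, can be sketched with a toy forward pass. The layer sizes, filter count, and number of classes below are illustrative assumptions, not the paper's architecture; 80 samples corresponds to 0.5 s at the 160 Hz sampling rate of PhysioNet EEG recordings.

```python
# Toy sketch of the classification setup: one temporal convolution over a
# (16 channels x 80 samples) EEG window, ReLU, global average pooling, and a
# linear read-out over MI classes. Weights are random; sizes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
N_CH, N_SAMP, N_FILT, N_CLASSES, K = 16, 80, 8, 4, 11

W_conv = rng.normal(0, 0.1, (N_FILT, N_CH, K))   # temporal filters
W_out = rng.normal(0, 0.1, (N_CLASSES, N_FILT))  # linear read-out

def classify(window):
    """window: (16, 80) EEG segment -> (4,) class scores."""
    T = N_SAMP - K + 1
    feat = np.empty((N_FILT, T))
    for f in range(N_FILT):
        for t in range(T):
            feat[f, t] = np.sum(W_conv[f] * window[:, t:t + K])
    feat = np.maximum(feat, 0.0).mean(axis=1)     # ReLU + global average pool
    return W_out @ feat

scores = classify(rng.normal(size=(N_CH, N_SAMP)))
```

A trained system would learn `W_conv` and `W_out` by backpropagation and stack several such layers; the sketch only fixes the input/output contract.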
Citations: 32
Augmenting Human With a Tail
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311847
Haoran Xie, Kento Mitsuhashi, T. Torii
Human-augmentation devices have been extensively proposed and developed recently and are useful in improving our work efficiency and our quality of life. Inspired by animal tails, this study aims to propose a wearable and functional tail device that combines physical and emotional-augmentation modes. In the physical-augmentation mode, the proposed device can be transformed into a consolidated state to support a user's weight, similar to a kangaroo's tail. In the emotional-augmentation mode, the proposed device can help users express their emotions, which are realized by different tail-motion patterns. For our initial prototype, we developed technical features that can support the weight of an adult, and we performed a perceptional investigation of the relations between the tail movements and the corresponding perceptual impressions. Using the animal-tail analog, the proposed device may be able to help the human user in both physical and emotional ways.
Citations: 27
Guided Walking to Direct Pedestrians toward the Same Destination
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311835
Nobuhito Sakamoto, M. Furukawa, M. Kurokawa, T. Maeda
In this paper, we propose a floor-covering walking-guidance sheet that directs pedestrians without requiring them to attach or detach any device. The polarity of the guidance sheet is reversed with respect to the walking direction, so that a pedestrian travelling in any direction can be guided toward a given point. In experiments, our system successfully guided pedestrians in the same direction regardless of their direction of travel. The induction effect of the proposed method was also evaluated.
Citations: 1
CapMat
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311874
Denys J. C. Matthies, Don Samitha Elvitigala, Sachith Muthukumarana, Jochen Huber, Suranga Nanayakkara
We present CapMat, a smart foot mat that enables user identification, supporting applications such as multi-layer authentication. CapMat leverages a large-form-factor capacitive sensor to capture shoe-sole images. These images vary with the shoe's form factor, its individual wear, and the user's weight. In a preliminary evaluation, we distinguished 15 users with an accuracy of up to 100%.
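The identification step could, in its simplest form, match a captured sole image against enrolled templates. The paper does not specify its classifier, so the nearest-neighbour approach and all names below are our own illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): identify a user by
# nearest-neighbour matching of a flattened capacitive sole image
# against one enrolled template per user.
import numpy as np

def enroll(templates):
    """templates: dict user_id -> (H, W) capacitance image."""
    return {u: img.ravel().astype(float) for u, img in templates.items()}

def identify(db, image):
    """Return the enrolled user whose template is closest in Euclidean distance."""
    flat = image.ravel().astype(float)
    return min(db, key=lambda u: np.linalg.norm(db[u] - flat))
```

A deployed system would likely use several templates per user and a learned classifier, but the template-matching contract is the same.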
Citations: 3
SubMe
Pub Date : 2019-03-11 DOI: 10.1145/3311823.3311865
Katsuya Fujii, Junichi Rekimoto
Owing to improvements in the accuracy of eye-tracking devices, the eye-gaze movements that occur while performing tasks can now be monitored just like other life-logging data. Analyzing eye-gaze data to predict reading comprehension has been widely explored, and researchers have demonstrated the potential of using computers to estimate users' skills and expertise in various categories, including language skills. However, although many researchers have worked specifically on written texts to improve users' reading skills, little research has analyzed eye-gaze movements while watching movies, a medium known to be a popular and successful way of studying English because it involves reading, listening, and even speaking, the latter through language shadowing. In this research, we focus on movies with subtitles, since subtitles are very useful for grasping what is occurring on screen and therefore for overall understanding of the content. We observed that viewers' eye-gaze movements differ depending on their English level. After collecting viewers' eye-gaze data, we implemented a machine learning algorithm to detect their English levels and created a smart subtitle system called SubMe. The goal of this research is to estimate English levels by tracking eye movements while users view a movie with subtitles. Our aim is to create a system that gives users feedback that can help improve their English study methods.
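A classifier like the one described would consume summary features of the gaze stream. One plausible feature, the share of gaze time spent in the subtitle band of the frame, can be sketched as follows; the feature, function name, and band threshold are our own assumptions, not details from the paper.

```python
# Hypothetical gaze feature: the fraction of gaze samples that fall in the
# subtitle region at the bottom of the screen. A learner who depends heavily
# on subtitles would score high; the threshold below is an assumption.
def subtitle_gaze_share(gaze_points, subtitle_top=0.8):
    """gaze_points: list of (x, y) in normalised [0, 1] screen coordinates,
    with y growing downward. Returns the fraction in the subtitle band."""
    if not gaze_points:
        return 0.0
    hits = sum(1 for _, y in gaze_points if y >= subtitle_top)
    return hits / len(gaze_points)
```

Features of this kind, together with fixation durations and saccade statistics, would then feed the machine learning model that estimates the viewer's English level.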
Citations: 11