
Proceedings of the Augmented Humans International Conference: Latest Publications

ExemPoser
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384788
Katsuhito Sasaki, Keisuke Shiro, J. Rekimoto
It is important for beginners to imitate the poses of experts in various sports; in sport climbing especially, performance depends greatly on the pose taken for a given set of holds. However, it is difficult for beginners to learn the proper pose for every pattern from experts, since the holds are completely different on each course. We therefore propose a system that predicts the pose an expert would take from the positions of the climber's hands and feet, i.e., the positions of the holds the climber is using, with a neural network. In other words, our system simulates what pose an expert would take on the holds the climber is currently using. The positions of the hands and feet are calculated from an image of the climber captured from behind. To let users check the ideal pose in real time during practice, we adopted a simple, lightweight network structure with little computational delay. We asked experts to compare the poses predicted by our system with the poses of beginners, and confirmed that the predicted poses were in most cases better than or as good as those of the beginners.
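The abstract's core idea, a lightweight network mapping four contact points to a full-body pose, can be sketched as follows. The layer sizes, the 14-joint output, and the untrained random weights are illustrative assumptions, not details from the paper:

```python
import random

random.seed(0)

N_JOINTS = 14          # assumed number of predicted 2D body joints
IN, HIDDEN = 8, 16     # 4 contact points x (x, y) inputs; one small hidden layer

# Random weights stand in for trained parameters.
W1 = [[random.gauss(0, 1) for _ in range(HIDDEN)] for _ in range(IN)]
W2 = [[random.gauss(0, 1) for _ in range(N_JOINTS * 2)] for _ in range(HIDDEN)]

def predict_expert_pose(contacts):
    """contacts: [lh_x, lh_y, rh_x, rh_y, lf_x, lf_y, rf_x, rf_y], normalized."""
    # A single ReLU hidden layer: the abstract stresses a simple, lightweight
    # structure so the ideal pose can be shown in real time during practice.
    hidden = [max(0.0, sum(c * W1[i][j] for i, c in enumerate(contacts)))
              for j in range(HIDDEN)]
    out = [sum(h * W2[j][k] for j, h in enumerate(hidden))
           for k in range(N_JOINTS * 2)]
    return [(out[2 * i], out[2 * i + 1]) for i in range(N_JOINTS)]

# Hand/foot positions extracted from the rear-view image (values hypothetical).
pose = predict_expert_pose([0.2, 0.9, 0.8, 0.85, 0.3, 0.1, 0.7, 0.15])
```

A forward pass this small runs in microseconds, which is the point of the paper's "little computational delay" constraint.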
Citations: 7
Accelerating Skill Acquisition of Two-Handed Drumming using Pneumatic Artificial Muscles
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384780
Takashi Goto, Swagata Das, Katrin Wolf, Pedro Lopes, Y. Kurita, K. Kunze
While computers excel at augmenting users' cognitive abilities, only recently have we started utilizing their full potential to enhance our physical abilities. More and more wearable force-feedback devices have been developed based on exoskeletons, electrical muscle stimulation (EMS) or pneumatic actuators. The latter, pneumatic artificial muscles, are of particular interest since they strike an interesting balance: lighter than exoskeletons and more precise than EMS. However, the promise that artificial muscles can actually support skill acquisition and user training still lacks empirical validation. In this paper, we examine how pneumatic artificial muscles affect skill acquisition, using two-handed drumming as an example use case. To this end, we conducted a user study comparing participants' drumming performance after training with audio alone or with our artificial-muscle setup. Our haptic system comprises four pneumatic muscles and can actuate the user's forearm to drum accurately at up to 80 bpm. We show that pneumatic muscles significantly improve participants' correct recall of drumming patterns compared to auditory training.
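As a minimal illustration of the actuation timing implied by the 80 bpm figure, the sketch below converts a left/right stroke pattern into valve-trigger times. The function, the pattern encoding, and the one-stroke-per-beat assumption are hypothetical, not taken from the paper:

```python
def drum_schedule(pattern, bpm=80):
    """Map a stroke pattern, e.g. ['L', 'R', 'R', 'L'], to (time_s, hand) pairs.

    bpm=80 mirrors the system's stated maximum actuation rate; scheduling one
    stroke per beat is a simplifying assumption made here.
    """
    interval = 60.0 / bpm  # seconds between consecutive strokes
    return [(i * interval, hand) for i, hand in enumerate(pattern)]

schedule = drum_schedule(["L", "R", "R", "L"])
```

At 80 bpm the inter-stroke interval is 0.75 s, so the first two strokes fire at 0.0 s and 0.75 s.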
Citations: 8
WristLens
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384797
Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, H. Koike, Woontack Woo, Aaron Quigley
WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces, using an optical motion sensor embedded in a wrist strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. An experimental study was conducted to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are recognized accurately, whereas shape gestures are recognized less reliably. Finally, we explore the interaction design space enabled by WristLens, and demonstrate novel use cases and applications, such as on-body interaction, bimanual interaction, cursor control and 3D measurement.
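The reported asymmetry (directional gestures recognized well, shape gestures less so) hints at why: with an optical motion sensor, a directional gesture can be classified simply by accumulating the displacement stream and taking the dominant axis. A hypothetical sketch, with the sensor axes and gesture names assumed rather than taken from the paper:

```python
def classify_directional_gesture(deltas):
    """deltas: list of (dx, dy) displacement samples from the optical sensor.

    Sums the motion and picks the dominant axis; sign convention (y grows
    downward, as on typical optical mouse sensors) is an assumption.
    """
    dx = sum(d[0] for d in deltas)
    dy = sum(d[1] for d in deltas)
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

Shape gestures, by contrast, require matching a whole trajectory, which is far more sensitive to noise and surface texture.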
Citations: 2
Wearable Reasoner: Towards Enhanced Human Rationality Through A Wearable Device With An Explainable AI Assistant
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384799
Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes
Human judgments and decisions are prone to errors in reasoning caused by factors such as personal biases and external misinformation. We explore the possibility of enhanced reasoning by implementing a wearable AI system as a symbiotic counterpart to the human. We present "Wearable Reasoner", a proof-of-concept wearable system capable of analyzing whether an argument is stated with supporting evidence. Through an experimental study of verbal statement evaluation tasks, we explore how argumentation mining and the explainability of the AI feedback affect the user. The results demonstrate that the device with explainable feedback is effective in enhancing rationality, helping users differentiate between statements that are supported by evidence and those that are not. When assisted by an AI system with explainable feedback, users considered claims supported by evidence significantly more reasonable and agreed with them significantly more than claims without evidence. Qualitative interviews reveal users' internal processes of reflecting on and integrating the new information in their judgment and decision making, emphasizing improved evaluation of presented arguments.
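A crude stand-in for the system's evidence-detection step: flag a statement as evidence-backed when it contains an explicit evidence connective. The marker list and the rule are illustrative assumptions and are far simpler than the argumentation-mining model the paper describes:

```python
# Hypothetical marker list; a real argumentation-mining model would be learned.
EVIDENCE_MARKERS = ("because", "according to", "studies show", "research indicates")

def has_supporting_evidence(statement):
    """Naive sketch: a claim counts as supported if it states a reason or source."""
    s = statement.lower()
    return any(marker in s for marker in EVIDENCE_MARKERS)
```

The explainable-feedback condition in the study would then surface *which* marker (or, in the real system, which evidence span) triggered the classification, rather than just the label.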
Citations: 14
VersaTouch
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384778
Yilei Shi, Haimo Zhang, Jiashuo Cao, Suranga Nanayakkara
We present VersaTouch, a portable, plug-and-play system that uses active acoustic sensing to track fine-grained touch locations, as well as the touch force of multiple fingers, on everyday surfaces without permanently instrumenting them or requiring extensive calibration. Our system is versatile in multiple respects. First, with simple calibration, VersaTouch can be arranged in arbitrary layouts to fit crowded surfaces while retaining its accuracy. Second, various touch-input modalities, such as distance and position, can be supported depending on the number of sensors used, to suit the interaction scenario. Third, VersaTouch can sense multi-finger touch and touch force, and can identify the touch source. Last, VersaTouch can provide vibrotactile feedback to fingertips through the same actuators used for touch sensing. We conducted a series of studies and demonstrated that VersaTouch was able to track finger touch in various layouts with an average error of 9.62 mm to 14.25 mm on different surfaces, within a circular area 400 mm in diameter centred on the sensors, as well as detect touch force. Finally, we discuss the interaction design space and interaction techniques enabled by VersaTouch.
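Active acoustic touch localization of this kind is commonly based on differences in arrival times of the surface wave at several sensors. The sketch below simulates such time differences for an assumed three-sensor layout and recovers the touch point by grid search; the sensor positions, wave speed, and brute-force solver are illustrative assumptions, not the paper's method:

```python
import math

SPEED = 500.0  # assumed surface wave speed, mm/ms (illustrative)
SENSORS = [(0.0, 0.0), (200.0, 0.0), (0.0, 200.0)]  # assumed layout, mm

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def simulate_tdoa(touch):
    """Arrival-time differences at each sensor, relative to the first sensor."""
    t = [dist(touch, s) / SPEED for s in SENSORS]
    return [ti - t[0] for ti in t]

def locate(tdoa, step=2.0):
    """Grid-search the point whose predicted TDOAs best match the measurement."""
    best, best_err = None, float("inf")
    x = -100.0
    while x <= 300.0:
        y = -100.0
        while y <= 300.0:
            t = [dist((x, y), s) / SPEED for s in SENSORS]
            pred = [ti - t[0] for ti in t]
            err = sum((p - m) ** 2 for p, m in zip(pred, tdoa))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

est = locate(simulate_tdoa((120.0, 80.0)))
```

A real system would replace the grid search with a closed-form or least-squares multilateration solver and would have to contend with dispersion and reflections in the surface material.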
Citations: 7
Understanding Face Gestures with a User-Centered Approach Using Personal Computer Applications as an Example
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3385333
Yenchin Lai, Benjamin Tag, K. Kunze, R. Malaka
While face gesture input has been proposed by researchers, the question of which gestures are practical remains unsolved. We present the first comprehensive investigation of user-defined face gestures as an augmented input modality. Based on a focus group discussion, we developed three sets of tasks and asked participants to spontaneously produce face gestures to complete them. We report the findings of a user study and discuss user preferences for face gestures. The results inform the development of future interaction systems utilizing face gestures.
Citations: 3
Facilitating Experiential Knowledge Sharing through Situated Conversations
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384798
R. Fujikura, Y. Sumi
This paper proposes a system that facilitates knowledge sharing among people in similar situations by providing audio of past conversations. Our system records all conversations among users in specific settings, such as tourist spots, museums, and digital fabrication studios, and then provides users in a similar situation with timely fragments of the accumulated conversations. For segmenting and retrieving past conversations from the vast amount of captured data, we focus on non-verbal contextual information, i.e., the location, attention targets, and hand operations of the conversation participants. All conversation audio is recorded without any selection or classification. The delivery of audio to a user is determined not by the content of the conversation but by the similarity of situations between the conversation participants and the user. To demonstrate the concept of the proposed system, we performed a series of experiments observing changes in user behavior caused by past conversations related to the current situation in a digital fabrication workshop. Since we have not yet achieved a satisfactory implementation for sensing the user's situation, we used the Wizard of Oz (WOZ) method: the experimenter visually judges changes in the user's situation and inputs them to the system, and the system automatically provides the user with audio of past conversations corresponding to that situation. Experimental results show that most of the conversations presented when the situation matches perfectly are related to the user's situation, and some of them effectively prompt users to change their behavior. Interestingly, we observed that conversations held in the same area but unrelated to the current task also had the effect of expanding the user's knowledge.
We also observed a case in which a conversation highly relevant to the user's situation was presented in time, yet the user could not utilize the knowledge to solve the problem at hand. This shows a limitation of our system: even if a knowledgeable conversation is provided in time, it is useless unless it fits the user's knowledge level.
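The situation-based (rather than content-based) retrieval described above can be sketched as a similarity match over contextual descriptors. The descriptor fields, the Jaccard measure, and the threshold below are illustrative assumptions, not the paper's implementation:

```python
def situation_similarity(a, b):
    """Jaccard overlap of situation descriptors (location, attention target,
    hand operation). Field names and the measure are illustrative."""
    items_a, items_b = set(a.items()), set(b.items())
    return len(items_a & items_b) / len(items_a | items_b)

def retrieve_fragment(current, archive, threshold=0.5):
    """Deliver the recorded conversation whose situation best matches the
    user's current one, if the match is strong enough; note the conversation
    content itself is never inspected."""
    best = max(archive, key=lambda rec: situation_similarity(current, rec["situation"]))
    if situation_similarity(current, best["situation"]) >= threshold:
        return best["audio"]
    return None

archive = [
    {"situation": {"location": "laser_cutter", "attention": "focus_dial", "hands": "adjusting"},
     "audio": "clip_laser_focus"},
    {"situation": {"location": "3d_printer", "attention": "print_bed", "hands": "idle"},
     "audio": "clip_bed_leveling"},
]
now = {"location": "laser_cutter", "attention": "focus_dial", "hands": "idle"}
fragment = retrieve_fragment(now, archive)
```

In the study itself this matching was performed by a human wizard; the sketch only shows where an automated sensing pipeline would slot in.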
Citations: 0
The Lateral Line: Augmenting Spatiotemporal Perception with a Tactile Interface
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3384775
Matti Krüger, Christiane B. Wiebel-Herboth, H. Wersing
In this paper we describe a concept for artificially supplementing people's spatiotemporal perception. Our target is to improve performance in tasks that rely on a fast and accurate understanding of movement dynamics in the environment. To provide an exemplary research and application scenario, we implemented a prototype of the concept in a driving-simulation environment and used an interface capable of providing vibrotactile stimuli around the waist to communicate spatiotemporal information. The tactile stimuli dynamically encode the directions and temporal proximities of approaching objects. Temporal proximity is defined as inversely proportional to the time-to-contact and can be interpreted as a measure of imminent collision risk and temporal urgency. Results of a user study demonstrate performance benefits in terms of enhanced driving safety. This indicates a potential for improving people's capabilities in assessing relevant properties of dynamic environments in order to purposefully adapt their actions.
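The urgency encoding in the abstract reduces to a simple formula: time-to-contact is distance over closing speed, and temporal proximity is its reciprocal. A minimal sketch, where returning zero urgency for receding objects is an assumption added here:

```python
def time_to_contact(distance, closing_speed):
    """Seconds until contact, assuming constant closing speed; None if receding."""
    if closing_speed <= 0:
        return None
    return distance / closing_speed

def temporal_proximity(distance, closing_speed):
    """Urgency measure from the abstract: inversely proportional to
    time-to-contact (unit proportionality constant assumed here)."""
    ttc = time_to_contact(distance, closing_speed)
    return 0.0 if ttc is None else 1.0 / ttc
```

For example, an object 20 m away closing at 10 m/s has a time-to-contact of 2 s and a temporal proximity of 0.5, which could drive vibration intensity at the corresponding point on the belt.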
Citations: 6
High-speed Projection Method of Swing Plane for Golf Training
Pub Date : 2020-03-16 DOI: 10.1145/3384657.3385330
Tomohiro Sueishi, Chikara Miyaji, Masataka Narumiya, Y. Yamakawa, M. Ishikawa
Display technologies that show dynamic information such as club-swing motion are useful for golf training, but conventional methods have a large latency between sensing the motion and displaying it to the user. In this study, we propose an immediate, high-speed method for projecting the swing plane's geometric information onto the ground during the swing. The method utilizes marker-based clubhead posture estimation and a mirror-based high-speed tracking system. The intersection line with the ground, which is the geometric information of the swing plane, is cast immediately by a high-speed projector. We have experimentally confirmed that the projection latency is sufficiently low for swing motions, and have demonstrated temporal convergence and predictive display of the projected swing-plane line around the bottom of the swing.
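The "intersection line with the ground" is plain plane geometry: given the swing plane's normal and one point on it, the projected line is the plane's intersection with the ground. A sketch under the assumption that the ground is the plane z = 0; the point-plus-direction representation is a choice made here, not the paper's:

```python
def swing_plane_ground_line(normal, point):
    """Intersection of the swing plane (normal vector, one point on it)
    with the ground plane z = 0, as (point_on_line, direction)."""
    nx, ny, nz = normal
    # Direction of the line: cross(plane normal, ground normal (0, 0, 1)).
    d = (ny, -nx, 0.0)
    if d == (0.0, 0.0, 0.0):
        return None  # swing plane is horizontal; no single intersection line
    # Pick a point with z = 0 satisfying the plane equation n . x = n . p.
    c = nx * point[0] + ny * point[1] + nz * point[2]
    if abs(nx) >= abs(ny):
        p0 = (c / nx, 0.0, 0.0)
    else:
        p0 = (0.0, c / ny, 0.0)
    return p0, d

# Example: a plane tilted 45 degrees, passing 1 unit above the origin.
line = swing_plane_ground_line((1.0, 0.0, 1.0), (0.0, 0.0, 1.0))
```

Recomputing this line from each new clubhead pose estimate is cheap, so the projector latency, not the geometry, dominates the end-to-end delay.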
Citations: 5