It is important for beginners to imitate the poses of experts in various sports; in sport climbing especially, performance depends greatly on the pose taken for a given set of holds. However, it is difficult for beginners to learn the proper pose for every pattern from experts, since the holds differ completely from course to course. We therefore propose a system that uses a neural network to predict an expert's pose from the positions of the climber's hands and feet--that is, the positions of the holds the climber is using. In other words, our system simulates the pose an expert would take on the holds the climber is currently using. The hand and foot positions are computed from an image of the climber captured from behind. To let users check the ideal pose in real time during practice, we adopted a simple, lightweight network structure with little computational delay. We asked experts to compare the poses predicted by our system with the poses of beginners, and confirmed that the predicted poses were in most cases better than, or as good as, those of the beginners.
{"title":"ExemPoser","authors":"Katsuhito Sasaki, Keisuke Shiro, J. Rekimoto","doi":"10.1145/3384657.3384788","DOIUrl":"https://doi.org/10.1145/3384657.3384788","url":null,"abstract":"It is important for beginners to imitate poses of experts in various sports; especially in sport climbing, performance depends greatly on the pose that should be taken for given holds. However, it is difficult for beginners to learn the proper poses for all patterns from experts since climbing holds are completely different for each course. Therefore, we propose a system that predict a pose of experts from the positions of the hands and feet of the climber--the positions of holds used by the climber--using a neural network. In other words, our system simulates what pose experts take for the holds the climber is now using. The positions of hands and feet are calculated from a image of the climber captured from behind. To allow users to check what pose is ideal in real time during practice, we have adopted a simple and lightweight network structure with little computational delay. We asked experts to compare the poses predicted by our system with the poses of beginners, and we confirmed that the poses predicted by our system were in most cases better than or as good as those of beginners.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121577899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Takashi Goto, Swagata Das, Katrin Wolf, Pedro Lopes, Y. Kurita, K. Kunze
While computers excel at augmenting users' cognitive abilities, only recently have we started utilizing their full potential to enhance our physical abilities. More and more wearable force-feedback devices have been developed based on exoskeletons, electrical muscle stimulation (EMS), or pneumatic actuators. The latter, pneumatic artificial muscles, are of particular interest since they strike an interesting balance: lighter than exoskeletons and more precise than EMS. However, the promise of using artificial muscles to actually support skill acquisition and train users still lacks empirical validation. In this paper, we examine how pneumatic artificial muscles affect skill acquisition, using two-handed drumming as an example use case. To understand this, we conducted a user study comparing participants' drumming performance after training with audio alone or with our artificial-muscle setup. Our haptic system comprises four pneumatic muscles and is capable of actuating the user's forearm to drum accurately at up to 80 bpm. We show that pneumatic muscles significantly improve participants' correct recall of drumming patterns compared to auditory training.
{"title":"Accelerating Skill Acquisition of Two-Handed Drumming using Pneumatic Artificial Muscles","authors":"Takashi Goto, Swagata Das, Katrin Wolf, Pedro Lopes, Y. Kurita, K. Kunze","doi":"10.1145/3384657.3384780","DOIUrl":"https://doi.org/10.1145/3384657.3384780","url":null,"abstract":"While computers excel at augmenting user's cognitive abilities, only recently we started utilizing their full potential to enhance our physical abilities. More and more wearable force-feedback devices have been developed based on exoskeletons, electrical muscle stimulation (EMS) or pneumatic actuators. The latter, pneumatic-based artificial muscles, are of particular interest since they strike an interesting balance: lighter than exoskeletons and more precise than EMS. However, the promise of using artificial muscles to actually support skill acquisition and training users is still lacking empirical validation. In this paper, we unveil how pneumatic artificial muscles impact skill acquisition, using two-handed drumming as an example use case. To understand this, we conducted a user study comparing participants' drumming performance after training with the audio or with our artificial-muscle setup. Our haptic system is comprised of four pneumatic muscles and is capable of actuating the user's forearm to drum accurately up to 80 bpm. We show that pneumatic muscles improve participants' correct recall of drumming patterns significantly when compared to auditory training.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115825364","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, H. Koike, Woontack Woo, Aaron Quigley
WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces using an optical motion sensor embedded in a wrist strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. We conducted an experimental study to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are recognized accurately, whereas shape gestures are less so. Finally, we explore the interaction design space enabled by WristLens and demonstrate novel use cases and applications, such as on-body interaction, bimanual interaction, cursor control, and 3D measurement.
{"title":"WristLens","authors":"Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, H. Koike, Woontack Woo, Aaron Quigley","doi":"10.1145/3384657.3384797","DOIUrl":"https://doi.org/10.1145/3384657.3384797","url":null,"abstract":"WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces, using an optical motion sensor embedded in a wrist-strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. An experimental study was conducted to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are accurately recognized but less so for shape gestures. Finally, we explore the interaction design space enabled by WristLens, and demonstrate novel use cases and applications, such as on-body interaction, bimanual interaction, cursor control and 3D measurement.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"127 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128371587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes
Human judgments and decisions are prone to errors in reasoning caused by factors such as personal biases and external misinformation. We explore the possibility of enhancing reasoning by implementing a wearable AI system as a symbiotic counterpart to the human. We present "Wearable Reasoner", a proof-of-concept wearable system capable of analyzing whether an argument is stated with supporting evidence. We explore the impact of argumentation mining and the explainability of the AI feedback on the user through an experimental study of verbal statement evaluation tasks. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements that are supported by evidence and those that are not. When assisted by an AI system with explainable feedback, users rate claims supported by evidence as significantly more reasonable and agree with them more than with unsupported claims. Qualitative interviews reveal users' internal processes of reflecting on and integrating the new information in their judgment and decision making, emphasizing improved evaluation of the presented arguments.
{"title":"Wearable Reasoner: Towards Enhanced Human Rationality Through A Wearable Device With An Explainable AI Assistant","authors":"Valdemar Danry, Pat Pataranutaporn, Yaoli Mao, P. Maes","doi":"10.1145/3384657.3384799","DOIUrl":"https://doi.org/10.1145/3384657.3384799","url":null,"abstract":"Human judgments and decisions are prone to errors in reasoning caused by factors such as personal biases and external misinformation. We explore the possibility of enhanced reasoning by implementing a wearable AI system as a human symbiotic counterpart. We present \"Wearable Reasoner\", a proof-of-concept wearable system capable of analyzing if an argument is stated with supporting evidence or not. We explore the impact of argumentation mining and explainability of the AI feedback on the user through an experimental study of verbal statement evaluation tasks. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements supported by evidence and without. When assisted by an AI system with explainable feedback, users significantly consider claims supported by evidence more reasonable and agree more with them compared to those without. Qualitative interviews demonstrate users' internal processes of reflection and integration of the new information in their judgment and decision making, emphasizing improved evaluation of presented arguments.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"76 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133597177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present VersaTouch, a portable, plug-and-play system that uses active acoustic sensing to track fine-grained touch locations, as well as the touch force of multiple fingers, on everyday surfaces without having to permanently instrument them or perform extensive calibration. Our system is versatile in multiple respects. First, with simple calibration, VersaTouch can be arranged in arbitrary layouts in order to fit onto crowded surfaces while retaining its accuracy. Second, various modalities of touch input, such as distance and position, can be supported depending on the number of sensors used, to suit the interaction scenario. Third, VersaTouch can sense multi-finger touch and touch force, and can identify the touch source. Last, VersaTouch can provide vibrotactile feedback to the fingertips through the same actuators used for touch sensing. We conducted a series of studies and demonstrated that VersaTouch tracks finger touch in various layouts with average errors from 9.62 mm to 14.25 mm on different surfaces, within a circular area 400 mm in diameter centred on the sensors, and also detects touch force. Finally, we discuss the interaction design space and interaction techniques enabled by VersaTouch.
{"title":"VersaTouch","authors":"Yilei Shi, Haimo Zhang, Jiashuo Cao, Suranga Nanayakkara","doi":"10.1145/3384657.3384778","DOIUrl":"https://doi.org/10.1145/3384657.3384778","url":null,"abstract":"We present VersaTouch, a portable, plug-and-play system that uses active acoustic sensing to track fine-grained touch locations as well as touch force of multiple fingers on everyday surfaces without having to permanently instrument them or do extensive calibration. Our system is versatile in multiple aspects. First, with simple calibration, VersaTouch can be arranged in arbitrary layouts in order to fit into crowded surfaces while retaining its accuracy. Second, various modalities of touch input, such as distance and position, can be supported depending on the number of sensors used to suit the interaction scenario. Third, VersaTouch can sense multi-finger touch, touch force, as well as identify the touch source. Last, VersaTouch is capable of providing vibrotactile feedback to fingertips through the same actuators used for touch sensing. We conducted a series of studies and demonstrated that VersaTouch was able to track finger touch using various layouts with average error from 9.62mm to 14.25mm on different surfaces within a circular area of 400mm diameter centred around the sensors, as well as detect touch force. Finally, we discuss the interaction design space and interaction techniques enabled by VersaTouch.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114412813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
While face-gesture input has been proposed by researchers, the question of which gestures are practical remains unsolved. We present the first comprehensive investigation of user-defined face gestures as an augmented input modality. Based on a focus group discussion, we developed three sets of tasks and asked participants to spontaneously produce face gestures to complete them. We report the findings of a user study and discuss users' preferences for face gestures. The results inform the development of future interaction systems utilizing face gestures.
{"title":"Understanding Face Gestures with a User-Centered Approach Using Personal Computer Applications as an Example","authors":"Yenchin Lai, Benjamin Tag, K. Kunze, R. Malaka","doi":"10.1145/3384657.3385333","DOIUrl":"https://doi.org/10.1145/3384657.3385333","url":null,"abstract":"While face gesture input has been proposed by researchers, the issue of practical gestures remains unsolved. We present the first comprehensive investigation of user-defined face gestures as an augmented input modality. Based on a focus group discussion, we developed three sets of tasks, where we asked participants to spontaneously produce face gestures to complete these tasks. We report our findings of a user study and discuss the user preference of face gestures. The results inform the development of future interaction systems utilizing face gestures.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122571410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a system that facilitates knowledge sharing among people in similar situations by providing audio of past conversations. Our system records all conversations among users in specific settings such as tourist spots, museums, and digital fabrication studios, and then provides users who are in a similar situation with timely fragments of the accumulated conversations. For segmenting and retrieving past conversations from vast amounts of captured data, we focus on non-verbal contextual information, i.e., the location, attention targets, and hand operations of the conversation participants. All conversations are recorded, without any selection or classification. The delivery of audio to a user is determined not by the content of the conversation but by the similarity of situations between the conversation participants and the user. To demonstrate the concept of the proposed system, we performed a series of experiments to observe changes in user behavior caused by past conversations related to the user's situation in a digital fabrication workshop. Since we have not yet achieved a satisfactory implementation for sensing the user's situation, we used the Wizard of Oz (WOZ) method: the experimenter visually judges changes in the user's situation and inputs them to the system, and the system automatically provides the user with audio of past conversations corresponding to that situation. Experimental results show that most of the conversations presented when the situation matches perfectly are related to the user's situation, and some of them effectively prompt the user to change their behavior. Interestingly, we observed that conversations held in the same area but unrelated to the current task also had the effect of expanding the user's knowledge. We also observed a case in which a conversation highly relevant to the user's situation was presented in a timely manner, but the user could not utilize the knowledge to solve the problem of the current task. This shows a limitation of our system: even if a knowledgeable conversation is provided at the right time, it is useless unless it fits the user's knowledge level.
{"title":"Facilitating Experiential Knowledge Sharing through Situated Conversations","authors":"R. Fujikura, Y. Sumi","doi":"10.1145/3384657.3384798","DOIUrl":"https://doi.org/10.1145/3384657.3384798","url":null,"abstract":"This paper proposes a system that facilitates knowledge sharing among people in similar situations by providing audio of past conversations. Our system records all voices of conversations among the users in the specific fields such as tourist spots, museums, digital fabrication studio, etc. and then timely provides users in a similar situation with fragments of the accumulated conversations. For segmenting and retrieving past conversation from vast amounts of captured data, we focus on non-verbal contextual information, i.e., location, attention targets, and hand operations of the conversation participants. All voices of conversation are recorded, without any selection or classification. The delivery of the voices to a user is determined not based on the content of the conversation but on the similarity of situations between the conversation participants and the user. To demonstrate the concept of the proposed system, we performed a series of experiments to observe changes in user behavior due to past conversations related to the situation at the digital fabrication workshop. Since we have not achieved a satisfactory implementation to sense user's situation, we used Wizard of Oz (WOZ) method. That is, the experimenter visually judges the change in the situation of the user and inputs it to the system, and the system automatically provides the users with voices of past conversation corresponding to the situation. Experimental results show that most of the conversations presented when the situation perfectly matches is related to the user's situation, and some of them prompts the user to change their behavior effectively. Interestingly, we could observe that conversations that were done in the same area but not related to the current task also had the effect of expanding the user's knowledge. We also observed a case that although a conversation highly related to the user's situation was timely presented but the user could not utilize the knowledge to solve the problem of the current task. It shows the limitation of our system, i.e., even if a knowledgeable conversation is timely provided, it is useless unless it fits with the user's knowledge level.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131813834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matti Krüger, Christiane B. Wiebel-Herboth, H. Wersing
In this paper we describe a concept for artificially supplementing people's spatiotemporal perception. Our goal is to improve performance in tasks that rely on a fast and accurate understanding of movement dynamics in the environment. To provide an exemplary research and application scenario, we implemented a prototype of the concept in a driving simulation environment and used an interface capable of providing vibrotactile stimuli around the waist to communicate spatiotemporal information. The tactile stimuli dynamically encode the directions of, and temporal proximities to, approaching objects. Temporal proximity is defined as inversely proportional to the time-to-contact and can be interpreted as a measure of imminent collision risk and temporal urgency. Results of a user study demonstrate performance benefits in terms of enhanced driving safety. This indicates potential for improving people's ability to assess relevant properties of dynamic environments in order to purposefully adapt their actions.
{"title":"The Lateral Line: Augmenting Spatiotemporal Perception with a Tactile Interface","authors":"Matti Krüger, Christiane B. Wiebel-Herboth, H. Wersing","doi":"10.1145/3384657.3384775","DOIUrl":"https://doi.org/10.1145/3384657.3384775","url":null,"abstract":"In this paper we describe a concept for artificially supplementing peoples' spatiotemporal perception. Our target is to improve performance in tasks that rely on a fast and accurate understanding of movement dynamics in the environment. To provide an exemplary research and application scenario, we implemented a prototype of the concept in a driving simulation environment and used an interface capable of providing vibrotactile stimuli around the waist to communicate spatiotemporal information. The tactile stimuli dynamically encode directions and temporal proximities towards approaching objects. Temporal proximity is defined as inversely proportional to the time-to-contact and can be interpreted as a measure of imminent collision risk and temporal urgency. Results of a user study demonstrate performance benefits in terms of enhanced driving safety. This indicates a potential for improving peoples' capabilities in assessing relevant properties of dynamic environments in order to purposefully adapt their actions.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125044026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tomohiro Sueishi, Chikara Miyaji, Masataka Narumiya, Y. Yamakawa, M. Ishikawa
Display technologies that show dynamic information such as club-swing motion are useful for golf training, but conventional methods have a large latency between sensing the motion and displaying it to users. In this study, we propose an immediate, high-speed method for projecting swing-plane geometric information onto the ground during the swing. The method uses marker-based clubhead posture estimation and a mirror-based high-speed tracking system. The intersection line with the ground, which is the geometric information of the swing plane, is cast immediately by a high-speed projector. We have experimentally confirmed that the latency of the projection itself is sufficiently low for swing motions, and we have demonstrated the temporal convergence and predictive display of the projected swing-plane line around the bottom of the swing.
{"title":"High-speed Projection Method of Swing Plane for Golf Training","authors":"Tomohiro Sueishi, Chikara Miyaji, Masataka Narumiya, Y. Yamakawa, M. Ishikawa","doi":"10.1145/3384657.3385330","DOIUrl":"https://doi.org/10.1145/3384657.3385330","url":null,"abstract":"Display technologies that show dynamic information such as club swing motion are useful for golf training, but conventional methods have a large latency from sensing the motion to displaying them for users. In this study, we propose an immediate, high-speed projection method of swing plane geometric information onto the ground during the swing. The method utilizes marker-based clubhead posture estimation and a mirror-based high-speed tracking system. The intersection line with the ground, which is the geometric information of the swing plane, is immediately cast by a high-speed projector. We have experimentally confirmed the sufficiently low latency of the projection itself for swing motions and have demonstrated the temporal convergence and predictive display of the swing plane line projection around the bottom of the swing motion.","PeriodicalId":106445,"journal":{"name":"Proceedings of the Augmented Humans International Conference","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133215217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}