Second Language Vocabulary Learning While Walking
S. Fukushima, Ari Hautasaari, Takeo Hamada
DOI: 10.1145/3311823.3311866

Second language (L2) learners often lack opportunities or motivation to dedicate their time to vocabulary learning over other daily activities. In this work, we introduce a mobile application that allows L2 learners to instead leverage their "dead time", such as when walking to and from school or work, to study new vocabulary items. The application combines audio learning and location-based contextually relevant L1-L2 word pairs to allow L2 learners to "discover" new foreign language words while walking. We report on the evaluation of the approach from three aspects: L2 vocabulary retention after 1 month, system usability, and workload.
Prosthetic Tail: Artificial Anthropomorphic Tail for Extending Innate Body Functions
Junichi Nabeshima, M. Y. Saraiji, K. Minamizawa
DOI: 10.1145/3311823.3311848

For most mammals and other vertebrates, the tail plays an important role, providing functions that expand mobility or serving as a limb for manipulation and gripping. In this paper, we propose an exploratory, biomimicry-inspired anthropomorphic tail design that allows engineering and extending human body functions. The proposed tail consists of adjacent joints with a spring-based structure that handles shearing and tangential forces and allows the length and weight of the tail to be managed. The internal structure of the tail is driven by four pneumatic artificial muscles that provide the actuation mechanism for the tail tip. Here, we describe the design and implementation process and highlight potential applications for such a prosthetic tail.
OSense
Thisum Buddhika, Haimo Zhang, Chamod Weerasinghe, Suranga Nanayakkara, Roger Zimmermann
DOI: 10.1145/3311823.3311841

Observing that how we grasp objects is highly correlated with their geometric shapes and the interactions we perform with them, we propose the use of hand postures and motions as an indirect source of input for object-activity recognition. This paradigm treats the human hand as an always-available sensor and transforms all sensing problems into data analysis for the "sensor hand". We envision this paradigm to be generalizable to all objects, regardless of whether they are acoustically or electromagnetically active, and to detect different motions performed while holding the same object. Our proof-of-concept setup consists of six IMU sensors mounted on the fingers and the back of the hand. Our experiments show that when posture is combined with motion, personalized object-activity detection accuracy increases from 80% to 87%.
Effects of a Monocular Laser-Based Head-Mounted Display on Human Night Vision
E. Niforatos, Mélodie Vidal
DOI: 10.1145/3311823.3311858

Head-mounted displays (HMDs) are expected to dominate the market of wearable electronics in the next 5 years. This foreseen proliferation of HMDs yields a plethora of design opportunities for revolutionizing everyday life via novel use cases, but also generates a considerable number of substantial safety implications. In this work, we systematically investigated the effect of a novel monocular laser-based HMD on the ability of our participants to see in low ambient light conditions in lab settings. We recruited a total of 19 participants in two studies and performed a series of established vision tests while using the newly available Focals by North HMD. We tested our participants' night vision after being exposed to different levels of laser luminous power and laser colors while using Focals, either with one or both eyes open. Our results showcase that the image perceived by the non-exposed eye compensates for the loss of contrast sensitivity observed in the image perceived by the laser-exposed eye. This indicates that monocular laser-based HMDs, such as Focals, permit dark adaptation to occur naturally for the non-exposed eye.
Virtual Super-Leaping: Immersive Extreme Jumping in VR
Tomoya Sasaki, Kao-Hua Liu, Taiki Hasegawa, Atsushi Hiyama, M. Inami
DOI: 10.1145/3311823.3311861

People sometimes imagine and yearn for a "Super Power," an ability they do not have naturally. In this paper, we propose Virtual Super-Leaping (VSL) as an immersive virtual experience that provides the feeling of extreme jumping in the sky. First, we define the necessary feedback elements and classify the action sequence of Super-Leaping, including the design of the multimodal feedback for each action state. Then, we describe the design of the VSL system, which has two components: (i) head-mounted display-based visual feedback, and (ii) a VSL-enabling haptic device that provides both kinesthesia and airflow using multiple synchronized propeller units. We end by reporting on our technical evaluation and public demonstrations. This work contributes to the enhancement of immersive virtual experiences and the development of devices for human augmentation.
Demonstrating Naviarm: Augmenting the Learning of Motor Skills using a Backpack-type Robotic Arm System
Azumi Maekawa, Shota Takahashi, M. Y. Saraiji, S. Wakisaka, H. Iwata, M. Inami
DOI: 10.1145/3311823.3313868

We present a wearable haptic assistance robotic system for augmented motor learning called Naviarm. This system comprises two robotic arms that are mounted on a user's body and are used to transfer one person's motion to another offline. Naviarm prerecords the arm motion trajectories of an expert via the mounted robotic arms and then plays back these recorded trajectories to share the expert's body motion with a beginner. The Naviarm system is an ungrounded system and provides mobility for the user to conduct a variety of motions. In our demonstration, the user will experience the recording of arm movement with the backpack-type robotic arm. The recorded movement will then be replayed, and the user can experience the haptic feedback.
2bit-TactileHand: Evaluating Tactons for On-Body Vibrotactile Displays on the Hand and Wrist
Don Samitha Elvitigala, Denys J. C. Matthies, Vipula Dissanayaka, Chamod Weerasinghe, Suranga Nanayakkara
DOI: 10.1145/3311823.3311832

Visual interfaces can provide a great density of information. However, the focused visual attention they require results in high cognitive effort. This cognitive load increases significantly when multiple tasks that also require visual attention are performed. In this paper, we evaluate the perceptibility of 2bit tactons on the wrist and the hand as a type of complementary feedback. Based on our evaluation, 2bit tactons are perceived with reasonably high accuracy (≈92%) on the hand when distributed among several fingers. Additionally, the data show that vibrotactile feedback on the hand is significantly more accurate than on the wrist, which coincides with the subjects' preference. TactileHand's feasibility was demonstrated in three pilot studies, encoding ambient, explicit, and implicit information into 2bit tactons in different scenarios.
CricketCoach
Sachith Muthukumarana, Denys J. C. Matthies, Chamod Weerasinghe, Don Samitha Elvitigala, Suranga Nanayakkara
DOI: 10.1145/3311823.3311833

In this paper, we demonstrate a smart system that creates awareness of the hand-grip force for cricket players. A custom Force-Sensitive Resistor (FSR) matrix is attached to the bat's handle to sense the gripping. Two wrist bands, incorporating vibration motors, provide feedback that helps nonexpert users to understand the relative forces exerted by each hand while performing a stroke. A preliminary user study was conducted to collect first insights.
Enhancement of Subjective Mechanical Tactile Intensity via Electrical Stimulation
R. Mizuhara, Akifumi Takahashi, H. Kajimoto
DOI: 10.1145/3311823.3311836

Naturalistic tactile sensations can be elicited by mechanical stimuli because mechanical stimulation reproduces a natural physical phenomenon. However, a mechanical stimulation that is too strong may cause injury. Although electrical stimulation can elicit strong tactile sensations without damaging the skin, it is inferior in terms of naturalness. Here, we propose and validate a haptic method for presenting naturalistic and intense sensations by combining electrical and mechanical stimulation. Prior to the main experiment, we measured the appropriate temporal gap between the two stimuli such that they are perceived as simultaneous, since nerve activity directly elicited by electrical stimulation is generally considered to be perceived faster than mechanical stimulation. We confirmed that enhancement of subjective strength took place when the two stimuli were given simultaneously. The main experiment with simultaneous electrical and mechanical stimulation confirmed that the addition of electrical stimulation enhances the sensation of mechanical stimulation, and participants' comments implied that the electrical stimulation was interpreted as part of the mechanical stimulation.
Naviarm
Azumi Maekawa, Shota Takahashi, M. Y. Saraiji, S. Wakisaka, Hiroyasu Iwata, Masahiko Inami
DOI: 10.1145/3311823.3311849

We present a wearable haptic assistance robotic system for augmented motor learning called Naviarm. This system comprises two robotic arms that are mounted on a user's body and are used to transfer one person's motion to another offline. Naviarm pre-records the arm motion trajectories of an expert via the mounted robotic arms and then plays back these recorded trajectories to share the expert's body motion with a beginner. The Naviarm system is an ungrounded system and provides mobility for the user to conduct a variety of motions. In this paper, we focus on the temporal aspect of motor skill and use a mime performance as a case study learning task. We verified the system's effectiveness for motor learning through experiments. The results suggest that the proposed system has benefits for learning sequential skills.