Driving low-power wearable systems with an adaptively-controlled foot-strike scavenging platform
V. Goudar, Zhi Ren, P. Brochu, M. Potkonjak, Q. Pei
doi:10.1145/2493988.2494340, pp. 135–136 (2013-09-08)

We explore the use of Dielectric Elastomer (DE) micro-generators as a means to scavenge energy from foot-strikes and power wearable systems. While they exhibit large energy densities, DEs must be closely controlled to maximize the energy they transduce. Towards this end, we propose a DE micro-generator array configuration that enhances transduction efficiency, and the use of foot pressure sensors to realize accurate control of the individual DEs. Statistical techniques are applied to customize performance for a user's gait and to enable energy-optimized adaptive online control of the system. Simulations based on experimentally collected foot pressure datasets, empirical characterization of DE mechanical behavior, and a detailed model of DE electrical behavior show that the proposed system can achieve between 45 and 66 mJ per stride.
Nanostructured gas sensors integrated into fabric for wearable breath monitoring system
Hyejin Park, Hosang Ahn, Dong-Joo Kim, Helen Koo
doi:10.1145/2493988.2494337, pp. 129–130 (2013-09-08)

This paper presents a technology to design and fabricate nanostructured gas sensors on fabric substrates. The sensors were fabricated by growing ZnO nanorods on fabrics including polyester, cotton, and polyimide, for continuous monitoring of the wearer's breath gases, which can indicate health status. The developed fabric-based gas sensors demonstrated gas sensing by monitoring the change in electrical resistance upon exposure to acetone and ethanol gases.
Detecting strumming action while playing guitar
Soichiro Matsushita, D. Iwase
doi:10.1145/2493988.2494345, pp. 145–146 (2013-09-08)

In this paper we describe a wristwatch-like device that uses a 3-axis gyro sensor to determine how a player is strumming the guitar. The device was worn on a right-handed player's right hand to evaluate the strumming action, which is important for playing the guitar musically in terms of the timing and strength of notes. With a newly developed algorithm that identifies the timing and strength of the motion when the guitar strings are strummed, beginners and experienced players were clearly distinguished without hearing the sounds. Beginners as well as intermediate-level players showed a fairly large variation in the maximum angular velocity around the upper arm from strum to strum. Since the developed system reports the evaluation results in real time, with a graphical display as well as sound effects, players may improve their strumming action without playing back the performance.
Improved actionSLAM for long-term indoor tracking with wearable motion sensors
Michael Hardegger, G. Tröster, D. Roggen
doi:10.1145/2493988.2494328, pp. 1–8 (2013-09-08)

We present an indoor tracking system based on two wearable inertial measurement units for tracking in home and workplace environments. It applies simultaneous localization and mapping with user actions as landmarks, themselves recognized by the wearable sensors. The approach is thus fully wearable and no pre-deployment effort is required. We identify weaknesses of past approaches and address them by introducing heading drift compensation, stance detection adaptation, and ellipse landmarks. Furthermore, we present an environment-independent parameter set that allows for robust tracking in daily-life scenarios. We assess the method on a dataset with five participants in different home and office environments, totaling 8.7 h of daily routines and 2500 m of travelled distance. This dataset is publicly released. The main outcome is that our algorithm converges 87% of the time to an accurate approximation of the ground truth map (0.52 m mean landmark positioning error) in scenarios where previous approaches fail.
Reversible contacting of smart textiles with adhesive bonded magnets
K. Scheulen, A. Schwarz, S. Jockenhoevel
doi:10.1145/2493988.2494338, pp. 131–132 (2013-09-08)

The aim of this study was to develop reversible electrical contacting through adhesive-bonded neodymium magnets. To this end, suitable magnets and adhesives were selected against defined requirements, and the conductive bonds between textile and magnet were optimized. For the latter, three different bonds were produced and tested in terms of achievable conductivity and mechanical strength. Gold-coated neodymium magnets proved most appropriate for such a contact: their electrical resistances are low and reproducible, with sufficient mechanical strength.
Ultrasound-based movement sensing, gesture-, and context-recognition
Hiroki Watanabe, T. Terada, M. Tsukamoto
doi:10.1145/2493988.2494335, pp. 57–64 (2013-09-08)

We propose an activity and context recognition method in which the user carries a neck-worn receiver comprising a microphone, and wears small speakers on the wrists that generate ultrasound. The system recognizes gestures on the basis of the volume of the received sound and the Doppler effect: the former indicates the distance between the neck and wrists, and the latter indicates the speed of motion. Our approach thus substitutes ultrasound for the wired or wireless communication typically required in body-area motion sensing networks. The system also recognizes the room the user is in and the people near the user via ID signals generated by speakers placed in rooms and worn by people. A strength of the approach is that, for offline recognition, a simple audio recorder can serve as the receiver. We evaluate the approach in one scenario covering nine gestures/activities with 10 users. When there was no environmental sound generated by other people, the recognition rate was 87% on average. When there was such environmental sound, we compared ultrasound-only recognition, which uses features of the ultrasound band alone, against a standard approach that uses features of both the ultrasound and the environmental sound: the ultrasound-only approach achieved 65%, versus 57% for the standard approach.
I know what you are reading: recognition of document types using mobile eye tracking
K. Kunze, Yuzuko Utsumi, Yuki Shiga, K. Kise, A. Bulling
doi:10.1145/2493988.2494354, pp. 113–116 (2013-09-08)

Reading is a ubiquitous activity that many people even perform in transit, such as while on the bus or while walking. Tracking reading lets us gain insight into users' expertise level and potential knowledge -- a step towards a reading log that tracks and improves knowledge acquisition. As a first step towards this vision, we investigate in this work whether different document types can be automatically detected from visual behaviour recorded using a mobile eye tracker. We present an initial recognition approach that combines special-purpose eye movement features with machine learning for document type detection. We evaluate our approach in a user study with eight participants and five Japanese document types, achieving a recognition performance of 74% using user-independent training.
Wearable partner agent with anthropomorphic physical contact with awareness of user's clothing and posture
Tomoko Yonezawa, H. Yamazoe
doi:10.1145/2493988.2494347, pp. 77–80 (2013-09-08)

In this paper, we introduce a wearable partner agent that makes physical contact corresponding to the user's clothing, posture, and detected contexts. Physical contact is generated by combining haptic stimuli with anthropomorphic motions of the agent. The agent performs two types of behavior: a) it notifies the user of a message by patting the user's arm, and b) it expresses emotion by strongly enfolding the user's arm. Our experimental results demonstrate that haptic communication from the agent increases the intelligibility of the agent's messages and makes the agent feel more familiar.
3D from looking: using wearable gaze tracking for hands-free and feedback-free object modelling
T. Leelasawassuk, W. Mayol-Cuevas
doi:10.1145/2493988.2494327, pp. 105–112 (2013-09-08)

This paper presents a method for estimating the 3D shape of an object under observation using wearable gaze tracking. Starting from a sparse environment map generated by a simultaneous localization and mapping (SLAM) algorithm, we use the gaze direction, positioned in 3D, to extract a model of the observed object. By letting the user look at the object of interest, and without any feedback, the method determines 3D points-of-regard by back-projecting the user's gaze rays into the map. These points-of-regard are then used as seed points for segmenting the object from captured images, and the resulting silhouettes are used to estimate the object's 3D shape. We explore methods for removing outlier gaze points caused by the user saccading to non-object points, and for reducing the error in the shape estimate. Exploiting gaze information in this way enables users of wearable gaze trackers to perform tasks as complex as object modelling hands-free and even feedback-free.
Sensor-embedded teeth for oral activity recognition
Cheng-Yuan Li, Yen-Chang Chen, Wei-Ju Chen, Polly Huang, Hao-Hua Chu
doi:10.1145/2493988.2494352, pp. 41–44 (2013-09-08)

This paper presents the design and implementation of a wearable oral sensory system that recognizes human oral activities, such as chewing, drinking, speaking, and coughing. We conducted an evaluation of this oral sensory system in a laboratory experiment involving 8 participants. The results show 93.8% oral activity recognition accuracy when using a person-dependent classifier and 59.8% accuracy when using a person-independent classifier.