Optimizing Activity Recognition in Stroke Survivors for Wearable Exoskeletons
Fanny Recher, O. Baños, C. Nikamp, L. Schaake, C. Baten, J. Buurke
2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), August 2018. DOI: 10.1109/BIOROB.2018.8487740
Stroke affects mobility and, consequently, the quality of life of people afflicted by this cerebrovascular disease. Part of the research effort has focused on developing exoskeletons that support the user's joints to improve gait and help regain independence in daily life. One example is XoSoft, a soft modular exoskeleton currently being developed within the framework of the European project of the same name. On top of its assistive properties, the soft exoskeleton will provide therapeutic feedback through the analysis of kinematic data from inertial sensors mounted on the exoskeleton. Prior to these analyses, however, the activities performed by the user must be known in order to provide sufficient behavioral context for interpreting the data. Four activity recognition chains, based on machine learning algorithms, were implemented to automatically identify the nature of the activities performed by the user. To be consistent with the target application (a wearable exoskeleton), the focus was on reducing energy consumption by minimizing the sensor configuration and on making these algorithms robust. In this study, movement sensor data were collected from eleven stroke survivors performing daily-life activities. From these data, we evaluated the influence of sensor reduction and sensor position on the performance of the four algorithms, as well as their resistance to sensor failures. The results show that, in all four activity recognition chains and for each patient, the number of sensors can be reduced down to a certain limit, beyond which the positions on the body must be chosen carefully to maintain the same performance. In particular, the study shows the benefit of avoiding lower-leg and foot locations, as well as sensor positions on the affected side of the stroke patient. It also shows that robustness can be brought to the activity recognition chain when the data from the different sensors are fused at the very end of the classification process.
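The abstract does not include implementation details, but the late-fusion idea it closes with (fusing per-sensor decisions at the very end of the classification process, so a failed sensor can simply be dropped from the vote) can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration, not the authors' pipeline: the sensor placements, the windowed feature arrays, the random-forest classifier, and the `LateFusionARC` class are all hypothetical choices made for the example.

```python
# Minimal sketch of a late-fusion activity recognition chain:
# one classifier per inertial sensor, with per-sensor decisions
# fused by majority vote at the very end. Sensor names, features,
# and classifier choice are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

SENSORS = ["pelvis", "thigh_unaffected", "shank_unaffected"]  # hypothetical placement

class LateFusionARC:
    """One classifier per sensor; activity labels are integer-coded."""

    def __init__(self):
        self.models = {s: RandomForestClassifier(n_estimators=50) for s in SENSORS}

    def fit(self, features, labels):
        # features: dict mapping sensor name -> (n_windows, n_features) array
        # labels:   (n_windows,) array of integer activity codes
        for sensor, model in self.models.items():
            model.fit(features[sensor], labels)
        return self

    def predict(self, features):
        # Sensors missing from `features` (simulating a sensor failure)
        # are skipped; the vote over the remaining sensors still yields
        # a decision, which is what makes late fusion robust.
        votes = np.stack([m.predict(features[s])
                          for s, m in self.models.items() if s in features])
        # Majority vote per window (ties broken by the lowest label index).
        return np.array([np.bincount(col).argmax() for col in votes.T])
```

Dropping one sensor's entry from the `features` dict at prediction time mimics the sensor-failure scenario evaluated in the paper: because the individual classifiers never see each other's data and fusion happens only at the final voting step, the remaining classifiers continue to produce a usable decision.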