TheOdor: A Step Towards Smart Home Applications with Electronic Noses
C. Dang, A. Seiderer, E. André
DOI: 10.1145/3266157.3266215
This paper presents preliminary results of the ongoing project TheOdor, which explores the potential of electronic noses that make use of commodity gas sensors (MOS, MEMS) for applications in the smart home, for example, classifying human activities based on the odors they generate. We describe the system and its components and report classification results from first validation experiments.
Activity Recognition using Head Worn Inertial Sensors
Johann-Peter Wolff, Florian Grützmacher, A. Wellnitz, C. Haubelt
DOI: 10.1145/3266157.3266218
Human activity recognition using inertial sensors is an increasingly common feature of smartphones and smartwatches, providing information on the sports and physical activities of their users. However, while the position a smartphone is worn in varies between persons and circumstances, a smartwatch moves constantly, in rhythm with its user's arms. Both problems make activity recognition less reliable. Attaching an inertial sensor to the head provides reliable information on the movements of the whole body without being superimposed by many additional movements, and can be achieved by fixing sensors to glasses, helmets, or headphones. In this paper, we present a system using head-mounted inertial sensors for human activity recognition. We compare it to existing research and discuss possible advantages and disadvantages of positioning a single sensor on the head to recognize physical activities. Furthermore, we evaluate the benefits of different sensor configurations for activity recognition.
Combining off-the-shelf Image Classifiers with Transfer Learning for Activity Recognition
Amit Kumar, Kristina Yordanova, T. Kirste, Mohit Kumar
DOI: 10.1145/3266157.3266219
Human Activity Recognition (HAR) plays an important role in many real-world applications. Various techniques have been proposed for sensor-based HAR in daily health monitoring, rehabilitative training, and disease prevention. However, non-visual sensors in general and wearable sensors in particular have several limitations: acceptability and willingness to wear the sensors, battery life, ease of use, and the size and effectiveness of the sensors. Adopting a vision-based approach to human activity recognition is therefore a more viable option, since its versatility allows applications to be deployed in a wide range of domains. Deep learning, the most popular technique for vision-based activity recognition, however requires huge domain-specific datasets for training, which are time-consuming and expensive to collect. To address this problem, this paper proposes a transfer learning technique that adopts a vision-based approach to HAR using already trained deep learning models. A new stochastic model is developed by borrowing the concept of Dirichlet allocation from Latent Dirichlet Allocation (LDA) to infer the posterior distribution of the variables relating the labels predicted by the deep learning classifiers to the corresponding activities. Results show that the model achieves an average accuracy of 95.43% during training, compared to 74.88% for a decision tree and 61.4% for an SVM.
Fewer Samples for a Longer Life Span: Towards Long-Term Wearable PPG Analysis
Florian Wolling, Kristof Van Laerhoven
DOI: 10.1145/3266157.3266209
Photoplethysmography (PPG) sensors have become a prevalent feature of current wearables, as the cost and size of PPG modules have dropped significantly. Research in the analysis of PPG data has recently expanded beyond the fast and accurate characterization of heart rate, into the adaptive handling of artifacts within the signal and even the capture of respiration rate. In this paper, we instead explore using state-of-the-art PPG sensor modules for long-term wearable deployment and the observation of trends over minutes rather than seconds. By focusing specifically on lowering the sampling rate and analyzing the frequency spectrum alone, our approach minimizes the costly illumination-based sensing, can be used to detect the dominant frequencies of heart rate and respiration rate, and also allows inferences about the activity of the sympathetic nervous system. We show in two experiments that such detections and measurements can still be achieved at sampling rates as low as 10 Hz on a power-efficient platform. This approach enables miniature sensor designs that monitor average heart rate, respiration rate, and sympathetic nerve activity over longer stretches of time.
Real-Time Joint Axes Estimation of the Hip and Knee Joint during Gait using Inertial Sensors
Markus Nordén, Philipp Müller, T. Schauer
DOI: 10.1145/3266157.3266213
Inertial Measurement Units (IMUs) have proven to be a promising candidate for joint kinematics assessment during human locomotion. The benefits of IMU-based joint angle measurement are ease of handling, flexibility, and low cost. A known limitation, however, is that the joint axes must be identified in the coordinate frames of the attached IMUs before IMU measurements can be decomposed into joint angles. Conventionally, this requires careful alignment of the IMUs with respect to the body segments and/or dedicated calibration motions. In this paper, a novel approach is proposed to estimate the joint axes of the hip and knee joint during gait. Our method is easy to use, self-calibrating, and real-time capable, using only the IMU data obtained during gait. Going beyond prior methods, the algorithm exploits the periodicity of gait to deal with motions with three (rotational) degrees of freedom (3-DoF). Experiments with 8 healthy subjects walking on a motor-driven treadmill were conducted. The joint axes converged to the expected axes in all trials, and the convergence times averaged less than 15 seconds.
A Machine Learning Approach to Violin Bow Technique Classification: a Comparison Between IMU and MOCAP systems
D. Dalmazzo, S. Tassani, R. Ramírez
DOI: 10.1145/3266157.3266216
Motion capture (MOCAP) systems have been used to analyze body motion and posture in biomedicine, sports, rehabilitation, and music. With the aim of comparing the precision of low-cost motion tracking devices (e.g. the Myo) with that of MOCAP systems in the context of music performance, we recorded MOCAP and Myo data of a top professional violinist executing four fundamental bowing techniques (Détaché, Martelé, Spiccato, and Ricochet). Using the recorded data, we applied machine learning techniques to train models to classify the four bowing techniques. Despite intrinsic differences between the MOCAP and low-cost data, the Myo-based classifier achieved slightly higher accuracy than the MOCAP-based classifier. This result shows that it is possible to develop music-gesture learning applications based on low-cost technology that can be used in home environments by self-learning practitioners.
Respiration Rate Estimation with Depth Cameras: An Evaluation of Parameters
Jochen Kempfle, Kristof Van Laerhoven
DOI: 10.1145/3266157.3266208
Depth cameras are known to be capable of picking up the small changes in distance to a user's torso, which can be used to estimate respiration rate. Several studies have shown that under certain conditions, the respiration rate of a non-moving user facing the camera can be accurately estimated from parts of the depth data. To date, however, it is not clear which factors might hinder the application of this technology in arbitrary settings, which areas of the torso need to be observed, and how readings are affected for persons at larger distances from the RGB-D camera. In this paper, we present a benchmark dataset consisting of point cloud data from a depth camera that monitors 7 volunteers at varying distances, with different methods of pinpointing the person's torso, and at varying breathing rates. Our findings show that the respiration signal's signal-to-noise ratio becomes debilitating as the distance to the person approaches 4 metres, and that larger windows over the person's chest work particularly well. The sampling rate of the depth camera was also found to significantly impact signal quality.
Dense 3D Optical Flow Co-occurrence Matrices for Human Activity Recognition
Rawya Al-Akam, D. Paulus
DOI: 10.1145/3266157.3266220
In this paper, a new activity recognition technique is introduced based on gray-level co-occurrence matrices (GLCMs) computed from the dense 3D optical flow of the input RGB and depth videos. Co-occurrence matrices are one of the earliest techniques used for image texture analysis; they represent the distribution of intensities and information about the relative positions of neighboring pixels in an image. In this work, we propose a new method that extracts feature vectors from the well-known Haralick features of GLCMs to describe the flow pattern, measuring meaningful properties such as energy, contrast, homogeneity, entropy, correlation, and sum average in order to capture local spatial and temporal characteristics of the motion through the neighboring optical flow orientations and magnitudes. To evaluate the proposed method on the activity recognition problem, we apply a recognition pipeline based on a bag of local spatial and temporal features and compare the recognition accuracy of our method using three types of machine learning classifiers: random forest, support vector machine, and k-nearest neighbor. Experimental results on two well-known datasets, the Gaming dataset (G3D) and the Cornell Activity Dataset (CAD-60), demonstrate that our method outperforms several widely employed spatial and temporal feature descriptors.
Exploring Accelerometer-based Step Detection by using a Wheeled Walking Frame
G. Bieber, Marian Haescher, Paul Hanschmann, Denys J. C. Matthies
DOI: 10.1145/3266157.3266212
Step detection with accelerometers is a very common feature that smart wearables already include. However, when a wheeled walking frame (rollator) is used, current algorithms may be of limited use, since a different type of motion is being exerted. In this paper, we uncover these limitations of current wearables in a pilot study. Furthermore, we investigate accelerometer-based step detection for wheeled walking frames, with an accelerometer mounted on the frame and at the user's wrist. Our findings include knowledge of the signal propagation along each axis, the required sensor quality, and the impact of different surfaces and floor types. In conclusion, we outline a new step detection algorithm based on accelerometer input data. Our algorithm can significantly empower future off-the-shelf wearables with the capability to reliably detect the steps of elderly people using a wheeled walking frame. This can help to assess health status with regard to behavior and the motor system, and even to track the progression of certain diseases.
Towards a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks
K. Tsiakas, Michalis Papakostas, J. Ford, F. Makedon
DOI: 10.1145/3266157.3266222
This paper outlines the development of a task-driven framework for multimodal fatigue analysis during physical and cognitive tasks. While fatigue is a common symptom across several chronic neurological diseases, such as multiple sclerosis (MS), traumatic brain injury (TBI), and cerebral palsy (CP), it remains poorly understood for various reasons, including its subjectivity and its variability among individuals. Towards this end, we propose a task-driven data collection framework for multimodal fatigue analysis in the domain of MS, combining behavioral, sensory, and subjective measures while users perform a set of both physical and cognitive tasks, including assessment tests and Activities of Daily Living (ADLs).