D. Kosmopoulos, Antonis A. Argyros, C. Theoharatos, V. Lambropoulou, C. Panagopoulos, Ilias Maglogiannis
This paper presents the HealthSign project, which addresses sign language recognition with a focus on medical interaction scenarios. Deaf users will be able to communicate with a physician in their native sign language: the continuous signs will be translated to text and presented to the physician, and, similarly, the physician's speech will be recognized and presented as text to the deaf user. Two alternative versions of the system will be developed, one performing recognition on a server and the other on a mobile device.
"The HealthSign Project: Vision and Objectives." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201547
Ashwin Ramesh Babu, Joe Cloud, Michail Theofanidis, F. Makedon
A considerable amount of research is being carried out on robot-based rehabilitation techniques. One of the main concerns when building a smart rehabilitation system is its ability to adapt based on the user's experience. In this poster we present a smart rehabilitation system that recognizes strain and negative emotions from the participant's facial expressions and adjusts the force it exerts accordingly. The accuracy of the facial expression recognition is assessed, and the performance of the system as a whole is estimated through user surveys.
"Facial Expressions as a Modality for Fatigue Detection in Robot-based Rehabilitation." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3203168
Qinyuan Fang, Maria Kyrarini, Danijela Ristić-Durrant, A. Gräser
In this paper, an RGB-D camera-based framework for the recognition and tracking of the human mouth, aimed at autonomous robotic feeding, is presented. The method employs a state-of-the-art face detection algorithm to acquire 2D facial landmarks, and the corresponding 3D position of the human mouth is calculated using the depth information. In addition, a 3D point cloud visualizer of the human face with marked facial landmarks is provided. The proposed system is applied in real-time vision-based robot control. Experiments indicate the validity of the proposed work in localising the mouths of different subjects served by the robot with a cup of water.
"RGB-D Camera based 3D Human Mouth Detection and Tracking Towards Robotic Feeding Assistance." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201576
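The landmark-to-3D step this abstract describes is, in essence, a pinhole back-projection of a detected 2D mouth landmark using the depth value at that pixel. A minimal sketch follows; the function name and the camera intrinsics (fx, fy, cx, cy) are illustrative assumptions, not values from the paper:

```python
def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project pixel (u, v) with depth in metres to a 3D camera-frame point."""
    z = depth_m
    x = (u - cx) * z / fx  # horizontal offset from principal point, scaled by depth
    y = (v - cy) * z / fy  # vertical offset, likewise
    return (x, y, z)

# e.g. a mouth landmark at the image centre, 0.6 m from the camera
mouth_3d = pixel_to_3d(320, 240, 0.6, fx=525.0, fy=525.0, cx=320.0, cy=240.0)
```

In a real pipeline the depth value would be read from the registered depth image at the landmark's pixel, typically with some neighbourhood averaging to suppress sensor noise.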
Fadi Al Machot, Mouhannad Ali, S. Ranasinghe, A. Mosa, K. Kyamakya
In Active and Assisted Living (AAL) environments, one of the major tasks is to ensure that older people and persons with disabilities feel well in their environment. Unfortunately, it remains difficult to design a machine learning model that is trained on one group of subjects using physiological sensors and still performs well when tested on other subjects. This paper proposes a dynamic calibration algorithm that shows promising results for subject-independent human emotion recognition. The calibration module adapts to the features of a new subject by finding the most similar subject in the training data. The overall performance of this approach is tested on the well-known MAHNOB dataset. The results show promising improvements across different evaluation metrics, e.g., sensitivity and specificity.
"Improving Subject-independent Human Emotion Recognition Using Electrodermal Activity Sensors for Active and Assisted Living." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201523
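The calibration idea above — match a new subject to the most similar training subject before applying a model — can be sketched as a nearest-neighbour search over per-subject feature summaries. Everything here (function name, the use of per-subject mean vectors, the toy values) is an illustrative assumption, not the paper's actual procedure:

```python
import math

def most_similar_subject(new_features, subject_features):
    """Return the id of the training subject whose summary feature vector
    lies closest, in Euclidean distance, to the new subject's features."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(subject_features, key=lambda sid: dist(new_features, subject_features[sid]))

# toy electrodermal-activity feature summaries (illustrative values only)
training = {"s01": [0.2, 1.1], "s02": [0.8, 0.3], "s03": [0.5, 0.7]}
best = most_similar_subject([0.75, 0.35], training)
```

The matched subject's model (or normalization parameters) would then be reused for the new subject, which is what makes the recognition subject-independent at deployment time.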
E. E. Lithoxoidou, I. Paliokas, Ioannis Gotsos, S. Krinidis, Athanasios Tsakiris, K. Votis, D. Tzovaras
This paper presents a gamified framework designed to offer behavioral change support and treatment adherence services to people living with dementia (PLWD), their caregivers and medical/social professionals. A flexible and scalable ICT architecture is proposed to support highly personalized, gamified services for all groups involved: cognitive skills training and independent living for PLWD, training and support for caregivers, and clinical and social services for professionals. The outcomes of this approach are delivered through a set of gamification concepts running in parallel to create motivation for user commitment and for achieving the desired behavioral change. After projecting all user group expectations onto a social game canvas, the impact evaluation will assess the intended effects of the proposed gamification approach on the welfare of PLWD and their caregivers.
"A Gamification Engine Architecture for Enhancing Behavioral Change Support Systems." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201561
Multiple techniques are used to extract physiological signals from the human body. These signals provide a reliable way to identify the physical and mental state of a person at any given point in time. However, these techniques require contact with and cooperation from the individual, as well as human effort to connect the devices and collect the needed measurements. Moreover, such methods can be invasive, time-consuming, and in many cases infeasible. Recent efforts have sought alternatives that extract these measurements with non-contact, efficient techniques. In this paper we provide a survey that explores different approaches for extracting vital signs from thermal images, and we review applications that could potentially leverage these techniques.
Christian Hessler, M. Abouelenien, Mihai Burzo. "A Survey on Extracting Physiological Measurements from Thermal Images." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3197792
Sebastian Günther, Sven G. Kratz, Daniel Avrahami, M. Mühlhäuser
Today, remote collaboration between field workers and remotely located experts mainly relies on traditional communication channels, such as voice or video conferencing. Those systems may not be suitable in every situation, and communication becomes cumbersome when the two parties do not share common ground. In this paper, we explore three supporting communication channels based on audio, visual, and tactile cues. We built a prototype application implementing these cues and evaluated it in a user study. Based on the user feedback, we report first insights for building remote assistance systems that utilize additional cues.
"Exploring Audio, Visual, and Tactile Cues for Synchronous Remote Assistance." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201568
The impact of information, coupled with the effects of innovation, is profound on all aspects of city life, from transport planning and energy use reduction to care provision and assisted living. It also extends to new ways of organising communities, as well as access to the political process. As the idea that information is key to the design and management of future cities matures in the relevant communities of architects, planners, engineers, computer scientists and urban innovators, the time is right to also consider what citizenship skills are required. Familiarity, if not proficiency, in 'digital' skills emerges as an essential aspect of future citizenship. By this we mean not only efficient digital consumption skills, but also digital creation skills such as computational thinking and coding, entrepreneurship and systems thinking, and information architecting, as well as a risk-informed perception of data privacy and security. The challenges of delivering such a skillset are many, from designing a 21st-century curriculum to ensuring fair access to technology for people of all abilities, races, genders, ages and classes.
T. Tryfonas, Tom Crick. "Public Policy and Skills for Smart Cities: The UK Outlook." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3203170
J. Wolfartsberger, Jean D. Hallewell Haslwanter, R. Froschauer, René Lindorfer, M. Jungwirth, Doris Wahlmüller
Small lot sizes in modern manufacturing present new challenges for people performing manual assembly tasks. Assistance systems, including instruction systems and collaborative robots, can provide the flexibility needed while also reducing the number of errors. This session is designed to give participants a better understanding of the strengths and limitations of the different technologies with respect to their practical implementation in companies. Several new technological solutions designed for companies will be presented, and participants will have the chance to gain first-hand experience with some of them.
"Industrial Perspectives on Assistive Systems for Manual Assembly Tasks." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201552
M. Haseeb, Maria Kyrarini, Shuo Jiang, Danijela Ristić-Durrant, A. Gräser
The development of assistive robotics to enable people with disabilities to work is a challenging topic. Hands-free interfaces can allow a person with severe motor impairments to control robotic manipulators. This paper focuses on developing a head gesture interface that enables the end-user to control a dual-arm industrial robot. A motion sensor is placed on the end-user's head, a support vector machine is used to recognize the head gestures, and an intuitive graphical user interface helps the user navigate through the different control modes. To evaluate the proposed framework, an industrial pick-and-place task was performed.
"Head Gesture-based Control for Assistive Robots." In Proceedings of the 11th PErvasive Technologies Related to Assistive Environments Conference, June 2018. DOI: https://doi.org/10.1145/3197768.3201574
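At inference time, SVM-based gesture recognition like the above reduces to evaluating a decision function per gesture and picking the winner. The sketch below uses hand-picked one-vs-rest linear-SVM weights purely for illustration; the feature choice (per-axis head motion statistics), the weight values, and the names are assumptions, not the paper's trained model, which would be fitted offline (e.g. with an SVM library) on recorded sensor data:

```python
# Hypothetical one-vs-rest linear SVMs: gesture -> (weight vector, bias).
# Features here stand in for per-axis head motion statistics.
MODELS = {
    "nod":   ([1.0, -0.5, 0.0], -0.2),
    "shake": ([-0.5, 1.0, 0.0], -0.2),
}

def classify_gesture(features):
    """Pick the gesture whose SVM decision value w.x + b is largest."""
    scores = {
        gesture: sum(wi * xi for wi, xi in zip(w, features)) + b
        for gesture, (w, b) in MODELS.items()
    }
    return max(scores, key=scores.get)

gesture = classify_gesture([1.0, 0.1, 0.0])  # motion dominated by the first axis
```

The recognized gesture would then be mapped by the graphical user interface to a robot command in the currently active control mode.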