Somayya Avadut, S. Udgata. 2022 IEEE International Conference on Internet of Things and Intelligence Systems (IoTaIS), 2022-11-24. DOI: 10.1109/IoTaIS56727.2022.9975885
A Deep Learning based IoT Framework for Assistive Healthcare using Gesture Based Interface
Around the world, the number of senior citizens is growing and is expected to reach around 20 percent of the population by 2050. Recognizing its importance, the United Nations has identified health and well-being as one of the Sustainable Development Goals (SDGs). The COVID-19 pandemic opened up new challenges for contact-less interaction with, and control of, devices to ensure citizens' well-being. In this paper, our main aim is to develop an intelligent framework built around a gesture-based interface that helps senior citizens and physically challenged people interact with and control different devices using only gestures. We focus on dynamic gesture recognition using a deep learning-based Convolutional Neural Network (CNN) model. The proposed system records continuous real-time data streams from non-invasive wearable sensors. This continuous stream is split, using an adaptive threshold setting algorithm, into data segments that are most likely to contain meaningful gesture frames. The segmented data frames are used to train, test, and validate the CNN model, which classifies them into predefined classes corresponding to gestures. We used an MPU6050 Inertial Measurement Unit (IMU) sensor to collect hand and finger movement data, and the popular, widely used ESP8266 controller for data gathering, processing, and communication. We created a dataset of 36 gestures comprising the ten digits and the 26 English letters. For each gesture, 300 samples were collected from five subjects aged 21-30, giving a final dataset of 10,800 samples across 36 gestures. Six features, the linear accelerations and angular rotations along the three axes, are used for training and validation.
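The abstract does not give the details of the adaptive threshold setting algorithm, but the idea of carving gesture segments out of a continuous 6-axis IMU stream can be sketched with a simple energy-based variant. Everything here is an assumption for illustration: the function name `segment_gestures`, the windowed-energy statistic, and the `mean + k * std` threshold are not taken from the paper.

```python
import numpy as np

def segment_gestures(stream, win=10, k=1.5, min_len=20):
    """Split a continuous 6-axis IMU stream into candidate gesture segments.

    stream  : (T, 6) array of [ax, ay, az, gx, gy, gz] samples.
    A sample is 'active' when its windowed signal energy exceeds an
    adaptive threshold (mean + k * std of the energy over the stream).
    Runs of active samples at least min_len long become one segment.
    Returns a list of (start, end) index pairs.
    """
    # per-sample energy, smoothed with a moving average of length `win`
    energy = np.convolve(np.sum(stream ** 2, axis=1),
                         np.ones(win) / win, mode="same")
    thresh = energy.mean() + k * energy.std()  # adaptive threshold
    active = energy > thresh

    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i                      # segment opens
        elif not a and start is not None:
            if i - start >= min_len:       # discard spurious blips
                segments.append((start, i))
            start = None
    if start is not None and len(active) - start >= min_len:
        segments.append((start, len(active)))
    return segments
```

Because the threshold is computed from the stream itself, the same code tolerates different sensor gains and resting-noise levels without manual tuning, which is presumably the motivation for an adaptive rather than fixed threshold.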
Using the adaptive threshold selection algorithm, the proposed model segments 93.75% of data segments correctly, and the CNN classifier recognizes 98.67% of gestures correctly.
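The abstract does not publish the CNN architecture, so the following is only a shape-level sketch of how a segmented (T, 6) window of accelerometer and gyroscope features can be mapped to 36 gesture classes: one 1-D convolution over time, global average pooling, and a dense softmax head. The layer sizes, kernel width, and function names are illustrative assumptions, not the authors' model, and the parameters below are untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1-D convolution with ReLU: x (T, C_in), w (K, C_in, C_out), b (C_out)."""
    K, _, C_out = w.shape
    T_out = x.shape[0] - K + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        # contract the kernel window over time (K) and input channels (C_in)
        out[t] = np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def classify(window, params):
    """window: (T, 6) segment of [ax, ay, az, gx, gy, gz]; returns 36 class probabilities."""
    w1, b1, w2, b2 = params
    h = conv1d(window, w1, b1)   # temporal feature maps: (T-K+1, 16)
    h = h.mean(axis=0)           # global average pooling -> (16,)
    logits = h @ w2 + b2         # dense layer to 36 gesture classes
    e = np.exp(logits - logits.max())
    return e / e.sum()           # softmax

# Illustrative, randomly initialised parameters:
# kernel width 5 over 6 input channels, 16 filters, then 16 -> 36 dense.
params = (rng.normal(0, 0.1, (5, 6, 16)), np.zeros(16),
          rng.normal(0, 0.1, (16, 36)), np.zeros(36))

probs = classify(rng.normal(size=(100, 6)), params)  # a 100-sample segment
```

Global average pooling is one plausible way to let the classifier accept the variable-length segments that the adaptive thresholding produces, since the pooled feature vector has a fixed size regardless of T.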