Wi-Fi CSI Based Human Sign Language Recognition using LSTM Network
Hasmath Farhana Thariq Ahmed, Hafisoh Ahmad, S. K. Phang, Houda Harkat, Kulasekharan Narasingamurthi
2021 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT)
Published: 2021-07-27
DOI: 10.1109/IAICT52856.2021.9532548
Citations: 5
Abstract
Human sign language gesture recognition is an emerging application in the domain of Wi-Fi-based recognition. Such applications exploit the Channel State Information (CSI) of the Wi-Fi signal, capturing human gestures as signal amplitude and phase values. Most existing gesture recognition studies use only the amplitude values and ignore the phase information; only a few works use both amplitude and phase for recognition. In addition, existing studies adopt deep learning networks, especially the Convolutional Neural Network (CNN), to improve recognition performance. This motivates the present work to study the influence of using (i) amplitude values alone and (ii) amplitude and phase values together, with a Long Short-Term Memory (LSTM) network as an alternative to CNN. Moreover, the proposed LSTM framework is fed the CSI values with little pre-processing, apart from standardizing the data to make it more suitable for classification. This paper applies the proposed LSTM framework to a public sign language gesture dataset, SignFi, with the Adam and SGDM optimizers, and analyses performance as the number of hidden units increases. The LSTM achieved its best recognition performance with Adam and 150 hidden units, reaching 99.8%, 99.5%, 99.4% and 78.0% accuracy on the lab 276, home 276, lab+home 276 and lab 150 datasets, respectively.
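As a rough illustration of the pipeline the abstract describes (standardized CSI sequences fed to an LSTM with 150 hidden units and a softmax classifier, trained with Adam or SGDM), the sketch below uses TensorFlow/Keras rather than the authors' own toolchain. The input shape (200 CSI packets × 90 amplitude features), the 276 sign classes, the single-layer depth, and the helper names `build_lstm_classifier` and `standardize` are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (not the authors' code): an LSTM classifier over standardized
# Wi-Fi CSI sequences, assuming inputs shaped (samples, time_steps, features),
# where the features concatenate per-subcarrier amplitude (and optionally phase).
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def build_lstm_classifier(time_steps, n_features, n_classes, hidden_units=150):
    """LSTM with 150 hidden units, the setting the abstract reports as best with Adam."""
    model = models.Sequential([
        layers.Input(shape=(time_steps, n_features)),
        layers.LSTM(hidden_units),           # single LSTM layer; the depth is an assumption
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(),  # or tf.keras.optimizers.SGD(momentum=0.9) for SGDM
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model


def standardize(x):
    """Zero-mean, unit-variance scaling per feature, the only pre-processing the abstract mentions."""
    mean = x.mean(axis=(0, 1), keepdims=True)
    std = x.std(axis=(0, 1), keepdims=True) + 1e-8
    return (x - mean) / std


if __name__ == "__main__":
    # Hypothetical shapes for illustration: 200 CSI packets per gesture,
    # 90 amplitude features (30 subcarriers x 3 antennas), 276 sign classes.
    x_train = standardize(np.random.randn(64, 200, 90).astype("float32"))
    y_train = np.random.randint(0, 276, size=(64,))
    model = build_lstm_classifier(time_steps=200, n_features=90, n_classes=276)
    model.fit(x_train, y_train, epochs=1, batch_size=16)
```

To feed amplitude and phase together, as in case (ii) of the study, the feature axis would simply be doubled (e.g. 180 features), with standardization applied to both halves.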