Robust Real-time Automatic Voice Command Based on Raspberry Pi for Assisting Disabled People

A. Mnassri, Sihem Nasri, Mohamed Boussif, A. Cherif

2022 IEEE 9th International Conference on Sciences of Electronics, Technologies of Information and Telecommunications (SETIT), May 28, 2022. DOI: 10.1109/SETIT54465.2022.9875545
Walkers are widely used by people with limited mobility, and much research is underway on human-machine interfaces (HMIs) that use physiological signals to improve the control of mechanical mobility devices, mainly wheelchairs. Generating exact control commands from physiological signals through a suitable HMI is a real challenge, because severely disabled people cannot operate classic wheelchairs. In this context, this work develops a new voice-signal control system to meet the needs of this physically disabled population. The system is divided into two parts. The first covers the recognition, processing, and classification of speech signals: relative spectral perceptual linear prediction (RASTA-PLP) and the discrete wavelet transform (DWT) are combined to process the speech and extract its features. The second part is the mechanical module, i.e., wheelchair control through a motor drive circuit. A microphone serves for real-time voice recording, and a Raspberry Pi running a Linux kernel is used as the processor. To make the system more user-friendly and reliable, a voice-reception mode is integrated into the wheelchair. The model works successfully, with an average recognition rate of 100% in a clean environment and between 80% and 100% in a noisy environment.
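To make the DWT stage of the feature-extraction pipeline concrete, the sketch below performs a single-level Haar wavelet decomposition of a speech frame, splitting it into approximation (low-pass) and detail (high-pass) coefficients. This is an illustrative assumption: the paper does not specify the wavelet family, decomposition depth, or frame length used, so the Haar wavelet and the 8-sample toy frame here are placeholders.

```python
import math

def haar_dwt(signal):
    """One-level Haar DWT.

    Returns (approximation, detail) coefficient lists, each half the
    length of the input. The wavelet family and number of levels are
    assumptions; the paper does not state them.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    s = math.sqrt(2.0)
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

# Toy 8-sample "speech frame" standing in for a real windowed recording.
frame = [1.0, 2.0, 3.0, 4.0, 4.0, 3.0, 2.0, 1.0]
approx, detail = haar_dwt(frame)
```

In a full pipeline of this kind, the decomposition would typically be applied recursively to the approximation coefficients, and statistics of the resulting sub-bands would be concatenated with the RASTA-PLP coefficients to form the feature vector fed to the classifier.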