A Healthcare System for detecting Stress from ECG signals and improving the human emotional

Madhavikatamaneni, Riya K S, Anvar Shathik J, K. PoornaPushkala

2022 International Conference on Advanced Computing Technologies and Applications (ICACTA), 4 March 2022. DOI: 10.1109/ICACTA54488.2022.9753564
Communication is a strategy for conveying information to another person in a form that, done correctly, is easily understood and accepted. It takes many forms, including visual representation, body language, conversation, and written language. Speech recognition is evolving into a powerful technology, with applications in a wide range of areas, some of which require specialised hardware. Voice has a wide range of applications and is frequently regarded as the most powerful mode of communication among these. Beyond the words themselves, a speech signal carries a rich dimension of information: the speaker's attitude, health status, emotion, gender, and identity. Gender and emotion are the most significant components of this dimension for voice recognition and are considered in a number of applications within this framework. We demonstrate an emotion detection system that takes a speech signal as its main input and identifies various emotions. We offer an approach for emotion recognition from speech that uses an Artificial Neural Network (ANN) implemented on a Field Programmable Gate Array (FPGA). In this system, an ANN trained by back propagation serves as the classifier, and emotions are categorised by their intensity. Speech pre-processing, feature extraction, and classification are the proposed work's major processing stages. During feature extraction, characteristics such as the cepstrum, pitch, Mel-frequency cepstral coefficients (MFCC), and Discrete Wavelet Transform (DWT) coefficients are extracted from the data. The back propagation neural network then performs the classification task; the proposed work achieves 91.235% accuracy with a low error rate.
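The feature-extraction stage can be illustrated with a short sketch. The paper does not name its implementation libraries, so the following assumes librosa and PyWavelets; the sample rate, MFCC count, pitch range, wavelet choice, and summary statistics are all illustrative assumptions, not the authors' settings.

```python
# Hypothetical sketch of the feature-extraction stage: MFCC, pitch, and
# DWT statistics pulled from one speech clip. Libraries and parameters
# are assumptions; the paper does not specify them.
import numpy as np
import librosa
import pywt

def extract_features(path, sr=16000, n_mfcc=13, wavelet="db4"):
    """Return a single feature vector for one utterance."""
    y, sr = librosa.load(path, sr=sr)

    # Mel-frequency cepstral coefficients, averaged over time frames.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)

    # Fundamental frequency (pitch) estimate via the YIN algorithm.
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)
    pitch = np.nanmean(f0)

    # One level of the Discrete Wavelet Transform; keep summary
    # statistics of the approximation and detail coefficients.
    approx, detail = pywt.dwt(y, wavelet)
    dwt_stats = [approx.mean(), approx.std(), detail.mean(), detail.std()]

    return np.concatenate([mfcc, [pitch], dwt_stats])
```

Averaging the frame-level MFCCs into one vector is one common way to obtain a fixed-length input for a feed-forward classifier; a frame-by-frame representation would be an equally valid choice.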
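For the classification stage, a minimal software-side sketch of a back-propagation-trained ANN follows, using scikit-learn's MLPClassifier as a stand-in. The paper's network runs on an FPGA, so the hidden-layer size, solver, and train/test split below are purely illustrative assumptions.

```python
# Minimal back-propagation ANN classifier over the extracted features.
# This is a software stand-in for the paper's FPGA implementation; all
# hyperparameters here are assumptions.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

def train_emotion_classifier(X, y):
    """X: (n_samples, n_features) feature matrix; y: emotion labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y)

    scaler = StandardScaler().fit(X_train)

    # One hidden layer trained with back propagation (gradient descent
    # on the cross-entropy loss via the default 'adam' solver).
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0)
    clf.fit(scaler.transform(X_train), y_train)

    acc = clf.score(scaler.transform(X_test), y_test)
    return clf, scaler, acc
```

Standardising the features before training matters here because MFCCs, pitch, and DWT statistics live on very different numeric scales, which would otherwise skew the gradient updates.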