Smart Waste Collection Monitoring System using IoT
Saurabh Pargaien, Amrita Verma Pargaien, Dikendra K. Verma, Vatsala Sah, N. Pandey, Neetika Tripathi
2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA) | Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544982
Abstract: Timely emptying of dustbins is a significant challenge; left unaddressed, overflowing bins make an area unhygienic and pose several health risks. The current waste-management system in small and densely populated cities is sluggish, leaving garbage strewn across the city. Waste is generated so quickly that if a garbage collector skips a location for even a couple of days, conditions become hazardous. During the COVID-19 pandemic it was especially important to monitor and dispose of medical waste properly, and handling ordinary household garbage was also difficult under lockdown. In such situations, automatic monitoring and control of garbage using IoT can play a significant role in waste management. This paper proposes a smart, fast approach: a city-wide network of dustbins equipped with sensors and microcontrollers, monitored by a central control unit, which speeds up collection intelligently and eliminates the hazards caused by the current sluggish system. The proposed system also accounts for unreliable internet connectivity.
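The fill-level sensing such a smart bin relies on can be sketched as follows; the ultrasonic lid sensor, bin depth, and alert threshold are illustrative assumptions, not the authors' exact design.

```python
# Hypothetical sketch of a smart-bin fill-level check: a sensor mounted
# under the lid reports the distance (cm) down to the garbage surface.
BIN_DEPTH_CM = 100.0      # assumed bin depth
ALERT_THRESHOLD = 0.8     # notify the central unit at 80% full (assumed)

def fill_ratio(distance_cm: float) -> float:
    """Convert a lid-to-surface distance reading into a 0..1 fill ratio."""
    ratio = 1.0 - distance_cm / BIN_DEPTH_CM
    return min(max(ratio, 0.0), 1.0)   # clamp noisy readings

def needs_collection(distance_cm: float) -> bool:
    """True when the bin is full enough to report to the control unit."""
    return fill_ratio(distance_cm) >= ALERT_THRESHOLD
```

In the proposed network, each node would transmit such a flag to the central control unit, buffering it locally whenever connectivity drops.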
Plant Leaf Disease Classification using Deep Learning: A Survey
Deeksha Agarwal, Meenu Chawla, Namita Tiwari
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544640
Abstract: As the global population grows, the food supply must increase correspondingly while crops are simultaneously protected from numerous fatal diseases. Traditionally, plant diseases were identified by the naked eye, relying on the experience of farmers and plant pathologists; this process is difficult, time-consuming, and sometimes inaccurate, resulting in significant economic loss in agribusiness. Later studies applied machine learning to plant disease identification, but the findings were not promising and the methods were too slow for practical use. Recently, convolutional neural networks (CNNs) have made an essential breakthrough in computer vision, thanks to automatic feature extraction and their ability to deliver effective results from small datasets in a short time compared with classical machine learning. This paper discusses the challenges of identifying plant leaf diseases and reviews state-of-the-art methods and algorithms that address the inaccurate and time-consuming analysis involved in disease detection and classification.
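The automatic feature extraction that gives CNNs their edge rests on learned convolution filters. A minimal plain-Python version of the underlying operation (cross-correlation, as implemented in most deep-learning libraries) looks like this; the toy "leaf" image and the hand-picked edge kernel are illustrative only, not from the surveyed models.

```python
def conv2d(image, kernel):
    """Valid-mode 2-D convolution over nested lists (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(iw - kw + 1)]
            for i in range(ih - kh + 1)]

# A vertical-edge filter responds where a bright (healthy) region meets a
# dark (lesion) region, the kind of local cue a trained CNN picks up on
# its own instead of requiring hand-crafted features.
leaf = [[9, 9, 1, 1],
        [9, 9, 1, 1],
        [9, 9, 1, 1],
        [9, 9, 1, 1]]
edge_kernel = [[1, -1],
               [1, -1]]
```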
The Current Situation and Future Development Trend of Computer and Chip Applications in the Era of Big Data
Suping Sun
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9545040
Abstract: As society develops rapidly, so does information technology. Driven by technologies such as big data, chip design and computer architecture must follow the new directions of information technology and must account for the varied information handled across different applications and operating platforms, so that the challenges big data brings can be anticipated. Drawing on big-data techniques such as data collection, data mining, and data processing, this paper surveys the current applications of computer architecture and chip design and analyzes future development trends.
Unintended Notification Swipe Detection System
Ankita Guleria, Ramandeep Kaur
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544898
Abstract: Users often make touch or swipe errors while interacting with mobile phones; one common concern is accidentally swiping away important notifications. We show that these unintentional notification swipes can be detected accurately using simple touch and swipe features recorded while the gesture is performed. The phones' built-in touch and grip sensors were used to record data from 20 participants asked to perform intentional and unintentional touch gestures. The features are extracted from the user's hand movement on the screen and from whether the grip is single-handed or two-handed. In addition to three previously published features (Touch Time, Swipe Velocity, and Average Touch Size), we introduce three novel features: Swipe Stretch, grip-based Nearest Edge Gap, and Notification Expansion Action. We trained our model with a Random Forest (RF) classifier and with neural networks (NN), achieving accuracies of 98.8% and 100% respectively. The results show that the model can detect unintentional notification swipes and touch gestures in real time. The novelty of our work lies in a considerable improvement in accuracy over previously published results, attributable to a larger feature set that includes the proposed features.
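Features of this kind are simple functions of the raw gesture record. A plausible shape for three of them (Touch Time, Swipe Stretch, Swipe Velocity) is sketched below; the field names, units, and exact definitions are assumptions and may differ from the authors'.

```python
import math
from dataclasses import dataclass

@dataclass
class SwipeEvent:
    """Hypothetical raw record of one swipe gesture."""
    x0: float   # touch-down x (px)
    y0: float   # touch-down y (px)
    x1: float   # touch-up x (px)
    y1: float   # touch-up y (px)
    t0: float   # touch-down time (s)
    t1: float   # touch-up time (s)

def touch_time(e: SwipeEvent) -> float:
    return e.t1 - e.t0

def swipe_stretch(e: SwipeEvent) -> float:
    """Euclidean distance between the swipe's endpoints."""
    return math.hypot(e.x1 - e.x0, e.y1 - e.y0)

def swipe_velocity(e: SwipeEvent) -> float:
    dt = touch_time(e)
    return swipe_stretch(e) / dt if dt > 0 else 0.0
```

Vectors of such features, together with the grip-derived ones, would then be fed to the RF or NN classifier.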
Application Analysis of Image Enhancement Method in Deep Learning Image Recognition Scene
L. Ding, Wei-Hau Du
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544905
Abstract: This paper analyzes the application of image enhancement methods to deep-learning image recognition scenes. Recognizing text in natural scenes is relatively difficult because of their complex and diverse environments, and it is usually performed in two steps: text detection and text recognition. To improve on traditional methods, this paper integrates deep-learning models into an efficient core framework for handling complex data. The text-recognition method uses a sequence recognition network with a bidirectional decoder based on adjacent attention weights to recognize text images and predict the output. The core system model is then demonstrated, and the proposed model is evaluated on public reference datasets; experimental verification shows that it is efficient.
Study of Various Dimensionality Reduction and Classification Algorithms on High Dimensional Dataset
Smit Shah, S. Joshi
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544602
Abstract: A drawback of huge datasets is that analysis becomes hard, and even computationally infeasible. Health care, finance, retail, and education are a few of the data-mining applications that involve very high-dimensional data. A large number of dimensions introduces the well-known "curse of dimensionality", which makes classification difficult and lowers the accuracy of machine-learning classifiers. This paper computes a threshold (35% of the original dimensionality) to which the data can be reduced to obtain the best accuracy. It then applies dimensionality-reduction techniques (PCA, LDA, and SVD) to a very high-dimensional image dataset to find the best dimensional fit, and applies several ML classification algorithms (logistic regression, random forest, naive Bayes, and SVM) to find the best classifier for the reduced dataset. The comparative study concludes that PCA+SVM, LDA+Random Forest, and SVD+SVM produce the best results among all combinations tested.
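The 35% threshold can be read as "keep ceil(0.35 · d) components". A compact sketch of that reduction, using PCA computed via SVD on synthetic data (the paper's actual pipeline, dataset, and tooling differ):

```python
import numpy as np

def reduce_to_fraction(X: np.ndarray, fraction: float = 0.35) -> np.ndarray:
    """Project X onto its top ceil(fraction * n_features) principal components."""
    Xc = X - X.mean(axis=0)                        # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    k = max(1, int(np.ceil(fraction * X.shape[1])))
    return Xc @ Vt[:k].T                           # scores in the top-k basis

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))    # 50 samples, 20 features (synthetic)
Z = reduce_to_fraction(X)        # 20 features * 0.35 -> 7 retained components
```

The reduced matrix Z would then be handed to a classifier such as an SVM, mirroring the PCA+SVM combination the study found strongest.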
An Energy Efficient Novel Routing Protocol in Wireless Sensor Networks (WSN)
S. Sindhura, S. Praveen, N. Rao, M. Arunasafali
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544783
Abstract: Wireless sensor networks (WSNs) are widely used in many applications. A WSN consists of sensor nodes: small, battery-powered devices interconnected and distributed throughout the network. Every node needs steady energy to transfer data from a source node to a destination node, and WSNs face several challenges, such as limited node energy, accurate routing, and data loss. The goal is to transmit data between nodes while satisfying user requirements and avoiding threats. This paper introduces a Novel Routing Protocol (NRP) to overcome these routing issues while keeping node energy levels steady. Results demonstrate the performance of NRP and its accuracy.
Application of Engine Technology and in 3D Animation Production
Linye Tang
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544664
Abstract: This article applies optimized 3D game-engine technology and 3D graphics architecture to the production of 3D graphics animation. It first reviews the research status of traditional 3D animation and analyzes current development trends. It then introduces the application features of mainstream 3D graphics engines and presents the architecture and system design of a 3D engine animation system. Finally, it shows that using the designed engine system and architecture optimizations in 3D animation production gives a certain impetus to the realization of 3D animation algorithms.
Big Data Means to Optimize the Allocation of Preschool Education Resources: Dynamic Simulation Algorithm based on Python
Sumei Li
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9544702
Abstract: Advances in science and technology and the growth of the Internet keep driving reform in preschool education, and the development of big data in particular has brought it new opportunities: big-data algorithms can optimize the allocation of preschool education resources. This paper studies that optimization using a Python-based dynamic simulation algorithm. It first analyzes the shortcomings of current domestic preschool resource allocation, then introduces the development of big-data technology and the Python-based dynamic simulation algorithm, and finally applies the algorithm to optimize the allocation of preschool education resources.
Design of Anti-Stuttering Device with Silence Ejection Speech algorithm using Arduino
E. L. Dhivya Priya, S. Karthik, A. Sharmila, K. R. G. Anand
Pub Date: 2021-09-02 | DOI: 10.1109/ICIRCA51532.2021.9545066
Abstract: Stuttering is a speech disorder characterized by the repetition of sounds, syllables, or words; it disrupts the normal flow of speech and is accompanied by struggle behaviors. Stuttering affects mental well-being because it makes it hard to communicate with others and to maintain interpersonal relationships, and its negative influence during job interviews casts doubt on a person's talents and skills. More than 70 million people stutter, about 1% of the world's population; although stuttering is most common in childhood, for some it persists for many years, and public speaking remains a major hurdle. This paper proposes an Arduino-based anti-stuttering device built around a silence-ejection speech algorithm. To remove long gaps and convert a stuttered input signal into an un-stuttered one, three software platforms are interconnected: Audacity, MATLAB, and Python. Audacity, an open-source digital audio editor and recorder, stores the input stuttered signal. The stored signal is fed to MATLAB for magnitude filtering, which works with three sets of values; the best value from the comparison is then passed to silence ejection. The silence-ejected output is converted from speech to text, which allows repeated words to be removed quickly. The final repetition-free text is fed to the Arduino board for text-to-speech conversion, and the resulting un-stuttered signal drives a speaker. Converting stuttered speech to un-stuttered speech in this way can help people who stutter or stammer maintain psychological balance.
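The silence-ejection step can be illustrated with a short-time-energy gate: frames whose energy falls below a threshold are dropped. The frame length, threshold, and toy signal below are assumptions for illustration, not the authors' parameters; a real implementation would operate on sampled audio exported from Audacity.

```python
def eject_silence(samples, frame_len=4, threshold=0.01):
    """Drop frames whose mean squared amplitude falls below the threshold."""
    voiced = []
    for i in range(0, len(samples), frame_len):
        frame = samples[i:i + frame_len]
        energy = sum(s * s for s in frame) / len(frame)
        if energy >= threshold:        # keep only voiced frames
            voiced.extend(frame)
    return voiced

signal = [0.5, -0.4, 0.6, -0.5,       # voiced frame
          0.0, 0.001, -0.001, 0.0,    # long silent gap (ejected)
          0.3, -0.3, 0.4, -0.2]       # voiced frame
```

The gated signal would then go on to the speech-to-text stage, where repeated words are removed before text-to-speech on the Arduino.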