
Latest publications: 2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA)

Smart Waste Collection Monitoring System using IoT
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544982
Saurabh Pargaien, Amrita Verma Pargaien, Dikendra K. Verma, Vatsala Sah, N. Pandey, Neetika Tripathi
Timely cleaning of dustbins is a big challenge; if left unaddressed, it may pose several health risks by making the surroundings unhygienic. The current waste-management system in local areas of small and highly populated cities is sluggish, leading to a lot of garbage strewn all over the city. The rate of waste generation is so high that if the garbage collector does not visit a place for a couple of days, adverse conditions arise. During the COVID-19 pandemic it was very important to monitor and dispose of medical waste properly, and the handling of normal household garbage was also challenging due to lockdowns. In this situation, automatic monitoring and control of garbage using IoT can play a significant role in garbage management. This paper proposes a smart and fast approach to waste management: a network of smart dustbins equipped with sensors and microcontrollers across a city, monitored by a central control unit that speeds up the process intelligently and thereby eliminates the hazardous conditions caused by the current sluggish system. The proposed system also takes into account the issue of unreliable internet connectivity.
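The abstract above describes sensor-equipped dustbins reporting to a central unit. As a hypothetical sketch of the node-side logic, an ultrasonic sensor's distance reading can be turned into a fill fraction and compared against a threshold; the bin depth, threshold, and function names below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the fill-level check a smart dustbin node might run.
# BIN_DEPTH_CM, FULL_THRESHOLD, and the sample reading are invented values.

BIN_DEPTH_CM = 100.0   # distance from sensor to bin floor when the bin is empty
FULL_THRESHOLD = 0.8   # report the bin for collection once it is 80% full

def fill_level(distance_cm: float) -> float:
    """Convert an ultrasonic distance reading into a 0..1 fill fraction."""
    level = (BIN_DEPTH_CM - distance_cm) / BIN_DEPTH_CM
    return max(0.0, min(1.0, level))

def needs_collection(distance_cm: float) -> bool:
    """True when the bin should be flagged to the central control unit."""
    return fill_level(distance_cm) >= FULL_THRESHOLD

# Example: the sensor sees 15 cm of free space above the garbage.
print(fill_level(15.0))        # 0.85
print(needs_collection(15.0))  # True
```

A real node would transmit `fill_level` periodically (and buffer readings when connectivity drops, matching the paper's concern about unreliable internet); the sketch only shows the threshold decision.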
Cited by: 9
Plant Leaf Disease Classification using Deep Learning: A Survey
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544640
Deeksha Agarwal, Meenu Chawla, Namita Tiwari
With the increase in global population, the food supply must be increased correspondingly while simultaneously protecting crops from numerous fatal diseases. Traditionally, plant disease identification was done with the naked eye, relying on the experience of farmers and plant pathologists. This traditional process is difficult and time-consuming, and it sometimes yields inaccurate diagnoses, resulting in significant economic losses in agribusiness. Later, several studies employed machine learning for plant disease identification, but the findings were not promising and the methods were too slow for practical use. Recently, Convolutional Neural Networks have made an essential breakthrough in computer vision thanks to characteristics such as automatic feature extraction and their ability to deliver effective results on small datasets in a short span of time compared to machine learning. This paper discusses the challenges faced in identifying plant leaf diseases and addresses the problem of inaccurate and time-consuming disease detection and classification by reviewing different methods and state-of-the-art algorithms that aim to overcome this issue.
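The abstract credits CNNs' automatic feature extraction. The core operation behind that extraction is 2-D convolution, illustrated below in pure Python; the 5×5 toy "image" and the hand-written edge kernel are invented for the example, whereas the surveyed models learn such kernels from leaf images automatically.

```python
# Pure-Python illustration of the 2-D convolution at the heart of a CNN
# feature extractor. Image and kernel values are toy inventions.

def conv2d(image, kernel):
    """Valid-mode 2-D convolution (no padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh)
                for dj in range(kw)
            )
    return out

# A vertical edge between a dark region (0) and a bright region (1),
# e.g. a lesion boundary on a leaf.
image = [[0, 0, 1, 1, 1]] * 5
kernel = [[-1, 0, 1],
          [-1, 0, 1],
          [-1, 0, 1]]  # responds strongly at vertical edges

features = conv2d(image, kernel)  # strongest response where the edge sits
```

A CNN stacks many such convolutions with learned kernels and nonlinearities, which is what lets it discover disease-specific leaf textures without hand-engineered features.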
Cited by: 3
The Current Situation and Future Development Trend of Computer and Chip Applications in the Era of Big Data
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9545040
Suping Sun
Nowadays, with the rapid development of society, information technology is also developing rapidly. Driven by technologies such as big data, chip design and computer architecture must follow the new development directions of information technology, and it is necessary to understand the various kinds of information in different applications and operating platforms so as to face, in advance, the many challenges brought about by big data. Based on big data technologies such as data collection, data mining, and data processing, this paper surveys the application status of computer architecture and chip design and analyzes future development trends.
Cited by: 0
Unintended Notification Swipe Detection System
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544898
Ankita Guleria, Ramandeep Kaur
Users often make touch or swipe errors while interacting with mobile phones. One common area of concern is accidentally swiping away important notifications. We show that these unintentional notification swipes can be accurately detected using simple touch and swipe features recorded while the gesture is performed. Pre-installed touch and grip sensors were used to record data from 20 different participants asked to perform intentional and unintentional touch gestures. The features taken into account are extracted from the user's hand movement on the screen and by identifying a single-handed or two-handed grip. In addition to three previously published features (Touch Time, Swipe Velocity, and Average Touch Size), we introduce three novel features in our system, namely Swipe Stretch, grip-based Nearest Edge Gap, and Notification Expansion Action. We trained our model using a Random Forest (RF) classifier and Neural Networks (NN) and achieved accuracies of 98.8% and 100% respectively. The results prove that the model can successfully detect unintentional notification swipe and touch gestures in real time. The novelty of our research lies in a considerable improvement in accuracy over previously published works, attributed to a larger feature set that includes the proposed features.
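The three previously published features named in the abstract can be computed from a raw stream of touch events. The sketch below does so for one synthetic swipe; the event tuple layout and all sample values are invented for illustration, and the resulting feature vector would then be fed to the RF/NN classifiers the paper trains.

```python
# Hypothetical sketch: deriving Touch Time, Swipe Velocity, and Average
# Touch Size from a touch-event stream. Event format and values are invented.
from math import hypot

# (timestamp_ms, x_px, y_px, touch_size) samples of one swipe gesture
events = [
    (0,   100, 400, 0.21),
    (40,  180, 398, 0.23),
    (80,  260, 395, 0.22),
    (120, 340, 393, 0.20),
]

def touch_time_ms(ev):
    """Duration of finger contact."""
    return ev[-1][0] - ev[0][0]

def swipe_velocity(ev):
    """Straight-line distance over duration, in px/ms."""
    dist = hypot(ev[-1][1] - ev[0][1], ev[-1][2] - ev[0][2])
    return dist / touch_time_ms(ev)

def average_touch_size(ev):
    """Mean reported contact size across samples."""
    return sum(s for _, _, _, s in ev) / len(ev)

features = (touch_time_ms(events), swipe_velocity(events), average_touch_size(events))
```

Very short touch times and high velocities are the kind of pattern one would expect an accidental swipe to exhibit, which is why such features are discriminative for the classifier.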
Cited by: 1
Application Analysis of Image Enhancement Method in Deep Learning Image Recognition Scene
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544905
L. Ding, Wei-Hau Du
This paper analyzes the application of image enhancement methods in deep-learning image recognition scenes. Generally speaking, recognition in natural scenes is relatively difficult due to their more complex and diverse environments, and it is usually done in two steps: text detection and text recognition. To enhance the traditional methods, this paper integrates deep learning models to construct an efficient core framework for dealing with complex data. The text method uses a sequence recognition network with a bidirectional decoder based on adjacent attention weights to recognize text images and predict the output. For further analysis, the core systematic modelling is demonstrated. The proposed model is tested on public datasets as a reference, and experimental verification has shown that it is efficient.
Cited by: 0
Study of Various Dimensionality Reduction and Classification Algorithms on High Dimensional Dataset
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544602
Smit Shah, S. Joshi
A potential drawback of huge data is that it makes analysis hard and computationally infeasible. Health care, finance, retail, and education are a few of the data-mining applications that involve very high-dimensional data. A large number of dimensions introduces the well-known problem of the “Curse of Dimensionality”, which makes classification difficult and lowers the accuracy of machine-learning classifiers. This paper computes a threshold (35%) such that reducing the data to this fraction of its dimensions yields the best accuracy. Further, this research work considers an image dataset of very high dimensions on which different dimensionality-reduction techniques such as PCA, LDA, and SVD are performed to find the best-fitting dimensionality for an image dataset. Also, various ML classification algorithms, such as Logistic Regression, Random Forest, Naive Bayes, and SVM, are applied to find the best classifier for the dimensionally reduced dataset. Finally, this research work concludes that PCA+SVM, LDA+Random Forest, and SVD+SVM produced the best results out of all the possible combinations in the comparative study.
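The 35% threshold in the abstract means keeping roughly a third of the original dimensions. The sketch below makes that arithmetic concrete using a deliberately crude reduction (keep the highest-variance columns) as a stand-in for the PCA/LDA/SVD techniques the paper actually compares; the 3×8 dataset and function names are invented.

```python
# Hedged sketch of reducing a dataset to 35% of its dimensions. Variance
# ranking is a crude stand-in for PCA/LDA/SVD; all data values are invented.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def reduce_to_fraction(rows, fraction=0.35):
    """Keep round(fraction * d) highest-variance columns (at least 1)."""
    d = len(rows[0])
    k = max(1, round(fraction * d))        # d=8, fraction=0.35 -> keep 3 columns
    cols = list(zip(*rows))
    keep = sorted(range(d), key=lambda j: variance(cols[j]), reverse=True)[:k]
    keep.sort()                            # preserve original column order
    return [[row[j] for j in keep] for row in rows], keep

rows = [
    [1.0, 5.0, 0.1, 3.0, 2.0, 0.2, 9.0, 0.0],
    [2.0, 5.1, 0.1, 7.0, 2.1, 0.2, 1.0, 0.0],
    [3.0, 4.9, 0.1, 5.0, 1.9, 0.2, 5.0, 0.0],
]
reduced, kept = reduce_to_fraction(rows)   # constant columns are discarded
```

PCA would instead keep the top-k directions of maximum variance in the *rotated* space, but the budget computation (k from the 35% threshold) is the same step the paper tunes.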
Cited by: 0
An Energy Efficient Novel Routing Protocol in Wireless Sensor Networks (WSN)
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544783
S. Sindhura, S. Praveen, N. Rao, M. Arunasafali
Wireless Sensor Networks (WSNs) are widely used in many applications for various purposes. A WSN consists of sensor nodes, and every node requires constant energy to transfer data from the source node to the destination node. Several challenges are identified in WSNs, such as energy at nodes, accurate routing, and data loss. The WSN aims to transmit data between nodes while satisfying user requirements without any threats. Sensor nodes are small battery-powered devices that are interconnected and distributed across the network. In this paper, a Novel Routing Protocol (NRP) is introduced to overcome the various routing issues in WSNs while maintaining energy levels constantly. Results show the performance of the NRP and display accurate results.
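The abstract does not spell out NRP's route-selection rule, so the following is a generic, hypothetical sketch of one common energy-aware heuristic: among neighbours that make progress toward the sink, forward to the one with the most residual energy. The neighbour table and all numbers are invented.

```python
# Hypothetical energy-aware next-hop selection (not the paper's NRP, whose
# details the abstract does not give). All node data are invented.

# neighbour id -> (residual_energy_joules, hop_distance_to_sink)
neighbours = {
    "n1": (0.80, 3),
    "n2": (0.50, 2),
    "n3": (0.90, 2),
    "n4": (0.95, 4),   # highest energy, but farther from the sink
}

def next_hop(neighbours, my_distance=3):
    """Pick the highest-energy neighbour strictly closer to the sink."""
    closer = {n: e for n, (e, d) in neighbours.items() if d < my_distance}
    if not closer:
        return None    # no forward progress possible from this node
    return max(closer, key=closer.get)

print(next_hop(neighbours))  # n3
```

Spreading traffic toward high-energy neighbours is what keeps per-node energy levels from collapsing unevenly, the failure mode the paper targets.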
Cited by: 1
Application of Engine Technology and in 3D Animation Production
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544664
Linye Tang
This article is based on optimizing 3D game engine technology and 3D graphics architecture to produce 3D graphics animation. First of all, the article describes the research status of traditional 3D animation and analyzes the current development trend of 3D animation. Then, the application features of current mainstream 3D graphics engines are introduced, and the architecture and system design of the 3D engine animation system are completed. Finally, using the designed engine system and architecture optimization for 3D animation production provides a certain impetus to the realization of 3D animation algorithms.
Cited by: 1
Big Data Means to Optimize the Allocation of Preschool Education Resources: Dynamic Simulation Algorithm based on Python
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9544702
Sumei Li
The advancement of science and technology and the development of the Internet have allowed preschool education to keep reforming and advancing. In particular, the development of big data has brought new opportunities to preschool education, as big data algorithms can optimize the allocation of preschool education resources. This paper uses a Python-based dynamic simulation algorithm to study the optimization of preschool education resource allocation by means of big data. Firstly, it analyzes the shortcomings of current domestic preschool education resource allocation; then it introduces the development of big data technology and the Python-based dynamic simulation algorithm; finally it uses the dynamic simulation algorithm to optimize the allocation of preschool education resources.
Cited by: 0
Design of Anti-Stuttering Device with Silence Ejection Speech algorithm using Arduino
Pub Date : 2021-09-02 DOI: 10.1109/ICIRCA51532.2021.9545066
E. L. Dhivya Priya, S. Karthik, A. Sharmila, K. R. G. Anand
Stuttering is a disorder of speech characterized by the reiteration of sounds, syllables or words. This disorder affects the normal flow of speech and is accompanied by struggle behaviors. Stuttering affects people's mental well-being, as it creates difficulty in communicating with other people and maintaining interpersonal relationships, and its negative influence during job interviews calls their talent and skill sets into question. More than 70 million people stutter, which is about 1% of the world's population. Though stuttering is commonly found during childhood, some people have prolonged stuttering for many years, and public speaking remains a big challenge for people who stutter. The idea of the proposed paper is to design an Arduino-based anti-stuttering device with the help of a silence-ejection speech algorithm. To remove the long gaps and turn an input stuttered signal into an un-stuttered signal, three software platforms are interconnected: AUDACITY, MATLAB and PYTHON are linked to each other to retain the flow of the proposed algorithm. AUDACITY is an open-source digital audio editor and recording platform that stores the input stuttered signal. The stored stuttered signal is fed to MATLAB to perform magnitude filtering, and the magnitude-filtered output is then silence-ejected. The magnitude filtering process considers three sets of values, and the best value from the comparison is then fed for silence ejection. The silence-ejected output is converted from speech to text; this conversion helps remove the repeated words in the silence-ejected signal in reduced time. The final repetition-removed text is fed to the Arduino board to convert the text to speech. The converted un-stuttered signal is given as input to the speaker from the Arduino. This process of converting the stuttered speech signal to an un-stuttered signal will help people who suffer from stuttering and stammering achieve balanced psychological effects.
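The pipeline the abstract describes — silence ejection on the audio, then repeated-word removal on the transcript — can be sketched in Python. This is a minimal illustration of the two core steps only; the function names, frame size, and energy threshold are assumptions for demonstration, not the paper's actual AUDACITY/MATLAB implementation:

```python
import numpy as np

def eject_silence(signal, rate, frame_ms=20, threshold=0.02):
    """Keep only frames whose RMS energy reaches the threshold,
    dropping the long silent gaps typical of stuttered speech."""
    frame = int(rate * frame_ms / 1000)
    voiced = [
        signal[i:i + frame]
        for i in range(0, len(signal) - frame + 1, frame)
        if np.sqrt(np.mean(signal[i:i + frame] ** 2)) >= threshold
    ]
    return np.concatenate(voiced) if voiced else signal[:0]

def drop_repeats(words):
    """Remove immediately repeated words from a transcript,
    e.g. 'I I want want to go' -> 'I want to go'."""
    out = []
    for w in words:
        if not out or w.lower() != out[-1].lower():
            out.append(w)
    return out

# Demo: 1 s tone, 1 s silence, 1 s tone at 8 kHz; silence ejection
# should remove the silent middle third of the samples.
rate = 8000
t = np.linspace(0, 1, rate, endpoint=False)
tone = 0.5 * np.sin(2 * np.pi * 440 * t)
clip = np.concatenate([tone, np.zeros(rate), tone])
cleaned = eject_silence(clip, rate)
print(len(clip), len(cleaned))  # 24000 16000
print(" ".join(drop_repeats("I I want want to go".split())))  # I want to go
```

Note that real stuttering repetitions are not always exact adjacent duplicates, so a production system would need fuzzier matching; this sketch only illustrates the data flow the abstract describes.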
{"title":"Design of Anti-Stuttering Device with Silence Ejection Speech algorithm using Arduino","authors":"E. L. Dhivya Priya, S. Karthik, A. Sharmila, K. R. G. Anand","doi":"10.1109/ICIRCA51532.2021.9545066","DOIUrl":"https://doi.org/10.1109/ICIRCA51532.2021.9545066","url":null,"abstract":"Stuttering is a speech disorder characterized by the repetition of sounds, syllables, or words. It disrupts the normal flow of speech and is often accompanied by struggle behaviors. Stuttering affects people's mental well-being, as it makes it difficult to communicate with others and to maintain interpersonal relationships, and its negative influence during job interviews casts doubt on their talents and skills. More than 70 million people stutter, about 1% of the world's population. Although stuttering is most common in childhood, some people continue to stutter for many years, and public speaking remains a major challenge for them. This paper proposes an Arduino-based anti-stuttering device built around a silence-ejection speech algorithm. To remove long gaps and convert a stuttered input signal into an un-stuttered one, three software platforms are interconnected: AUDACITY, MATLAB, and PYTHON. AUDACITY, an open-source digital audio editor and recorder, stores the input stuttered signal. The stored signal is fed to MATLAB for magnitude filtering, and the filtered output is then silence-ejected. The magnitude-filtering stage compares three sets of values, and the best value is passed on for silence ejection. The silence-ejected output is converted from speech to text, which allows repeated words to be removed quickly. The resulting text is fed to the Arduino board, which converts it back to speech and drives a speaker. Converting the stuttered speech signal into an un-stuttered one can help people who stutter or stammer maintain a balanced psychological state.","PeriodicalId":245244,"journal":{"name":"2021 Third International Conference on Inventive Research in Computing Applications (ICIRCA)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128660026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}