Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988184
Giridharan N, S. R
Forests are the backbone of life on Earth. Remote Sensing (RS) and Geographic Information System (GIS) techniques now provide detailed information on forest cover change. The present work investigates changes in forest cover using high-resolution satellite data (HRSD) and a Normalized Difference Vegetation Index (NDVI) based image-processing technique in the Sathyamangalam Forest, Erode District. Six individual NDVI maps (2016 to 2021) were prepared from multi-temporal imagery using ArcGIS software, and the NDVI values were analysed to detect changes in the forest cover region. The study shows that forest cover ranged from a minimum of 197.17 sq. km (2016) to a maximum of 364.19 sq. km (2021). These results indicate that growth should be monitored to guide sustainable development in the future.
Title: NDVI based Image Processing for Forest change Detection in Sathyamangalam Reserve Forest
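The NDVI computation behind maps like these is simple enough to sketch. The following is an illustrative example, not the authors' code: the vegetation threshold and per-pixel ground area are assumptions chosen for demonstration.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    # Guard against division by zero on pixels where both bands are zero.
    return np.where(denom == 0, 0.0,
                    (nir - red) / np.where(denom == 0, 1.0, denom))

def forest_area_sq_km(ndvi_map: np.ndarray, threshold: float,
                      pixel_area_sq_km: float) -> float:
    # Classify pixels above the vegetation threshold as forest and
    # scale the count by the per-pixel ground area.
    return float(np.sum(ndvi_map > threshold) * pixel_area_sq_km)
```

Comparing the thresholded forest area of the 2016 and 2021 NDVI maps pixel-by-pixel is one way to quantify the change the paper reports.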
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9987782
Priscilla Whitin, V. Jayasankar
The Covid-19 pandemic has had a massive impact on sectors across the globe: nearly 400 million people had been affected by Covid-19 as of January 2022, and although vaccines have been developed, only 49.8% of the world population had been vaccinated. The WHO has advised the public to maintain social distance in crowded places and wear a well-fitted mask to impede the spread of the coronavirus, and most countries have made wearing a mask in public places mandatory. Since continuous human monitoring of compliance is impossible, we deploy a deep learning model to automate it. In this paper we train a MobileNetV2 architecture for facemask detection using a custom dataset. The accuracy of the model in real time is 99.99%.
Title: Deep Learning Based Facemask Detection
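A typical way to fine-tune MobileNetV2 for binary mask/no-mask classification looks like the sketch below. This is not the authors' code: the input size, head layers, and optimizer are assumptions, and `weights=None` keeps the sketch offline, whereas such work would normally start from ImageNet weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_facemask_model(input_shape=(224, 224, 3)) -> tf.keras.Model:
    # Frozen MobileNetV2 backbone; only the new head is trained.
    base = tf.keras.applications.MobileNetV2(
        input_shape=input_shape, include_top=False, weights=None)
    base.trainable = False
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(128, activation="relu")(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # mask vs. no-mask
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

The model would then be trained with `model.fit` on the custom face-crop dataset and applied frame-by-frame to a webcam stream for real-time detection.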
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9987844
M. Reddy, R. Sumathi, N. K. Reddy, N. Revanth, S. Bhavani
Predicting prices in the stock market is a complex task that involves close interaction between humans and computers, and more efficient algorithms yield more accurate results. The methodology proposed here compares Linear Regression, Ridge Regression, Lasso Regression, and Polynomial Regression. We first collect the data from Kaggle, then apply the proposed algorithms, tuning the code according to the accuracy obtained, and finally present the complete workflow for share market prediction. The experimental results show that the suggested methodology is remarkably productive and well suited to predicting over short horizons.
Title: Analysis of Various Regressions for Stock Data Prediction
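The four regressors the paper compares are all available in scikit-learn. The sketch below is illustrative only: the features, hyperparameters (`alpha`, polynomial degree), and scoring choice are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge, Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def compare_regressors(X: np.ndarray, y: np.ndarray) -> dict:
    """Fit the four models on price data and report R^2 for each."""
    models = {
        "linear": LinearRegression(),
        "ridge": Ridge(alpha=1.0),
        "lasso": Lasso(alpha=0.1),
        "poly": make_pipeline(PolynomialFeatures(degree=2),
                              LinearRegression()),
    }
    # Training-set R^2 as a simple point of comparison; a real study
    # would use a held-out test split of the Kaggle data.
    return {name: m.fit(X, y).score(X, y) for name, m in models.items()}
```

`X` would typically hold lagged closing prices or technical indicators, with `y` the next-day price.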
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988465
K. S, T. Vyshnavi, Yaragandla Mounika, S. Tejaswini
In this paper, we examine the feasibility of recognizing spam in mobile-phone SMS messages by proposing an improved Converter method designed for SMS spam detection. We use the "Spam Collection v.1" dataset as well as the "UtkMl's Twitter Spam Detection Competition" dataset to evaluate the proposed spam detector, with a number of well-known machine learning classifiers and cutting-edge SMS spam detection techniques serving as benchmarks. We compare against recurrent neural networks such as long short-term memory (LSTM), bi-directional LSTM, and encoder-decoder LSTM models. Our experiments on SMS spam detection demonstrate that the proposed improved spam Converter outperforms all the alternatives in accuracy, F1-score, and recall. Additionally, the suggested model performs well on UtkMl's Twitter dataset, suggesting a favorable chance of applying the model to other similar problems.
Title: A Revised Converter Paradigm Designed for Spam Message Exposure
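One of the recurrent baselines the paper benchmarks against can be sketched as below. This is an illustrative bi-directional LSTM classifier, not the authors' model; vocabulary size, sequence length, and layer widths are assumptions.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_sms_lstm(vocab_size: int = 10000, seq_len: int = 50) -> tf.keras.Model:
    model = models.Sequential([
        layers.Input(shape=(seq_len,)),
        layers.Embedding(vocab_size, 64),       # token ids -> dense vectors
        layers.Bidirectional(layers.LSTM(32)),  # bi-directional LSTM baseline
        layers.Dense(1, activation="sigmoid"),  # spam (1) vs. ham (0)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy", tf.keras.metrics.Recall()])
    return model
```

Messages are tokenized to integer id sequences, padded to `seq_len`, and the sigmoid output is thresholded at 0.5; accuracy, F1-score, and recall are then computed on the held-out split, matching the metrics the abstract reports.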
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988094
G. D. Reddy, Yaddanapudi Vssrr Uday Kiran, Prabhdeep Singh, Shubhranshu Singh, Sanchita Shaw, Jitendra Singh
People are concerned about the security of their data on the internet, and data can be protected in many ways to keep unauthorized individuals from accessing it. To secure data, steganography can be used in conjunction with cryptography: steganography hides data or secret messages, whereas cryptography encrypts messages so that they cannot be read. The proposed system therefore combines both. A steganographic message can be concealed from prying eyes by using an image as a carrier for the data; digital steganography algorithms use text, graphics, and audio as cover media. Given recent advancements in technology, employing steganography alone to safeguard private data, messages, or digital photographs is challenging. This paper presents a new steganography strategy for confidential communication between private parties, in which the ciphertext is transformed into an image. Three secure mechanisms were constructed using the least significant bit (LSB) together with XOR and ECC (Elliptic Curve Cryptography) encryption. To ensure secure data transmission over web applications, steganography and cryptography must be used in conjunction. Such combined techniques can replace the current security techniques, given the tremendous growth in security awareness among individuals, groups, agencies, and government institutions.
Title: A Proficient and secure way of Transmission using Cryptography and Steganography
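The XOR-plus-LSB part of the pipeline can be sketched concretely. This is an illustrative round trip, not the authors' implementation: the ECC layer is omitted, and the key and cover image are stand-ins.

```python
import numpy as np

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    """Symmetric XOR cipher; applying it twice with the same key decrypts."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

def embed_lsb(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide payload bits in the least significant bit of each pixel."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.flatten()  # copy; the cover image is left untouched
    assert bits.size <= flat.size, "cover image too small for payload"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_lsb(pixels: np.ndarray, n_bytes: int) -> bytes:
    bits = pixels.flatten()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()
```

Because only the least significant bit of each pixel changes, the stego image differs from the cover by at most one intensity level per pixel, which is visually imperceptible.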
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988523
V. Ghodke, S. S. Pungaiah, M. Shamout, A. A. Sundarraj, Moidul Islam Judder, S. Vijayprasath
In agriculture, automation is an important attribute for improving the quality, scale, and efficiency of the products produced. Sorting is one of the most important challenges in the industry, so a reliable segregation system is needed that allows products to be packaged easily and automatically. The stages of this process include image acquisition, pre-processing, segmentation, feature extraction, classification, and detection. Existing approaches do not identify fruit accurately and spend more time on the segregation step. To overcome this, the proposed Logistic Support Vector Regression (LSVR) method classifies fruit images efficiently. The process starts with the image dataset; the first step is pre-processing, in which unwanted areas of the images are removed, imbalanced values are checked, and image defects are eliminated. The next step segments the pre-processed, filtered images, which helps to split them into regions. Features are then extracted based on the image weightings and evaluated for classification. Training and testing images are then classified, which includes segregating or identifying color, texture, shape, and defects. Finally, classification using the LSVR process improves image quality and assists the industry in segregating products. Using images in the automated packaging process improves the quality of the results, and the approach can be combined with smart logistics to keep track of the transaction process. The purpose of this work is primarily to minimize or eliminate waste.
Title: Machine Learning for Auto Segregation of Fruits Classification Based Logistic Support Vector Regression
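"Logistic Support Vector Regression" is not a standard estimator, so as a hedged approximation the sketch below feeds simple color features to a probabilistic (logistic-calibrated) SVM; the features, kernel, and toy images are all illustrative assumptions, not the paper's pipeline.

```python
import numpy as np
from sklearn.svm import SVC

def color_features(image: np.ndarray) -> np.ndarray:
    """Mean and standard deviation per RGB channel: 6 features per image."""
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def train_fruit_classifier(images, labels) -> SVC:
    X = np.stack([color_features(img) for img in images])
    # probability=True adds Platt scaling, giving logistic-style
    # class probabilities on top of the SVM decision function.
    clf = SVC(probability=True)
    clf.fit(X, labels)
    return clf
```

In practice texture and shape descriptors would be appended to the color features, and defective fruit would form an extra class for waste minimization.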
Telemedicine has the potential to be a good resource for early disease diagnosis, provided that it is utilised in the correct manner. The Internet of Things (IoT) is a concept that has developed in recent years as people have become more aware that they are continuously being monitored. With the increased prevalence of neurodegenerative disorders like Alzheimer's disease (AD), biomarkers for these conditions are in high demand for early-stage prognosis. Because of the precarious nature of the situation, it is absolutely necessary for these systems to offer qualities such as accessibility and precision. Deep learning strategies can be useful in health applications where there are large numbers of data points to be analysed, and a decentralized IoT system based on blockchain technology provides excellent data for them. With a high-speed internet connection, it is feasible to obtain a prompt answer from these systems; however, deep learning algorithms cannot run on smart gateway devices, since they do not have sufficient computational capacity. In this study, we investigate the potential for increasing the speed of data flow in the healthcare industry while simultaneously improving data quality by incorporating blockchain-based deep neural networks into the control system. Experiments evaluate the speed and accuracy of real-time health tracking for classifying groups. Using a deep learning model, we determine whether diseases of the brain are benign or malignant. To determine the relative severity of each condition, the research examines the symptoms of several mental diseases and compares them to those of Alzheimer's disease, mild cognitive impairment, and normal cognition. The research involves a number of procedures: the majority of the data is used to train the classifiers, while the remainder is used with an ensemble model and meta-classifier to classify individuals into the appropriate categories. The OASIS-3 database is a long-term study incorporating neuroimaging, cognitive, clinical, and biomarker measurements, focusing on healthy ageing as well as Alzheimer's disease. To compare the simulation outcomes with real-world results, the OASIS-3 database, together with the ADNI UDS dataset, is employed. The findings show that answers can be arrived at quickly and classified using the in-depth methodology (98% accuracy).
Title: Detection of Alzheimer's Disease Using Deep Learning, Blockchain, and IoT Cognitive Data
Authors: Balbir Singh, Manjusha Tatiya, Anurag Shrivastava, Devvret Verma, Arun Pratap Srivastava, A. Rana
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988058
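The classification stage the abstract describes (base classifiers plus a meta-classifier) is a stacking ensemble. The sketch below is an illustrative scikit-learn version: the choice of base learners and meta-learner is an assumption, and the synthetic features stand in for the OASIS-3 measurements.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def build_stacked_classifier() -> StackingClassifier:
    # Base classifiers are trained on the bulk of the data; their
    # cross-validated predictions become features for the meta-classifier.
    base = [
        ("rf", RandomForestClassifier(n_estimators=50, random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
    ]
    # Logistic regression acts as the meta-classifier over base outputs.
    return StackingClassifier(estimators=base,
                              final_estimator=LogisticRegression())
```

For the paper's task the target labels would be the diagnostic categories (AD, mild cognitive impairment, normal cognition).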
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988045
Namit Chawla, Mukul Bedwa
Radiographs of the musculoskeletal system provide significant expertise for treating bone disease (BD) or injury. To deal with such conditions, Artificial Intelligence (mainly machine learning and deep learning) can play an important part in diagnosing anomalies in the musculoskeletal system. The approach proposed in this paper aims to create a more efficient computer-based diagnostics (CBD) model. In the initial stage, a few pre-processing techniques are applied to the selected wrist-radiograph data set, which eliminates image-size variability in the radiographs. The data set is then classified as abnormal or normal using three primary architectures: DenseNet201, Inception V3, and Inception ResNet V2, and the model's performance is then improved using ensemble approaches. The suggested approach is tested on the widely available MURA (musculoskeletal radiographs) dataset (https://stanfordmlgroup.github.io/competitions/mura/), and the obtained outcomes are analyzed against the reference document's current results. An accuracy of 86.49% was achieved for wrist radiographs. The results of the implementation show that the presented process is a worthy strategy for classifying diseases in bones.
Title: Optimized Ensemble Learning Technique on Wrist Radiographs using Deep Learning
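One common way to ensemble the three architectures is to average their per-image abnormality probabilities and threshold the mean; this sketch is an assumption about the ensembling step (the paper may weight or vote differently), and the probabilities below are illustrative.

```python
import numpy as np

def ensemble_predict(prob_lists, threshold: float = 0.5) -> np.ndarray:
    """Average per-model probabilities and threshold the mean score.

    prob_lists: one sequence of per-image abnormality probabilities per
    base model (e.g. DenseNet201, InceptionV3, InceptionResNetV2).
    Returns 1 (abnormal) or 0 (normal) per image.
    """
    mean_probs = np.mean(np.stack(prob_lists), axis=0)
    return (mean_probs >= threshold).astype(int)
```

Averaging smooths out individual-model errors, which is typically where the accuracy gain over any single architecture comes from.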
Pub Date: 2022-10-10 DOI: 10.1109/ICTACS56270.2022.9988278
A. Rana, Virender Khurana, A. Shrivastava, Durgaprasad Gangodkar, Deepika Arora, Anil Kumar Dixit
Wireless sensor networks (WSNs) make use of an abundance of sensor nodes to gain a deeper understanding of the world around them; if the data were not gathered accurately, no one would be interested in them. In military applications, for instance, the detection of opponent movement relies substantially on the placement of sensor nodes in the WSN. Discovering the locations of all target nodes with the help of anchor nodes is the main purpose of the localization challenge. This research suggests two adjustments to the zebra optimization algorithm (ZOA) to improve upon its deficiencies, one of which is its tendency to get trapped in local optima. In versions 1 and 2 of the ZOA, the exploration and exploitation components are modified to use improved global and local search strategies. To assess how effective the proposed ZOA versions 1 and 2 are, a large number of simulations were run with different combinations and numbers of target nodes and anchor nodes. To solve the node-localization problem, ZOA and a number of other optimization strategies are employed, and the outcomes obtained by each strategy are compared. Versions 1 and 2 of ZOA perform far better than their competitors in terms of mean localization error, the number of nodes successfully localized, and computation time. The initial ZOA and the proposed versions are evaluated on localization accuracy and error count across a range of target-node and anchor-node settings. The simulations show that the suggested ZOA variation 2 performs better than both the existing ZOA and variation 1 in a variety of ways: it computes faster and has a lower mean localization error, because it is based on a more accurate probability distribution.
{"title":"A ZEBRA Optimization Algorithm Search for Improving Localization in Wireless Sensor Network","authors":"A. Rana, Virender Khurana, A. Shrivastava, Durgaprasad Gangodkar, Deepika Arora, Anil Kumar Dixit","doi":"10.1109/ICTACS56270.2022.9988278","DOIUrl":"https://doi.org/10.1109/ICTACS56270.2022.9988278","url":null,"abstract":"Wireless sensor networks (WSNs) make use of an abundance of sensor nodes in order to gain a deeper understanding of the world around them. If the data were not gathered in an open and honest fashion, then no one would be interested in them. In military applications, for instance, the detection of opponent movement relies substantially on the placement of sensor nodes in wireless sensor networks (WSNs). Discovering the locations of all target nodes while utilizing anchor nodes is the major purpose of the localization challenge. This research suggests two adjustments that could be made to the zebra optimization algorithm (ZOA) in order to improve upon its deficiencies, one of which being its tendency to get trapped in the local optimal solution. In versions 1 and 2 of the ZOA, the exploration and exploitation components have been modified to make use of improved global and local search algorithms. In order to assess how effective, the proposed ZOA versions 1 and 2 are, a large number of simulations have been run, each with a different combination of target nodes and anchor nodes and a different number of each. In order to solve the problem of node localization, ZOA, along with a number of other attempted optimization strategies, are employed, and the outcomes obtained by each strategy are compared. Versions 1 and 2 of ZOA perform far better than its competitors in terms of the mean localization error, the number of nodes that are successfully localized, and the computation time. 
ZOA versions 1 and 2 are proposed, and the initial ZOA is evaluated in terms of how accurately it localizes nodes and the number of errors it generates when provided with a range of possible values for the target node and the anchor node. The simulations prove without a reasonable doubt that the suggested ZOA variation 2 performs better than both the existing ZOA and the original proposal in a variety of ways. The proposed ZOA variation 2 is superior to the proposed ZOA variation 1, ZOA, and other existing optimization methods for determining the location of a node because it performs calculations at a faster rate and has a lower mean localization error. This is due to the fact that the proposed ZOA variation 2 is based on a more accurate probability distribution.","PeriodicalId":385163,"journal":{"name":"2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125511965","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2022-10-10  DOI: 10.1109/ICTACS56270.2022.9988646
P. William, Yaddanapudi Vssrr Uday Kiran, A. Rana, Durgaprasad Gangodkar, Irfan Khan, Kumar Ashutosh
This article describes a system that uses an Internet of Things (IoT) architecture to deliver real-time air quality data. Real-time air quality monitoring enables us to limit the degradation of air quality. The degree of pollution in the air is measured using the Air Quality Index (AQI); in general, a higher AQI indicates that the air is more hazardous to breathe. With this setup, concentrations of pollutants such as NO2, CO, and PM2.5 are measured with an Arduino UNO, combining custom hardware and software. ThingSpeak, an IoT analytics platform, is connected to the hardware through an ESP8266 Wi-Fi module. The platform also integrates the real-time data with a mobile app built in Android Studio. Finally, the Android app pulls data from ThingSpeak and displays the PPM readings and air quality levels of the measured gases. Successful development of this model has made it suitable for use in real-world systems.
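To illustrate the AQI computation the abstract relies on, here is a minimal sketch (not the paper's code) of the standard piecewise-linear AQI formula, using the pre-2024 US EPA 24-hour PM2.5 breakpoint table; the table values and function name are assumptions for illustration:

```python
# US EPA PM2.5 (24-h) breakpoints, pre-2024 table: (C_lo, C_hi, I_lo, I_hi)
PM25_BREAKPOINTS = [
    (0.0, 12.0, 0, 50),       # Good
    (12.1, 35.4, 51, 100),    # Moderate
    (35.5, 55.4, 101, 150),   # Unhealthy for sensitive groups
    (55.5, 150.4, 151, 200),  # Unhealthy
    (150.5, 250.4, 201, 300), # Very unhealthy
    (250.5, 350.4, 301, 400), # Hazardous
    (350.5, 500.4, 401, 500), # Hazardous
]

def pm25_aqi(conc_ug_m3):
    """Linearly interpolate an AQI value from a PM2.5 concentration (ug/m3)."""
    c = round(conc_ug_m3, 1)  # EPA truncates PM2.5 to one decimal place
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if c_lo <= c <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
    raise ValueError("PM2.5 concentration outside AQI range")
```

In a deployment like the one described, this mapping would typically run server-side (or in the Android app) on the raw PPM/concentration values the Arduino pushes to ThingSpeak.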
{"title":"Design and Implementation of IoT based Framework for Air Quality Sensing and Monitoring","authors":"P. William, Yaddanapudi Vssrr Uday Kiran, A. Rana, Durgaprasad Gangodkar, Irfan Khan, Kumar Ashutosh","doi":"10.1109/ICTACS56270.2022.9988646","DOIUrl":"https://doi.org/10.1109/ICTACS56270.2022.9988646","url":null,"abstract":"This article describes a system that uses Internet of Things (IOT) architecture to deliver real-time air quality data. Real-time air quality monitoring enables us to limit the degradation of air quality. The degree of pollution in the air is measured using the Air Quality Index (AQI). In general, a higher AQI indicates that the air quality is more dangerous to breathing. With this setup, it is possible to measure gas concentrations such as NO2, CO, and PM2.5 with the help of an Arduino UNO running on both software and hardware. An IoT platform called Thing Speak serves as an IoT analytics platform that is connected to the hardware through the ESP8266 Wi-Fi module in this research. Additionally, it's capable of integrating real-time data with our Android Studio-built mobile phone app. Finally, an Android app that pulls data from Thing Speak displays the PPM and Air Quality levels of gases in the circuit. Successful development of this model has made it suitable for usage in real-world systems.","PeriodicalId":385163,"journal":{"name":"2022 2nd International Conference on Technological Advancements in Computational Sciences (ICTACS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129432777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}