INTRODUCTION: Reservoir characterisation and geomechanical modelling benefit significantly from diverse machine learning techniques, which address complexities inherent in subsurface information. Accurate lithology identification is pivotal, furnishing crucial insights into subsurface geological formations; lithology is central to appraising hydrocarbon accumulation potential and optimising drilling strategies. OBJECTIVES: This study employs multiple machine learning models to discern lithology from well-log data of the Volve Field. METHODS: The well-log data of the Volve Field comprise 10,220 data points with diverse features influencing the target variable, lithology. The dataset encompasses four primary lithologies (sandstone, limestone, marl, and claystone) that together constitute a complex subsurface stratum. Lithology identification is framed as a classification problem, and four distinct ML algorithms are deployed to train and assess the models, partitioning the dataset into a 7:3 ratio for training and testing, respectively. RESULTS: The resulting confusion matrix indicates a close alignment between predicted and true labels. While all algorithms exhibit favourable performance, the decision tree algorithm demonstrates the highest efficacy, yielding an exceptional overall accuracy of 0.98. CONCLUSION: Notably, the model was trained on diverse wells within the same basin, showcasing its capability to predict lithology within intricate strata. Its robustness also positions it as a potential tool for identifying other properties of rock formations.
{"title":"Identification of Lithology from Well Log Data Using Machine Learning","authors":"Rohit, Shri Ram Manda, Aditya Raj, Akshay Dheeraj, G. Rawat, Tanupriya Choudhury","doi":"10.4108/eetiot.5634","DOIUrl":"https://doi.org/10.4108/eetiot.5634","url":null,"abstract":"INTRODUCTION: Reservoir characterisation and geomechanical modelling benefit significantly from diverse machine learning techniques, addressing complexities inherent in subsurface information. Accurate lithology identification is pivotal, furnishing crucial insights into subsurface geological formations. Lithology is pivotal in appraising hydrocarbon accumulation potential and optimising drilling strategies. \u0000OBJECTIVES: This study employs multiple machine learning models to discern lithology from the well-log data of the Volve Field. \u0000METHODS: The well log data of the Volve field comprises of 10,220 data points with diverse features influencing the target variable, lithology. The dataset encompasses four primary lithologies—sandstone, limestone, marl, and claystone—constituting a complex subsurface stratum. Lithology identification is framed as a classification problem, and four distinct ML algorithms are deployed to train and assess the models, partitioning the dataset into a 7:3 ratio for training and testing, respectively. \u0000RESULTS: The resulting confusion matrix indicates a close alignment between predicted and true labels. While all algorithms exhibit favourable performance, the decision tree algorithm demonstrates the highest efficacy, yielding an exceptional overall accuracy of 0.98. \u0000CONCLUSION: Notably, this model's training spans diverse wells within the same basin, showcasing its capability to predict lithology within intricate strata. Additionally, its robustness positions it as a potential tool for identifying other properties of rock formations.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"55 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140742800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
One of the best-known generative AI models is the Generative Adversarial Network (GAN), which is frequently employed for data generation or augmentation. In this paper, a reliable GAN-based CNN deepfake detection method that uses a GAN as an augmentation element is implemented. The aim is to supply the CNN model with a large collection of images so that it can train more effectively on the intrinsic qualities of the images. The major objective of this research is to show how GAN innovations have enhanced and broadened the use of generative AI principles, particularly for classifying the fake images known as deepfakes, which raise concerns about misrepresentation and individual privacy. To identify these fake photos, additional synthetic images that closely resemble the training data are created using the GAN model. It has been observed that GAN-augmented datasets can improve the robustness and generality of CNN-based detection models, which distinguish real from fake images with an accuracy of 96.35%.
{"title":"Robust GAN-Based CNN Model as Generative AI Application for Deepfake Detection","authors":"Preeti Sharma, Manoj Kumar, Hiteshwari Sharma","doi":"10.4108/eetiot.5637","DOIUrl":"https://doi.org/10.4108/eetiot.5637","url":null,"abstract":"One of the most well-known generative AI models is the Generative Adversarial Network (GAN), which is frequently employed for data generation or augmentation. In this paper a reliable GAN-based CNN deepfake detection method utilizing GAN as an augmentation element is implemented. It aims to give the CNN model a big collection of images so that it can train better with the intrinsic qualities of the images. The major objective of this research is to show how GAN innovations have enhanced and increased the use of generative AI principles, particularly in fake image classification called Deepfakes that poses concerns about misrepresentation and individual privacy. For identifying these fake photos more synthetic images are created using the GAN model that closely resemble the training data. It has been observed that GAN-augmented datasets can improve the robustness and generality of CNN-based detection models, which correctly identify between real and false images by 96.35%.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"86 9‐12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140742427","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Crime analysis is the process of studying crime patterns and trends in order to find underlying issues and potential solutions for crime prevention. It includes the use of statistical analysis, geographic mapping, and other approaches to assess the type and scope of crime in a given area. Crime analysis can also entail the creation of predictive models that use historical data to anticipate future crime tendencies. By evaluating crime data and finding trends, law enforcement authorities can allocate resources more efficiently and target initiatives to reduce crime and increase public safety. For prediction, the data was fed into algorithms such as Linear Regression and Random Forest. Using data from 2001 to 2016, crime-type projections are made for each state as well as for all states in India, and simple visualisation charts are used to present these predictions. One critical feature of these algorithms is identifying the trend-changing year in order to boost the accuracy of the predictions. The main aim is to predict crime cases from 2017 to 2020 using the dataset from 2001 to 2016.
{"title":"Crime Prediction using Machine Learning","authors":"Sridharan S, Srish N, Vigneswaran S, Santhi P","doi":"10.4108/eetiot.5123","DOIUrl":"https://doi.org/10.4108/eetiot.5123","url":null,"abstract":"The process of researching crime patterns and trends in order to find underlying issues and potential solutions to crime prevention is known as crime analysis. This includes using statistical analysis, geographic mapping, and other approaches of type and scope of crime in their areas. Crime analysis can also entail the creation of predictive models that use previous data to anticipate future crime tendencies. Law enforcement authorities can more efficiently allocate resources and target initiatives to reduce crime and increase public safety by evaluating crime data and finding trends. For prediction, this data was fed into algorithms such as Linear Regression and Random Forest. Using data from 2001 to 2016, crime-type projections are made for each state as well as all states in India. Simple visualisation charts are used to represent these predictions. One critical feature of these algorithms is identifying the trend-changing year in order to boost the accuracy of the predictions. The main aim is to predict crime cases from 2017 to 2020 by using the dataset from 2001 to 2016.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"224 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139834082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Ravikumar, Harini Sriraman, Saddikuti Lokesh, Jitendra Sai
INTRODUCTION: Using neural networks for inherently distributed applications is challenging and time-consuming. There is a crucial need for a framework that supports distributed deep neural networks and yields accurate results in accelerated time. METHODS: In the proposed framework, both experienced and novice users can execute neural network models in a distributed manner with an automated hyperparameter-tuning feature. In addition, the proposed framework is provided on AWS SageMaker for scaling the distribution and achieving exascale FLOPS. We benchmarked the framework's performance by applying it to a medical dataset. RESULTS: The maximum performance is achieved with a speedup of 6.59 on 5 nodes. The model encourages expert and novice neural network users alike to apply neural network models on the distributed platform and obtain enhanced results with accelerated training time. There has been considerable research on improving the training time of Convolutional Neural Networks (CNNs) using distributed models, with a particular emphasis on automating the hyperparameter-tuning process. The study shows that training times may be decreased across the board not just by manually tweaking hyperparameters, but also by using L2 regularization, a dropout layer, and ConvLSTM for automatic hyperparameter modification. CONCLUSION: The proposed method improved training speed by 1.4% for model-parallel setups and by 2.206% for data-parallel setups. Data-parallel execution achieved a high accuracy of 93.3825%, whereas model-parallel execution achieved a top accuracy of 89.59%.
{"title":"Circumventing Stragglers and Staleness in Distributed CNN using LSTM","authors":"A. Ravikumar, Harini Sriraman, Saddikuti Lokesh, Jitendra Sai","doi":"10.4108/eetiot.5119","DOIUrl":"https://doi.org/10.4108/eetiot.5119","url":null,"abstract":"INTRODUCTION: Using neural networks for these inherently distributed applications is challenging and time-consuming. There is a crucial need for a framework that supports a distributed deep neural network to yield accurate results at an accelerated time. \u0000METHODS: In the proposed framework, any experienced novice user can utilize and execute the neural network models in a distributed manner with the automated hyperparameter tuning feature. In addition, the proposed framework is provided in AWS Sage maker for scaling the distribution and achieving exascale FLOPS. We benchmarked the framework performance by applying it to a medical dataset. \u0000RESULTS: The maximum performance is achieved with a speedup of 6.59 in 5 nodes. The model encourages expert/ novice neural network users to apply neural network models in the distributed platform and get enhanced results with accelerated training time. There has been a lot of research on how to improve the training time of Convolutional Neural Networks (CNNs) using distributed models, with a particular emphasis on automating the hyperparameter tweaking process. The study shows that training times may be decreased across the board by not just manually tweaking hyperparameters, but also by using L2 regularization, a dropout layer, and ConvLSTM for automatic hyperparameter modification. \u0000CONCLUSION: The proposed method improved the training speed for model-parallel setups by 1.4% and increased the speed for parallel data by 2.206%. Data-parallel execution achieved a high accuracy of 93.3825%, whereas model-parallel execution achieved a top accuracy of 89.59%.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"50 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139778071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
B. K S, Chinmaya Kumar Pradhan, Venkateswarlu A N, Harini G, Geetha P
Farming is a crucial vocation for survival on this planet because it meets most people's basic needs. However, as technology developed and the Internet of Things emerged, automation (smarter technologies) began to replace old approaches, leading to broad improvements in all fields. Newer, smarter technologies are now being upgraded daily across a wide range of domains, including smart homes, waste management, automobiles, industry, farming, health, and grids. Farmers suffer significant losses from the regular crop destruction caused by local animals such as buffaloes, cows, goats, and elephants. To protect their fields, farmers have been using animal traps or electric fences, which cause countless deaths of both animals and humans. Many individuals are giving up farming because of the serious harm that animals inflict on crops, and the systems now in use make it challenging to identify the animal species. Consequently, animal detection is made simple and effective by employing an Artificial Intelligence based Convolutional Neural Network method, and playing animal-specific sounds is by far the most effective response. Rotating cameras are put to good use. The percentage of animals detected by this technique has grown from 55% to 79%.
{"title":"An internet of things based smart agriculture monitoring system using convolution neural network algorithm","authors":"B. K S, Chinmaya Kumar Pradhan, Venkateswarlu A N, Harini G, Geetha P","doi":"10.4108/eetiot.5105","DOIUrl":"https://doi.org/10.4108/eetiot.5105","url":null,"abstract":"Farming is a crucial vocation for survival on this planet because it meets the majority of people's necessities to live. However, as technology developed and the Internet of Things was created, automation (smarter technologies) began to replace old approaches, leading to a broad improvement in all fields. Currently in an automated condition where newer, smarter technologies are being upgraded daily throughout a wide range of industries, including smart homes, waste management, automobiles, industries, farming, health, grids, and more. Farmers go through significant losses as a result of the regular crop destruction caused by local animals like buffaloes, cows, goats, elephants, and others. To protect their fields, farmers have been using animal traps or electric fences. Both animals and humans perish as a result of these countless deaths. Many individuals are giving up farming because of the serious harm that animals inflict on crops. The systems now in use make it challenging to identify the animal species. Consequently, animal detection is made simple and effective by employing the Artificial Intelligence based Convolution Neural Network method. The concept of playing animal-specific sounds is by far the most accurate execution. Rotating cameras are put to good use. The percentage of animals detected by this technique has grown from 55% to 79%.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"80 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139839857","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
J. Cruz Antony, G. M. Karpura Dheepan, Veena K, Vellanki Vikas, Vuppala Satyamitra
In the realm of contemporary applications and everyday life, the significance of object recognition and classification cannot be overstated. A multitude of valuable domains, including G-lens technology, cancer prediction, Optical Character Recognition (OCR), face recognition, and more, rely heavily on the efficacy of image identification algorithms. Among these, Convolutional Neural Networks (CNNs) have emerged as a cutting-edge technique that excels at feature extraction, offering pragmatic solutions to a diverse array of object recognition challenges. A notable strength of CNNs is their faster execution, which makes them particularly advantageous for real-time processing. Traffic sign recognition holds profound importance, especially in the development of practical applications such as autonomous driving for vehicles like Tesla's, as well as in traffic surveillance. In this research endeavour, the focus was directed towards the Belgium Traffic Signs Dataset (BTS), a repository comprising 62 distinct traffic signs. Employing a CNN model, a methodical approach was followed, commencing with a rigorous phase of data pre-processing. This preparatory stage was complemented by the strategic incorporation of residual blocks during model training, thereby enhancing the network's ability to glean intricate features from traffic sign images. Notably, the proposed methodology yielded a commendable accuracy rate of 94.25%, demonstrating the system's robust and proficient recognition capabilities. The distinctive strength of the methodology shows through its substantial improvements in specific parameters compared with pre-existing techniques. The approach excels in terms of accuracy, capitalises on the CNN's rapid execution speed, and offers an efficient means of feature extraction. By training effectively on a diverse dataset encompassing 62 varied traffic signs, the model shows promising potential for real-world applications. The overarching analysis highlights the efficacy of the proposed technique, reaffirming its potency in achieving precise traffic sign recognition and positioning it as a viable solution for real-time scenarios and autonomous systems.
{"title":"Traffic sign recognition using CNN and Res-Net","authors":"J. Cruz Antony, G. M. Karpura Dheepan, Veena K, Vellanki Vikas, Vuppala Satyamitra","doi":"10.4108/eetiot.5098","DOIUrl":"https://doi.org/10.4108/eetiot.5098","url":null,"abstract":" \u0000In the realm of contemporary applications and everyday life, the significance of object recognition and classification cannot be overstated. A multitude of valuable domains, including G-lens technology, cancer prediction, Optical Character Recognition (OCR), Face Recognition, and more, heavily rely on the efficacy of image identification algorithms. Among these, Convolutional Neural Networks (CNN) have emerged as a cutting-edge technique that excels in its aptitude for feature extraction, offering pragmatic solutions to a diverse array of object recognition challenges. CNN's notable strength is underscored by its swifter execution, rendering it particularly advantageous for real-time processing. The domain of traffic sign recognition holds profound importance, especially in the development of practical applications like autonomous driving for vehicles such as Tesla, as well as in the realm of traffic surveillance. In this research endeavour, the focus was directed towards the Belgium Traffic Signs Dataset (BTS), an encompassing repository comprising a total of 62 distinct traffic signs. By employing a CNN model, a meticulously methodical approach was obtained commencing with a rigorous phase of data pre-processing. This preparatory stage was complemented by the strategic incorporation of residual blocks during model training, thereby enhancing the network's ability to glean intricate features from traffic sign images. Notably, our proposed methodology yielded a commendable accuracy rate of 94.25%, demonstrating the system's robust and proficient recognition capabilities. The distinctive prowess of our methodology shines through its substantial improvements in specific parameters compared to pre-existing techniques. Our approach thrives in terms of accuracy, capitalizing on CNN's rapid execution speed, and offering an efficient means of feature extraction. By effectively training on a diverse dataset encompassing 62 varied traffic signs, our model showcases a promising potential for real-world applications. The overarching analysis highlights the efficacy of our proposed technique, reaffirming its potency in achieving precise traffic sign recognition and positioning it as a viable solution for real-time scenarios and autonomous systems.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"48 40","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139845050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jency Rubia J, Babitha Lincy R, E. Nithila, Sherin Shibi C, Rosi A
Cryptography is the art of hiding significant data or information using other codes; it is the practice and study of securing information and communication, thereby preventing third-party interference with data communication. Cryptographic technology transforms data into another form to enhance security and robustness against attacks. The drive to strengthen the security of data transfer has intensified ever since the field of Artificial Intelligence came to market. Consequently, modern cryptographic algorithms such as AES, 3DES, RSA, Diffie-Hellman, and ECC came into practice. The public-key encryption techniques now in use are based on the hardness of discrete logarithms over elliptic curves and of integer factorization. However, both of these hard problems can be solved effectively with a sufficiently large-scale quantum computer. Post-Quantum Cryptography (PQC) aims to deal with an attacker who possesses such a large-scale quantum computer; it is therefore essential to build cryptographic algorithms that remain robust and secure where the most vulnerable pre-quantum methods fail, and this is what is meant by 'Post-Quantum Cryptography'. Present post-quantum cryptosystems must contend with very large encryption-key and signature sizes, and careful estimation of encryption/decryption time and of the traffic generated over the communication channel is required. This post-quantum cryptography (PQC) article discusses different families of post-quantum cryptosystems, analyses the current status of the National Institute of Standards and Technology (NIST) post-quantum cryptography standardisation process, and examines the difficulties faced by the PQC community.
{"title":"A Survey about Post Quantum Cryptography Methods","authors":"Jency Rubia J, Babitha Lincy R, E. Nithila, Sherin Shibi C, Rosi A","doi":"10.4108/eetiot.5099","DOIUrl":"https://doi.org/10.4108/eetiot.5099","url":null,"abstract":"Cryptography is an art of hiding the significant data or information with some other codes. It is a practice and study of securing information and communication. Thus, cryptography prevents third party intervention over the data communication. The cryptography technology transforms the data into some other form to enhance security and robustness against the attacks. The thrust of enhancing the security among data transfer has been emerged ever since the need of Artificial Intelligence field came into a market. Therefore, modern way of computing cryptographic algorithm came into practice such as AES, 3DES, RSA, Diffie-Hellman and ECC. These public-key encryption techniques now in use are based on challenging discrete logarithms for elliptic curves and complex factorization. However, those two difficult problems can be effectively solved with the help of sufficient large-scale quantum computer. The Post Quantum Cryptography (PQC) aims to deal with an attacker who has a large-scale quantum computer. Therefore, it is essential to build a robust and secure cryptography algorithm against most vulnerable pre-quantum cryptography methods. That is called ‘Post Quantum Cryptography’. Therefore, the present crypto system needs to propose encryption key and signature size is very large.in addition to careful prediction of encryption/decryption time and amount of traffic over the communication wire is required. The post-quantum cryptography (PQC) article discusses different families of post-quantum cryptosystems, analyses the current status of the National Institute of Standards and Technology (NIST) post-quantum cryptography standardisation process, and looks at the difficulties faced by the PQC community.","PeriodicalId":506477,"journal":{"name":"EAI Endorsed Transactions on Internet of Things","volume":"86 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139784484","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}