High Utility Itemset Extraction using PSO with Online Control Parameter Calibration
Pub Date: 2024-05-14 | DOI: 10.47164/ijngc.v15i1.1643
L. K., SURESH S, SAVITHA S, ANANDAMURUGAN S
This study investigates the use of evolutionary computation for mining high-value patterns from benchmark datasets. The approach employs a fitness function to assess the usefulness of each pattern. However, the effectiveness of evolutionary algorithms depends heavily on the strategy parameters chosen during execution. Conventional methods set these parameters arbitrarily, often leading to suboptimal solutions. To address this limitation, the research proposes a method for dynamically adjusting strategy parameters using temporal-difference methods from Reinforcement Learning (RL). Specifically, the proposed IPSO RLON algorithm uses SARSA learning to intelligently adapt the Crossover Rate and Mutation Rate within the Particle Swarm Optimization (PSO) algorithm. This allows IPSO RLON to effectively mine high-utility itemsets from the data. The key benefit of IPSO RLON lies in its adaptive control parameters, which enable it to discover optimal high-utility itemsets across various benchmark datasets. To assess its performance, IPSO RLON is compared to existing approaches such as HUPEUMU-GRAM, HUIM-BPSO, IGA RLOFF, and IPSO RLOFF using metrics such as execution time, convergence speed, and the percentage of high-utility itemsets mined. The evaluation shows that the proposed IPSO RLON performs better than the other methodologies.
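The abstract's core mechanism, SARSA adjusting the Crossover and Mutation Rates between generations, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state encoding (sign of the fitness improvement), the reward, the action set, and the learning constants are all assumptions.

```python
import random

# Hedged sketch: SARSA nudges the crossover rate (CR) and mutation rate (MR)
# between generations of a PSO-style search.
ACTIONS = [(+0.05, 0.0), (-0.05, 0.0), (0.0, +0.01), (0.0, -0.01), (0.0, 0.0)]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2
Q = {}  # Q[(state, action_index)] -> estimated value

def choose(state):
    """Epsilon-greedy selection over the rate-adjustment actions."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def sarsa_step(s, a, r, s2, a2):
    """Standard SARSA update: Q(s,a) += alpha * (r + gamma*Q(s',a') - Q(s,a))."""
    q, q2 = Q.get((s, a), 0.0), Q.get((s2, a2), 0.0)
    Q[(s, a)] = q + ALPHA * (r + GAMMA * q2 - q)

def adapt(cr, mr, improvement, prev):
    """One generation of online calibration. State = sign of the improvement
    in best fitness (an assumed design); reward = the improvement itself."""
    state, action = prev
    next_state = 1 if improvement > 0 else 0
    next_action = choose(next_state)
    sarsa_step(state, action, improvement, next_state, next_action)
    d_cr, d_mr = ACTIONS[next_action]
    # Keep both rates inside sensible bounds.
    cr = min(1.0, max(0.0, cr + d_cr))
    mr = min(0.5, max(0.0, mr + d_mr))
    return cr, mr, (next_state, next_action)

# Toy usage: drive the controller with random stand-in fitness improvements.
cr, mr, prev = 0.8, 0.05, (0, choose(0))
for gen in range(20):
    improvement = random.uniform(-1, 1)
    cr, mr, prev = adapt(cr, mr, improvement, prev)
print(cr, mr)
```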
{"title":"High Utility Itemset Extraction using PSO with Online Control Parameter Calibration","authors":"L. K., SURESH S, SAVITHA S, ANANDAMURUGAN S","doi":"10.47164/ijngc.v15i1.1643","DOIUrl":"https://doi.org/10.47164/ijngc.v15i1.1643","url":null,"abstract":"This study investigates the use of evolutionary computation for mining high-value patterns from benchmark datasets. The approach employs a fitness function to assess the usefulness of each pattern. However, the effectiveness of evolutionary algorithms heavily relies on the chosen strategy parameters during execution. Conventional methods set these parameters arbitrarily, often leading to suboptimal solutions. To address this limitation, the research proposes a method for dynamically adjusting strategy parameters using temporal difference approaches, a machine learning technique called Reinforcement Learning (RL). Specifically, the proposed IPSO RLON algorithm utilizes SARSA learning to intelligently adapt the Crossover Rate and Mutation Rate within the Practical Swarm Optimization Algorithm. This allows IPSO RLON to effectively mine high-utility itemsets from the data.The key benefit of IPSO RLON lies in its adaptive control parameters. This enables it to discover optimal high-utility itemsets when applied to various benchmark datasets. To assess its performance, IPSO RLON is compared to existing approaches like HUPEUMU-GRAM, HUIM-BPSO, IGA RLOFF, and IPSO RLOFF using metrics like execution time, convergence speed, and the percentage of high-utility itemsets mined. From the evaluation it is observed that the proposed IPSO RLON perfroms better than the other methodology.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"17 10","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140980662","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Alzheimer’s Disease Classification using Feature Enhanced Deep Convolutional Neural Networks
Pub Date: 2024-05-14 | DOI: 10.47164/ijngc.v15i1.1242
R. Sreemathy, Danish Khan, Kisley Chandra, Tejas Bora, S. Khurana
Neurodegenerative disorders are among the most insidious disorders, affecting millions around the world. Presently, these disorders have no remedy; however, if detected at an early stage, therapy can prevent further degeneration. This study aims to detect the early onset of one such neurodegenerative disorder, Alzheimer’s Disease, the most prevalent neurological disorder, using a proposed Convolutional Neural Network (CNN). MRI scans are pre-processed by applying various filters, namely a High-Pass Filter, Contrast Stretching, a Sharpening Filter, and an Anisotropic Diffusion Filter, to enhance the biomarkers in the MRI images. A total of 21 models are proposed using different preprocessing and enhancement techniques on transverse and sagittal MRI images. A comparative analysis of the proposed five-layer CNN model with AlexNet is presented. The proposed CNN model outperforms AlexNet, achieving an accuracy of 99.40%, a precision of 0.988, and a recall of 1.00 using an edge-enhanced, contrast-stretched, anisotropic diffusion filter. The proposed method may be used to implement automated diagnosis of neurodegenerative disorders.
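A minimal sketch of the kind of enhancement chain named above (contrast stretching, sharpening, and Perona-Malik anisotropic diffusion). The kernel, kappa, and iteration counts are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy.ndimage import convolve

def contrast_stretch(img):
    """Linearly rescale intensities to the full [0, 1] range."""
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo + 1e-8)

def sharpen(img):
    """Classic 3x3 sharpening kernel: original plus a Laplacian edge term."""
    k = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)
    return np.clip(convolve(img, k, mode="nearest"), 0.0, 1.0)

def anisotropic_diffusion(img, n_iter=10, kappa=0.1, lam=0.2):
    """Perona-Malik diffusion: smooths noise while preserving edges.
    np.roll wraps at the borders, which is acceptable for a sketch."""
    img = img.copy()
    for _ in range(n_iter):
        # Finite-difference gradients toward the four neighbours.
        dn = np.roll(img, -1, 0) - img
        ds = np.roll(img, 1, 0) - img
        de = np.roll(img, -1, 1) - img
        dw = np.roll(img, 1, 1) - img
        # Edge-stopping conduction coefficients: small near strong edges.
        c = lambda d: np.exp(-(d / kappa) ** 2)
        img += lam * (c(dn) * dn + c(ds) * ds + c(de) * de + c(dw) * dw)
    return img

# Usage: chain the filters on a grayscale slice normalized to [0, 1].
slice_ = np.random.rand(128, 128)  # placeholder for an MRI slice
enhanced = anisotropic_diffusion(sharpen(contrast_stretch(slice_)))
```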
{"title":"Alzheimer’s Disease Classification using Feature Enhanced Deep Convolutional Neural Networks","authors":"R. Sreemathy, Danish Khan, Kisley Chandra, Tejas Bora, S. Khurana","doi":"10.47164/ijngc.v15i1.1242","DOIUrl":"https://doi.org/10.47164/ijngc.v15i1.1242","url":null,"abstract":"Neurodegenerative disorders are one of the most insidious disorders that affect millions around the world. Presently, these disorders do not have any remedy, however, if detected at an early stage, therapy can prevent further degeneration. This study aims to detect the early onset of one such neurodegenerative disorder called Alzheimer’s Disease, which is the most prevalent neurological disorder using the proposed Convolutional Neural Network (CNN). These MRI scans are pre-processed by applying various filters, namely, High-Pass Filter, Contrast Stretching, Sharpening Filter, and Anisotropic Diffusion Filter to enhance the Biomarkers in MRI images. A total of 21 models are proposed using different preprocessing and enhancement techniques on transverse and sagittal MRI images. The comparative analysis of the proposed five-layer Convolutional Neural Network (CNN) model with Alex Net is presented. The proposed CNN model outperforms AlexNet and achieves an accuracy of 99.40%, with a precision of 0.988, and recall of 1.00, by using an edge enhanced, contrast stretched, anisotropic diffusion filter. The proposed method may be used to implement automated diagnosis of neurodegenerative disorders.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":" 22","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141128338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep Learning based Semantic Segmentation for Buildings Detection from Remote Sensing Images
Pub Date: 2024-05-14 | DOI: 10.47164/ijngc.v15i1.1645
Miral Patel, Hasmukh P. Koringa
Building extraction from remote sensing images is the process of automatically identifying and extracting the boundaries of buildings from high-resolution aerial or satellite images. The extracted building footprints can be used for a variety of applications, such as urban planning, disaster management, city development, land management, environmental monitoring, and 3D modeling. The results depend on several factors, such as the quality and resolution of the image and the choice of algorithm. The process typically involves a series of steps, including image pre-processing, feature extraction, and classification. Building extraction can be challenging due to factors such as varying building sizes and shapes, shadows, and occlusions. However, recent advances in deep learning and computer vision techniques have led to significant improvements in the accuracy and efficiency of building extraction methods. This research presents a deep learning semantic segmentation model for building detection from high-resolution remote sensing images. The open-source Massachusetts dataset is used to train the proposed UNet architecture. The model is optimized using the RMSProp algorithm with a learning rate of 0.0001 for 100 epochs. After 1.52 hours of training on Google Colab, the model achieved an 83.55% F1 score, which indicates strong precision and recall.
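The quoted training recipe (UNet, RMSProp at learning rate 0.0001, a binary building mask) can be sketched in Keras as below; the depth, filter counts, and input size are assumptions, since the paper's exact configuration is not given.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.Conv2D(filters, 3, padding="same", activation="relu")(x)

def build_unet(input_shape=(256, 256, 3)):
    inputs = layers.Input(input_shape)
    # Encoder: two downsampling stages.
    c1 = conv_block(inputs, 32); p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, 64);     p2 = layers.MaxPooling2D()(c2)
    b = conv_block(p2, 128)      # bottleneck
    # Decoder with skip connections back to the encoder features.
    u2 = layers.Conv2DTranspose(64, 2, strides=2, padding="same")(b)
    c3 = conv_block(layers.Concatenate()([u2, c2]), 64)
    u1 = layers.Conv2DTranspose(32, 2, strides=2, padding="same")(c3)
    c4 = conv_block(layers.Concatenate()([u1, c1]), 32)
    # One-channel sigmoid mask: building vs. background.
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(
    optimizer=tf.keras.optimizers.RMSprop(learning_rate=1e-4),  # lr from the abstract
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)
# model.fit(train_ds, validation_data=val_ds, epochs=100)
```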
{"title":"Deep Learning based Semantic Segmentation for Buildings Detection from Remote Sensing Images","authors":"Miral Patel, Hasmukh P. Koringa","doi":"10.47164/ijngc.v15i1.1645","DOIUrl":"https://doi.org/10.47164/ijngc.v15i1.1645","url":null,"abstract":"Building extraction from remote sensing images is the process of automatically identifying and extracting the boundaries of buildings from high-resolution aerial or satellite images. The extracted building footprints can be used for a variety of applications, such as urban planning, disaster management, city development, land management, environmental monitoring, and 3D modeling. The results of building extraction from remote sensing images depend on several factors, such as the quality and resolution of the image and the choice of algorithm.The process of building extraction from remote sensing images typically involves a series of steps, including image pre-processing, feature extraction, and classification. Building extraction from remote sensing images can be challenging due to factors such as varying building sizes and shapes, shadows, and occlusions. However, recent advances in deep learning and computer vision techniques have led to significant improvements in the accuracy and efficiency of building extraction methods. This research presents a deep learning semantic segmentation architecture-based model for developing building detection from high resolution remote sensing images. The open-source Massachusetts dataset is used to train the suggested UNet architecture. The model is optimized using the RMSProp algorithm with a learning rate of 0.0001 for 100 epochs. After 1.52 hours of training on Google Colab the model achieved an 83.55% F1 score, which indicates strong precision and recall.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"19 9","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140979541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning-assisted Distance Based Residual Energy Aware Clustering Algorithm for Energy Efficient Information Dissemination in Urban VANETs
Pub Date: 2024-05-14 | DOI: 10.47164/ijngc.v15i1.1472
Amit Choksi, Mehul Shah
A Vehicular Ad-hoc Network (VANET) is an essential component of intelligent transportation systems in the building of smart cities. A VANET is a self-configuring, highly mobile, and dynamic wireless ad-hoc network that connects all vehicle nodes in a smart city to provide in-vehicle infotainment services to city administrators and residents. In the smart city, the On-board Unit (OBU) of each vehicle has multiple onboard sensors that are used for data collection from the surrounding environment. One of the main issues in VANETs is energy efficiency and balance, because the small onboard sensors cannot be quickly recharged once installed on OBUs. Conserving energy is therefore a crucial challenge in VANETs, one that is primarily contingent on the selection of Cluster Heads (CHs) and the adopted packet routing strategy. To address this issue, this paper proposes a distance- and energy-aware clustering algorithm named SOMNNDP, which uses a Self-Organizing Map Neural Network (SOMNN) machine learning technique to perform faster multi-hop data dissemination. Individual Euclidean distances and residual node energy are considered as mobility parameters throughout the cluster routing process to improve and balance the energy consumption among the participating vehicle nodes. This maximizes the lifetime of the VANET by ensuring that all intermediate vehicle nodes use energy at approximately the same rate. Simulation findings demonstrate that SOMNNDP achieves better Quality of Service (QoS) and consumes 17% and 14% less energy during cluster routing than the distance- and energy-aware variants of K-Means (KM) and Fuzzy C-Means (FCM), called KMDP and FCMDP, respectively.
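A minimal sketch of the clustering idea using the `minisom` library: map each node's features (distance and residual energy) onto a small SOM grid, then elect the highest-energy member of each grid cell as cluster head. The feature set, grid size, and head-election rule are assumptions for illustration, not the paper's exact design.

```python
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
# Columns: [Euclidean distance to a reference point (normalized),
#           residual energy (normalized)] for 100 vehicle nodes.
nodes = rng.random((100, 2))

# Train a 3x3 self-organizing map on the node features.
som = MiniSom(3, 3, input_len=2, sigma=1.0, learning_rate=0.5, random_seed=0)
som.train_random(nodes, num_iteration=500)

# Group nodes by their winning SOM cell: each cell is one cluster.
clusters = {}
for idx, feat in enumerate(nodes):
    clusters.setdefault(som.winner(feat), []).append(idx)

# Assumed cluster-head rule: the node with the most residual energy wins,
# which spreads the relaying load and balances energy drain across nodes.
heads = {cell: max(members, key=lambda i: nodes[i, 1])
         for cell, members in clusters.items()}
print(heads)
```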
{"title":"Machine Learning-assisted Distance Based Residual Energy Aware Clustering Algorithm for Energy Efficient Information Dissemination in Urban VANETs","authors":"Amit Choksi, Mehul Shah","doi":"10.47164/ijngc.v15i1.1472","DOIUrl":"https://doi.org/10.47164/ijngc.v15i1.1472","url":null,"abstract":"A Vehicular Ad-hoc Network (VANET) is an essential component of intelligent transportation systems in the building of smart cities. A VANET is a self-configure high mobile and dynamic potential wireless ad-hoc network that joins all vehicle nodes in a smart city to provide in-vehicle infotainment services to city administrators and residents. In the smart city, the On-board Unit (OBU) of each vehicle has multiple onboard sensors that are used for data collection from the surrounding environment. One of the main issues in VANET is energy efficiency and balance because the small onboard sensors can’t be quickly recharged once installed on On-board Units (OBUs). Moreover, conserving energy stands out as a crucial challenge in VANET which is primarily contingent on the selection of Cluster Heads (CH) and the adopted packet routing strategy. To address this issue, this paper proposes distance and energy-aware clustering algorithms named SOMNNDP, which use a Self-Organizing Map Neural Network (SOMNN) machine learning technique to perform faster multi-hop data dissemination. Individual Euclidean distances and residual node energy are considered as mobility parameters throughout the cluster routing process to improve and balance the energy consumption among the participating vehicle nodes. This maximizes the lifetime of VANET by ensuring that all intermediate vehicle nodes use energy at approximately the same rate. Simulation findings demonstrate that SOMNNDP improves Quality of Service (QoS) better and consumes 17% and 14% less energy during cluster routing than distance and energy-aware variation of K-Means (KM) and Fuzzy C-Means (FCM) called KMDP and FCMDP respectively.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"29 22","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140980153","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Integrating Smartphone Sensor Technology to Enhance Fine Motor and Working Memory Skills in Pediatric Obesity: A Gamified Approach
Pub Date: 2024-05-14 | DOI: 10.47164/ijngc.v15i1.1676
Sudipta Saha, Saikat Basu, Koushik Majumder, Sourav Das
Childhood obesity remains a pervasive global challenge, often accompanied by deficits in working memory and fine motor skills among affected children. These deficits detrimentally impact academic performance. Despite limited evidence, home-based interventions targeting both fine motor skills and working memory remain underexplored. Leveraging game-based approaches holds promise in behavior modification, self-management of chronic conditions, therapy adherence, and patient monitoring. In this study, a novel smartphone-based game was meticulously developed to target the enhancement of working memory and fine motor skills in a cohort of thirty-two obese or overweight children. Over two weeks, participants engaged in regular gameplay sessions within the comfort of their homes. Pretest and post-test assessments yielded compelling evidence of significant improvements, with statistical significance established at a robust 95% confidence level. Notably, participants exhibited a progressive trend of improvement in their gameplay performance. Recognizing the profound impact of academic achievement on future socioeconomic trajectories, regardless of weight management outcomes, the importance of bolstering cognitive skills cannot be overstated. This innovative intervention offers a pragmatic and cost-effective solution to empower children to cultivate essential cognitive abilities within their home environment. By fostering the development of working memory and fine motor skills, this intervention holds promise in facilitating improved academic performance and, consequently, enhancing long-term prospects for these children.
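For readers who want to reproduce the style of pre/post analysis described (significance established at the 95% confidence level), a paired t-test is the standard tool; the scores below are made-up placeholders, not the study's data.

```python
from scipy import stats

# Placeholder pre/post scores for the same participants (paired samples).
pretest  = [12, 15, 9, 11, 14, 10, 13, 12]
posttest = [15, 18, 12, 13, 17, 12, 16, 14]

# Paired t-test on the within-subject differences.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
if p_value < 0.05:  # alpha = 0.05, i.e. the 95% confidence level
    print(f"Significant improvement (t={t_stat:.2f}, p={p_value:.4f})")
```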
{"title":"Integrating Smartphone Sensor Technology to Enhance Fine Motor and Working Memory Skills in Pediatric Obesity: A Gamified Approach","authors":"Sudipta Saha, Saikat Basu, Koushik Majumder, Sourav Das","doi":"10.47164/ijngc.v15i1.1676","DOIUrl":"https://doi.org/10.47164/ijngc.v15i1.1676","url":null,"abstract":"Childhood obesity remains a pervasive global challenge, often accompanied by deficits in working memory and fine motor skills among affected children. These deficits detrimentally impact academic performance. Despite limited evidence, home-based interventions targeting both fine motor skills and working memory remain underexplored. Leveraging game-based approaches holds promise in behavior modification, self-management of chronic conditions, therapy adherence, and patient monitoring. In this study, a novel smartphone-based game was meticulously developed to target the enhancement of working memory and fine motor skills in a cohort of thirty-two obese or overweight children. Over two weeks, participants engaged in regular gameplay sessions within the comfort of their homes. Pretest and post-test assessments yielded compelling evidence of significant improvements, with statistical significance established at a robust 95% confidence level. Notably, participants exhibited a progressive trend of improvement in their gameplay performance. Recognizing the profound impact of academic achievement on future socioeconomic trajectories, regardless of weight management outcomes, the importance of bolstering cognitive skills cannot be overstated. This innovative intervention offers a pragmatic and cost-effective solution to empower children to cultivate essential cognitive abilities within their home environment. By fostering the development of working memory and fine motor skills, this intervention holds promise in facilitating improved academic performance and, consequently, enhancing long-term prospects for these children.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"48 5","pages":""},"PeriodicalIF":0.3,"publicationDate":"2024-05-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140979131","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards Conceptualization Of A Prototype For Quantum Database: A Complete Ecosystem
Pub Date: 2023-11-28 | DOI: 10.47164/ijngc.v14i4.1121
Sayantan Chakraborty
This study proposes the conceptualization of a prototype and a possible path toward converging the classical database and a fully quantum database. The study identifies the gap between classical and quantum databases and proposes a prototype that can be implemented in future products, in a way that can be used in future industrial product development on hybrid quantum computers. Whereas existing work treats the oracle as a black box, this study opens up the possibility for the quantum industry to develop the QASAM module, so that a fully quantum database can be created instead of using a classical database as a black box. Because the Toffoli gate is effectively a NAND gate, any algorithm can, in theory, be run on a quantum computer. We therefore propose a logical design for the quantum database covering memory management, a security enhancement model, a Quantum Recovery Manager, an automatic storage management model, and more, which will ensure the quantum advantages. This study also explains the Quantum Vector Database, as well as the possibility of improvement in duality quantum computing. It opens up new scope, possibilities, and research areas in a new approach to quantum databases and duality quantum computing.
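The Toffoli-as-NAND claim above can be checked directly in a few lines of Qiskit; this is a generic illustration of the universality argument, not code from the study.

```python
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

def toffoli_nand(a: int, b: int) -> int:
    """With its target prepared in |1>, Toffoli (CCX) computes NAND(a, b):
    the target ends up as 1 XOR (a AND b) = NOT(a AND b)."""
    qc = QuantumCircuit(3)
    if a:
        qc.x(0)          # prepare input qubit 0
    if b:
        qc.x(1)          # prepare input qubit 1
    qc.x(2)              # target starts in |1>
    qc.ccx(0, 1, 2)      # flip target iff a AND b
    probs = Statevector.from_instruction(qc).probabilities_dict()
    label = max(probs, key=probs.get)  # a single basis state for classical inputs
    return int(label[0])  # Qiskit bitstrings put the highest qubit (q2) leftmost

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", toffoli_nand(a, b))  # prints the NAND truth table
```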
{"title":"Towards Conceptualization Of A Prototype For Quantum Database: A Complete Ecosystem","authors":"Sayantan Chakraborty","doi":"10.47164/ijngc.v14i4.1121","DOIUrl":"https://doi.org/10.47164/ijngc.v14i4.1121","url":null,"abstract":"This study proposes a conceptualization of a prototype And a possibility to converge classical database and fully quantum database. This study mostly identifies the gap between this classical and quantum database and proposes a prototype that can be implemented in future products. It is a way that can be used in future industrial product development on hybrid quantum computers. The existing concept used to consider oracle as a black box in this study opens up the possibility for the quantum industry to develop the QASAM module so that we can create a fully quantum database instead of using a classical database as BlackBox.As the Toffoli gate is basically an effective NAND gate it is possible to run any algorithm theoretically in quantum computers. So we will propose a logical design for memory management for the quantum database, security enhancement model, Quantum Recovery Manager & automatic storage management model, and more for the quantum database which will ensure the quantum advantages. In this study, we will also explain the Quantum Vector Database as well as the possibility of improvement in duality quantum computing. It opens up a new scope, possibilities, and research areas in a new approach for quantum databases and duality quantum computing.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"342 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139224516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vegetation Health and Forest Canopy Density Monitoring in The Sundarban Region Using Remote Sensing and GIS
Pub Date: 2023-11-28 | DOI: 10.47164/ijngc.v14i4.1415
Soma Mitra, Samarjit Naskar, Dr. Saikat Basu
The present study explores vegetation health and forest canopy density in the Sundarbans region using Landsat-8 images. This work analyzes changes in vegetation health using two vegetation indices, the Normalized Difference Vegetation Index (NDVI) and Forest Canopy Density (FCD), for the Sundarbans from 2014 to 2020. NDVI, computed from two bands, Red and Near-infrared (NIR), shows a declining trend over the period. Two NDVI land cover classification maps for 2014 and 2020 are produced, and the area of interest is divided into five classes: Scanty, Low, Medium, and Densely Vegetated Regions, and Water Bodies. A single-band linear gradient pseudo-color is used to assess the land cover difference between 2020 and 2014, showing marked changes in densely vegetated areas. The NDVI difference shows a higher rate of vegetation depletion in the coastal regions than in regions away from the seacoast. FCD is used to cross-check the NDVI results. FCD is derived from four component indices: AVI (Advanced Vegetation Index), BI (Bare Soil Index), SSI (Scaled Shadow Index), and TI (Thermal Index). FCD is also called crown cover or canopy coverage, which refers to the portion of an area covered by the crowns of trees. FCD maps for 2014 and 2020 are produced with a single-band linear gradient pseudo-color and five land cover classes: Bare Soil, Shrubs, and Low, Medium, and Highly Vegetated Regions. Both maps bear a significant resemblance to the NDVI land classification maps. Further, the FCD values of the two maps are scaled between 1 and 100, and the area of each class is calculated. To check the veracity of the NDVI and FCD analysis, a Deep Neural Network (DNN) model has been developed to classify each year’s image taken from Google Earth Engine (GEE); it classifies each year’s image with 99% accuracy. The calculation of the area of each class emphasizes the rapid decline of densely wooded vegetation: almost 80% of the highly forested zone has been diminished and has become part of the medium-forested region, and the area inflation in medium-forested regions corroborates this. The study also analyzes the migration of vegetation density, i.e., where and how many areas are unchanged, growing, or deforested.
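For reference, the NDVI computation underlying the maps is a simple band ratio; in Landsat-8, Red is band 4 and NIR is band 5. The class thresholds below are illustrative assumptions, since the paper's exact cut-offs are not stated.

```python
import numpy as np

def ndvi(nir: np.ndarray, red: np.ndarray) -> np.ndarray:
    """NDVI = (NIR - Red) / (NIR + Red), ranging over [-1, 1]."""
    return (nir - red) / (nir + red + 1e-8)  # epsilon avoids divide-by-zero

def classify(v: np.ndarray) -> np.ndarray:
    """Bin NDVI into five classes, mirroring the map legend above:
    0 = Water Bodies, 1 = Scanty, 2 = Low, 3 = Medium, 4 = Densely Vegetated.
    The bin edges are assumed, not the paper's calibrated values."""
    bins = [0.0, 0.2, 0.4, 0.6]
    return np.digitize(v, bins)

# Usage with placeholder reflectance arrays for bands 4 (Red) and 5 (NIR):
red, nir = np.random.rand(64, 64), np.random.rand(64, 64)
classes = classify(ndvi(nir, red))
```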
{"title":"Vegetation Health and Forest Canopy Density Monitoring in The Sundarban Region Using Remote Sensing and GIS","authors":"Soma Mitra, Samarjit Naskar, Dr. Saikat Basu","doi":"10.47164/ijngc.v14i4.1415","DOIUrl":"https://doi.org/10.47164/ijngc.v14i4.1415","url":null,"abstract":"The present study explores vegetation health and forest canopy density in the Sundarbans region using Landsat-8 images. This work analyzes changes in vegetation health using two vegetation indices, the Normalized Difference Vegetation Index (NDVI) and Forest Canopy Density (FCD) values of the Sundarbans, from 2014 to 2020. NDVI, comprising two bands, Red and Near-infrared (NIR), shows a declining trend during the period. Two NDVI land cover classification maps for 2014 and 2020 are produced, and the interest area is divided into five classes: Scanty, Low, Medium, and Densely Vegetated Regions and Water Bodies. A single-band linear gradient pseudo-color is used to assess the land cover difference between 2020 and 2014, showing marked changes in densely vegetative areas. The NDVI difference marks the coastal regions with a higher depletion rate of vegetation than the regions away from the seacoasts. FCD has been taken to compare the results of NDVI with it. FCD consists of another four models: AVI (advanced vegetative index), BI (Bare soil index), SSI (scaled shadow index), and TI (thermal index). FCD is also called crown cover or canopy coverage, which refers to the portion of an area in the field covered by the crown of trees. 2014 and 2015 FCD maps are produced with a single band linear gradient pseudocolor with five land cover classifications: bare soil, Bare Soil, Shrubs, Low, Medium, and Highly vegetated regions. Both maps bear a significant resemblance to NDVI land classification maps. Further, the FCD values of the two maps are scaled between 1 and 100, and the area of each class is calculated. To check the veracity of the NDVI and FCD analysis, a Deep Neural Network (DNN) model has been developed to classify each year’s image taken from Google Earth Engine (GEE). It classifies each year’s image with 99% accuracy. The calculation of the area of each class emphasizes the rapid decline of densely wooded vegetation. Almost 80% of the highly forested zone has been diminished and has become part of the medium-forested region. Area inflation in medium-forested regions corroborates the same. The study also analyzes the migration of vegetation density, i.e., where and how many areas are unchanged, growing, or deforested.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"19 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139218406","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interpolation Based Reversible Data Hiding using Pixel Intensity Classes
Pub Date: 2023-11-28 | DOI: 10.47164/ijngc.v14i4.1170
Abhinandan Tripathi, Jay Prakash
In this article, we suggest a new interpolation technique as well as a novel Reversible Data Hiding (RDH) approach for upscaling the original image and concealing sensitive information within the upscaled/interpolated image. This data hiding strategy takes the features of the Human Visual System (HVS) into account when concealing the secret data, in order to prevent detection of the private data even after extensive embedding. In the suggested hiding strategy, pixels are grouped into intensity ranges and the private data bits are adaptively embedded into each image cell based on its value. As a result, the proposed approach preserves the visual quality of the stego image. According to experimental findings, the proposed interpolation approach achieves PSNRs of over 40 dB for all experimental images. The outcomes further demonstrate that the suggested data concealing strategy outperforms existing interpolation-based data hiding schemes.
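A generic sketch of interpolation-based RDH (not the paper's exact scheme): original pixels keep their positions after 2x neighbour-mean upscaling, bits go only into the interpolated pixels, and extraction recomputes the interpolation from the intact originals to recover both the bits and the exact cover. The one-bit-per-pixel capacity rule is an assumption; adaptive schemes embed more bits where intensity differences are larger.

```python
import numpy as np

def upscale(img):
    """2x neighbour-mean interpolation; originals land on even coordinates."""
    h, w = img.shape
    out = np.zeros((2 * h - 1, 2 * w - 1), dtype=int)
    out[::2, ::2] = img
    out[::2, 1::2] = (img[:, :-1] + img[:, 1:]) // 2     # horizontal means
    out[1::2, ::2] = (img[:-1, :] + img[1:, :]) // 2     # vertical means
    out[1::2, 1::2] = (img[:-1, :-1] + img[1:, 1:]) // 2  # diagonal means
    return out

def embed(cover, bits):
    """Add each bit to an interpolated pixel; originals stay untouched."""
    stego, k = cover.copy(), 0
    for i in range(stego.shape[0]):
        for j in range(stego.shape[1]):
            if (i % 2 or j % 2) and k < len(bits):  # interpolated position
                stego[i, j] += bits[k]
                k += 1
    return stego

def extract(stego):
    """Recompute the interpolation from the intact originals; each hidden bit
    is the difference. Positions beyond the message read back as 0."""
    ref = upscale(stego[::2, ::2])
    return [int(stego[i, j] - ref[i, j])
            for i in range(stego.shape[0])
            for j in range(stego.shape[1]) if i % 2 or j % 2]

img = np.arange(16).reshape(4, 4) * 10
stego = embed(upscale(img), [1, 0, 1, 1])
print(extract(stego)[:4])   # -> [1, 0, 1, 1]
```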
{"title":"Interpolation Based Reversible Data Hiding using Pixel Intensity Classes","authors":"Abhinandan Tripathi, Jay Prakash","doi":"10.47164/ijngc.v14i4.1170","DOIUrl":"https://doi.org/10.47164/ijngc.v14i4.1170","url":null,"abstract":"In this article, we suggest a new interpolation technique as well as a novel Reversible Data Hiding (RDH) approach for up scaling the actual image and concealing sensitive information within the up scaled/interpolated image. This data hiding strategy takes into account the features of the Human Visual System (HVS) when concealing the secret data in order to prevent detection of the private data even after extensive embedding. The private data bits are adaptively embedded into the picture cell based on its values in the suggested hiding strategy after grouping different pixel intensity ranges. As a result, the proposed approach can preserve the stego-visual image’s quality. According to experimental findings, the proposed interpolation approach achieves PSNRs of over 40 dB for all experimental images. The outcomes further demonstrate that the suggested data concealing strategy outperforms every other interpolation-based data hiding scheme existing in use.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"21 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139221420","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic Hand Gesture Recognition for Indian Sign Language using Integrated CNN-LSTM Architecture
Pub Date: 2023-11-28 | DOI: 10.47164/ijngc.v14i4.1039
Pradip Patel, Narendra Patel
Human Centered Computing is an emerging research field that aims to understand human behavior. Dynamic hand gesture recognition is one of the most recent, challenging, and appealing applications in this field. In this paper, we propose a vision-based system to recognize dynamic hand gestures for Indian Sign Language (ISL). The system is built using a unified architecture formed by combining a Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM). To address the shortage of large labeled hand gesture datasets, we created two different CNNs by retraining the well-known image classification networks GoogLeNet and VGG16 using transfer learning. Frames of gesture videos are transformed into feature vectors using these CNNs. As these videos are ordered series of image frames, an LSTM model is joined to the fully-connected layer of the CNN. We evaluated the system on three different datasets consisting of color videos with 11, 64, and 8 classes. During experiments, the proposed CNN-LSTM architecture using GoogLeNet is found to be fast and efficient, achieving very high recognition rates of 93.18%, 97.50%, and 96.65% on the three datasets, respectively.
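The combined architecture can be sketched in Keras as below, with InceptionV3 standing in for GoogLeNet (Inception-v1 has no stock Keras build); the sequence length is an assumption, and the class count follows the 11-class dataset.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_FRAMES, NUM_CLASSES = 16, 11  # assumed clip length; 11-class dataset

# Pretrained CNN backbone turns each frame into one feature vector.
cnn = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg",
    input_shape=(224, 224, 3), weights="imagenet")
cnn.trainable = False  # transfer learning: reuse the ImageNet features

inputs = layers.Input((NUM_FRAMES, 224, 224, 3))
x = layers.TimeDistributed(cnn)(inputs)  # per-frame feature vectors
x = layers.LSTM(256)(x)                  # temporal modelling across frames
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = Model(inputs, outputs)
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```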
{"title":"Dynamic Hand Gesture Recognition for Indian Sign Language using Integrated CNN-LSTM Architecture","authors":"Pradip Patel, Narendra Patel","doi":"10.47164/ijngc.v14i4.1039","DOIUrl":"https://doi.org/10.47164/ijngc.v14i4.1039","url":null,"abstract":"Human Centered Computing is an emerging research field that aims to understand human behavior. Dynamic hand gesture recognition is one of the most recent, challenging and appealing application in this field. We have proposed one vision based system to recognize dynamic hand gestures for Indian Sign Language (ISL) in this paper. The system is built by using a unified architecture formed by combining Convolutional Neural Network (CNN) and Long Short Term Memory (LSTM). In order to hit the shortage of a huge labeled hand gesture dataset, we have created two different CNN by retraining a well known image classification networks GoogLeNet and VGG16 using transfer learning. Frames of gesture videos are transformed into features vectors using these CNNs. As these videos are prearranged series of image frames, LSTM model have been used to join with the fully-connected layer of CNN. We have evaluated the system on three different datasets consisting of color videos with 11, 64 and 8 classes. During experiments it is found that the proposed CNN-LSTM architecture using GoogLeNet is fast and efficient having capability to achieve very high recognition rates of 93.18%, 97.50%, and 96.65% on the three datasets respectively.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"31 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139227375","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Forecasting Time Series AQI of Haryana Cities Using Machine Learning
Pub Date: 2023-11-28 | DOI: 10.47164/ijngc.v14i4.1267
Reema Gupta, Priti Singla
In India and throughout the world, air pollution is becoming a more severe worry day by day. Governments and the general public have grown more concerned about how air pollution affects human health. Consequently, it is crucial to forecast air quality with accuracy. In this paper, the machine learning methods Support Vector Regression (SVR) and Random Forest Regression (RFR) were used to build a hybrid forecast model to predict the Air Quality Index (AQI) in Haryana cities. The forecast models were built using air pollutant and meteorological parameters from 2019 to 2021, and testing and validation were conducted on the air quality data for the year 2022 for the cities of Jind and Panipat in the State of Haryana. Further, the performance of the hybrid forecast model was enhanced using a scaling technique, and performance was evaluated using various coefficient metrics and other parameters. First, the important factors affecting air quality are extracted and irregularities in the dataset are removed. Second, various approaches are used for forecasting AQI and are evaluated using performance metrics. The experimental results showed that the proposed hybrid model had better forecast results than standard Random Forest Regression, Support Vector Regression, and Multiple Linear Regression.
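A minimal sketch of such an SVR + RFR hybrid with feature scaling in scikit-learn; the equal-weight averaging and the hyperparameters are assumptions, as the abstract does not specify how the two models are combined.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

def fit_hybrid(X_train, y_train):
    """Fit the two base regressors; SVR gets scaled inputs, as RBF kernels
    are sensitive to feature ranges (the 'scaling technique' above)."""
    svr = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
    rfr = RandomForestRegressor(n_estimators=200, random_state=0)
    svr.fit(X_train, y_train)
    rfr.fit(X_train, y_train)
    return svr, rfr

def predict_hybrid(models, X):
    """Assumed combination rule: simple average of the two predictions."""
    svr, rfr = models
    return (svr.predict(X) + rfr.predict(X)) / 2.0

# Usage with placeholder data (the real inputs would be pollutant and
# meteorological features: 2019-2021 for training, 2022 for testing).
X = np.random.rand(100, 6)
y = np.random.rand(100) * 300  # AQI-like target range
models = fit_hybrid(X[:80], y[:80])
print(predict_hybrid(models, X[80:])[:5])
```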
{"title":"Forecasting Time Series AQI Using Machine learning of Haryana Cities Using Machine Learning","authors":"Reema Gupta, Priti Singla","doi":"10.47164/ijngc.v14i4.1267","DOIUrl":"https://doi.org/10.47164/ijngc.v14i4.1267","url":null,"abstract":"In India and throughout the world, air pollution is becoming a severe worry day by day. Governments and the general public have grown more concerned about how air pollution affects human health. Consequently, it is crucial to forecast the air quality with accuracy. In this paper, Machine learning methods SVR and RFR were used to build the hybrid forecast model to predict the concentrations of Air Quality Index in Haryana Cities. The forecast models were built using air pollutants and meteorological parameters from 2019 to 2021 and testing and validation was conducted on the air quality data for the year 2022 of Jind and Panipat city in the State of Haryana. Further, performance of hybrid forecast model was enhanced using scalar technique and performance was evaluated using various coefficient metrics and other parameters. First, the important factors affecting air quality are extracted and irregularities from the dataset are removed. Second, for forecasting AQI various approaches have been used and evaluation is carried out using performance metrics. The experimental results showed that the proposed hybrid model had a better forecast result than the standard Random forest Regression, Support Vector Regression and Multiple Linear Regression.","PeriodicalId":42021,"journal":{"name":"International Journal of Next-Generation Computing","volume":"49 1","pages":""},"PeriodicalIF":0.3,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139218970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}