In general, Wireless Sensor Networks (WSNs) require secure routing approaches to deliver data packets to their sinks or destinations. Most WSNs detect particular events on their specific platforms; however, several WSNs may monitor multiple events using numerous sensors in the same area. Multi-sink, multi-hop WSNs can improve network efficiency by securing effective data exchanges. In the multi-sink scenario, the group of nodes is described through a distance vector. However, the efficiency of multi-sink WSNs is considerably affected by the routing of data packets and by sink-node placement within the cluster. In addition, many WSNs deployed for diverse purposes may coexist in the same geographical region. Hence, in this work, a secured energy-efficient routing technique is designed for a large-scale wireless sensor network with multiple sink nodes. An improved meta-heuristic algorithm, termed Adaptive Squirrel Coyote Search Optimization (ASCSO), is implemented for accurately selecting the cluster head. A fitness function incorporating distance, security risk, residual energy, delay, trust, and Quality of Service (QoS) is used to rank candidate solutions. Energy consumption is reduced by measuring the mean distance between the cluster head and the multiple sink nodes. Two recent heuristic algorithms, the Coyote Optimization Algorithm (COA) and the Squirrel Search Algorithm (SSA), are integrated to form the proposed hybrid heuristic technique. Finally, the offered work is validated and evaluated by comparing it with several optimization algorithms on different evaluation metrics between the sensor and sink node.
"Adaptive squirrel coyote optimization-based secured energy efficient routing technique for large scale WSN with multiple sink nodes," by Chada Sampath Reddy and G. Narsimha. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-220045.
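The cluster-head selection described in the abstract above can be illustrated with a minimal sketch: score each candidate node by a weighted fitness over its residual energy, trust, and mean distance to the sinks, then pick the best-scoring node. The weights, node fields, and function names here are illustrative assumptions, not the paper's actual ASCSO formulation.

```python
import math

def fitness(node, sinks, w_energy=0.5, w_dist=0.3, w_trust=0.2):
    """Illustrative fitness for a cluster-head candidate: higher residual
    energy and trust, and a lower mean distance to the sinks, score better.
    The weights are hypothetical, not those of the paper."""
    mean_dist = sum(math.dist(node["pos"], s) for s in sinks) / len(sinks)
    return (w_energy * node["energy"] + w_trust * node["trust"]
            - w_dist * mean_dist)

def select_cluster_head(nodes, sinks):
    # The candidate with the best fitness becomes cluster head.
    return max(nodes, key=lambda n: fitness(n, sinks))

nodes = [
    {"id": 0, "pos": (1.0, 2.0), "energy": 0.9, "trust": 0.8},
    {"id": 1, "pos": (5.0, 5.0), "energy": 0.4, "trust": 0.9},
    {"id": 2, "pos": (2.0, 1.0), "energy": 0.7, "trust": 0.6},
]
sinks = [(0.0, 0.0), (10.0, 10.0)]
head = select_cluster_head(nodes, sinks)
```

In a real multi-sink deployment the fitness would also fold in the delay, security-risk, and QoS terms the abstract lists, with weights tuned by the hybrid COA/SSA search rather than fixed by hand.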
In recent years many people have been affected by lung cancer, and the severe stage of this disease is often fatal. Lung cancer is the second most common cancer type worldwide. Pulmonary nodules present in the lung can be used to identify cancer metastases because these nodules are visible in the lungs. Cancer diagnosis and region segmentation are the most important procedures, because successfully predicting the affected area makes it possible to accurately distinguish cancerous from normal cells. When analyzing lung nodules in an image, radiologists can miss several useful low-density and small nodules; this makes the diagnostic process very difficult, and radiologists need more time to decide on the prediction of affected lung nodules. Due to the time required for a radiologist's physical inspection and the possibility of missed nodules, automatic identification is needed to address these issues. To achieve this, a new hybrid deep learning model is developed for lung cancer detection with the help of CT images. First, input CT images are gathered from standard data sources. Once collected, the images undergo a pre-processing stage consisting of weighted mean histogram equalization and mean filtering. Subsequently, a novel hybrid segmentation model is developed, in which adaptive fuzzy clustering is incorporated with optimized region growing; here, the parameters are optimized by Improved Harris Hawks Optimization (IHHO). Finally, classification is accomplished by an Ensemble-based Deep Learning Model (EDLM) constructed from VGG-16, Residual Network (ResNet) and Gated Recurrent Unit (GRU), in which the hyperparameters are tuned optimally by the improved HHO algorithm. The experimental outcomes and performance analysis show that the suggested detection model aids early recognition of lung cancer.
"Integration of adaptive segmentation with heuristic-aided novel ensemble-based deep learning model for lung cancer detection using CT images," by Potti Nagaraja and Sumanth Kumar Chennupati. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-230071.
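The ensemble step described above combines three base networks. One common way such an EDLM can fuse its branches is soft voting: average each model's class-probability vector and take the arg-max. The sketch below uses toy probability vectors standing in for the VGG-16, ResNet, and GRU outputs; the fusion rule is an assumption for illustration, not the paper's confirmed scheme.

```python
def ensemble_predict(prob_lists):
    """Average class-probability vectors from several base models
    (soft voting) and return the winning class index plus the average."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    avg = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    return max(range(n_classes), key=lambda c: avg[c]), avg

# Toy outputs standing in for the VGG-16, ResNet and GRU branches
# (classes: 0 = benign, 1 = malignant).
vgg, resnet, gru = [0.3, 0.7], [0.4, 0.6], [0.55, 0.45]
label, avg = ensemble_predict([vgg, resnet, gru])
```

Soft voting lets a confident branch (here VGG-16's 0.7 for class 1) outweigh a weakly disagreeing one, which is why probability averaging is usually preferred over hard majority voting for ensembles of calibrated networks.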
Aikaterini Karanikola, Gregory Davrazos, Charalampos M. Liapis, Sotiris Kotsiantis
Sentiment Analysis, also known as Opinion Mining, gained prominence in the early 2000s alongside the emergence of internet forums, blogs, and social media platforms. Researchers and businesses recognized the imperative to automate the extraction of valuable insights from the vast pool of textual data generated online. Its utility in the business domain is undeniable, offering actionable insights into customer opinions and attitudes, empowering data-driven decisions that enhance products, services, and customer satisfaction. The expansion of Sentiment Analysis into the financial sector came as a direct consequence, prompting the adaptation of powerful Natural Language Processing models to these contexts. In this study, we rigorously test numerous classical Machine Learning classification algorithms and ensembles against five contemporary pre-trained Deep Learning models: BERT, RoBERTa, and three variants of FinBERT. The study's aim, however, extends beyond evaluating the performance of modern methods, especially those designed for financial tasks, to comparing them with classical ones. We also explore how different text representation and data augmentation techniques impact classification outcomes when classical methods are employed. The study yields a wealth of intriguing results, which are thoroughly discussed.
"Financial sentiment analysis: Classic methods vs. deep learning models," by Aikaterini Karanikola, Gregory Davrazos, Charalampos M. Liapis and Sotiris Kotsiantis. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-230478.
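To make concrete what a "classical" sentiment classifier of the kind benchmarked above looks like, here is a minimal multinomial Naive Bayes trained on a toy financial corpus. The tiny corpus, label names, and smoothing choice are illustrative assumptions; the study itself evaluates a much broader set of algorithms and representations.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train a multinomial Naive Bayes model on (text, label) pairs."""
    word_counts = {"pos": Counter(), "neg": Counter()}
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def classify(text, word_counts, label_counts, vocab):
    total_docs = sum(label_counts.values())
    scores = {}
    for label, counter in word_counts.items():
        # log prior + log likelihood with Laplace (add-one) smoothing
        score = math.log(label_counts[label] / total_docs)
        denom = sum(counter.values()) + len(vocab)
        for w in text.lower().split():
            score += math.log((counter[w] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

docs = [
    ("strong earnings beat expectations", "pos"),
    ("record profit and growth", "pos"),
    ("shares fall on weak guidance", "neg"),
    ("losses widen as revenue drops", "neg"),
]
model = train_nb(docs)
pred = classify("profit growth beat", *model)
```

Bag-of-words models like this remain a strong baseline on short financial texts, which is precisely why comparing them against transformer variants such as FinBERT is informative.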
The virtualization of hardware resources such as network, memory and storage lies at the core of cloud computing and is provided with the help of Virtual Machines (VMs). Reliability and security issues hinder their acceptance in the cloud environment during the migration of VMs. VM migration greatly enhances the manageability, performance, and fault tolerance of cloud systems. Here, a set of tasks submitted by various users is arranged on the virtual cloud computing platform using a set of VMs. Energy efficiency is effectively attained with the help of a load-balancing strategy, and it is a critical issue in the cloud environment. During the migration of VMs, providing high security is a very important task. To resolve these challenges, an effective method is proposed using an optimal key-based encryption process. The main objective of this research work is to perform VM migration and derive the multi-objective constraints with the help of hybrid heuristic improvement. The optimal VM migration is achieved by a hybrid algorithm, Improved Binary Battle Royale with Moth-flame Optimization (IBinBRMO). It is also used to derive the multi-objective functions from constraints such as resource utilization, active servers, makespan, and energy consumption. After VM migration, data transmission should take place securely between the source and destination. To secure the data, the Hybrid Homomorphic and Advanced Encryption Standard (HH-AES) algorithm is applied, where IBinBRMO optimizes the key. After optimizing the keys, the data are securely transmitted, with the multi-objective functions evaluated using parameters including the degree of modification, hiding failure rate and information preservation rate. The effectiveness is analyzed against other classical models, and the results illustrate that the proposed work attains better performance.
"An effective process of VM migration with hybrid heuristic-assisted encryption technique for secured data transmission in cloud environment," by H. Niroshini Infantia, C. Anbuananth and S. Kalarani. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-230264.
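The "homomorphic" half of the HH-AES scheme mentioned above refers to encryption under which computation on ciphertexts maps to computation on plaintexts. The toy scheme below only demonstrates that additive property with modular arithmetic; it is emphatically not the paper's HH-AES construction, has no cryptographic strength, and the modulus and key values are arbitrary assumptions.

```python
# Toy additively homomorphic scheme over Z_n: E(m) = (m + k) mod n.
# Summing c ciphertexts shifts the plaintext sum by c*k, so the holder
# of k can decrypt the aggregate without decrypting each item. This
# illustrates the homomorphic property only; it is NOT HH-AES.
N = 10_000_019   # modulus, chosen larger than any expected sum
KEY = 123_456    # secret key (the paper instead optimizes keys via IBinBRMO)

def encrypt(m):
    return (m + KEY) % N

def decrypt_sum(ciphertexts):
    total = sum(ciphertexts) % N
    return (total - KEY * len(ciphertexts)) % N

readings = [17, 42, 99]
cipher = [encrypt(m) for m in readings]
recovered = decrypt_sum(cipher)  # equals sum(readings)
```

In a real hybrid design the homomorphic layer would allow aggregation on untrusted hosts while AES protects the bulk payload, which is the general motivation for pairing the two.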
N.S. Ninu Preetha, G. Brammya, Mahbub Arab Majumder, M.K. Nagarajan, M. Therasa
Recently, Aspect-Based Sentiment Analysis (ABSA) has become a demanding research topic that tries to discover the sentiment of particular aspects of a text. The key issue for such models is to discover the significant contexts for diverse aspects in an accurate manner. The sentiment of some contexts varies with the aspect considered, which stands as another challenge that holds back high performance. The major intent of this paper is to present an analysis of ABSA using Twitter data. The review concentrates on a detailed analysis of diverse models performing ABSA. Here, the main challenges and drawbacks of ABSA baseline approaches are analyzed from the past 10 years of references. Moreover, this review also analyzes the different tools and data utilized by each contribution. Additionally, the machine learning techniques employed are categorized. This survey also points out the performance metrics and best reported values to validate the effectiveness of the contributions. Finally, it highlights the challenges and research gaps to be addressed in modeling and learning effective, competent, and robust deep-learning algorithms for ABSA, and points out new directions for future research.
"A systematic review and research contributions on aspect-based sentiment analysis using twitter data," by N.S. Ninu Preetha, G. Brammya, Mahbub Arab Majumder, M.K. Nagarajan and M. Therasa. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-220063.
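A concrete picture of the ABSA task surveyed above: unlike document-level sentiment, each aspect mention gets its own polarity. The classic lexicon-window baseline below scores only the words near an aspect term; the lexicon, window size, and function names are illustrative assumptions standing in for the far richer models the review covers.

```python
# A tiny hand-made polarity lexicon (illustrative, not from the survey).
LEXICON = {"great": 1, "fast": 1, "poor": -1, "slow": -1, "terrible": -1}

def aspect_sentiment(text, aspect, window=2):
    """Score an aspect by summing lexicon polarities of the words within
    `window` tokens of each mention — a classic ABSA baseline."""
    tokens = text.lower().split()
    score = 0
    for i, tok in enumerate(tokens):
        if tok == aspect:
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            score += sum(LEXICON.get(t, 0) for t in tokens[lo:hi])
    if score > 0:
        return "positive"
    return "negative" if score < 0 else "neutral"

review = "the battery is great but the screen is terrible"
```

The example shows the core difficulty the survey discusses: the same sentence carries opposite polarities for "battery" and "screen", so sentence-level classifiers necessarily get one of them wrong.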
"International Distinctions of the Editors-in-Chief of the Intelligent Decision Technologies Journal." Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-239004.
Big data refers to data whose volume surpasses a system's ability to process it within acceptable memory usage and computation time. It is commonly encountered in several domains such as healthcare, education, social networks, and e-commerce, which have progressively accumulated massive quantities of input data. Big data analytics is a major research problem that can be carried out using expert systems and deep structured architectures. Besides, data wrangling and class-imbalance handling are challenging issues that need to be resolved in big data analytics. Class-imbalanced data degrade the performance of the classification model, which remains a challenging process due to the heterogeneous and complex structure of comparatively huge datasets. Thus, this research presents a Class Imbalance Handling with Optimal Deep Learning Enabled Big Data Classification (CIHODL-BDC) framework. The core aim of the CIHODL-BDC framework is to classify big data in the Hadoop MapReduce framework. To accomplish this, the presented CIHODL-BDC model initially performs a data wrangling process to transform the raw data into a useful layout. Next, the CIHODL-BDC model handles the class imbalance problem using a Grey Wolf Optimizer (GWO) with the Synthetic Minority Oversampling Technique (SMOTE). Besides, the Adam optimizer with a Bidirectional Long Short-Term Memory (BiLSTM) approach is applied to categorize the big data. The proposed CIHODL-BDC model is evaluated on two standard datasets. The simulation outcomes reveal the elevated performance of the CIHODL-BDC approach over existing methods.
"Modeling of class imbalance handling with optimal deep learning enabled big data classification model," by Varshavardhini S and Rajesh A. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-230198.
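The SMOTE step named in the abstract above has a simple core: synthesize new minority-class points by interpolating between a minority sample and one of its nearest minority neighbours. The sketch below shows that core idea in plain Python; the sample data, seed, and parameter choices are illustrative, and the paper additionally tunes the process with GWO.

```python
import random

def smote(minority, n_new, k=2, seed=42):
    """Generate synthetic minority samples by interpolating between a
    point and one of its k nearest neighbours (the core SMOTE idea)."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest neighbours of x by squared Euclidean distance
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

minority = [(1.0, 1.0), (1.2, 0.9), (0.8, 1.1), (1.1, 1.3)]
new_points = smote(minority, n_new=4)
```

Because each synthetic point lies on a segment between two real minority samples, oversampling enlarges the minority region without duplicating exact rows, which is what mitigates the classifier bias the abstract describes.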
"Professor Junzo Watada resigning as an Editor-in-Chief of the Intelligent Decision Technologies Journal due to retirement." Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-239003.
Nishant Nilkanth Pachpor, B. Suresh Kumar, Prakash S. Prasad
Nowadays, various research works explore rainfall prediction for different areas. This emerging research supports effective decision-making in agriculture, broadly related to irrigation and cultivation. Here, atmospheric and climatic factors such as wind speed, temperature, and humidity vary from one place to another. This makes the system more complex and yields a higher error rate when computing accurate rainfall predictions. In this paper, the major intention is to design an advanced Artificial Intelligence (AI) model for rainfall prediction across different areas. Rainfall data from diverse areas are collected initially, and data cleaning is performed. Further, data normalization is done to ensure properly organized and related data in each record. Once these pre-processing phases are completed, rainfall recognition is the main step, in which an Adaptive Membership Enhanced Fuzzy Classifier (AME-FC) is adopted to classify the data into low, medium, and high rainfall. Then, for each degree of low, medium, and high rainfall, the prediction process is performed individually by training the developed Tri-Long Short-Term Memory (TRI-LSTM) network, whose trained output is the predicted rainfall in cm for each of the low, medium, and high classes. A meta-heuristic technique, Hybrid Moth-Flame Colliding Bodies Optimization (HMFCBO), enhances the recognition and prediction phases. Experimental outcomes on different rainfall prediction databases show that the developed model outperforms the conventional models and would thus help predict rainfall more accurately.
"Adaptive membership enhanced fuzzy classifier with modified LSTM for automated rainfall prediction model," by Nishant Nilkanth Pachpor, B. Suresh Kumar and Prakash S. Prasad. Intelligent Decision Technologies, published 2023-11-20. DOI: 10.3233/idt-220157.
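The fuzzy classification into low/medium/high rainfall described above can be sketched with triangular membership functions: each class peaks at a representative rainfall amount, and a sample is assigned to the class where its membership is highest. The breakpoints below are illustrative guesses, not the AME-FC's learned parameters.

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b on support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def rainfall_class(mm):
    """Fuzzify a rainfall amount (mm) into low/medium/high and return the
    class with the highest membership. Breakpoints are illustrative only."""
    memberships = {
        "low": tri(mm, -1, 0, 25),
        "medium": tri(mm, 15, 40, 65),
        "high": tri(mm, 55, 100, 145),
    }
    return max(memberships, key=memberships.get), memberships
```

Because adjacent triangles overlap, a reading near a boundary has partial membership in two classes; the adaptive part of AME-FC presumably tunes these supports per region, which is what a fixed crisp threshold cannot do.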
3D urban landscape visualization is a key technology in digital city construction. Research and analysis of urban landscape space show that three-dimensional representation not only allows users to intuitively perceive the development of the city, but also enables decision makers, planners, and users to recognize and understand current urban development and planning more intuitively, objectively, and rationally. Defining the data content of the 3D city landscape image model is the basis for creating that model: it guides producers in selecting data and serves as the basis for sharing data between different applications. With the continuous development of society, the number of people migrating from rural areas to cities to make a living has increased rapidly, leading to a growing problem of "urban congestion" in many areas. To solve these problems effectively, "smart cities" came into being and quickly triggered a boom in global urban development. Based on a survey of the state of the art in 3D modeling and engineering design visualization, this paper analyzes 3D rendering acceleration algorithms used to speed up rendering and improve the quality of 3D design. By utilizing BSP (binary space partitioning) technology, transparent objects can be drawn in the correct order in any scene, which solves the problem of transparent objects being occluded incorrectly during rendering. This paper also applies collision detection technology, which enhances the user's immersion when roaming the landscape. In the 3D reconstruction process, the method can recognize columns and walls in test images with complex composition.
Its recognition rate for various urban features reaches more than 80%.
"3D urban landscape rendering and optimization algorithm for smart city," by Li Wang. Intelligent Decision Technologies, published 2023-10-22. DOI: 10.3233/idt-230418.
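The transparency-ordering problem mentioned above exists because alpha blending must composite transparent surfaces back-to-front. A BSP tree yields an exact per-polygon traversal order; the sketch below shows the simpler per-object depth sort that renderers commonly use as an approximation. The object list and field names are illustrative assumptions.

```python
def draw_order(objects, camera):
    """Sort transparent objects back-to-front by squared distance from the
    camera — the order alpha blending requires. (A BSP tree gives an exact
    per-polygon ordering; this depth sort is the common approximation.)"""
    def sq_dist(obj):
        return sum((c - o) ** 2 for c, o in zip(camera, obj["center"]))
    return sorted(objects, key=sq_dist, reverse=True)

objects = [
    {"name": "window", "center": (0.0, 0.0, 5.0)},
    {"name": "glass_roof", "center": (0.0, 0.0, 20.0)},
    {"name": "fountain_spray", "center": (0.0, 0.0, 12.0)},
]
order = [o["name"] for o in draw_order(objects, camera=(0.0, 0.0, 0.0))]
```

A per-object sort can still fail when transparent objects interpenetrate or cycle in depth, which is exactly the case where splitting geometry with a BSP tree restores a correct order.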