Abstract This paper presents the design of a linear-phase digital Finite Impulse Response (FIR) High Pass (HP) filter based on the Adaptive Systematic Cuckoo Search Algorithm (ACSA). The deviation, or error, from the desired response is assessed along with the stop-band and pass-band attenuation of the filter. The Cuckoo Search (CS) algorithm is used to avoid local minima because the error surface is typically non-differentiable, nonlinear, and multimodal. The ACSA is applied to a minimax-criterion (L∞-norm) based error fitness function, which offers a better equiripple response in the passband and stopband, high stopband attenuation, and rapid convergence for the developed optimal HP FIR filter algorithm. The simulation findings demonstrate that, compared to the Parks-McClellan (PM), Particle Swarm Optimization (PSO), Crazy Particle Swarm Optimization (CRPSO), and Cuckoo Search algorithms, the proposed HP FIR filter employing ACSA leads to better solutions.
"Optimal High Pass FIR Filter Based on Adaptive Systematic Cuckoo Search Algorithm". Puneet Bansal, S. S. Gill. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0046.
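The minimax (L∞-norm) error fitness driving the optimization can be sketched briefly. The cutoff frequency, grid density, and ideal high-pass response below are illustrative assumptions, not the paper's actual design specification:

```python
import math

def freq_response(h, w):
    # magnitude response of an FIR filter with coefficients h at frequency w (rad/sample)
    re = sum(hk * math.cos(w * k) for k, hk in enumerate(h))
    im = -sum(hk * math.sin(w * k) for k, hk in enumerate(h))
    return math.hypot(re, im)

def linf_fitness(h, wc=0.6 * math.pi, grid=128):
    # minimax (L-infinity) error: worst-case deviation from the ideal
    # high-pass response (0 below cutoff wc, 1 above) over a frequency grid
    err = 0.0
    for i in range(grid + 1):
        w = math.pi * i / grid
        desired = 1.0 if w >= wc else 0.0
        err = max(err, abs(freq_response(h, w) - desired))
    return err
```

An optimizer such as ACSA would then search the coefficient space for the `h` minimizing `linf_fitness(h)`, which is what yields the equiripple behavior the abstract mentions.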
Abstract The low quality of fish image data collected directly from the fish's habitat degrades the quality of the extracted features. Previous studies tended to be more concerned with finding the best classification method than with feature quality. This article proposes a new fish classification workflow combining Contrast-Adaptive Color Correction (NCACC) image enhancement with optimization-based feature construction using the Grey Wolf Optimizer (GWO). This approach improves the image feature extraction results, yielding new and more meaningful features. The article compares GWO-based fish classification against other optimization methods on the newly generated features. The comparison shows that GWO-based classification had 0.22% lower accuracy than the GA-based approach but 1.13% higher than the PSO-based one. Based on ANOVA tests, the accuracies of GA and GWO were not statistically different, while those of GWO and PSO were. On the other hand, GWO-based classification ran 0.61 times faster than GA-based classification and 1.36 minutes faster than the PSO-based one.
"A Robust Feature Construction for Fish Classification Using Grey Wolf Optimizer". P. Santosa, R. A. Pramunendar. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0045.
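The Grey Wolf Optimizer at the core of the feature-construction step can be sketched in its textbook form: candidate solutions (wolves) move toward the three best solutions (alpha, beta, delta) while the control parameter `a` decays from 2 to 0. This is a generic minimizer demonstrated on a toy objective, with one simplification (the three leaders are kept fixed within each iteration), not the paper's feature-construction code:

```python
import random

def gwo_minimize(f, dim=2, n_wolves=12, iters=60, lo=-5.0, hi=5.0, seed=1):
    # Grey Wolf Optimizer sketch: wolves encircle the three best solutions
    rng = random.Random(seed)
    wolves = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    for t in range(iters):
        wolves.sort(key=f)
        alpha, beta, delta = wolves[0], wolves[1], wolves[2]
        a = 2.0 - 2.0 * t / iters          # a decays linearly from 2 to 0
        for i in range(3, n_wolves):
            new = []
            for d in range(dim):
                x = 0.0
                for leader in (alpha, beta, delta):
                    r1, r2 = rng.random(), rng.random()
                    A, C = 2 * a * r1 - a, 2 * r2
                    x += leader[d] - A * abs(C * leader[d] - wolves[i][d])
                new.append(min(hi, max(lo, x / 3.0)))  # average of the three pulls, clamped
            wolves[i] = new
    return min(wolves, key=f)
```

For feature construction, `f` would score a candidate feature-weight vector by the resulting classification accuracy rather than the toy function used here.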
Abstract The aim of this study is to forecast the EUR/USD exchange rate accurately. To this end, high-performance machine learning models based on CART Ensembles and Bagging have been developed. Key macroeconomic indicators have also been examined, including inflation in Europe and the United States, the unemployment index in Europe and the United States, and more. Official monthly data from December 1998 to December 2021 have been studied. A careful analysis of the macroeconomic time series has shown that their lagged variables are suitable as model predictors. CART Ensembles and Bagging predictive models have been built, explaining up to 98.8% of the data with a MAPE of 1%. The degree of influence of the considered macroeconomic indicators on the EUR/USD rate has been established. The models have been used for one-month-ahead forecasting. The proposed approach could find practical application in professional trading, budgeting, and currency risk hedging.
"Modelling and Forecasting of EUR/USD Exchange Rate Using Ensemble Learning Approach". I. Boyoukliev, H. Kulina, S. Gocheva-Ilieva. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0044.
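The bagging idea used for the forecasts, fitting each base model on a bootstrap resample of the training pairs and averaging their predictions, can be sketched as follows. The lag-1 linear base learner below stands in for the paper's CART trees and is purely illustrative:

```python
import random

def fit_ar1(pairs):
    # ordinary least squares for y = a + b*x on (lagged value, next value) pairs
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    b = sxy / sxx if sxx else 0.0
    return my - b * mx, b

def bagged_forecast(series, n_models=25, seed=7):
    # bagging: fit each base model on a bootstrap resample, average the forecasts
    rng = random.Random(seed)
    pairs = list(zip(series[:-1], series[1:]))   # lag-1 predictor -> next value
    last = series[-1]
    preds = []
    for _ in range(n_models):
        sample = [rng.choice(pairs) for _ in pairs]  # bootstrap resample
        a, b = fit_ar1(sample)
        preds.append(a + b * last)
    return sum(preds) / len(preds)
```

Averaging over resamples reduces the variance of the individual base learners, which is the usual motivation for bagging in forecasting.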
Abstract In this work, a model is introduced to improve forgery detection based on a superpixel clustering algorithm and an enhanced Grey Wolf Optimizer (GWO) based AlexNet. After collecting images from the MICC-F600, MICC-F2000 and GRIP datasets, patch segmentation is accomplished using a superpixel clustering algorithm. Feature extraction is then performed on the segmented images to extract deep learning features using the enhanced GWO-based AlexNet model for better forgery detection. In the enhanced GWO technique, multi-objective functions are used to select the optimal hyper-parameters of AlexNet. Based on the obtained features, an adaptive matching algorithm locates the forged regions in the tampered images. Simulation results show that the proposed model is effective under salt-and-pepper noise, Gaussian noise, rotation, blurring, and enhancement. The enhanced GWO-based AlexNet model attained maximum detection accuracies of 99.66%, 99.75%, and 98.48% on the MICC-F600, MICC-F2000 and GRIP datasets, respectively.
"Copy-Move Forgery Detection Using Superpixel Clustering Algorithm and Enhanced GWO Based AlexNet Model". Sreenivasu Tinnathi, G. Sudhavani. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0041.
Abstract Effective load balancing is more difficult in grid computing than in conventional distributed computing platforms due to the grid's heterogeneity, autonomy, scalability, and adaptability characteristics, its resource selection and distribution mechanisms, and data separation. Hence, it is necessary to identify and handle the uncertainty of the tasks and grid resources before making load balancing decisions. Using two potential forms of Hidden Markov Models (HMM), namely the Profile Hidden Markov Model (PF_HMM) and the Pair Hidden Markov Model (PR_HMM), the uncertainties in the task and system parameters are identified. Load balancing is then carried out using our novel Fuzzy Neutrosophic Soft Set (FNSS) theory based transfer Q-learning with pre-trained knowledge. The FNSS-enabled transfer Q-learning solves large-scale load balancing problems efficiently, as the models are already trained and do not need pre-training. Our expected value analysis and simulation results confirm that the proposed scheme performs 90 percent better than three recent load balancing schemes.
"Fuzzy Neutrosophic Soft Set Based Transfer-Q-Learning Scheme for Load Balancing in Uncertain Grid Computing Environments". K. Bhargavi, S. Shiva. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0038.
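The transfer aspect of Q-learning, starting from a pre-trained value table instead of zeros, can be sketched on a toy single-state load-balancing task where each action is "send the task to node i". The reward values, noise model, and parameters are assumptions for illustration, and the FNSS uncertainty machinery is omitted:

```python
import random

def q_learning(rewards, episodes=400, q0=None, alpha=0.2, eps=0.1, seed=3):
    # single-state Q-learning over discrete actions (which node gets the task);
    # q0 carries transferred knowledge from a previously trained model
    rng = random.Random(seed)
    q = list(q0) if q0 else [0.0] * len(rewards)
    for _ in range(episodes):
        # epsilon-greedy action selection
        if rng.random() < eps:
            a = rng.randrange(len(q))
        else:
            a = max(range(len(q)), key=q.__getitem__)
        r = rewards[a] + rng.gauss(0, 0.01)      # noisy observed reward
        q[a] += alpha * (r - q[a])               # temporal-difference update
    return q
```

Passing a `q0` table learned on a similar source task lets the learner start from transferred knowledge, which is the "pre-trained, no pre-training needed" property the abstract emphasizes.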
Abstract Securing networks against attacks is a critical task and a competitive research area. One of the most popular security solutions is the Intrusion Detection System (IDS). Machine learning has recently been used by researchers to develop high-performance IDS. One of the main challenges in developing an intelligent IDS is Feature Selection (FS). In this manuscript, a hybrid FS method for network IDS is proposed based on an ensemble filter and an improved Intelligent Water Drop (IWD) wrapper. The improved version of the IWD algorithm uses a local search algorithm as an extra operator to increase the exploitation capability of the basic IWD algorithm. Experimental results on three benchmark datasets, UNSW-NB15, NSL-KDD, and KDDCUP99, demonstrate the effectiveness of the proposed model versus some of the most recent IDS algorithms in the literature in terms of F-score, accuracy, FPR, TPR, and the number of selected features.
"Hybrid Feature Selection Method for Intrusion Detection Systems Based on an Improved Intelligent Water Drop Algorithm". Esraa Alhenawi, Hadeel Alazzam, R. Al-Sayyed, Orieb Abualghanam, Omar Y. Adwan. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0040.
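The local-search operator added to the IWD wrapper can be illustrated as bit-flip hill climbing over a binary feature mask. The evaluation function stands in for a real wrapper's classifier score and is a toy assumption:

```python
import random

def local_search(mask, evaluate, iters=50, seed=5):
    # bit-flip local search: the extra exploitation operator layered on the wrapper;
    # keeps a candidate only if it improves the wrapper's evaluation score
    rng = random.Random(seed)
    best, best_score = list(mask), evaluate(mask)
    for _ in range(iters):
        cand = list(best)
        cand[rng.randrange(len(cand))] ^= 1      # flip one feature in/out
        score = evaluate(cand)
        if score > best_score:                   # accept improving moves only
            best, best_score = cand, score
    return best, best_score
```

In the hybrid method, this refinement step would run on the feature subsets produced by the IWD search, sharpening them before the final evaluation.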
Abstract Personal Medical Records (PMR) manage an individual's medical information in digital form, allowing patients to view their medical information and doctors to diagnose diseases. Today's institution-dependent centralized storage fails to provide trustworthy, secure, reliable, and traceable patient control, a serious disadvantage in diagnosing and preventing diseases. The proposed blockchain technique forms a secure network between doctors of the same specialization for gathering opinions on a particular diagnosis by sharing the PMR, with consent, to provide better care to patients. Members can approve the diagnosis to finalize the disease prediction. Smart contract access control allows doctors to view and access the PMR. The scalability issue is resolved by Huffman-code data compression, and the security of the PMR is achieved by the Advanced Encryption Standard (AES). The proposed technique's storage requirements, latency, compression ratio, and security analysis have been compared with existing techniques.
"A Decentralized Medical Network for Maintaining Patient Records Using Blockchain Technology". M. Sumathi, S. Raja, N. Vijayaraj, M. Rajkamal. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0043.
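The Huffman-coding step used to address scalability can be sketched with the standard heap-based construction, where frequent symbols receive shorter bit strings. The AES encryption layer would then operate on the compressed bytes and is omitted here:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    # build a Huffman code table: frequent symbols get shorter bit strings
    # (a single-symbol input would need a fallback code; not handled in this sketch)
    heap = [(n, i, {c: ""}) for i, (c, n) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)          # two lowest-frequency subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {c: "0" + code for c, code in c1.items()}
        merged.update({c: "1" + code for c, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, i, merged))
        i += 1                                   # tie-breaker so dicts never compare
    return heap[0][2]
```

For example, `huffman_codes("AAABBC")` encodes the six input characters in 9 bits rather than 48, which is the kind of size reduction that eases on-chain storage.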
Abstract Modern networking systems can benefit from Cognitive Radio (CR) because it mitigates spectrum scarcity. CR is prone to jamming attacks due to its shared communication medium, which results in a drop in spectrum usage. Existing solutions to jamming attacks are frequently based on Q-learning and deep Q-learning networks. Such solutions have a reputation for slow convergence and learning, particularly when states and action spaces are continuous. This paper introduces a reinforcement learning driven anti-jamming scheme that uses an adversarial learning mechanism to counter hostile jammers. A mathematical model based on deep deterministic policy gradients is employed to formulate jamming and anti-jamming strategies that improve their policies against each other. A customized OpenAI Gym environment is used to evaluate the proposed solution with respect to power factor and signal-to-noise ratio. The simulation results show that the proposed anti-jamming solution allows the transmitter to learn more about the jammer and devise better countermeasures than conventional algorithms.
"A Model-Free Cognitive Anti-Jamming Strategy Using Adversarial Learning Algorithm". Y. Sudha, V. Sarasvathi. Cybernetics and Information Technologies, 2022-11-01. doi:10.2478/cait-2022-0039.
Abstract The continuous progress of computing technologies increases the need for improved methods and tools for assessing the performance of information systems in terms of reliability, conformance, and quality of service. This paper presents an extension of Information Theory by introducing a novel hierarchy concept as a complement to the traditional entropy approach. The adjusted methodology is applied to a simulated numerical example for assessing the reliability of systems with different complexity and performance behavior.
"Information Systems Reliability in Traditional Entropy and Novel Hierarchy". Iliyan I. Petrov. Cybernetics and Information Technologies, 2022-09-01. doi:10.2478/cait-2022-0024.
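The traditional entropy measure that the proposed hierarchy concept complements is the Shannon entropy over a system's state probabilities; a minimal sketch:

```python
import math

def shannon_entropy(probs):
    # H = -sum(p * log2(p)), in bits: the classical uncertainty measure;
    # zero-probability states contribute nothing and are skipped
    return -sum(p * math.log2(p) for p in probs if p > 0)
```

A fully deterministic system (one state with probability 1) has zero entropy, while a system whose failure states are equally likely attains the maximum entropy for its number of states, which is why entropy serves as a baseline indicator of behavioral complexity.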
Abstract Task scheduling is an important activity in parallel and distributed computing environments such as grids, because overall performance depends on it. Task scheduling is affected by behavioral and primary uncertainties. Behavioral uncertainty arises from variability in workload characteristics, data size, and the dynamic partitioning of applications. Primary uncertainty arises from variability in data handling capabilities, processor context switching, and the interplay between computation-intensive applications. In this paper, behavioral and primary uncertainty in task and resource parameters are managed using Type-2-Soft-Set (T2SS) theory. A Dyna-Q-learning task scheduling technique is then designed over the uncertainty-free task and resource parameters. The results are further validated through simulation using the GridSim simulator. The technique performs well on metrics such as learning rate, accuracy, execution time, and resource utilization rate.
"Uncertainty Aware T2SS Based Dyna-Q-Learning Framework for Task Scheduling in Grid Computing". K. Bhargavi, S. Shiva. Cybernetics and Information Technologies, 2022-09-01. doi:10.2478/cait-2022-0027.
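Dyna-Q, the core of the scheduling technique, combines learning from real transitions with planning updates replayed from a learned model of the environment. The sketch below uses a toy one-state scheduling environment with an assumed reward structure, not the paper's T2SS preprocessing or GridSim setup:

```python
import random

def dyna_q(step, n_states, n_actions, episodes=200, planning=10,
           alpha=0.1, gamma=0.95, eps=0.1, seed=2):
    # Dyna-Q: learn from real transitions, then replay the learned model
    # `planning` times per step to speed up value propagation
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    model = {}                                   # (state, action) -> (next_state, reward)
    s = 0
    for _ in range(episodes):
        if rng.random() < eps:                   # epsilon-greedy action choice
            a = rng.randrange(n_actions)
        else:
            a = max(range(n_actions), key=Q[s].__getitem__)
        s2, r = step(s, a)                       # real experience
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        model[(s, a)] = (s2, r)
        for _ in range(planning):                # simulated experience from the model
            (ps, pa), (ps2, pr) = rng.choice(list(model.items()))
            Q[ps][pa] += alpha * (pr + gamma * max(Q[ps2]) - Q[ps][pa])
        s = s2
    return Q
```

In a scheduler, `step` would dispatch a task to the chosen resource and return the observed next state and a reward such as negative completion time; the planning loop is what gives Dyna-Q faster convergence than plain Q-learning.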