With recent advances in technology, there has been tremendous growth in the use of satellite imagery in applications such as defense, academics, resource exploration, and land-use mapping. Certain mission-critical applications need images of higher visual quality, but the images captured by sensors normally suffer from a tradeoff between high spectral and high spatial resolution. Hence, obtaining images of high visual quality requires combining the low-resolution multispectral (MS) image with the high-resolution panchromatic (PAN) image, which is accomplished by pansharpening. In this paper, an efficient pansharpening technique is devised using a hybrid optimized deep learning network. The Zeiler and Fergus network (ZF Net) is used to fuse the sharpened and upsampled MS image with the PAN image. A novel Dingo coot (DICO) optimization is created for updating the learning parameters and weights of the ZF Net. The devised DICO_ZF Net for pansharpening is evaluated using measures such as peak signal-to-noise ratio (PSNR) and degree of distortion (DD), attaining values of 50.177 dB and 0.063 dB, respectively.
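The paper's DICO_ZF Net is a learned fusion model and is not reproduced here; as a point of reference, the classical component-substitution family that deep pansharpening methods are usually compared against can be sketched in a few lines. The Brovey-style transform below is a generic baseline under that assumption, not the authors' method:

```python
import numpy as np

def brovey_pansharpen(ms_up, pan, eps=1e-8):
    """Component-substitution baseline: scale each upsampled MS band by the
    ratio of the PAN image to the MS intensity component.
    ms_up: (H, W, B) MS image already upsampled to the PAN grid; pan: (H, W)."""
    intensity = ms_up.mean(axis=2)       # crude intensity component of the MS image
    ratio = pan / (intensity + eps)      # spatial detail to inject per pixel
    return ms_up * ratio[:, :, None]

# Sanity check: when PAN already equals the MS intensity, no detail is injected
ms = np.random.default_rng(1).uniform(0.5, 1.0, (4, 4, 3))
sharp = brovey_pansharpen(ms, ms.mean(axis=2))
```

The ratio image carries the PAN detail that the low-resolution MS bands lack; a learned network such as ZF Net replaces this fixed ratio rule with trained convolutional filters.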
Preeti Singh, S. Singh, M. Paprzycki, "DICO: Dingo coot optimization-based ZF net for pansharpening," Int. J. Knowl. Based Intell. Eng. Syst., 2023-03-15, doi:10.3233/kes-221530.
This paper presents forecasting and trend analysis of foreign currency exchange rates in the financial market using a hybrid Deep Analytic Network (DAN) optimized by a modified water cycle algorithm, the Weighted WCA (WWCA), which has better generalization capability than the traditional WCA. DAN comprises several stacked kernel ridge regression (KRR) autoencoders in a multilayer nonlinear regression architecture that provides better generalization and accuracy through a regularized least-squares technique. Further, DAN with a wavelet kernel function is particularly attractive for its strong data-fitting and generalization ability, simple execution procedure, high speed, and better performance compared to the least-squares support vector machine (LSSVM). The output of the DAN is fed to a weighted KRR module to reject noise and outliers in the data, making DAN a more robust predictor of forex markets. To obtain optimal values of the wavelet kernel parameters, the proposed WWCA, a modified metaheuristic water cycle algorithm, is used. Applications of this approach to forex rate prediction and trend analysis on three stock markets yield successful results and validate its superiority over well-known approaches such as ANN, SVM, Naïve Bayes, and ELM.
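The autoencoder stacking and WWCA tuning are beyond a short sketch, but the regularized least-squares core of kernel ridge regression with a wavelet-style kernel can be shown directly. The Morlet-type product kernel below is one commonly used wavelet kernel; the exact kernel and hyperparameters in the paper may differ, so treat this as an illustrative sketch:

```python
import numpy as np

def wavelet_kernel(A, B, a=1.0):
    # Morlet-style wavelet kernel: per-dimension cosine-times-Gaussian, multiplied
    D = A[:, None, :] - B[None, :, :]            # pairwise differences, (n, m, d)
    return np.prod(np.cos(1.75 * D / a) * np.exp(-D**2 / (2 * a**2)), axis=2)

class KRR:
    """Kernel ridge regression: solve (K + lam*I) alpha = y in closed form."""
    def __init__(self, lam=1e-2, a=1.0):
        self.lam, self.a = lam, a
    def fit(self, X, y):
        self.X = X
        K = wavelet_kernel(X, X, self.a)
        self.alpha = np.linalg.solve(K + self.lam * np.eye(len(X)), y)
        return self
    def predict(self, Xq):
        return wavelet_kernel(Xq, self.X, self.a) @ self.alpha

# Fit a noisy nonlinear series as a stand-in for an exchange-rate signal
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (80, 1))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(80)
model = KRR(lam=1e-3).fit(X, y)
mse = float(np.mean((model.predict(X) - y) ** 2))
print(mse)
```

In the paper's architecture this regression step is stacked into autoencoder layers, and the kernel parameters `lam` and `a` are what the WWCA metaheuristic would search over.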
R. Bisoi, Pournamasi Parhi, P. Dash, "Hybrid modified weighted water cycle algorithm and Deep Analytic Network for forecasting and trend detection of forex market indices," Int. J. Knowl. Based Intell. Eng. Syst., 2023-02-07, doi:10.3233/kes-218014.
Intelligent answering technology, which enables computers to solve problems automatically, is often used to develop tutorial systems and has a wide range of application prospects. However, due to the lack of linguistic analysis and understanding methods, there is little research on intelligent algorithms for solving kinematics problems. Developing such an algorithm is challenging because solving kinematics problems is a complex task involving text understanding, problem analysis, and automatic solution. Understanding all the complexities involved in kinematics problems requires background knowledge, and only an automatic solver with a powerful internal knowledge representation system can perform these tasks. We thus develop KinRob, a tutorial system for solving kinematics problems that combines a neural network with an ontology. First, we propose an ontology for KinRob that defines the knowledge of kinematics and helps the robot understand a kinematics problem. Second, to match natural-language text with the ontology, we propose a novel tagging scheme based on the kinematic problem understanding model in named entity recognition (NER). Finally, extensive experiments show that the performance of the proposed method on a dataset of kinematics problems from authoritative sources is better than that of the baseline algorithms.
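The paper's tagging scheme is not detailed in the abstract, but NER tagging schemes of this kind typically extend BIO labelling over tokens. A minimal sketch, where the token indices and entity labels (`ACCELERATION`, `TIME`) are hypothetical examples for a kinematics word problem:

```python
def bio_tags(tokens, entities):
    """Produce BIO tags for a token list.
    entities: list of (start, end_exclusive, label) spans over token indices."""
    tags = ["O"] * len(tokens)
    for start, end, label in entities:
        tags[start] = "B-" + label               # beginning of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + label               # inside the entity
    return tags

tokens = ["A", "car", "accelerates", "at", "2", "m/s2", "for", "5", "s"]
ents = [(4, 6, "ACCELERATION"), (7, 9, "TIME")]
tags = bio_tags(tokens, ents)
print(tags)
```

A sequence model trained on pairs like `(tokens, tags)` can then map recognized entities onto concepts in the kinematics ontology.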
Jiarong Zhang, Jinsha Yuan, Jianing Xu, Shuangshuang Ban, Xinyu Zan, Jin Zhang, "KinRob: An ontology based robot for solving kinematic problems," Int. J. Knowl. Based Intell. Eng. Syst., 2023-02-06, doi:10.3233/kes-218162.
In recent years, owing to the reasonable price of RGB-D devices, the use of skeleton-based data in the field of human-computer interaction has attracted a lot of attention. Freedom from problems such as complex backgrounds and changes in lighting is another reason for the popularity of this type of data. In existing methods, the use of joint and bone information has yielded significant improvements in recognizing human movements and even emotions. However, how best to combine these two types of information to define the relationship between joints and bones remains an open problem. In this article, we use Laban Movement Analysis (LMA) to build a robust descriptor that precisely describes how the different parts of the body relate to one another and to the surrounding environment while a gesture is performed. To do this, in addition to the distances between the hip center and the other joints of the body and the changes of the quaternion angles over time, we define the triangles formed by different parts of the body and calculate their areas. We also calculate the area of the single conforming 3-D boundary around all the joints of the body. We use a long short-term memory (LSTM) network to evaluate this descriptor. The proposed algorithm is implemented on five public datasets: NTU RGB+D 120, SYSU 3D HOI, FLORENCE 3D ACTIONS, MSR Action3D, and UTKinect-Action3D, and the results are compared with those in the literature.
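The geometric quantities the descriptor is built from (hip-to-joint distances and the areas of triangles between body parts) are straightforward to compute from skeleton data. A minimal per-frame sketch, where the joint coordinates and triangle choices are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def triangle_area(p, q, r):
    # area of the 3-D triangle spanned by three joints: half the cross-product norm
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def frame_descriptor(joints, hip_idx, triangles):
    """joints: (J, 3) joint positions for one frame.
    triangles: index triples of joints forming body triangles."""
    dists = np.linalg.norm(joints - joints[hip_idx], axis=1)  # hip-center distances
    areas = [triangle_area(*(joints[i] for i in t)) for t in triangles]
    return np.concatenate([dists, np.array(areas)])

# Toy skeleton: hip at the origin plus three other joints
joints = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 2.]])
desc = frame_descriptor(joints, hip_idx=0, triangles=[(1, 2, 3)])
```

Stacking one such vector per frame gives the temporal sequence that the LSTM consumes.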
Zahra Ramezanpanah, M. Mallem, F. Davesne, "Autonomous gesture recognition using multi-layer LSTM networks and laban movement analysis," Int. J. Knowl. Based Intell. Eng. Syst., 2023-02-06, doi:10.3233/kes-208195.
China's population is large and its energy resources are limited; per capita resource ownership is far below the world average. China is in the process of industrialization and urbanization, but energy consumption and environmental pollution are serious. The energy crisis and environmental protection constrain economic development and social harmony. As a major source of energy consumption and environmental pollution, the power industry is one of the most important fields for energy saving and emission reduction, and reasonable power dispatch is the key to reducing both. In this paper, we first introduce some operations on interval-valued intuitionistic fuzzy sets, such as the Heronian mean (HM) operator and Dombi operations, and further develop the induced interval-valued intuitionistic fuzzy Dombi weighted Heronian mean (I-IVIFDWHM) operator. We also establish desirable properties of this operator, such as commutativity, idempotency, and monotonicity. We then apply the I-IVIFDWHM operator to interval-valued intuitionistic fuzzy multiple attribute decision making (MADM) problems. Finally, an illustrative example on evaluating the energy-saving and economic operation of power systems verifies the developed approach.
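The full interval-valued intuitionistic fuzzy operator algebra is lengthy, but the underlying Heronian mean, whose idempotency the paper's operator inherits, is simple for crisp values: HM(a_1, …, a_n) = 2/(n(n+1)) Σ_{i≤j} √(a_i a_j). A crisp sketch (the fuzzy version applies Dombi operations to membership and non-membership intervals instead of real numbers):

```python
import math

def heronian_mean(a):
    """Crisp Heronian mean: 2/(n(n+1)) * sum over i <= j of sqrt(a_i * a_j)."""
    n = len(a)
    s = sum(math.sqrt(a[i] * a[j]) for i in range(n) for j in range(i, n))
    return 2.0 * s / (n * (n + 1))

print(heronian_mean([4.0, 4.0, 4.0]))  # idempotency: equal inputs give 4.0
```

Unlike a plain arithmetic mean, the cross terms √(a_i a_j) capture interrelationships between pairs of attribute values, which is why Heronian-type operators are favoured when attributes interact.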
Xinrui Xu, "An integrated method for evaluating the energy-saving and economic operation of power systems with interval-valued intuitionistic fuzzy numbers," Int. J. Knowl. Based Intell. Eng. Syst., 2022-12-20, doi:10.3233/kes-220019.
Sales and operations planning translates customer requirements in the marketplace (for new and/or existing products and services) into actionable tactical plans that drive the activities of the organization's value chain. The present work provides a multi-period, multi-perspective evaluation framework to compare the sales and operational performance (SOP) of firms in an emerging market. SOP is one of the frontline KPIs describing the efficiency and effectiveness of sales and operations planning, yet the extant literature offers few well-defined indicators for measuring it. The current work fills this gap by developing a hybrid multi-criteria decision making (MCDM) framework that combines Logarithmic Percentage Change-driven Objective Weighting (LOPCOW) and Evaluation based on Distance from Average Solution (EDAS) in a novel application to assessing SOP. The data analysis shows variations in the year-wise rankings of the companies; however, each year's ranking maintains a statistically significant correlation with the aggregated ranking, which is obtained with the Borda count method. ITC Limited, Hindustan Unilever Ltd., Avanti Feeds Ltd., Britannia Industries Ltd., and Symphony Ltd. hold the top five positions on aggregate. Comparisons with other MCDM models and a sensitivity analysis are also carried out. The present work is the first of its kind and should encourage analysts and policy makers to evaluate sales and operational performance in a scientific way.
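The LOPCOW weighting step is specific to the paper, but the EDAS appraisal that consumes the weights follows a standard recipe: compute each criterion's average, measure positive and negative distances from that average, and average the normalized weighted sums. A sketch with illustrative data and weights, assuming all criteria are benefit-type:

```python
import numpy as np

def edas(X, w):
    """EDAS appraisal scores for benefit criteria.
    X: (m, n) decision matrix; w: (n,) criterion weights summing to 1."""
    av = X.mean(axis=0)                      # average solution per criterion
    pda = np.maximum(0, X - av) / av         # positive distance from average
    nda = np.maximum(0, av - X) / av         # negative distance from average
    sp = pda @ w                             # weighted sum of positive distances
    sn = nda @ w                             # weighted sum of negative distances
    nsp = sp / sp.max()                      # normalize against the best SP
    nsn = 1 - sn / sn.max()                  # invert and normalize SN
    return (nsp + nsn) / 2                   # appraisal score in [0, 1]

X = np.array([[5., 7., 6.],
              [8., 6., 9.],
              [4., 9., 5.]])                 # three firms, three SOP indicators
w = np.array([0.4, 0.3, 0.3])
scores = edas(X, w)
print(np.argsort(-scores))                   # ranking, best firm first
```

Running this per year and then aggregating the rankings (e.g. by Borda count, as the paper does) reproduces the multi-period structure of the framework.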
S. Biswas, Gautam Bandyopadhyay, D. Pamučar, Aparajita Sanyal, "A decision making framework for comparing sales and operational performance of firms in emerging market," Int. J. Knowl. Based Intell. Eng. Syst., 2022-12-20, doi:10.3233/kes-221601.
The expansion of service-oriented architecture and the increasing number of web services have led to growing demand for their use. Since a single service alone may not suffice for relatively complex business process requirements, several individual services must be combined to satisfy the user. As the number of services with the same functionality increases, the quality of service provided by each plays an important role in service selection; in service composition, services with different quality parameters are combined to deliver a new task. Offering the best quality of service to the user is therefore an important issue. Challenges in the composition process include combining web services with quality parameters based on user preference, long response times, a large search space, and the correlations between services. In this paper, quality-based service composition is modeled by considering the relationships between services to improve quality of service (QoS) parameters. The proposed model consists of several steps. First, inappropriate services are pruned by applying the correlation between services. Second, after determining quality levels for the QoS attributes, the APSO algorithm selects the best levels and, finally, the best services. In the composition stage, the selected services are combined using a fuzzy genetic algorithm (FGA) to create a suitable composite service. The results show that when the correlation between services is considered, integrating the quality parameters and pruning the candidate services significantly improves the response time criterion and reduces the search space.
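The fitness a genetic algorithm evaluates in QoS-aware composition aggregates each candidate chain's quality parameters, with an aggregation rule that depends on the attribute: response times add along a sequential chain, while availabilities multiply. A minimal sketch with illustrative attribute names (the paper's fuzzy rules and exact QoS model are not specified in the abstract):

```python
def composite_qos(services):
    """Aggregate QoS for a sequential composition.
    Each service is a dict with 'time' (ms, sums along the chain)
    and 'availability' (probability, multiplies along the chain)."""
    total_time = sum(s["time"] for s in services)
    availability = 1.0
    for s in services:
        availability *= s["availability"]
    return {"time": total_time, "availability": availability}

# Two chained candidate services
plan = [{"time": 120, "availability": 0.99},
        {"time": 80,  "availability": 0.95}]
result = composite_qos(plan)
print(result)
```

A GA would score each chromosome (one service choice per task) by normalizing these aggregates into a single weighted fitness value.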
Mohammad Reza Gheisari, S. Emadi, "Service composition based on genetic algorithm and fuzzy rules," Int. J. Knowl. Based Intell. Eng. Syst., 2022-12-20, doi:10.3233/kes-220016.
Supervised and unsupervised machine learning are prevalent methods in the fields of data mining and big data, and coronavirus disease assessment using COVID-19 health data has recently exposed a potential application area for them. This study classifies significant propensities in a variety of supervised and unsupervised K-means clustering procedures and their function and use for disease performance assessment. We propose structural risk minimization, since a number of issues affect classification efficiency, including changing training data, the characteristics of the input space, the natural environment, and the structure of the classification and learning process. These three problems broaden the perspective of trajectory-cluster data prediction for the experimental coronavirus data, helping control linear classification capability and issue clues to each individual. K-means clustering is an effective way to exploit the built-in structure of coronavirus data: it separates the unknown variables in the database for the disease detection process using a hyperplane. The proposed programming model for K-means maps the data with the help of a hyperplane, using distance-based nearest-neighbor classification to assign subgroups of patient records to inputs. Linear regression and logistic regression on the coronavirus data can provide valuation, and tracing of disease credentials is under trial.
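The K-means step the study relies on can be sketched without library support: alternate between assigning each record to its nearest centre and recomputing centres as cluster means (Lloyd's algorithm). This is a generic sketch with toy two-feature records, not the authors' full pipeline:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm: assign points to the nearest centre, update centres."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        # distance from every point to every centre -> label of the nearest centre
        labels = np.argmin(np.linalg.norm(X[:, None] - centers[None], axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# Toy patient records with two well-separated groups
records = np.array([[0., 0.], [0.2, 0.1], [5., 5.], [5.1, 4.9]])
labels, centers = kmeans(records, k=2)
```

Each resulting cluster groups patient records with similar feature values, which is the grouping the study feeds into its downstream regression-based assessment.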
Kadali Dileep Kumar, N.V.Jagan Mohan Dr. Remani, Neelamadhab Padhy, S. C. Satapathy, Nagesh Salimath, Rahul Deo Sah, "Machine learning approach for corona virus disease extrapolation: A case study," Int. J. Knowl. Based Intell. Eng. Syst., 2022-12-20, doi:10.3233/kes-220015.
The aim of this study is to predict the profitability of Indian banks. Several factors, both internal and external, that affect bank profitability were derived from an extensive review of the literature. We used an Artificial Neural Network (ANN) with cross-validation to perform the predictive analysis. ANN was chosen for its flexibility and non-linear modelling capability. Several ANN structures with one and two hidden layers and varying numbers of hidden neurons were implemented, and a comparison was made with a multiple linear regression (MLR) model. We found the ANN-based models to give very accurate predictions, marginally better than the regression model. Higher model accuracy makes a significant difference given the astronomically large balance sheets of banks. This article is unique in its approach to handling panel data for predictive analysis: the model was trained on a single bank's data, thus reducing the panel data to a time series. This approach shows the ability to work with large panel data and make accurate predictions.
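The article's panel-handling trick, training on a single bank so the panel collapses to a time series, amounts to standard lag-window construction before any model sees the data. A sketch with made-up profitability figures (the lag length and series are illustrative assumptions):

```python
import numpy as np

def make_windows(series, lag):
    """Turn one bank's profitability series into (lagged inputs, next value) pairs,
    the supervised form an ANN or MLR model can be trained on."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

series = [1.0, 1.2, 1.1, 1.4, 1.5, 1.7]     # hypothetical yearly profitability
X, y = make_windows(series, lag=3)
print(X.shape, y.shape)  # (3, 3) (3,)
```

Each row of `X` holds the previous `lag` observations and the matching entry of `y` is the value to predict, so cross-validation can then be run over these pairs.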
{"title":"Application of artificial neural network model in predicting profitability of Indian banks","authors":"Zericho R. Marak, Dilip Ambarkhane, A. Kulkarni","doi":"10.3233/kes-220020","DOIUrl":"https://doi.org/10.3233/kes-220020","url":null,"abstract":"The aim of this study is to predict the profitability of Indian banks. Several factors both internal and external, affecting bank profitability were derived from extensive review of literature. We used Artificial Neural Network (ANN) with cross-validation technique to perform predictive analysis. ANN was chosen due to its flexibility and non-linear modelling capability. Several structures of ANN with a single and two hidden layers along with varying hidden neurons were implemented. Further, a comparison was made with the multiple linear regression (MLR) model. We found the models based on ANN to offer very accurate results in prediction and are marginally better as compared to the regression model. Higher accuracy of the model makes a significant difference due to the astronomically large size of the balance sheet of banks. This article is unique in the approach of handling the panel data for predictive analysis wherein the training of the model was done on a single bank’s data, thus, reducing the panel data to a time series data. This approach shows the ability to work with large panel data and make accurate predictions.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128772489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Despite the importance of the multi-attribute group decision making (MAGDM) problem in the field of optimal design, proposing a solution remains a huge challenge due to its uncertainty and fuzziness. Spherical fuzzy sets (SFSs) can express the vague and complicated information of MAGDM problems more broadly. The Evaluation based on Distance from Average Solution (EDAS) method, a highly practical decision-making method, has received extensive attention from researchers for solving MAGDM problems. In this paper, a spherical fuzzy EDAS (SF-EDAS) method is proposed to solve the MAGDM problem. Moreover, the entropy method is introduced to determine objective weights, resulting in more appropriate weight information. In addition, a practical example is solved with the SF-EDAS method, demonstrating its efficiency in MAGDM applications. The SF-EDAS method provides an effective means of solving MAGDM problems under SFSs, and EDAS also offers a reference for further extension to other decision-making environments.
{"title":"EDAS method for multiple attribute group decision making under spherical fuzzy environment","authors":"F. Diao, G. Wei","doi":"10.3233/kes-220018","DOIUrl":"https://doi.org/10.3233/kes-220018","url":null,"abstract":"Despite the importance of multi-attribute group decision making (MAGDM) problem in the field of optimal design, it is still a huge challenge to propose a solution due to its uncertainty and fuzziness. The spherical fuzzy sets (SFSs) can express vague and complicated information of MAGDM problem more widely. The Evaluation based on Distance from Average Solution (EDAS) method, as a highly practical decision-making method, has received extensive attention from researchers for solving MAGDM problem. In this paper, a spherical fuzzy EDAS (SF-EDAS) method is proposed to solve the MAGDM problem. Moreover, the entropy method is also introduced to determine objective weights, resulting in a more proper weight information. In addition, a practical example is settled by SF-EDAS method, which proves the excellent efficiency in applications of MAGDM problem. The SF-EDAS method provides an effective method for solving MAGDM problems under SFSs, and EDAS also provides a reference for further promotion of other decision-making environments.","PeriodicalId":210048,"journal":{"name":"Int. J. Knowl. Based Intell. Eng. Syst.","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133291317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}