Abstract This study proposes and analyzes a new method for the post-Pareto analysis of multicriteria decision-making (MCDM) problems: the revealed comparative advantage (RCA) assessment method. An interesting feature of the suggested method is that it uses the solution of a special eigenvalue problem and can be considered an MCDM-context analog of well-known ranking methods such as the authority-hub method and the PageRank method, which have been successfully applied in fields such as economics, bibliometrics, and web search design. For illustrative purposes, this study discusses a particular MCDM problem to demonstrate the practicality of the method. The theoretical considerations and conducted calculations reveal that the RCA assessment method is self-consistent and easily implementable. Moreover, comparisons with well-known MCDM analysis tools show that the results obtained using this method are appropriate and competitive. An important particularity of the RCA assessment method is that it can be useful for decision-makers when no decision-making authority is available or when the relative importance of the various criteria has not been evaluated in advance.
{"title":"Revealed Comparative Advantage Method for Solving Multicriteria Decision-making Problems","authors":"Joseph Gogodze","doi":"10.2478/fcds-2021-0006","DOIUrl":"https://doi.org/10.2478/fcds-2021-0006","url":null,"abstract":"Abstract This study proposes and analyzes a new method for the post-Pareto analysis of multicriteria decision-making (MCDM) problems: the revealed comparative advantage (RCA) assessment method. An interesting feature of the suggested method is that it uses the solution to a special eigenvalue problem and can be considered an analog/modification in the MCDM context of well-known ranking methods including the authority-hub method, PageRank method, and so on, which have been successfully applied to such fields as economics, bibliometrics, web search design, and so on. For illustrative purposes, this study discusses a particular MCDM problem to demonstrate the practicality of the method. The theoretical considerations and conducted calculations reveal that the RCA assessment method is self-consistent and easily implementable. Moreover, comparisons with well-known tools of an MCDM analysis shows that the results obtained using this method are appropriate and competitive. An important particularity of the RCA assessment method is that it can be useful for decision-makers in the case in which no decision-making authority is available or when the relative importance of various criteria has not been preliminarily evaluated.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"46 1","pages":"85 - 96"},"PeriodicalIF":1.1,"publicationDate":"2021-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44898740","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Laser beam machining (LBM) now finds wide acceptance for cutting various materials, and the cutting of polymer sheets is no exception. Greater process reliability coupled with superior quality of the finished product makes LBM widely used for cutting polymeric materials. Earlier researchers investigated carbon dioxide laser cutting of a few thermoplastic polymers in thicknesses varying from 2 mm to 10 mm. Here, an approach is made to grade the suitability of polymeric materials for LBM cutting, and thereby answer the selection problem, according to weightages obtained using a multi-criteria decision-making (MCDM) approach. An attempt has also been made to validate the results against the experimental results of previous researchers. The analysis combines the non-parametric linear-programming method of data envelopment analysis (DEA) for process-efficiency assessment with the technique for order preference by similarity to an ideal solution (TOPSIS), which selects polymer sheets based on their closeness values. The results of this blended analysis show that for 3 mm thick sheets polypropylene (PP) is highly preferable over polyethylene (PE) and polycarbonate (PC), whereas for 5 mm thick sheets polycarbonate (PC) is highly preferable to the other two polymers. Hence the present analysis fits very well for polymer sheets of 3 mm thickness, while it deviates slightly for 5 mm sheets.
{"title":"A Holistic Approach to Polymeric Material Selection for Laser Beam Machining using Methods of DEA and TOPSIS","authors":"M. K. Roy, I. Shivakoti, R. Phipon, Ashis Sharma","doi":"10.2478/fcds-2020-0017","DOIUrl":"https://doi.org/10.2478/fcds-2020-0017","url":null,"abstract":"Abstract Laser Beam machining (LBM) nowadays finds a wide acceptance for cutting various materials and cutting of polymer sheets is no exception. Greater reliability of process coupled with superior quality of finished product makes LBM widely used for cutting polymeric materials. Earlier researchers investigated the carbon dioxide laser cutting to a few thermoplastic polymers in thickness varying from 2mm to 10mm. Here, an approach is being made for grading the suitability of polymeric materials and to answer the problem of selection for LBM cutting as per their weightages obtained by using multi-decision making (MCDM) approach. An attempt has also been made to validate the result thus obtained with the experimental results obtained by previous researchers. The analysis encompasses the use of non-parametric linear-programming method of data envelopment analysis (DEA) for process efficiency assessment combined with technique for order preference by similarity to an ideal solution (TOPSIS) for selection of polymer sheets, which is based on the closeness values. The results of this uniquely blended analysis reflect that for 3mm thick polymer sheet is polypropelene (PP) to be highly preferable over polyethylene (PE) and polycarbonate (PC). While it turns out to be that polycarbonate (PC) to be highly preferable to other two polymers for 5mm thick polymer sheets. Hence the present research analysis fits very good for the polymer sheets of 3mm thickness while it deviates a little bit for the 5mm sheets.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"339 - 357"},"PeriodicalIF":1.1,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41569686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Academia remains the central place of machine learning education. While academic culture is the predominant factor influencing the way we teach machine learning to students, many practitioners question this culture, claiming a lack of alignment between academic and business environments. Drawing on professional experiences from both sides of the chasm, we describe the main points of contention, in the hope of helping to better align academic syllabi with the expectations placed on future machine learning practitioners. We also provide recommendations for teaching the applied aspects of machine learning.
{"title":"Return on Investment in Machine Learning: Crossing the Chasm between Academia and Business","authors":"Jan Mizgajski, Adrian Szymczak, M. Morzy, Łukasz Augustyniak, Piotr Szymański, Piotr Żelasko","doi":"10.2478/fcds-2020-0015","DOIUrl":"https://doi.org/10.2478/fcds-2020-0015","url":null,"abstract":"Abstract Academia remains the central place of machine learning education. While academic culture is the predominant factor influencing the way we teach machine learning to students, many practitioners question this culture, claiming the lack of alignment between academic and business environments. Drawing on professional experiences from both sides of the chasm, we describe the main points of contention, in the hope that it will help better align academic syllabi with the expectations towards future machine learning practitioners. We also provide recommendations for teaching of the applied aspects of machine learning.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"281 - 304"},"PeriodicalIF":1.1,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46172703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Manufacturers need to select the best design from alternative design concepts in order to meet customer demand and gain a larger share of a competitive market flooded with multifarious designs. Evaluation of conceptual design alternatives can be modelled as a Multi-Criteria Decision Making (MCDM) process because it involves conflicting design features with different sub-features. Hybridized Multi Attribute Decision Making (MADM) models have been applied in various fields of management, science, and engineering to obtain a robust decision-making process, but the extension of these hybridized MADM models to decision making in engineering design still requires attention. In this article, an integrated MADM model comprising Fuzzy Analytic Hierarchy Process (FAHP), Fuzzy Pugh Matrix, and Fuzzy VIKOR was developed and applied to evaluate conceptual designs of a liquid spraying machine. Fuzzy AHP was used to determine the weights of the design features and sub-features via its fuzzified comparison matrix and synthetic extent evaluation. The Fuzzy Pugh Matrix provides a methodical structure for determining performance, using each design alternative in turn as the basis and obtaining aggregates for the designs from the sub-feature weights. Fuzzy VIKOR generates the decision matrix from the aggregates of the fuzzified Pugh matrices and determines the best design concept from the defuzzified performance index. In the end, the optimal design concept is determined for the liquid spraying machine.
{"title":"Fusing Multi-Attribute Decision Models for Decision Making to Achieve Optimal Product Design","authors":"O. Olabanji, K. Mpofu","doi":"10.2478/fcds-2020-0016","DOIUrl":"https://doi.org/10.2478/fcds-2020-0016","url":null,"abstract":"Abstract Manufacturers need to select the best design from alternative design concepts in order to meet up with the demand of customers and have a larger share of the competitive market that is flooded with multifarious designs. Evaluation of conceptual design alternatives can be modelled as a Multi-Criteria Decision Making (MCDM) process because it includes conflicting design features with different sub features. Hybridization of Multi Attribute Decision Making (MADM) models has been applied in various field of management, science and engineering in order to have a robust decision-making process but the extension of these hybridized MADM models to decision making in engineering design still requires attention. In this article, an integrated MADM model comprising of Fuzzy Analytic Hierarchy Process (FAHP), Fuzzy Pugh Matrix and Fuzzy VIKOR was developed and applied to evaluate conceptual designs of liquid spraying machine. The fuzzy AHP was used to determine weights of the design features and sub features by virtue of its fuzzified comparison matrix and synthetic extent evaluation. The fuzzy Pugh matrix provides a methodical structure for determining performance using all the design alternatives as basis and obtaining aggregates for the designs using the weights of the sub features. The fuzzy VIKOR generates the decision matrix from the aggregates of the fuzzified Pugh matrices and determine the best design concept from the defuzzified performance index. At the end, the optimal design concept is determined for the liquid spraying machine.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"305 - 337"},"PeriodicalIF":1.1,"publicationDate":"2020-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43786520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an approach to mine cardinality restriction axioms from an existing knowledge graph in order to extend an ontology describing the graph. We compare frequency estimation with kernel density estimation as approaches to obtaining the cardinalities in restrictions. We also propose numerous strategies for filtering the obtained axioms in order to make them more accessible to the ontology engineer. We report the results of an experimental evaluation on DBpedia 2016-10 and show that using kernel density estimation to compute the cardinalities in cardinality restrictions yields more robust results than using frequency estimation. We also show that while filtering is of limited usability for minimum cardinality restrictions, it is much more important for maximum cardinality restrictions. The presented findings can be used to extend existing ontology engineering tools to support ontology construction and enable more efficient creation of knowledge-intensive artificial intelligence systems.
{"title":"Mining Cardinality Restrictions in OWL","authors":"Jedrzej Potoniec","doi":"10.2478/fcds-2020-0011","DOIUrl":"https://doi.org/10.2478/fcds-2020-0011","url":null,"abstract":"\u0000 We present an approach to mine cardinality restriction axioms from an existing knowledge graph, in order to extend an ontology describing the graph. We compare frequency estimation with kernel density estimation as approaches to obtain the cardinalities in restrictions. We also propose numerous strategies for filtering obtained axioms in order to make them more available for the ontology engineer. We report the results of experimental evaluation on DBpedia 2016-10 and show that using kernel density estimation to compute the cardinalities in cardinality restrictions yields more robust results that using frequency estimation. We also show that while filtering is of limited usability for minimum cardinality restrictions, it is much more important for maximum cardinality restrictions. The presented findings can be used to extend existing ontology engineering tools in order to support ontology construction and enable more efficient creation of knowledge-intensive artificial intelligence systems.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":" ","pages":""},"PeriodicalIF":1.1,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41955596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Knowledge about future optical network traffic can benefit network operators by decreasing operational costs through efficient resource management. Machine Learning (ML) algorithms can be employed to forecast traffic with high accuracy. In this paper we describe a methodology for predicting traffic in a dynamic optical network with service function chains (SFCs). We assume that SFCs are based on the Network Function Virtualization (NFV) paradigm. Moreover, another type of traffic, i.e., regular traffic, can also occur in the network. As proof of the effectiveness of our methodology, we present and discuss numerical results of experiments run on three benchmark networks. We examine six ML classifiers. Our research shows that it is possible to predict future traffic in an optical network in which SFCs can be distinguished. However, there is no single universal classifier that can be used for every network; the choice of ML algorithm should be based on an analysis of the network's traffic characteristics.
{"title":"Application of Machine Learning Algorithms for Traffic Forecasting in Dynamic Optical Networks with Service Function Chains","authors":"D. Szostak, K. Walkowiak","doi":"10.2478/fcds-2020-0012","DOIUrl":"https://doi.org/10.2478/fcds-2020-0012","url":null,"abstract":"\u0000 Knowledge about future optical network traffic can be beneficial for network operators in terms of decreasing an operational cost due to efficient resource management. Machine Learning (ML) algorithms can be employed for forecasting traffic with high accuracy. In this paper we describe a methodology for predicting traffic in a dynamic optical network with service function chains (SFC). We assume that SFC is based on the Network Function Virtualization (NFV) paradigm. Moreover, other type of traffic, i.e. regular traffic, can also occur in the network. As a proof of effectiveness of our methodology we present and discuss numerical results of experiments run on three benchmark networks. We examine six ML classifiers. Our research shows that it is possible to predict a future traffic in an optical network, where SFC can be distinguished. However, there is no one universal classifier that can be used for each network. Choice of an ML algorithm should be done based on a network traffic characteristics analysis.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"1 1","pages":""},"PeriodicalIF":1.1,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41694205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deep learning methods used in machine vision challenges often face problems with the amount and quality of data. To address this issue, we investigate the transfer learning method. In this study, we briefly describe the idea and introduce the two main strategies of transfer learning. We also present the widely used neural network models that in recent years have performed best in ImageNet classification challenges. Furthermore, we briefly describe three different experiments from the computer vision field that confirm the developed algorithms' ability to classify images with an overall accuracy of 87.2-95%. The achieved numbers are state-of-the-art results in melanoma thickness prediction, anomaly detection, and Clostridium difficile cytotoxicity classification problems.
{"title":"Transfer Learning Methods as a New Approach in Computer Vision Tasks with Small Datasets","authors":"Andrzej Brodzicki, M. Piekarski, Dariusz Kucharski, J. Jaworek-Korjakowska, M. Gorgon","doi":"10.2478/fcds-2020-0010","DOIUrl":"https://doi.org/10.2478/fcds-2020-0010","url":null,"abstract":"\u0000 Deep learning methods, used in machine vision challenges, often face the problem of the amount and quality of data. To address this issue, we investigate the transfer learning method. In this study, we briefly describe the idea and introduce two main strategies of transfer learning. We also present the widely-used neural network models, that in recent years performed best in ImageNet classification challenges. Furthermore, we shortly describe three different experiments from computer vision field, that confirm the developed algorithms ability to classify images with overall accuracy 87.2-95%. Achieved numbers are state-of-the-art results in melanoma thickness prediction, anomaly detection and Clostridium di cile cytotoxicity classification problems.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":" ","pages":""},"PeriodicalIF":1.1,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41781682","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years Artificial Intelligence has made tremendous progress, offering a variety of novel methods and tools and their spectacular applications. Besides scientific breakthroughs, it has attracted the interest of both the general public and industry. It has also opened heated debates on the impact of Artificial Intelligence on the economy and society. With this international landscape in mind, in this short paper we discuss the Polish AI research community, some of its main achievements, opportunities, and limitations. We put this discussion in the context of current developments in the international AI community. Moreover, we refer to the activities of Polish scientific associations and their initiative of founding the Polish Alliance for the Development of Artificial Intelligence (PP-RAI). Finally, the two most recent editions of the PP-RAI joint conferences are summarized.

1. Introductory remarks

Artificial Intelligence (AI) began as an academic discipline nearly 70 years ago; during the Dartmouth conference in 1956 the expression Artificial Intelligence was coined as its label. Since that time it has evolved considerably, developing in cycles of optimism and pessimism [27]. In the first period, research in several main subfields was started, but the expectations of the founders were not fully realized. Thus, the disappointments and funding cuts of the 1970s led to the first so-called AI winter. Research intensified again in the 1980s, mainly by promoting practically useful, narrow-purpose systems, such as expert systems, based on symbolic approaches and logic [21]. Nevertheless, they were not as successful as expected. Then, important changes in AI paradigms concerned non-symbolic and more numerical approaches [1]. Toward the end of the 1980s many researchers focused their interests on methodological inspirations coming from statistics, numerical methods, optimization, decision analysis, and the modeling of uncertainty. This helped achieve significant progress in new machine learning methods, a rebirth of neural networks, and new developments in natural language processing, image recognition, multi-agent systems, and robotics [11]. Several researchers proposed new approaches to managing uncertainty and imprecision, while others significantly improved genetic and evolutionary computations, which started the computational intelligence subfield [10, 7]. All of these efforts led to a new wave of applications, far beyond what earlier systems did, that additionally boosted the growing interest in AI. Since the beginning of this century one can observe the next renaissance of neural network research, in particular the promotion of deep learning, and intensive develo…
{"title":"Artificial Intelligence Research Community and Associations in Poland","authors":"G. J. Nalepa, J. Stefanowski","doi":"10.2478/fcds-2020-0009","DOIUrl":"https://doi.org/10.2478/fcds-2020-0009","url":null,"abstract":"In last years Artificial Intelligence presented a tremendous progress by offering a variety of novel methods, tools and their spectacular applications. Besides showing scientific breakthroughs it attracted interest both of the general public and industry. It also opened heated debates on the impact of Artificial Intelligence on changing the economy and society. Having in mind this international landscape, in this short paper we discuss the Polish AI research community, some of its main achievements, opportunities and limitations. We put this discussion in the context of the current developments in the international AI community. Moreover, we refer to activities of Polish scientific associations and their initiative of founding Polish Alliance for the Development of Artificial Intelligence (PP-RAI). Finally two last editions of PP-RAI joint conferences are summarized. 1. Introductory remarks Artificial Intelligence (AI) began as an academic discipline nearly 70 years ago, while during the Dartmouth conference in 1956 the expression Artificial Intelligence was coined as the label for it. Since that time it has been evolving a lot and developing in the cycles of optimism and pessimism [27]. In the first period research in several main subfields were started but the expectations the founders put were not fully real ized. Thus, the disappointments and cutting financing in the 1970s led to the first, so called, AI winter. The research was intensified again in 1980s, mainly with promoting practically useful, narrow purpose systems, such as expert systems, based on symbolic approaches and logic [21]. Nevertheless, they were not so successful as it was expected. Then, important changes in AI paradigms concern non-symbolic and more numeri cal approaches [1]. During the end of 1980s many researchers focused interests on * Institute o f Applied Computer Science, Jagiellonian University, and AGH University o f Science and Technology, Cracow, gjn@gjn.re ^Institute of Computing Sciences, Poznan University o f Technology, Poznan, jerzy.stefanowski@cs.put.poznan.pl 160 G. J. Nalepa, J. Stefanowski methodological inspirations coming from statistics, numerical methods, optimization, decision analysis and modeling uncertainty. It helped in a significant progress in new machine learning methods, rebirth of neural networks, new developments of natural language processing, image recognition, multi-agent systems, and also robotics [11]. Several researchers proposed new approaches to manage uncertainty and imprecision, while others significantly improved genetic and evolutionary computations which started computational intelligence subfield [10, 7]. All of these efforts led to the new wave of applications, which were far beyond what earlier systems did and additionally boosted the growing interest in AI. 
Since the beginning of this century one can observe the next renaissance of the neu ral networks research, in particular promoting deep learning, and intensive develo","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"159-177"},"PeriodicalIF":1.1,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41846114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents characteristics of the model-based optimization methods utilized within Generalized Self-Adapting Particle Swarm Optimization (GAPSO), a hybrid global optimization framework proposed by the authors. GAPSO has been designed as a generalization of the Particle Swarm Optimization (PSO) algorithm, founded on a large degree of independence of individual particles. GAPSO serves as a platform for studying optimization algorithms in the context of the following research hypotheses: (1) it is possible to improve the performance of an optimization algorithm by utilizing more function samples than the standard PSO sample-based memory retains; (2) combining specialized sampling methods (i.e., PSO, Differential Evolution, and model-based optimization) will result in better algorithm performance than using each of them separately. The inclusion of model-based enhancements made it necessary to extend the GAPSO framework with an external samples memory; this enhanced model is referred to as M-GAPSO in the paper. We investigate the features of two model-based optimizers: one utilizing a quadratic function and the other a polynomial function. We analyze the conditions under which these model-based approaches provide an effective sampling strategy. The proposed model-based optimizers are evaluated on functions from the COCO BBOB benchmark set.
{"title":"Analysis of statistical model-based optimization enhancements in Generalized Self-Adapting Particle Swarm Optimization framework","authors":"Mateusz Zaborski, M. Okulewicz, J. Mańdziuk","doi":"10.2478/fcds-2020-0013","DOIUrl":"https://doi.org/10.2478/fcds-2020-0013","url":null,"abstract":"\u0000 This paper presents characteristics of model-based optimization methods utilized within the Generalized Self-Adapting Particle Swarm Optimization (GA– PSO) – a hybrid global optimization framework proposed by the authors. GAPSO has been designed as a generalization of a Particle Swarm Optimization (PSO) algorithm on the foundations of a large degree of independence of individual particles. GAPSO serves as a platform for studying optimization algorithms in the context of the following research hypothesis: (1) it is possible to improve the performance of an optimization algorithm through utilization of more function samples than standard PSO sample-based memory, (2) combining specialized sampling methods (i.e. PSO, Differential Evolution, model-based optimization) will result in a better algorithm performance than using each of them separately. The inclusion of model-based enhancements resulted in the necessity of extending the GAPSO framework by means of an external samples memory - this enhanced model is referred to as M-GAPSO in the paper.\u0000 We investigate the features of two model-based optimizers: one utilizing a quadratic function and the other one utilizing a polynomial function. We analyze the conditions under which those model-based approaches provide an effective sampling strategy. Proposed model-based optimizers are evaluated on the functions from the COCO BBOB benchmark set.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":" ","pages":""},"PeriodicalIF":1.1,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41968007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Abstract Methods for solving non-linear control systems are still being developed. For many industrial devices and systems, quick and accurate regulators are required and actively investigated. The most effective and promising method for nonlinear system control is the State-Dependent Riccati Equation (SDRE) method. In SDRE, the problem consists of finding a suboptimal solution for a given objective function under nonlinear constraints. For this purpose, SDRE solution methods need improvement. In this paper, various numerical methods for solving the SDRE problem, i.e., the algebraic Riccati equation, are discussed and tested. The computation time and computational effort are presented and compared for selected nonlinear control plants.
{"title":"Numerical Solution of SDRE Control Problem – Comparison of the Selected Methods","authors":"Krzysztof Hałas, Eugeniusz Krysiak, Tomasz Hałas, S. Stępień","doi":"10.2478/fcds-2020-0006","DOIUrl":"https://doi.org/10.2478/fcds-2020-0006","url":null,"abstract":"Abstract Methods for solving non-linear control systems are still being developed. For many industrial devices and systems, quick and accurate regulators are investigated and required. The most effective and promising for nonlinear systems control is a State-Dependent Riccati Equation method (SDRE). In SDRE, the problem consists of finding the suboptimal solution for a given objective function considering nonlinear constraints. For this purpose, SDRE methods need improvement. In this paper, various numerical methods for solving the SDRE problem, i.e. algebraic Riccati equation, are discussed and tested. The time of computation and computational effort is presented and compared considering selected nonlinear control plants.","PeriodicalId":42909,"journal":{"name":"Foundations of Computing and Decision Sciences","volume":"45 1","pages":"79 - 95"},"PeriodicalIF":1.1,"publicationDate":"2020-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42396608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}