Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369411
P. Fleming
Summary form only given. Design problems arising in business and industry can often be conveniently formulated as multi-criteria decision-making problems. However, these often comprise a relatively large number of criteria. Through our close association with designers in industry and business, we have devised a range of machine learning tools and associated techniques to address the special requirements of many-criteria decision-making. These include visualisation and analysis tools to aid the identification of features such as "hot-spots" and non-competing criteria, preference articulation techniques to assist in interrogating the search region of interest, and methods to address the special computational demands of these problems. With the aid of test problems and real design exercises, we will demonstrate these approaches and also discuss alternative methods.
Title: Tools and Techniques for Managing Many-Criteria Decision-Making
Venue: 2007 IEEE Symposium on Computational Intelligence in Multi-Criteria Decision-Making
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369447
Martin Brown, Nicky Hutauruk
This paper investigates the convergence paths, rate of convergence and the convergence half-space associated with a class of descent multi-objective optimization algorithms. The first order descent algorithms are defined by maximizing the local objectives' reductions, which can be interpreted in either the primal space (parameters) or the dual space (objectives). It is shown that the convergence paths are often aligned with a subset of the objectives' gradients and that, in the limit, the convergence path is perpendicular to the local Pareto set. Similarities and differences are established for a range of p-norm descent algorithms. Bounds on the rate of convergence are established by considering the stability of first order learning rules. In addition, it is shown that the multi-objective descent algorithms implicitly generate a half-space which defines a convergence condition for a family of optimization algorithms. Any procedure that generates updates that lie in this half-space will converge to the local Pareto set. This can be used to motivate the development of second order algorithms.
Title: On the Convergence of Multi-Objective Descent Algorithms
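The half-space convergence condition in the abstract above admits a simple reading: an update lies in the half-space when it is a descent direction for every objective simultaneously. A minimal sketch of that test, assuming two hypothetical quadratic objectives rather than the authors' exact construction:

```python
import numpy as np

def in_descent_halfspace(gradients, d, tol=1e-12):
    # d is a common descent direction iff it does not increase any objective
    # to first order, i.e. <grad f_i, d> <= 0 for every objective f_i.
    return all(np.dot(g, d) <= tol for g in gradients)

# Two hypothetical quadratic objectives f1 = ||x - a||^2, f2 = ||x - b||^2
a, b = np.array([0.0, 0.0]), np.array([1.0, 0.0])
x = np.array([2.0, 1.0])
grads = [2 * (x - a), 2 * (x - b)]  # gradients of f1, f2 at x

d = -(grads[0] + grads[1])  # negated summed gradient: a candidate update
print(in_descent_halfspace(grads, d))
```

Any update passing this test moves toward the local Pareto set, which is the convergence condition the paper exploits.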
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369436
M. Geiger
The article presents an approach for interactively solving multi-objective optimization problems. While the identification of efficient solutions is supported by computational intelligence techniques on the basis of local search, the search is directed by partial preference information obtained from the decision maker. An application of the approach to biobjective portfolio optimization, modeled as the well-known knapsack problem, is presented, with experimental results for benchmark instances taken from the literature. In brief, we obtain encouraging results that show the applicability of the approach to the described problem. In order to stimulate a better understanding of the underlying structures of biobjective knapsack problems, we also study the characteristics of the search space of instances for which the optimal alternatives are known. As a result, optimal alternatives have been found to be relatively concentrated in alternative space, making the resolution of the studied instances possible with reasonable effort.
Title: The Interactive Pareto Iterated Local Search (iPILS) Metaheuristic and its Application to the Biobjective Portfolio Optimization Problem
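For a biobjective knapsack model like the one above, the core operation in any Pareto local search is a dominance filter that keeps only nondominated objective vectors. A minimal sketch with made-up candidate solutions (both objectives maximized):

```python
def nondominated(points):
    # Keep points not weakly dominated by a different point
    # (both objectives maximized).
    return [p for p in points
            if not any(q != p and q[0] >= p[0] and q[1] >= p[1] for q in points)]

# Hypothetical (profit in objective 1, profit in objective 2) of candidate packings
sols = [(10, 4), (8, 8), (6, 9), (9, 3), (5, 5)]
print(nondominated(sols))
```

Here (9, 3) and (5, 5) drop out because other packings are at least as good in both profits.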
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369442
W. Stirling, R. Frost, M. Nokleby, Y. Luo
If preferential independence is assumed inappropriately when developing multicriterion search methods, biased results may occur. A new axiomatic approach to defining conditional preference orderings that naturally accounts for preferential dependencies is presented and illustrated. This approach applies both to scalar optimization techniques that identify a best solution and to evolutionary optimization approaches that approximate the Pareto frontier.
Title: Multicriterion Decision Making with Dependent Preferences
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369446
L. Shao, M. Ehrgott
In this paper we address the problem of finding well distributed nondominated points for an MOLP. We propose a method which combines the global shooting and normal boundary intersection methods. It overcomes the limitation of the normal boundary intersection method that parts of the nondominated set may be missed. We prove that this method produces evenly distributed nondominated points. Moreover, the coverage error and the uniformity level can be measured. Finally, we apply this method to an optimization problem in radiation therapy and show results for some clinical cases.
Title: Finding Representative Nondominated Points in Multiobjective Linear Programming
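The two quality measures named above can be made concrete: uniformity is the smallest pairwise distance within the representative set, and coverage error is the largest distance from any front point to its nearest representative. A sketch under those assumed definitions, on a hypothetical linear front (not the clinical data):

```python
import math

def uniformity(rep):
    # Smallest pairwise distance among representative points (larger is better).
    return min(math.dist(p, q) for i, p in enumerate(rep) for q in rep[i + 1:])

def coverage_error(front, rep):
    # Worst-case distance from a front point to its nearest representative.
    return max(min(math.dist(f, r) for r in rep) for f in front)

# Hypothetical densely sampled nondominated front of a biobjective LP,
# and a 3-point representation of it
front = [(i / 10, 1 - i / 10) for i in range(11)]
rep = [(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)]
print(uniformity(rep), coverage_error(front, rep))
```

A representation is good when uniformity is large (points spread out) while coverage error is small (no part of the front left unrepresented).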
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369430
P. Kaplan, S. Ranji Ranjithan
An interactive method is developed to aid decision makers in public sector planning and management. The method integrates machine learning algorithms along with multiobjective optimization and modeling-to-generate-alternatives procedures into decision analysis. The implicit preferences of the decision maker are elicited through screening of several alternatives. The alternatives are selected from Pareto-front and near-Pareto-front regions that are identified first in the procedure. The decision maker's selections are input to the machine learning algorithms to generate decision rules, which are then incorporated into the analysis to generate more alternatives satisfying the decision rules. The method is illustrated using a municipal solid waste management planning problem.
Title: A New MCDM Approach to Solve Public Sector Planning Problems
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369113
Jun Ma, Jie Lu, Guangquan Zhang
Information filtering is an important component in warning systems. This paper proposes a two-level information filtering model for generating warning information. In this model, information is represented by an n-tuple whose elements are values of information features. The features of information are divided into critical and uncritical features. Within this model, the collected information is filtered in two stages by users at different levels. At the first stage, exceptions are separated from normal information; at the second stage, critical exceptions are separated from uncritical ones. To illustrate the proposed model, an example is discussed.
Title: A Two-level Information Filtering Model in Generating Warning Information
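The two-stage filter described above can be sketched directly: stage one separates exceptions from normal records, stage two keeps only the exceptions whose critical features pass a second test. The feature names and thresholds below are invented for illustration:

```python
def filter_warnings(records, critical, is_exception, is_critical):
    # Stage 1: separate exceptions from normal information.
    exceptions = [r for r in records if is_exception(r)]
    # Stage 2: keep exceptions that are critical, judged on critical features only.
    return [r for r in exceptions if is_critical({k: r[k] for k in critical})]

# Hypothetical records; "temp" plays the role of a critical feature
records = [
    {"temp": 30, "humidity": 40},   # normal
    {"temp": 75, "humidity": 90},   # critical exception
    {"temp": 45, "humidity": 95},   # uncritical exception
]
alerts = filter_warnings(
    records,
    critical=["temp"],
    is_exception=lambda r: r["temp"] > 40 or r["humidity"] > 85,
    is_critical=lambda c: c["temp"] > 60,
)
print(alerts)
```

The separation of the two predicates mirrors the model's point that different user levels apply the two filtering stages.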
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369415
H. Ishibuchi, I. Kuwajima, Y. Nojima
Evolutionary multiobjective optimization (EMO) has been utilized in the field of data mining in the following two ways: to find Pareto-optimal rules and Pareto-optimal rule sets. Confidence and coverage are often used as two objectives to evaluate each rule in the search for Pareto-optimal rules. Whereas all association rules satisfying the minimum support and confidence are usually extracted in data mining, only Pareto-optimal rules are searched for by an EMO algorithm in multiobjective data mining. On the other hand, accuracy and complexity are used to evaluate each rule set. The complexity of each rule set is often measured by the number of rules and the number of antecedent conditions. An EMO algorithm is used to search for Pareto-optimal rule sets with respect to accuracy and complexity. In this paper, we examine the relation between Pareto-optimal rules and Pareto-optimal rule sets in the design of fuzzy rule-based systems for pattern classification problems. More specifically, we check whether Pareto-optimal rules are included in Pareto-optimal rule sets through computational experiments using multiobjective genetic fuzzy rule selection. A mixture of Pareto-optimal and non-Pareto-optimal fuzzy rules is used as candidate rules in multiobjective genetic fuzzy rule selection.
We also examine the performance of selected rules when we use only Pareto-optimal rules as candidate rules.
Title: Relation between Pareto-Optimal Fuzzy Rules and Pareto-Optimal Fuzzy Rule Sets
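The rule-level search above evaluates each rule by confidence and coverage, both maximized; Pareto-optimal rules are those not dominated in this pair. A dominance filter over hypothetical rule statistics (not the paper's fuzzy-rule experiments):

```python
def pareto_rules(rules):
    # Keep rules whose (confidence, coverage) pair is not dominated,
    # both objectives maximized.
    return [name for name, (c, s) in rules.items()
            if not any(c2 >= c and s2 >= s and (c2, s2) != (c, s)
                       for c2, s2 in rules.values())]

# Hypothetical association rules with (confidence, coverage) estimates
rules = {"R1": (0.95, 0.10), "R2": (0.80, 0.30),
         "R3": (0.70, 0.25), "R4": (0.60, 0.40)}
print(pareto_rules(rules))
```

R3 drops out because R2 is at least as good in both confidence and coverage; the survivors are the candidate rules a rule-level EMO search would return.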
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369422
K. Zielinski, R. Laur
In multi-objective optimization, not only is fast convergence important, but it is also necessary to keep enough diversity so that the whole Pareto-optimal front can be found. In this work, four variants of differential evolution are examined that differ in the selection scheme and in the assignment of crowding distance. The assumption is checked that the variants differ in convergence speed and amount of diversity. The performance is shown for 1000 consecutive generations, so that different behavior over time can be detected.
Title: Variants of Differential Evolution for Multi-Objective Optimization
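The crowding distance the variants assign differently is the standard NSGA-II diversity measure; a generic sketch of it (not tied to any particular differential evolution variant):

```python
def crowding_distance(front):
    # NSGA-II crowding distance: per objective, boundary points get infinity;
    # interior points accumulate the normalized gap between their neighbours.
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for j in range(1, n - 1):
            dist[order[j]] += (front[order[j + 1]][k] - front[order[j - 1]][k]) / (hi - lo)
    return dist

# Hypothetical nondominated objective vectors (both objectives minimized)
front = [(0.0, 1.0), (0.2, 0.7), (0.5, 0.4), (1.0, 0.0)]
print(crowding_distance(front))
```

Selecting survivors with large crowding distance keeps boundary points and sparsely populated regions, which is how these algorithms preserve the diversity the abstract emphasizes.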
Pub Date: 2007-04-01 | DOI: 10.1109/MCDM.2007.369427
K. Veeramachaneni, Weizhong Yan, K. Goebel, L. Osadciw
Both experimental and theoretical studies have shown that classifier fusion can be effective in improving overall classification performance. Classifier fusion can be performed at either the score level (raw classifier outputs) or the decision level. While score-level fusion has attracted tremendous research interest, work on decision-level fusion is sparse. This paper presents a particle swarm optimization based decision-level fusion scheme for optimizing classifier fusion performance. Multiple classifiers are fused at the decision level, and the particle swarm optimization algorithm finds the optimal decision threshold for each classifier and the optimal fusion rule. Specifically, we present an optimal fusion strategy for fusing multiple classifiers to satisfy accuracy performance requirements, as applied to a real-world classification problem. The optimal decision fusion technique is found to perform significantly better than conventional classifier fusion methods, i.e., traditional decision-level fusion and the averaged sum rule.
Title: Improving Classifier Fusion Using Particle Swarm Optimization
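The search space described above — a decision threshold per classifier plus a fusion rule — can be sketched without the PSO machinery: threshold each classifier's score, combine the binary decisions with a Boolean rule, and pick the combination with the best accuracy. Here an exhaustive grid search stands in for the particle swarm optimizer, on invented scores and labels:

```python
from itertools import product

def fuse(s1, s2, t1, t2, rule):
    # Decision-level fusion: threshold each score, then combine decisions.
    return [rule(a >= t1, b >= t2) for a, b in zip(s1, s2)]

def best_fusion(s1, s2, labels, grid):
    # Stand-in for the PSO search: exhaustively pick per-classifier
    # thresholds and the fusion rule maximizing accuracy.
    rules = {"AND": lambda a, b: a and b, "OR": lambda a, b: a or b}
    best = None
    for t1, t2, name in product(grid, grid, rules):
        decisions = fuse(s1, s2, t1, t2, rules[name])
        acc = sum(d == y for d, y in zip(decisions, labels)) / len(labels)
        if best is None or acc > best[0]:
            best = (acc, t1, t2, name)
    return best

# Invented scores from two classifiers and ground-truth labels
scores1 = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]
scores2 = [0.6, 0.9, 0.4, 0.1, 0.2, 0.8]
labels = [True, True, False, False, True, False]
print(best_fusion(scores1, scores2, labels, grid=[0.25, 0.5, 0.75]))
```

PSO replaces this exhaustive loop when the number of classifiers grows and the joint threshold-rule space becomes too large to enumerate.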