Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659916
Title: On Modifications Towards Improvement of the Exploitation Phase for SOMA Algorithm with Clustering-aided Migration and Adaptive Perturbation Vector Control
Authors: T. Kadavy, Adam Viktorin, Michal Pluhacek, S. Kovár
Published in: 2021 IEEE Symposium Series on Computational Intelligence (SSCI)
This paper presents the next step in the development of the recently proposed single-objective metaheuristic algorithm, the Self-Organizing Migrating Algorithm with CLustering-aided migration and adaptive Perturbation vector control (SOMA-CLP). The CEC 2021 single-objective bound-constrained optimization benchmark testbed was used to evaluate the performance of the modified algorithm. The presented modifications were motivated by the results of the CEC 2021 competition, where SOMA-CLP ranked 7th out of 9 competing algorithms. This paper introduces three modifications of the population organization process, focusing on one particular phase of the SOMA-CLP algorithm aimed at exploitation. All results were compared and tested for statistical significance against the original variant using the Friedman rank test. The modifications and the analysis of the results presented here may inspire other researchers working on the development and modification of evolutionary computing techniques.
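The Friedman rank test used above to compare algorithm variants is available in SciPy. A minimal sketch (the error values below are synthetic, invented for illustration, not the paper's data):

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Hypothetical final error values of 4 algorithm variants on 6 benchmark
# functions (rows: functions, columns: variants). Invented for illustration.
rng = np.random.default_rng(0)
errors = rng.random((6, 4))
errors[:, 0] *= 0.5  # pretend variant 0 tends to do better

# friedmanchisquare expects one 1-D sample per variant, paired by function
stat, p = friedmanchisquare(*(errors[:, j] for j in range(4)))
print(f"Friedman statistic = {stat:.3f}, p-value = {p:.3f}")
```

A small p-value indicates at least one variant ranks consistently differently across the benchmark functions; post-hoc tests are then needed to say which.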
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9660021
Title: A Comparative Study on Population-Based Evolutionary Algorithms for Multiple Traveling Salesmen Problem with Visiting Constraints
Authors: Cong Bao, Qiang Yang, Xudong Gao, Jun Zhang
The multiple traveling salesmen problem with visiting constraints (VCMTSP) is an extension of the multiple traveling salesmen problem (MTSP) in which some cities may be visited only by certain salesmen, a restriction common in real-world applications. In the literature, evolutionary algorithms (EAs) have been shown to solve MTSP effectively. In this paper, we adapt three EAs widely used for MTSP, namely the genetic algorithm (GA), ant colony optimization (ACO), and the artificial bee colony algorithm (ABC), to solve VCMTSP, and conduct extensive experiments to investigate their optimization performance. Experimental results on various VCMTSP instances demonstrate that, thanks to its strong local exploitation ability, ABC performs much better than the other two algorithms, especially on large-scale VCMTSP. Although GA and ACO are effective on small-scale VCMTSP, their effectiveness degrades drastically on large-scale instances. In particular, local exploitation is found to be vital for EAs to solve VCMTSP effectively. Given these observations, this paper is expected to provide a basic guideline for new researchers entering this area.
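The visiting constraints can be encoded as a per-city set of permitted salesmen. A hedged sketch of a feasibility-aware greedy assignment, the kind of construction an EA might use for initial solutions (toy data invented for illustration, not from the paper):

```python
# Each city lists which salesmen may visit it (VCMTSP visiting constraints).
# Greedy construction: assign each city to the permitted salesman with the
# fewest cities so far. Toy instance, invented for illustration.
allowed = {  # city -> set of salesmen permitted to visit it
    "A": {0, 1}, "B": {1}, "C": {0}, "D": {0, 1, 2}, "E": {2},
}

def greedy_assign(allowed, n_salesmen):
    tours = {s: [] for s in range(n_salesmen)}
    for city in sorted(allowed):
        # among permitted salesmen, pick the least-loaded (ties -> lowest id)
        s = min(sorted(allowed[city]), key=lambda k: len(tours[k]))
        tours[s].append(city)
    return tours

tours = greedy_assign(allowed, 3)
# every city ends up with a salesman who is allowed to visit it
assert all(s in allowed[c] for s, t in tours.items() for c in t)
print(tours)
```

A real EA would additionally optimize the visiting order within each tour; the point here is only that feasibility checking against the constraint sets is cheap.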
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659548
Title: Feature Selection for Fuzzy Neural Networks using Group Lasso Regularization
Authors: Tao Gao, Xiao Bai, Liang Zhang, Jian Wang
In this paper, a Group Lasso penalty-based embedded/integrated feature selection method for multiple-input multiple-output (MIMO) Takagi-Sugeno (TS) fuzzy neural networks (FNNs) is proposed. Group Lasso regularization produces sparsity on the widths of the modified Gaussian membership function, which guides the selection of useful features. Compared with Lasso, the Group Lasso formulation applies a group penalty to the set of widths (weights) connected to a particular feature. To address the non-differentiability of the Group Lasso term, a smoothing Group Lasso method is introduced. Finally, one benchmark classification problem and two regression problems are used to validate the effectiveness of the proposed method.
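The Group Lasso penalty sums the L2 norms of per-feature weight groups, so an entire group can shrink to zero together. A minimal NumPy sketch of the penalty and an epsilon-smoothed surrogate (epsilon smoothing is one common choice; the paper's exact smoothing may differ):

```python
import numpy as np

def group_lasso(W):
    """Sum of L2 norms of the rows of W; each row holds the widths
    (weights) connected to one input feature."""
    return sum(np.linalg.norm(row) for row in W)

def smoothed_group_lasso(W, eps=1e-4):
    """Differentiable surrogate: sqrt(||w||^2 + eps) per group, so the
    gradient exists even when a whole group shrinks to zero."""
    return sum(np.sqrt(row @ row + eps) for row in W)

W = np.array([[0.0, 0.0, 0.0],    # feature 1: fully pruned group
              [0.5, -0.2, 0.1]])  # feature 2: active group
print(group_lasso(W), smoothed_group_lasso(W))
```

Because the penalty is applied per feature rather than per weight, a feature whose whole group reaches zero can be dropped from the network outright, which is what makes the method a feature selector rather than a generic sparsifier.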
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659951
Title: Monte Carlo Skill Estimation for Darts
Authors: Thomas Miller, Christopher Archibald
In physical games, like darts, the ability of a player to accurately execute an intended action has a significant impact on their success. Determining this execution precision, or skill, for players is thus an important task. Knowledge of skill can be used for player feedback, computer-aided strategy decisions, game handicapping, and opponent modeling. Challenges to estimating player ability include getting precise feedback on executed actions as well as performing the estimation in a natural and user-friendly way. A previous method for estimating skill in darts overcomes the first challenge, but falls short on the second, requiring players to throw 50 darts at the center of the dartboard, which is not a common target in most darts games. In this paper we present an extension of this previous method that enables skill to be estimated when darts are aimed anywhere, not just the center of the dartboard. This method is then utilized to develop a much more efficient and adaptive skill estimation method which requires far fewer darts than the previous method. Experimental results demonstrate the advantages of the proposed approach and additional possible applications are discussed.
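When each dart's aim point is known, execution spread can be estimated from aim-relative errors regardless of where the player aimed. A sketch under a deliberately simple assumption, isotropic Gaussian execution noise, which is not necessarily the paper's model (all numbers synthetic):

```python
import numpy as np

rng = np.random.default_rng(42)
true_sigma = 12.0  # mm; synthetic "player skill", invented for illustration

# darts aimed at many different targets, not just the board centre
aims = rng.uniform(-100.0, 100.0, size=(200, 2))
hits = aims + rng.normal(0.0, true_sigma, size=aims.shape)

# MLE of sigma for isotropic 2-D Gaussian noise: pool both error coordinates
errors = hits - aims
sigma_hat = np.sqrt(np.mean(errors ** 2))
print(f"estimated sigma = {sigma_hat:.1f} mm")
```

Because the estimate uses only the offset between hit and aim, the darts can target any board location, which is the property the paper's extension provides over the fixed-centre protocol.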
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659957
Title: Parallel Population-Based Simulated Annealing for High-Dimensional Black-Box Optimization
Authors: Youkui Zhang, Qiqi Duan, Chang Shao, Yuhui Shi
In this paper, we present a simple yet efficient parallel version of simulated annealing (SA) for large-scale black-box optimization within the popular population-based framework. To achieve scalability, we adopt the island model, commonly used in parallel evolutionary algorithms, to update and communicate multiple independent SA instances. To maximize efficiency, a copy-on-write operator is used to avoid expensive locking when different instances exchange solutions. For better local search ability, individual step sizes are dynamically adjusted and learned during decomposition. Furthermore, we utilize shared memory to reduce data redundancy and support concurrent fitness evaluations for challenging problems with costly memory consumption. Experiments based on the powerful Ray distributed computing library empirically demonstrate the effectiveness and efficiency of our parallel version on a set of 2000-dimensional benchmark functions, each rotated with a 2000×2000 orthogonal matrix. To the best of our knowledge, these rotated functions with a memory-expensive data matrix were not tested in previous works, which considered only much lower dimensions. For reproducibility and benchmarking, the source code is made available at https://github.com/Evolutionary-Intelligence/PPSA.
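The island model can be sketched sequentially: several independent SA instances run their own Metropolis chains and periodically exchange the best solution. A toy stand-in (the paper runs islands in parallel with Ray and copy-on-write exchange; the sphere function, cooling schedule, and all parameters here are invented for illustration):

```python
import numpy as np

def sphere(x):
    """Toy objective; the paper's rotated benchmarks are far harder."""
    return float(x @ x)

def sa_step(x, temp, rng, step=0.5):
    """One simulated-annealing move with Metropolis acceptance."""
    cand = x + rng.normal(0.0, step, size=x.shape)
    delta = sphere(cand) - sphere(x)
    if delta < 0 or rng.random() < np.exp(-delta / temp):
        return cand
    return x

rng = np.random.default_rng(1)
dim, n_islands = 10, 4
islands = [rng.normal(0.0, 5.0, size=dim) for _ in range(n_islands)]
f0 = min(sphere(x) for x in islands)

temp = 10.0
for it in range(300):
    islands = [sa_step(x, temp, rng) for x in islands]
    temp *= 0.99  # geometric cooling schedule
    if it % 50 == 49:  # "migration": every island receives the current best
        best = min(islands, key=sphere)
        islands = [best.copy() for _ in islands]

print(f0, "->", min(sphere(x) for x in islands))
```

In the parallel setting each island becomes a worker process, and migration is where the copy-on-write exchange avoids locking shared state.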
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659902
Title: Privacy Preserving Modified Projection Subgradient Algorithm for Multi-Agent Online Optimization
Authors: Jiaojiao Yan, Jinde Cao
This paper studies the distributed online optimization problem with privacy preservation over a multi-agent system whose communication topology is a fixed, strongly connected digraph. We only assume that the weight matrix is row stochastic, which relaxes the doubly stochastic assumption made in some of the literature and is easier to implement than a column stochastic weight matrix. A virtual agent is associated with each agent; it communicates only with that agent and performs the iterative gradient update, while the original agent communicates only with its original neighbors and its virtual agent. A distributed online algorithm is designed by combining a gradient readjustment technique with the distributed projection subgradient method. It is proved that the proposed algorithm achieves privacy preservation while attaining a sublinear regret bound. Finally, an example is provided to validate the performance of the algorithm.
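The building block of projection subgradient methods is the update x ← Π_X(x − η_t g), a (sub)gradient step followed by projection back onto the feasible set. A single-agent toy sketch with projection onto a Euclidean ball (problem data invented for illustration; the paper's multi-agent, privacy-preserving mechanics are not reproduced here):

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the ball ||x|| <= radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

# minimise f(x) = ||x - target||^2 over the unit ball; target lies outside,
# so the constrained optimum is target / ||target||
target = np.array([3.0, 4.0])
x = np.zeros(2)
for t in range(1, 501):
    grad = 2.0 * (x - target)
    x = project_ball(x - (1.0 / t) * grad)  # diminishing step size 1/t

print(x)  # converges to target / ||target|| = [0.6, 0.8]
```

The diminishing step size is also what drives the sublinear regret bounds typical of online projection subgradient analyses.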
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659904
Title: Predicting Patient Discharge Disposition in Acute Neurological Care
Authors: Charles F. Mickle, D. Deb
Acute neurological complications are among the leading causes of death and disability in the U.S., and the medical professionals who treat patients in this setting are tasked with deciding where (e.g., home or facility), how, and when to discharge these patients. It is important to predict these potential discharge outcomes ahead of time and to know what factors influence discharge planning for adults receiving care for neurological conditions in an acute setting. The goal of this study is to develop predictive models that explore which patient characteristics and clinical variables significantly influence discharge planning, with the hope that the models can be used in a suggestive context to help guide healthcare providers in planning effective, equitable discharge recommendations. Our methodology centers on building and training five different machine learning models, then testing and tuning them to find the best-suited predictor, using a dataset of 5,245 adult patients with neurological conditions taken from the eICU-CRD database. The results show XGBoost to be the most effective model for predicting among four common discharge outcomes of 'home', 'nursing facility', 'rehab', and 'death', with a 71% average c-statistic. This research also explores the accuracy, reliability, and interpretability of the best-performing model by identifying and analyzing the features that are most impactful to the predictions.
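The reported "average c-statistic" for a four-class outcome is a macro-averaged one-vs-rest AUC, which scikit-learn computes directly. A sketch on synthetic labels and probabilities (not the eICU-CRD data, and not the paper's XGBoost model):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# four outcome classes: home, nursing facility, rehab, death
y_true = rng.integers(0, 4, size=300)

# synthetic predicted class probabilities, made mildly informative
logits = rng.normal(size=(300, 4))
logits[np.arange(300), y_true] += 1.0
proba = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# macro-averaged one-vs-rest AUC ("average c-statistic")
auc = roc_auc_score(y_true, proba, multi_class="ovr", average="macro")
print(f"average c-statistic = {auc:.2f}")
```

Each class's AUC is computed against the rest and the four values are averaged, so a single weak class (e.g., a rare outcome like 'death') pulls the reported average down.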
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9660050
Title: A Tuning Free Approach to Multi-guide Particle Swarm Optimization
Authors: Kyle Erwin, A. Engelbrecht
Multi-guide particle swarm optimization (MGPSO) is a highly competitive algorithm for multi-objective optimization problems. MGPSO has been shown to perform better than or similar to several state-of-the-art multi-objective algorithms for a variety of multi-objective optimization problems (MOOPs). When comparing algorithmic performance it is recommended that the control parameters of each algorithm be tuned to the problem. However, control parameter tuning is often an expensive and time-consuming process. Recent work has derived the theoretical stability conditions on the MGPSO control parameters to guarantee order-1 and order-2 stability. This paper investigates an approach to randomly sample control parameter values for MGPSO that satisfy these stability conditions. It was shown that the proposed approach yields similar performance to that of MGPSO using tuned parameters, and therefore is a viable alternative to parameter tuning.
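Sampling parameters from a stability region amounts to rejection sampling against the stability inequalities. A sketch using the classical PSO order-1 stability region (−1 < w < 1 and 0 < c1 + c2 < 2(1 + w)) as a stand-in; the MGPSO-specific conditions derived in the cited work differ in detail:

```python
import numpy as np

def sample_stable_pso_params(rng):
    """Rejection-sample (w, c1, c2) inside the classical PSO order-1
    stability region: -1 < w < 1 and 0 < c1 + c2 < 2 * (1 + w).
    Stand-in for the MGPSO-specific conditions; illustration only."""
    while True:
        w = rng.uniform(-1.0, 1.0)
        c1 = rng.uniform(0.0, 2.0)
        c2 = rng.uniform(0.0, 2.0)
        if 0.0 < c1 + c2 < 2.0 * (1.0 + w):
            return w, c1, c2

rng = np.random.default_rng(7)
params = [sample_stable_pso_params(rng) for _ in range(100)]
print(params[0])
```

Every sampled triple is guaranteed stable by construction, which is what removes the need for per-problem tuning.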
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659541
Title: Set-based Particle Swarm Optimization for Portfolio Optimization with Adaptive Coordinate Descent Weight Optimization
Authors: Kyle Erwin, A. Engelbrecht
Set-based algorithms have been shown to find optimal solutions to the portfolio optimization problem and to scale well to larger portfolio optimization problems. Set-based algorithms work by selecting a subset of assets from the asset universe; these assets then form a new search space in which the asset weights are optimized. Erwin and Engelbrecht proposed such an algorithm, set-based particle swarm optimization (SBPSO), which was shown to perform similarly to a well-known genetic algorithm for portfolio optimization. Unlike previous set-based approaches to portfolio optimization, SBPSO used a metaheuristic for the weight optimization process. Erwin and Engelbrecht also developed several modifications to SBPSO that improved its performance for portfolio optimization. This paper investigates an alternative weight optimizer for SBPSO, namely adaptive coordinate descent (ACD). ACD is a completely deterministic approach and thus ensures that, after a finite time, an approximation of a global optimum will be found. It is shown that SBPSO using ACD for weight optimization found higher-quality solutions than the current SBPSO algorithm, albeit slightly more slowly.
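Once a subset of assets is fixed, weight optimization reduces to a low-dimensional search over the simplex. A plain coordinate-descent sketch on a toy mean-variance objective (covariance matrix, expected returns, and risk trade-off q all invented for illustration; ACD additionally adapts the coordinate system during the search, which this sketch omits):

```python
import numpy as np

# toy mean-variance objective: minimise w' Cov w - q * mu'w over the simplex
cov = np.array([[0.10, 0.02, 0.01],
                [0.02, 0.08, 0.03],
                [0.01, 0.03, 0.12]])
mu = np.array([0.05, 0.07, 0.06])
q = 0.5

def objective(w):
    return w @ cov @ w - q * (mu @ w)

w = np.full(3, 1.0 / 3.0)
grid = np.linspace(0.01, 1.0, 100)
for _ in range(50):
    for i in range(3):
        # 1-D grid search on coordinate i; renormalise to stay on the simplex
        def moved(v):
            trial = w.copy()
            trial[i] = v
            return trial / trial.sum()
        w = moved(grid[int(np.argmin([objective(moved(v)) for v in grid]))])

print(w, objective(w))
```

Coordinate descent touches one weight per step, so each sweep costs only a handful of objective evaluations, which is why deterministic coordinate schemes are attractive as inner weight optimizers.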
Pub Date: 2021-12-05  DOI: 10.1109/SSCI50451.2021.9659953
Title: Supervised Noise Reduction for Clustering on Automotive 4D Radar
Authors: Michael Lutz, Monsij Biswal
In the automotive industry, radar technology is an essential component of object identification due to its low cost and robust accuracy in harsh weather conditions. Clustering, an unsupervised machine learning technique, groups individual radar responses together to detect objects. Because clustering is a significant step in the automotive object identification pipeline, cluster quality and speed are especially critical. To that end, density-based clustering algorithms have made significant progress thanks to their ability to operate on data sets with an unknown number of clusters. However, many density-based clustering algorithms such as DBSCAN remain unable to deal with inherently noisy radar data, and many existing algorithms are not adapted to operate on state-of-the-art 4D radar systems. Thus, we propose a novel pipeline that uses supervised machine learning to predict noisy points in 4D radar point clouds by leveraging historical data. We then feed the noise predictions into two proposed cluster formation approaches, involving dynamic and fixed search radii, respectively. Our best-performing model performs roughly 153 percent better than the baseline DBSCAN in terms of V-measure, and our quickest model finishes in 75 percent less time than DBSCAN while performing 130 percent better in terms of V-measure.
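The DBSCAN baseline and the V-measure metric are both available in scikit-learn. A 2-D toy sketch (real 4D radar detections carry range, azimuth, elevation, and Doppler; the blobs and noise here are synthetic stand-ins):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.metrics import v_measure_score

rng = np.random.default_rng(3)

# two synthetic "objects" as dense blobs plus sparse uniform noise returns
obj1 = rng.normal([0.0, 0.0], 0.2, size=(40, 2))
obj2 = rng.normal([3.0, 3.0], 0.2, size=(40, 2))
noise = rng.uniform(-2.0, 5.0, size=(10, 2))
points = np.vstack([obj1, obj2, noise])
truth = np.array([0] * 40 + [1] * 40 + [2] * 10)  # class 2 = noise

# DBSCAN labels dense regions 0, 1, ... and sparse points -1 (noise)
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
print("V-measure vs ground truth:", round(v_measure_score(truth, labels), 3))
```

V-measure is the harmonic mean of homogeneity and completeness, so both mislabeled noise and split objects lower it, which makes it a reasonable single score for comparing the clustering pipelines.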