Abstract: To address the deficiencies of the Harris hawk optimization algorithm (HHO) in solving multi-objective optimization problems, namely low accuracy, slow convergence, and a tendency to become trapped in local optima, a multi-strategy improved multi-objective Harris hawk optimization algorithm with elite opposition-based learning (MO-EMHHO) is proposed. First, the population is initialized with Sobol sequences to increase population diversity. Second, an elite opposition-based learning strategy is incorporated to further improve population diversity and quality. Third, an external archive maintenance method based on an adaptive grid strategy is proposed so that the solution set converges more closely to the true Pareto front. Next, the update strategy of the original algorithm is modified with a nonlinear energy update to improve the balance between exploration and exploitation. Finally, an adaptive mutation strategy based on Gaussian random walks improves the diversity of the algorithm and the uniformity of the solution set. Experimental comparison with the multi-objective particle swarm optimization algorithm (MOPSO), the multi-objective grey wolf optimizer (MOGWO), and the multi-objective Harris hawk optimization algorithm (MOHHO) on commonly used benchmark functions shows that MO-EMHHO outperforms the compared algorithms in accuracy, convergence speed, and stability, providing a new approach to multi-objective optimization problems.
{"title":"Multi-strategy Improved Multi-objective Harris Hawk Optimization Algorithm with Elite Opposition-based Learning","authors":"Fulin Tian, Jiayang Wang, Fei Chu, Lin Zhou","doi":"10.1145/3590003.3590030","DOIUrl":"https://doi.org/10.1145/3590003.3590030","url":null,"abstract":"Abstract: To make up for the deficiencies of the Harris hawk optimization algorithm (HHO) in solving multi-objective optimization problems with low algorithm accuracy, slow rate of convergence, and easily fall into the trap of local optima, a multi-strategy improved multi-objective Harris hawk optimization algorithm with elite opposition-based learning (MO-EMHHO) is proposed. First, the population is initialized by Sobol sequences to increase population diversity. Second, incorporate the elite backward learning strategy to improve population diversity and quality. Further, an external profile maintenance method based on an adaptive grid strategy is proposed to make the solution better contracted to the real Pareto frontier. Subsequently, optimize the update strategy of the original algorithm in a non-linear energy update way to improve the exploration and development of the algorithm. Finally, improving the diversity of the algorithm and the uniformity of the solution set using an adaptive variation strategy based on Gaussian random wandering. 
Experimental comparison of the multi-objective particle swarm algorithm (MOPSO), multi-objective gray wolf algorithm (MOGWO), and multi-objective Harris Hawk algorithm (MOHHO) on the commonly used benchmark functions shows that the MO-EMHHO outperforms the other compared algorithms in terms of optimization seeking accuracy, convergence speed and stability, and provides a new solution to the multi-objective optimization problem.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124968430","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
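The elite opposition-based learning step can be illustrated in a few lines. The following is a minimal numpy sketch, assuming the commonly used formulation x' = k(a + b) - x, where a and b are the per-dimension bounds of the elite individuals and k is a uniform random coefficient; the population values and the function interface are hypothetical, not taken from the paper:

```python
import numpy as np

def elite_opposition(pop, elite_idx, rng=None):
    """Elite opposition-based learning: reflect each individual across the
    interval spanned by the elite individuals' per-dimension bounds,
    using the common formulation x' = k * (a + b) - x with k ~ U(0, 1)."""
    rng = np.random.default_rng(rng)
    elite = pop[elite_idx]
    a, b = elite.min(axis=0), elite.max(axis=0)  # dynamic bounds from elites
    k = rng.random()
    return k * (a + b) - pop

# Toy population of 3 individuals in 2 dimensions; rows 0 and 1 are elites.
pop = np.array([[0.2, 0.8], [0.6, 0.4], [0.9, 0.1]])
opp = elite_opposition(pop, elite_idx=[0, 1], rng=0)
```

In a full algorithm, the opposite population would be merged with the original and the fitter individuals retained.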
Due to severe interference from illumination and background on the camera during live-line operation of the distribution network robot, it is difficult to match, identify, and locate feature points of target images such as the drainage line. This paper proposes an intelligent perception, recognition, and positioning method for distribution network drainage lines. First, YOLOv4 is used to identify and classify typical parts of the distribution network and determine the two-dimensional position of the operation point. Next, an improved Res-Unet segmentation network performs image segmentation of drainage lines and wires to avoid complex background interference. Finally, binocular vision is applied: the center line of the wire is extracted via image geometric moments, and the wire's spatial three-dimensional coordinates are obtained from the intersection of the lines determined by the wire's image lines and the two camera centers. Target detection, wire segmentation, and operation point positioning experiments show that the method achieves a positioning accuracy of 1 mm in the x and y directions and 3 mm in the z direction in the camera coordinate system, providing a guarantee of accurate perception and recognition and reliable operation control for the power distribution robot.
{"title":"Intelligent perception recognition and positioning method of distribution network drainage line","authors":"Shuzhou Xiao, Qiuyan Zhang, Q. Fan, Jianrong Wu, Chao Zhao","doi":"10.1145/3590003.3590088","DOIUrl":"https://doi.org/10.1145/3590003.3590088","url":null,"abstract":"Due to the serious interference of illumination and background on the camera during the live operation of the distribution network robot, it is difficult to match, identify, and locate the feature points of the target image, such as the drainage line. This paper proposes the intelligent perception recognition and positioning method of the distribution network drainage line. First, YOLOv4 is used to identify and classify the typical parts of the distribution network and determine the two-dimensional position of the operation point. Subsequently, the Res-Unet segmentation network was improved to perform image segmentation of drainage lines and wires to avoid complex background interference. Finally, binocular vision is used to extract the center line of the wire through the image geometric moment and determine the image line of the wire and the center of the double eyes. The intersection line of the wire is the spatial three-dimensional coordinates of the wire. 
After the target detection, wire segmentation, and operation point positioning experiments, this method can achieve a positioning accuracy of 1 mm in the x and y directions and 3 mm in the z direction under the camera coordinate system, which provides a guarantee for accurate perception and recognition and reliable operation control of the power distribution robot operation.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127102150","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
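The binocular localization described above reduces, for a single matched point, to standard rectified-stereo triangulation. The sketch below shows that textbook computation, not the paper's geometric-moment pipeline; the focal length, baseline, and pixel coordinates are hypothetical calibration values chosen for illustration:

```python
def stereo_point(xl, xr, y, f, baseline, cx, cy):
    """Triangulate a 3D point (camera coordinates) from a rectified stereo
    pair. (xl, y) and (xr, y) are matched pixel coordinates in the left and
    right images, f the focal length in pixels, baseline the camera
    separation in metres, and (cx, cy) the principal point."""
    d = xl - xr                      # disparity in pixels
    if d <= 0:
        raise ValueError("disparity must be positive")
    Z = f * baseline / d             # depth from similar triangles
    X = (xl - cx) * Z / f
    Y = (y - cy) * Z / f
    return X, Y, Z

# Hypothetical calibration: f = 800 px, 10 cm baseline, principal point (320, 240).
X, Y, Z = stereo_point(xl=400, xr=360, y=260, f=800, baseline=0.1, cx=320, cy=240)
```

Repeating this for points sampled along the extracted wire center line would yield the wire's 3D course.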
Clinically acquired heart sound signals inevitably contain noise, and the statistical features of this noise differ from those of heart sounds, so a heart sound classification algorithm based on sub-band statistics and time-frequency fusion features is proposed. First, the statistical moments (mean, variance, skewness, and kurtosis), the normalized correlation coefficients between sub-bands, and the sub-band modulation spectrum are extracted from each sub-band envelope of the heart sound signal, and these three features are fused by Z-score normalization. A convolutional neural network classification model is then constructed for training and testing. Experimental results show that the accuracy, sensitivity, specificity, and F1 score of the algorithm are 95.12%, 92.27%, 97.93%, and 94.95%, respectively. The algorithm has great potential for machine-aided diagnosis of precordial diseases.
{"title":"Heart Sound Classification Algorithm Based on Sub-band Statistics and Time-frequency Fusion Features","authors":"Xiaoqin Zhang, Weilian Wang","doi":"10.1145/3590003.3590013","DOIUrl":"https://doi.org/10.1145/3590003.3590013","url":null,"abstract":"The clinically acquired heart sound signals always have inevitable noise, and the statistical features of these noises are different from heart sounds, so a heart sound classification algorithm based on sub-band statistics and time-frequency fusion features is proposed. Firstly, the statistical moments (mean, variance, skewness and kurtosis), normalized correlation coefficients between sub-band and sub-band modulation spectrum are extracted from each sub-band envelope of the heart sound signal, and these three features are fused into fusion features by Z-score normalization method. Finally, a convolutional neural network classification model is constructed, which are used for training and testing. The experimental results showed that the accuracy, sensitivity, specificity and F1 score of the algorithm were 95.12%, 92.27%, 97.93% and 94.95%, respectively. It has great potential in machine-aided diagnosis of precordial diseases.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114268719","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The COVID-19 epidemic has been raging overseas for more than three years, and inbound goods and people have become the main risk points of the domestic epidemic. As the main window for China's exchange of materials and personnel with foreign countries, and under the dual pressure of the global economic downturn and the China-US economic confrontation, ports bear a particularly heavy responsibility for ensuring material transportation and foreign trade. However, manual risk screening of ship and crew epidemic information is extremely time-consuming and labor-intensive, and it is difficult to satisfy both the efficiency and accuracy requirements of the port's own business and those of disease control and traceability. To this end, this study proposes an epidemic risk screening method based on knowledge graphs. Built on shipping big data and community detection algorithms, the method analyzes the geospatial similarity of ship information, crew information, and real-time epidemic policy information, quickly constructs structured graph data, screens high-risk ships and crew members, and interfaces with the business system to arrange nucleic acid testing tasks. At a time cost of only one thousandth of manual labor, its detection accuracy approaches and exceeds that of manual screening, with an average precision advantage of 8.18% and an average speed advantage of 1423 times. The method also handles heavy screening tasks better than manual work: as the amount of measured data increases, its AUC declines at only 34% of the rate of the manual method. The research results have been applied initially at Ningbo Port, greatly improving the informatization level and efficiency of the port's risk screening during the COVID-19 epidemic.
{"title":"Research on Epidemic Big Data Monitoring and Application of Ship Berthing Based on Knowledge Graph-Community Detection","authors":"Dongfang Shang, Yuesong Li, Jiashuai Xu, Kexin Bao, Ruixi Wang, Liu Qin","doi":"10.1145/3590003.3590026","DOIUrl":"https://doi.org/10.1145/3590003.3590026","url":null,"abstract":"The COVID-19 epidemic has been raging overseas for more than three years, and inbound goods and people have become the main risk points of the domestic epidemic. As the main window for China to exchange materials and personnel with foreign countries, under the dual pressure of the global economic downturn and the China-US economic confrontation, ports’ pressure and responsibility to ensure material transportation and foreign trade are particularly heavy. However, the risk screening of ship and crew epidemic information based on manual methods is extremely time-consuming and labor-intensive, and it is difficult to take into account the efficiency and accuracy requirements of the port's own business and disease control and traceability. To this end, this study proposes an epidemic risk screening method based on knowledge graphs. This method is based on shipping big data and community discovery algorithms, analyzes the geospatial similarity of ship information, crew information and real-time epidemic policy information, and quickly establishes a structure. Map data, quickly screen high-risk ships and crew members, and access the business system to arrange nucleic acid testing tasks. When the time cost is only one thousandth of that of manual labor, the detection accuracy rate approaches and exceeds the accuracy level of manual screening, with an average precision advantage of 8.18% and an average time advantage of 1423 times. It is further found that it is more capable of performing heavy screening tasks than humans, and its AUC decline rate with the increase of the amount of measured data is only 34% of that of the manual method. 
The research results have been initially applied in Ningbo Port, which has greatly improved the informatization level and screening efficiency of Ningbo Port's risk screening during COVID-19 epidemic.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130136017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
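One building block of the geospatial similarity analysis mentioned above is distance between ship positions. The sketch below uses the standard haversine great-circle distance and a simple distance-to-similarity mapping; the similarity scale and the example coordinates are illustrative assumptions, not values from the paper:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points,
    a building block for geospatial similarity between ship positions."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def geo_similarity(p, q, scale_km=100.0):
    """Map distance to a similarity in (0, 1]; identical positions give 1."""
    return 1.0 / (1.0 + haversine_km(*p, *q) / scale_km)

# Approximate coordinates: Ningbo (29.87 N, 121.54 E) vs. Shanghai (31.23 N, 121.47 E).
sim = geo_similarity((29.87, 121.54), (31.23, 121.47))
```

Such pairwise similarities could then feed an edge-weighted graph on which a community detection algorithm groups ships with shared exposure risk.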
Visual position and attitude measurement (VPAM) systems are widely used to obtain space target information. Selecting a correct and effective measurement algorithm is particularly important for obtaining different kinds of target information and meeting application requirements. This paper designs performance evaluation software for VPAM algorithms that compares and analyzes the accuracy and complexity of the algorithms used by different VPAM models, helping users select an appropriate model to obtain more accurate target information. The software is verified with the dual photogrammetric model in a shipborne helicopter landing system, and its validity is confirmed by comparing the computed results with the theoretical values from the algorithm accuracy analysis. The main contribution of this paper is that, to our knowledge, it is the first attempt to evaluate the complexity and accuracy of such algorithms by building analysis software rather than by theoretical analysis alone.
{"title":"An Analysis Software for Visual Position and Attitude Measurement Algorithm","authors":"Tao-rang Xu, Jing Zhang, Bin Cai, Yafei Wang","doi":"10.1145/3590003.3590043","DOIUrl":"https://doi.org/10.1145/3590003.3590043","url":null,"abstract":"Visual position and attitude measurement (VPAM) system has been widely used in obtaining space target information. In order to better obtain different target information and meet the requirements, it is particularly important to select a correct and effective measurement algorithm. In this paper, a performance evaluation software of VPAM algorithm is designed, which can compare and analyze the accuracy and complexity of algorithms used by different VPAM models, and help users select appropriate position models to obtain more accurate target information. Finally, the software is verified by using the dual photogrammetric model in the shipborne helicopter landing system, and the validity of the analysis software is verified by comparing the calculation results with the theoretical value of the algorithm accuracy analysis. The main contribution of this paper is that, as far as we know, it is the first time to try to evaluate the complexity and accuracy of the algorithm by building analysis software instead of theoretical analysis.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130249393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In video recommendation scenarios, knowledge graphs are usually introduced to supplement the relational information between videos, expanding the available information and alleviating data sparsity and user cold start. However, few high-quality knowledge graphs are available in the video recommendation field, and the many knowledge graph embedding schemes affect recommendation performance differently, creating difficulties for researchers. Based on data from a streaming video website, this paper constructs knowledge graphs for two typical scenarios (sparse distribution and dense distribution). Six state-of-the-art knowledge graph embedding methods are then analyzed through extensive experiments along three dimensions, data distribution type, data set segmentation method, and range of the number of recommendations, comparing the recommendation performance of the embedding methods. The experimental results show that TransE performs best in the sparse distribution scenario, while TransE or TransD performs best in the dense distribution scenario. This provides a reference for researchers choosing knowledge graph embedding methods under specific data distributions.
{"title":"Comparative Research on Embedding Methods for Video Knowledge Graph","authors":"Zhihong Zhou, Qiang Xu, Hui Ding, Shengwei Ji","doi":"10.1145/3590003.3590049","DOIUrl":"https://doi.org/10.1145/3590003.3590049","url":null,"abstract":"In the video recommendation scenario, knowledge graphs are usually introduced to supplement the data information between videos to achieve information expansion and solve the problems of data sparsity and user cold start. However, there are few high-quality knowledge graphs available in the field of video recommendation, and there are many schemes based on knowledge graph embedding, which have different effects on recommendation performance and bring difficulties to researchers. Based on the streaming media video website data, this paper constructs knowledge graphs of two typical scenarios (i.e., sparse distribution scenarios and dense distribution scenarios ). Moreover, six state-of-the-art knowledge graph embedding methods are analyzed based on extensive experiments from three aspects: data distribution type, data set segmentation method, and recommended quantity range. Comparing the recommendation effect of knowledge graph embedding methods. The experimental results demonstrate that: in the sparse distribution scenario , the recommendation effect using TransE is the best; in the dense distribution scenario, the recommendation effect using TransE or TranD is the best. 
It provides a reference for subsequent researchers on how to choose knowledge map embedding methods under specific data distribution.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131648880","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
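TransE, the method the experiments favor, scores a triple (head, relation, tail) by the distance between the translated head embedding and the tail embedding: lower ||h + r - t|| means a more plausible triple. A minimal numpy sketch, with toy embeddings invented for illustration:

```python
import numpy as np

def transe_score(h, r, t, norm=1):
    """TransE plausibility score: the L1 (or L2) distance ||h + r - t||;
    lower means the triple (head, relation, tail) is more plausible."""
    return np.linalg.norm(h + r - t, ord=norm)

# Toy embeddings for the triple  video_a --same_series--> video_b.
video_a = np.array([0.1, 0.2, 0.3])
same_series = np.array([0.2, 0.0, -0.1])
video_b = np.array([0.3, 0.2, 0.2])
unrelated = np.array([0.9, -0.5, 0.7])

good = transe_score(video_a, same_series, video_b)
bad = transe_score(video_a, same_series, unrelated)
```

Training pushes scores of observed triples below those of corrupted ones; TransD extends this by giving each entity and relation a projection vector so that entities are mapped into relation-specific spaces before translation.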
In the era of big data, growing numbers of IoT devices generate huge amounts of high-dimensional, real-time, dynamic data streams, so there is growing interest in clustering such data effectively and efficiently. Although a number of popular two-stage data stream clustering algorithms have been proposed, they still struggle with real-world data streams: they handle high-dimensional streams poorly and reduce dimensionality ineffectively; their clustering process is too slow to meet real-time requirements; and they rely on too many manually defined parameters to cope with evolving streams. This paper proposes an autoencoder-based fast online clustering algorithm for evolving data streams (AFOCEDS). The algorithm uses a stacked denoising autoencoder to reduce data dimensionality, a multi-threaded design to improve response speed, and a mechanism that automatically updates parameters to cope with evolving streams. Experiments on several realistic data streams show that AFOCEDS outperforms other algorithms in effectiveness and speed.
{"title":"An autoencoder-based fast online clustering algorithm for evolving data stream","authors":"Dazheng Gao","doi":"10.1145/3590003.3590020","DOIUrl":"https://doi.org/10.1145/3590003.3590020","url":null,"abstract":"In the era of Big Data, more and more IoT devices are generating huge amounts of high-dimensional, real-time and dynamic data streams. As a result, there is a growing interest in how to cluster this data effectively and efficiently. Although a number of popular two-stage data stream clustering algorithms have been proposed, these algorithms still have some problems that are difficult to solve in the face of real-world data streams: poor handling of high-dimensional data streams and difficulty in effective dimensionality reduction; a slow clustering process that makes it difficult to meet real-time requirements; and too many manually defined parameters that make it difficult to cope with evolving data streams. This paper proposes an autoencoder-based fast online clustering algorithm for evolving data stream(AFOCEDS). The algorithm uses a stacked denoising autoencoder to reduce the dimensionality of the data, a multi-threaded approach to improve response speed, and a mechanism to automatically update parameters to cope with evolving data streams. 
The experiments on several realistic data streams show that AFOCEDS outperforms other algorithms in terms of effectiveness and speed.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124666491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
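The online stage of a two-stage stream clusterer can be sketched as incremental nearest-centroid assignment: each arriving (already dimensionality-reduced) point joins the closest centroid within a radius, updating it with a running mean, or starts a new cluster. This is a generic micro-cluster-style sketch, not the AFOCEDS algorithm itself, and the radius is an assumed parameter:

```python
import numpy as np

class OnlineClusterer:
    """Minimal online clustering: assign each point to the nearest centroid
    within `radius` (updating it incrementally) or open a new cluster."""
    def __init__(self, radius):
        self.radius = radius
        self.centroids, self.counts = [], []

    def add(self, x):
        x = np.asarray(x, dtype=float)
        if self.centroids:
            dists = [np.linalg.norm(x - c) for c in self.centroids]
            i = int(np.argmin(dists))
            if dists[i] <= self.radius:
                self.counts[i] += 1
                # running-mean centroid update, O(d) per point
                self.centroids[i] += (x - self.centroids[i]) / self.counts[i]
                return i
        self.centroids.append(x.copy())
        self.counts.append(1)
        return len(self.centroids) - 1

oc = OnlineClusterer(radius=1.0)
labels = [oc.add(p) for p in [[0.0, 0.0], [0.2, 0.1], [5.0, 5.0], [5.1, 4.9]]]
```

A second, offline stage would periodically merge or expire these micro-clusters; in AFOCEDS the inputs would first pass through the stacked denoising autoencoder.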
With current advances in machine learning and its growing use in high-impact scenarios, the demand for interpretable and explainable models becomes crucial. Causality research tries to go beyond statistical correlations by focusing on causal relationships, which is fundamental for interpretable and explainable artificial intelligence. In this paper, we perturb the input for explanation surrogates based on causal graphs, presenting an approach that combines surrogate-based explanations with causal knowledge. We apply the perturbed data to the Local Interpretable Model-agnostic Explanations (LIME) approach to show how causal graphs improve explanations of surrogate models, integrating features from both domains by adding a causal component to local explanations. The proposed approach enables explanations that match the expectations of the user, who defines an appropriate causal graph; accordingly, the explanations remain faithful to those expectations. We demonstrate the suitability of our method on real-world data.
{"title":"CIP-ES: Causal Input Perturbation for Explanation Surrogates","authors":"Sebastian Steindl, Martin Surner","doi":"10.1145/3590003.3590107","DOIUrl":"https://doi.org/10.1145/3590003.3590107","url":null,"abstract":"With current advances in Machine Learning and its growing use in high-impact scenarios, the demand for interpretable and explainable models becomes crucial. Causality research tries to go beyond statistical correlations by focusing on causal relationships, which is fundamental for Interpretable and Explainable Artificial Intelligence. In this paper, we perturb the input for explanation surrogates based on causal graphs. We present an approach to combine surrogate-based explanations with causal knowledge. We apply the perturbed data to the Local Interpretable Model-agnostic Explanations (LIME) approach to showcase how causal graphs improve explanations of surrogate models. We thus integrate features from both domains by adding a causal component to local explanations. The proposed approach enables explanations that suit the expectations of the user by having the user define an appropriate causal graph. Accordingly, these expectations are true to the user. We demonstrate the suitability of our method using real world data.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115928293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The pose estimation of space targets is of great significance for space target state assessment, anomaly detection, fault diagnosis, and other tasks. With the development of adaptive optics, the imaging quality of ground-based optical systems has been greatly improved, and observed images can be used to estimate the pose of space targets. However, the imaging process of ground-based optical systems is still affected by various noises and disturbances that degrade the images. For space target pose estimation with such degraded images, we propose a new pose estimation pipeline based on robust geometric structure features. By associating corresponding geometric structure features between consecutive frames, we obtain the target pose by optimization. This paper explains the definition and extraction of the proposed geometric structure feature and proposes a feature prediction method based on set prediction, trained in a multi-task manner with target component classification and segmentation. Experiments show that our structure feature prediction network achieves competitive results on the simulated photo-realistic SpaceShuttle dataset, which is rendered according to the physical imaging process.
{"title":"Pose Estimation of Space Targets Based on Geometry Structure Features","authors":"Xiwen Liu, Shuling Hao, Kefeng Xu","doi":"10.1145/3590003.3590096","DOIUrl":"https://doi.org/10.1145/3590003.3590096","url":null,"abstract":"The pose estimation of space targets is of great significance for space target state assessment, anomaly detection, fault diagnosis, etc. With the development of adaptive optics technology, the imaging quality of ground-based optical systems has been greatly improved, and we can use the observed images to estimate the pose of space targets. However, the imaging process of the ground-based optical system is still affected by various noises and disturbances, which makes the images degrade. Aiming at the space target pose estimation with these degraded images, we propose a new pose estimation pipeline based on robust geometry structure features. By associating the corresponding geometry structure feature between consecutive frames, we can get the target pose by optimization method. This paper will explain the definition and extraction of the proposed geometry structure feature. We propose a geometry structure feature prediction method base on set prediction in a multi-task way with target components classification and segmentation. 
Experiments show that our structure feature prediction network achieves competitive results on the simulated photo-realistic SpaceShuttle dataset which is rendered according to the physics imaging process.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"219 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130420633","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
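Once features are associated between frames, the simplest pose solver is the classic closed-form least-squares alignment of matched 3D point sets (the Kabsch/SVD solution). The sketch below shows that textbook step, not the paper's optimization pipeline; the point sets and the 90-degree test rotation are invented for illustration:

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) aligning point set P onto Q,
    both (n, 3): the classic SVD solution for pose from matched features."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                    # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: rotate points 90 degrees about z, shift, then recover the pose.
P = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
Rz = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
Q = P @ Rz.T + np.array([1.0, 2.0, 3.0])
R, t = kabsch(P, Q)
```

With noisy, partially wrong associations, this closed-form estimate would typically seed an iterative robust optimization such as the one the pipeline performs.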
Many-objective optimization problems (MaOPs) are the most difficult class of multiobjective optimization problems (MOPs). MaOPs pose formidable challenges to current multiobjective evolutionary methods, including the design of selection operators, computational cost, and visualization of the high-dimensional trade-off front. Removing redundant objectives from the original objective set, known as objective reduction, is one of the most significant approaches for MaOPs; by greatly alleviating the challenges faced by existing multi-objective evolutionary computing techniques, it makes optimization problems with more than 15 objectives tractable. In this study, an objective reduction evolutionary multiobjective algorithm using adaptive density-based clustering is presented for MaOPs. The parameters of the density-based clustering are determined adaptively from the constructed data samples. Based on the clustering result, the algorithm employs an adaptive objective aggregation strategy that preserves the structure of the original Pareto front as much as feasible. Finally, the performance of the proposed algorithm on benchmarks is thoroughly investigated. The numerical findings and comparisons demonstrate its efficacy and superiority, and it may be treated as a useful tool for MaOPs.
{"title":"An Objective Reduction Evolutionary Multiobjective Algorithm using Adaptive Density-Based Clustering for Many-objective Optimization Problem","authors":"Mingjing Wang, Long Chen, Huiling Chen","doi":"10.1145/3590003.3590103","DOIUrl":"https://doi.org/10.1145/3590003.3590103","url":null,"abstract":"Many-objective optimization problems (MaOPs), are the most difficult problems to solve when it comes to multiobjective optimization issues (MOPs). MaOPs provide formidable challenges to current multiobjective evolutionary methods such as selection operators, computational cost, visualization of the high-dimensional trade-off front. Removal of the reductant objectives from the original objective set, known as objective reduction, is one of the most significant approaches for MaOPs, which can tackle optimization problems with more than 15 objectives is made feasible by its ability to greatly overcome the challenges of existing multi-objective evolutionary computing techniques. In this study, an objective reduction evolutionary multiobjective algorithm using adaptive density-based clustering is presented for MaOPs. The parameters in the density-based clustering can be adaptively determined by depending on the data samples constructed. Based on the clustering result, the algorithm employs an adaptive strategy for objective aggregation that preserves the structure of the original Pareto front as much as feasible. Finally, the performance of the proposed multiobjective algorithms on benchmarks is thoroughly investigated. 
The numerical findings and comparisons demonstrate the efficacy and superiority of the suggested multiobjective algorithms and it may be treated as a potential tool for MaOPs.","PeriodicalId":340225,"journal":{"name":"Proceedings of the 2023 2nd Asia Conference on Algorithms, Computing and Machine Learning","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128325571","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
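The core idea of objective reduction, dropping objectives that carry the same information, can be illustrated with a greedy correlation-based grouping. This is a deliberately simplified stand-in for the paper's adaptive density-based clustering, and the sample objective values and threshold are invented:

```python
import numpy as np

def reduce_objectives(F, threshold=0.95):
    """Greedy objective reduction: objectives whose values are highly
    positively correlated across the sample F (n_solutions, n_objectives)
    are grouped, and one representative per group is kept."""
    corr = np.corrcoef(F.T)
    m = F.shape[1]
    keep, redundant = [], set()
    for j in range(m):
        if j in redundant:
            continue
        keep.append(j)
        for k in range(j + 1, m):
            if k not in redundant and corr[j, k] >= threshold:
                redundant.add(k)   # k adds no new trade-off vs. objective j
    return keep

rng = np.random.default_rng(1)
f1 = rng.random(50)
F = np.column_stack([f1, 2 * f1 + 1, rng.random(50)])  # f2 duplicates f1
kept = reduce_objectives(F)
```

Here the second objective is an affine copy of the first, so only the first and third survive; a density-based clustering generalizes this by grouping objectives in a learned similarity space with adaptively chosen parameters.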