A new approach for solving global optimization and engineering problems based on modified Sea Horse Optimizer
Fatma A Hashim, Reham R. Mostafa, Ruba Abu Khurma, R. Qaddoura, P. A. Castillo
Sea Horse Optimizer (SHO) is a metaheuristic algorithm that emulates the intelligent behaviors of sea horses, including their feeding patterns, male reproductive strategy, and intricate movement patterns. To mimic sea horse locomotion, SHO combines a logarithmic helical equation with Lévy flight, incorporating both large random steps and refined local exploitation, while Brownian motion supports a more thorough exploration of the search space. This study introduces a robust, high-performance variant of SHO named mSHO. The enhancement strengthens SHO's exploitation phase by replacing its original mechanism with a local search strategy comprising three steps: a neighborhood-based local search, a global non-neighbor-based search, and a step that circumnavigates the current search region. Together, these techniques improve mSHO's ability to traverse the search space and converge efficiently toward optimal solutions. To evaluate its efficacy, mSHO is assessed on the CEC2020 benchmark functions and nine engineering problems, and compared against nine metaheuristic algorithms. Statistical tests, including Wilcoxon's rank-sum and Friedman's tests, are applied to detect significant differences among the compared algorithms. The results consistently show strong performance of mSHO across the benchmark functions, and this robustness persists as problem dimensionality grows. Overall, mSHO attained a total rank of 1 on the CEC2020 test functions and achieved the best values on the engineering problems, recording 0.012665, 2993.634, 0.01266, 1.724967, 263.8915, 0.032255, 58507.14, 1.339956, and 0.23524 for the pressure vessel design, speed reducer design, tension/compression spring, welded beam design, three-bar truss design, industrial refrigeration system, multi-product batch plant, cantilever beam, and multiple disc clutch brake problems, respectively. Source code for mSHO is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/135882-improved-sea-horse-algorithm.
{"title":"A new approach for solving global optimization and engineering problems based on modified Sea Horse Optimizer","authors":"Fatma A Hashim, Reham R. Mostafa, Ruba Abu Khurma, R. Qaddoura, P. A. Castillo","doi":"10.1093/jcde/qwae001","DOIUrl":"https://doi.org/10.1093/jcde/qwae001","url":null,"abstract":"\u0000 Sea Horse Optimizer (SHO) is a noteworthy metaheuristic algorithm that emulates various intelligent behaviors exhibited by sea horses, encompassing feeding patterns, male reproductive strategies, and intricate movement patterns. To mimic the nuanced locomotion of sea horses, SHO integrates the logarithmic helical equation and Levy flight, effectively incorporating both random movements with substantial step sizes and refined local exploitation. Additionally, the utilization of Brownian motion facilitates a more comprehensive exploration of the search space. This study introduces a robust and high-performance variant of the SHO algorithm named mSHO. The enhancement primarily focuses on bolstering SHO's exploitation capabilities by replacing its original method with an innovative local search strategy encompassing three distinct steps: a neighborhood-based local search, a global non-neighbor-based search, and a method involving circumnavigation of the existing search region. These techniques improve mSHO algorithm's search capabilities, allowing it to navigate the search space and converge toward optimal solutions efficiently. To evaluate the efficacy of the mSHO algorithm, comprehensive assessments are conducted across both the CEC2020 benchmark functions and nine distinct engineering problems. A meticulous comparison is drawn against nine metaheuristic algorithms to validate the achieved outcomes. Statistical tests, including Wilcoxon's rank-sum and Friedman's tests, are aptly applied to discern noteworthy differences among the compared algorithms. Empirical findings consistently underscore the exceptional performance of mSHO across diverse benchmark functions, reinforcing its prowess in solving complex optimization problems. Furthermore, the robustness of mSHO endures even as the dimensions of optimization challenges expand, signifying its unwavering efficacy in navigating complex search spaces. The comprehensive results distinctly establish the supremacy and efficiency of the mSHO method as an exemplary tool for tackling an array of optimization quandaries. The results show that the proposed mSHO algorithm has a total rank of 1 for CEC’2020 test functions. In contrast, the mSHO achieved the best value for the engineering problems, recording a value of 0.012665, 2993.634, 0.01266, 1.724967, 263.8915, 0.032255, 58507.14, 1.339956, and 0.23524 for the pressure vessel design, speed reducer design, tension/compression spring, welded beam design, three-bar truss engineering design, industrial refrigeration system, multi-Product batch plant, cantilever beam problem, multiple disc clutch brake problems, respectively. 
Source codes of mSHO are publicly available at https://www.mathworks.com/matlabcentral/fileexchange/135882-improved-sea-horse-algorithm.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"13 13","pages":""},"PeriodicalIF":4.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139388909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
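The abstract above names two locomotion ingredients: Lévy flight for occasional long jumps and a logarithmic helix for fine local moves. The sketch below illustrates both in Python, assuming Mantegna's algorithm for the Lévy step and a whale-style logarithmic spiral around the best solution; the function names, spiral constant, and 50/50 switching rule are illustrative assumptions, not the paper's exact update equations.

```python
import numpy as np
from math import gamma, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-flight step via Mantegna's algorithm (a common
    generic form; not necessarily the exact variant used in SHO/mSHO)."""
    sigma = (gamma(1 + beta) * np.sin(pi * beta / 2)
             / (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def spiral_move(x, x_best, b=1.0):
    """Logarithmic-spiral move around the current best solution
    (the classic whale-style helix; SHO's own helical equation may differ)."""
    l = np.random.uniform(-1, 1, x.size)
    return x_best + np.abs(x - x_best) * np.exp(b * l) * np.cos(2 * pi * l)

def sho_like_update(x, x_best):
    """One illustrative movement step mixing Levy flight (large random
    jumps) with a helical local move, as the abstract describes."""
    if np.random.rand() < 0.5:                  # assumed switching rule
        return x + levy_step(x.size) * (x_best - x)
    return spiral_move(x, x_best)
```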
A Study on Ship Hull Form Transformation Using Convolutional Autoencoder
Jeongbeom Seo, Dayeon Kim, Inwon Lee
Optimal ship hull form design in contemporary practice primarily consists of three parts: hull form modification, performance prediction, and optimization. Hull form modification is a crucial step affecting optimization efficiency, because the baseline hull form is varied to search for performance improvements. Conventional hull form modification methods rely mainly on human decisions and intervention. As a direct expression of the 3-D hull form, the lines are not appropriate for machine learning techniques, because they do not explicitly express a meaningful performance metric despite their relatively large data dimension. To solve this problem and develop a novel machine-based hull form design technique, this study built an autoencoder, a dimensionality-reduction technique based on an artificial neural network. Specifically, a convolutional autoencoder was designed: first, a convolutional neural network (CNN) preprocessor was used to effectively train on the offsets, the half-width coordinate values on the hull surface, and extract feature maps. Second, the stacked encoder compressed the feature maps into a lower-dimensional latent vector. Finally, a transposed convolution layer restored the dimension of the lines. In this study, 21 250 hull forms belonging to three ship types, containership, LNG carrier, and tanker, were used as training data. To describe the hull form in more detail, each hull was divided into several zones, which were input into the CNN preprocessor separately. After training, a low-dimensional manifold consisting of the components of the latent vector was derived to represent the distinctive hull form features of the three ship types. The autoencoder was then combined with a surrogate model to form an objective-function neural network, and further combination with the deterministic particle swarm optimization (DPSO) method led to a successful hull form optimization example. In summary, the present convolutional autoencoder demonstrates its value within a machine learning-based design process for ship hull forms.
{"title":"A Study on Ship Hull Form Transformation Using Convolutional Autoencoder","authors":"Jeongbeom Seo, Dayeon Kim, Inwon Lee","doi":"10.1093/jcde/qwad111","DOIUrl":"https://doi.org/10.1093/jcde/qwad111","url":null,"abstract":"\u0000 The optimal ship hull form in contemporary design practice primarily consists of three parts: hull form modification, performance prediction, and optimization. Hull form modification is a crucial step to affect optimization efficiency because the baseline hull form is varied to search for performance improvements. The conventional hull form modification methods mainly rely on human decisions and intervention. As a direct expression of the 3-D hull form, the lines are not appropriate for machine learning techniques. This is because they do not explicitly express a meaningful performance metric despite their relatively large data dimension. To solve this problem and develop a novel machine-based hull form design technique, an autoencoder, which is a dimensional reduction technique based on an artificial neural network, was created in this study. Specifically, a convolutional autoencoder was designed; firstly, a convolutional neural network (CNN) preprocessor was used to effectively train the offsets, which are the half-width coordinate values on the hull surface, to extract feature maps. Secondly, the stacked encoder compressed the feature maps into an optimal lower-dimensional-latent vector. Finally, a transposed convolution layer restored the dimension of the lines. In this study, 21 250 hull forms belonging to three different ship types of containership, LNG carrier, and tanker, were used as training data. To describe the hull form in more detail, each was divided into several zones, which were then input into the CNN preprocessor separately. After the training, a low-dimensional manifold consisting of the components of the latent vector was derived to represent the distinctive hull form features of the three ship types considered. The autoencoder technique was then combined with another novel approach of the surrogate model to form an objective function neural network. Further combination with the deterministic particle swarm optimization (DPSO) method led to a successful hull form optimization example. In summary, the present convolutional autoencoder has demonstrated its significance within the machine learning-based design process for ship hull forms.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"88 1","pages":""},"PeriodicalIF":4.9,"publicationDate":"2024-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139388176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-strategy enhanced kernel search optimization and its application in economic emission dispatch problems
Ruyi Dong, Yanan Liu, Siwen Wang, A. Heidari, Mingjing Wang, Yi Chen, Shuihua Wang, Huiling Chen, Yu-dong Zhang
The Kernel Search Optimizer (KSO) is a metaheuristic optimization algorithm proposed in recent years. Based on kernel theory, KSO eliminates the need for hyper-parameter adjustment and demonstrates excellent global search capability. However, the original KSO exhibits insufficient accuracy in local search and has a high probability of failing to refine solutions locally in complex tasks. This paper therefore proposes a Multi-Strategy Enhanced Kernel Search Optimizer (MSKSO) to enhance the local search ability of KSO. MSKSO combines several control strategies, including chaotic initialization, a chaotic local search mechanism, the high-altitude walk strategy (HWS), and Lévy flight (LF), to effectively balance exploration and exploitation. MSKSO is compared with ten well-known algorithms on fifty benchmark test functions, including unimodal, multimodal, separable, and non-separable functions, and is additionally applied to two real engineering economic emission dispatch (EED) problems in power systems. Experimental results demonstrate that MSKSO outperforms the other well-known algorithms and achieves favorable results on the EED problems. These case studies verify that MSKSO can serve as an effective optimization tool.
{"title":"Multi-strategy enhanced kernel search optimization and its application in economic emission dispatch problems","authors":"Ruyi Dong, Yanan Liu, Siwen Wang, A. Heidari, Mingjing Wang, Yi Chen, Shuihua Wang, Huiling Chen, Yu-dong Zhang","doi":"10.1093/jcde/qwad110","DOIUrl":"https://doi.org/10.1093/jcde/qwad110","url":null,"abstract":"The Kernel Search Optimizer (KSO) is a recent metaheuristic optimization algorithm that has been proposed in recent years. The KSO is based on kernel theory, eliminating the need for hyper-parameter adjustments, and demonstrating excellent global search capabilities. However, the original KSO exhibits insufficient accuracy in local search, and there is a high probability that it may fail to achieve local optimization in complex tasks. Therefore, this paper proposes a Multi-Strategy Enhanced Kernel Search Optimizer (MSKSO) to enhance the local search ability of the KSO. The MSKSO combines several control strategies, including chaotic initialization, chaotic local search mechanisms, the High-Altitude Walk Strategy (HWS), and the Levy Flight (LF), to effectively balance exploration and exploitation. The MSKSO is compared with ten well-known algorithms on fifty benchmark test functions to validate its performance, including single-peak, multi-peak, separable variable, and non-separable variable functions. Additionally, the MSKSO is applied to two real engineering economic emission dispatch (EED) problems in power systems. Experimental results demonstrate that the performance of the MSKSO nearly optimizes that of other well-known algorithms and achieves favorable results on the EED problem. These case studies verify that the MSKSO outperforms other algorithms and can serve as an effective optimization tool.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"35 ","pages":""},"PeriodicalIF":4.9,"publicationDate":"2023-12-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139174429","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
BRepGAT: Graph neural network to segment machining feature faces in a B-rep model
Jinwon Lee, Changmo Yeo, Sang-Uk Cheon, Jun Hwan Park, D. Mun
In recent years, many studies have used artificial intelligence to recognize machining features in 3D models in the CAD/CAM field. Most of these studies converted the original CAD data into images, point clouds, or voxels for recognition, which caused information loss during conversion and decreased recognition accuracy. In this paper, we propose a graph-based network called BRepGAT to segment faces in an original B-rep model containing machining features. We define descriptors that represent information about the faces and edges of the B-rep model from the perspective of feature recognition. These descriptors are extracted from the B-rep model and transformed into homogeneous graph data, which are then passed to graph networks. BRepGAT recognizes machining features face by face based on the input graph data. Our experimental results on the MFCAD18++ dataset show that BRepGAT achieves state-of-the-art recognition accuracy (99.1%). Furthermore, BRepGAT shows relatively robust performance on other datasets besides MFCAD18++.
{"title":"BRepGAT: Graph neural network to segment machining feature faces in a B-rep model","authors":"Jinwon Lee, Changmo Yeo, Sang-Uk Cheon, Jun Hwan Park, D. Mun","doi":"10.1093/jcde/qwad106","DOIUrl":"https://doi.org/10.1093/jcde/qwad106","url":null,"abstract":"In recent years, there have been many studies using artificial intelligence to recognize machining features in 3D models in the CAD/CAM field. Most of these studies converted the original CAD data into images, point clouds, or voxels for recognition. This led to information loss during the conversion process, resulting in decreased recognition accuracy. In this paper, we propose a graph-based network called BRepGAT to segment faces in an original B-rep model containing machining features. We define descriptors that represent information about the faces and edges of the B-rep model from the perspective of feature recognition. These descriptors are extracted from the B-rep model and transformed into homogeneous graph data, which is then passed to graph networks. BRepGAT recognize machining features on a face-by-face based on the graph data input. Our experimental results using the MFCAD18++ dataset showed that BRepGAT achieved state-of-the-art recognition accuracy (99.1%). Furthermore, BRepGAT showed relatively robust performance on other datasets besides MFCAD18++.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"25 1","pages":""},"PeriodicalIF":4.9,"publicationDate":"2023-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139216195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedding Deep Neural Network in Enhanced Schapery Theory for Progressive Failure Analysis of Fiber Reinforced Laminates
Shiyao Lin, Alex Post, Anthony M Waas
Computational progressive failure analysis (PFA) of carbon fiber reinforced polymer (CFRP) composites is of vital importance in the verification and validation of the structural integrity and damage tolerance of modern lightweight aeronautical structures. Enhanced Schapery Theory (EST) has been developed and applied to predict the damage pattern and load-bearing capacity of various composite structures. In this paper, EST is augmented by a deep neural network (DNN) model, which enables fast and accurate prediction of matrix cracking angles under arbitrary stress states of any composite laminate. The DNN model is trained in TensorFlow on data generated by a damage initiation criterion originating from the Mohr-Coulomb failure theory. The EST-DNN model is applied to open-hole tension/compression (OHT/OHC) problems, with no loss in accuracy relative to the original EST. The results combine the efficient and accurate predictive capability of machine learning tools with the robustness and user-friendliness of the EST finite element model.
{"title":"Embedding Deep Neural Network in Enhanced Schapery Theory for Progressive Failure Analysis of Fiber Reinforced Laminates","authors":"Shiyao Lin, Alex Post, Anthony M Waas","doi":"10.1093/jcde/qwad103","DOIUrl":"https://doi.org/10.1093/jcde/qwad103","url":null,"abstract":"Abstract Computational progressive failure analysis (PFA) of carbon fiber reinforced polymer composites (CFRP) is of vital importance in the verification and validation process of the structural integrity and damage tolerance of modern lightweight aeronautical structures. Enhanced Schapery Theory (EST) has been developed and applied to predict the damage pattern and load-bearing capacity of various composite structures. In this paper, EST is enhanced by a deep neural network (DNN) model, which enables fast and accurate predictions of matrix cracking angles under arbitrary stress states of any composite laminate. The DNN model is trained by TensorFlow based on data generated by a damage initiation criterion, which originates from the Mohr-Coulomb failure theory. The EST-DNN model is applied to open-hole tension/compression (OHT/OHC) problems. The results from the EST-DNN model are obtained with no loss in accuracy. The results presented combine the efficient and accurate predicting capabilities brought by machine learning tools and the robustness and user-friendliness of the EST finite element model.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134992200","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improved semantic segmentation network using normal vector guidance for LiDAR point clouds
Minsung Kim, Inyoung Oh, Dongho Yun, Kwanghee Ko
As LiDAR sensors become increasingly prevalent in autonomous driving, the need for accurate semantic segmentation of 3D points grows accordingly. To address this challenge, we propose a novel network model that enhances segmentation performance by utilizing normal vector information. First, we present a method that improves the accuracy of normal estimation by using the intensity and reflection angles of the light emitted from the LiDAR sensor. Second, we introduce a novel local feature aggregation module that integrates normal vector information into the network to improve local feature extraction. The normal information is closely related to the local structure of an object's shape, which helps the network associate distinctive features with the corresponding objects. We propose four different structures for local feature aggregation, evaluate them, and choose the one that performs best. Experiments on the SemanticKITTI dataset demonstrate that the proposed architecture outperforms both the baseline model, RandLA-Net, and other existing methods, achieving a mean intersection over union (mIoU) of 57.9%. Furthermore, it shows highly competitive performance compared with RandLA-Net for small and dynamic objects in a real road environment, yielding, for example, 95.2% for cars, 47.4% for bicycles, 41.0% for motorcycles, 57.4% for bicyclists, and 53.2% for pedestrians.
{"title":"Improved semantic segmentation network using normal vector guidance for LiDAR point clouds","authors":"Minsung Kim, Inyoung Oh, Dongho Yun, Kwanghee Ko","doi":"10.1093/jcde/qwad102","DOIUrl":"https://doi.org/10.1093/jcde/qwad102","url":null,"abstract":"Abstract As LiDAR sensors become increasingly prevalent in the field of autonomous driving, the need for accurate semantic segmentation of 3D points grows accordingly. To address this challenge, we propose a novel network model that enhances segmentation performance by utilizing normal vector information. Firstly, we present a method to improve the accuracy of normal estimation by using the intensity and reflection angles of the light emitted from the LiDAR sensor. Secondly, we introduce a novel local feature aggregation module that integrates normal vector information into the network to improve the performance of local feature extraction. The normal information is closely related to the local structure of the shape of an object, which helps the network to associate unique features with corresponding objects. We propose four different structures for local feature aggregation, evaluate them, and choose the one that shows the best performance. Experiments using the SemanticKITTI dataset demonstrate that the proposed architecture outperforms both the baseline model, RandLA-Net, and other existing methods, achieving mean Intersection over Union (mIoU) of 57.9%. Furthermore, it shows highly competitive performance compared to RandLA-Net for small and dynamic objects in a real road environment. For example, it yielded 95.2% for cars, 47.4% for bicycles, 41.0% for motorcycles, 57.4% for bicycles, and 53.2% for pedestrians.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"132 16","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136351563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data-driven integration framework for 4D BIM simulation in modular construction: A case study approach
Saddiq Ur Rehman, Inhan Kim, Jungsik Choi
Modular construction is becoming more popular because of its efficiency, cost savings, and environmental benefits, but its successful implementation necessitates detailed planning, scheduling, and coordination. BIM and 4D simulation techniques have emerged as invaluable tools for visualizing and analyzing the construction process to meet these requirements. However, integrating distinctive data sources and developing comprehensive 4D BIM simulations tailored to modular construction projects present significant challenges. This paper uses case studies to define precise data needs and to design a robust data integration framework for improving 4D BIM simulations in modular construction. Validation of the framework on a real-world project demonstrates its efficacy in integrating data, promoting cooperation, detecting risks, and supporting informed decision-making, ultimately improving modular building outcomes through more realistic simulations. By addressing data integration difficulties, this research provides useful insights for industry practitioners and researchers, enabling informed decision-making and optimization of modular building projects.
{"title":"Data-driven integration framework for 4D BIM simulation in modular construction: A case study approach","authors":"Saddiq Ur Rehman, Inhan Kim, Jungsik Choi","doi":"10.1093/jcde/qwad100","DOIUrl":"https://doi.org/10.1093/jcde/qwad100","url":null,"abstract":"Abstract Modular construction is becoming more popular because of its efficiency, cost-saving, and environmental benefits, but its successful implementation necessitates detailed planning, scheduling, and coordination. BIM and 4D simulation techniques have emerged as invaluable tools for visualizing and analyzing the construction process in order to meet these requirements. However, integrating distinctive data sources and developing comprehensive 4D BIM simulations tailored to modular construction projects present significant challenges. Case studies are used in this paper to define precise data needs and to design a robust data integration framework for improving 4D BIM simulations in modular construction. The validation of the framework in a real-world project demonstrates its efficacy in integrating data, promoting cooperation, detecting risks, and supporting informed decision-making, ultimately enhancing modular building results through more realistic simulations. By solving data integration difficulties, this research provides useful insights for industry practitioners and researchers, enabling informed decision-making and optimization of modular building projects.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"132 14","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136351565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A study on UCV path planning for collision avoidance with enemy forces in dynamic situations
Jisoo Ahn, Sewoong Jung, Hansom Kim, Ho-Jin Hwang, Hong-Bae Jun
This study focuses on the path planning problem for Unmanned Combat Vehicles (UCVs), where the goal is to find a viable path from the starting point to the destination while avoiding collisions with moving obstacles such as enemy forces. The objective is to minimize the overall cost, which encompasses factors such as travel distance, geographical difficulty, and the risk posed by enemy forces. To address this challenge, we propose a heuristic algorithm based on D* lite. The modified algorithm considers not only travel distance but also other military-relevant costs, such as travel difficulty and risk, and generates a path that navigates around both fixed unknown obstacles and dynamically moving obstacles (enemy forces) whose positions change over time. To assess the effectiveness of the proposed algorithm, we conducted comprehensive experiments, comparing and analyzing its performance in terms of average pathfinding success rate, average number of turns, and average execution time. Notably, we examined how the algorithm performs under two UCV path search strategies and two obstacle movement strategies. Our findings shed light on the potential of this approach in real-world UCV path planning scenarios.
{"title":"A study on UCV path planning for collision avoidance with enemy forces in dynamic situations","authors":"Jisoo Ahn, Sewoong Jung, Hansom Kim, Ho-Jin Hwang, Hong-Bae Jun","doi":"10.1093/jcde/qwad099","DOIUrl":"https://doi.org/10.1093/jcde/qwad099","url":null,"abstract":"Abstract This study focuses on the path planning problem for Unmanned Combat Vehicles (UCVs), where the goal is to find a viable path from the starting point to the destination while avoiding collisions with moving obstacles, such as enemy forces. The objective is to minimize the overall cost, which encompasses factors like travel distance, geographical difficulty, and the risk posed by enemy forces. To address this challenge, we have proposed a heuristic algorithm based on D* lite. This modified algorithm considers not only travel distance but also other military-relevant costs, such as travel difficulty and risk. It generates a path that navigates around both fixed unknown obstacles and dynamically moving obstacles (enemy forces) that change positions over time. To assess the effectiveness of our proposed algorithm, we conducted comprehensive experiments, comparing and analyzing its performance in terms of average pathfinding success rate, average number of turns, and average execution time. Notably, we examined how the algorithm performs under two UCV path search strategies and two obstacle movement strategies. Our findings shed light on the potential of our approach in real-world UCV path planning scenarios.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":" 47","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135291793","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bio-Medical Snake Optimizer System Driven by Logarithmic Surviving Global Search for Optimizing Feature Selection and its application for Disorder Recognition
Ruba Abu Khurma, Esraa Alhenawi, Malik Braik, Fatma A Hashim, Amit Chhabra, Pedro A Castillo
Enhancing medical practice is of paramount importance, given the need to protect human life. Medical therapy can be accelerated by automating patient prediction using machine learning techniques, and several preprocessing strategies must be adopted to boost the efficiency of classifiers in this field. Feature selection (FS) is one tool that is frequently used to modify data and enhance classification outcomes by lowering the dimensionality of datasets. Features to exclude are those with a poor correlation to the class label, that is, features with no meaningful relation to the classification that do not indicate where an instance belongs, together with redundant features, which show a strong association with the remaining features. Their presence harms the model produced during training and misleads the classifier, causing overfitting and increasing algorithm complexity and processing time. FS makes the underlying pattern clearer and yields a more general classification model with a lower chance of overfitting, at acceptable runtime and algorithmic complexity. To optimize the FS process, wrapper methods employ metaheuristic algorithms (MAs) as search algorithms. This study uses the Snake Optimizer (SO) to seek the best solution: the subset of features within a given medical dataset that best aids patient diagnosis. As a swarm-based approach, SO suffers from several general flaws, such as local-minimum trapping, premature convergence, and an uneven balance between exploration and exploitation. To overcome these limitations and improve exploitation, a logarithm operator was paired with SO, using the cosine function to measure the separation between the current solution and the best solution; this drives the solutions to spiral toward the best overall answer. Additionally, the evolutionary principle of preserving the best solutions is put into practice through three alternative selection schemes, tournament, proportional, and linear, to improve the exploration phase: these schemes let new solutions be generated relative to a selected solution rather than purely at random, so the space is searched more thoroughly. The resulting variants are the Tournament Logarithmic Snake Optimizer (TLSO), Proportional Logarithmic Snake Optimizer (PLSO), and Linear Order Logarithmic Snake Optimizer (LLSO). Experiments were conducted on 22 reference medical datasets. The findings indicate that TLSO attained the best accuracy on 86% of the datasets and the best feature reduction on 82% of the datasets. In terms of standard deviation, TLSO also attained noteworthy reliability and stability, while remaining quite efficient in running time.
{"title":"A Bio-Medical Snake Optimizer System Driven by Logarithmic Surviving Global Search for Optimizing Feature Selection and its application for Disorder Recognition","authors":"Ruba Abu Khurma, Esraa Alhenawi, Malik Braik, Fatma A Hashim, Amit Chhabra, Pedro A Castillo","doi":"10.1093/jcde/qwad101","DOIUrl":"https://doi.org/10.1093/jcde/qwad101","url":null,"abstract":"Abstract It is of paramount importance to enhance medical practices, given how important it is to protect human life. Medical therapy can be accelerated by automating patient prediction using machine learning techniques. To double the efficiency of classifiers, several preprocessing strategies must be adopted for their crucial duty in this field. Feature selection (FS) is one tool that has been used frequently to modify data and enhance classification outcomes by lowering the dimensionality of datasets. Excluded features are those that have a poor correlation coefficient with the label class, that is, they have no meaningful correlation with classification and do not indicate where the instance belongs. Along with the recurring features, which show a strong association with the remainder of the features. Contrarily, the model being produced during training is harmed, and the classifier is misled by their presence. This causes overfitting and increases algorithm complexity and processing time. The pattern is made clearer by FS, which also creates a broader classification model with a lower chance of overfitting in an acceptable amount of time and algorithmic complexity. To optimize the FS process, building wrappers must employ metaheuristic algorithms (MAs) as search algorithms. The best solution, which reflects the best subset of features within a particular medical dataset that aids in patient diagnosis, is sought in this study using the Snake Optimizer (SO). The swarm-based approaches that SO is founded on have left it with several general flaws, like local minimum trapping, early convergence, uneven exploration and exploitation, and early convergence. By employing the cosine function to calculate the separation between the present solution and the ideal solution, the logarithm operator was paired with SO to better the exploitation process and get over these restrictions. In order to get the best overall answer, this forces the solutions to spiral downward. Additionally, SO is employed to put the evolutionary algorithms’ preservation of the best premise into practice. This is accomplished by utilizing three alternative selection systems tournament, proportional, and linear to improve the exploration phase. These are used in exploration to allow solutions to be found more thoroughly and in relation to a chosen solution than at random. TLSO, PLSO, and LLSO stand for Tournament Logarithmic Snake Optimizer, Proportional Logarithmic Snake Optimizer, and Linear Order Logarithmic Snake Optimizer, respectively. A number of 22 reference medical datasets were used in experiments. The findings indicate that, among 86% of the datasets, TLSO attained the best accuracy, and among 82% of the datasets, the best feature reduction. In terms of the standard deviation, the TLSO also attained noteworthy reliability and stability. 
On the basis of running duration, it is, nonetheless, quite effective.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":" 42","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135292051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
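Two of the ingredients above are standard enough to sketch: tournament selection and a logarithmic move toward the best solution. The code below is an illustrative reading of the abstract's description, not the paper's exact equations; the tournament size and the logarithmic shrink schedule are assumptions.

```python
import numpy as np

def tournament_select(fitness, k=3, rng=None):
    """Tournament selection: sample k candidates at random and return
    the index of the fittest (minimization). One of the three selection
    schemes the abstract names; k is an assumed tournament size."""
    rng = rng or np.random.default_rng()
    candidates = rng.choice(len(fitness), size=k, replace=False)
    return candidates[np.argmin(fitness[candidates])]

def log_spiral_toward(x, x_best, t, t_max):
    """Spiral toward the best solution with a cosine term modulating the
    separation and a logarithmically decaying radius (an illustrative
    reading of the abstract, not the paper's exact operator)."""
    theta = np.random.uniform(0, 2 * np.pi)
    shrink = np.log(1 + (t_max - t)) / np.log(1 + t_max)   # decays over iterations
    return x_best + shrink * np.cos(theta) * (x - x_best)

fitness = np.random.rand(30)
winner = tournament_select(fitness)
```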
Deterministic surface roughness effects on elastic material contact with shear thinning fluid media
Siyoul Jang
The formation of lubrication films is described using hydrodynamic lubrication theory, based on a Reynolds equation that includes the shear-thinning behavior of the lubricant. Contacting surfaces are considered to undergo elastic deformation owing to concentrated contact pressures, which exceed 1.0 GPa in most engineering applications. Under a high load on a relatively small contact area, the elastic deformation of the contacting bodies directly influences the formation of the lubricated film, so elastohydrodynamic lubrication (EHL) analysis is applied to analyze the lubricated contact correctly. In an EHL contact, the lubrication film thickness is frequently smaller in scale than the surface roughness produced by manufacturing or running-in. In this work, surface roughness is therefore considered in detail, and two-dimensional surface roughness is measured to characterize general engineering surfaces. This deterministic treatment of surface roughness is used to compute EHL film formation under several contact conditions, such as load, contact velocity, and elasticity of the contacting materials.
{"title":"Deterministic surface roughness effects on elastic material contact with shear thinning fluid media","authors":"Siyoul Jang","doi":"10.1093/jcde/qwad098","DOIUrl":"https://doi.org/10.1093/jcde/qwad098","url":null,"abstract":"Abstract The formation of lubrication films is described using the hydrodynamic lubrication theory, which is based on the Reynolds equation that includes shear thinning behaviors of lubricant. Contacting surfaces are considered to undergo elastic deformation owing to concentrated contact pressures that exceed 1.0 GPa in most engineering applications. Under the contact condition of a high load on a relatively small contact area, elastic deformation of contacting bodies directly influences the formation of the lubricated film. Elastohydrodynamic lubrication (EHL) analysis is applied to correctly analyze the lubricated contact. Under an EHL contact, the scale of the lubrication film thickness is frequently less than that of the surface roughness that results from either the manufacturing or running-in processes. In this work, surface roughness is considered in detail, and two-dimensional surface roughness is measured as that characterizing general engineering surface roughness. The deterministic method regarding the surface roughness is considered for computing EHL film formation under several contact conditions such as load, contact velocity, and elasticity of contacting materials.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"3 5","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135685416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}