Marine container terminals play a significant role in international trade networks and the global market. To cope with the rapid and steady growth of the seaborne trade market, marine container terminal operators must address the operational challenges with appropriate analytical methods to meet the needs of the market. The berth allocation and scheduling problem is one of the important decisions faced by operators during operations planning. The optimization of a berth schedule is strongly associated with the allocation of spatial and temporal resources. An optimal and robust berth schedule remarkably improves the productivity and competitiveness of a seaport. A significant number of berth allocation and scheduling studies have been conducted in recent years. Thus, there is a need for a comprehensive and critical literature survey to analyze the state-of-the-art research progress, developing tendencies, current shortcomings, and potential future research directions. Therefore, this study carefully selected scientific manuscripts dedicated to the berth allocation and scheduling problem. The identified studies were categorized based on spatial attributes, including discrete, continuous, and hybrid berth allocation and scheduling problems. A detailed review was performed for each identified study category. A representative mathematical formulation for each category was presented, along with a detailed summary of the various considerations and characteristics of every study. Specific emphasis was given to the solution methods adopted. The current research shortcomings and important research needs were outlined based on the review of the state of the art. This study was conducted with the expectation of assisting the scientific community and relevant stakeholders with berth allocation and scheduling.
{"title":"Berth allocation and scheduling at marine container terminals: A state-of-the-art review of solution approaches and relevant scheduling attributes","authors":"Bokang Li, Zeinab Elmi, Ashley Manske, Edwina Jacobs, Yui-yip Lau, Qiong Chen, M. Dulebenets","doi":"10.1093/jcde/qwad075","DOIUrl":"https://doi.org/10.1093/jcde/qwad075","url":null,"abstract":"\u0000 Marine container terminals play a significant role for international trade networks and global market. To cope with the rapid and steady growth of the seaborne trade market, marine container terminal operators must address the operational challenges with appropriate analytical methods to meet the needs of the market. The berth allocation and scheduling problem is one of the important decisions faced by operators during operations planning. The optimization of a berth schedule is strongly associated with the allocation of spatial and temporal resources. An optimal and robust berth schedule remarkably improves the productivity and competitiveness of a seaport. A significant number of berth allocation and scheduling studies have been conducted over the last years. Thus, there is an existing need for a comprehensive and critical literature survey to analyze the state-of-the-art research progress, developing tendencies, current shortcomings, and potential future research directions. Therefore, this study thoroughly selected scientific manuscripts dedicated to the berth allocation and scheduling problem. The identified studies were categorized based on spatial attributes, including discrete, continuous, and hybrid berth allocation and scheduling problems. A detailed review was performed for the identified study categories. A representative mathematical formulation for each category was presented along with a detailed summary of various considerations and characteristics of every study. A specific emphasis was given to the solution methods adopted. The current research shortcomings and important research needs were outlined based on the review of the state-of-the-art. This study was conducted with the expectation of assisting the scientific community and relevant stakeholders with berth allocation and scheduling.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"22 1","pages":"1707-1735"},"PeriodicalIF":4.9,"publicationDate":"2023-07-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88037428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Seung-Jun Shin, Sung-Ho Hong, Sainand Jadhav, Duck Bong Kim
Wire arc additive manufacturing (WAAM) has gained attention as a feasible process in large-scale metal additive manufacturing due to its high deposition rate, cost efficiency, and material diversity. However, WAAM induces a degree of uncertainty in the process stability and the part quality owing to its non-equilibrium thermal cycles and layer-by-layer stacking mechanism. Anomaly detection is therefore necessary for the quality monitoring of the parts. Most relevant studies have applied machine learning to derive data-driven models that detect defects through feature and pattern learning. However, acquiring sufficient data is time- and/or resource-intensive, which introduces a challenge to applying machine learning-based anomaly detection. This study proposes a multisource transfer learning method that generates anomaly detection models for balling defect detection, thus ensuring quality monitoring in WAAM. The proposed method uses convolutional neural network models to extract sufficient image features from multisource materials, then transfers and fine-tunes the models for anomaly detection in the target material. Stepwise learning is applied to extract image features sequentially from individual source materials, and composite learning is employed to assign the optimal frozen ratio for converging transferred and present features. Experiments were performed using a gas tungsten arc welding-based WAAM process to validate the classification accuracy of the models using low-carbon steel, stainless steel, and Inconel.
{"title":"Detecting balling defects using multisource transfer learning in wire arc additive manufacturing","authors":"Seung-Jun Shin, Sung-Ho Hong, Sainand Jadhav, Duck Bong Kim","doi":"10.1093/jcde/qwad067","DOIUrl":"https://doi.org/10.1093/jcde/qwad067","url":null,"abstract":"\u0000 Wire arc additive manufacturing (WAAM) has gained attention as a feasible process in large-scale metal additive manufacturing due to its high deposition rate, cost efficiency, and material diversity. However, WAAM induces a degree of uncertainty in the process stability and the part quality owing to its non-equilibrium thermal cycles and layer-by-layer stacking mechanism. Anomaly detection is therefore necessary for the quality monitoring of the parts. Most relevant studies have applied machine learning to derive data-driven models that detect defects through feature and pattern learning. However, acquiring sufficient data is time- and/or resource-intensive, which introduces a challenge to applying machine learning-based anomaly detection. This study proposes a multisource transfer learning method that generates anomaly detection models for balling defect detection, thus ensuring quality monitoring in WAAM. The proposed method uses convolutional neural network models to extract sufficient image features from multisource materials, then transfers and fine-tunes the models for anomaly detection in the target material. Stepwise learning is applied to extract image features sequentially from individual source materials, and composite learning is employed to assign the optimal frozen ratio for converging transferred and present features. Experiments were performed using a gas tungsten arc welding-based WAAM process to validate the classification accuracy of the models using low-carbon steel, stainless steel, and Inconel.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"89 1","pages":"1423-1442"},"PeriodicalIF":4.9,"publicationDate":"2023-07-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80264456","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, a dynamic gesture recognition system is proposed using triaxial acceleration signals and an image-based deep neural network. With our dexterous glove device, a 1D acceleration signal can be measured from each finger and decomposed into time-divided frequency components via the wavelet transform, yielding a scalogram in an image-like format. Feeding the scalogram through a single 2D convolutional neural network (CNN) allows gestures with temporal characteristics to be recognized easily, without any complex system such as an RNN, an LSTM, or spatio-temporal features as in a 3D CNN. To classify the image with the general input dimensions of RGB image channels, we numerically reconstruct fifteen scalograms into one RGB image using various representation methods. In experiments, we employ off-the-shelf EfficientNetV2 small to large models as the image classification model with fine-tuning. To evaluate our system, we build a custom bicycle hand-signal dataset as the dynamic gesture dataset under our transformation system and qualitatively compare the reconstruction method with matrix representation methods. In addition, we apply other signal transformation tools, such as the fast Fourier transform and the short-time Fourier transform, and explain the advantages of scalogram classification in terms of the time-frequency resolution trade-off.
{"title":"EfficientNetV2-based dynamic gesture recognition using transformed scalogram from triaxial acceleration signal","authors":"Bumsoo Kim, Sanghyun Seo","doi":"10.1093/jcde/qwad068","DOIUrl":"https://doi.org/10.1093/jcde/qwad068","url":null,"abstract":"\u0000 In this paper, a dynamic gesture recognition system is proposed using triaxial acceleration signal and image-based deep neural network. With our dexterous glove device, 1D acceleration signal can be measured from each finger and decomposed to time-divided frequency components via wavelet transformation, which known as scalogram as image-like format. To feed-forward the scalogram with single 2D convolutional neural networks(CNN) allows the gesture having temporality to be easily recognized without any complex system such as RNN, LSTM, or spatio-temporal feature as 3D CNN, etc. To classify the image with general input dimension of image RGB channels, we numerically reconstruct fifteen scalograms into one RGB image with various representation methods. In experiments, we employ the off-the-shelf model, EfficientNetV2 small to large model as an image classification model with fine-tuning. To evaluate our system, we bulid our custom bicycle hand signals as dynamic gesture dataset under our transformation system, and then qualitatively compare the reconstruction method with matrix representation methods. In addition, we use other signal transformation tools such as the fast Fourier transform, and short-time Fourier transform and then explain the advantages of scalogram classification in the terms of time-frequency resolution trade-off issue.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"53 1","pages":"1694-1706"},"PeriodicalIF":4.9,"publicationDate":"2023-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84653884","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Guangwen Yan, Desheng Zhang, Jinting Xu, Yuwen Sun
Corner rounding methods have been widely developed to pursue smooth motions of machine tools. However, most corner rounding methods, which adopt double inscribed transitions, still suffer from an inherent issue of retaining large curvatures of the transition curves. Even those methods based on double circumscribed transitions with relatively small curvatures constrain the transition lengths excessively and are limited to low-order continuity, deteriorating the feedrate and jerk of machine tools. To address these problems, a C3 continuous double circumscribed corner rounding (DCCR) method is proposed for five-axis linear tool paths. In this method, C3 continuous double circumscribed B-splines are specially designed to round the corners of the tool position and tool orientation, whose transition lengths are determined analytically by jointly constraining the approximation errors, overlap elimination, and parameter synchronization. Moreover, the excessive constraints on transition lengths imposed by traditional methods are alleviated by fully considering the effects of overlaps and parameter synchronization, and the jerk of the rotary axes is also limited with high-order continuity. Compared to existing double inscribed corner rounding (DICR) and DCCR methods, experimental results demonstrate that our method can further improve the feedrate while limiting the jerk of machine tools.
{"title":"A C3 continuous double circumscribed corner rounding method for five-axis linear tool path with improved kinematics performance","authors":"Guangwen Yan, Desheng Zhang, Jinting Xu, Yuwen Sun","doi":"10.1093/jcde/qwad066","DOIUrl":"https://doi.org/10.1093/jcde/qwad066","url":null,"abstract":"\u0000 Corner rounding methods have been widely developed to pursue the smooth motions of machine tools. However, most corner rounding methods, which adopt the double inscribed transitions, still remain an inherent issue of retaining large curvatures of transition curves. Even for those double circumscribed transitions-based methods with relatively small curvatures, they also constrain excessively the transition lengths and are limited to a low-order continuity, deteriorating the feedrate and jerk of machine tools. For addressing these problems, a C3 continuous double circumscribed corner rounding (DCCR) method is proposed for five-axis linear tool path. In this method, the C3 continuous double circumscribed B-splines are specially designed to round the corners of tool position and tool orientation, whose transition lengths are analytically determined by jointly constraining the approximation errors, overlaps elimination and parameter synchronization. Moreover, the excessive constrains of transition lengths imposed by traditional methods are alleviated by fully considering the effects of overlaps and parameter synchronization, and the jerk of rotary axes is also limited with a high-order continuity. Compared to the existing double inscribed corner rounding (DICR) and DCCR methods, experiment results demonstrate that our method can improve further the feedrate while limiting the jerk of machine tools.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"187 1","pages":"1490-1506"},"PeriodicalIF":4.9,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73939800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A new approach to anomaly detection termed "anomaly detection with designable generative adversarial network (Ano-DGAN)" is proposed, which connects a designable generative adversarial network in series with generative adversarial network-based anomaly detection. The proposed Ano-DGAN, based on a deep neural network, overcomes the limitations of abnormal data collection when performing anomaly detection. In addition, it can perform statistical diagnosis by identifying the healthy range of each design variable without a massive amount of initial data. A model was constructed to simulate a high-pressure liquefied natural gas pipeline for data collection and the determination of the critical design variables. The simulation model was validated and compared with the failure mode and effect analysis of a real pipeline, which showed that stress was concentrated in the weld joints of the branch pipe. A crack-growth degradation factor was applied to the weld, and anomaly detection was performed. The performance of the proposed model was highly accurate compared with that of other anomaly detection models, such as the support vector machine (SVM), the one-dimensional convolutional neural network (1D CNN), and long short-term memory (LSTM). The results provided a statistical estimate of the design variable ranges and were validated statistically, indicating that the diagnosis was acceptable.
{"title":"Crack growth degradation-based diagnosis and design of high pressure liquefied natural gas pipe via designable data-augmented anomaly detection","authors":"Dabin Yang, Sanghoon Lee, Jongsoo Lee","doi":"10.1093/jcde/qwad065","DOIUrl":"https://doi.org/10.1093/jcde/qwad065","url":null,"abstract":"\u0000 A new approach to anomaly detection termed “anomaly detection with designable generative adversarial network (Ano-DGAN)” is proposed, which is a series connection of a designable generative adversarial network and anomaly detection with a generative adversarial network. The proposed Ano-DGAN, based on a deep neural network, overcomes the limitations of abnormal data collection when performing anomaly detection. In addition, it can perform statistical diagnosis by identifying the healthy range of each design variable without a massive amount of initial data. A model was constructed to simulate a high-pressure liquefied natural gas pipeline for data collection and the determination of the critical design variables. The simulation model was validated and compared with the failure mode and effect analysis of a real pipeline, which showed that stress was concentrated in the weld joints of the branch pipe. A crack-growth degradation factor was applied to the weld, and anomaly detection was performed. The performance of the proposed model was highly accurate compared with that of other anomaly detection models, such as support vector machine (SVM), one-dimensional convolutional neural network (1D CNN), and long short term memory (LSTM). The results provided a statistical estimate of the design variable ranges and were validated statistically, indicating that the diagnosis was acceptable.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"15 1","pages":"1531-1546"},"PeriodicalIF":4.9,"publicationDate":"2023-06-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83630687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a car accident, negligence is evaluated through a process known as split liability assessment. This assessment involves reconstructing the accident scenario based on information gathered from sources such as dashcam footage. The final determination of negligence is made by simulating the information contained in the video. Therefore, accident cases for split liability assessment should be classified based on information affecting the degree of negligence. While deep learning has recently been in the spotlight for video recognition using short video clips, no research has been conducted to extract meaningful information from long videos, which are necessary for split liability assessment. To address this issue, we propose a new task for analyzing long videos by stacking the important information predicted by a 3D CNN model. We demonstrate the feasibility of our approach by proposing a split liability assessment method using dashcam footage.
{"title":"Split liability assessment in car accident using 3D convolutional neural network","authors":"Sungjae Lee, Yong-Gu Lee","doi":"10.1093/jcde/qwad063","DOIUrl":"https://doi.org/10.1093/jcde/qwad063","url":null,"abstract":"\u0000 In a car accident, negligence is evaluated through a process known as split liability assessment. This assessment involves reconstructing the accident scenario based on information gathered from sources such as dashcam footage. The final determination of negligence is made by simulating the information contained in the video. Therefore, accident cases for split liability assessment should be classified based on information affecting the negligence degree. While deep learning has recently been in the spotlight for video recognition using short video clips, no research has been conducted to extract meaningful information from long videos, which are necessary for split liability assessment. To address this issue, we propose a new task for analyzing long videos by stacking the important information predicted through the 3D CNNs model. We demonstrate the feasibility of our approach by proposing a split liability assessment method using dashcam footage.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"11 1","pages":"1579-1601"},"PeriodicalIF":4.9,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89385467","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marta Gil Pérez, Pascal Mindermann, C. Zechmeister, David Forster, Yanan Guo, S. Hügle, Fabian Kannenberg, L. Balangé, V. Schwieger, P. Middendorf, M. Bischoff, A. Menges, G. T. Gresser, J. Knippers
The linear design workflow for structural systems, involving a multitude of iterative loops and specialists, obstructs disruptive innovations. During design iterations, vast amounts of data with different reference systems, origins, and significance are generated. These data are often not directly comparable, or are not collected at all, which leaves great unused potential for advancements in the process. In this paper, a novel workflow to process and analyze the data sets in a unified reference frame is proposed. From this, iteration loops of varying sophistication can be derived. The developed methods are presented within a case study using coreless filament winding as an exemplary fabrication process within an architectural context. This additive manufacturing process, using fiber-reinforced plastics, exhibits great potential for efficient structures when its intrinsic parameter variations can be minimized. The presented method aims to make data sets comparable by identifying the steps each data set needs to undergo (acquisition, pre-processing, mapping, post-processing, analysis, and evaluation). These processes are imperative to provide the means to find domain interrelations, which in the future can provide quantitative results that will help to inform the design process, making it more reliable and allowing for the reduction of safety factors. The results of the case study demonstrate the data set processes, proving the necessity of these methods for comprehensive inter-domain data comparison.
{"title":"Data processing, analysis, and evaluation methods for co-design of coreless filament-wound building systems","authors":"Marta Gil pérez, Pascal Mindermann, C. Zechmeister, David Forster, Yanan Guo, S. Hügle, Fabian Kannenberg, L. Balangé, V. Schwieger, P. Middendorf, M. Bischoff, A. Menges, G. T. Gresser, J. Knippers","doi":"10.1093/jcde/qwad064","DOIUrl":"https://doi.org/10.1093/jcde/qwad064","url":null,"abstract":"\u0000 The linear design workflow for structural systems, involving a multitude of iterative loops and specialists, obstructs disruptive innovations. During design iterations, vast amounts of data in different reference systems, origins, and significance are generated. This data is often not directly comparable or is not collected at all, which implies a great unused potential for advancements in the process. In this paper, a novel workflow to process and analyze the data sets in a unified reference frame is proposed. From this, differently sophisticated iteration loops can be derived. The developed methods are presented within a case study using coreless filament winding as an exemplary fabrication process within an architectural context. This additive manufacturing process, using fiber-reinforced plastics, exhibits great potential for efficient structures when its intrinsic parameter variations can be minimized. The presented method aims to make data sets comparable by identifying the steps each data set needs to undergo (acquisition, pre-processing, mapping, post-processing, analysis, and evaluation). These processes are imperative to provide the means to find domain interrelations, which in the future can provide quantitative results that will help to inform the design process, making it more reliable, and allowing for the reduction of safety factors. The results of the case study demonstrate the data set processes, proving the necessity of these methods for the comprehensive inter-domain data comparison.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"40 1","pages":"1460-1478"},"PeriodicalIF":4.9,"publicationDate":"2023-06-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80928453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The beluga whale optimization (BWO) algorithm is a recently proposed population-based intelligence algorithm. Inspired by the swimming, foraging, and whale-fall behaviors of beluga whale populations, it shows competitive performance compared to other state-of-the-art algorithms. However, the original BWO faces the challenges of unbalanced exploration and exploitation, premature stagnation of iterations, and low convergence accuracy in high-dimensional complex applications. To address these challenges, a hybrid beluga whale optimization based on the jellyfish search optimizer (HBWO-JS), which combines the vertical crossover operator and a Gaussian variation strategy with a fusion of the jellyfish search (JS) optimizer, is developed for solving global optimization in this paper. First, the BWO algorithm is fused with the JS optimizer to address BWO's tendency to fall into local optima and its low convergence accuracy in the exploitation stage, through multi-stage exploration and collaborative exploitation. Then, the introduced vertical crossover operator addresses the imbalance between the exploration and exploitation processes by normalizing the upper and lower bounds of two stochastic dimensions of the search agent, thus further improving the overall optimization capability. In addition, the introduced Gaussian variation strategy forces the agent to explore the minimum neighborhood, extending the entire iterative search process and thus alleviating the problem of premature stagnation of the algorithm. Finally, the superiority of the proposed HBWO-JS is verified in detail by comparing it with the basic BWO and eight state-of-the-art algorithms on the CEC2019 and CEC2020 test suites, respectively. Also, the scalability of HBWO-JS is evaluated at three dimensionalities (10, 30, and 50 dimensions), and the results show the stable performance of the proposed algorithm in terms of dimensional scalability. In addition, three practical engineering designs and two truss topology optimization problems demonstrate the practicality of HBWO-JS. The optimization results show that HBWO-JS has strong competitive ability and broad application prospects.
{"title":"HBWO-JS: jellyfish search boosted hybrid beluga whale optimization algorithm for engineering applications","authors":"Xinguang Yuan, Gang Hu, J. Zhong, Guo Wei","doi":"10.1093/jcde/qwad060","DOIUrl":"https://doi.org/10.1093/jcde/qwad060","url":null,"abstract":"\u0000 Beluga whale optimization (BWO) algorithm is a recently proposed population intelligence algorithm. Inspired by the swimming, foraging and whale falling behaviors of beluga whale populations, it shows good competitive performance compared to other state-of-the-art algorithms. However, the original BWO faces the challenges of unbalanced exploration and exploitation, premature stagnation of iterations, and low convergence accuracy in high-dimensional complex applications. Aiming at these challenges, a hybrid beluga whale optimization based on the jellyfish search optimizer (HBWO-JS), which combines the vertical crossover operator and Gaussian variation strategy with a fusion of jellyfish search (JS) optimizer, is developed for solving global optimization in this paper. First, the BWO algorithm is fused with the JS optimizer to improve the problem that BWO tends to fall into the best local solution and low convergence accuracy in the exploitation stage through multi-stage exploration and collaborative exploitation. Then, the introduced vertical cross operator solves the problem of unbalanced exploration and exploitation processes by normalizing the upper and lower bounds of two stochastic dimensions of the search agent, thus further improving the overall optimization capability. In addition, the introduced Gaussian variation strategy forces the agent to explore the minimum neighborhood, extending the entire iterative search process and thus alleviating the problem of premature stagnation of the algorithm. Finally, the superiority of the proposed HBWO-JS is verified in detail by comparing it with basic BWO and eight state-of-the-art algorithms on the CEC2019 and CEC2020 test suites, respectively. Also, the scalability of HBWO-JS is evaluated in three dimensions (10-dim, 30-dim, 50-dim), and the results show the stable performance of the proposed algorithm in terms of dimensional scalability. In addition, three practical engineering designs and two Truss topology optimization problems demonstrate the practicality of HBWO-JS. The optimization results show that HBWO-JS has a strong competitive ability and broad application prospects.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"117 4","pages":"1615-1656"},"PeriodicalIF":4.9,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72586451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The conventional deep learning-based fault diagnosis approach faces challenges under the domain shift problem, where the model encounters working conditions different from the ones it was trained on. This challenge is particularly pronounced in the diagnosis of planetary gearboxes due to the complicated vibrations they generate, which can vary significantly based on the system characteristics of the gearbox. To solve this challenge, this paper proposes a robust deep learning-based fault detection approach for planetary gearboxes that utilizes an enhanced health data map (enHDMap). Although the existing health data map (HDMap) method visually expresses the vibration signal of the planetary gearbox according to the gear meshing position, it is greatly influenced by the machine operating conditions. In this study, domain-specific features are further removed from the HDMap, while the fault-related features are enhanced. Autoencoder-based residual analysis and digital image-processing techniques are employed to address the domain shift problem. The performance of the proposed method was validated under significant domain shift conditions, as demonstrated by studying two gearbox test rigs with different configurations operated under stationary and non-stationary operating conditions. Validation accuracy was measured in all 12 possible domain shift scenarios. The proposed method achieved robust fault detection accuracy, outperforming prior methods in most cases.
{"title":"Robust deep learning-based fault detection of planetary gearbox using enhanced health data map under domain shift problem","authors":"Taewan Hwang, J. Ha, B. Youn","doi":"10.1093/jcde/qwad056","DOIUrl":"https://doi.org/10.1093/jcde/qwad056","url":null,"abstract":"\u0000 The conventional deep learning-based fault diagnosis approach faces challenges under the domain shift problem, where the model encounters different working conditions from the ones it was trained on. This challenge is particularly pronounced in the diagnosis of planetary gearboxes due to the complicated vibrations they generate, which can vary significantly based on the system characteristics of the gearbox. To solve this challenge, this paper proposes a robust deep-learning-based fault-detection approach for planetary gearboxes by utilizing an enhanced health data map (enHDMap). Although there is an HDMap method that visually expresses the vibration signal of the planetary gearbox according to the gear meshing position, it is greatly influenced by machine operating conditions. In this study, domain-specific features from the HDMap are further removed, while the fault-related features are enhanced. Autoencoder-based residual analysis and digital image-processing techniques are employed to address the domain-shift problem. The performance of the proposed method was validated under significant domain-shift problem conditions, as demonstrated by studying two gearbox test rigs with different configurations operated under stationary and non-stationary operating conditions. Validation accuracy was measured in all 12 possible domain-shift scenarios. The proposed method achieved robust fault detection accuracy, outperforming prior methods in most cases.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"7 1","pages":"1677-1693"},"PeriodicalIF":4.9,"publicationDate":"2023-06-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81980144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, exploration tasks in disaster environments, victim localization, and primary assistance have been the main focuses of Search and Rescue (SAR) robotics. Developing new technologies in mixed reality (MR) and legged robotics has been a big step toward robust field applications in robotics. This article presents MR-RAS (Mixed-Reality for Robotic Assistance), which aims to assist rescuers and protect their integrity when exploring post-disaster areas (against collapse, electrical, and toxic risks) by facilitating gesture-based guidance of the robot and allowing rescuers to manage visual information of interest about the environment. Thus, the ARTU-R (A1 Rescue Tasks UPM Robot) quadruped robot has been equipped with a sensory system (lidar, thermal, and RGB-D cameras) to validate this proof of concept. Human-robot interaction is carried out using HoloLens glasses. This work's main contribution is the implementation and evaluation of a mixed reality system based on a ROS-Unity solution, capable of managing, at a high level, the guidance of a complex legged robot through different zones of interest (defined by a neural network and a vision system) in a post-disaster environment. The robot's main tasks at each point visited involve detecting victims through thermal and RGB imaging with neural networks and assisting victims with medical equipment. Tests have been carried out in scenarios that recreate the conditions of post-disaster environments (debris, simulated victims, etc.). An average efficiency improvement of 48% and a time reduction of 21.4% were obtained when using the immersive interface compared to conventional interfaces. The proposed method has proven to improve rescuers' immersive experience of controlling a complex robotic system.
{"title":"Mixed-reality for quadruped-robotic guidance in SAR tasks","authors":"Christyan Cruz Ulloa, J. Cerro, A. Barrientos","doi":"10.1093/jcde/qwad061","DOIUrl":"https://doi.org/10.1093/jcde/qwad061","url":null,"abstract":"\u0000 In recent years, exploration tasks in disaster environments, victim localization and primary assistance have been the main focuses of Search and Rescue (SAR) Robotics. Developing new technologies in Mixed Reality (M-R) and legged robotics has taken a big step in developing robust field applications in the Robotics field. This article presents MR-RAS (Mixed-Reality for Robotic Assistance), which aims to assist rescuers and protect their integrity when exploring post-disaster areas (against collapse, electrical, and toxic risks) by facilitating the robot’s gesture guidance and allowing them to manage interest visual information of the environment. Thus, ARTU-R (A1 Rescue Tasks UPM Robot) quadruped robot has been equipped with a sensory system (lidar, thermal and RGB-D cameras) to validate this proof of concept. On the other hand, Human-Robot interaction is executed by using the Hololens glasses. This work’s main contribution is the implementation and evaluation of a Mixed-Reality system based on a ROS-Unity solution, capable of managing at a high level the guidance of a complex legged robot through different interest zones (defined by a Neural Network and a vision system) of a post-disaster environment. The robot’s main tasks at each point visited involve detecting victims through thermal, RGB imaging and neural networks and assisting victims with medical equipment. Tests have been carried out in scenarios that recreate the conditions of post-disaster environments (debris, simulation of victims, etc.). An average efficiency improvement of 48% has been obtained when using the immersive interface and a time optimization of 21.4% compared to conventional interfaces. The proposed method has proven to improve rescuers’ immersive experience of controlling a complex robotic system.","PeriodicalId":48611,"journal":{"name":"Journal of Computational Design and Engineering","volume":"34 1","pages":"1479-1489"},"PeriodicalIF":4.9,"publicationDate":"2023-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81163643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}