Static calibration and dynamic compensation of the SCORBOT robot using sensor fusion and LSTM networks
Yong-Lin Kuo, Chia-Hang Hsieh
Pub Date: 2023-10-16 | DOI: 10.1080/02533839.2023.2261984
ABSTRACT: This paper presents both static calibration and dynamic compensation to reduce the positioning errors of the SCORBOT robot. First, a sensor fusion scheme is proposed to estimate the position and attitude of the end-effector of a robot instead of using laser trackers or coordinate measuring machines. The scheme integrates an extended Kalman filter (EKF) with the models of an inertial measurement unit (IMU) and a depth camera. Second, a static calibration scheme is presented to reduce the mechanism errors of robots. The scheme modifies the Denavit-Hartenberg (D-H) parameters provided by the manufacturer based on the least squares method. Third, a dynamic compensation scheme is proposed to reduce the errors caused by robot motions. The scheme establishes a long short-term memory (LSTM) network to compensate for the joint angles, where the robot dynamics is integrated into the scheme. Finally, both simulations and experiments are performed to validate the proposed schemes.
CO EDITOR-IN-CHIEF: Kuo, Cheng-Chien
ASSOCIATE EDITOR: Su, Shun-Feng
KEYWORDS: static calibration; dynamic compensation; sensor fusion; LSTM network
Nomenclature
iAj = transformation matrix from coordinate system i to j
ai, di, αi = D-H parameters of the ith joint axis
ari, ami = actual and measured linear accelerations of the ith joint axis
bai, bωi = signal biases of linear accelerations and angular velocities
bf, bi, bc, bo = biases of LSTM networks
cDH, cDH0 = vectors of D-H parameters and nominal D-H parameters
ci, si = cosine and sine functions of the rotating angle of the ith joint axis
E[·] = expected value
F, w = matrix and vector in the continuous-time state equation
FDK = position vector of the end-effector by direct kinematics
G = gravitational force vector
H, v = matrix and vector in the measurement equation
J = objective function
K = Kalman filter gain
M = inertia matrix
nai, nωi = signal noises of linear accelerations and angular velocities
P = covariance matrix of the states
p = position vector of the end-effector
q = generalized coordinates
qi = rotating angle of the ith joint axis
T = generalized force vector
t = discrete time
u, v, w = vectors to describe the orientation of the end-effector
V = centrifugal and Coriolis force vector
Wf, Wi, Wc, Wo = weights of LSTM networks
x = state vector
xt, ht = input and output of LSTM arrays
(Xi, Yi, Zi) = ith coordinate system
z = measurements
ΔcDH = variations of the D-H parameter vector
Δt = sampling time
Φ, η = matrix and vector in the discrete-time state equation
ϕ, θ, ψ = Euler angles
ωri, ωmi = actual and measured angular velocities
‖·‖2 = 2-norm
Acknowledgments: This work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 109-2221-E-011-068.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Funding: The work was supported by the Ministry of Science and Technology, Taiwan [MOST 109-2221-E-011-068].
{"title":"Static calibration and dynamic compensation of the SCORBOT robot using sensor fusion and LSTM networks","authors":"Yong-Lin Kuo, Chia-Hang Hsieh","doi":"10.1080/02533839.2023.2261984","DOIUrl":"https://doi.org/10.1080/02533839.2023.2261984","url":null,"abstract":"ABSTRACTThis paper presents both static calibration and dynamics compensation to reduce the positioning errors of the SCORBOT robot. First, a sensor fusion scheme is proposed to estimate the position and attitude of the end-effector of a robot instead of using laser trackers or coordinate measuring machines. The scheme integrates an extended Kalman filter (EKF) with the models of an inertial measurement unit (IMU) and a depth camera. Second, a static calibration scheme is presented to reduce the mechanism errors of robots. The scheme modifies the Denavit-Hartenberg (D-H) parameters provided by the manufacturer based on the least squares method. Third, a dynamic compensation scheme is proposed to reduce the errors caused by robot motions. The scheme establishes a long short-term memory (LSTM) network to compensate the joint angles, where the robot dynamics is integrated into the scheme. Finally, both simulations and experiments are performed to validate the proposed schemes.CO EDITOR-IN-CHIEF: Kuo, Cheng-ChienASSOCIATE EDITOR: Su, Shun-FengKEYWORDS: Static calibrationdynamic compensationsensor fusionLSTM network Nomenclature iAj=transformation matrix form coordinate systems i to jaidiαi=D-H parameters of the ith joint axisariami=actual and measured linear accelerations of the ith joint axisbaibωi=signal biases of linear accelerations and angular velocitiesbfbibcbo=biases of LSTM networkscDHcDH0=of D-H parameters and nominal D-H parameterscisi=cosine and sine functions of rotating angle of the ith joint axisE[]=expected valueFw=matrix and vector in the continuous-time state equationFDK=position vector of the end-effector by direct kinematicsG=gravitational force vectorHv=matrix and vector in the measurement equationJ=objective functionK=Kalman filter gainM=inertia matrixnainωi=signal noises of linear accelerations and angular velocitiesP=covariance matrix of the statesp=position vector of the end-effectorq=generalized coordinatesqi=rotating angle of the ith joint axis.T=generalized force vector.t=discrete timeu, v, w=vectors to describe the orientation of the end-effectorV=Centrifugal and Coriolis force vectorWfWiWcWo=weights of LSTM networksx=state vectorxtht=input and output of LSTM arrays(Xi,Yi,Zi)=ith coordinate systemz=measurementsΔcDH=variations of D-H parameter vectorΔt=sampling timeΦη=matrix and vector in the discrete-time state equationϕθψ=Euler anglesωriωmi=actual and measured angular velocities⋅2=2-normAcknowledgmentsThis work was supported in part by the Ministry of Science and Technology, Taiwan, under Grant MOST 109-2221-E-011-068.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThe work was supported by the Ministry of Science and Technology, Taiwan [MOST 109-2221-E-011-068].","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-16","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136114590","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration-free and deep-learning-based customer gaze direction detection technology based on the YOLOv3-tiny model for smart advertising displays
Wei-Liang Ou, Yu-Hsiu Cheng, Chin-Chieh Chang, Hua-Luen Chen, Chih-Peng Fan
Pub Date: 2023-10-10 | DOI: 10.1080/02533839.2023.2262724
ABSTRACT: Because of the COVID-19 pandemic, gaze tracking for nontouch user interface designs used in advertising displays or automatic vending machines has become an emerging research topic. In this study, a cost-effective deep-learning-based customer gaze direction detection technology was developed for a smart advertising display. To achieve calibration-free interactions between customers and displays, the You-Only-Look-Once (YOLO)-v3-tiny-based deep learning model was used for determining the bounding boxes of eyes and pupils. Next, postprocessing was conducted using a voting mechanism and difference vectors between the central coordinates of the bounding boxes to effectively predict customer gaze directions. Product images were separated into two or four gaze zones. For cross-person testing, the Recall, Precision, Accuracy, and F1-score for two gaze zones were approximately 77%, 99%, 88%, and 87%, respectively, and those for four gaze zones were approximately 72%, 91%, 91%, and 79%, respectively. Software implementations on NVIDIA graphics-processing-unit-accelerated embedded platforms exhibited a frame rate of nearly 30 frames per second. The proposed design achieved real-time gaze direction detection for a smart advertising platform.
CO EDITOR-IN-CHIEF: Yuan, Shyan-Ming
ASSOCIATE EDITOR: Yuan, Shyan-Ming
KEYWORDS: deep learning; YOLOv3-tiny; intelligent systems; smart displays; nontouch user interface design; gaze direction detection; calibration-free
Nomenclature
UL = the gaze state estimated at the upper-left direction
UR = the gaze state estimated at the upper-right direction
DL = the gaze state estimated at the down-left direction
DR = the gaze state estimated at the down-right direction
C_pupil = the central coordinate position of the right or left pupil
C_eye = the central coordinate position of the right or left eye
V_d = the difference vector between two central coordinate positions
X1 = the central X-axis coordinate of the pupil's bounding box
Y1 = the central Y-axis coordinate of the pupil's bounding box
X2 = the central X-axis coordinate of the eye's bounding box
Y2 = the central Y-axis coordinate of the eye's bounding box
TN = the number of true negative cases
TP = the number of true positive cases
FN = the number of false negative cases
FP = the number of false positive cases
F1 Score = a measure of a test's accuracy, computed as 2 × Precision × Recall / (Precision + Recall)
mAP = a metric used to measure the performance of models on object detection tasks
Disclosure statement: No potential conflict of interest was reported by the author(s).
Funding: This work was financially supported by the Ministry of Science and Technology (MOST) under Grant No. [109-2218-E-005-008].
{"title":"Calibration-free and deep-learning-based customer gaze direction detection technology based on the YOLOv3-tiny model for smart advertising displays","authors":"Wei-Liang Ou, Yu-Hsiu Cheng, Chin-Chieh Chang, Hua-Luen Chen, Chih-Peng Fan","doi":"10.1080/02533839.2023.2262724","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262724","url":null,"abstract":"ABSTRACTBecause of the COVID-19 pandemic, gaze tracking for nontouch user interface designs used in advertising displays or automatic vending machines has become an emerging research topic. In this study, a cost-effective deep-learning-based customer gaze direction detection technology was developed for a smart advertising display. To achieve calibration-free interactions between customers and displays, the You-Only-Look-Once (YOLO)-v3-tiny-based deep learning model was used for determining the bounding boxes of eyes and pupils. Next, postprocessing was conducted using a voting mechanism and difference vectors between the central coordinates of the bounding boxes for effectively predicting customer gaze directions. Product images were separated into two or four gaze zones. For cross-person testing, the Recall, Precision, Accuracy, and F1-score for two gaze zones were approximately 77%, 99%, 88%, and 87%, respectively, and those for four gaze zones were approximately 72%, 91%, 91%, and 79%, respectively. Software implementations on NVIDIA graphics-processing-unit-accelerated embedded platforms exhibited a frame rate of nearly 30 frames per second. The proposed design achieved real-time gaze direction detection for a smart advertising platform.CO EDITOR-IN-CHIEF: Yuan, Shyan-MingASSOCIATE EDITOR: Yuan, Shyan-MingKEYWORDS: Deep learningYOLOv3-tinyintelligent systemssmart displaysnontouch user interface designgaze direction detectioncalibration-free Nomenclature UL=the gaze state estimated at the upper left directionUR=the gaze state estimated at the upper right directionDL=the gaze state estimated at the down left directionDR=the gaze state estimated1 at the down right directionC_pupil=the central coordinate position of the right or left pupilC_eye=the central coordinate position of the right or left eyeV_d=the difference vector between two central coordinate positionsX1=the central coordinate position of X-axis of the pupil’s bounding boxY1=the central coordinate position of Y-axis of the pupil’s bounding boxX2=the central coordinate position of X-axis of the eye’s bounding boxY2=the central coordinate position of Y-axis of the eye’s bounding boxTN=the number of true negative casesTP=the number of true positive casesFN=the number of false negative casesFP=the number of false positive casesF1 Score=it is a measure of a test’s accuracy by using 2×Precision×Recall/(Precision + Recall)mAP=it is a metric used to measure the performance of models doing object detection tasksDisclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis work was financially supported by the Ministry of Science and Technology (MOST) under Grant No. 
[109-2218-E-005-008].","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136353094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
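The following sketch illustrates the post-processing idea described in the abstract: a gaze zone is derived from the difference vector between the pupil and eye bounding-box centers, and a majority vote smooths the per-frame decisions. It is a hypothetical example; the function names, sign conventions, and sample boxes are assumptions rather than the paper's implementation.

```python
# Illustrative sketch of the post-processing step: classify a gaze zone from the
# difference vector between the pupil- and eye-bounding-box centers, then smooth
# the decision with majority voting over frames.
from collections import Counter

def box_center(box):
    """box = (x_min, y_min, x_max, y_max) from the detector."""
    x_min, y_min, x_max, y_max = box
    return ((x_min + x_max) / 2.0, (y_min + y_max) / 2.0)

def gaze_zone(eye_box, pupil_box):
    """Map the pupil-to-eye center offset to one of four zones: UL, UR, DL, DR."""
    x1, y1 = box_center(pupil_box)
    x2, y2 = box_center(eye_box)
    dx, dy = x1 - x2, y1 - y2          # difference vector V_d
    horizontal = "L" if dx < 0 else "R"
    vertical = "U" if dy < 0 else "D"  # image y grows downward
    return vertical + horizontal

def vote(zones):
    """Majority vote over the zones predicted for consecutive frames."""
    return Counter(zones).most_common(1)[0][0]

# Example: three frames, two of which vote for the upper-left zone.
frames = [gaze_zone((10, 10, 50, 30), p) for p in [(18, 12, 26, 20)] * 2 + [(40, 22, 48, 30)]]
print(vote(frames))  # -> 'UL'
```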
Modeling of an external force estimator for an end-effector of a robot by neural networks
Goragod Junplod, Woraphrut Kornmaneesang, Shyh-Leh Chen, Sarawan Wongsa
Pub Date: 2023-10-09 | DOI: 10.1080/02533839.2023.2262047
ABSTRACT: This paper proposes a method to estimate external forces at the tip of a robot end-effector by using a neural network model. To avoid the use of an expensive force sensor for training, the proposed method implements an indirect training method by including the inverse dynamic model of the robot manipulator in the training algorithm, using only the information available from a default robot system. In this method, the robot dynamics equations are necessary for the training; therefore, a disturbance observer is adopted to deal with the existing uncertainties and errors. The performance of the proposed estimation method is evaluated through experiments on a 5-DOF robotic experimental platform, compared with an existing estimation method that uses a recurrent neural network with a type-1 disturbance observer for external force estimation. The estimation results show that the behavior of the estimated external forces strongly correlates with the applied external forces and that the proposed method is superior to the other method.
CO EDITOR-IN-CHIEF: Kuo, Cheng-Chien
ASSOCIATE EDITOR: Zhang, Xuefeng
KEYWORDS: external force estimation; indirect training; disturbance observer; neural networks (NNs)
Nomenclature
e = the error between the actual and estimated applied torques
ε = the loss function
Fext and F̂ext = the actual and estimated external forces, respectively
g = the gradient vector of the loss function with respect to the weighting parameters
H = the Hessian matrix of the loss function with respect to the weighting parameters
I = the identity matrix
J = the Jacobian matrix of the robot kinematics
k = the epoch index
λ = the positive damping factor
M = the robot mass inertia matrix
ΔM = the modeling errors and parameter uncertainties in the matrix M
n = the number of degrees of freedom
ni, nh, and no = the numbers of nodes in the input, hidden, and output layers, respectively
n = the torque vector contributed by the centrifugal, Coriolis, gravitational, and friction effects
Δn = the modeling errors and parameter uncertainties in the vector n
N = the number of datasets
q, q̇, and q̈ = the angular displacement, velocity, and acceleration of the robot system, respectively
τ and τ̂ = the actual and estimated applied torques, respectively
τd and τ̂d = the actual and estimated internal disturbances, respectively
w = the weighting parameters in the NN model
Disclosure statement: No potential conflict of interest was reported by the authors.
Funding: This work was supported in part by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan and was also supported in part by the National Science and Technology Council, Taiwan, ROC, under Grants NSTC 111-2218-E-194-005 and NSTC 111-2221-E-194-039-MY2.
{"title":"Modeling of an external force estimator for an end-effector of a robot by neural networks","authors":"Goragod Junplod, Woraphrut Kornmaneesang, Shyh-Leh Chen, Sarawan Wongsa","doi":"10.1080/02533839.2023.2262047","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262047","url":null,"abstract":"ABSTRACTThis paper proposes a method to estimate external forces at the tip of a robot end-effector by using a neural network model. In order to avoid the use of an expensive force sensor in the training purpose, the proposed method implements the indirect training method by including the inverse dynamic model of the robot manipulator to the training algorithm with available information from a default robot system. In this method, the robot dynamics equations are necessary for the training, therefore a disturbance observer is adopted to deal with the existing uncertainties and errors. The performance of the proposed estimation method is evaluated through experiments of a 5-DOF robotic experimental platform, comparing to another existing estimation method using recurrent neural network with a type-1 disturbance observer for the external force estimation. The estimation results show that the behavior of the estimated external forces strongly correlates with the applied external forces and the proposed method is superior to the other method.CO EDITOR-IN-CHIEF: Kuo, Cheng-ChienASSOCIATE EDITOR: Zhang, XuefengKEYWORDS: external force estimationindirect trainingdisturbance observerneural networks (NNs) Nomenclature e=the error between the actual and estimated applied torquesε=the loss functionFextand Fˆext=the actual and estimated external force, respectivelyg=the gradient vector of the loss function with respect to the weighting parametersH=the Hessian matrix of the loss function with respect to the weighting parametersI=the identity matrixJ=the Jacobian matrix of the robot kinematicsk=the epoch indexλ=the positive damping factorM=the robot mass inertia matrixΔM=the modeling errors and parameter uncertainties in the matrix Mn=the number of the degree of freedomni, nh, and no=the number of nodes in the input, hidden, and output layers, respectivelyn=the torque vector contributed by the centrifugal, Coriolis, gravitational, and friction effectsΔn=the modeling errors and parameter uncertainties in the vector nN=the number of datasetsq˙,q,andq¨=the angular displacement, velocity, and acceleration of the robot system, respectivelyτ and τˆ=the actual and estimated applied torques, respectivelyτdand τˆd=the actual and estimated internal disturbances, respectivelyw=the weighting parameters in the NN modelDisclosure statementNo penitential conflict of interest was reported by the authors.Additional informationFundingThis work was supported in part by the Advanced Institute of Manufacturing with High-tech Innovations (AIM-HI) from The Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan and was also supported in part by the National Science and Technology Council, Taiwan, ROC, under Grants NSTC 111-2218-E-194-005 and NSTC 111-2221-E-194 -039 -MY2.","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-09","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135094535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assisting pre-delivery firmware quality assessments using ensemble learning
Zheng-Yun Zhuang, Yu-Chuan Hsu, Shyan-Ming Yuan
Pub Date: 2023-10-04 | DOI: 10.1080/02533839.2023.2262711
ABSTRACT: This study uses retrospective data from firmware tests as the input data sets to train four machine learning models with embedded standalone classifiers. None of these models provides accurate predictions during validation, so model optimization trials adjust the training-validation data portfolio and the hyperparameters of each model. Consequently, only the random forest classifier with the best parametric settings achieves the 90% prediction accuracy required by the standard. Ensemble learning (EL) is then applied using several combinations of the standalone models, and the EL model using logistic regression as the meta-classifier increases the accuracy by 6% (i.e. to 96%), which is sufficient for establishing a predictive system. Using the 'X-minute' method, it is further identified that the execution period (also the data sampling period) for the sequential read test workload can be reduced from 30 minutes (in current practice) to 20 minutes and that the predictions remain sufficiently accurate for system implementation using the EL model. Applying the similarity confirmation method to each pair of 'score vectors' (each of which contains a model's prediction accuracies), several observations distinguishing the performance and the predictive behavioral patterns of the benchmarked models are further confirmed. The knowledge from this research has implications that may benefit future practice in industry.
CO EDITOR-IN-CHIEF: Sun, Hung-Min
ASSOCIATE EDITOR: Sun, Hung-Min
KEYWORDS: quality control; firmware testing; ensemble machine learning; process re-engineering and optimization; decision-support system; AI in industry
Nomenclature
AI = artificial intelligence
APS = automated predictive system
CD = continuous delivery
CI = continuous integration
COVID-19 = coronavirus disease 2019
CSV = comma-separated values
CWV = criteria weight vector
DDDM (D3M) = data-driven decision-making
DSS = decision support system
EL = ensemble learning
FN = false negative
FP = false positive
FW = firmware
I/O = input and output
k-NN = k nearest neighbors
LR = logistic regression
MADM = multi-attribute decision-making
MCDM = multi-criteria decision-making
ML = machine learning
OWV = opinion weight vector
R&D = research and development
RF = random forest
ROV = rank order vector
SCM = similarity confirmation method
SOP = standard operating procedure
SSD = solid state drive
SV = score vector
SVM = support vector machine
TN = true negative
TP = true positive
TTM = time to market
VCS = version control system
Disclosure statement: No potential conflict of interest was reported by the authors.
Funding: This work was supported by the Ministry of Science and Technology, Taiwan (ROC), under grants [MOST-108-2511-H-009-009-MY3, MOST-109-2410-H-992-015 and MOST-111-2410-H-992-011], each in part.
{"title":"Assisting pre-delivery firmware quality assessments using ensemble learning","authors":"Zheng-Yun Zhuang, Yu-Chuan Hsu, Shyan-Ming Yuan","doi":"10.1080/02533839.2023.2262711","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262711","url":null,"abstract":"ABSTRACTThis study uses retrospective data for firmware tests as the input data sets to train four machine learning models with embedded standalone classifiers. None of these models provide accurate predictions during validation, so model optimization trials adjust the training-validation data portfolio and hyper parameters for each model. Consequently, only the random forest classifier with the best parametric settings just achieves the 90% prediction accuracy required by the standard. Ensemble learning (EL) is then applied using several combinations over the standalone models, and the EL model using logistic regression as the meta classifier increases the accuracy by 6% (i.e. to 96%), which is sufficient for establishing a predictive system. Using the ‘X-minute’ method, it is further identified that the execution period (also the data sampling period) for the sequential read test workload can be reduced from 30 (in current practice) to 20 minutes and that the predictions are sufficiently accurate for system implementation using the EL model. Applying the similarity confirmation method for each pair of ‘score vectors’ (each of which contains a model’s prediction accuracies), several observations distinguishing the performance and the predictive behavioral patterns of the benchmarked models are further confirmed. The knowledge from this advanced research has implications which may benefit future practice in industry.CO EDITOR-IN-CHIEF: Sun, Hung-MinASSOCIATE EDITOR: Sun, Hung-MinKEYWORDS: Quality controlfirmware testingensemble machine learningprocess re-engineering and optimizationdecision-support systemAI in industry Nomenclature AI=artificial intelligenceAPS=automated predictive systemCD=continuous deliveryCI=continuous integrationCOVID-19=corona-virus disease 2019CSV=comma-separated valuesCWV=criteria weight vectorDDDM (D3M)=data-driven decision-makingDSS=decision support systemsEL=ensemble learningFN=false negativeFP=false positiveFW=firmwareI/O=input and outputk-NN=k nearest neighborsLR=logistic regressionMADM=multi-attribute decision-makingMCDM=multi-criteria decision-makingML=machine learningOWV=opinion weight vectorR&D=research and developmentRF=random forestROV=rand order vectorSCM=similarity confirmation methodSOP=standard operating procedureSSD=solid state driveSV=score vectorSVM=support vector machineTN=true negativeTP=true positiveTTM=time to marketVCS=version control systemDisclosure statementNo potential conflict of interest was reported by the authors.Additional informationFundingThis work was supported by the Ministry of Science and Technology, Taiwan (ROC), under grants [MOST-108-2511-H-009-009-MY3, MOST-109-2410-H-992 -015 and MOST-111-2410-H-992-011], each in part.","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135590655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special issue: advanced learning in smart robotics
Cheng-Chien Kuo
Pub Date: 2023-10-04 | DOI: 10.1080/02533839.2023.2262730
"Special issue: advanced learning in smart robotics." Journal of the Chinese Institute of Engineers, ahead-of-print (ahead-of-print), p. 1.
Qualifying data on railroad track vibrations: a hybrid data preprocessing flow of statistical and machine learning approaches
Chih-Chiang Lin, Zheng-Yun Zhuang
Pub Date: 2023-10-04 | DOI: 10.1080/02533839.2023.2262718
ABSTRACT: With the growing trend toward increased train speed, steel rails may suffer from quality problems due to overloading and/or the high speed of moving trains. However, before any further analysis can be performed to gain in-depth knowledge, the relevant vibration data sets must be curated, cleansed, preprocessed, and filtered very carefully after they are recorded and collected by the installed sensor equipment. This study proposes a systematic methodological flow to obtain data sets ready for subsequent analysis from messy source data. It hybridizes several statistical and unsupervised machine learning methods, with the final aim of establishing meaningful rules to determine suitable data sets by referring to domain knowledge. This flow was verified using a relatively large database of records of physical vibrations measured in 2019 at specific locations along a curve of an actual railroad track. As the flow can be used to qualify the empirical data sets required in practice, further analysis is provided for the effectiveness of each rule, the differences in determination between the rules, and the effects of combining more than one rule.
{"title":"Qualifying data on railroad track vibrations: a hybrid data preprocessing flow of statistical and machine learning approaches","authors":"Chih-Chiang Lin, Zheng-Yun Zhuang","doi":"10.1080/02533839.2023.2262718","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262718","url":null,"abstract":"ABSTRACT With the growing trend for increased train speed, steel rails may suffer from quality problems due to both overloading and/or the high speed of moving trains. However, before any further analysis can be performed to gain in-depth knowledge, the relevant vibration data sets must be curated, cleansed, preprocessed, and filtered very carefully after they are recorded and collected by the installed sensor equipment. This study proposes a systematic methodological flow to obtain data sets ready for subsequent analysis from messy source data. It hybridized several statistical and unsupervised machine learning methods, with the final aim to establish meaningful rules to determine suitable data sets by referring to domain knowledge. This flow was verified using a relatively large database of records of physical vibrations measured in 2019 at specific locations along a curve of an actual railroad track. As the flow can be used to qualify empirical data sets required in practice, further analysis is provided for the effectiveness of each rule, differences in determination between the rules, and the effects of combining more than one rule.","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"74 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135590637","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D lidar SLAM-based systems in object detection and navigation applications
Shih-An Li, Yun-Chien Chen, Bo-Xian Wu, Hsuan-Ming Feng
Pub Date: 2023-10-02 | DOI: 10.1080/02533839.2023.2261983
ABSTRACT: This paper presents an object detection system based on a 3D LiDAR sensor and Simultaneous Localization and Mapping (SLAM) to complete navigation applications for mobile robots. A 3D SLAM with lightweight and ground-optimized lidar odometry and mapping (LeGO-LOAM) appropriately generates the environmental maps. SLAM is a tool used to obtain information from the environment, allowing mobile robots to know their location. Indoor environment data are immediately created while SLAM is processing the information. The dynamic object detection algorithm depends on the available information to recognize the external morphology and circle the bounding box of moving objects. Therefore, a wheeled mobile robot (WMR) was employed to dynamically trace the object's movement direction. Finally, this study found that the quantum genetic algorithm (QGA) is more efficient in generating a shorter path than particle swarm optimization, and a dynamic window approach (DWA) reacts as soon as a dynamic obstacle is detected. Therefore, the WMR obtains enough object, obstacle, and routing information to effectively and safely reach the destination through the Move_base software package in the Robot Operating System.
CO EDITOR-IN-CHIEF: Kuo, Cheng-Chien
ASSOCIATE EDITOR: Zhang, Xuefeng
KEYWORDS: wheeled mobile robot (WMR); simultaneous localization and mapping (SLAM); navigation; object detection
Nomenclature
c = roughness degree
cth = threshold of the roughness degree
Fet = current edge features
Fpt = current planar features
Fet−1 = previous edge features
Fpt−1 = previous planar features
Mt−1 = previous set of all feature sets
pi = a point in Pt
Pt = the obtained frame of point cloud information
Qt−1 = previous point cloud map
ri = the distance between pi and the sensor
rj = the distance between pj and the sensor
tx = x coordinate of the robot at time t
ty = y coordinate of the robot at time t
tz = z coordinate of the robot at time t
θpitch = the pitch angle of the robot at time t
θroll = the roll angle of the robot at time t
θyaw = the yaw angle of the robot at time t
Acknowledgments: This paper was supported by the Ministry of Science and Technology (MOST) of the Republic of China under contracts MOST 109-2221-E-507-009, MOST 109-2221-E-032-038, and MOST 109-2221-E-032-036.
Disclosure statement: No potential conflict of interest was reported by the authors.
Funding: The work was supported by the Ministry of Science and Technology (MOST) [109-2221-E-032-036].
{"title":"3D lidar SLAM-based systems in object detection and navigation applications","authors":"Shih-An Li, Yun-Chien Chen, Bo-Xian Wu, Hsuan-Ming Feng","doi":"10.1080/02533839.2023.2261983","DOIUrl":"https://doi.org/10.1080/02533839.2023.2261983","url":null,"abstract":"ABSTRACTThis paper considered an object detection system based on 3D LiDAR Sensor and Simultaneous Localization and Mapping (SLAM) to complete the navigation applications of mobile robots. A 3D-based SLAM with lightweight and ground-optimized Lidar odometry and mapping (LeGO-LOAM) appropriately generated the environmental maps. SLAM is a tool used to obtain information from the environment, allowing mobile robots to know their location. Indoor environment data is immedicably created while SLAM is processing the information. The dynamic object detection algorithm depends on the available information to realize the external morphology and circle the bounding box of moving objects. Therefore, a wheeled mobile robot (WMR) was employed to dynamically trace the object’s movement direction. Finally, This study found that the quantum genetic algorithm (QGA) is more efficient in generating a shorter path than the particle swarm optimization, and a dynamic window approach (DWA) is immediately detected as a dynamic obstacle. Therefore, WMR obtains enough object, obstacle, and routing information to effectively and safely reach the destination through the Move_base software package in Robot Operating System.CO EDITOR-IN-CHIEF: Kuo, Cheng-ChienASSOCIATE EDITOR: Zhang, XuefengKEYWORDS: Wheeled mobile robot (WMR)simultaneous localization and mapping (SLAM)navigationobject detection Nomenclature c=roughness degree.cth=Threshold of roughness degree.Fet=Current edge features.Fpt=Current planner feature.Fet−1=Previous edge features.Fpt−1=Previous planner feature.Mt−1=Previous set of all feature setspi=a point in Pt.Pt=the obtained frame of point cloud information.Qt−1=Previous point cloud map.ri=A distance between pi and the sensor.rj=A distance between pj and the sensor.tx=x coordinate of the robot at time tty=y coordinate of the robot at time ttz=z coordinate of the robot at time tθpitch=the pitch angle of the robot at time tθroll=the roll angle of the robot at time tθyaw=the yaw angle of the robot at time tAcknowledgmentsThis paper was supported by the Ministry of Science and Technology (MOST) of the Republic of China under contract MOST 109-2221-E-507-009, MOST 109-2221-E-032-038, and MOST 109-2221-E-032-036.Disclosure statementNo potential conflict of interest was reported by the authors.Additional informationFundingThe work was supported by the Ministry of Science and Technology (MOST) [109-2221-E-032-036].","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"48 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135830012","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Special Issue: Artificial Intelligence in Industrial Applications
Shyan-Ming Yuan, Ruey-Kai Sheu, Zheng-Yun Zhuang
Pub Date: 2023-10-02 | DOI: 10.1080/02533839.2023.2262727
"Special Issue: Artificial Intelligence in Industrial Applications." Journal of the Chinese Institute of Engineers, ahead-of-print (ahead-of-print), pp. 1–2.
Disclosure statement: No potential conflict of interest was reported by the authors.
{"title":"Special Issue: Artificial Intelligence in Industrial Applications","authors":"Shyan-Ming Yuan, Ruey-Kai Sheu, Zheng-Yun Zhuang","doi":"10.1080/02533839.2023.2262727","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262727","url":null,"abstract":"\"Special Issue: Artificial Intelligence in Industrial Applications.\" Journal of the Chinese Institute of Engineers, ahead-of-print(ahead-of-print), pp. 1–2 Disclosure statementNo potential conflict of interest was reported by the authors.","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135835720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fault diagnosis of three-level inverter based on convolutional neural network and support vector machine
Tian Lisi, Zhang Hongwei, Hu Bin, Yu Qiang
Pub Date: 2023-10-02 | DOI: 10.1080/02533839.2023.2262722
ABSTRACT: Due to the strong nonlinearity and high complexity of the NPC three-level inverter system, model-based methods are difficult to use for open-circuit fault diagnosis of power switches. A fault diagnosis method (CNN-SVM) based on the combination of a convolutional neural network (CNN) and a support vector machine (SVM) is proposed. A data fusion method is used to integrate the output voltage characteristics of the inverter, and the fused data are arranged into a grayscale map, which strengthens the connection between preceding and subsequent data. The CNN is used to obtain the integrated voltage-related features, and the SVM is used to classify the obtained features and then judge whether a fault has occurred and where it is located. The experimental results show that the accuracy of the CNN-SVM model for inverter fault diagnosis is more than 96%, and that it has a high processing speed and strong generalization ability.
CO EDITOR-IN-CHIEF: Yuan, Shyan-Ming
ASSOCIATE EDITOR: Sun, Hung-Min
KEYWORDS: convolutional neural network; support vector machine; fault diagnosis; three-level inverter
Nomenclature
a and b = the size of the input feature map
a′ and b′ = the size of the new convolutional layer
ai = the fraction of output i
β = the bias
down() = the down-sampling function
f() = the activation function
m = the size of the convolution kernel
M = the set of input feature maps
l = the current convolution layer
pi = the specified discrete probability distribution
tn = a nonlinear mapping
w = the weight of the convolution kernel
ω = the weight vector
xjl = the output of the layer
xn = the training data
yn = the corresponding labels
εn = a slack variable
Disclosure statement: No potential conflict of interest was reported by the authors.
Funding: This work was supported by the Central University Basic Research Fund of China under Grant [2018QNA09].
{"title":"Fault diagnosis of three-level inverter based on convolutional neural network and support vector machine","authors":"Tian Lisi, Zhang Hongwei, Hu Bin, Yu Qiang","doi":"10.1080/02533839.2023.2262722","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262722","url":null,"abstract":"ABSTRACTDue to the strong nonlinearity and high complexity of NPC three-level inverter system, the model-based method is difficult to be used for open-circuit fault diagnosis of power switches. A fault diagnosis method (CNN-SVM) based on the combination of convolutional neural network (CNN) and support vector machine (SVM) is proposed. The data fusion method is used to integrate the output voltage characteristics of the inverter. The connection between data before and after is increased by it into a grayscale map. CNN is used to obtain the integrated voltage-related features, and SVM is used to classify the obtained features and then judge whether the fault occurs and the location of the fault. The experimental results show that the accuracy of the CNN-SVM model for inverter fault diagnosis is more than 96%, and it has high processing speed and strong generalization ability.CO EDITOR-IN-CHIEF: Yuan, Shyan-MingASSOCIATE EDITOR: Sun, Hung-MinKEYWORDS: Convolutional neural networksupport vector machinefault diagnosisthree-level inverter Nomenclature aandb=The size of the input feature mapa′andb′=The size of the new convolutional layerai=The fraction of output iβ=The biasdown()=The down sampling functionf()=The activation functionm=The size of the convolution kernelM=The set of input feature mapsl=The current convolution layer pi=The specified discrete probability distributiontn=Represents a nonlinear mappingw=The weight of the convolution kernelω=Denotes the weight vectorxjl=The output of the layerxn=The training datayn=Corresponding labelsεn=A slack variableDisclosure statementNo potential conflict of interest was reported by the authors.Additional informationFundingThis work was supported by Central University Basic Research Fund of China under Grant [2018QNA09].","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"96 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-10-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135831330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Behavior of fire-retardant treated bolted timber–steel composites (TSCs) and effective charring depth based on experiment
Meng-Ting Tsai, Chien-Pin Kao
Pub Date: 2023-09-30 | DOI: 10.1080/02533839.2023.2262042
ABSTRACT: Bolted joints make timber–steel composites (TSCs) easy to assemble; however, seams exist between the timber components of bolted TSC components, resulting in the risk of fire spreading into the joint seam when the component is exposed to fire. The effective charring depth provided by the timber needs to be clarified; furthermore, the rising temperature potentially affects the strength of the steel component. In this study, TSCs were tested in fire for 1 hour, and the following issues were examined: the experimental charring depth of the timber components, the influence of the fire-retardant finish on the effective charring depth, and the suggested charring depth for the design of TSCs. In conclusion, the charring depth at the seams was greater than the regulation values, and evaluation methods for the charring depth at seams should be reconsidered. Although the fire-retardant finish reduced the formation rate of the char layer, the flames still breached the seams. The results reveal that Douglas fir TSCs with fire retardant are the most efficient specimens, with an effective charring depth of 52 mm, whereas the effective charring depths of Douglas fir TSCs without fire retardant and Japanese cedar TSCs with fire retardant increase to at least 64 mm and 65 mm, respectively.
CO EDITOR-IN-CHIEF: Ou, Yu-Chen
ASSOCIATE EDITOR: Ou, Yu-Chen
KEYWORDS: timber–steel composites; effective charring depth; fire-retardant; joint seams
Nomenclature
b = original width of the TSC component
d0 = a constant of 7 mm
dchar = charring depth
dchar,eff = effective charring depth
dchar,x = charring depth along the x-axis
dchar,y = charring depth along the y-axis
h = original height of the TSC component
k0d0 = pyrolysis layer thickness
T = average furnace temperature (°C)
t = elapsed time of the experiment (min)
β = charring rate
Acknowledgments: This research was financially supported by the National Taiwan University of Science and Technology under grant number II-2-2, the Forestry Bureau, Council of Agriculture under grant number 110Linfa-04.1-Zao-24(2), and the Ministry of Science and Technology under grant number MOST 110-2221-E-011-055-.
Disclosure statement: No potential conflict of interest was reported by the author(s).
Funding: This work was supported by the Ministry of Science and Technology, Taiwan [MOST 110-2221-E-011-055-]; National Taiwan University of Science and Technology [II-2-2]; Forestry Bureau, Council of Agriculture [110Linfa-04.1-Zao-24(2)].
{"title":"Behavior of fire-retardant treated bolted timber–steel composites (TSCs) and effective charring depth based on experiment","authors":"Meng-Ting Tsai, Chien-Pin Kao","doi":"10.1080/02533839.2023.2262042","DOIUrl":"https://doi.org/10.1080/02533839.2023.2262042","url":null,"abstract":"ABSTRACTThe bolted joints make timber–steel composites (TSCs) easily to be assembled; however, seams exist between the timber components for bolted TSC components resulting in the risk of fire spreading into the joint seam when the component is under fire. The efficient charring depth provided by timber needs to be clarified; furthermore, the raising temperature potentially affects the strength of steel component. In this study, TSCs were tested in fire for 1 hour, and the following issues were examined, including the experimental charring depth of timber components, the influence of fire-retardant finish in order to provide the efficient charring depth, and additionally the suggested charring depth are proposed for the design of TSCs. In conclusion, the charring depth in the seams was greater than the regulation values, and evaluation methods for the charring depth at seams should be reconsidered. Although the fire-retardant finish reduced the formation rate of the char layer, the flames still breached the seams. The results reveal that Douglas fir TSCs with fire retardant are the most efficient specimen, with effective charring depth 52 mm. While the effective charring depth of Douglas fir TSCs without fire retardant and Japanese Cedar TSCs with fire retardant are increased, at least 64 mm and 65 mm needed, respectively.CO EDITOR-IN-CHIEF: Ou, Yu-ChenASSOCIATE EDITOR: Ou, Yu-ChenKEYWORDS: Timber–steel compositeseffective charring depthfire-retardantjoint seams Nomenclature b=original width of TSC componentd0=a constant of 7 mmdchar=charring depthdchar,eff=effective charring depthdchar,x=charring depth along x-axisdchar,y=charring depth along y-axish=original height of TSC componentk0d0=pyrolysis layer thicknessT=Average furnace temperature (°C)t=Elapsed time of experiment (min)β=charring rateAcknowledgmentsThis research was financially supported by National Taiwan University of Science and Technology under grant number II-2-2, Forestry Bureau, Council of Agriculture under grant number 110Linfa-04.1-Zao-24(2), and Ministry of Science and Technology under grant number MOST 110-2221-E-011-055 -.Disclosure statementNo potential conflict of interest was reported by the author(s).Additional informationFundingThis work was supported by the Ministry of Science and Technology, Taiwan [MOST 110-2221-E-011-055 -]; National Taiwan University of Science and Technology [II-2-2]; Forestry Bureau, Council of Agriculture [110Linfa-04.1-Zao-24(2)].","PeriodicalId":17313,"journal":{"name":"Journal of the Chinese Institute of Engineers","volume":"49 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"136280007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}