Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649873
Hyeon-Yeol Lee, Hyeonhee Yi, Jung‐Dong Park
We present the design of a real-time radar sensor model for unmanned surface vehicles (USVs). To construct an efficient learning environment for USV swarm operation, accurate virtual modeling of the radar sensor with a light processing load is necessary. To achieve real-time modeling of marine radar operation with a high level of accuracy under limited computational power, we extract the signal-to-clutter noise ratio (SCNR) from the physical radar specifications together with target radar cross-sections (RCS) pre-extracted using a 3D EM simulator (HFSS). Various clutter sources such as rain, snow, and fog, as well as sea clutter, are modeled for each range bin by generating a Rayleigh-distributed clutter matrix. The standard deviations of the modeled clutter were calculated with widely adopted RCS estimation formulae. The signal processing unit was also modeled by implementing a cell-averaging constant false alarm rate (CA-CFAR) engine to virtualize the physical radar's filtering of backscattered clutter. The presented approach to maritime radar modeling can be useful for implementing low-complexity virtual environments in the development of various unmanned vehicles.
{"title":"Design Considerations of Real-Time Radar Sensor Modeling for Unmanned Surface Vehicle (USV)","authors":"Hyeon-Yeol Lee, Hyeonhee Yi, Jung‐Dong Park","doi":"10.23919/ICCAS52745.2021.9649873","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649873","url":null,"abstract":"We present a design of the real-time radar sensor model for unmanned surface vehicles (USV). To construct an efficient learning environment of an unmanned surface vehicle (USV) for the swarm operation, accurate virtual modeling of the radar sensor with a light processing load is necessary. To achieve real-time modeling of the marine radar operations with a high level of modeling accuracy under a limited computational power, our work is to extract the signal-to-clutter noise ratio (SCNR) by considering physical radar specifications with pre-extracted target radar cross-section (RCS) using a 3D-EM simulator (HFSS). Modeling of various clutters such as rain, snow, fog as well as sea clutter has been carried out for each range bin with the generated clutter matrix with Rayleigh distribution. The standard deviations of the modeled clutter were calculated with widely adopted RCS estimation formulae. Also, the signal processing unit was modeled by implementing a cell average constant false alarm rate (CA-CFAR) engine to virtualize the signal processing effects of the physical radar on filtering backscattering clutters. The presented approach on maritime radar modeling can be useful in implementing a virtual environment with less computational complexity in developing various unmanned vehicles.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"294 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115929554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649998
Yao Wang, J. Ha
In this article, we focus on the scene text recognition problem, one of the challenging sub-fields of computer vision because of the arbitrary appearance of text in natural scenes. Recently, scene text recognition has achieved state-of-the-art performance thanks to advances in deep learning. Encoder-decoder architectures, consisting of a feature extractor and a sequence module, are currently widely used for scene text recognition. At the decoder, connectionist temporal classification (CTC), attention mechanisms, and transformers (self-attention) are the three main approaches used in recent research. The CTC decoder is flexible and can handle sequences with large variations in length because it aligns sequence features with labels in a frame-wise manner. The attention decoder learns deeper feature representations, captures the position of each character more accurately, and achieves more robust performance on both regular and irregular scene text. Moreover, a novel decoder mechanism is introduced in our study. The proposed architecture has several advantages: the model can be trained end-to-end with multiple decoders, and it can handle sequences of arbitrary length and images of arbitrary shape. Extensive experiments on standard benchmarks demonstrate that our model improves performance on both regular and irregular text recognition.
{"title":"Scene Text Recognition with Multi-decoders","authors":"Yao Wang, J. Ha","doi":"10.23919/ICCAS52745.2021.9649998","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649998","url":null,"abstract":"In this article, we focus on the scene text recognition problem, which is one of the challenging sub-files of computer vision because of the random existence of scene text. Recently, scene text recognition has achieved state-of-art performance because of the improvement of deep learning. At present, encoder-decoder architecture was widely used for scene recognition tasks, which consist of feature extractor, sequence module. Specifically, at the decoder part, connectionist temporal classification(CTC), attention mechanism, and transformer(self-attention) are three main approaches used in recent research. CTC decoder is flexible and can handle sequences with large changes in length for its align sequences features with labels in a frame-wise manner. Attention decoder can learn better and deeper feature expression and get the better position information of each character. Attention decoder can get more robust and accurate performance for both regular and irregular scene text. Moreover, a novel decoder mechanism is introduced in our study. The proposed architecture has several advantages: the model can be trained using the end-to-end manner under the condition of multi decoders, and can deal with the sequences of arbitrary length and the images of arbitrary shape. Extensive experiments on standard benchmarks demonstrate that our model's performance is improved for regular and irregular text recognition.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115956054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9650033
Seongwon Yoon, Junwoo Jason Son, Soohee Han
In this paper, an IMU-integrated optical navigation system is presented. Existing motion capture systems must use the geometry of multiple recognized markers to obtain the attitude of an object. In the proposed system, by contrast, a single infrared marker with an embedded IMU and computing board is compactly implemented as an agent module, enabling real-time output of the full 6-DOF pose (position and attitude) with only a small number of cameras. This dramatically alleviates the problem of existing rigid-body-based attitude estimation, in which the attitude estimation error varies with the distance from the cameras. In addition, wireless time synchronization between the cameras and the agents enables unique identification and tracking of each of the multiple agents operating in the same system. The system is therefore expected to have high potential for use in factory automation (FA) and robotics. This paper explains the operating principle of the system and each of its components.
{"title":"IMU-integrated Optical Navigation System for Tracking Multi-agent","authors":"Seongwon Yoon, Junwoo Jason Son, Soohee Han","doi":"10.23919/ICCAS52745.2021.9650033","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9650033","url":null,"abstract":"In this paper, an IMU-integrated optical navigation system is presented. In the existing motion capture system, it was necessary to use the geometry of the recognized multiple markers to obtain the attitude of the object. On the other hand, in the proposed system, with only a small number of cameras, real-time output of 6-DOF including position and attitude was enabled by compactly implementing a single infrared marker with embedded IMU and computing board as an agent module. As a result of the development, it was possible to dramatically improve the problem of the existing rigid body-based attitude estimation method in which the attitude estimation error changes according to the distance from the cameras. In addition, according to the wireless time synchronization implementation of the camera and the agent, unique identification and tracking for each of the multiple agents operating in the same system was enabled. Therefore, this system is expected to have a very high potential for use in factory automation (FA) and robotics. In this paper, the principle of this system and each system component are explained.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115958302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649841
J. Nemcik, F. Krupa, S. Ozana, Z. Slanina, I. Zelinka
The importance of nonlinear model predictive control (NMPC) implementations for industrial processes rises with the increasing computational power of the hardware units used for regulation and control in practice. However, NMPC assumes a sufficiently accurate model, and for more complex systems analytical identification can be problematic. In such cases, numerical approaches may be deployed with benefit. This paper deals with the design of NMPC for a nonlinear model of an overhead crane using a neural network and compares the solution with the one obtained using an analytical model of the system. All steps of the NMPC design and functional verification are performed in MATLAB. The paper finally suggests the possibility of extending the presented approach to host the NMPC algorithm on a real-time embedded target.
{"title":"The use of Neural Network for Nonlinear Predictive Control design for and Overhead Crane","authors":"J. Nemcik, F. Krupa, S. Ozana, Z. Slanina, I. Zelinka","doi":"10.23919/ICCAS52745.2021.9649841","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649841","url":null,"abstract":"The importance of nonlinear model predictive control (NMPC) implementations for industrial processes rises with the increasing of computational power in all hardware units used for regulation and control in practice. However, it assumes a sufficiently accurate model. In case of more complex systems, there might be problem to perform analytical identification. Instead of this, numerical approaches may be deployed with benefit. This paper deals with the design of NMPC for a nonlinear model of an overhead crane using a neural network and compares the solution with the one achieved with the use analytical model of the system. All steps of NMPC design and verification of functionality are performed in Matlab. The paper finally suggests possibility to extend the presented approach for hosting the NMPC algorithm on some real-time embedded target.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"213 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115976360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649885
Warakon Jantapoon, V. Tipsuwanporn, A. Numsomran
Delay time and data packet loss in a wireless networked control system degrade the controller's performance in tracking the reference of the process variable. This research therefore presents the design of a PI controller with a delayed-time integral mode for wireless networked control systems, which adds a delay term to the integral action in the forward path to compensate for unnecessary integration while allowing the control system to track the reference continuously and efficiently. The performance tests are carried out on a first-order-plus-dead-time process, a vertical take-off and landing (VTOL) system, controlled over an IEEE 802.15.4 (Zigbee) wireless network in the MATLAB/TrueTime 2.0 simulator, comparing the control system response with Predictive Proportional-Integral (P-PI) and Smith predictor controllers. The results show that, under disturbance conditions and 60% packet loss, the proposed PI with delayed-time integral mode outperforms the other methods, with a percentage overshoot of 2.3% and an integral absolute error (IAE) of 0.8964 degrees.
{"title":"The Design of PI with Delayed-Time Integral Mode Controller for Wireless Networked Control System","authors":"Warakon Jantapoon, V. Tipsuwanporn, A. Numsomran","doi":"10.23919/ICCAS52745.2021.9649885","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649885","url":null,"abstract":"The problem of delay time and data packet loss in a wireless network control system causes a negative effect on the controller's performance in reference tracking of the process variable. Therefore, this research presents the design of PI with a delayed-time integral mode controller for the wireless networked control system which adds a term of delay time to an integral control in the forward path to compensate the unnecessary integration of the controller, while supporting the control system to track the reference values continuously and efficiently. The performance tests are implemented in the first order plus dead time process, a vertical take-off landing (VTOL), which are controlled using IEEE 802.15.4 (Zigbee) wireless networks, and MATLAB, TrueTime 2.0 simulator by comparing the control system response with Predictive Proportional Integral (P-PI) and Smith Predictor controls. The results showed that with the disturbance conditions and packet loss at 60% the proposed technique, PI with delayed-time integral mode overcome the performance of other methods at a percentage overshoot of 2.3% and the integral absolute error (IAE) 0.8964 degrees.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130479003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9650029
Toui Ogawa, A. Watanabe, I. Omura, Tohru Kamiya
A power device is a semiconductor device for power control, used for power conversion such as converting direct current to alternating current and vice versa. It is widely used in products closely related to our daily lives, such as refrigerators and air conditioners. High reliability and safety are therefore required, and power cycle tests are conducted to evaluate them. In the conventional test, analysis is difficult because sparks are generated during the test and the device is severely damaged afterwards. To solve this problem, a new technology has been developed that adds ultrasonic imaging to enable internal observation during the test. However, a method for analyzing the ultrasonic images obtained with this new technology has not yet been established, and only a few abnormal images are obtained in each test. In this paper, we propose a CNN-based method for detecting abnormal devices. In particular, we implement a Cycle-GAN to augment the abnormal data and classify the images with an improved VGG16. Experimental results show a classification performance of Precision = 97.06%, Recall = 93.58%, and F-measure = 95.17%.
{"title":"Determination of Abnormality of IGBT Images Using VGG16","authors":"Toui Ogawa, A. Watanabe, I. Omura, Tohru Kamiya","doi":"10.23919/ICCAS52745.2021.9650029","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9650029","url":null,"abstract":"A power device is a semiconductor device for power control used for power conversion such as converting direct current to alternating current and alternating current to direct current. It is widely used such as refrigerators, air conditioners which is implemented electronic components that are closely related to our daily lives. Therefore, high reliability and safety are required, and power cycle tests are conducted for the purpose of evaluating them. In the conventional test, there is a problem that it is difficult to perform analysis because sparks are generated during the test and the device is severely damaged after the test. To solve this problem, a new technology has been developed that adds ultrasonic that enable internal observation during the test. However, there are remains a problem that the method for analyzing the ultrasonic image obtained in the new technology has not been established. Also, few abnormal images are obtained in the test. In this paper, we propose a method for detection of abnormal devices based on CNN. Especially, we implement a Cycle-GAN to extend the abnormal data and classify the known image based on improved VGG16. As an experimental result, classification accuracy of Precision = 97.06%, Recall = 93.58%, $F$ - measure = 95.17% were obtained.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133905949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649843
Arpan Ghosh, Jeongwon Pyo, Sunghyeon Joo, Tae-Yong Kuc
Elevator button recognition in an indoor multi-storey environment is a challenging task within the broader problem of indoor navigation for mobile robots. In this paper, we integrate various computer vision approaches for button recognition and tracking in an indoor multi-storey environment. To overcome the problem of detecting elevator buttons, we have prepared a framework that combines various preprocessing techniques with object detection and tracking approaches to recognize the buttons. First, the single-shot object detector YOLOv3 locates the positions of the target buttons, using a region-over-intersection based approach to produce bounding boxes over the required objects. Then we use a part-based tracking algorithm, Deep-SORT, which follows the detected buttons in real time to counter abrupt camera movements. Lastly, we take the bounding box coordinates of the detected buttons and build a semantic map, which can be used to recreate the complete layout of the button panel even from partially detected buttons or frames containing only partial button information.
Title: Elevator button tracking and localization for multi-storey navigation
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9650007
Naoki Miyamura, N. Matsunaga, Hiroshi Okajima
Welfare vehicles are widely used by disabled and elderly people; however, it is difficult for users to pick up a package while they are in the vehicle. For such picking tasks, autonomous mobile robots equipped with a robot arm have been used, but the package must first be pointed out and the robot's actions instructed. For disabled or elderly users, it is desirable that the instructions needed to operate the robot be as simple as possible. Recently, Mixed Reality (MR) devices have attracted attention for supporting these tasks, since users can give instructions to the robot intuitively through an MR head-mounted display (HMD). In this paper, an assist robot is proposed in which the robot arm automatically picks up the package by suction once its rough position is indicated through the HMD. We have developed a system in which the user commands the assist robot to handle packages from a distant location using the HMD, and the robot picks up the packages automatically using AR markers.
{"title":"Development of an automatic picking system for assistive robots using MR","authors":"Naoki Miyamura, N. Matsunaga, Hiroshi Okajima","doi":"10.23919/ICCAS52745.2021.9650007","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9650007","url":null,"abstract":"Welfare vehicles are widely used for disabled or elders, however, it is difficult for users to pick up their package while they are in the vehicle. For these picking tasks, the automatic mobile robot with a robot arm has been used. The pointing of the package and its actions should be instructed at first. For disabled or elders, however, it is desirable to give instructions as simple as possible to operate the robot. Recently, Mixed Reality (MR) devices attract attention to support these tasks. Users will give instructions for the robot intuitively by using an HMD with MR. In this paper, an assist robot in which a robot arm automatically sucks the package is proposed by indicating the rough position using HMD. We have developed a system in which the user commands an assist robot to handle packages from a distance location using an HMD and an assist robot picks packages automatically using AR markers.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130758380","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649781
Bongsang Kim, Nahyun Kim, Insung Kang, Myeonghun Sim, Jungeun Seok, Sehyun Lim, Heechang Moon
The spread of COVID-19 has raised the importance of unmanned operation and disinfection tasks; therefore, the development and commercialization of service robots, such as disinfection robots, is actively underway. In addition, modern society is personalized and values individuality, so understanding and interacting with individual tastes and preferences is an important factor in value creation. Understanding individuals starts with communication, and in this context smooth interaction between humans and robots is seen as a factor that determines the success of service robot popularization, so research on it is needed. In this research, the design of robots performing disinfection tasks in subway stations is studied, focusing on interactions in which robots coexist with humans and facilitate disinfection tasks. Through this work, a subway disinfection robot system and an appearance design that considers human-centered factors are proposed. Practical Human-Robot Interaction elements such as displays, voice, and laser projectors are also designed, and ways to use them in responding to situations that may arise in the subway are presented. In particular, measures are shown for addressing the structural limitations of the robot through human consideration and interaction in disinfection tasks and elevator boarding situations.
{"title":"Suggestion on the Practical Human Robot Interaction Design for the Autonomous Driving Disinfection Robot","authors":"Bongsang Kim, Nahyun Kim, Insung Kang, Myeonghun Sim, Jungeun Seok, Sehyun Lim, Heechang Moon","doi":"10.23919/ICCAS52745.2021.9649781","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649781","url":null,"abstract":"The spread of Covid-19 has raised the importance of unmanned and disinfection tasks, therefore the development and commercialization of service robots, such as disinfection robots, is actively underway. In addition, modern society is a personalized, where individuality is valued, with understanding and interaction of individual tastes and preferences, playing an important factor in value creation. Individual understanding starts with communication, and in this context, popularization of service robots believes that smooth interaction between humans and robots is a factor that determines success, and research is needed. In this research, the design of robots performing disinfection tasks in subway stations is studied for interactions in which robots coexist with humans and facilitate disinfection tasks. Through this work, a designed subway disinfection robot system and an appearance design considering human-centered factors are proposed. It also designed practical Human-Robot Interaction elements such as displays, voice, and laser projectors, and presented ways to utilize them to respond to possible situations in the subway. In particular, measures are shown to solve the structural problems of robots through human consideration and interaction in disinfection tasks and elevator boarding situations.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133057946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649755
C. Stoeffler, Michael Zipper, Jonathan Babel
In this work, we show a design procedure for Transverse Flux Machines, which belong to the class of direct drives and are becoming more relevant in robotics, including underwater applications. These drives exhibit a range of advantages due to the omission of gears. This usually requires that the geometry be adapted to the motor's use case, which is a demanding task. An analytical modeling approach, based on the work of Pourmoosa [15], is used in combination with the open-source software OpenModelica to simulate arbitrary designs of this type. This allows fast simulation of a multitude of motor designs. In combination with a specifically constructed genetic algorithm, we show that preferable designs can be obtained under predefined performance parameters. The method therefore provides useful pre-computations for drives of this kind and potentially enables their use in more robotic applications.
{"title":"A Rapid Design Approach for Transverse Flux Machines in Underwater Applications","authors":"C. Stoeffler, Michael Zipper, Jonathan Babel","doi":"10.23919/ICCAS52745.2021.9649755","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649755","url":null,"abstract":"In this work, we show a design procedure for Transverse Flux Machines that belong to the class of direct drives, which become more relevant in robotics - also for underwater applications. These drives exhibit a range of advantages, due to the omittance of gears. This usually requires that the geometry is adapted to the motor's use case, which is a demanding task. An analytical modeling approach, based on the work of Pourmoosa [15], is used in combination with the open-source software OpenModelica to simulate arbitrary designs of this type. This allows a fast simulation of a multitude of motors. In combination with a specifically constructed genetic algorithm, we show that preferable designs can be obtained under predefined performance parameters. The method therefore gives rise to useful pre-computations for drives of this kind and potentially allows their usage in more robotic applications.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133402980","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}