Vehicle Localization Using Radar Calibration with Disconnected GPS
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649889
Jeong Sik Kim, Woo Young Choi, Yong Woo Jeong, C. Chung
As autonomous driving technology develops, research on localization methods is becoming more important. In this paper, we propose a global positioning system (GPS) and radar calibration method, together with a vehicle localization method that uses a radar sensor and vehicle-to-everything (V2X) communication. For vehicle localization, we first propose a GPS-radar calibration that resolves the differences between their detection points. With this calibration, the position of the ego vehicle can be calculated during a GPS outage by applying a rotational transform to the vehicle chassis data and radar data. The localization process estimates the absolute coordinates of the ego vehicle by adding the relative coordinates of the ego vehicle to the absolute coordinates of the object vehicle. The advantage of the proposed method is that localization can continue as long as the radar is available.
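As an illustration of the coordinate update described in this abstract, the sketch below adds the rotated relative position of the ego vehicle to the object vehicle's absolute coordinates. The 2-D simplification, variable names, and the source of the heading angle are assumptions made for illustration only, not the paper's implementation.

```python
import numpy as np

def ego_absolute_position(p_obj_abs, p_ego_rel, heading_rad):
    """Estimate the ego vehicle's absolute 2-D position.

    p_obj_abs   : absolute (x, y) of the object vehicle, e.g. shared over V2X
    p_ego_rel   : (x, y) of the ego vehicle relative to the object, in the vehicle frame
    heading_rad : heading angle used for the rotational transform
    """
    c, s = np.cos(heading_rad), np.sin(heading_rad)
    R = np.array([[c, -s],
                  [s,  c]])          # rotate the relative vector into the global frame
    return np.asarray(p_obj_abs) + R @ np.asarray(p_ego_rel)

# Ego vehicle 20 m behind an object vehicle located at (100, 50) m, heading 30 degrees
print(ego_absolute_position([100.0, 50.0], [-20.0, 0.0], np.deg2rad(30.0)))
```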
{"title":"Vehicle Localization Using Radar Calibration with Disconnected GPS","authors":"Jeong Sik Kim, Woo Young Choi, Yong Woo Jeong, C. Chung","doi":"10.23919/ICCAS52745.2021.9649889","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649889","url":null,"abstract":"As autonomous driving technology develops, research on localization methods is becoming more important. In this paper, we propose global positioning system (GPS) and radar calibration method, and vehicle localization method using a radar sensor based on vehicle to everything (V2X). For vehicle localization, we first propose GPS and radar calibration, which is a way to solve the differences between detection points. With this calibration, during disconnection of GPS, we calculate the position of the ego vehicle by using the rotational transform with vehicle chassis data and radar data. The localization process estimated the absolute coordinates of the ego vehicle by adding the relative coordinates of the ego vehicle and the absolute coordinates of the object vehicle. The advantage of this method what this paper proposes is that if radar is available, we can localize continuously.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132382365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparisons of Auditory, Audiovisual, and Visual Modalities in Feature Domain for Auditory Brain-Computer Interfaces
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649793
Yun-Joo Choi, Minju Kim, Jongsu Kim, Dojin Heo, Sung-Phil Kim
The development of non-visual P300-based brain-computer interfaces (BCIs) is needed for patients with unreliable gaze control and for healthy users facing visual distractors. As an alternative, auditory BCIs have been developed, but they have reportedly shown relatively low performance. To elucidate this performance gap, this study investigated the feature domains of auditory and visual BCIs, along with an audiovisual BCI that combines the two. In addition to the online test, a cross-modality assessment was conducted to compare the performance of the three modalities, revealing that classification performance became significantly lower when the features of the auditory BCI were included. When comparing the features that showed significant differences between target and nontarget stimuli for each subject in each modality, individual differences in the selected features were more pronounced in the auditory BCI than in the other modalities, meaning that features common across subjects were scarce for the auditory BCI. Moreover, the largest decrease between the performance of the online test and of Leave-One-Subject-Out (LOSO) cross-validation, which was conducted with the selected features, was observed in the auditory modality. Our results suggest potential sources of the performance gap between auditory and visual BCIs in the feature domain.
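A generic sketch of the Leave-One-Subject-Out evaluation referred to above; the placeholder feature matrix, LDA classifier, and epoch counts are assumptions, since the abstract does not specify the exact pipeline.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder ERP features: one row per epoch, plus target/nontarget labels
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 40))           # e.g. selected P300 amplitude features
y = rng.integers(0, 2, size=600)         # 1 = target, 0 = nontarget
subjects = np.repeat(np.arange(10), 60)  # subject index for each epoch

# LOSO: every fold trains on 9 subjects and tests on the left-out subject
loso = LeaveOneGroupOut()
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y,
                         groups=subjects, cv=loso)
print(scores.mean())
```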
{"title":"Comparisons of Auditory, Audiovisual, and Visual Modalities in Feature Domain for Auditory Brain-Computer Interfaces","authors":"Yun-Joo Choi, Minju Kim, Jongsu Kim, Dojin Heo, Sung-Phil Kim","doi":"10.23919/ICCAS52745.2021.9649793","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649793","url":null,"abstract":"The development of non-visual P300-based brain-computer interfaces (BCIs) is needed for patients with unreliable gaze control or healthy users with visual distractors. As an alternative means, auditory BCIs have been developed, but reportedly showed relatively low performance. To elucidate the performance gap, this study investigated the feature domain between the auditory and visual BCIs, along with the audiovisual BCI as the combination of the two. Not only the online test, but also a cross-modality assessment was conducted to compare the performance of three modalities, revealing that the classification performance became significantly low when the feature of the auditory BCI was included. When comparing the features that showed significant differences between the target and nontarget stimuli of each subject in each modality, significant individual differences in selected features were more pronounced in the auditory BCI than others, meaning that the common features across subjects were scarce for the auditory BCI. Moreover, the biggest decrease was shown in the auditory modality when comparing the performance of online test and Leave-One-Subject-Out (LOSO) cross validation, which was conducted with selected features. Our results suggest potential sources of the performance gap between auditory and visual BCIs in the context of feature domain.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132399431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Diagnosis of Diabetic Retinopathy: A Transfer Learning Approach
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649801
Farhan Nabil Mohd Noor, A. P. Majeed, Mohd Azraai Mod Razmam, I. M. Khairuddin, W. M. Isa
Diabetic Retinopathy is one of the complications of diabetes mellitus that affects the eye. Elevated blood glucose levels damage the retinal blood vessels, causing blood and other fluids to leak. Diabetic Retinopathy is a silent ailment that patients may not discover until the abnormalities in the retina have progressed to the point where treatment is difficult or impossible, and it can also result in patients losing their sight completely. However, an automated screening system may help overcome this problem by helping ophthalmologists diagnose diabetic retinopathy patients as early as possible. Hence, this research investigates the effectiveness of automated screening by employing a transfer learning model, VGG16, to extract features that are then fed to a Support Vector Machine (SVM), k-Nearest Neighbour (kNN), and Random Forest (RF) for classification. The VGG16-SVM pipeline displayed the most promising performance on the classification of Diabetic Retinopathy.
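A minimal sketch of a VGG16-feature-extraction plus SVM pipeline of the kind investigated here, assuming 224x224 RGB fundus images; the data, labels, and hyperparameters below are placeholders rather than the authors' settings.

```python
import numpy as np
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input
from sklearn.svm import SVC

# Frozen VGG16 as a fixed feature extractor (ImageNet weights, no classifier head)
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=(224, 224, 3))

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return backbone.predict(preprocess_input(images), verbose=0)

# Placeholder data; in practice these would be labelled fundus images
X_train = np.random.rand(16, 224, 224, 3) * 255
y_train = np.random.randint(0, 2, size=16)   # 0 = no DR, 1 = DR

clf = SVC(kernel="rbf", C=1.0)
clf.fit(extract_features(X_train), y_train)
```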
{"title":"The Diagnosis of Diabetic Retinopathy: A Transfer Learning Approach","authors":"Farhan Nabil Mohd Noor, A. P. Majeed, Mohd Azraai Mod Razmam, I. M. Khairuddin, W. M. Isa","doi":"10.23919/ICCAS52745.2021.9649801","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649801","url":null,"abstract":"Diabetic Retinopathy is one of the complications of diabetes mellitus that occurs to the eye. It damages the blood vessels, which cause the leaking of the blood and other fluids due to the elevated blood glucose level. Diabetic Retinopathy is a quiet ailment that patients may not discover until abnormalities in the retina have progressed to the point that medication is difficult or impossible. It can also result in patients losing their sight completely. However, an automated screening machine may help overcome this problem by helping the ophthalmologist diagnose diabetic retinopathy patients as soon as possible. Hence, this research investigates the effectiveness of automatic screening machine by employing the Transfer Learning model such as VGG16 to extract the features and fed them to the Support Vector Machine (SVM), k-Nearest Neighbour (kNN) and Random Forest (RF) for the classification. It was shown that the VGG16-SVM pipeline displayed the most promising performance on the classification of Diabetic Retinopathy.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132455529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Costmap Generation Based on Dynamic Obstacle Detection and Velocity Obstacle Estimation for Autonomous Mobile Robot
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649733
Chin-Sheng Chen, Si-Yu Lin
The environmental conditions corresponding to dangerous or collision-prone areas are generally represented by a costmap when an Autonomous Mobile Robot (AMR) navigates. This paper provides a Costmap 2D layer plug-in, the Velocity Obstacle layer, which accurately detects an obstacle's coordinates and radius and then estimates the obstacle's velocity to create a velocity obstacle representing potential future collision vectors. In the simulation, we assume the robot's maximum velocity is 0.2 m/s and that an obstacle moves toward the robot at 0.3 m/s. The results show that the AMR avoids the obstacle well. In a real-world experiment, the AMR is also able to avoid people moving toward it.
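For readers unfamiliar with the underlying concept, the sketch below implements the standard velocity-obstacle membership test for a circular moving obstacle; the function and parameter names are illustrative and do not come from the plug-in itself.

```python
import numpy as np

def in_velocity_obstacle(v_robot, p_robot, v_obs, p_obs, r_combined):
    """Return True if v_robot lies inside the velocity obstacle induced by a
    circular obstacle, where r_combined is the robot radius plus obstacle radius."""
    rel_p = np.asarray(p_obs) - np.asarray(p_robot)   # obstacle position relative to robot
    rel_v = np.asarray(v_robot) - np.asarray(v_obs)   # robot velocity relative to obstacle
    dist = np.linalg.norm(rel_p)
    if dist <= r_combined:
        return True                                   # already overlapping
    # Half-angle of the collision cone seen from the robot
    half_angle = np.arcsin(r_combined / dist)
    # Angle between the relative velocity and the line of sight to the obstacle
    cos_angle = rel_p @ rel_v / (dist * np.linalg.norm(rel_v) + 1e-9)
    angle = np.arccos(np.clip(cos_angle, -1.0, 1.0))
    return angle <= half_angle and rel_p @ rel_v > 0

# Obstacle 2 m ahead moving toward a robot capped at 0.2 m/s (as in the simulation)
print(in_velocity_obstacle([0.2, 0.0], [0.0, 0.0], [-0.3, 0.0], [2.0, 0.0], 0.5))
```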
{"title":"Costmap Generation Based on Dynamic Obstacle Detection and Velocity Obstacle Estimation for Autonomous Mobile Robot","authors":"Chin-Sheng Chen, Si-Yu Lin","doi":"10.23919/ICCAS52745.2021.9649733","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649733","url":null,"abstract":"The environmental conditions corresponding to dangerous or collided areas are generally represented by Costmap when the Autonomous Mobile Robot (AMR) is navigated. Here, this paper provides a Costmap 2D layer plug-in, Velocity Obstacle layer, it can accurately detect obstacle's coordination and radius and then estimate the obstacle's velocity to create Velocity Obstacle which can represent the potential collision vector in the future. In the simulation, we assume the robot's max velocity is 0.2m/s and an obstacle move forward to the robot with 0.3m/s. The results show the AMR can avoid the obstacle well. In experiment, the AMR also can avoid the people moving toward it in the real world.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130794851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Implementation of Rehabilitation Platform based on Augmented Reality Technology
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9650059
Nguyen Truong Thinh, N. Quoc, Nguyen Vo Tam Toan, Tran The Luc
Stroke is now one of the leading causes of death and of disability in both motor and cognitive functions. In addition, rehabilitation for stroke survivors faces many physical and human-resource difficulties. To address this issue, we studied the use of Augmented Reality (AR) in rehabilitation through a system called RARS. This system creates limb rehabilitation exercises in the form of games with an AR interface. The goal of RARS is to increase patients' positive emotions, motivating them to enjoy rehabilitation exercises. As a result, recovery efficiency is improved and the burden on physiotherapists is reduced. RARS was evaluated in a study of its effectiveness in rehabilitation for post-stroke patients (n=10). The reported results showed that RARS produced significant improvement in the patients' indicators of functional status. This shows that the system not only brings clear personnel and economic benefits but also has large potential for future growth.
{"title":"Implementation of Rehabilitation Platform based on Augmented Reality Technology","authors":"Nguyen Truong Thinh, N. Quoc, Nguyen Vo Tam Toan, Tran The Luc","doi":"10.23919/ICCAS52745.2021.9650059","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9650059","url":null,"abstract":"Stroke is now one of the leading causes of death and disability in both motor and cognitive functions. In addition, rehabilitation for stroke survivors also faces many physical and human difficulties. To address this issue, we studied the use of Augmented Reality (AR) in rehabilitation using a system called RARS. This system creates limbs rehabilitation exercises in the form of games using an AR interface. The goal of the RARS is to increase patient's positive emotions, which motivates them to enjoy rehabilitation exercises. As a result, recovery efficiency will be improved, and the burden on physiotherapists will be reduced. The RARS is evaluated through a research about the effectiveness of this system on rehabilitation for patients after stroke (n=10). Reported results showed that the RARS produced significant improvement in the patient's indicators of functional status. From that, this system shows that it not only creates great benefits for personnel and economic but also bring about huge potential for future growth.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130887323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Chatbots and Conversational Agents in Mental Health: A Literature Review
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649855
Sergazy Narynov, Z. Zhumanov, Aidana Gumar, Mariyam Khassanova, B. Omarov
In this study, we reviewed chatbots and conversational agents, the technologies for creating them, and the perspectives and ethical issues in this area. Examples of therapies used by psychologists and psychotherapists, and the prospects of applying them in a chatbot, are also explored in this review. As a result of the review, we selected the chatbot concepts for our own work and identified technologies and methods for the further development of a chatbot for mental health. We concluded that we would develop a chatbot for psychological help based on cognitive behavioral therapy. Overall, we conclude that chatbots are indeed able to provide effective psychological assistance and reduce depression and anxiety in people.
{"title":"Chatbots and Conversational Agents in Mental Health: A Literature Review","authors":"Sergazy Narynov, Z. Zhumanov, Aidana Gumar, Mariyam Khassanova, B. Omarov","doi":"10.23919/ICCAS52745.2021.9649855","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649855","url":null,"abstract":"In this study, we looked at chatbots, conversational agents, technologies for creating conversational agents, perspectives, and ethical issues in this direction. Also examples of therapy that are used by psychologists, psychotherapists, and the prospects of using them in a chatbot are explored in this review. As a result of the review, we considered the chatbot concepts for ourselves and identified technologies and methods for further development of the chatbot for mental health. We came to the conclusion to develop a chatbot for psychological help with the use of cognitive behavioral therapy. As a result of the study, we conclude that chatbots are really able to provide effective psychological assistance and reduce depression and anxiety in people.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130279473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Frequency modulation about electrical noise effect in Radar application for Autonomous Vehicle Systems
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649994
Sung Hoon Kim, Jong Gyu Park
Implementing autonomous vehicle systems has become an important trend. Many OEMs are studying and developing autonomous systems for their own vehicles, as these are a very attractive promotion point in the market. For autonomous vehicle systems, many sensor solutions are considered and implemented, and radar is among the most important of them. Radar is robust to weather conditions, but electrical noise causes critical issues for object detection and target classification in a radar system. From a post-processing point of view, it is difficult to detect and classify objects in the presence of electrical noise, and detection and classification performance directly affects the safety of an autonomous vehicle system. Normally, an external hardware filter is used to reduce noise on the PCB, but this increases production cost and the PCB area needed to assemble the filter. Frequency modulation based on spreading is a well-known general solution for reducing noise effects in a system. This paper suggests a frequency modulation method, realized with the hardware peripherals of the microcontroller and the PMIC, to address electrical noise effects in radar applications.
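The spreading idea can be illustrated numerically: dithering the modulation frequency distributes the energy over a wider band and lowers the worst spectral peak. The signal model and dither parameters below are illustrative assumptions, not the authors' MCU/PMIC configuration.

```python
import numpy as np

fs = 1_000_000                      # sample rate [Hz]
t = np.arange(0, 0.01, 1 / fs)      # 10 ms observation window
f0 = 100_000                        # nominal switching/clock frequency [Hz]

# Fixed-frequency tone: all energy sits in one spectral line
fixed = np.sin(2 * np.pi * f0 * t)

# Spread-spectrum version: slowly dither the instantaneous frequency by +/-5 %
dither = 0.05 * f0 * np.sin(2 * np.pi * 300 * t)    # 300 Hz sinusoidal sweep
phase = 2 * np.pi * np.cumsum(f0 + dither) / fs     # integrate frequency -> phase
spread = np.sin(phase)

def peak_db(x):
    """Highest spectral peak of a Hann-windowed signal, in dB (relative scale)."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(spec.max() / len(x))

# The spread signal's highest spectral peak is several dB below the fixed tone's
print(peak_db(fixed), peak_db(spread))
```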
{"title":"Frequency modulation about electrical noise effect in Radar application for Autonomous Vehicle Systems","authors":"Sung Hoon Kim, Jong Gyu Park","doi":"10.23919/ICCAS52745.2021.9649994","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649994","url":null,"abstract":"It has become an important trend to implement Autonomous vehicle systems. Many of OEM's are studying and developing Autonomous system at their own vehicle. It is very attractive promotion point in the market. For Autonomous Vehicle system, many sensor solutions are considered and implemented. Radar application is the most important Sensor solution for Autonomous vehicle systems. In Radar application case, it has robustness in weather condition but electrical noise effect cause very critical issue for object detection and target classification in Radar system. In post processing point of view, It is difficult to detect and classification object in electrical noise situation. Detection and classification performance directly effect to safety of Autonomous vehicle system. Normally, external hardware filter is used to reduce noise on PCB but it has issue about production cost and PCB space increase to assemble filter. Frequency modulation based on spreading is well-known general solution to reduce noise effect in system. In this paper, it suggests frequency modulation method by hardware peripheral of microcontroller and PMIC about electrical noise effect in radar application.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130286862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UWB based Relative Navigation and Leader-Follower Formation for UAVs using Maneuvering of a Follower
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649880
Seongbong Lee, Cheonman Park, Sang-yeoun Lee, J. Jeon, Dongjin Lee
In this paper, we propose a UWB-based leader-follower formation flight method. Leader-follower formation flight requires the relative position between the leader and the follower, but when using only UWB, the bearing angle between them cannot be obtained. To estimate the relative position, we use the relative velocity expressed in a reference frame. We assume that the UAVs estimate their velocities using inertial measurement units and share them through an ad-hoc network, and the change of relative position is estimated by numerically integrating the relative velocity. Using the set of changes of relative position together with the relative distance, we estimate the relative position. Conditions exist under which the relative position cannot be estimated; to avoid them, we modify the guidance law of one of the followers. The proposed method is validated through simulation.
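One way to realize the estimation step described above is to linearize the range equations around the integrated relative displacements; the sketch below does this under assumed noiseless measurements and illustrative variable names, and is not the paper's exact formulation.

```python
import numpy as np

def relative_position_from_ranges(displacements, ranges):
    """Estimate the initial leader-follower relative position p (2-D).

    displacements : (N, 2) changes of relative position from numerically
                    integrating the shared relative velocity (d_0 = [0, 0])
    ranges        : (N,) UWB range measurements r_k, with ||p + d_k|| = r_k
    Subtracting the k = 0 equation gives 2 d_k . p = r_k^2 - r_0^2 - ||d_k||^2,
    which is linear in p and solved in the least-squares sense.
    """
    d = np.asarray(displacements, dtype=float)
    r = np.asarray(ranges, dtype=float)
    A = 2.0 * d[1:]
    b = r[1:] ** 2 - r[0] ** 2 - np.sum(d[1:] ** 2, axis=1)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

# Follower truly starts 5 m behind and 2 m to the side of the leader
p_true = np.array([-5.0, 2.0])
d = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.5], [3.0, 1.5]])  # integrated relative velocity
r = np.linalg.norm(p_true + d, axis=1)                          # noiseless UWB ranges
print(relative_position_from_ranges(d, r))                      # ~[-5.0, 2.0]
```

In this formulation, the unobservable case mentioned in the abstract shows up as (nearly) collinear displacement vectors, which make the least-squares system rank-deficient; modifying the follower's guidance law presumably keeps the displacements away from that condition.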
{"title":"UWB based Relative Navigation and Leader-Follower Formation for UAVs using Maneuvering of a Follower","authors":"Seongbong Lee, Cheonman Park, Sang-yeoun Lee, J. Jeon, Dongjin Lee","doi":"10.23919/ICCAS52745.2021.9649880","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649880","url":null,"abstract":"In this paper, we propose UWB based leader-follower formation flight method. For leader-follower formation flight, relative position between leader and follower is required, but when using only UWB, bearing angle between them is not obtained. In this paper, to estimate relative position we use relative velocity based on reference frame. It assumes that velocities of UAVs are estimated each other using inertial measurement unit and shared through ad-hoc network and change of relative position is estimated by numerically integrating relative velocity. using set of change of relative position and relative distance, we estimate relative position. Conditions that relative position cannot estimate exist, to avoid satisfying it we modified guidance law of one of followers. proposed method is validated through simulation.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127909220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Calibration and 3D Reconstruction of Images Obtained Using Spherical Panoramic Camera
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649783
H. Madokoro, Satoshi Yamamoto, Yoshiteru Nishimura, Stephanie Nix, Hanwool Woo, Kazuhito Sato
This study was conducted to develop a 3D reconstruction procedure for application to crop monitoring. For the 3D reconstruction of a similar target object, we compared images obtained from two camera types: a compact digital camera (CDC) and a spherical panoramic camera (SPC). First, we calculate the camera parameters from images that include a checkerboard. Subsequently, we correct the image distortion, including that of the target object, using the camera parameters. Finally, we estimate the camera positions and perform three-dimensional (3D) reconstruction based on structure from motion (SfM). The experimentally obtained results demonstrated that the 3D reconstruction of a target object was improved after calibration compared with before calibration. Moreover, we conducted an application experiment using a tree in an outdoor environment as a trial of practical use at a farm.
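A minimal sketch of the checkerboard calibration and undistortion steps using OpenCV's standard pinhole model; the pattern size, file paths, and choice of the non-fisheye model are assumptions, and the paper's spherical-camera processing may well differ.

```python
import glob
import cv2
import numpy as np

# Checkerboard with 9x6 inner corners; square size in arbitrary units (assumed values)
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, size = [], [], None
for path in glob.glob("calib/*.jpg"):                 # hypothetical calibration folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        size = gray.shape[::-1]

# Estimate intrinsics and distortion coefficients, then undistort a target image
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, size, None, None)
undistorted = cv2.undistort(cv2.imread("target.jpg"), K, dist)   # hypothetical file name
cv2.imwrite("target_undistorted.jpg", undistorted)
```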
{"title":"Calibration and 3D Reconstruction of Images Obtained Using Spherical Panoramic Camera","authors":"H. Madokoro, Satoshi Yamamoto, Yoshiteru Nishimura, Stephanie Nix, Hanwool Woo, Kazuhito Sato","doi":"10.23919/ICCAS52745.2021.9649783","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649783","url":null,"abstract":"This study was conducted to develop a 3D reconstruction procedure for application to crop monitoring. For 3D construction of a similar target object, we compared images obtained from two camera types: a compact digital camera (CDC) and a spherical panoramic camera (SPC). First, we calculate camera parameters from images that include a checkerboard. Subsequently, we correct the image distortion including that of the target object using the camera parameters. Finally, we estimate camera positions and three-dimensional (3D) reconstruction based on the structure from motion (SfM). Experimentally obtained results demonstrated that the 3D reconstruction of a target object was improved after calibration compared with that before calibration. Moreover, we conducted an application experiment using a tree in an outdoor environment as a trial of practical use at a farm.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129171071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Digital application of bioreactor monitoring
Pub Date: 2021-10-12 | DOI: 10.23919/ICCAS52745.2021.9649767
Hye Ji Lee, S. Fadda, Lorena F.S. Souza, Jong Min Lee
Real-time monitoring of batch processes is beneficial, as many batch processes are employed to produce high value-added products. Although many theoretical studies deal with monitoring algorithms, platforms on which to apply these ideas, as well as algorithms based on mechanistic models, are rarely proposed. In this study, the gPROMS Digital Applications Platform (gDAP), which takes care of all activities for the online implementation and execution of models, is described. This platform connects a mechanistic cell culture model implemented in the gPROMS FormulatedProducts modeling environment with plant data and web-based dashboards that display both the plant data and the values estimated by the model. The mathematical model is used to predict the process performance and is started by the user from the dashboards. Model outputs are instantly stored in a database where all the plant and model data are historized, and users can access the dashboards, which display the stored values, through the internet. The proposed platform is applied to a monoclonal antibody production process.
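As a generic illustration of the data flow described above (plant and model values historized in a database that dashboards query), and not of the gPROMS/gDAP API itself, a minimal sketch might look like this:

```python
import sqlite3
import time

# Historian database shared by the model runner and the web dashboard (illustrative schema)
conn = sqlite3.connect("historian.db")
conn.execute("""CREATE TABLE IF NOT EXISTS history (
                    timestamp REAL, tag TEXT, source TEXT, value REAL)""")

def store(tag, value, source):
    """Historize one plant measurement or model estimate."""
    conn.execute("INSERT INTO history VALUES (?, ?, ?, ?)",
                 (time.time(), tag, source, value))
    conn.commit()

def latest(tag):
    """The kind of query a dashboard might issue: newest value and its source for a tag."""
    return conn.execute("SELECT value, source FROM history WHERE tag = ? "
                        "ORDER BY timestamp DESC LIMIT 1", (tag,)).fetchone()

store("viable_cell_density", 4.2e6, "plant")   # measured value from the plant
store("viable_cell_density", 4.4e6, "model")   # value predicted by the mechanistic model
print(latest("viable_cell_density"))
```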
{"title":"Digital application of bioreactor monitoring","authors":"Hye Ji Lee, S. Fadda, Lorena F.S. Souza, Jong Min Lee","doi":"10.23919/ICCAS52745.2021.9649767","DOIUrl":"https://doi.org/10.23919/ICCAS52745.2021.9649767","url":null,"abstract":"Real-time monitoring of the batch process is beneficial as many batch processes are employed to produce high value-added products. Although many theoretical studies deal with the monitoring algorithm, the platform where to apply these ideas as well as the algorithm based on a mechanistic model are rarely proposed. In this study, the gPROMS Digital Applications Platform (gDAP) that takes care of all activities for online implementation and execution of models is described. This platform allows connecting a mechanistic cell culture model implemented in the gPROMS Formulat-edProducts modeling environment with plant data and web-based Dashboards, which displays both plant data and values estimated by the model. The mathematical model is used to predict the process performance, and it is started by the user from the dashboards. Model outputs are instantly stored in a database, where all the plant and model data are historized. Users can access the dashboards which displays the values stored in the database through the internet. The proposed platform is applied to a monoclonal antibody production process.","PeriodicalId":411064,"journal":{"name":"2021 21st International Conference on Control, Automation and Systems (ICCAS)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128788295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}