Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125719
J. Ke, Y. J. Wang
Because of their ability to move at high speed and their high stiffness, Parallel Kinematic Machines (PKMs) have been widely used in industry and research in recent years. In this paper, a kinematic error model is established for a trolley-type PUU (prismatic-universal-universal) mechanism. Three translational errors caused by manufacturing and assembly are added to the ideal kinematic model to represent the actual joint positions. The calibration process consists of measuring the positioning error with a laser interferometer, identifying the error parameters by the least-squares method, and revising the kinematic model in the controller. After compensation, the positioning errors of the end effector are clearly reduced, demonstrating the practicality of the model. Moreover, modifying the kinematics in the controller is a time-saving and convenient compensation method.
{"title":"Kinematic Error Model for Trolley-type PUU","authors":"J. Ke, Y. J. Wang","doi":"10.1109/ICARA56516.2023.10125719","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125719","url":null,"abstract":"Because of the ability to move at high speed and the high stiffness, Parallel Kinematic Machines (PKMs) have been widely used in industry and research in recent years. In this paper, the kinematic error model was established for the trolley-type PUU (prismatic-universal-universal joint). Three translation errors were added to the ideal kinematic model present the joint position because of manufacturing and assembly. The calibration process included measuring the positioning error by a laser interferometer, identifying the error parameters using least square method and revised the kinematic model in the controller. After compensation, the positioning errors of end effector were indeed reduced. It could show the practicality of this model. Moreover, modifying the kinematics in the controller is a time-saving and convenient compensation method.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128749969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
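The identification step in the abstract above (least-squares fitting of joint translation errors from interferometer measurements) can be sketched as follows. All numbers, and the sensitivity matrix `J`, are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical sensitivity matrix J: how each of the three joint
# translation errors maps to the measured end-effector positioning
# error at four poses (values are illustrative, not from the paper).
J = np.array([
    [1.0, 0.2, 0.1],
    [0.3, 1.1, 0.2],
    [0.1, 0.4, 0.9],
    [0.8, 0.5, 0.3],
])

true_params = np.array([0.05, -0.02, 0.03])  # mm, illustrative
measured = J @ true_params                   # idealized interferometer readings

# Least-squares identification of the error parameters.
identified, *_ = np.linalg.lstsq(J, measured, rcond=None)

# Compensation: the revised kinematic model subtracts the predicted error.
residual = measured - J @ identified
print(np.allclose(identified, true_params))  # noise-free data recovers the parameters
```

With real measurements the residual would be non-zero, and the identified parameters would be written back into the controller's kinematic model.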
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125856
Samira Chaychi, D. Zampuniéris, Sandro Reis
This paper considers a current challenge in robotic software systems: the lack of separation of concerns. We propose a software model to address this problem and resolve the associated challenges. The core purpose of this paper is to demonstrate the advantages of applying separation-of-concerns principles to create a well-ordered model of independent components, each addressing a separate concern individually. To this end, we developed a software model built around a proactive engine, and we used the Robot Operating System (ROS) to implement the robot simulator.
{"title":"Software Model for Robot Programming and Example of Implementation for Navigation System","authors":"Samira Chaychi, D. Zampuniéris, Sandro Reis","doi":"10.1109/ICARA56516.2023.10125856","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125856","url":null,"abstract":"In this paper, we are going to consider a current challenge in a robotic software system. We consider a problem, which is the lack of separation of concerns in robotic systems, and propose a software model to address the problem and resolve the current challenges. The core purpose of this paper is to demonstrate the advantages of using separation of concerns principles to create a well-ordered model of independent components that address separated concerns individually. Considering the problem, we developed a software model with the help of a proactive engine to address the challenges. We use robotic operating systems to help us to implement the robot simulator.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131643073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
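One way to illustrate the separation-of-concerns principle the abstract advocates is to split a navigation stack into components with narrow interfaces. This is a generic sketch, not the authors' model; the class and method names are invented for illustration (in practice `Sensing` would wrap a ROS subscriber and `Actuation` a publisher):

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float

class Sensing:
    """Concern 1: where is the robot? (would wrap a ROS subscriber)"""
    def read_pose(self) -> Pose:
        return Pose(0.0, 0.0)

class Planning:
    """Concern 2: where should it go next? Naive straight-line step."""
    def next_waypoint(self, pose: Pose, goal: Pose) -> Pose:
        step = 0.5
        return Pose(pose.x + step * (goal.x - pose.x),
                    pose.y + step * (goal.y - pose.y))

class Actuation:
    """Concern 3: how to get there? (would publish velocity commands)"""
    def move_to(self, wp: Pose) -> str:
        return f"moving to ({wp.x:.1f}, {wp.y:.1f})"

# Wiring the independent components together:
sensing, planning, actuation = Sensing(), Planning(), Actuation()
wp = planning.next_waypoint(sensing.read_pose(), Pose(2.0, 2.0))
print(actuation.move_to(wp))
```

Because each concern sits behind its own interface, the planner can be replaced (or driven by a proactive engine) without touching sensing or actuation code.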
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125743
Vahan Babushkin, Muhammad Hassan Jamil, Dianne Sefo, P. Loomer, M. Eid
Currently available dental simulators provide a wide range of visual, auditory, and haptic cues to play back a pre-recorded skill; however, they do not extract skill descriptors and do not attempt to model the skill. To ensure efficient communication of a sensorimotor skill, a model is desirable that captures the skill's main features and provides real-time feedback and guidance based on the user's expertise. To develop such a model, the complex skill of periodontal probing can be considered a composition of primitives that can be extracted from recordings of several professionals performing the probing task. This model will be capable of evaluating the user's proficiency level to ensure adaptation and of providing corresponding guidance and feedback. We developed an SVM model that characterizes the sensorimotor skill of periodontal probing by detecting the specific region of the tooth being probed. We explore the features affecting the accuracy of the model and provide a reduced feature set capable of capturing the regions with relatively high accuracy. Finally, we consider the problem of periodontal pocket detection. The SVM model trained to detect pockets achieved a recall of approximately 0.68. We discuss challenges associated with pocket detection and propose directions for future work.
{"title":"Classifying a Sensorimotor Skill of Periodontal Probing","authors":"Vahan Babushkin, Muhammad Hassan Jamil, Dianne Sefo, P. Loomer, M. Eid","doi":"10.1109/ICARA56516.2023.10125743","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125743","url":null,"abstract":"Currently available dental simulators provide a wide range of visual, auditory, and haptic cues to play back the pre-recorded skill, however, they do not extract skill descriptors and do not attempt to model the skill. To ensure efficient communication of a sensorimotor skill, a model that captures the skill's main features and provides real-time feedback and guidance based on the user's expertise is desirable. To develop this model, a complex periodontal probing skill can be considered as a composition of primitives, that can be extracted from the recordings of several professionals performing the probing task. This model will be capable of evaluating the user's proficiency level to ensure adaptation and providing corresponding guidance and feedback. We developed a SVM model that characterizes the sensorimotor skill of periodontal probing by detecting the specific region of the tooth being probed. We explore the features affecting the accuracy of the model and provide a reduced feature set capable of capturing the regions with relatively high accuracy. Finally, we consider the problem of periodontal pocket detection. The SVM model trained to detect pockets was able to achieve a recall around 0.68. We discuss challenges associated with pocket detection and propose directions for future work.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123339069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
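The recall figure quoted above is simple to reproduce for any detector: recall is the fraction of true positives among all actual positives. A minimal sketch with synthetic labels (not the paper's probing data):

```python
# Illustrative recall computation (synthetic labels, not the paper's data):
# recall = true positives / (true positives + false negatives).
y_true = [1, 1, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = periodontal pocket present
y_pred = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]   # hypothetical SVM predictions

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
recall = tp / (tp + fn)
print(recall)   # 5 of 7 positives recovered, about 0.71
```

A recall near 0.68 means roughly a third of actual pockets are missed, which is why the authors flag pocket detection as the harder sub-problem.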
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125846
Wang Kun, Liang Huawei
Motivated by the high mobility requirements of unmanned ground platforms in diverse environments, and combining the advantages of traditional wheeled, tracked, and legged walking mechanisms, a novel deformable wheel-track composite walking mechanism with multiple movement modes is proposed, which obtains better obstacle-surmounting performance than the well-known reconfigurable wheel-track (RWT-type) mechanism of recent years. The walking mechanism has two stable forms, a round wheel configuration and a prolate polygon configuration, with three modes of wheel rolling, track movement, and gait rotation, which are respectively suitable for travelling on hard roads at high speed, moving on soft ground with low contact pressure, and crossing obstacles. The innovative structure and a platform equipped with the novel mechanism are designed. Subsequently, the platform's obstacle-surmounting performance is analyzed through dynamic modelling and calculation. Finally, to verify the advantages of the mechanism and the rationality of its design, comparative simulation experiments on obstacle-surmounting performance are carried out in a multi-body simulation environment, setting up obstacle models and building experimental platform models that integrate this mechanism and the RWT-type mechanism respectively.
{"title":"Obstacle-surmounting Analysis of a Novel Deformable Wheel-track Composite Walking Platform","authors":"Wang Kun, Liang Huawei","doi":"10.1109/ICARA56516.2023.10125846","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125846","url":null,"abstract":"Based on high mobility requirements of unmanned ground platforms in various environments and combining the advantages of traditional wheeled, crawler and legged walking mechanisms, a novel deformable wheel-track composite walking mechanism with multi-movement modes is proposed,obtaining better obstacle-surmounting performances than the famous reconfigurable wheel-track mechanism(RWT type) in recent years. The walking mechanism has two stable forms of round wheel configuration and prolate polygon configuration with three modes of wheel rolling, track movement and gait rotation, which are respectively suitable for travelling on hard roads at high speeds, moving on soft earth with low contact pressure and crossing obstacles. Design of the innovative structure and a platform equipped with the novel machines is carried out. Subsequently, the platform's obstacle surmounting performance is analyzed through dynamic modelling and calculation. Ultimately to verify the advantages of the machine and its design rationality, comparative simulation experiments of obstacle-surmounting performances is implemented in the multi-body simulation environment,setting up obstacle models and building experimental platform models integrating this machine and the RWT type machine respectively.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122885757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125816
Satakshi Ghosh, Avisek Sharma, Pritam Goswami, B. Sau
Two fundamental problems of distributed computing are Gathering and Arbitrary Pattern Formation (APF). These two tasks differ in nature: in Gathering the robots meet at a single point, whereas in APF the robots form a fixed pattern at distinct positions. Most of the current literature on swarm robot algorithms assumes that all robots in the system perform one single task together. The setting of two teams of oblivious robots deployed in the same system, with the teams performing two different tasks simultaneously and no robot knowing the team of another robot, was introduced to the literature by Bhagat et al. [ICDCN'2020]. In this work, a swarm of silent and oblivious robots is deployed on an infinite grid under an asynchronous scheduler. The robots do not have access to any global coordinates. Some of the robots are given an arbitrary but unique pattern as input. The set of robots with the given pattern is assigned the task of forming that pattern on the grid. The remaining robots are assigned the task of gathering at a vertex of the grid (not fixed in advance, and not at any point where a pattern-forming robot terminates). Each robot knows which team it belongs to, but cannot recognize the team of another robot. Assuming weak multiplicity detection, this paper presents a distributed algorithm that leads the robots with the input pattern to form it, and the other robots to gather at a vertex of the grid at which no pattern-forming robot terminates.
{"title":"Oblivious Robots Performing Different Tasks on Grid Without Knowing Their Team Members","authors":"Satakshi Ghosh, Avisek Sharma, Pritam Goswami, B. Sau","doi":"10.1109/ICARA56516.2023.10125816","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125816","url":null,"abstract":"Two fundamental problems of distributed computing are Gathering and Arbitrary pattern formation (APF). These two tasks are different in nature as in gathering robots meet at a point but in Apfrobots form a fixed pattern in distinct positions. In most of the current literature on swarm robot algorithms, it is assumed that all robots in the system perform one single task together. Two teams of oblivious robots deployed in the same system and different teams of robots performing two different works simultaneously where no robot knows the team of another robot is a new concept in the literature introduced by Bhagat et al. [ICDCN'2020]. In this work, a swarm of silent and oblivious robots are deployed on an infinite grid under an asynchronous scheduler. The robots do not have access to any global coordinates. Some of the robots are given input of an arbitrary but unique pattern. The set of robots with the given pattern is assigned the task of forming the given pattern on the grid. The remaining robots are assigned with the task of gathering to a vertex of the grid (not fixed from earlier and not any point where a robot that is forming a pattern terminates). Each robot knows to which team it belongs, but can not recognize the team of another robot. Considering weak multiplicity detection, a distributed algorithm is presented in this paper which leads the robots with the input pattern into forming it and other robots into gathering on a vertex of the grid on which no other robot forming the pattern, terminates.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"200 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133794768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
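The gathering half of the task above can be illustrated with a toy look-compute-move simulation. This is emphatically not the authors' algorithm (their setting is asynchronous, oblivious, and without global coordinates; here rounds are synchronous and a target vertex is given), only a sketch of what "gathering at a grid vertex" means:

```python
# Toy synchronous gathering sketch on a grid (NOT the paper's algorithm).

def step_toward(p, target):
    """Move one grid edge toward target, x first then y."""
    x, y = p
    tx, ty = target
    if x != tx:
        return (x + (1 if tx > x else -1), y)
    if y != ty:
        return (x, y + (1 if ty > y else -1))
    return p

def gather(robots, target, max_rounds=100):
    for _ in range(max_rounds):
        if all(r == target for r in robots):
            break
        robots = [step_toward(r, target) for r in robots]
    return robots

print(gather([(0, 3), (4, 0), (2, 2)], (2, 1)))
```

The hard part the paper solves is doing this without agreed coordinates or a pre-fixed target, while a second team, indistinguishable from the first, forms a pattern on the same grid.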
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125666
V. Vivekanand, S. Hashemkhani, Shanmuga Venkatachalam, R. Kubendran
Central pattern generators (CPGs) produce rhythmic gait patterns that can be tuned to exhibit various locomotion behaviors such as walking and trotting. Biologically inspired CPGs have previously been implemented in robotics to generate periodic motion patterns. This paper takes the inspiration further and presents a novel methodology for controlling the movement of a four-legged robot using a non-linear bio-mimetic neuron model. In contrast to using regular leaky integrate-and-fire (LIF) neurons to create coupled neural networks, our design uses non-linear neurons constituting a mixed-feedback (positive and negative) control system operating at multiple timescales (fast, slow, and ultraslow, ranging from sub-ms to seconds) to generate a variety of spike patterns that control the robotic limbs and hence the gait. Using spikes as motor control signals allows low memory usage and low-latency operation of the robot. Unlike LIF neurons, the bio-mimetic neurons are also jitter tolerant, making the CPG network more resilient and robust to perturbations in the input stimulus. As a proof of concept, we implemented our model on the Petoi Bittle, a quadruped pet dog robot, and were able to reliably observe different modes of locomotion: walk, trot, and jump. Four bio-mimetic neurons forming a CPG network to control the four limbs were implemented on an Arduino microcontroller and compared to a similar CPG built using four LIF neurons. The differential equations for both neuron models were solved in real time on the Arduino and profiled for memory usage, latency, and jitter tolerance. The CPG using bio-mimetic non-linear neurons used marginally more memory (378 bytes, 18% more than the LIF neurons) and incurred an insignificant latency of 3.54 ms compared to the motor activation delay of 200 ms, while providing up to 5-10x higher jitter tolerance.
{"title":"Robot Locomotion Control Using Central Pattern Generator with Non-linear Bio-mimetic Neurons","authors":"V. Vivekanand, S. Hashemkhani, Shanmuga Venkatachalam, R. Kubendran","doi":"10.1109/ICARA56516.2023.10125666","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125666","url":null,"abstract":"Central pattern generators (CPG) generate rhythmic gait patterns that can be tuned to exhibit various locomotion behaviors like walking, trotting, etc. CPGs inspired by biology have been implemented previously in robotics to generate periodic motion patterns. This paper aims to take the inspiration even further to present a novel methodology to control movement of a four-legged robot using a non-linear bio-mimetic neuron model. In contrast to using regular leaky integrate and fire (LIF) neurons to create coupled neural networks, our design uses non-linear neurons constituting a mixed-feedback (positive and negative) control system operating at multiple timescales (fast, slow and ultraslow ranging from sub-ms to seconds), to generate a variety of spike patterns that control the robotic limbs and hence its gait. The use of spikes as motor control signals allows for low memory usage and low latency operation of the robot. Unlike LIF neurons, the bio-mimetic neurons are also jitter tolerant making the CPG network more resilient and robust to perturbations in the input stimulus. As a proof of concept, we implemented our model on the Petoi Bittle bot, a quadruped pet dog robot and were able to reliably observe different modes of locomotion-walk, trot and jump. Four bio-mimetic neurons forming a CPG network to control the four limbs were implemented on Arduino microcontroller and compared to a similar CPG built using four LIF neurons. The differential equations for both neurons were solved real-time on Arduino and profiled for memory usage, latency and jitter tolerance. The CPG using bio-mimetic non-linear neurons used marginally higher memory (378 bytes, 18% higher than LIF neurons), incurred insignificant latency of 3.54ms compared to motor activation delay of 200ms, while providing upto 5-10x higher jitter tolerance.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115845021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
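The LIF baseline mentioned above is the standard point of comparison: a membrane potential integrates its input with a leak and emits a spike on crossing a threshold. A minimal forward-Euler sketch (parameters are illustrative, not the paper's Arduino implementation):

```python
# Minimal leaky integrate-and-fire (LIF) neuron with forward Euler:
#   dv/dt = (-v + I) / tau ; spike and reset when v >= v_th.
def lif_spike_times(i_in=1.5, v_th=1.0, v_reset=0.0, tau=10.0,
                    dt=0.1, t_end=100.0):
    v, t, spikes = v_reset, 0.0, []
    while t < t_end:
        v += dt * (-v + i_in) / tau   # Euler step of the membrane equation
        if v >= v_th:                 # threshold crossing -> spike
            spikes.append(round(t, 1))
            v = v_reset               # hard reset
        t += dt
    return spikes

spikes = lif_spike_times()
print(len(spikes), spikes[0])   # a regular spike train: constant input -> constant period
```

Because a constant input yields a perfectly periodic train, LIF spike timing shifts one-for-one with input jitter, which is the fragility the paper's mixed-feedback non-linear neurons are designed to tolerate.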
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125977
R. Beneder, Patrick Schmitt, Clemens Környefalvy
Universities with a focus on the technical sciences typically provide courses in which students design control systems, implement them on embedded hardware, and verify the functionality of their implementations. The students work in groups and build practical demonstrators based on given problem statements. Designing, implementing, and testing robotic applications requires expertise in both robotics and embedded systems. This combination of expertise is a highly demanded skill set for companies focused on aviation, automotive, and even emerging applications in agricultural technology. The technical complexity of these applications is increasing almost exponentially, which calls for abstract model-based approaches to ease the design flow of such implementations. This paper introduces a model-based approach for students in robotics and/or embedded systems degree programs. Moreover, it describes the state-of-the-art workflow for implementing problem statements in the field of robotics and embedded systems (tools, approach, and testing), gives an overview of the model-based approach for students within these fields of application, and shows how the results are integrated into courses based on a control system model.
{"title":"A Model-Based Approach for Robotics Education with Emphasis on Embedded Systems","authors":"R. Beneder, Patrick Schmitt, Clemens Környefalvy","doi":"10.1109/ICARA56516.2023.10125977","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125977","url":null,"abstract":"Typically, universities with a focus on technical sciences provide courses where students have to design control systems, to implement these control systems on embedded hardware and to verify the functionality of their implementations. Hence, the students work in groups and implement practical demonstrators based on given problem statements. In order to design, implement, and test robotic applications, it is mandatory to utilize expertise within the field of robotics and the field of embedded systems. The combination of expertise within both fields (robotics and embedded systems) is a highly demanded skill set, which is required to work for companies with focus on aviation, automotive, and even emerging applications for agricultural technology. The technical complexity of these applications is increasing almost exponentially, which requires abstract model-based approaches to ease the design flow of such implementations. This paper introduces a model-based approach for students within robotics and/or embedded systems degree programs. Moreover, this paper describes the state-of-the-art workflow to implement problem statements within the field of robotics and embedded systems (tools, approach and test), gives and overview of the model-based approach for students within these field of applications, and shows the integration of the results into courses based on a control system model.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129609608","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125817
Shanmuga Venkatachalam, V. Vivekanand, R. Kubendran
Computer vision traditionally uses cameras that capture visual information as frames at periodic intervals. Dynamic Vision Sensors (DVS), by contrast, capture temporal contrast (TC) in each pixel asynchronously and stream the events serially. This paper proposes a hybrid approach that generates input visual data as a ‘frame of events’ for a stereo vision pipeline. We demonstrate that hybrid vision sensors producing frames made up of TC events achieve superior results in terms of low latency, low compute, and low memory footprint compared to both traditional cameras and event-based DVS. The frame-of-events approach eliminates the latency and memory resources involved in accumulating asynchronous events into synchronous frames, while generating acceptable disparity maps for depth estimation. Benchmarking results show that the frame-of-events pipeline outperforms the others, with the lowest average latency per frame of 3.8 ms and the lowest average memory usage per frame of 112.4 Kb, a reduction of 7.32% and 9.75% respectively compared to the traditional frame-based pipeline. Hence, the proposed method is suitable for mission-critical robotics applications that involve path planning and localization mapping in resource-constrained environments, such as drone navigation and autonomous vehicles.
{"title":"Frame of Events: A Low-latency Resource-efficient Approach for Stereo Depth Maps","authors":"Shanmuga Venkatachalam, V. Vivekanand, R. Kubendran","doi":"10.1109/ICARA56516.2023.10125817","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125817","url":null,"abstract":"Computer vision traditionally uses cameras that capture visual information as frames at periodic intervals. On the other hand, Dynamic Vision Sensors (DVS) capture temporal contrast (TC) in each pixel asynchronously and stream them serially. This paper proposes a hybrid approach to generate input visual data as ‘frame of events’ for a stereo vision pipeline. We demonstrate that using hybrid vision sensors that produce frames made up of TC events can achieve superior results in terms of low latency, less compute and low memory footprint as compared to the traditional cameras and the event-based DVS. The frame-of-events approach eliminates the latency and memory resources involved in the accumulation of asynchronous events into synchronous frames, while generating acceptable disparity maps for depth estimation. Benchmarking results show that the frame-of-events pipeline outperforms others with the least average latency per frame of 3.8 ms and least average memory usage per frame of 112.4 Kb, which amounts to 7.32% and 9.75% reduction when compared to traditional frame-based pipeline. Hence, the proposed method is suitable for missioncritical robotics applications that involve path planning and localization mapping in a resource-constrained environment, such as drone navigation and autonomous vehicles.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"41 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125079776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
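The accumulation step that the frame-of-events sensor removes from software can be sketched as follows: asynchronous `(x, y, polarity)` events are binned into a synchronous frame before stereo matching. The event list and frame size are invented for illustration:

```python
import numpy as np

# Binning asynchronous temporal-contrast events into a synchronous frame,
# i.e. the software step the paper's hybrid sensor performs in hardware.
H, W = 4, 6
events = [(0, 1, +1), (2, 3, -1), (0, 1, +1), (3, 5, +1)]  # illustrative (row, col, polarity)

frame = np.zeros((H, W), dtype=np.int32)
for r, c, pol in events:
    frame[r, c] += pol      # accumulate signed polarity per pixel

print(frame[0, 1], frame[2, 3])  # -> 2 -1
```

In a conventional event pipeline this loop (and the buffer holding `events`) sits on the latency- and memory-critical path for every frame, which is the overhead the benchmarks above quantify.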
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10126038
Xu Wang, Huachao Yu, Caixia Lu, Xueyan Liu, Xing Cui, Xijun Zhao, Bo Su
Ground segmentation is an essential preprocessing task for autonomous driving. Most existing 3D LiDAR-based ground segmentation methods segment the ground by fitting a ground model. However, these methods may fail in some challenging terrains, such as sloped roads. In this paper, a novel framework is proposed to improve the performance of these methods. First, vertical points in the point cloud are filtered out by a gradient-based method. Second, a polar grid map is built to extract the seed points for model fitting. The fitting-based method is then used to model the ground, and a coarse segmentation result is obtained from the fitted model. Next, the coarse segmentation result is used to update the ground height value of each cell in the grid map. Finally, the segmentation result is refined using the grid map. Experiments on the SemanticKITTI dataset show that the fitting-based method achieves more accurate segmentation results when integrated with our proposed framework.
{"title":"A Novel Framework for Ground Segmentation Using 3D Point Cloud","authors":"Xu Wang, Huachao Yu, Caixia Lu, Xueyan Liu, Xing Cui, Xijun Zhao, Bo Su","doi":"10.1109/ICARA56516.2023.10126038","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10126038","url":null,"abstract":"Ground segmentation is an essential preprocessing task for autonomous driving. Most existing 3D LiDAR-based ground segmentation methods segment the ground by fitting a ground model. However, these methods may fail to achieve ground segmentation in some challenging terrains, such as slope roads. In this paper, a novel framework is proposed to improve the performance of these methods. First, vertical points in the point cloud are filtered out by a gradient-based method. Second, a polar grid map is built to extract the seed points for model fitting. Moreover, the fitting-based method is used to model the ground. And a coarse segmentation result can be obtained by the fitted model. Next, the coarse segmentation result is used to update the ground height value for each grid in the grid map. Finally, the segmentation result is refined by the grid map. Experiments on the SemanticKITTI dataset have shown that the fitting-based method can achieve more accurate segmentation results by integrating with our proposed framework.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125653971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
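The model-fitting step in the pipeline above is typically a least-squares plane fit over the seed points. A sketch with synthetic points on a gentle slope (the plane parameters and threshold are illustrative, not from the paper):

```python
import numpy as np

# Least-squares ground plane fit over seed points: solve z = a*x + b*y + c.
rng = np.random.default_rng(1)
xy = rng.uniform(-5, 5, size=(50, 2))
a_true, b_true, c_true = 0.1, -0.05, 0.3           # a gentle slope (illustrative)
z = a_true * xy[:, 0] + b_true * xy[:, 1] + c_true

A = np.c_[xy, np.ones(len(xy))]                    # [x y 1] design matrix
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

# Coarse segmentation: points within a distance threshold of the plane
# would be labeled ground; here the noise-free fit is exact.
dist = np.abs(A @ [a, b, c] - z)
print(np.allclose([a, b, c], [a_true, b_true, c_true]))  # -> True
```

A single global plane is exactly what breaks on sloped roads, which motivates the paper's per-cell grid refinement after the coarse fit.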
Pub Date : 2023-02-10DOI: 10.1109/ICARA56516.2023.10125799
Juan Du, Songxuan Liu
The substantially similar texture features of sticks and shiitake mushrooms in the mushroom-growing environment make precisely labeled samples more expensive and semantic segmentation of shiitake mushrooms more challenging. In this paper, a search focus network (SFNet) for semantic segmentation of shiitake mushrooms is proposed, which utilizes a group-reversal attention module (GRAM) to strengthen the understanding of semantic information and is trained via transfer learning and data augmentation strategies. Experimental results on a self-built shiitake mushroom stick dataset reveal that the structural measure $S_{\alpha}$, weighted F-measure $F_{\beta}^{\omega}$, adaptive E-measure $E_{\phi}^{ad}$, and mean absolute error $M$ of SFNet are 0.9161, 0.9113, 0.9808, and 0.0049, respectively, showing practical and stable performance. With only a few training samples, the proposed approach can accomplish the semantic segmentation task of shiitake mushrooms.
{"title":"Shiitake Mushroom Semantic Segmentation Method Based on Search Focus Network","authors":"Juan Du, Songxuan Liu","doi":"10.1109/ICARA56516.2023.10125799","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125799","url":null,"abstract":"The substantially similar texture features of sticks and shiitake mushrooms in the mushroom-growing environment make precisely labeled samples more expensive and semantic segmentation of shiitake mushrooms more challenging. In this paper, a search focus network(SFNet) for semantic segmentation of shiitake mushrooms was proposed, which utilized the group-reversal attention module(GRAM) to strengthen semantic information understanding and trained via transfer learning and data augmentation strategies. The experimental results on the self-built shiitake mushroom sticks dataset revealed that structural measure $S_{alpha}$, weighted F-measure $F_{beta}^{omega}$, adaptive E-measure $E_{phi}^{ad}$, and absolute mean error $M$ of SFNet were 0.9161, 0.9113, 0.9808, and 0.0049, respectively, with practical and steady performance. With only a few training samples, the proposed approach can accomplish the semantic segmentation task of shiitake mushrooms.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122220122","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}