Reliable extrinsic calibration is a crucial first step for multi-sensor data fusion, which in turn is key to an autonomous vehicle perceiving its environment accurately and effectively. In this paper, we propose an effective extrinsic calibration pipeline that establishes the transformation between camera and LiDAR and corrects decalibration online on an autonomous driving platform. We obtain the rotation extrinsic parameters from parallel-line features in road scenes, and infer the translation extrinsic parameters with an online search based on selective edge alignment between the point cloud and the image. To evaluate the calibration system, it is first validated on the KITTI benchmark and compared with a baseline algorithm, and then tested on our own data. The results show that our method achieves better rotation accuracy and demonstrate the necessity of online error correction.
{"title":"Line Feature Based Extrinsic Calibration of LiDAR and Camera","authors":"Jingjing Jiang, Peixin Xue, Shi-tao Chen, Zi-yi Liu, Xuetao Zhang, Nanning Zheng","doi":"10.1109/ICVES.2018.8519493","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519493","url":null,"abstract":"Reliable extrinsic calibration is a crucial first step for multi-sensor data fusion, which is the key part of the autonomous vehicle to perceive the environment carefully and effectively. In this paper, we propose an effective extrinsic calibration pipeline to establish the transformation between camera and LiDAR and update the decalibration online on an autonomous driving platform. We obtain rotation extrinsic parameters using parallel lines features in road scene, and infer translation extrinsic parameters by an online search approach based on selective edge alignment of point cloud and image. In order to evaluate our calibration system, it is first validated on KITTI benchmark and compared with the baseline algorithm. After that, the proposed method is tested on our own data. The results show that our method has a better rotation accuracy and demonstrate the necessity of error correction online.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129843263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519514
Catherine M. Elias, Shaimaa K. El-Baklish, Nada N. El-Ghandoor, Omar M. Shehata, E. I. Morgan
This paper extends the traditional single-vehicle trajectory tracking problem to develop a control approach for a group of N vehicles that converge to a desired formation while moving along a predefined trajectory. A novel hybrid control algorithm is introduced to handle the two aforementioned problems separately. The proposed controller integrates three different techniques: graph theory, Lyapunov stability and the leader-follower approach. The controller structure consists of two main subsystems. In the first subsystem, a virtual leader robot is responsible for the trajectory tracking problem and is actuated via a Lyapunov-based controller. Simultaneously, a graph-based controller guarantees the convergence of the follower mobile robots to the desired formation using local positioning information, thus ensuring the stability of the followers' formation with respect to the moving leader. This new hybrid approach addresses the problem of impractical velocities by providing control inputs within acceptable ranges that can be implemented in practice, and it eliminates oscillations in the motor inputs. Several scenarios are implemented to verify the proposed hybrid control algorithm. The simulations show significant and promising results that demonstrate the effectiveness of this hybrid approach.
{"title":"Practical Hybrid Graph-Based Formation Control Architecture for Leader-Follower Trajectory Tracking Problem","authors":"Catherine M. Elias, Shaimaa K. El-Baklish, Nada N. El-Ghandoor, Omar M. Shehata, E. I. Morgan","doi":"10.1109/ICVES.2018.8519514","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519514","url":null,"abstract":"This paper extends the traditional single vehicle trajectory tracking problem to develop a control approach for a group of N-vehicles which are capable of converging to a desired formation while moving on a predefined trajectory. A novel hybrid control algorithm is introduced to handle the two aforementioned problems separately. The proposed controller integrates three different techniques; graph theory, Lyapunov stability and leader-follower approach. The controller structure consists of two main subsystems. In the first subsystem, a leader virtual robot is responsible for the trajectory tracking problem, actuated via a Lyapunov-based controller. Simultaneously, a graph-based controller is used to guarantee the convergence of the follower mobile robots to the desired formation using local positioning information; thus, ensuring the stability of the formation of the follower robots with respect to the moving leader. This new hybrid approach handles the problem of impractical velocities providing control inputs within acceptable ranges that can be practically implemented. Furthermore, it eliminates the oscillations in the motor inputs. Several scenarios are implemented to verify the proposed hybrid control algorithm. The simulations show significant and promising results which prove the effectiveness of this hybrid approach.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"111 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131520930","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519485
Laura Garcia Cuenca, N. Aliane, Enrique Puertas, Javier Fernández Andres
This article presents the development of a geographical information system (GIS) app to visualize hotspots in the road network of the metropolitan area of Madrid and to warn drivers when they approach those spots. The app is fed by open data provided by the City Hall of Madrid and the Spanish Traffic Agency (DGT), as well as by data recorded by the vehicle's on-board system. The article first presents the general structure of the system, with comments on the sources and the nature of the data used; it then describes the data-mining process carried out to generate the structured data used by the app; finally, it describes some characteristics of the developed app.
{"title":"Traffic Hotspots Visualization and Warning System","authors":"Laura Garcia Cuenca, N. Aliane, Enrique Puertas, Javier Fernández Andres","doi":"10.1109/ICVES.2018.8519485","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519485","url":null,"abstract":"this article presents the development of a geographical information system (GIS). App to visualize hotspots in the road network inside the metropolitan area of Madrid. On the other hand, this App aims to warn drivers when approaching those spots. The App is fed by the open data provided by the City Hall of Madrid and the Spanish Traffic Agency (DGT), as well as by the data recorded in the on-board system of the vehicle. Firstly, this article presents the general structure of the system with comments on the sources and the nature of the data used. Secondly, it describes the process of data mining carried out for the generation of the structured data used by the App. Thirdly, some characteristics of the developed App are described.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132429815","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519590
Mayar Tarek, Ahmed Moataz, Mennat-allah Khaled, A. Hammam, Omar M. Shehata, E. I. Morgan
Using a proposed Three-Layer Architecture, several filtering and fusion techniques are evaluated with various sensors. Different scenarios were tested to validate the architecture on three platforms: a Mobile Robot, a Four-Wheel Vehicle and a Quadcopter. The techniques that yielded the best results were, for the Mobile Robot, fusing an infrared sensor with an ultrasonic sensor through a Particle Filter combined with Fuzzy Logic to optimize the fusion. For the Quadcopter, IMU data were fused using an Extended Kalman Filter with Fuzzy Logic to compensate for the IMU's drift. For the Four-Wheel Vehicle, an IMU and an encoder were fused with an Extended Kalman Filter to estimate the vehicle's odometry. Communication between the platforms was handled by a three-layer communication system that uses the ROS multimaster package, I2C and WiFi to exchange the signals being sent and received.
{"title":"Multisensor Filtration and Fusion on a Three-Layer Architecture","authors":"Mayar Tarek, Ahmed Moataz, Mennat-allah Khaled, A. Hammam, Omar M. Shehata, E. I. Morgan","doi":"10.1109/ICVES.2018.8519590","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519590","url":null,"abstract":"Using a proposed Three-Layer Architecture, several filtration and fusion techniques are experimented using various sensors. Different scenarios were tested to validate the architecture on three different platforms; a Mobile Robot, a Four-Wheel Vehicle and a Quadcopter. The techniques investigated which yielded the best results were fusing an Infrared sensor along with an Ultrasonic sensor on a Mobile Robot through a Particle Filter and Fuzzy Logic to optimize the fusion. For the Quadcopter, an IMU was fused using Extended Kalman Filter with Fuzzy Logic to compensate for the IMU’s drift. As for the Four-Wheel Vehicle, an IMU with an Encoder was fused to estimate the odometry of the vehicle using an Extended Kalman Filter. Communication between the platforms and the signals was done on a three-layer communication system that uses multimaster package of ROS, I2C and WiFi to communicate between the platforms and the signals being sent and received.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133717822","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519491
S. Ifqir, N. A. Oufroukh, D. Ichalal, S. Mammar
Advanced driver assistance systems that assist in safe vehicle guidance require a thorough understanding of the traversed environment. Because these systems rely on the vision sensor's ability to detect road boundaries and lane markings, they are extremely sensitive to road conditions. It is therefore important to take a critical look at their key aspects and to evaluate their performance and accuracy. In this work, an evaluation framework based on set-membership theory is presented for measuring the performance of a vehicle's onboard stereo-vision system. The proposed approach accounts for the changes that might occur in the vehicle dynamics during a maneuver. The experimental design and the evaluation of the proposed method are demonstrated using field data acquired with an instrumented vehicle.
{"title":"Performance Measure of Vehicle Onboard Vision System: An Interval Observer-based Approach","authors":"S. Ifqir, N. A. Oufroukh, D. Ichalal, S. Mammar","doi":"10.1109/ICVES.2018.8519491","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519491","url":null,"abstract":"Advanced Driver assistance systems that assist in safe vehicle guidance require a thorough understanding of the crossed environment. Because these systems rely on the vision sensor’s ability to detect road boundaries and lane marking, these systems are extremely sensitive to road conditions. Itis therefore important to take a critical look at their key aspects and evaluate their performance and accuracy. In this work, an evaluation framework to measure the performance of a vehicle onboard stereo-vision system using set-membership theory is presented. The proposed approach considers the changes that might occur in vehicle dynamics during the maneuver. The experiment design and evaluation of the proposed method is shown using field data acquired using an instrumented vehicle.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130124054","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519527
L. Vismari, J. B. C. Junior, J. K. Naufal, J. R. Almeida, C. Molina, R. Inam, E. Fersman, M. Marquezini
Autonomous Vehicles (AV) are expected to bring significant advantages in safety and efficiency to Roadway Transport Systems (RTS). However, AVs will be incorporated into RTS only if their benefits outweigh the safety risks; it is therefore mandatory to ensure that they will be safe during operation. In our previous work, published in the IEEE Transactions on Intelligent Transportation Systems, we proposed the Autonomous Automotive Cyber Physical Systems (A2CPS) Engine, a conceptual framework for the AV Supervision and Control Systems (SCS). Supported by a consolidated international normative risk management process and by a proven-in-use SCS architecture adopted in other transport modes, we argued that this run-time, vehicle-centric risk management concept could minimize the safety risks related to AV operation. In this paper, the engine concept is implemented using Fuzzy Logic and embedded in a computer-based AV model. Its capability to manage the safety risks related to AV operation, including the impact of AV behavior on RTS safety, is assessed using a simulation-based safety analysis approach. As a result, we observe that the AV is able to identify potential collision conditions, estimating and assessing its level of risk at run time and mitigating the associated risks by reducing speed or stopping the AV. In conclusion, the engine concept has the potential to manage the safety risks related to the operation of an AV in RTS scenarios in an innovative, cost-effective and safe way.
{"title":"A Fuzzy logic, risk-based autonomous vehicle control approach and its impacts on road transportation safety","authors":"L. Vismari, J. B. C. Junior, J. K. Naufal, J. R. Almeida, C. Molina, R. Inam, E. Fersman, M. Marquezini","doi":"10.1109/ICVES.2018.8519527","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519527","url":null,"abstract":"Autonomous Vehicles (AV) are expected to bring significant advantages in safety and efficiency to the Roadway Transport Systems (RTS). However, AVs will be incorporated into RTS only if their benefit outweigh safety risks and, thus, it is mandatory assuring that they will be safe during their operation. In our previous work published on IEEE Transaction on ITS, we proposed the Autonomous Automotive Cyber Physical Systems (A2CPS) Engine, a conceptual framework for the AV Supervision and Control Systems (SCS). Supported by a consolidated, international normative risk management process and by a proven-in-use SCS architecture adopted in other transport modes, we advocated this run-time, vehicle-centric risk management concept could minimize the safety risks related to the AV operation. In this paper, the engine concept is implemented using Fuzzy Logic and embedded in an computerbased AV model. Its capability in managing safety risks related to the AV operation, including the impacts of AV behavior over the RTS safety, is assessed using a simulation-based safety analysis approach. As a result, we observe this AV is able to identify potential collision conditions, estimating and assessing its level of risk during runtime execution and mitigating the associated risks by speed reduction or stopping the AV. Concluding, the engine concept has potential to manage the safety risks related to the operation of an AV in RTS scenarios in an innovative, cost-effective and safe way.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"161 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133516251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519519
Antti Matthias vom Stein, A. Dorofeev, J. Fottner
This paper presents a study of the impact of different visual warnings on brake reaction times and perceived workload in industrial trucks. Four warning signals for collision avoidance are investigated: a warning light in the driver's forward field of vision, a warning light based on an existing collision warning system, and two peripheral warnings based on LED bars. Numerous studies in automotive research have shown that peripheral warnings generated by LED bars offer advantages over conventional warnings in head-down and head-up displays, reducing reaction times and driver distraction. We transfer these warning concepts from the passenger-car interior to the driver cabin of industrial trucks, in this case a forklift truck. Since many forklift trucks are equipped with data terminals for order instructions, we apply a visual-manual secondary task (SuRT) to test potential impacts of, and interactions with, terminal operation. The results show that brake reaction times in our setup are lower when warnings are displayed by the LED bars and also when the driver is not distracted by the secondary task. We found no interaction between the different warning displays and the secondary task.
{"title":"Visual collision warning displays in industrial trucks","authors":"Antti Matthias vom Stein, A. Dorofeev, J. Fottner","doi":"10.1109/ICVES.2018.8519519","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519519","url":null,"abstract":"This paper presents a study of the impact of different visual warnings on brake reaction times and perceived workload in industrial trucks. Four different warning signals for collision avoidance are investigated. A warning light in the drivers front forward field of vision, a warning light based on an existing collision warning system and two peripheral warnings based on LED bars are compared. Numerous studies in automotive research have shown that LED bar generated peripheral warnings show advantages in comparison to conventional warnings in head-down displays and head-up displays. Reaction times and driver distraction can be reduced. We transfer these warning concepts from the automotive car interior to the driver cabin of industrial trucks, in this case of an forklift truck. Since many forklift trucks are equipped with data terminals for order instructions, we apply a visual-manual secondary task (SuRT) to test potential impacts and interactions of the terminal operation. Results show that brake reaction times in our setup are lower when warnings are displayed by the LED bars and also when the driver is not distracted by the secondary task. We found no interaction between the different warning displays and the secondary task.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121291179","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519488
J. Pimentel, Jennifer Bastiaan
Safety has been ranked as the number one concern for the acceptance and adoption of autonomous vehicles, and understandably so, since safety involves some of the most complex requirements in the development of self-driving vehicles. Traditionally, automotive safety follows functional safety concepts as detailed in the ISO 26262 standard. However, autonomous driving safety goes beyond ISO 26262 and includes other safety concepts such as safety of the intended functionality and multi-agent safety. In this paper, we characterize the concept of safety for self-driving vehicles and discuss fault-tolerant techniques in the design of safe autonomous vehicles. We also propose a fault-containment protocol for the perception system of an autonomous vehicle that involves vehicle detection.
{"title":"Characterizing the Safety of Self-Driving Vehicles: A Fault Containment Protocol for Functionality Involving Vehicle Detection","authors":"J. Pimentel, Jennifer Bastiaan","doi":"10.1109/ICVES.2018.8519488","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519488","url":null,"abstract":"Safety has been ranked as the number one concern for the acceptance and adoption of autonomous vehicles, and understandably so since safety has some of the most complex requirements in the development of self-driving vehicles. Traditionally, automotive safety follows functional safety concepts as detailed in the standard ISO 26262. However, autonomous driving safety goes beyond ISO 26262, and includes other safety concepts such as safety of the intended functionality and multiagent safety. In this paper, we characterize the concept of safety for self-driving vehicles, and we discuss fault-tolerant techniques in the design of safe autonomous vehicles. We also propose a faultcontainment protocol for the perception system of an autonomous vehicle that involves vehicle detection.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116381787","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519509
Sungwook Lee, Y. Yoon, Jahng-Hyeon Park, W. Hong
This paper presents the optimal coordination of Rear Wheel Steering and Electronic Stability Control. The algorithm consists of three parts. First, state estimators for tire force, cornering stiffness and side-slip angle are designed. Second, the desired vehicle motion is generated considering the driving situation and the driver's intention. Third, an optimal coordinator distributes the control task to the individual chassis controllers using an analytical method based on the Karush-Kuhn-Tucker conditions. The purpose of this study is twofold. The first goal is the optimal control of Rear Wheel Steering and Electronic Stability Control, for which new cost functions are defined. The second is fault-tolerant control during an actuator fault or brake-circuit failure. The performance of the proposed control system is validated through closed-loop simulations with MATLAB/Simulink and CarSim. The simulation results show that the proposed system not only distributes the control input to each control system but also ensures vehicle safety under actuator failure.
{"title":"Control Allocation of Rear Wheel Steering and Electronic Stability Control with Actuator Failure","authors":"Sungwook Lee, Y. Yoon, Jahng-Hyeon Park, W. Hong","doi":"10.1109/ICVES.2018.8519509","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519509","url":null,"abstract":"This paper has presented optimal coordination of Rear Wheel Steering and Electronic Stability Control. The algorithm consists of three parts. First, state estimators for tire force, cornering stiffness and side slip angle have been designed. Second, desired vehicle motion has been generated considering driving situation and driver’s intention. Third, optimal coordinator has distributed the control task to the individual chassis control using analytical method with Karush-Kuhn-Tucker. The purpose of this study can be divided into two categories. One is optimal control of Rear Wheel Steering and Electronic Stability Control. Therefore, new cost functions have been defmed. The other is fault tolerant control during actuator fault or brake circuit failure. The performance of the proposed control system has been validated through closed-loop simulations with MATLABlSimulink and Carsim. The simulation results show that proposed system does not only distribute the control input to each control system but also ensure vehicle safety under actuator failure.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"50 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132460017","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2018-09-01 | DOI: 10.1109/ICVES.2018.8519497
Gledson Melotti, A. Asvadi, C. Premebida
The use of multiple sensors in perception systems is becoming the consensus in the automotive and robotics industries. The camera is the most popular technology; however, radar and LIDAR are increasingly being adopted in protection and safety systems for object/obstacle detection. In this paper, we explore the LIDAR sensor as an inter-modality technology that provides two types of data, range (distance) and reflectance (intensity return), and study the influence of high-resolution distance/depth maps (DM) and reflectance maps (RM) on pedestrian classification using a deep Convolutional Neural Network (CNN). Pedestrian protection is critical for advanced driver assistance systems (ADAS) and autonomous driving, and it has recently regained particular attention. In this work, CNN-LIDAR based pedestrian classification is studied in three distinct cases: (i) a single modality as input to the CNN, (ii) combining distance and reflectance measurements at the CNN input level (early fusion), and (iii) combining output scores from two single-modality CNNs (late fusion). Raw distance and intensity (reflectance) data from the LIDAR are transformed into high-resolution (dense) maps, which allows a direct implementation on CNNs as either single-channel or multi-channel inputs (the early-fusion approach). For late fusion, the outputs of the individual CNNs are combined by means of non-learning rules such as minimum, maximum, average and product. Pedestrian classification is evaluated on a binary classification dataset created from the KITTI Vision Benchmark Suite, and results are reported for the three cases.
{"title":"CNN-LIDAR pedestrian classification: combining range and reflectance data","authors":"Gledson Melotti, A. Asvadi, C. Premebida","doi":"10.1109/ICVES.2018.8519497","DOIUrl":"https://doi.org/10.1109/ICVES.2018.8519497","url":null,"abstract":"The use of multiple sensors in perception systems is becoming a consensus in the automotive and robotics industries. Camera is the most popular technology, however, radar and LIDAR are increasingly being adopted more often in protection and safety systems for object/obstacle detection. In this paper, we particularly explore the LIDAR sensor as an inter-modality technology which provides two types of data, range (distance) and reflectance (intensity return), and study the influence of high-resolution distance$/$depth (DM) and reflectance maps (RM) on pedestrian classification using a deep Convolutional Neural Network (CNN). Pedestrian protection is critical for advanced driver assistance system (ADAS) and autonomous driving, and it has regained particular attention recently for known reasons. In this work, CNN-LIDAR based pedestrian classification is studied in three distinct cases: (i) having a single modality as input in the CNN, (ii) by combining distance and reflectance measurements at the CNN input-level (early fusion), and (iii) combining outputs scores from two single-modal CNNs (late fusion). Distance and intensity (reflectance) raw data from LIDAR are transformed to high-resolution (dense) maps which allow a direct implementation on CNNs both as single or multi-channel inputs (early fusion approach). In terms of late-fusion, the outputs from individual CNNs are combined by means of non-learning rules, such as: minimum, maximum, average, product. Pedestrian classification is evaluated on a 'binary classification' dataset created from the KITTI Vision Benchmark Suite, and results are shown for the three cases.","PeriodicalId":203807,"journal":{"name":"2018 IEEE International Conference on Vehicular Electronics and Safety (ICVES)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121541829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}