Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917366
M. Roth, Dominik Jargot, D. Gavrila
We present a method for 3D person detection from camera images and lidar point clouds in automotive scenes. The method comprises a deep neural network which estimates the 3D location and extent of persons present in the scene. 3D anchor proposals are refined in two stages: a region proposal network and a subsequent detection network. For both input modalities, high-level feature representations are learned from raw sensor data instead of being manually designed. To that end, we use Voxel Feature Encoders [1] to obtain point cloud features instead of widely used projection-based point cloud representations, thus allowing the network to learn to predict the location and extent of persons in an end-to-end manner. Experiments on the validation set of the KITTI 3D object detection benchmark [2] show that the proposed method outperforms state-of-the-art methods with an average precision (AP) of 47.06% on moderate difficulty.
{"title":"Deep End-to-end 3D Person Detection from Camera and Lidar","authors":"M. Roth, Dominik Jargot, D. Gavrila","doi":"10.1109/ITSC.2019.8917366","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917366","url":null,"abstract":"We present a method for 3D person detection from camera images and lidar point clouds in automotive scenes. The method comprises a deep neural network which estimates the 3D location and extent of persons present in the scene. 3D anchor proposals are refined in two stages: a region proposal network and a subsequent detection network.For both input modalities high-level feature representations are learned from raw sensor data instead of being manually designed. To that end, we use Voxel Feature Encoders [1] to obtain point cloud features instead of widely used projection-based point cloud representations, thus allowing the network to learn to predict the location and extent of persons in an end-to-end manner.Experiments on the validation set of the KITTI 3D object detection benchmark [2] show that the proposed method outperforms state-of-the-art methods with an average precision (AP) of 47.06% on moderate difficulty.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"22 1","pages":"521-527"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86169885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917303
Ali Alharake, Guillaume Bresson, Pierre Merriaux, Vincent Vauchey, X. Savatier
In this paper we propose the use of existing resources, cadastral plans in particular, to build maps for vehicle localization without requiring the prior passage of a mapping vehicle. This avoids the inherent error accumulation of Simultaneous Localization and Mapping (SLAM) algorithms. Based on cadastral plans extracted from OpenStreetMap (OSM), we build prior maps using a Likelihood Field (LF) which takes into account the inaccuracy found in such plans. The built maps are then used to localize a vehicle equipped with an odometer, used to predict its next pose, and a LIDAR, used to correct the predicted pose with a matching algorithm. We also compare using raw scans versus scans processed to include only vertical planes in the matching algorithm. Experiments in real conditions in two urban environments illustrate the benefits of using cadastral plans to constrain the drift of localization algorithms. Moreover, two metrics were used to analyze our results. The conducted tests lead us to choose a set of parameters that suits the map representation proposed herein.
{"title":"Urban Localization inside Cadastral Maps using a Likelihood Field Representation","authors":"Ali Alharake, Guillaume Bresson, Pierre Merriaux, Vincent Vauchey, X. Savatier","doi":"10.1109/ITSC.2019.8917303","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917303","url":null,"abstract":"In this paper we propose the use of existing resources, cadastral plans in particular, to build maps for vehicle localization without requiring the prior passage of a mapping vehicle. This solves the inherent error accumulation in Simultaneous Localization and Mapping algorithms (SLAM). Based on cadastral plans extracted from OpenStreetMaps (OSM), we build prior maps using a Likelihood Field (LF) which takes into account the inaccuracy found in such plans. The built maps are then used to localize a vehicle equipped with an odometer used to predict its next pose, and a LIDAR used to correct the predicted pose using a matching algorithm. We have also compared the difference between using raw scans versus scans processed to include only vertical planes in the matching algorithm. Experiments in real conditions in two urban environments illustrate the benefits of using cadastral plans to constrain the drift of localization algorithms. Moreover, two metrics were used to analyze our results. The conducted tests lead us to choose a set of parameters that suits the map representation proposed herein.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"30 1","pages":"1329-1335"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83996447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917107
Thomas Westfechtel, K. Ohno, R. B. Neto, Shotaro Kojima, S. Tadokoro
Current self-driving vehicles rely on detailed maps of the environment that contain exhaustive semantic information. This work presents a strategy to utilize recent advancements in semantic segmentation of images and to fuse the information extracted from the camera stream with accurate depth measurements from a Lidar sensor in order to create large-scale semantically labeled point clouds of the environment. We fuse the color and semantic data gathered from a round-view camera system with the depth data gathered from a Lidar sensor. In our framework, each Lidar scan point is projected onto the camera stream to extract the color and semantic information, while at the same time a large-scale 3D map of the environment is generated by a Lidar-based SLAM algorithm. While we employed a network that achieved state-of-the-art semantic segmentation results on the Cityscapes dataset [1] (IoU score of 82.1%), the sole use of the extracted semantic information only achieved an IoU score of 38.9% on 105 manually labeled 5x5 m tiles from 5 different trial runs within the city of Sendai, Japan (this decrease in accuracy is discussed in Section III-B). To increase the performance, we reclassify the label of each point. For this, two different approaches were investigated: a random forest and SparseConvNet [2] (a deep learning approach). For both methods, we investigated how the inclusion of semantic labels from the camera stream affected the classification of the 3D point cloud, and we show that a significant performance increase can be achieved by doing so: 25.4 percentage points for the random forest (40.0% without labels to 65.4% with labels) and 16.6 percentage points for SparseConvNet (33.4% without labels to 50.8% with labels). Finally, we present practical examples of how semantically enriched maps can be employed for further tasks. In particular, we show how different classes (e.g. cars and vegetation) can be removed from the point cloud in order to increase the visibility of other classes (e.g. road and buildings), and how the data could be used for extracting the trajectories of vehicles and pedestrians.
{"title":"Fusion of Camera and Lidar Data for Large Scale Semantic Mapping","authors":"Thomas Westfechtel, K. Ohno, R. B. Neto, Shotaro Kojima, S. Tadokoro","doi":"10.1109/ITSC.2019.8917107","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917107","url":null,"abstract":"Current self-driving vehicles rely on detailed maps of the environment, that contains exhaustive semantic information. This work presents a strategy to utilize the recent advancements in semantic segmentation of images, fuse the information extracted from the camera stream with accurate depth measurements of a Lidar sensor in order to create large scale semantic labeled point clouds of the environment. We fuse the color and semantic data gathered from a round-view camera system with the depth data gathered from a Lidar sensor. In our framework, each Lidar scan point is projected onto the camera stream to extract the color and semantic information while at the same time a large scale 3D map of the environment is generated by a Lidar-based SLAM algorithm. While we employed a network that achieved state of the art semantic segmentation results on the Cityscape dataset [1] (IoU score of 82.1%), the sole use of the extracted semantic information only achieved an IoU score of 38.9% on 105 manually labeled 5x5m tiles from 5 different trial runs within the Sendai city in Japan (this decrease in accuracy will discussed in section III-B). To increase the performance, we reclassify the label of each point. For this two different approaches were investigated: a random forest and SparseConvNet [2] (a deep learning approach). We investigated for both methods how the inclusion of semantic labels from the camera stream affected the classification task of the 3D point cloud. To which end we show, that a significant performance increase can be achieved by doing so - 25.4 percent points for random forest (40.0% w/o labels to 65.4% with labels) and 16.6 in case of the SparseConvNet (33.4% w/o labels to 50.8% with labels). Finally, we present practical examples on how semantic enriched maps can be employed for further tasks. In particular, we show how different classes (i.e. cars and vegetation) can be removed from the point cloud in order to increase the visibility of other classes (i.e. road and buildings). And how the data could be used for extracting the trajectories of vehicles and pedestrians.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"11 1","pages":"257-264"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82611655","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917415
Elwan Héry, Philippe Xu, P. Bonnifait
Localization remains a major issue for autonomous vehicles. Accurate localization relative to the road and other vehicles is essential for many navigation tasks. When vehicles cooperate and exchange information through wireless communications, they can mutually improve their localization. This paper presents a distributed cooperative localization method based on the exchange of Local Dynamic Maps (LDMs). Every LDM contains dynamic information on the pose and kinematics of all the cooperating agents. Different sources of information, such as dead-reckoning from the CAN bus, inaccurate (i.e. biased) GNSS positions, LiDAR and road border detections, are merged using an asynchronous Kalman filter strategy. The LDMs received from the other vehicles are merged using a Covariance Intersection filter to avoid data incest. Experimental results are evaluated on platooning scenarios. They show the importance of estimating GNSS biases and of having accurate relative measurements to improve the absolute localization process. These results also illustrate that the relative localization between vehicles is improved in every LDM, even for vehicles that are not able to perceive surrounding vehicles but which are instead perceived by others.
{"title":"Distributed asynchronous cooperative localization with inaccurate GNSS positions","authors":"Elwan Héry, Philippe Xu, P. Bonnifait","doi":"10.1109/ITSC.2019.8917415","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917415","url":null,"abstract":"Localization remains a major issue for autonomous vehicles. Accurate localization relative to the road and other vehicles is essential for many navigation tasks. When vehicles cooperate and exchange information through wireless communications, they can improve mutually their localization. This paper presents a distributed cooperative localization method based on the exchange of Local Dynamic Maps (LDMs). Every LDM contains dynamic information on the pose and kinematic of all the cooperating agents. Different sources of information such as dead-reckoning from the CAN bus, inaccurate (i.e. biased) GNSS positions, LiDAR and road border detections are merged using an asynchronous Kalman filter strategy. The LDMs received from the other vehicles are merged using a Covariance Intersection Filter to avoid data incest. Experimental results are evaluated on platooning scenarios. They show the importance of estimating GNSS biases and having accurate relative measurements to improve the absolute localization process. These results also illustrate that the relative localization between vehicles is improved in every LDMs even for vehicles not able to perceive surrounding vehicles but which are instead perceived by others.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"27 1","pages":"1857-1863"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82631261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8916981
Qingwen Han, Xiaoyuan Zhang, Junjun Zhang, Lingqiu Zeng, L. Ye, Jianmei Lei, Yang Jiang, Xuena Peng
With the concept of MEC (Multi-access Edge Computing) being put forward, the RSU (Roadside Unit) is considered a valid application provider, which not only performs transmission resource allocation and data-processing-related computing but also provides real-time applications to road vehicles. However, when fixed roadside nodes communicate with mobile vehicles, a high service migration rate can degrade the real-time performance of the corresponding service. Moreover, vehicle density also affects service performance. Hence, in this paper, a new concept, the MSCN (Mobile Secondary Computing Node), is defined, and an MSCN-oriented infrastructure and MSCN selection mechanism are proposed. A corresponding vehicle message dissemination mechanism is then designed. A network simulator (NS-3.28) is employed to investigate the performance of the proposed architecture. The simulation results show that the proposed architecture significantly improves both communication performance and computing efficiency.
{"title":"Research on resource scheduling and allocation mechanism of computation and transmission under MEC framework","authors":"Qingwen Han, Xiaoyuan Zhang, Junjun Zhang, Lingqiu Zeng, L. Ye, Jianmei Lei, Yang Jiang, Xuena Peng","doi":"10.1109/ITSC.2019.8916981","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8916981","url":null,"abstract":"With the concept of MEC (Multi-access Edge Computing) being put forward, RSU (Roadside Unit) is considered as a valid application provider, which not only executes transmission resource allocation and data processing related computing but also provides real-time applications to road vehicles. However, when fixed roadside nodes communicate with mobile vehicles, the high service migration rate could influence real-time feature of corresponding service. Moreover, vehicle density also affects service performance. Hence, in this paper, a new concept, MSCN (Mobile Secondary Computing Node), is defined, while a MSCN oriented infrastructure and MSCN selection mechanism are proposed. Then corresponding vehicle message dissemination mechanism is designed. A network simulator (NS-3.28) is employed to investigate the performance of the proposed architecture. The simulation results show that the proposed architecture significantly improves both communication performance and computing efficiency.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"26 2 1","pages":"437-442"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90049801","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917414
Lei Wang, Wanjing Ma
Carsharing is an alternative transportation mode for urban mobility. One-way carsharing presents a possible imbalance between fleet distribution and demand. A dynamic pricing approach can affect users' behavior, changing users' demands and the movement of vehicles in order to keep the system in balance. This paper presents a method to determine such pricing schemes. First, the paper reveals the mechanism of reaction of users who receive variable pricing offers and establishes a price-demand model. The price-demand model takes into account both the elastic price-demand effect and the changing of the departure and destination stations of users. Subsequently, we formulate an optimization model to find proper pricing schemes that keep the station vehicle inventory within a proper range. Finally, the paper presents a numerical example applying the proposed pricing scheme method.
{"title":"Pricing Approach to Balance Demands for One-way Car-sharing Systems*","authors":"Lei Wang, Wanjing Ma","doi":"10.1109/ITSC.2019.8917414","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917414","url":null,"abstract":"Carsharing is an alternative transportation mode for urban mobility. One-way carsharing model presents possible imbalance problem in fleet distribution and demands. Dynamic pricing approach can affect users’ behavior to change the users’ demands and the moving of vehicles in order to keep the system in balance. This paper presents a method to determine the pricing schemes. Firstly the paper reveals the mechanism of reaction of users who had received the variable pricing offers, and establishes a price-demand model. The price-demand model takes into account both the elasticity price-demand effect and the changing of departure and destination station of users. Subsequently, we formulate an optimization model to find out proper pricing schemes which can keep the station vehicle inventory at a proper range. Finally the paper presents a numerical example by adopting the pricing scheme method we put forward.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"93 1","pages":"1697-1702"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91437654","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917028
Yuki Ebizuka, S. Kato, M. Itami
When an emergency vehicle approaches a self-driving automobile, the automobile must give precedence to the emergency vehicle and needs to perform driving operations such as stopping or yielding. This requires the ability to automatically detect the approach of an emergency vehicle. In this study, for siren sounds based on emergency vehicle standards in Japan, we developed a method that uses siren sound processing to detect the approach of emergency vehicles and identify their type. These methods were evaluated on the detection and identification of multiple types of emergency vehicle sound sources, including in the presence of background noise set to approximate a real driving environment.
{"title":"Detecting approach of emergency vehicles using siren sound processing*","authors":"Yuki Ebizuka, S. Kato, M. Itami","doi":"10.1109/ITSC.2019.8917028","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917028","url":null,"abstract":"When an emergency vehicle approaches a self- driving automobile, the automobile must give precedence to the emergency vehicle and needs to perform driving operations such as stopping or yielding. This requires the ability to automatically detect the approach of an emergency vehicle. In this study, for siren sounds based on emergency vehicle standards in Japan, we developed a method that uses siren sound processing to detect the approach of emergency vehicles and identify the type. These methods were used to evaluate the detection and identification of multiple types of emergency vehicle sound sources and in the presence of background noise set to approximate a real driving environment.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"137 1","pages":"4431-4436"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83118867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917490
Takayuki Sugiura, Tomoki Watanabe
We present a method for estimating free spaces and obstacles in blind spots occluded from a single view. Knowledge about blind spots helps autonomous vehicles make better decisions, such as avoiding a probable collision risk. It is essentially ill-posed to decide whether unobservable areas are uniquely free or occupied. Therefore, our framework is designed to produce probable multi-hypothesis occupancy grid maps (OGMs) from a single-frame input based on the posterior distribution of blind spot environments. Compared to a deterministic single result, each hypothesis OGM can explicitly show other probable environments even in uncertain areas. In order to handle this, we introduce a combination of generative adversarial networks (GANs) and Monte Carlo sampling. Our deep convolutional neural network (CNN) is trained to model an approximate posterior distribution with an adversarial loss and dropout layers. By keeping dropout active at the inference step, the network generates diverse multi-hypothesis OGMs sampled from the distribution by Monte Carlo sampling. We demonstrate that the proposed method estimates diverse occluded free spaces and obstacles in multi-hypothesis OGMs from either a two-dimensional (2D) range sensor measurement or a monocular camera image. Our method can also detect blind spots ahead of the vehicle as driving risks in a real outdoor dataset.
{"title":"Probable Multi-hypothesis Blind Spot Estimation for Driving Risk Prediction","authors":"Takayuki Sugiura, Tomoki Watanabe","doi":"10.1109/ITSC.2019.8917490","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917490","url":null,"abstract":"We present a method for estimating free spaces and obstacles in blind spots occluded from a single view. Knowledge about blind spots helps autonomous vehicles make better decisions, such as avoiding a probable collision risk. It is essentially ill-posed to estimate whether unobservable areas are uniquely assigned as free or occupied spaces. Therefore, our framework is designed to be able to produce probable multi-hypothesis occupancy grid maps (OGM) from a single-frame input based on posterior distribution of blind spot environments. Compared to deterministic single result, each hypothesis OGM can show other probable environments explicitly even in uncertain areas. In order to handle this, we introduce a combination of generative adversarial networks (GANs) and Monte Carlo sampling. Our deep convolutional neural network (CNN) is trained to model an approximate posterior distribution with an adversarial loss and dropout layers. While activating dropout even at inference step, the network generates diverse multi-hypothesis OGMs sampled from the distribution by Monte Carlo sampling. We demonstrate that the proposed method estimates diverse occluded free spaces and obstacles in multi-hypothesis OGMs from either a two-dimensional (2D) range sensor measurement or a monocular camera image. Our method can also detect blind spots ahead of vehicle as driving risks in real outdoor dataset.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"22 1","pages":"4295-4302"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79283771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8917338
Xiaoxuan Wang, Lingjia Liu, T. Tang
High efficiency and security guarantees of wireless communication systems are critical for the safe operation of urban rail transit. In this paper, a train-centric communication-based train control (CBTC) system is first established based on train-to-train (T2T) wireless communication. Then, a novel cooperated security check scheme serves as the security assurance function in this T2T scenario to reduce the hazard from Sybil attacks. The quantized age of information (AoI) is used as a Quality of Service (QoS) indicator of the CBTC wireless communication systems in urban rail transit. Simulation results show that the proposed LTE-T2T-based wireless communication system achieves improved system AoI performance compared with traditional systems. Furthermore, with the help of the cooperated security check scheme, the proposed T2T system is shown to provide a stronger defense against Sybil attacks than normal T2T-based wireless communication systems.
{"title":"Improved T2T based Communication-Based Train Control Systems Through Cooperated Security Check","authors":"Xiaoxuan Wang, Lingjia Liu, T. Tang","doi":"10.1109/ITSC.2019.8917338","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8917338","url":null,"abstract":"The high-efficiency and security-guarantee of wireless communication systems are critical in urban rail transits, which are related to the safe operation. In this paper, the designed train-centric communication-based train control (CBTC) systems are established based on train-to-train (T2T) wireless communication firstly. Then, a novel cooperated security check scheme is served as the security assurance function in this T2T scenario to reduce the hazard from Sybil attacks. The quantized age of information (AoI) is used as a Quality of Service (QoS) indicator of the CBTC wireless communication systems in urban rail transit. Simulation results show that the proposed LTE-T2T based wireless communication systems can achieve improved system AoI performance compared with traditional systems. Furthermore, with the help of cooperated security check scheme, the defense function against the Sybil attack of proposed T2T is shown better than normal T2T based wireless communication systems.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"108 1","pages":"2509-2514"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80812293","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-10-01 | DOI: 10.1109/ITSC.2019.8916780
Xin Chang, Haijian Li, J. Rong, Xiaohua Zhao, Guoqiang Zhao
In this paper, we discuss the impacts of fog warning systems on driving behavior and traffic safety under different fog conditions. First, in order to obtain the driving data, an empirical driving simulator platform was established based on a real-world road in Beijing. A comparison study was conducted for eight scenarios comprising four warning systems under fog conditions. Thirty-five test drivers drove an instrumented vehicle eight times: 2 fog conditions (light fog, heavy fog) × 4 systems (no warning, Dynamic Message Sign only, On-Board Unit only, On-Board Unit & Dynamic Message Sign). The results show that intelligent warning systems can be beneficial to driving behavior and traffic safety. Meanwhile, the vehicle-to-vehicle warning system can significantly optimize individual driving behavior. It is suggested that proper warning systems should be considered for different fog conditions since they have different effects. The study results are helpful for selecting a proper information release form in the context of connected vehicles.
{"title":"Effects of Warning Systems on Longitudinal Driving Behavior and Safety under Fog Conditions*","authors":"Xin Chang, Haijian Li, J. Rong, Xiaohua Zhao, Guoqiang Zhao","doi":"10.1109/ITSC.2019.8916780","DOIUrl":"https://doi.org/10.1109/ITSC.2019.8916780","url":null,"abstract":"In this paper, we discuss the impacts of the fog warning systems on driving behavior and traffic safety under different fog conditions. Firstly, in order to obtain the driving data, an empirical driving simulator platform was established based on a real-world road in Beijing. The comparison study was conducted for eight scenarios which comprise four warning systems under fog conditions. Thirty-five test drivers drove an instrumented vehicle eight times, 2 fog conditions (light fog, heavy fog) × 4 systems (No warning, Dynamic Message Sign only, On-Board Unit only, On-Board Unit & Dynamic Message Sign). The results show that intelligent warning systems could be beneficial to driving behavior and traffic safety. Meanwhile, the vehicle-to-vehicle warning system can significantly optimize individual driving behavior. It is suggested that proper warning systems should be considered for different fog conditions since they have different effect. The study results are helpful to select proper information release form in the context of connected vehicle.","PeriodicalId":6717,"journal":{"name":"2019 IEEE Intelligent Transportation Systems Conference (ITSC)","volume":"109 1","pages":"2154-2159"},"PeriodicalIF":0.0,"publicationDate":"2019-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88579665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}