Real-time Pedestrian Traffic Light Detection
Roni Ash, Dolev Ofri, Jonathan Brokman, Idan Friedman, Y. Moshe
2018 IEEE International Conference on the Science of Electrical Engineering in Israel (ICSEE), December 2018. DOI: 10.1109/ICSEE.2018.8646287
Crossing a road is a dangerous activity for pedestrians, and therefore pedestrian crossings and intersections often include pedestrian-directed traffic lights. These traffic lights may be accompanied by audio signals to aid the visually impaired. In many cases, when such an audio signal is not available, a visually impaired pedestrian cannot cross the road without help. In this paper, we propose a technique that may help visually impaired people by detecting pedestrian traffic lights and their state (walk/don't walk) from video taken with a mobile phone camera. The proposed technique consists of two main modules: an object detector that uses a deep convolutional network, and a decision module. We investigate two variants for object detection (Faster R-CNN combined with a KCF tracker, or Tiny YOLOv2) and compare them. For better robustness, we exploit the fact that abrupt switching from red to green or vice versa is unique to traffic lights. The proposed technique aims to operate on a mobile phone in a client-server architecture. It proves to be fast and accurate, with a running time of 6 ms per frame on a desktop computer with a GeForce GTX 1080 GPU and a detection accuracy of more than 99%.
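The abstract's decision module is not specified in detail, but the idea it names — that a genuine pedestrian light switches state abruptly and then stays stable, while false detections flicker — can be sketched as a temporal filter over per-frame detector labels. The class name, window size, and threshold below are illustrative assumptions, not the paper's actual implementation:

```python
from collections import deque


class TrafficLightStateFilter:
    """Hypothetical temporal smoothing of per-frame walk/don't-walk labels.

    A sliding window of recent detector outputs is kept, and a state
    ('red' or 'green') is reported only once it dominates the window.
    Isolated misdetections are thus ignored, while a genuine abrupt
    red-to-green switch is confirmed within a few frames.
    """

    def __init__(self, window=5, threshold=0.8):
        self.window = deque(maxlen=window)  # recent per-frame labels
        self.threshold = threshold          # fraction of votes needed
        self.state = None                   # 'red', 'green', or None

    def update(self, label):
        """Feed one per-frame label ('red', 'green', or None); return the
        current confirmed state."""
        self.window.append(label)
        if len(self.window) == self.window.maxlen:
            for candidate in ("red", "green"):
                votes = sum(1 for l in self.window if l == candidate)
                if votes / len(self.window) >= self.threshold:
                    self.state = candidate
        return self.state
```

With a window of 5 frames and a 0.8 threshold, a single noisy frame cannot flip the reported state, but five consecutive frames of the new color confirm a genuine switch.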