PAWS: A Wearable Acoustic System for Pedestrian Safety
D. Godoy, Bashima Islam, S. Xia, Md Tamzeed Islam, Rishikanth Chandrasekaran, Yen-Chun Chen, S. Nirjon, P. Kinget, Xiaofan Jiang
2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI), published 17 April 2018
DOI: 10.1109/IoTDI.2018.00031
Citations: 28
Abstract
With the prevalence of smartphones, pedestrians and joggers today often walk or run while listening to music. Because they are deprived of the auditory cues that would otherwise warn them of danger, they are at a much greater risk of being hit by cars or other vehicles. In this paper, we build a wearable system that uses multi-channel audio sensors embedded in a headset to detect and locate cars from their honks, engine and tire noises, and to warn pedestrians of the imminent danger of approaching cars. We demonstrate that, using a segmented architecture and implementation consisting of headset-mounted audio sensors, front-end hardware that performs signal processing and feature extraction, and machine-learning-based classification on a smartphone, we are able to provide early danger detection in real time from up to 60 m away, with near-100% precision in vehicle detection, and to alert the user with low latency.
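For intuition only, the sketch below illustrates the kind of segmented pipeline the abstract describes: multi-channel frames from the headset microphones are reduced to simple features (per-channel energy and cross-correlation time differences between channels), which a lightweight detector then classifies on the phone. This is a minimal NumPy sketch under assumed parameters (SAMPLE_RATE, N_CHANNELS, FRAME_LEN) and with hypothetical function names (estimate_tdoa, extract_features, classify_frame); it is not the PAWS front-end hardware or its learned classifier.

```python
# Illustrative sketch only -- not the PAWS implementation from the paper.
# Assumes a 4-channel headset microphone array; uses plain NumPy.
import numpy as np

SAMPLE_RATE = 16_000   # assumed sampling rate, Hz
N_CHANNELS = 4         # assumed number of headset microphones
FRAME_LEN = 1024       # assumed analysis frame length, samples

def estimate_tdoa(ref: np.ndarray, ch: np.ndarray) -> int:
    """Time difference of arrival (in samples) between two channels,
    taken from the peak of their cross-correlation."""
    corr = np.correlate(ch, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

def extract_features(frame: np.ndarray) -> np.ndarray:
    """Toy front-end features: per-channel RMS energy plus the TDOA of each
    channel relative to channel 0 (a stand-in for the feature-extraction stage)."""
    rms = np.sqrt(np.mean(frame ** 2, axis=1))
    tdoas = [estimate_tdoa(frame[0], frame[i]) for i in range(1, frame.shape[0])]
    return np.concatenate([rms, np.asarray(tdoas, dtype=float)])

def classify_frame(features: np.ndarray, energy_thresh: float = 0.05) -> bool:
    """Placeholder detector: flag a possible vehicle when mean channel energy
    exceeds a threshold. PAWS instead uses a learned classifier on the phone."""
    mean_energy = features[:N_CHANNELS].mean()
    return bool(mean_energy > energy_thresh)

if __name__ == "__main__":
    # Synthetic 4-channel frame standing in for real headset audio.
    rng = np.random.default_rng(0)
    frame = 0.1 * rng.standard_normal((N_CHANNELS, FRAME_LEN))
    feats = extract_features(frame)
    print("features:", feats)
    print("vehicle suspected:", classify_frame(feats))
```

In the paper's split of work, the heavier signal processing and feature extraction run on dedicated front-end hardware, and only compact features reach the smartphone classifier, which is how the system keeps the end-to-end alert latency low.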