{"title":"基于人工智能的虚拟传感器在移动设备上融合激光雷达+摄像头传感器数据,为视障人士提供态势感知","authors":"Vivek Bharati","doi":"10.1109/SAS51076.2021.9530102","DOIUrl":null,"url":null,"abstract":"Autonomy of the blind and visually impaired can be achieved through technological means and thereby empowering them with a sense of independence. Mobile phones are ubiquitous and can access artificial intelligence capabilities locally and in the Cloud. Navigational sensors, such as Light Detection and Ranging (LiDAR), and wide angle cameras, typically found in self-driving cars, are beginning to be incorporated into mobile phones. In this paper, we propose techniques for using mobile phone LiDAR + camera sensor data fusion along with edge + Cloud split AI to create an indoor situational awareness and navigational aid for the visually impaired. In addition to physical sensors, the system uses AI models as virtual sensors to provide the required functionality. The system enhances the image of a scene captured by a camera using distance information from the LiDAR and directional information computed by the device to provide a rich 3-D description of the space in front of the user. The system also uses a combination of sensor data fusion and geometric formulas to provide step-by-step walking instructions for the user in order to reach destinations. The user-centric system proposed here can be a valuable assistive technology for the blind and visually imnpired.","PeriodicalId":224327,"journal":{"name":"2021 IEEE Sensors Applications Symposium (SAS)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"4","resultStr":"{\"title\":\"LiDAR + Camera Sensor Data Fusion On Mobiles With AI-based Virtual Sensors To Provide Situational Awareness For The Visually Impaired\",\"authors\":\"Vivek Bharati\",\"doi\":\"10.1109/SAS51076.2021.9530102\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Autonomy of the blind and visually impaired can be achieved through technological means and thereby empowering them with a sense of independence. Mobile phones are ubiquitous and can access artificial intelligence capabilities locally and in the Cloud. Navigational sensors, such as Light Detection and Ranging (LiDAR), and wide angle cameras, typically found in self-driving cars, are beginning to be incorporated into mobile phones. In this paper, we propose techniques for using mobile phone LiDAR + camera sensor data fusion along with edge + Cloud split AI to create an indoor situational awareness and navigational aid for the visually impaired. In addition to physical sensors, the system uses AI models as virtual sensors to provide the required functionality. The system enhances the image of a scene captured by a camera using distance information from the LiDAR and directional information computed by the device to provide a rich 3-D description of the space in front of the user. The system also uses a combination of sensor data fusion and geometric formulas to provide step-by-step walking instructions for the user in order to reach destinations. 
The user-centric system proposed here can be a valuable assistive technology for the blind and visually imnpired.\",\"PeriodicalId\":224327,\"journal\":{\"name\":\"2021 IEEE Sensors Applications Symposium (SAS)\",\"volume\":\"26 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-08-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"4\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2021 IEEE Sensors Applications Symposium (SAS)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SAS51076.2021.9530102\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE Sensors Applications Symposium (SAS)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SAS51076.2021.9530102","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
LiDAR + Camera Sensor Data Fusion On Mobiles With AI-based Virtual Sensors To Provide Situational Awareness For The Visually Impaired
Autonomy for the blind and visually impaired can be achieved through technological means, empowering them with a sense of independence. Mobile phones are ubiquitous and can access artificial intelligence capabilities both locally and in the Cloud. Navigational sensors such as Light Detection and Ranging (LiDAR) and wide-angle cameras, typically found in self-driving cars, are beginning to be incorporated into mobile phones. In this paper, we propose techniques that use mobile-phone LiDAR + camera sensor data fusion, together with edge + Cloud split AI, to create an indoor situational awareness and navigational aid for the visually impaired. In addition to physical sensors, the system uses AI models as virtual sensors to provide the required functionality. The system enhances the image of a scene captured by the camera with distance information from the LiDAR and directional information computed by the device to provide a rich 3-D description of the space in front of the user. The system also combines sensor data fusion with geometric formulas to give the user step-by-step walking instructions for reaching destinations. The user-centric system proposed here can be a valuable assistive technology for the blind and visually impaired.
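
The distance-plus-direction idea sketched in the abstract can be illustrated with a short example. The following Python code is a minimal, hypothetical sketch (not the paper's implementation): it back-projects a camera detection to a 3-D point using the LiDAR depth sampled at that pixel and a pinhole camera model, then converts that point into a turn-and-walk instruction. All function names, camera intrinsics, and the example detection are illustrative assumptions.

# Hypothetical sketch (not the paper's implementation): fuse a LiDAR depth
# reading with a camera detection to localize an object in 3-D, then derive a
# simple walking instruction (turn angle + distance) from geometric formulas.
import math
import numpy as np

def backproject(pixel, depth_m, fx, fy, cx, cy):
    """Pinhole-model back-projection of an image pixel to a 3-D camera-frame
    point, using the LiDAR depth sampled at that pixel (metres)."""
    u, v = pixel
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])  # camera frame: +z points forward

def walking_instruction(point_cam, device_heading_deg):
    """Turn a camera-frame 3-D point into 'turn X degrees left/right, walk
    Y metres'. device_heading_deg is the compass heading reported by the
    device (an assumed input)."""
    x, _, z = point_cam
    bearing_rel = math.degrees(math.atan2(x, z))  # angle off the camera axis
    distance = math.hypot(x, z)                   # ground-plane distance
    turn = "right" if bearing_rel >= 0 else "left"
    heading_abs = (device_heading_deg + bearing_rel) % 360
    return (f"Turn {abs(bearing_rel):.0f} degrees {turn} "
            f"(heading {heading_abs:.0f}), then walk {distance:.1f} metres.")

if __name__ == "__main__":
    # Example: a doorway detected at pixel (820, 540) with LiDAR depth 3.4 m,
    # using illustrative intrinsics for a wide-angle phone camera.
    point = backproject((820, 540), depth_m=3.4, fx=1500.0, fy=1500.0,
                        cx=960.0, cy=540.0)
    print(walking_instruction(point, device_heading_deg=90.0))

Running this example prints an instruction such as "Turn 5 degrees left (heading 85), then walk 3.4 metres.", illustrating how per-pixel depth from the LiDAR plus device orientation can be reduced to simple spoken guidance; the actual system described in the paper layers AI-based virtual sensors and edge + Cloud split inference on top of this kind of geometry.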