Pupil Detection for Augmented and Virtual Reality based on Images with Reduced Bit Depths
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881378
Gernot Fiala, Zhenyu Ye, C. Steger
Future augmented reality (AR) and virtual reality (VR) applications will rely on several different kinds of sensors, used, for example, for gesture recognition, head pose tracking, and pupil tracking. All of these sensors send data to a host platform, where the data must be processed in real time. This requires high processing power, which leads to higher energy consumption. Lowering the energy consumption therefore requires optimizations of the image processing system. This paper investigates pupil detection for AR/VR applications based on images with reduced bit depths. It shows that images with bit depths reduced even down to 3 or 2 bits can be used for pupil detection with almost the same average detection rate. Reducing the bit depth of an image shrinks its memory footprint, which makes in-sensor processing feasible for future image sensors and provides the foundation for future in-sensor processing architectures.
{"title":"Pupil Detection for Augmented and Virtual Reality based on Images with Reduced Bit Depths","authors":"Gernot Fiala, Zhenyu Ye, C. Steger","doi":"10.1109/SAS54819.2022.9881378","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881378","url":null,"abstract":"For future augmented reality (AR) and virtual reality (VR) applications, several different kinds of sensors will be used. These sensors, to give some examples, are used for gesture recognition, head pose tracking and pupil tracking. All these sensors send data to a host platform, where the data must be processed in real-time. This requires high processing power which leads to higher energy consumption. To lower the energy consumption, optimizations of the image processing system are necessary. This paper investigates pupil detection for AR/VR applications based on images with reduced bit depths. It shows that images with reduced bit depths even down to 3 or 2 bits can be used for pupil detection, with almost the same average detection rate. Reduced bit depths of an image reduces the memory foot-print, which allows to perform in-sensor processing for future image sensors and provides the foundation for future in-sensor processing architectures.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125969028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Active magnetic ranging while drilling: A down-hole surroundings mapping
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881354
K. Husby, A. Saasen, J. D. Ytrehus, M. Hjelstuen, T. Eriksen, A. Liberale
Active magnetic ranging (AMR) while drilling is an electromagnetic method used to map the subsurface through its conductivity. Subsurface mapping is needed in both the oil and gas industry and the geothermal drilling industry: in both cases, several wells are drilled close to each other to exploit the full potential of an oil or geothermal reservoir. The challenge with subsurface mapping, compared to radar mapping through air, is the very small skin depth caused by the high conductivity of the ground. For that reason, existing systems are often limited to very short-range operations. In this paper, methods for range improvement are presented. To maximize the range potential, the operating frequency is reduced and the efficiency and size of the antennas are increased as much as possible.
{"title":"Active magnetic ranging while drilling: A down-hole surroundings mapping","authors":"K. Husby, A. Saasen, J. D. Ytrehus, M. Hjelstuen, T. Eriksen, A. Liberale","doi":"10.1109/SAS54819.2022.9881354","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881354","url":null,"abstract":"Active magnetic ranging (AMR) while drilling is an electromagnetic method used to map subsurface ground by its conductivity. Subsurface mapping is needed both in the oil and gas industry and in the geothermal drilling industry. In both cases, several wells are drilled close to each other to exploit the full potential of either an oil reservoir or a geothermal reservoir. The challenge however with subsurface mapping compared to thin air radar mapping is the very low skin depth given by the high conductivity of the ground. For that reason, existing systems are often limited to very short range operations.In this paper methods for range improvement are presented. To maximize the range potential the frequency of operation is reduced, and the efficiency and size of the antennas are increased as much as possible.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129365489","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Effects of Lighting and Window Length on Heart Rate Assessment through Video Magnification
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881347
L. Kassab, Andrew J. Law, Bruce Wallace, J. Larivière-Chartier, R. Goubran, F. Knoefel
Screening people for signs of illness through contactless measurement of vital signs could be beneficial in public transportation settings or long-term care facilities. One solution for this goal is to use red/green/blue (RGB) video cameras to measure heart rate. In this work, we present results for the assessment of heart rate through Video Magnification (VM) techniques applied to RGB face video recordings from 19 subjects. The work specifically explores (1) the effect of two illumination levels and (2) the effect of window length on the accuracy of heart rate extraction via VM. The results show that higher illumination, obtained by combining halogen and LED lighting, yielded lower average errors in the VM heart rate. Additionally, increasing the window length from 10 seconds up to 30 seconds improves VM heart rate accuracy when there are small, frequent head movements in the video, but decreases accuracy in the absence of head motion.
{"title":"Effects of Lighting and Window Length on Heart Rate Assessment through Video Magnification","authors":"L. Kassab, Andrew J. Law, Bruce Wallace, J. Larivière-Chartier, R. Goubran, F. Knoefel","doi":"10.1109/SAS54819.2022.9881347","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881347","url":null,"abstract":"Screening people for signs of illness through contactless measurement of vital signs could be beneficial in public transportation settings or long-term care facilities. To achieve this goal, one solution could utilize Red/Green/Blue (RGB) video cameras to measure heart rate. In this work, we present results for the assessment of heart rate through Video Magnification (VM) techniques applied to RGB face video recordings from 19 subjects. The work specifically explores (1) the effect of two lighting illumination levels and (2) the effect of window length on the accuracy of heart rate extraction via Video Magnification. The results show that higher illumination, as a result of combining halogen light with LED, yielded lower average errors in heart rate measured through Video Magnification. Additionally, the results show that increasing the window length from 10 seconds up to 30 seconds improves VM heart rate accuracy when there are small frequent head movements in the video but decreases heart rate accuracy in the absence of head motion.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132137552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Towards lightweight deep neural network for smart agriculture on embedded systems
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881382
Pengwei Du, T. Polonelli, M. Magno, Zhiyuan Cheng
Agriculture is a pillar industry for human survival, yet various diseases threaten the health of crops and reduce yields. Industry 4.0 is making strides in plant disease prevention and detection, in addition to helping farmers improve plantation income. To prevent crop diseases in time, this paper proposes, implements, and evaluates a low-power smart camera. It features a lightweight neural network that monitors and verifies the growth status of crops. The proposed tiny model combines high accuracy with a complexity low enough for deployment on milliwatt-power microcontrollers. Experimental results show that our work reaches 99% accuracy on a 4-class dataset and more than 96% on a 10-class dataset. The compact model size (139 kB) and low complexity enable ultra-low average power consumption (2.63 mW) on the battery-powered Sony Spresense platform, which features a six-core ARM Cortex-M4F.
{"title":"Towards lightweight deep neural network for smart agriculture on embedded systems","authors":"Pengwei Du, T. Polonelli, M. Magno, Zhiyuan Cheng","doi":"10.1109/SAS54819.2022.9881382","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881382","url":null,"abstract":"Agriculture is the pillar industry for human survival. However, various diseases threaten the health of crops and lead to a decrease in yield. Industry 4.0 is making strides in plant illness prevention and detection, other than supporting farmers to improve plantations’ income. To prevent crop diseases in time, this paper proposes, implements, and evaluates a low-power smart camera. It features a lightweight neural network to verify and monitor the growth status of crops. The proposed tiny model features optimized complexity, to be deployed in milliwatt power microcontrollers, and high accuracy. Experimental results show that our work reaches 99% accuracy on a 4-classes dataset and more than 96% for a 10 classes dataset. The compact model size (139 kB) and low complexity enable ultra-low power consumption (2.63 mW per hour) on the battery-powered Sony Spresense platform, which features a six-core ARM Cortex-M4F.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125414359","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simple and highly sensitive Force Sensor based on modified plastic optical fibers and cantilevers
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881346
N. Cennamo, F. Arcadio, V. Marletta, D. D. Prete, B. Andò, L. Zeni, Mario Cesaro, Alfredo De Matteis
In this work, a force sensor based on plastic optical fibers (POFs) is realized and tested. More specifically, the optical sensor system consists of a cantilever made from a spring-steel beam and a modified POF glued to the underside of the cantilever. One end of the cantilever is fixed to the optical bench with a purpose-built support, while a weight is applied to the other end to produce the applied force. The POF is modified with notches to improve the optical performance of the force sensor. An analysis is carried out to characterize the sensor system: it shows linear behaviour from 50 mN to 300 mN, with a sensitivity of 53.43 mV/N and a resolution of 0.01 N.
{"title":"A simple and highly sensitive Force Sensor based on modified plastic optical fibers and cantilevers","authors":"N. Cennamo, F. Arcadio, V. Marletta, D. D. Prete, B. Andò, L. Zeni, Mario Cesaro, Alfredo De Matteis","doi":"10.1109/SAS54819.2022.9881346","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881346","url":null,"abstract":"In this work, a force sensor based on plastic optical fibers (POFs) is realized and tested. More specifically, the optical sensor system is composed of a cantilever obtained by a spring-steel beam and a modified POF glued on the underside of the cantilever. One end of the cantilever is fixed to the optical desk using a developed support, while on the other end, a weight is applied to realize an applied force. The POF is modified by notches in order to improve the optical performance of the force sensor. An analysis is carried out to characterize the sensor system. In particular, it has a linear behaviour ranging from 50 mN to 300 mN with a sensitivity of 53.43 mV/N and a resolution of 0.01 N.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131105715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assessment of UWB RTLS for Proximity Hazards Management in Construction Sites
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881376
P. Bellagente
According to statistics, the construction market is one of the most dangerous economic sectors in the world. Construction workers are continuously exposed to moving materials and machinery, often in constrained spaces, which raises the risk of collision accidents. In this paper, an Ultra-Wide Band (UWB) Real Time Location System (RTLS), designed in a previous work for proximity hazard management on construction sites, is described in detail. An extensive measurement campaign was carried out outdoors, using a square grid (15 m x 15 m) with a 1 m step, for a total of 225 positions. For each position, 1000 location measurements were collected and the two-dimensional localization resolution was estimated. Results show that the location resolution remains similar across the considered area and could be manually verified by construction workers. In optimal conditions, the resolution lies between 0.01 m and 0.05 m. The results also highlight a major error contribution from radio-frequency reflection interference, which makes it impossible to measure positions under some conditions.
{"title":"Assessment of UWB RTLS for Proximity Hazards Management in Construction Sites","authors":"P. Bellagente","doi":"10.1109/SAS54819.2022.9881376","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881376","url":null,"abstract":"According to statistics, the construction market is one of the most dangerous economic sector all around the world. Construction workers are continuously exposed to moving materials and machinery, often in constrained spaces, rising the risk of collision accidents. In this paper an Ultra-Wide Band (UWB) Real Time Location System (RTLS) designed in a previous work for proximity hazards management in construction sites is described in detail. An extensive measurement campaign have been carried out outdoor, using a square grid (15 m x 15 m) and 1 m step, for a total of 225 positions. For each position, 1000 location measures have been collected and the bidimensional localization resolution has been estimated. Results show that location resolution remains similar across the considered area and that it could be manually verified by construction workers. In optimal conditions, the resolution ranges are within 0.01 m and 0.05 m . The results highlight a major error contribution due to radio-frequency reflection interference, which makes impossible to measure positions under some conditions.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132347284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a neural network to identify plastics using Fluorescence Lifetime Imaging Microscopy
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881372
Georgekutty Jose Maniyattu, Eldho Geegy, N. Leiter, Maximilian Wohlschlager, M. Versen, C. Laforsch
Plastics have become a major part of everyday life. If not recycled correctly, uncontrolled plastic usage leads to accumulation in the environment, posing a threat to flora and fauna. Correctly sorting and recycling the most common plastic types, and identifying plastic in the environment, are therefore important. Fluorescence lifetime imaging microscopy shows high potential for sorting and identifying plastic types. A data-based and an image-based classification are investigated using the Python programming language to demonstrate the potential of a neural network based on fluorescence lifetime images for identifying plastic types. The results indicate that the data-based classification achieves higher identification accuracy than the image-based classification.
{"title":"Development of a neural network to identify plastics using Fluorescence Lifetime Imaging Microscopy","authors":"Georgekutty Jose Maniyattu, Eldho Geegy, N. Leiter, Maximilian Wohlschlager, M. Versen, C. Laforsch","doi":"10.1109/SAS54819.2022.9881372","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881372","url":null,"abstract":"Plastics have become a major part of human’s daily life. An uncontrolled usage of plastic leads to an accumulation in the environment posing a threat to flora and fauna, if not recycled correctly. The correct sorting and recycling of the most commonly available plastic types and an identification of plastic in the environment are important. Fluorescence lifetime imaging microscopy shows a high potential in sorting and identifying plastic types. A data-based and an image-based classification are investigated using python programming language to demonstrate the potential of a neural network based on fluorescence lifetime images to identify plastic types. The results indicate that the data-based classification has a higher identification accuracy compared to the image-based classification.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115546176","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feasibility of Measuring Shot Group Using LoRa Technology and YOLO V5
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881356
Sanghyun Park, Dongheon Lee, Jisoo Choi, Dohyeon Ko, Minji Lee, Zack Murphy, Nowf Binhowidy, Anthony H. Smith
Shooting is a common activity all over the world for both military and recreational purposes. Shooting performance can be measured by the size of the shot group (grouping). Traditionally, shooters have calculated the group size by measuring the distances between bullet impacts by hand. This paper aims to create a practical automated shot-group size measuring module that can be used from several kilometers away. It comprises an IoT (Internet of Things) system and a mobile application that users can access. LoRa technology is adopted to cover long distances, and YOLO V5 is implemented to detect bullet impacts. Mathematical methods for calculating accurate distances and the supporting engineering techniques are described, with experiments covering various parameters and conditions. In indoor tests, the proposed module measured the shot group with a mean accuracy of 91.8%. Outdoor tests, which were affected by uncontrolled environmental variables, are expected to yield better accuracy in future work.
{"title":"Feasibility of Measuring Shot Group Using LoRa Technology and YOLO V5","authors":"Sanghyun Park, Dongheon Lee, Jisoo Choi, Dohyeon Ko, Minji Lee, Zack Murphy, Nowf Binhowidy, Anthony H. Smith","doi":"10.1109/SAS54819.2022.9881356","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881356","url":null,"abstract":"Shooting is a common activity all over the world for both military and recreational purposes. Shooting performance can be measured from the size of the shot group (grouping). Shooters have been calculating the size of the group by measuring the distance between bullet impacts using their hands. This paper aims to create a reasonable automated shot grouping size measuring module that is available from several kilometers away. It includes an IoT(Internet of Things) system and a mobile application that users can access. LoRa technology is adopted for covering long distances, and YOLO V5 is implemented to detect bullet impacts. Mathematical methods for calculating accurate distance and engineering techniques to fill the needs are described with experiments on various parameters and conditions. The proposed module showed that indoor tests measured the shot group with a mean accuracy of 91.8%. For future work, outdoor tests, which were affected by environmental control variables, are expected to give better accuracy.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121584681","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Live Migration of a 3D Flash LiDAR System between two Independent Data Processing Systems with Redundant Design
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881255
Philipp Stelzer, Sebastian Reicher, Georg Macher, C. Steger, Raphael Schermann
Self-driving and self-flying vehicles can drive or fly independently, without the intervention of an operator. For this purpose, these vehicles need sensors for environment perception and safety-critical data processing systems to process the raw data obtained from those sensors. If such safety-critical systems fail, the consequences can be fatal for human lives and/or the environment, especially in highly automated vehicles; a total failure of these systems is one of the worst scenarios in an automated vehicle. Therefore, such safety-critical systems are often designed redundantly to prevent a total failure of environment perception. To ensure that the vehicle can continue to operate safely, however, the live migration from one system to the other must be carried out with as little downtime as possible. In this publication, we present a concept for live migration of a 3D Flash LiDAR between two independent, redundantly designed data processing systems. This concept allows highly automated vehicles to remain fail-operational if one of the redundant data processing systems fails. Results obtained from the implemented concept, without specifically addressing performance, are also provided to demonstrate feasibility.
{"title":"Live Migration of a 3D Flash LiDAR System between two Independent Data Processing Systems with Redundant Design","authors":"Philipp Stelzer, Sebastian Reicher, Georg Macher, C. Steger, Raphael Schermann","doi":"10.1109/SAS54819.2022.9881255","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881255","url":null,"abstract":"Self-driving and self-flying vehicles have the ability to drive respectively fly independently without the intervention of an operator. For this purpose, these vehicles need sensors for environment perception and data processing systems, which are safety-critical, to process the obtained raw data from these sensors. However, if such safety-critical systems fail, this can have fatal consequences and can affect human lives and/or the environment, especially in the case of highly automated vehicles. A total failure of these systems is one of the worst scenarios in an automated vehicle. Therefore, such safety-critical systems are often designed redundantly in order to prevent a total failure of environment perception. In order to ensure that the operation of the vehicle can continue safely, however, the live migration from one system to the other must be carried out with as little downtime as possible. In our publication, we present a concept for a 3D Flash LiDAR live migration between two independent data processing systems with redundant design. This concept provides a solution for highly automated vehicles to remain fail-operational in case one of the redundant data processing systems fails. The results obtained from the implemented concept, without specifically addressing performance, are also provided to demonstrate feasibility.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122721611","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluation of the quality of LiDAR data in the varying ambient light
Pub Date: 2022-08-01 | DOI: 10.1109/SAS54819.2022.9881373
Bhaskar Anand, Harshal Verma, A. Thakur, Parvez Alam, P. Rajalakshmi
Light detection and ranging (LiDAR) is a widely used sensor in intelligent transportation systems (ITS). It precisely determines the depth of the objects around a vehicle. In this paper, the effect of ambient light on the quality of acquired LiDAR data is presented. Data were captured at different times of day under varied light conditions: partial light in the early morning and evening, no light at night, and ideal light at mid-day. Data were acquired at these four times. On the acquired point cloud data, segmentation of an object (a person, in this experiment) was performed. The number of object points and the point density were observed to examine whether light affects the quality of LiDAR data. The results of the experiments suggest that the variation of light has little or no effect on the quality of LiDAR data.
{"title":"Evaluation of the quality of LiDAR data in the varying ambient light","authors":"Bhaskar Anand, Harshal Verma, A. Thakur, Parvez Alam, P. Rajalakshmi","doi":"10.1109/SAS54819.2022.9881373","DOIUrl":"https://doi.org/10.1109/SAS54819.2022.9881373","url":null,"abstract":"Light detection and ranging (LiDAR) is a widely used sensor for Intelligent transportation systems (ITS). It precisely determines the depth of the objects present around a vehicle. In this paper, the effect of light on the quality of acquired LiDAR data has been presented. The data was captured at different times in a day with varied light conditions. In the early morning and evening, there is partial light. At the night there is no light whereas in the mid-day there is perfect light condition. The data was acquired in the above four timings. On the acquired point cloud data, segmentation of an object, a person in the experiment, was performed. The number of object points and the point density have been observed to examine if light affects the quality of LiDAR data. The results, of the experiments, performed, suggest that the variation of light has little or no effect on the quality of LiDAR data.","PeriodicalId":129732,"journal":{"name":"2022 IEEE Sensors Applications Symposium (SAS)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2022-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120978146","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}