
Latest publications from the 2022 IEEE Sensors Applications Symposium (SAS)

Pupil Detection for Augmented and Virtual Reality based on Images with Reduced Bit Depths
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881378
Gernot Fiala, Zhenyu Ye, C. Steger
For future augmented reality (AR) and virtual reality (VR) applications, several different kinds of sensors will be used. These sensors are used, for example, for gesture recognition, head pose tracking and pupil tracking. All these sensors send data to a host platform, where the data must be processed in real-time. This requires high processing power, which leads to higher energy consumption. To lower the energy consumption, optimizations of the image processing system are necessary. This paper investigates pupil detection for AR/VR applications based on images with reduced bit depths. It shows that images with bit depths reduced even down to 3 or 2 bits can be used for pupil detection with almost the same average detection rate. Reducing the bit depth of an image reduces the memory footprint, which allows in-sensor processing to be performed on future image sensors and provides the foundation for future in-sensor processing architectures.
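As a rough illustration of the bit-depth reduction step discussed above (not the authors' pipeline), the following minimal Python sketch quantizes an 8-bit grayscale frame down to 3 or 2 bits and compares the raw storage required; the frame size and image content are arbitrary assumptions.

```python
import numpy as np

def reduce_bit_depth(image_8bit: np.ndarray, bits: int) -> np.ndarray:
    """Quantize an 8-bit grayscale image to the given bit depth by dropping LSBs."""
    return (image_8bit >> (8 - bits)).astype(np.uint8)

# Synthetic 8-bit eye-region frame (placeholder for a real eye camera image).
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)

for b in (3, 2):
    q = reduce_bit_depth(frame, b)
    print(f"{b}-bit image: {int(q.max()) - int(q.min()) + 1} gray levels, "
          f"packed size ~ {frame.size * b // 8} bytes vs {frame.size} bytes at 8 bits")
```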
Citations: 2
Active magnetic ranging while drilling: A down-hole surroundings mapping
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881354
K. Husby, A. Saasen, J. D. Ytrehus, M. Hjelstuen, T. Eriksen, A. Liberale
Active magnetic ranging (AMR) while drilling is an electromagnetic method used to map the subsurface ground by its conductivity. Subsurface mapping is needed both in the oil and gas industry and in the geothermal drilling industry. In both cases, several wells are drilled close to each other to exploit the full potential of either an oil reservoir or a geothermal reservoir. The challenge with subsurface mapping compared to thin-air radar mapping, however, is the very low skin depth caused by the high conductivity of the ground. For that reason, existing systems are often limited to very short-range operations. In this paper, methods for range improvement are presented. To maximize the range potential, the frequency of operation is reduced, and the efficiency and size of the antennas are increased as much as possible.
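The skin-depth limitation mentioned above can be made concrete with the standard good-conductor formula delta = 1/sqrt(pi * f * mu * sigma). The sketch below evaluates it for a few assumed ground conductivities and operating frequencies (the specific values are illustrative, not from the paper) and shows why lowering the operating frequency extends the usable range.

```python
import math

def skin_depth(frequency_hz: float, conductivity_s_per_m: float,
               relative_permeability: float = 1.0) -> float:
    """Skin depth delta = 1 / sqrt(pi * f * mu * sigma) for a conductive medium."""
    mu = 4e-7 * math.pi * relative_permeability  # permeability in H/m
    return 1.0 / math.sqrt(math.pi * frequency_hz * mu * conductivity_s_per_m)

# Assumed example values: dry rock up to saline formation water.
for f in (100.0, 1e3, 10e3):          # operating frequencies in Hz
    for sigma in (0.01, 0.1, 1.0):    # ground conductivities in S/m
        print(f"f = {f:8.0f} Hz, sigma = {sigma:4.2f} S/m -> "
              f"skin depth ~ {skin_depth(f, sigma):7.1f} m")
```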
Citations: 0
Effects of Lighting and Window Length on Heart Rate Assessment through Video Magnification
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881347
L. Kassab, Andrew J. Law, Bruce Wallace, J. Larivière-Chartier, R. Goubran, F. Knoefel
Screening people for signs of illness through contactless measurement of vital signs could be beneficial in public transportation settings or long-term care facilities. To achieve this goal, one solution could utilize Red/Green/Blue (RGB) video cameras to measure heart rate. In this work, we present results for the assessment of heart rate through Video Magnification (VM) techniques applied to RGB face video recordings from 19 subjects. The work specifically explores (1) the effect of two lighting illumination levels and (2) the effect of window length on the accuracy of heart rate extraction via Video Magnification. The results show that higher illumination, obtained by combining halogen light with LED light, yielded lower average errors in heart rate measured through Video Magnification. Additionally, the results show that increasing the window length from 10 seconds up to 30 seconds improves VM heart rate accuracy when there are small, frequent head movements in the video, but decreases heart rate accuracy in the absence of head motion.
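For context, a common way to estimate heart rate from a magnified face-video signal is to window a colour-channel time series and pick the dominant spectral peak within the plausible heart-rate band. The sketch below applies this generic approach to a synthetic 75 bpm signal for 10, 20 and 30 s windows; it is not the authors' Video Magnification implementation, and all signal parameters are assumptions.

```python
import numpy as np

def heart_rate_bpm(signal: np.ndarray, fps: float, window_s: float) -> float:
    """Estimate heart rate from the last `window_s` seconds of a face-colour signal."""
    n = int(window_s * fps)
    seg = signal[-n:] - np.mean(signal[-n:])
    spectrum = np.abs(np.fft.rfft(seg * np.hanning(n)))
    freqs = np.fft.rfftfreq(n, d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)   # 42-180 bpm plausibility band
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic 75 bpm (1.25 Hz) pulse sampled at 30 fps, with noise mimicking motion.
fps, hr_hz = 30.0, 1.25
t = np.arange(0, 40, 1.0 / fps)
signal = np.sin(2 * np.pi * hr_hz * t) + 0.5 * np.random.randn(t.size)

for window in (10, 20, 30):   # window lengths compared in the paper
    print(f"{window:2d} s window -> {heart_rate_bpm(signal, fps, window):5.1f} bpm")
```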
Citations: 4
Towards lightweight deep neural network for smart agriculture on embedded systems
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881382
Pengwei Du, T. Polonelli, M. Magno, Zhiyuan Cheng
Agriculture is the pillar industry for human survival. However, various diseases threaten the health of crops and lead to a decrease in yield. Industry 4.0 is making strides in plant illness prevention and detection, in addition to supporting farmers in improving plantations' income. To prevent crop diseases in time, this paper proposes, implements, and evaluates a low-power smart camera. It features a lightweight neural network to verify and monitor the growth status of crops. The proposed tiny model features optimized complexity, so that it can be deployed on milliwatt-power microcontrollers, and high accuracy. Experimental results show that our work reaches 99% accuracy on a 4-class dataset and more than 96% on a 10-class dataset. The compact model size (139 kB) and low complexity enable ultra-low power consumption (2.63 mW per hour) on the battery-powered Sony Spresense platform, which features a six-core ARM Cortex-M4F.
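To give a sense of what a kilobyte-scale classifier for such a task can look like, here is a minimal PyTorch sketch of a small CNN with a parameter count in the low thousands; the layer sizes, input resolution and class count are assumptions and do not reproduce the 139 kB model described in the paper.

```python
import torch
import torch.nn as nn

class TinyCropNet(nn.Module):
    """Small CNN sketch in the spirit of an embedded crop-disease classifier."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = TinyCropNet(num_classes=4)
params = sum(p.numel() for p in model.parameters())
print(f"parameters: {params}, int8 model size ~ {params / 1024:.0f} kB")
logits = model(torch.randn(1, 3, 64, 64))   # one assumed 64x64 RGB crop image
print("output shape:", tuple(logits.shape))
```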
Citations: 2
A simple and highly sensitive Force Sensor based on modified plastic optical fibers and cantilevers
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881346
N. Cennamo, F. Arcadio, V. Marletta, D. D. Prete, B. Andò, L. Zeni, Mario Cesaro, Alfredo De Matteis
In this work, a force sensor based on plastic optical fibers (POFs) is realized and tested. More specifically, the optical sensor system is composed of a cantilever formed by a spring-steel beam and a modified POF glued to the underside of the cantilever. One end of the cantilever is fixed to the optical desk using a purpose-built support, while a weight is applied to the other end to realize the applied force. The POF is modified with notches in order to improve the optical performance of the force sensor. An analysis is carried out to characterize the sensor system. In particular, it exhibits linear behaviour from 50 mN to 300 mN with a sensitivity of 53.43 mV/N and a resolution of 0.01 N.
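Using the reported sensitivity (53.43 mV/N) and linear range (50-300 mN), the force readout reduces to a simple linear conversion from the measured voltage change. The sketch below shows that conversion and the voltage step corresponding to the 0.01 N resolution; treating the sensor as a purely linear model to invert is our simplifying assumption.

```python
SENSITIVITY_V_PER_N = 53.43e-3   # 53.43 mV/N, from the reported characterization
FORCE_RANGE_N = (0.050, 0.300)   # linear range: 50 mN to 300 mN

def force_from_voltage(delta_v_volts: float) -> float:
    """Convert the measured output-voltage change into force, assuming linearity."""
    force = delta_v_volts / SENSITIVITY_V_PER_N
    if not (FORCE_RANGE_N[0] <= force <= FORCE_RANGE_N[1]):
        raise ValueError(f"{force:.3f} N is outside the characterized linear range")
    return force

# A 0.01 N resolution corresponds to a voltage step of roughly 0.53 mV.
print(f"voltage step per 0.01 N: {0.01 * SENSITIVITY_V_PER_N * 1e3:.2f} mV")
print(f"force at 10.7 mV: {force_from_voltage(10.7e-3) * 1e3:.1f} mN")
```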
Citations: 1
Assessment of UWB RTLS for Proximity Hazards Management in Construction Sites
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881376
P. Bellagente
According to statistics, the construction market is one of the most dangerous economic sectors all around the world. Construction workers are continuously exposed to moving materials and machinery, often in constrained spaces, raising the risk of collision accidents. In this paper, an Ultra-Wide Band (UWB) Real Time Location System (RTLS) designed in a previous work for proximity hazard management in construction sites is described in detail. An extensive measurement campaign has been carried out outdoors, using a square grid (15 m x 15 m) with a 1 m step, for a total of 225 positions. For each position, 1000 location measures have been collected and the two-dimensional localization resolution has been estimated. Results show that the location resolution remains similar across the considered area and that it could be manually verified by construction workers. In optimal conditions, the resolution ranges between 0.01 m and 0.05 m. The results highlight a major error contribution due to radio-frequency reflection interference, which makes it impossible to measure positions under some conditions.
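The paper does not detail its position solver, but a generic way to obtain a 2-D fix from UWB anchor ranges is linearized least-squares trilateration. The sketch below runs it for 1000 noisy range sets at one position on an assumed 15 m x 15 m anchor layout and reports the per-axis spread, mimicking the repeated-measure evaluation; the anchor geometry and noise level are assumptions, not the deployed system.

```python
import numpy as np

def locate_2d(anchors: np.ndarray, ranges: np.ndarray) -> np.ndarray:
    """Linearized least-squares 2-D position fix from UWB anchor distances."""
    x0, y0 = anchors[0]
    d0 = ranges[0]
    # Subtracting the first anchor's circle equation gives a linear system A p = b.
    A = 2 * (anchors[1:] - anchors[0])
    b = (d0**2 - ranges[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - x0**2 - y0**2)
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return p

anchors = np.array([[0.0, 0.0], [15.0, 0.0], [15.0, 15.0], [0.0, 15.0]])  # assumed corners
true_pos = np.array([6.0, 9.0])
rng = np.random.default_rng(1)

fixes = []
for _ in range(1000):   # 1000 repeated measures per position, as in the campaign
    noisy = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.02, 4)
    fixes.append(locate_2d(anchors, noisy))

spread = np.std(np.array(fixes), axis=0)
print("per-axis std of the 1000 fixes [m]:", np.round(spread, 3))
```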
Citations: 0
Development of a neural network to identify plastics using Fluorescence Lifetime Imaging Microscopy
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881372
Georgekutty Jose Maniyattu, Eldho Geegy, N. Leiter, Maximilian Wohlschlager, M. Versen, C. Laforsch
Plastics have become a major part of humans' daily life. Uncontrolled usage of plastic leads to accumulation in the environment, posing a threat to flora and fauna if it is not recycled correctly. The correct sorting and recycling of the most commonly available plastic types and the identification of plastic in the environment are important. Fluorescence lifetime imaging microscopy shows a high potential for sorting and identifying plastic types. A data-based and an image-based classification are investigated using the Python programming language to demonstrate the potential of a neural network based on fluorescence lifetime images to identify plastic types. The results indicate that the data-based classification has a higher identification accuracy than the image-based classification.
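A hedged sketch of what a "data-based" classification on lifetime features might look like: a small scikit-learn MLP trained on synthetic lifetime/intensity feature vectors for four assumed plastic classes. The feature definitions, class labels and data are invented for illustration and are not the authors' dataset or network.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical setup: each sample is a per-pixel decay feature vector
# (e.g. mean lifetime, amplitude-weighted lifetime, intensity) -- assumed features.
rng = np.random.default_rng(42)
plastics = ["PE", "PP", "PET", "PS"]   # assumed classes, not from the paper
X = np.vstack([rng.normal(loc=[2.0 + i, 1.0 + 0.5 * i, 0.3 * i],
                          scale=0.2, size=(200, 3))
               for i in range(len(plastics))])
y = np.repeat(np.arange(len(plastics)), 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy on synthetic lifetime features: {clf.score(X_test, y_test):.2f}")
```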
Citations: 2
Feasibility of Measuring Shot Group Using LoRa Technology and YOLO V5
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881356
Sanghyun Park, Dongheon Lee, Jisoo Choi, Dohyeon Ko, Minji Lee, Zack Murphy, Nowf Binhowidy, Anthony H. Smith
Shooting is a common activity all over the world for both military and recreational purposes. Shooting performance can be measured from the size of the shot group (grouping). Shooters have traditionally calculated the size of the group by measuring the distance between bullet impacts by hand. This paper aims to create a reasonable automated shot-group size measuring module that can be used from several kilometers away. It includes an IoT (Internet of Things) system and a mobile application that users can access. LoRa technology is adopted for covering long distances, and YOLO V5 is implemented to detect bullet impacts. Mathematical methods for calculating accurate distances and the engineering techniques that meet these needs are described, with experiments on various parameters and conditions. In indoor tests, the proposed module measured the shot group with a mean accuracy of 91.8%. In future work, outdoor tests, which were affected by environmental control variables, are expected to give better accuracy.
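Once bullet impacts have been detected (e.g. as YOLO V5 bounding-box centres), a shot-group size is commonly reported as the extreme spread, i.e. the largest centre-to-centre distance. A minimal sketch with hypothetical detections and an assumed pixel-to-metre scale (the paper does not specify its exact sizing metric):

```python
from itertools import combinations
import math

def shot_group_size(impact_centers, metres_per_pixel):
    """Extreme spread: largest centre-to-centre distance between bullet impacts."""
    pixel_spread = max(math.dist(a, b) for a, b in combinations(impact_centers, 2))
    return pixel_spread * metres_per_pixel

# Hypothetical impact centres as a detector might return them (cx, cy in pixels).
detections = [(412.0, 305.5), (436.2, 298.1), (425.7, 331.0), (448.9, 317.4)]
print(f"group size ~ {shot_group_size(detections, metres_per_pixel=0.0008) * 100:.1f} cm")
```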
Citations: 0
Live Migration of a 3D Flash LiDAR System between two Independent Data Processing Systems with Redundant Design
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881255
Philipp Stelzer, Sebastian Reicher, Georg Macher, C. Steger, Raphael Schermann
Self-driving and self-flying vehicles have the ability to drive or fly, respectively, without the intervention of an operator. For this purpose, these vehicles need sensors for environment perception and safety-critical data processing systems to process the raw data obtained from these sensors. However, if such safety-critical systems fail, this can have fatal consequences and can affect human lives and/or the environment, especially in the case of highly automated vehicles. A total failure of these systems is one of the worst scenarios in an automated vehicle. Therefore, such safety-critical systems are often designed redundantly in order to prevent a total failure of environment perception. In order to ensure that the operation of the vehicle can continue safely, however, the live migration from one system to the other must be carried out with as little downtime as possible. In our publication, we present a concept for a 3D Flash LiDAR live migration between two independent data processing systems with a redundant design. This concept provides a solution for highly automated vehicles to remain fail-operational in case one of the redundant data processing systems fails. The results obtained from the implemented concept, without specifically addressing performance, are also provided to demonstrate feasibility.
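A toy sketch of the fail-operational idea, not the paper's architecture: an active data-processing node handles the LiDAR frame stream, and on failure the stream is migrated to the redundant node with minimal downtime. The node names and failure-injection logic are assumptions for illustration only.

```python
import time

class ProcessingNode:
    """Toy stand-in for one of the two redundant LiDAR data-processing systems."""
    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def process(self, frame: int) -> str:
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name} processed frame {frame}"

def stream_with_failover(primary, standby, frames=6, fail_at=3):
    active = primary
    for frame in range(frames):
        if frame == fail_at:
            primary.alive = False     # inject a failure of the active system
        try:
            print(active.process(frame))
        except RuntimeError:
            t0 = time.perf_counter()
            active = standby          # migrate the point-cloud stream to the redundant node
            print(f"failover took {(time.perf_counter() - t0) * 1e3:.3f} ms;",
                  active.process(frame))

stream_with_failover(ProcessingNode("DPS-A"), ProcessingNode("DPS-B"))
```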
Citations: 0
Evaluation of the quality of LiDAR data in the varying ambient light
Pub Date : 2022-08-01 DOI: 10.1109/SAS54819.2022.9881373
Bhaskar Anand, Harshal Verma, A. Thakur, Parvez Alam, P. Rajalakshmi
Light detection and ranging (LiDAR) is a widely used sensor for intelligent transportation systems (ITS). It precisely determines the depth of the objects present around a vehicle. In this paper, the effect of ambient light on the quality of acquired LiDAR data is presented. The data was captured at different times of day under varied light conditions: in the early morning and evening there is partial light, at night there is no light, whereas at midday there are ideal light conditions. The data was acquired at these four times. On the acquired point cloud data, segmentation of an object, a person in the experiment, was performed. The number of object points and the point density were observed to examine whether light affects the quality of LiDAR data. The results of the experiments performed suggest that the variation of light has little or no effect on the quality of LiDAR data.
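As an illustration of the evaluation metrics named above (object point count and point density), the following sketch crops the points of a segmented object out of a synthetic point cloud and reports both quantities; the cloud, the object bounding box and all dimensions are assumptions, not the paper's data.

```python
import numpy as np

def object_point_stats(points: np.ndarray, bbox_min, bbox_max):
    """Count points inside an axis-aligned box and report their volumetric density."""
    bbox_min, bbox_max = np.asarray(bbox_min), np.asarray(bbox_max)
    inside = np.all((points >= bbox_min) & (points <= bbox_max), axis=1)
    volume = np.prod(bbox_max - bbox_min)
    return int(inside.sum()), inside.sum() / volume

# Synthetic point cloud with a "person" standing about 5 m in front of the sensor.
rng = np.random.default_rng(0)
background = rng.uniform([-10, -10, 0], [10, 10, 3], size=(5000, 3))
person = rng.normal([5.0, 0.0, 0.9], [0.2, 0.2, 0.5], size=(400, 3))
cloud = np.vstack([background, person])

count, density = object_point_stats(cloud, [4.4, -0.6, 0.0], [5.6, 0.6, 1.8])
print(f"object points: {count}, density ~ {density:.0f} points/m^3")
```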
Citations: 5