
Latest Articles from the Journal of Real-Time Image Processing

GST-YOLO: a lightweight visual detection algorithm for underwater garbage detection
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-16 | DOI: 10.1007/s11554-024-01494-w
Longyi Jiang, Fanghua Liu, Junwei Lv, Binghua Liu, Chen Wang
Citations: 0
GPU-based key-frame selection of pulmonary ultrasound images to detect COVID-19
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-15 | DOI: 10.1007/s11554-024-01493-x
E. Torti, M. Gazzoni, E. Marenzi, F. Leporati
Citations: 0
Efficiently adapting large pre-trained models for real-time violence recognition in smart city surveillance
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-15 | DOI: 10.1007/s11554-024-01486-w
Xiaohui Ren, Wenze Fan, Yinghao Wang
Citations: 0
LightYOLO-S: a lightweight algorithm for detecting small targets
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-14 | DOI: 10.1007/s11554-024-01485-x
Liu Zihan, Wu xu, Linyun Zhang, Panlin Yu
Citations: 0
YOLOv8n-LSLW: a lightweight method for real-time detection of wild fishing behavior
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-12 | DOI: 10.1007/s11554-024-01492-y
Peng-cheng Yan, Wenchang Wang, Guo-dong Li, Yuting Zhao, JingBao Wang, Ziming Wen
Citations: 0
Realistic real-time processing of anime portraits based on generative adversarial networks
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-06 | DOI: 10.1007/s11554-024-01481-1
Gaofeng Zhu, Zhiguo Qu, Le Sun, Yuming Liu, Jianfeng Yang
Citations: 0
A hardware-friendly logarithmic quantization method for CNNs and FPGA implementation
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science, Artificial Intelligence) | Pub Date: 2024-06-06 | DOI: 10.1007/s11554-024-01484-y
Tao Jiang, Ligang Xing, Jinming Yu, Junchao Qian
Citations: 0
ARF-YOLOv8: a novel real-time object detection model for UAV-captured images detection
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science) | Pub Date: 2024-06-04 | DOI: 10.1007/s11554-024-01483-z
YaLin Zeng, DongJin Guo, WeiKai He, Tian Zhang, ZhongTao Liu

Object detection in Unmanned Aerial Vehicle (UAV) photography presents several difficulties, including small object sizes, densely distributed objects, and the diverse perspectives from which objects are captured. To tackle these challenges, we propose a real-time algorithm named adjusting overall receptive field enhancement YOLOv8 (ARF-YOLOv8) for object detection in UAV-captured images. Our approach begins with a comprehensive restructuring of the YOLOv8 network architecture, with the primary objectives of mitigating the loss of shallow-level information and establishing an optimal model receptive field. We then design a bi-branch fusion attention module based on Coordinate Attention, which is seamlessly integrated into the detection network; it combines features processed by the Coordinate Attention module with shallow-level features, facilitating the extraction of multi-level feature information. Furthermore, recognizing the influence of target size on bounding-box loss, we refine the CIoU bounding-box loss function employed in YOLOv8. Extensive experiments on the VisDrone2019 dataset provide empirical evidence of the superior performance of ARF-YOLOv8: compared to YOLOv8, our method achieves a noteworthy 6.86% increase in mAP (0.5:0.95) while maintaining a similar detection speed. The code is available at https://github.com/sbzeng/ARF-YOLOv8-for-uav/tree/main.
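The abstract says the authors refine the CIoU bounding-box loss used in YOLOv8; the exact refinement is not given here, but the baseline CIoU they start from can be sketched in pure Python. The function names and the `(x1, y1, x2, y2)` box format below are assumptions for illustration, not the authors' code:

```python
import math

def ciou(box1, box2, eps=1e-9):
    """Complete-IoU between two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection area
    ix1, iy1 = max(box1[0], box2[0]), max(box1[1], box2[1])
    ix2, iy2 = min(box1[2], box2[2]), min(box1[3], box2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    w1, h1 = box1[2] - box1[0], box1[3] - box1[1]
    w2, h2 = box2[2] - box2[0], box2[3] - box2[1]
    union = w1 * h1 + w2 * h2 - inter
    iou = inter / (union + eps)

    # Squared distance between box centers
    rho2 = (((box1[0] + box1[2]) - (box2[0] + box2[2])) ** 2
            + ((box1[1] + box1[3]) - (box2[1] + box2[3])) ** 2) / 4.0

    # Squared diagonal of the smallest enclosing box
    cw = max(box1[2], box2[2]) - min(box1[0], box2[0])
    ch = max(box1[3], box2[3]) - min(box1[1], box2[1])
    c2 = cw ** 2 + ch ** 2 + eps

    # Aspect-ratio consistency term
    v = (4 / math.pi ** 2) * (math.atan(w2 / (h2 + eps))
                              - math.atan(w1 / (h1 + eps))) ** 2
    alpha = v / (1 - iou + v + eps)

    return iou - rho2 / c2 - alpha * v

def ciou_loss(box1, box2):
    """Loss form used in training: 0 for a perfect match, larger when worse."""
    return 1.0 - ciou(box1, box2)
```

For identical boxes the loss is essentially 0; for disjoint boxes the center-distance penalty pushes the loss above 1, which is what makes CIoU informative even when plain IoU is zero.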

Citations: 0
Fcd-cnn: FPGA-based CU depth decision for HEVC intra encoder using CNN
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science) | Pub Date: 2024-06-02 | DOI: 10.1007/s11554-024-01487-9
Hossein Dehnavi, Mohammad Dehnavi, Sajad Haghzad Klidbary

Video compression for storage and transmission has always been a focal point for researchers in image processing; their efforts aim to reduce the data volume required to represent video while maintaining its quality. HEVC is one of the most efficient video compression standards and has received special attention due to the increasing demand for high-resolution video. The main step in video compression is dividing the coding unit (CU) blocks into smaller blocks of uniform texture. In traditional methods, the Discrete Cosine Transform (DCT) is applied, followed by rate-distortion optimization (RDO) to decide on the partitioning. This paper presents a novel convolutional neural network (CNN) and its hardware implementation as an alternative to DCT, aimed at speeding up partitioning and reducing the hardware resources required. The proposed hardware uses an efficient, lightweight CNN to partition CUs with low hardware resources in real-time applications. The CNN is trained for different Quantization Parameters (QPs) and block sizes to prevent overfitting. Furthermore, the system's input size is fixed at 16×16, and other input sizes are scaled to this dimension. Loop unrolling, data reuse, and resource sharing are applied in the hardware implementation to save resources. The hardware architecture is fixed for all block sizes and QPs; only the coefficients of the CNN are changed. In terms of compression quality, the proposed hardware achieves a 4.42% BD-BR and a -0.19 dB BD-PSNR compared to HM16.5. The proposed system can process a 64×64 CU at 150 MHz in 4914 clock cycles. The hardware resources utilized include 13,141 LUTs, 15,885 flip-flops, 51 BRAMs, and 74 DSPs.
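The stated throughput (one 64×64 CU in 4914 cycles at 150 MHz) can be sanity-checked with a little arithmetic. The assumption below that a 1920×1080 frame is padded to a whole number of 64×64 CUs is mine, not the paper's:

```python
import math

CLOCK_HZ = 150e6          # clock frequency stated in the abstract
CYCLES_PER_CU = 4914      # cycles to process one 64x64 CU

cu_per_second = CLOCK_HZ / CYCLES_PER_CU   # roughly 30,500 CUs per second

# Hypothetical workload: a 1920x1080 frame padded up to whole 64x64 CUs
cus_per_frame = math.ceil(1920 / 64) * math.ceil(1080 / 64)  # 30 * 17 = 510
fps = cu_per_second / cus_per_frame

print(f"{cu_per_second:.0f} CUs/s -> {fps:.1f} fps at 1080p")
```

At roughly 60 fps for 1080p under these assumptions, the figures are consistent with the paper's real-time claim.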

Citations: 0
IoT-based real-time object detection system for crop protection and agriculture field security
IF 3.0 | CAS Tier 4 (Computer Science) | JCR Q2 (Computer Science) | Pub Date: 2024-06-02 | DOI: 10.1007/s11554-024-01488-8
Priya Singh, Rajalakshmi Krishnamurthi

In farming, clashes between humans and animals create significant challenges, threatening crop yields and human well-being and depleting resources. Farmers use traditional methods such as electric fences to protect their fields, but these can harm animals essential to a balanced ecosystem. To address these fundamental challenges, our research presents a fresh solution harnessing the power of the Internet of Things (IoT) and deep learning. In this paper, we develop a monitoring system that combines an ESP32-CAM and a Raspberry Pi with an optimised YOLOv8 model. Our objective is to detect and classify objects such as animals or humans roaming around the field and to notify farmers in real time via Firebase Cloud Messaging (FCM). Ultrasonic sensors first detect any intruder movement, triggering the camera to capture an image. The captured image is then transmitted to a server equipped with the object detection model, after which the processed image is forwarded to FCM, which manages the image and sends a notification to the farmer through an Android application. Our optimised YOLOv8 model attains an exceptional precision of 97%, recall of 96%, and accuracy of 96%. Once this optimal outcome was achieved, we integrated the model with our IoT infrastructure. This study emphasizes the effectiveness of low-power IoT devices, LoRa devices, and object detection techniques in delivering strong security solutions to the agriculture industry. These technologies hold the potential to significantly decrease crop damage while enhancing safety within the agricultural field, and they contribute towards wildlife conservation.
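The trigger, capture, detect, and notify flow described above can be sketched as a small event handler. Everything here (the `Detection` type, the label set, the confidence threshold, and the function names) is an illustrative assumption, not the authors' actual code:

```python
from dataclasses import dataclass

# Hypothetical label set and threshold; the paper's classes are not listed here.
ALERT_CLASSES = {"person", "cow", "elephant", "wild_boar"}
CONF_THRESHOLD = 0.5

@dataclass
class Detection:
    label: str
    confidence: float

def should_alert(detections):
    """Return the sorted labels that warrant a push notification to the farmer."""
    return sorted({d.label for d in detections
                   if d.label in ALERT_CLASSES and d.confidence >= CONF_THRESHOLD})

def handle_trigger(capture, detect, notify):
    """Ultrasonic trigger fired: capture a frame, run detection, notify if needed."""
    frame = capture()               # e.g. the ESP32-CAM snapshot
    hits = should_alert(detect(frame))  # e.g. the YOLOv8 model on the server
    if hits:
        notify({"event": "intrusion", "labels": hits})  # e.g. an FCM push
    return hits
```

Wired to stubs, `handle_trigger(lambda: "frame", lambda f: [Detection("cow", 0.9)], print)` reports a cow intrusion; in the real system the three callables would be the camera capture, the optimised YOLOv8 detector, and the FCM notification sender.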

Citations: 0