YOLOv8-ESW: An Improved Oncomelania hupensis Detection Model

IF 1.5 · JCR Q3 (Computer Science, Software Engineering) · CAS Region 4 (Computer Science) · Concurrency and Computation: Practice & Experience · Pub Date: 2025-01-21 · DOI: 10.1002/cpe.8359
Changcheng Wei, Juanyan Fang, Zhu Xu, Jinbao Meng, Zenglu Ye, Yipeng Wang, Tumennast Erdenebold
Citations: 0

Abstract

Traditional Oncomelania hupensis detection relies on human-eye observation, whose efficiency is limited by visual fatigue and individual cognitive capacity. To address this, an improved YOLOv8 O. hupensis detection algorithm, YOLOv8-ESW (expectation–maximization attention [EMA], Small Target Detection Layer, and Wise-IoU), is proposed. The original dataset is augmented using the OpenCV library: salt-and-pepper and Gaussian noise are added to imitate the image degradation caused by motion jitter, and affine, translation, flip, and other transformations imitate images captured by the camera from different angles in an instant, yielding 6000 images after augmentation. To address the insufficient feature fusion caused by lightweight convolution, we present the EMA module (E), which incorporates a coordinate attention mechanism and convolutional layers, and introduce a specialized layer for small target detection (S). This design significantly improves the network's ability to combine information from shallow and deep layers, focusing better on small and occluded O. hupensis targets. To tackle the quality imbalance among O. hupensis samples, we employ the Wise-IoU (WIoU) loss function (W), whose gradient gain allocation strategy improves convergence speed and regression accuracy. The YOLOv8-ESW model, with 16.8 million parameters and 98.4 GFLOPs of computation, achieved a mAP of 92.74% on the O. hupensis dataset, a 4.09% improvement over the baseline model. Comprehensive testing confirms the enhanced network's efficacy: it significantly raises O. hupensis detection precision, reduces both missed and false detections, and meets real-time processing requirements. Compared with current mainstream models, it has clear advantages in detection accuracy and offers reference value for subsequent research on practical detection.
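The augmentations described above (salt-and-pepper noise, Gaussian noise, translation, flip) can be sketched as below. This is a minimal illustration, not the paper's actual pipeline: the function names and parameter values are assumptions, and the paper uses the OpenCV library for the geometric transforms, whereas this sketch stays NumPy-only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def salt_and_pepper(img, amount=0.02):
    """Flip a random fraction of pixels to pure black or white."""
    out = img.copy()
    mask = rng.random(img.shape[:2])
    out[mask < amount / 2] = 0        # pepper
    out[mask > 1 - amount / 2] = 255  # salt
    return out

def gaussian_noise(img, sigma=15.0):
    """Add zero-mean Gaussian noise, clipping back to the uint8 range."""
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def translate(img, dx=10, dy=5):
    """Shift the image by (dx, dy) pixels, zero-padding vacated regions."""
    out = np.zeros_like(img)
    h, w = img.shape[:2]
    out[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    return out

def horizontal_flip(img):
    """Mirror the image left-to-right."""
    return img[:, ::-1]
```

Applying each transform to every source image (plus the originals) is one plausible way to reach the reported 6000-image augmented set, though the paper does not spell out the exact multiplication factor.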
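For reference, the Wise-IoU loss mentioned above attaches a distance-based attention term to the plain IoU loss. The sketch below follows the published WIoU v1 formulation (L = R_WIoU · L_IoU, where R_WIoU penalizes centre distance normalized by the smallest enclosing box); it is an assumption that YOLOv8-ESW uses this variant rather than a later one with dynamic gradient gain, and the function name and box format are illustrative only.

```python
import numpy as np

def wise_iou_v1(pred, target, eps=1e-7):
    """WIoU v1 loss for axis-aligned boxes in (x1, y1, x2, y2) format.

    Sketch only: in an autograd framework the enclosing-box term
    Wg^2 + Hg^2 is detached from the gradient; NumPy has no autograd,
    so that distinction does not arise here.
    """
    # Intersection-over-union
    ix1 = np.maximum(pred[..., 0], target[..., 0])
    iy1 = np.maximum(pred[..., 1], target[..., 1])
    ix2 = np.minimum(pred[..., 2], target[..., 2])
    iy2 = np.minimum(pred[..., 3], target[..., 3])
    inter = np.clip(ix2 - ix1, 0, None) * np.clip(iy2 - iy1, 0, None)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_t = (target[..., 2] - target[..., 0]) * (target[..., 3] - target[..., 1])
    l_iou = 1.0 - inter / (area_p + area_t - inter + eps)

    # Smallest enclosing box dimensions
    wg = np.maximum(pred[..., 2], target[..., 2]) - np.minimum(pred[..., 0], target[..., 0])
    hg = np.maximum(pred[..., 3], target[..., 3]) - np.minimum(pred[..., 1], target[..., 1])

    # Centre-distance attention term
    cxp = (pred[..., 0] + pred[..., 2]) / 2
    cyp = (pred[..., 1] + pred[..., 3]) / 2
    cxt = (target[..., 0] + target[..., 2]) / 2
    cyt = (target[..., 1] + target[..., 3]) / 2
    r_wiou = np.exp(((cxp - cxt) ** 2 + (cyp - cyt) ** 2) / (wg ** 2 + hg ** 2 + eps))

    return r_wiou * l_iou
```

A perfectly matched box yields zero loss, while a shifted box is penalized more heavily than by plain 1 − IoU, which is the mechanism the abstract credits for faster convergence on unevenly distributed samples.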

Source Journal
Concurrency and Computation-Practice & Experience (Engineering & Technology — Computer Science: Theory & Methods)
CiteScore: 5.00
Self-citation rate: 10.00%
Articles per year: 664
Review time: 9.6 months
Journal description: Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality, original research papers, and authoritative research review papers, in the overlapping fields of: Parallel and distributed computing; High-performance computing; Computational and data science; Artificial intelligence and machine learning; Big data applications, algorithms, and systems; Network science; Ontologies and semantics; Security and privacy; Cloud/edge/fog computing; Green computing; and Quantum computing.