{"title":"YOLOv8-ESW: An Improved Oncomelania hupensis Detection Model","authors":"Changcheng Wei, Juanyan Fang, Zhu Xu, Jinbao Meng, Zenglu Ye, Yipeng Wang, Tumennast Erdenebold","doi":"10.1002/cpe.8359","DOIUrl":null,"url":null,"abstract":"<div>\n \n <p>Traditional <i>Oncomelania hupensis</i> detection relies on human eye observation, which results in reduced efficiency due to easy fatigue of the human eye and limited individual cognition, an improved YOLOv8 <i>O. hupensis</i> detection algorithm, YOLOv8-ESW(expectation–maximization attention [EMA], Small Target Detection Layer, and Wise-IoU), is proposed. The original dataset is augmented using the OpenCV library. To imitate image blur caused by motion jitter, salt and pepper, and Gaussian noise were added to the dataset; to imitate images from different angles captured by the camera in an instant, affine, translation, flip, and other transformations were performed on the original data, resulting in a total of 6000 images after data enhancement. Considering the insufficient feature fusion problem caused by lightweight convolution, We present the expectation–EMA module (E), which innovatively incorporates a coordinate attention mechanism and convolutional layers to introduce a specialized layer for small target detection (S). This design significantly improves the network's ability to synergize information from both superficial and deeper layers, better focusing on small target <i>O. hupensis</i> and occluded <i>O. hupensis</i>. To tackle the challenge of quality imbalance among <i>O. hupensis</i> samples, we employ the Wise-IoU (WIoU) loss function (W). This approach uses a gradient gain distribution strategy and improves the model convergence speed and regression accuracy. The YOLOv8-ESW model, with 16.8 million parameters and requiring 98.4 GFLOPS for computations, achieved a mAP of 92.74% when tested on the <i>O. hupensis</i> dataset, marking a 4.09% improvement over the baseline model. Comprehensive testing confirms the enhanced network's efficacy, significantly elevating <i>O. hupensis</i> detection precision, minimizing both missed and false detections, and fulfilling real-time processing criteria. Compared with the current mainstream models, it has certain advantages in detection accuracy and has reference value for subsequent research in actual detection.</p>\n </div>","PeriodicalId":55214,"journal":{"name":"Concurrency and Computation-Practice & Experience","volume":"37 3","pages":""},"PeriodicalIF":1.5000,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Concurrency and Computation-Practice & Experience","FirstCategoryId":"94","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1002/cpe.8359","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"COMPUTER SCIENCE, SOFTWARE ENGINEERING","Score":null,"Total":0}
Abstract
Traditional Oncomelania hupensis detection relies on visual inspection by human observers, whose efficiency is limited by eye fatigue and individual cognition. To address this, an improved YOLOv8-based O. hupensis detection algorithm, YOLOv8-ESW (expectation–maximization attention [EMA], Small Target Detection Layer, and Wise-IoU), is proposed. The original dataset is augmented using the OpenCV library. To imitate the image blur caused by motion jitter, salt-and-pepper and Gaussian noise were added to the dataset; to imitate images captured by the camera from different angles in an instant, affine, translation, flip, and other transformations were applied to the original data, yielding a total of 6000 images after augmentation. To address the insufficient feature fusion caused by lightweight convolution, we present the EMA module (E), which innovatively incorporates a coordinate attention mechanism and convolutional layers, and we introduce a specialized small-target detection layer (S). This design significantly improves the network's ability to combine information from shallow and deep layers, so that it focuses better on small and occluded O. hupensis targets. To tackle the quality imbalance among O. hupensis samples, we employ the Wise-IoU (WIoU) loss function (W), which uses a gradient gain distribution strategy to improve convergence speed and regression accuracy. The YOLOv8-ESW model, with 16.8 million parameters and a computational cost of 98.4 GFLOPs, achieved a mAP of 92.74% on the O. hupensis dataset, a 4.09% improvement over the baseline model. Comprehensive testing confirms the enhanced network's efficacy: it significantly raises O. hupensis detection precision, reduces both missed and false detections, and meets real-time processing requirements. Compared with current mainstream models, it offers advantages in detection accuracy and provides a useful reference for subsequent research on practical detection.
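
The paper does not publish its augmentation code; the following is a minimal sketch, assuming OpenCV and NumPy, of the kinds of transforms the abstract describes (salt-and-pepper noise, Gaussian noise, and random affine/translation/flip). Function names and parameter values are illustrative, not the authors' implementation.

```python
# Illustrative augmentation sketch: salt-and-pepper noise, Gaussian noise,
# and simple geometric transforms with OpenCV + NumPy.
import cv2
import numpy as np

def add_salt_pepper(img, amount=0.02):
    """Randomly set a fraction of pixels to black (pepper) or white (salt)."""
    out = img.copy()
    mask = np.random.rand(*img.shape[:2])
    out[mask < amount / 2] = 0
    out[mask > 1 - amount / 2] = 255
    return out

def add_gaussian_noise(img, sigma=15):
    """Add zero-mean Gaussian noise and clip back to the valid pixel range."""
    noise = np.random.normal(0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

def random_affine(img, max_shift=0.1, max_angle=15):
    """Random rotation + translation, imitating camera jitter / viewpoint change."""
    h, w = img.shape[:2]
    angle = np.random.uniform(-max_angle, max_angle)
    tx = np.random.uniform(-max_shift, max_shift) * w
    ty = np.random.uniform(-max_shift, max_shift) * h
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    m[:, 2] += (tx, ty)
    return cv2.warpAffine(img, m, (w, h), borderMode=cv2.BORDER_REFLECT)

def horizontal_flip(img):
    """Mirror the image left-right."""
    return cv2.flip(img, 1)
```

For a detection dataset, the geometric transforms (affine, translation, flip) must also be applied to the bounding-box labels; that bookkeeping is omitted here for brevity.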
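The abstract states that the EMA module builds on a coordinate attention mechanism, but does not give its exact layer layout. As a point of reference, below is a minimal sketch of a generic coordinate-attention block (in the style of Hou et al., 2021) in PyTorch; it is an assumption-laden illustration of the attention family involved, not the article's EMA design.

```python
# Generic coordinate-attention-style block (illustrative, NOT the paper's EMA module).
import torch
import torch.nn as nn

class CoordAttention(nn.Module):
    def __init__(self, channels, reduction=32):
        super().__init__()
        mid = max(8, channels // reduction)
        self.pool_h = nn.AdaptiveAvgPool2d((None, 1))  # average over width -> (N, C, H, 1)
        self.pool_w = nn.AdaptiveAvgPool2d((1, None))  # average over height -> (N, C, 1, W)
        self.conv1 = nn.Conv2d(channels, mid, kernel_size=1)
        self.bn1 = nn.BatchNorm2d(mid)
        self.act = nn.Hardswish()
        self.conv_h = nn.Conv2d(mid, channels, kernel_size=1)
        self.conv_w = nn.Conv2d(mid, channels, kernel_size=1)

    def forward(self, x):
        n, c, h, w = x.shape
        x_h = self.pool_h(x)                      # (n, c, h, 1)
        x_w = self.pool_w(x).permute(0, 1, 3, 2)  # (n, c, w, 1)
        y = self.act(self.bn1(self.conv1(torch.cat([x_h, x_w], dim=2))))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                      # per-row attention
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))  # per-column attention
        return x * a_h * a_w
```

Encoding positions along the height and width axes separately is what lets such a block retain the spatial cues that small, densely packed targets depend on, which is consistent with the paper's motivation for the E and S components.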
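The Wise-IoU loss with a gradient gain (focusing) strategy is described in the public WIoU paper (Tong et al., 2023). The sketch below follows that published formulation, assuming (x1, y1, x2, y2) box tensors and a running mean of the IoU loss maintained by the caller; it may differ in detail from the authors' exact training code.

```python
# Hedged sketch of a WIoU-v3-style loss with dynamic non-monotonic focusing.
import torch

def wise_iou_v3(pred, target, iou_mean, alpha=1.9, delta=3.0, eps=1e-7):
    """pred, target: (N, 4) boxes as (x1, y1, x2, y2); iou_mean: running mean of the IoU loss."""
    # IoU term
    lt = torch.max(pred[:, :2], target[:, :2])
    rb = torch.min(pred[:, 2:], target[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)
    l_iou = 1 - iou

    # Distance attention R_WIoU over the smallest enclosing box (detached denominator)
    c_wh = torch.max(pred[:, 2:], target[:, 2:]) - torch.min(pred[:, :2], target[:, :2])
    cw, ch = c_wh[:, 0], c_wh[:, 1]
    px, py = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
    tx, ty = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
    dist2 = (px - tx) ** 2 + (py - ty) ** 2
    r_wiou = torch.exp(dist2 / (cw ** 2 + ch ** 2 + eps).detach())

    # Non-monotonic focusing: gradient gain from the outlier degree beta
    beta = l_iou.detach() / (iou_mean + eps)
    gain = beta / (delta * alpha ** (beta - delta))
    return (gain * r_wiou * l_iou).mean()
```

The gain is largest for anchors of moderate quality and is damped for both very good and very poor (outlier) samples, which is the mechanism the abstract refers to as the gradient gain distribution strategy; `iou_mean` is typically updated as an exponential moving average of `l_iou` during training.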
Journal overview:
Concurrency and Computation: Practice and Experience (CCPE) publishes high-quality original research papers and authoritative research review papers in the overlapping fields of:
Parallel and distributed computing;
High-performance computing;
Computational and data science;
Artificial intelligence and machine learning;
Big data applications, algorithms, and systems;
Network science;
Ontologies and semantics;
Security and privacy;
Cloud/edge/fog computing;
Green computing; and
Quantum computing.