Deep Learning-based Overlapping-Pig Separation by Balancing Accuracy and Execution Time
Hanhaesol Lee, J. Sa, Yongwha Chung, Daihee Park, Hakjae Kim
Computer Science Research Notes, 2019. DOI: 10.24132/csrn.2019.2901.1.3
Citations: 1
Abstract
The crowded environment of a pig farm is highly vulnerable to the spread of infectious diseases such as foot-and-mouth disease, and studies have been conducted to automatically analyze the behavior of pigs in a crowded pig farm through a video surveillance system with a top-view camera. Although overlapping pigs must be correctly separated in order to track each individual pig, extracting the boundaries of each pig quickly and accurately is challenging because of complicated occlusion patterns such as X-shape and T-shape. In this study, we propose a fast and accurate method to separate overlapping pigs by exploiting the advantage of You Only Look Once (YOLO), one of the fast deep learning-based object detectors, while overcoming its disadvantage as an axis-aligned bounding-box detector through test-time data augmentation with rotation. Experimental results on the occlusion patterns between overlapping pigs show that the proposed method provides better accuracy and faster processing speed than Mask R-CNN, one of the state-of-the-art deep learning-based segmentation techniques (the improvement over Mask R-CNN was about a factor of 11 in terms of the combined accuracy/processing-speed performance metric).
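The core idea summarized above, compensating for YOLO's axis-aligned bounding boxes with test-time rotation augmentation, can be illustrated with a minimal sketch. The detector stub, function names, and rotation angles below are illustrative assumptions and not the authors' implementation: the frame is rotated by several angles, an axis-aligned detector runs on each rotated copy, and the detected boxes are mapped back into the original frame, where they become rotated boxes that can fit diagonally oriented pigs more tightly.

```python
# Minimal sketch of test-time rotation augmentation for an axis-aligned detector.
# detect() is a stand-in stub, not the YOLO model used in the paper.
import numpy as np
import cv2


def detect(image):
    """Stand-in for a YOLO-style detector returning axis-aligned boxes
    as (x_min, y_min, x_max, y_max) in pixel coordinates."""
    h, w = image.shape[:2]
    return [(0.25 * w, 0.25 * h, 0.75 * w, 0.75 * h)]  # dummy detection


def detect_with_rotation_tta(image, angles=(0, 45, 90, 135)):
    """Run the detector on rotated copies of the image and map each box
    back to the original frame as a set of four rotated corners."""
    h, w = image.shape[:2]
    center = (w / 2.0, h / 2.0)
    results = []
    for angle in angles:
        # Rotate the image about its center (borders filled with black).
        m = cv2.getRotationMatrix2D(center, angle, 1.0)
        rotated = cv2.warpAffine(image, m, (w, h))
        # The inverse transform maps detections back to original coordinates.
        m_inv = cv2.invertAffineTransform(m)
        for x1, y1, x2, y2 in detect(rotated):
            corners = np.array([[x1, y1], [x2, y1], [x2, y2], [x1, y2]],
                               dtype=np.float32)
            ones = np.ones((4, 1), dtype=np.float32)
            mapped = (m_inv @ np.hstack([corners, ones]).T).T  # 4x2 rotated box
            results.append((angle, mapped))
    return results


if __name__ == "__main__":
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    for angle, box in detect_with_rotation_tta(frame):
        print(angle, box.round(1))
```

In a full pipeline, the per-angle boxes would then be combined (for example, by keeping the tightest-fitting rotated box per pig) so that each animal in an overlapping pair receives its own boundary.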