Multi-pig Pose Estimation Using DeepLabCut

F. Farahnakian, J. Heikkonen, S. Björkman
{"title":"Multi-pig Pose Estimation Using DeepLabCut","authors":"F. Farahnakian, J. Heikkonen, S. Björkman","doi":"10.1109/ICICIP53388.2021.9642168","DOIUrl":null,"url":null,"abstract":"Pose estimation towards providing the assessments of animal health and welfare monitoring has strongly gained interest in the last few years. However, it is a challenging computer vision problem as the frequent interaction causes occlusions the association of detected key-points to the correct individuals. Deep Learning (DL) offers major advances in the field of pose estimation. In this paper, we investigated the possibility of using a famous open-source DL-based toolbox, DeepLabCut [1], for the specific pig pose estimation task. We predicted the body part of each individual pig from only input images or video sequences directly with no adaptations to the application setting. We used a real dataset which contains 2000 annotated images with 24,842 individually annotated pigs from 17 different locations and light conditions. The experimental results demonstrated that we can achieve a small root mean square error between the manual and predicted labels (10.1) when detecting pigs in environments previously seen by a DL model during training. To evaluate the robustness of the trained model, it is also tested on environments and lighting conditions unseen in the training set, where it achieves 12.0 root mean square error.","PeriodicalId":435799,"journal":{"name":"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)","volume":"147 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 11th International Conference on Intelligent Control and Information Processing (ICICIP)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICICIP53388.2021.9642168","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 2

Abstract

Pose estimation for animal health and welfare monitoring has gained considerable interest in the last few years. However, it is a challenging computer vision problem: frequent interaction between animals causes occlusions and makes it difficult to associate detected key-points with the correct individuals. Deep Learning (DL) offers major advances in the field of pose estimation. In this paper, we investigated the possibility of using a well-known open-source DL-based toolbox, DeepLabCut [1], for the specific task of pig pose estimation. We predicted the body parts of each individual pig directly from input images or video sequences, with no adaptations to the application setting. We used a real dataset containing 2,000 annotated images with 24,842 individually annotated pigs from 17 different locations and lighting conditions. The experimental results demonstrate that a small root mean square error between the manual and predicted labels (10.1) can be achieved when detecting pigs in environments previously seen by the DL model during training. To evaluate the robustness of the trained model, it is also tested on environments and lighting conditions unseen in the training set, where it achieves a root mean square error of 12.0.
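
The pipeline described in the abstract follows the standard DeepLabCut workflow (project creation, frame labelling, training, evaluation, inference). The sketch below is a minimal illustration of that workflow using the DeepLabCut Python API; the project name, file paths, iteration count, and the use of the multi-animal project variant are assumptions for illustration and do not reproduce the authors' exact configuration (2,000 annotated images from 17 locations).

    # Minimal DeepLabCut workflow sketch (illustrative; paths and settings are hypothetical).
    import deeplabcut

    # Create a multi-animal project; returns the path to the project's config.yaml.
    config_path = deeplabcut.create_new_project(
        "pig-pose",              # hypothetical project name
        "experimenter",          # hypothetical scorer name
        ["videos/pen01.mp4"],    # hypothetical source video
        multianimal=True,
    )

    # Extract frames and label body parts manually (interactive GUI step).
    deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans")
    deeplabcut.label_frames(config_path)

    # Build the multi-animal training dataset and train the network.
    deeplabcut.create_multianimaltraining_dataset(config_path)
    deeplabcut.train_network(config_path, maxiters=100000)  # iteration count is illustrative

    # Evaluation reports the train/test RMSE (in pixels) between manual and predicted
    # labels -- the metric quoted in the abstract (10.1 seen / 12.0 unseen environments).
    deeplabcut.evaluate_network(config_path, plotting=True)

    # Run inference on new video, e.g. pens and lighting conditions unseen during training.
    deeplabcut.analyze_videos(config_path, ["videos/unseen_pen.mp4"])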