Deep learning-based sow posture classifier using colour and depth images

Smart Agricultural Technology · Q1, Agricultural Engineering · IF 6.3 · Pub Date: 2024-09-12 · DOI: 10.1016/j.atech.2024.100563
{"title":"Deep learning-based sow posture classifier using colour and depth images","authors":"","doi":"10.1016/j.atech.2024.100563","DOIUrl":null,"url":null,"abstract":"<div><p>Assessing sow posture is essential for understanding their physiological condition and helping farmers improve herd productivity. Deep learning-based techniques have proven effective for image interpretation, offering a better alternative to traditional image processing methods. However, distinguishing transitional postures such as sitting and kneeling is challenging with only conventional top-view RGB images. This study aimed to develop and compare different deep learning-based sow posture classifiers using different architectures and image types. Using Kinect v.2 cameras, RGB and depth images were collected from 9 sows housed individually in farrowing crates. A total of 26,362 images were manually labelled by posture: “standing”, “kneeling”, “sitting”, “ventral recumbency” and “lateral recumbency”. Different deep learning algorithms were developed to detect sow postures from three types of images: colour (RGB), depth (depth image transformed into greyscale), and fused (colour-depth composite images). Results indicated that the ResNet-18 model presented the best results and that including depth information improved the performance of all models tested. Depth and fused models achieved higher accuracies than the models using only RGB images. The best model used only depth images as input and presented an accuracy of 98.3 %. The mean precision and recall values were 97.04 % and 97.32 %, respectively (F1-score = 97.2 %). The study shows improved posture classification using depth images. Future research can improve model accuracy and speed by expanding the database, exploring fused methods and computational models, considering different breeds of sows, and incorporating more postures. These models can be integrated into computer vision systems to automatically characterise sow behavior.</p></div>","PeriodicalId":74813,"journal":{"name":"Smart agricultural technology","volume":null,"pages":null},"PeriodicalIF":6.3000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2772375524001680/pdfft?md5=129580fb02aec821e671700081475761&pid=1-s2.0-S2772375524001680-main.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Smart agricultural technology","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2772375524001680","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"AGRICULTURAL ENGINEERING","Score":null,"Total":0}
引用次数: 0

Abstract

Assessing sow posture is essential for understanding their physiological condition and helping farmers improve herd productivity. Deep learning-based techniques have proven effective for image interpretation, offering a better alternative to traditional image processing methods. However, distinguishing transitional postures such as sitting and kneeling is challenging with conventional top-view RGB images alone. This study aimed to develop and compare deep learning-based sow posture classifiers using different architectures and image types. Using Kinect v2 cameras, RGB and depth images were collected from 9 sows housed individually in farrowing crates. A total of 26,362 images were manually labelled by posture: “standing”, “kneeling”, “sitting”, “ventral recumbency” and “lateral recumbency”. Deep learning models were developed to detect sow postures from three types of images: colour (RGB), depth (depth images transformed into greyscale), and fused (colour-depth composite images). Results indicated that the ResNet-18 model performed best and that including depth information improved the performance of all models tested: depth and fused models achieved higher accuracies than models using only RGB images. The best model used only depth images as input and achieved an accuracy of 98.3%. The mean precision and recall values were 97.04% and 97.32%, respectively (F1-score = 97.2%). The study demonstrates improved posture classification using depth images. Future research can improve model accuracy and speed by expanding the database, exploring other fusion methods and computational models, considering different sow breeds, and incorporating more postures. These models can be integrated into computer vision systems to automatically characterise sow behaviour.
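The abstract does not specify how the colour-depth composites were constructed. A minimal sketch, assuming the depth map is first normalised to an 8-bit greyscale channel and then stacked onto the registered RGB image (one common fusion approach, not necessarily the authors' method; function names are illustrative):

```python
import numpy as np

def depth_to_greyscale(depth_mm: np.ndarray) -> np.ndarray:
    """Normalise a raw depth map (e.g. Kinect v2 output in millimetres)
    to an 8-bit greyscale image, ignoring invalid zero-depth pixels."""
    valid = depth_mm[depth_mm > 0]
    d_min, d_max = float(valid.min()), float(valid.max())
    scaled = (depth_mm.astype(np.float32) - d_min) / max(d_max - d_min, 1e-6)
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

def fuse_colour_depth(rgb: np.ndarray, depth_mm: np.ndarray) -> np.ndarray:
    """Build a 4-channel colour-depth composite (H, W, 4) by stacking
    the greyscale depth channel onto a pixel-registered RGB image."""
    grey = depth_to_greyscale(depth_mm)
    return np.dstack([rgb, grey])
```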
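The best-performing architecture was ResNet-18, but the abstract gives no implementation details. A hedged sketch in PyTorch/torchvision (an assumption, not the paper's code) shows how a standard ResNet-18 could be adapted to each input type by resizing the first convolution and replacing the classification head with the five posture classes:

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

POSTURES = ["standing", "kneeling", "sitting",
            "ventral recumbency", "lateral recumbency"]

def build_posture_classifier(in_channels: int = 1) -> nn.Module:
    """ResNet-18 with the first conv adapted to the input type
    (1 = depth greyscale, 3 = RGB, 4 = colour-depth composite)
    and a 5-way posture classification head."""
    model = resnet18(weights=None)
    if in_channels != 3:
        # Replace the stock 3-channel stem with one matching the input.
        model.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7,
                                stride=2, padding=3, bias=False)
    model.fc = nn.Linear(model.fc.in_features, len(POSTURES))
    return model

# Smoke test: a fake batch of two depth images through the depth-only variant.
model = build_posture_classifier(in_channels=1)
logits = model(torch.randn(2, 1, 224, 224))
assert logits.shape == (2, len(POSTURES))
```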
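The reported F1-score is consistent with the stated precision and recall, since F1 is their harmonic mean; a quick arithmetic check:

```python
precision, recall = 97.04, 97.32
f1 = 2 * precision * recall / (precision + recall)
print(f"F1 = {f1:.2f}%")  # F1 = 97.18%, matching the reported 97.2%
```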
