Use of vision and sound to classify feller-buncher operational state

International Journal of Forest Engineering · Impact Factor 2.1 · JCR Q2 (Forestry) · CAS Zone 3, Agricultural & Forest Sciences · Published 2022-02-20 · DOI: 10.1080/14942119.2022.2037927
Pengmin Pan, T. McDonald, M. Smidt, Rafael Dias
Citations: 2

Abstract

Productivity measures in logging involve simultaneous recognition and classification of event occurrence and timing, and of the volume of stems being handled. In full-tree felling systems these measurements are difficult to implement autonomously because of the unfavorable working environment and the abundance of confounding extraneous events. This paper proposes a vision method that uses a low-cost camera to recognize feller-buncher operational events, including tree cutting and piling. A fine K-nearest neighbors (fKNN) algorithm served as the final classifier, taking as inputs both audio and video features derived from short video segments. The classifier's calibration accuracy exceeded 94%. The trained model was tested on videos recorded under various conditions; overall accuracy rates for short segments were greater than 89%. Human- and algorithm-derived event detection rates, event durations, and inter-event timing were compared using continuously recorded videos taken during feller operation, and results from the fKNN model and manual observation were similar. A statistical comparison using the Kolmogorov–Smirnov test on the measured parameters' distributions (manual versus automated event duration and inter-event timing) showed no significant differences; the lowest P-value among all Kolmogorov–Smirnov tests was 0.12. The results indicate the feasibility and potential of the method for automatic time studies of drive-to-tree feller bunchers.
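The pipeline described in the abstract can be sketched in miniature. This is not the authors' code: "fine KNN" is taken here to mean K-nearest neighbors with K = 1 (the sense in which MATLAB's Classification Learner uses the term), the four-dimensional audio/video feature vectors are synthetic stand-ins for whatever features the paper actually extracted, and the two event-duration samples fed to the Kolmogorov–Smirnov test are likewise invented for illustration.

```python
import numpy as np
from scipy.stats import ks_2samp
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Hypothetical per-segment features (e.g. audio energy, spectral
# centroid, two motion statistics) -- entirely synthetic here.
n = 300
X_cut = rng.normal(loc=[0.8, 0.6, 0.7, 0.5], scale=0.15, size=(n, 4))
X_pile = rng.normal(loc=[0.4, 0.3, 0.9, 0.8], scale=0.15, size=(n, 4))
X = np.vstack([X_cut, X_pile])
y = np.array(["cutting"] * n + ["piling"] * n)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# "Fine" KNN: a single nearest neighbor decides the class.
clf = KNeighborsClassifier(n_neighbors=1)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# Two-sample KS test, as used in the paper to compare manual vs.
# automated distributions of event duration / inter-event timing.
manual_dur = rng.gamma(shape=4.0, scale=3.0, size=80)
automated_dur = rng.gamma(shape=4.0, scale=3.1, size=80)
stat, p = ks_2samp(manual_dur, automated_dur)
# A p-value above 0.05 means the test finds no significant
# difference between the two distributions.
```

The KS test is a natural choice here because it compares whole empirical distributions rather than just means, so it can detect if the automated timings are systematically skewed even when average durations agree.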
Source journal metrics: CiteScore 3.70 · Self-citation rate 21.10% · Annual articles: 33