Mohammad Mehdi Rastikerdar, Jin Huang, Hui Guan, Deepak Ganesan
{"title":"原位微调物联网摄像头捕获器中的野生动物模型,实现高效适应","authors":"Mohammad Mehdi Rastikerdar, Jin Huang, Hui Guan, Deepak Ganesan","doi":"arxiv-2409.07796","DOIUrl":null,"url":null,"abstract":"Wildlife monitoring via camera traps has become an essential tool in ecology,\nbut the deployment of machine learning models for on-device animal\nclassification faces significant challenges due to domain shifts and resource\nconstraints. This paper introduces WildFit, a novel approach that reconciles\nthe conflicting goals of achieving high domain generalization performance and\nensuring efficient inference for camera trap applications. WildFit leverages\ncontinuous background-aware model fine-tuning to deploy ML models tailored to\nthe current location and time window, allowing it to maintain robust\nclassification accuracy in the new environment without requiring significant\ncomputational resources. This is achieved by background-aware data synthesis,\nwhich generates training images representing the new domain by blending\nbackground images with animal images from the source domain. We further enhance\nfine-tuning effectiveness through background drift detection and class\ndistribution drift detection, which optimize the quality of synthesized data\nand improve generalization performance. Our extensive evaluation across\nmultiple camera trap datasets demonstrates that WildFit achieves significant\nimprovements in classification accuracy and computational efficiency compared\nto traditional approaches.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":"12 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"In-Situ Fine-Tuning of Wildlife Models in IoT-Enabled Camera Traps for Efficient Adaptation\",\"authors\":\"Mohammad Mehdi Rastikerdar, Jin Huang, Hui Guan, Deepak Ganesan\",\"doi\":\"arxiv-2409.07796\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Wildlife monitoring via camera traps has become an essential tool in ecology,\\nbut the deployment of machine learning models for on-device animal\\nclassification faces significant challenges due to domain shifts and resource\\nconstraints. This paper introduces WildFit, a novel approach that reconciles\\nthe conflicting goals of achieving high domain generalization performance and\\nensuring efficient inference for camera trap applications. WildFit leverages\\ncontinuous background-aware model fine-tuning to deploy ML models tailored to\\nthe current location and time window, allowing it to maintain robust\\nclassification accuracy in the new environment without requiring significant\\ncomputational resources. This is achieved by background-aware data synthesis,\\nwhich generates training images representing the new domain by blending\\nbackground images with animal images from the source domain. We further enhance\\nfine-tuning effectiveness through background drift detection and class\\ndistribution drift detection, which optimize the quality of synthesized data\\nand improve generalization performance. 
Our extensive evaluation across\\nmultiple camera trap datasets demonstrates that WildFit achieves significant\\nimprovements in classification accuracy and computational efficiency compared\\nto traditional approaches.\",\"PeriodicalId\":501479,\"journal\":{\"name\":\"arXiv - CS - Artificial Intelligence\",\"volume\":\"12 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-12\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.07796\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.07796","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
In-Situ Fine-Tuning of Wildlife Models in IoT-Enabled Camera Traps for Efficient Adaptation

Abstract
Wildlife monitoring via camera traps has become an essential tool in ecology, but the deployment of machine learning models for on-device animal classification faces significant challenges due to domain shifts and resource constraints. This paper introduces WildFit, a novel approach that reconciles the conflicting goals of achieving high domain generalization performance and ensuring efficient inference for camera trap applications. WildFit leverages continuous background-aware model fine-tuning to deploy ML models tailored to the current location and time window, allowing it to maintain robust classification accuracy in the new environment without requiring significant computational resources. This is achieved by background-aware data synthesis, which generates training images representing the new domain by blending background images with animal images from the source domain. We further enhance fine-tuning effectiveness through background drift detection and class distribution drift detection, which optimize the quality of synthesized data and improve generalization performance. Our extensive evaluation across multiple camera trap datasets demonstrates that WildFit achieves significant improvements in classification accuracy and computational efficiency compared to traditional approaches.
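The abstract outlines two mechanisms, background-aware data synthesis (blending animal images from the source domain onto newly captured backgrounds) and drift detection, without implementation detail. The sketch below is a minimal illustration of those two ideas under assumed conventions, not the authors' implementation: alpha-mask compositing stands in for the synthesis step, and a KL-divergence check on prediction histograms stands in for class-distribution drift detection. All function names, the threshold value, and the NumPy-only pipeline are hypothetical.

```python
"""Illustrative sketch (not WildFit's actual code) of two ideas from the abstract:
(1) background-aware data synthesis by pasting a source-domain animal crop onto a
new background image, and (2) class-distribution drift detection via a divergence
check between recent and reference prediction histograms."""
import numpy as np


def blend_animal_onto_background(background, animal_crop, animal_mask, top_left):
    """Alpha-blend an animal crop (h, w, 3) onto a background image (H, W, 3).

    `animal_mask` is a float array in [0, 1] marking animal pixels;
    `top_left` is the (row, col) position where the crop is pasted.
    """
    synthetic = background.astype(np.float32)
    h, w = animal_crop.shape[:2]
    r, c = top_left
    region = synthetic[r:r + h, c:c + w]          # view into the output image
    alpha = animal_mask[..., None]                # broadcast mask over RGB channels
    region[:] = alpha * animal_crop + (1.0 - alpha) * region
    return synthetic.astype(np.uint8)


def class_distribution_drift(recent_counts, reference_counts, threshold=0.1):
    """Flag drift when the KL divergence between the recent and reference class
    histograms exceeds a (hypothetical) threshold."""
    eps = 1e-6  # smoothing to avoid log(0) and division by zero
    p = (recent_counts + eps) / (recent_counts.sum() + eps * len(recent_counts))
    q = (reference_counts + eps) / (reference_counts.sum() + eps * len(reference_counts))
    kl = float(np.sum(p * np.log(p / q)))
    return kl > threshold, kl


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.integers(0, 255, size=(240, 320, 3), dtype=np.uint8)
    animal = rng.integers(0, 255, size=(64, 64, 3), dtype=np.uint8)
    mask = np.zeros((64, 64), dtype=np.float32)
    mask[8:56, 8:56] = 1.0  # crude rectangular "animal" mask for demonstration
    synthetic = blend_animal_onto_background(background, animal, mask, (100, 120))
    print("synthetic image shape:", synthetic.shape)

    drifted, kl = class_distribution_drift(
        recent_counts=np.array([40.0, 5.0, 55.0]),
        reference_counts=np.array([30.0, 30.0, 40.0]),
    )
    print(f"KL divergence = {kl:.3f}, drift detected: {drifted}")
```

In a real deployment the animal mask would come from a segmentation step and the reference histogram from the source-domain label distribution; here both are stubbed with synthetic data so the sketch runs standalone.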