In-Situ Fine-Tuning of Wildlife Models in IoT-Enabled Camera Traps for Efficient Adaptation
Mohammad Mehdi Rastikerdar, Jin Huang, Hui Guan, Deepak Ganesan
arXiv - CS - Artificial Intelligence · 2024-09-12 · arXiv:2409.07796
Citations: 0
Abstract
Wildlife monitoring via camera traps has become an essential tool in ecology,
but the deployment of machine learning models for on-device animal
classification faces significant challenges due to domain shifts and resource
constraints. This paper introduces WildFit, a novel approach that reconciles
the conflicting goals of achieving high domain generalization performance and
ensuring efficient inference for camera trap applications. WildFit leverages
continuous background-aware model fine-tuning to deploy ML models tailored to
the current location and time window, allowing it to maintain robust
classification accuracy in the new environment without requiring significant
computational resources. This is achieved by background-aware data synthesis,
which generates training images representing the new domain by blending
background images with animal images from the source domain. We further enhance
fine-tuning effectiveness through background drift detection and class
distribution drift detection, which optimize the quality of synthesized data
and improve generalization performance. Our extensive evaluation across
multiple camera trap datasets demonstrates that WildFit achieves significant
improvements in classification accuracy and computational efficiency compared
to traditional approaches.
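The core of the background-aware data synthesis described above is compositing animal imagery from the source domain onto background frames captured at the new deployment site. The paper does not publish its exact blending routine, so the following is a minimal sketch of the idea using simple alpha compositing with NumPy; the function name, mask format, and placement argument are assumptions for illustration.

```python
import numpy as np

def blend_animal_onto_background(background, animal, mask, top_left):
    """Composite a masked animal crop onto a background image.

    background: HxWx3 uint8 image from the target (new) domain.
    animal:     hxwx3 uint8 crop of an animal from the source domain.
    mask:       hxw float array in [0, 1]; 1 = animal pixel, 0 = transparent.
    top_left:   (row, col) position of the crop in the background.
    """
    out = background.astype(np.float32).copy()
    h, w = animal.shape[:2]
    y, x = top_left
    # Per-pixel alpha blend: mask selects animal pixels, the rest
    # keeps the new-domain background.
    alpha = mask[..., None].astype(np.float32)
    out[y:y + h, x:x + w] = (
        alpha * animal.astype(np.float32)
        + (1.0 - alpha) * out[y:y + h, x:x + w]
    )
    return np.clip(out, 0, 255).astype(np.uint8)
```

A synthesized training set would repeat this over many background frames and animal crops, varying position and scale, so the fine-tuned model sees the target location's scenery paired with known class labels.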
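The abstract also relies on background drift detection to decide when the deployed backgrounds no longer match those used for synthesis. The paper's detector is not specified here, so as a rough sketch one could compare pixel-intensity distributions between a reference set and recent frames; the total-variation distance used below is an illustrative choice, not the authors' method.

```python
import numpy as np

def background_drift_score(ref_images, new_images, bins=32):
    """Compare intensity histograms of two image sets.

    Returns a score in [0, 1]: 0 means identical distributions,
    values near 1 indicate strong background drift.
    """
    def density_hist(images):
        pixels = np.stack(images).ravel()
        hist, _ = np.histogram(pixels, bins=bins, range=(0, 256), density=True)
        return hist

    ref = density_hist(ref_images)
    new = density_hist(new_images)
    bin_width = 256.0 / bins
    # Total-variation distance between the two normalized histograms.
    return 0.5 * np.sum(np.abs(ref - new)) * bin_width
```

In a deployment loop, a score above a tuned threshold would trigger fresh background capture and another round of data synthesis and fine-tuning.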