Testing temporal transferability of remote sensing models for large area monitoring

Steven K. Filippelli, Karen Schleeweis, Mark D. Nelson, Patrick A. Fekety, Jody C. Vogeler

Science of Remote Sensing, Volume 9, Article 100119 (2024). DOI: 10.1016/j.srs.2024.100119
Abstract
Applying remote sensing models outside the temporal range of their training data, referred to as temporal model transfer, has become common practice for large area monitoring projects that extrapolate models for hindcasting or forecasting to time periods lacking reference data. However, the development of appropriate validation methods for temporal transfer has lagged behind its rapid adoption. Temporal transfer assumes temporal consistency in both the remote sensing and reference data and in their relationship to each other; violating this assumption can lead to biased pixel-level predictions and small area estimators, compromising the operational validity of large area monitoring products. Few studies using temporal transfer have evaluated its effects on model accuracy at the pixel/plot level, and the propensity for biased small area estimators and trends resulting from temporal transfer remains unexplored. We present a framework for evaluating temporal transferability that combines temporal cross-validation with a multiscale map assessment to help identify where and when biased model predictions could scale to small area estimates and affect predicted trends.
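To illustrate the temporal cross-validation component of this framework, the sketch below shows one way to hold out each year in turn so that a model is always evaluated on years absent from its training data. This is a minimal illustration, not the authors' implementation: the arrays X, y, and years are hypothetical placeholders for predictor features, reference percent cover, and plot measurement years, and scikit-learn's LeaveOneGroupOut is used only to form year-based folds.

```python
# Minimal sketch of leave-one-year-out temporal cross-validation.
# X, y, and years are hypothetical placeholders for predictor features,
# reference percent canopy cover, and the measurement year of each plot.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 600
years = rng.choice(np.arange(2010, 2017), size=n)   # plot measurement years (placeholder)
X = rng.normal(size=(n, 8))                          # spectral index predictors (placeholder)
y = rng.uniform(0, 100, size=n)                      # percent canopy cover (placeholder)

logo = LeaveOneGroupOut()
for train_idx, test_idx in logo.split(X, y, groups=years):
    held_out_year = years[test_idx][0]
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    rmse = np.sqrt(mean_squared_error(y[test_idx], pred))
    print(f"held-out year {held_out_year}: RMSE = {rmse:.1f}% cover")
```

Averaging the per-year RMSE values yields a single temporal cross-validation error; the multiscale map assessment described in the abstract would additionally aggregate held-out predictions to small areas to check whether pixel-level bias carries through to small area estimates.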
This validation framework is demonstrated in a case study of annual percent tree canopy cover mapping in Michigan. We tested and compared the temporal transferability of random forest models of canopy cover trained on 2010–2016 systematic dot-grid photo-interpretations at Forest Inventory and Analysis plots, with Landsat spectral indices fitted by the LandTrendr temporal segmentation algorithm serving as the primary predictor variables. The temporal cross-validation error (mean RMSE = 13.9% cover) was higher than that of the common validation approach of pooling testing data from all time periods (RMSE = 13.6% cover) and lower than that of models trained and tested within the same year (mean RMSE = 14.2% cover). However, the bias of model predictions and small area estimators for individual years was higher for temporal transfer models than for models applied within the same year as their training data. We also evaluated how training models on different temporal subsets, with and without LandTrendr fitting, affected Michigan's predicted annual mean cover for 1984–2020. Mean cover from LandTrendr-based models followed expected and consistent trends and showed smaller differences between models trained on different temporal subsets (max difference = 5.8% cover), whereas mean cover from models using Landsat indices without LandTrendr fitting showed high interannual variation and larger differences between temporal models (max difference = 11.2% cover). The results of this case study demonstrate that evaluation of temporal transferability is necessary for establishing the operational validity of large area monitoring products, even when using time series algorithms that improve temporal consistency.
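As a companion to the bias results above, the sketch below outlines one way a per-year bias comparison could be computed, contrasting a temporally transferred model with a model trained within the same year. It is a hypothetical illustration using synthetic placeholder arrays, not the study's data or code.

```python
# Sketch of a per-year bias check: compare the mean signed error of a
# temporally transferred model against a model trained within the same year.
# All arrays are hypothetical placeholders aligned to the same validation plots.
import numpy as np

rng = np.random.default_rng(1)
years = np.repeat(np.arange(2010, 2017), 50)               # validation plot years
y_obs = rng.uniform(0, 100, size=years.size)               # observed percent cover
pred_transfer = y_obs + rng.normal(2.0, 14.0, years.size)  # temporally transferred predictions
pred_within = y_obs + rng.normal(0.0, 14.0, years.size)    # within-year model predictions

def per_year_bias(obs, pred, yrs):
    """Mean signed error (predicted minus observed) for each year."""
    return {int(yr): float(np.mean(pred[yrs == yr] - obs[yrs == yr])) for yr in np.unique(yrs)}

bias_transfer = per_year_bias(y_obs, pred_transfer, years)
bias_within = per_year_bias(y_obs, pred_within, years)
for yr in sorted(bias_transfer):
    print(f"{yr}: transfer bias = {bias_transfer[yr]:+.1f}% cover, "
          f"within-year bias = {bias_within[yr]:+.1f}% cover")
```

A systematic positive or negative mean error in a given year would indicate that temporally transferred predictions, once aggregated, could bias small area estimates and predicted trends for that year.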