Surveys face difficult choices in managing cost-error trade-offs, and stopping rules have been proposed as a method for managing them. A stopping rule limits effort on a selected subset of cases in order to reduce costs with minimal harm to quality. Previously proposed stopping rules have focused on quality, with the implicit assumption that all cases have the same cost. This assumption is unlikely to hold, particularly when some cases require more effort and, therefore, higher costs than others. We propose a new rule that incorporates both predicted costs and predicted quality. This rule was tested experimentally against a control rule that stops cases expected to be difficult to recruit. The experiment was conducted during the 2020 data collection of the Health and Retirement Study (HRS). We tested both Bayesian and non-Bayesian (maximum likelihood, or ML) versions of the rule; the Bayesian version of the prediction models uses historical data to establish prior information. The Bayesian version led to higher-quality data for roughly the same cost, while the ML version led to small reductions in quality with larger reductions in cost relative to the control rule.
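The abstract does not specify the rule's functional form. The sketch below is a minimal, hypothetical illustration of how a cost-quality stopping rule of this general kind might operate, assuming each active case carries a model-predicted response propensity (as a quality proxy) and a predicted remaining fieldwork cost; the class, function, field names, and threshold value are all invented for illustration and are not the authors' implementation.

```python
# Hypothetical sketch of a cost-quality stopping rule; not the paper's
# actual rule. Assumes per-case predictions are available from some
# prediction model (Bayesian or ML-fitted).

from dataclasses import dataclass


@dataclass
class Case:
    case_id: str
    pred_response_prob: float   # predicted propensity to respond (0-1)
    pred_remaining_cost: float  # predicted cost of continued effort


def should_stop(case: Case, cost_per_complete_threshold: float) -> bool:
    """Stop effort when the expected cost per additional completed
    interview (predicted cost / predicted propensity) exceeds a
    threshold chosen by the survey manager."""
    expected_cost_per_complete = (
        case.pred_remaining_cost / max(case.pred_response_prob, 1e-6)
    )
    return expected_cost_per_complete > cost_per_complete_threshold


# Example: a case predicted to cost 120 more with a 10% response
# propensity implies 1200 per expected complete, so it is stopped
# under a threshold of 500; a case at 90 cost and 45% propensity
# (200 per complete) continues to receive effort.
active = [Case("A1", 0.10, 120.0), Case("B2", 0.45, 90.0)]
stopped = [c.case_id for c in active if should_stop(c, 500.0)]
print(stopped)  # ['A1']
```

Under this framing, a quality-only rule would rank cases by predicted propensity alone, implicitly treating all cases as equally costly; incorporating predicted cost changes which cases are stopped when propensity and cost are not aligned.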