{"title":"在最小l1范数中引入观测值的协方差,需要吗?","authors":"S. Suraci, L. Oliveira, I. Klein, R. Goldschmidt","doi":"10.1515/jogs-2022-0135","DOIUrl":null,"url":null,"abstract":"Abstract The most common approaches for assigning weights to observations in minimum L1-norm (ML1) is to introduce weights of p or p \\sqrt{p} , p being the weights vector of observations given by the inverse of variances. Hence, they do not take covariances into consideration, being appropriated only to independent observations. To work around this limitation, methods for decorrelation/unit-weight reduction of observations originally developed in the context of least squares (LS) have been applied for ML1, although this adaptation still requires further investigations. In this article, we presented a deeper investigation into the mentioned adaptation and proposed the new ML1 expressions that introduce weights for both independent and correlated observations; and compared their results with the usual approaches that ignore covariances. Experiments were performed in a leveling network geometry by means of Monte Carlo simulations considering three different scenarios: independent observations, observations with “weak” correlations, and observations with “strong” correlations. The main conclusions are: (1) in ML1 adjustment of independent observations, adaptation of LS techniques introduces weights proportional to p \\sqrt{p} (but not p); (2) proposed formulations allowed covariances to influence parameters estimation, which is unfeasible with usual ML1 formulations; (3) introducing weighs of p provided the closest ML1 parameters estimation compared to that of LS in networks free of outliers; (4) weighs of p \\sqrt{p} provided the highest successful rate in outlier identification with ML1. Conclusions (3) and (4) imply that introducing covariances in ML1 may adversely affect its performance in these two practical applications.","PeriodicalId":44569,"journal":{"name":"Journal of Geodetic Science","volume":"131 1","pages":"65 - 74"},"PeriodicalIF":0.9000,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Introducing covariances of observations in the minimum L1-norm, is it needed?\",\"authors\":\"S. Suraci, L. Oliveira, I. Klein, R. Goldschmidt\",\"doi\":\"10.1515/jogs-2022-0135\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Abstract The most common approaches for assigning weights to observations in minimum L1-norm (ML1) is to introduce weights of p or p \\\\sqrt{p} , p being the weights vector of observations given by the inverse of variances. Hence, they do not take covariances into consideration, being appropriated only to independent observations. To work around this limitation, methods for decorrelation/unit-weight reduction of observations originally developed in the context of least squares (LS) have been applied for ML1, although this adaptation still requires further investigations. In this article, we presented a deeper investigation into the mentioned adaptation and proposed the new ML1 expressions that introduce weights for both independent and correlated observations; and compared their results with the usual approaches that ignore covariances. Experiments were performed in a leveling network geometry by means of Monte Carlo simulations considering three different scenarios: independent observations, observations with “weak” correlations, and observations with “strong” correlations. 
The main conclusions are: (1) in ML1 adjustment of independent observations, adaptation of LS techniques introduces weights proportional to p \\\\sqrt{p} (but not p); (2) proposed formulations allowed covariances to influence parameters estimation, which is unfeasible with usual ML1 formulations; (3) introducing weighs of p provided the closest ML1 parameters estimation compared to that of LS in networks free of outliers; (4) weighs of p \\\\sqrt{p} provided the highest successful rate in outlier identification with ML1. Conclusions (3) and (4) imply that introducing covariances in ML1 may adversely affect its performance in these two practical applications.\",\"PeriodicalId\":44569,\"journal\":{\"name\":\"Journal of Geodetic Science\",\"volume\":\"131 1\",\"pages\":\"65 - 74\"},\"PeriodicalIF\":0.9000,\"publicationDate\":\"2022-01-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Geodetic Science\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1515/jogs-2022-0135\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q4\",\"JCRName\":\"REMOTE SENSING\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Geodetic Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1515/jogs-2022-0135","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q4","JCRName":"REMOTE SENSING","Score":null,"Total":0}
Citations: 1
Abstract: The most common approaches for assigning weights to observations in the minimum L1-norm (ML1) adjustment are to introduce weights of p or \sqrt{p}, p being the weight vector of the observations, given by the inverse of their variances. Hence, these approaches do not take covariances into consideration and are appropriate only for independent observations. To work around this limitation, methods for decorrelation/unit-weight reduction of observations originally developed in the context of least squares (LS) have been applied to ML1, although this adaptation still requires further investigation. In this article, we presented a deeper investigation into this adaptation, proposed new ML1 expressions that introduce weights for both independent and correlated observations, and compared their results with the usual approaches that ignore covariances. Experiments were performed in a leveling network geometry by means of Monte Carlo simulations considering three different scenarios: independent observations, observations with “weak” correlations, and observations with “strong” correlations. The main conclusions are: (1) in the ML1 adjustment of independent observations, the adaptation of LS techniques introduces weights proportional to \sqrt{p} (but not p); (2) the proposed formulations allowed covariances to influence parameter estimation, which is unfeasible with the usual ML1 formulations; (3) introducing weights of p provided the ML1 parameter estimates closest to those of LS in networks free of outliers; (4) weights of \sqrt{p} provided the highest success rate in outlier identification with ML1. Conclusions (3) and (4) imply that introducing covariances in ML1 may adversely affect its performance in these two practical applications.
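To make the two weighting schemes concrete, here is a minimal sketch (not the authors' code) of how a weighted ML1 adjustment can be solved as a linear program, together with the Cholesky-based decorrelation/unit-weight reduction borrowed from LS for correlated observations. The function names (ml1_adjust, decorrelate), the toy leveling network, and all numeric values are illustrative assumptions, not taken from the paper.

```python
# A minimal sketch, assuming a linear functional model with residuals v = A x - l.
# Weighted ML1: minimize sum_i wt_i * |v_i|, recast as a linear program.
import numpy as np
from scipy.optimize import linprog


def ml1_adjust(A, l, weights=None):
    """Minimum L1-norm adjustment: minimize sum_i weights_i * |(A x - l)_i|.

    The residual vector is split into non-negative parts u and w
    (A x - l = u - w, u >= 0, w >= 0), turning the problem into an LP.
    """
    m, n = A.shape
    wt = np.ones(m) if weights is None else np.asarray(weights, dtype=float)

    c = np.concatenate([np.zeros(n), wt, wt])        # cost: sum wt_i * (u_i + w_i)
    A_eq = np.hstack([A, -np.eye(m), np.eye(m)])     # A x - u + w = l
    bounds = [(None, None)] * n + [(0, None)] * (2 * m)

    res = linprog(c, A_eq=A_eq, b_eq=l, bounds=bounds, method="highs")
    return res.x[:n]                                 # estimated parameters


def decorrelate(A, l, Sigma):
    """LS-style unit-weight reduction: with Sigma = L L^T (Cholesky),
    the transformed system (L^{-1} A, L^{-1} l) has identity covariance."""
    L = np.linalg.cholesky(Sigma)
    return np.linalg.solve(L, A), np.linalg.solve(L, l)


if __name__ == "__main__":
    # Toy leveling-style network (illustrative values only): two unknown
    # heights tied to a fixed benchmark through three height differences.
    A = np.array([[1.0, 0.0],
                  [-1.0, 1.0],
                  [0.0, -1.0]])
    l = np.array([10.02, 5.01, -15.00])
    var = np.array([0.004, 0.002, 0.003])            # observation variances
    p = 1.0 / var                                    # weight vector

    x_p = ml1_adjust(A, l, weights=p)                # weights of p
    x_sqrtp = ml1_adjust(A, l, weights=np.sqrt(p))   # weights of sqrt(p)

    # Correlated observations: decorrelate first, then unit-weight ML1.
    Sigma = np.diag(var)
    Sigma[0, 1] = Sigma[1, 0] = 0.001                # a "weak" covariance term
    A_bar, l_bar = decorrelate(A, l, Sigma)
    x_corr = ml1_adjust(A_bar, l_bar)
```

In this sketch, passing weights=p or weights=np.sqrt(p) reproduces the two usual weighting choices compared in the abstract, while decorrelating first and then running unit-weight ML1 corresponds to the LS-derived adaptation for correlated observations that the article investigates.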