Astrogeometry, error estimation, and other applications of set-valued analysis
A. Finkelstein, O. Kosheleva, V. Kreinovich. ACM SIGNUM Newsletter, October 1996. doi:10.1145/242127.242129

In many real-life application problems, we are interested in numbers, namely, in the numerical values of physical quantities. There are, however, at least two classes of problems in which we are actually interested in sets:

• In image processing (e.g., in astronomy), the desired black-and-white image is, from the mathematical viewpoint, a set.
• In error estimation (e.g., in engineering, physics, geophysics, social sciences, etc.), in addition to the estimates x1, ..., xn for n physical quantities, we want to know what the actual values xi of these quantities can be, i.e., the set of all possible vectors x = (x1, ..., xn).

In both cases, we need to process sets. To define a generic set, we need infinitely many parameters; therefore, if we want to represent and process sets in a computer, we must restrict ourselves to finite-parametric families of sets that will be used to approximate the desired sets. The wrong choice of a family can lead to longer computations and worse approximation; hence, it is desirable to find the family that is the best in some reasonable sense.

A similar problem occurs for random sets. To define a generic set, we need infinitely many parameters; as a result, traditional (finite-parametric) statistical methods are often not easily applicable to random sets. To avoid this difficulty, several researchers (including U. Grenander) have suggested approximating arbitrary sets by sets from a certain finite-parametric family. As soon as we fix this family, we can use the methods of traditional statistics. Here, a similar problem appears: a wrong choice of the approximation family can lead to a bad approximation and/or long computations; so, which family should we choose?

In this paper, we show, on several application examples, how the problem of choosing the optimal family of sets can be formalized and solved. As a result of the described general methodology:

• for astronomical images, we get exactly the geometric shapes that have been empirically used by astronomers and astrophysicists (thus, we have a theoretical explanation for these shapes), and
• for error estimation, we get a theoretical explanation of why ellipsoids turn out experimentally to be the best shapes (and also why ellipsoids are used in Khachiyan's and Karmarkar's algorithms for linear programming).
With what accuracy can we measure masses if we have an (approximately known) mass standard
V. Kreinovich. ACM SIGNUM Newsletter, October 1996. doi:10.1145/242127.242130

To measure masses with high accuracy, we need a mass standard. To make a standard work, we must have a procedure that enables us to compare the mass of a physical body with the mass of the standard. This procedure has an error (as does any other measurement procedure).

To measure arbitrary masses (not necessarily equal to the mass of the standard), we must use an indirect measuring procedure. What potential accuracy can we attain in such a procedure? In this paper, we give an answer to this question.
Case studies of choosing a numerical differentiation method under uncertainty: computer-aided design and radiotelescope network design
A. Finkelstein, M. Koshelev. ACM SIGNUM Newsletter, July 1996. doi:10.1145/242577.242579

In many real-life situations (including computer-aided design and radiotelescope network design), it is necessary to estimate the derivative of a function from approximate measurement results. Usually, there exist several (approximate) models that describe measurement errors; these models may have different numbers of parameters. If we use different models, we may get estimates of different accuracy. In the design stage, we often have little information about these models, so it is necessary to choose a model based only on the number of parameters n and on the number of measurements N.

In mathematical terms, we want to estimate how having N equations Σ_j c_ij a_j = y_i with n (n < N) unknowns a_j influences the accuracy of the result (the c_ij are known coefficients, and the y_i are known with a standard deviation σ[y]). For that, we assume that the coefficients c_ij are independent random variables with zero average and standard deviation 1 (this assumption is in good accordance with real-life situations). Then, we can use computer simulations to find the standard deviation σ' of the resulting error distribution for a_i. For large n, this distribution is close to Gaussian (see, e.g., [21], pp. 2.17, 6.5, 9.8, and references therein), so we can safely assume that the actual errors are within the 3σ' limit.
Why monotonicity in interval computations? A remark
M. Koshelev, V. Kreinovich. ACM SIGNUM Newsletter, July 1996. doi:10.1145/242577.242578

Monotonicity of functions has been successfully used in many problems of interval computations. However, in the context of interval computations, monotonicity seems somewhat ad hoc. In this paper, we show that monotonicity can be reformulated in interval terms and is, therefore, a natural condition for interval mathematics.
Remark on "An example of error propagation reinterpreted as subtractive cancellation" by J. A. Delaney (SIGNUM Newsletter 1/96)
V. Drygalla. ACM SIGNUM Newsletter, April 1996. doi:10.1145/230922.230929

The author proposes a reformulation of an algorithm that is discussed in Vandergraft's textbook as an example of an unstable method.
For unknown-but-bounded errors, interval estimates are often better than averaging
G. Walster, V. Kreinovich. ACM SIGNUM Newsletter, April 1996. doi:10.1145/230922.230926

For many measuring devices, the only information that we have about them is their biggest possible error ε > 0. In other words, we know that the error Δx = x̃ − x (i.e., the difference between the measured value x̃ and the actual value x) is random, and that this error can sometimes become as big as ε or −ε, but we do not have any information about the probabilities of different values of error.

Methods of statistics enable us to generate a better estimate for x by making several measurements x1, ..., xn. For example, if the average error is 0 (E(Δx) = 0), then after n measurements, we can take the average x̄ = (x1 + ... + xn)/n and get an estimate whose standard deviation (and the corresponding confidence intervals) are √n times smaller.

Another estimate comes from interval analysis: for every measurement xi, we know that the actual value x belongs to the interval [xi − ε, xi + ε]. So, x belongs to the intersection of all these intervals. In one sense, this estimate is better than the one based on traditional engineering statistics (i.e., averaging): interval estimation is guaranteed. In this paper, we show that in many cases, this intersection is also better in the sense that it gives a more accurate estimate for x than averaging: namely, under certain reasonable conditions, the error of this interval estimate decreases faster (as 1/n) than the error of the average (which only decreases as 1/√n).

A similar result is proved for the multi-dimensional case, when we measure several auxiliary quantities and use the measurement results to estimate the value of the desired quantity y.
An example of error propagation reinterpreted as subtractive cancellation—revisited
J. S. Dukelow. ACM SIGNUM Newsletter, April 1996. doi:10.1145/230922.230928

James Delaney, in his paper in the SIGNUM Newsletter [1], convincingly demonstrates that the recursion [EQUATION] blows up because of a catastrophic loss of precision due to subtractive cancellation. Values of I_n calculated using this recursion are given in the second column of Table 1.
A class of numerical integration rules with first order derivatives
M. A. Al-Alaoui. ACM SIGNUM Newsletter, April 1996. doi:10.1145/230922.230930

A novel approach to deriving a family of quadrature formulae is presented. The first member of the new family is the corrected trapezoidal rule. The second member, a two-segment rule, is obtained by interpolating the corrected trapezoidal rule and the Simpson one-third rule. The third member, a three-segment rule, is obtained by interpolating the corrected trapezoidal rule and the Simpson three-eighths rule. The fourth member, a four-segment rule, is obtained by interpolating the two-segment rule with the Boole rule. The process can be carried on to generate a whole class of integration rules by interpolating the proposed rules appropriately with the Newton-Cotes rules so as to cancel an additional term in the Euler-Maclaurin error formula. The resulting rules correctly integrate polynomials of degree at most n+3 if n is even and n+2 if n is odd, where n is the number of segments of the single-application rules. The proposed rules have excellent round-off properties, close to those of the trapezoidal rule. With two additional function evaluations, members of the new family achieve the same order of error as is obtained by doubling the number of segments when applying Romberg integration to the Newton-Cotes rules. Members of the proposed family are shown to be viable alternatives to Gaussian quadrature.
Note on local methods of univariate interpolation
H. Akima. ACM SIGNUM Newsletter, April 1996. doi:10.1145/230922.230924

Five local methods or algorithms of univariate interpolation are mutually compared, both numerically and graphically. They are Ackland's osculatory method (J. Inst. Actuar. 49, 369-375, 1915), Algorithm 433 (Commun. ACM 15, 914-918, 1972), Maude's method (Computer J. 16, 64-65, 1973), Algorithm 514 (ACM TOMS 3, 175-178, 1977), and Algorithm 697 (ACM TOMS 17, 367, 1991). The comparison results indicate that Algorithm 697 is the best among these five methods.
Achieving a full deck
H. Hodge. ACM SIGNUM Newsletter, October 1995. doi:10.1145/219340.219344

While in the process of computing poker odds, the following question occurred to me: given a random number generator that returns answers from one to 52, how many calls on average need to be made before at least one of each number is found?