{"title":"Affine algorithms for L-minimization","authors":"K. Ponnambalam, S. Seetharaman, T. Alguindigue","doi":"10.1109/MDSP.1989.97066","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97066","url":null,"abstract":"Summary form only given, as follows. L/sub 1/-minimization problems are commonly solved using one of the following methods: (i) variants of the simplex method, used to solve the L/sub 1/-minimization problem formulated as a linear programming (LP) problem, and (ii) the iteratively reweighted least-squares (IRLS) method, a method favored in some signal processing applications. Interior-point methods (primal affine and Karmarkar's dual affine methods) are considerably faster than the simplex method for solving large LP problems. The principles of affine algorithms and their implementation strongly resemble the IRLS method. However, an efficient implementation is essential to obtain good performance from the interior-point methods. The implementation details for dense and sparse L/sub 1/-minimization problems with and without linear inequality constraints are discussed. A number of examples are worked out, and comparisons are made with existing algorithms wherever possible.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126641247","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
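The IRLS method that the abstract compares against can be sketched as follows. This is the generic textbook formulation of IRLS for L1 regression, not the authors' implementation; the damping constant `eps` and the fixed iteration count are assumptions made for the sketch.

```python
import numpy as np

def irls_l1(A, b, n_iter=50, eps=1e-8):
    """Approximate argmin_x ||Ax - b||_1 by iteratively reweighted least
    squares: each step solves a weighted L2 problem whose weights are the
    reciprocals of the current residual magnitudes."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary L2 starting point
    for _ in range(n_iter):
        r = np.abs(A @ x - b)
        w = 1.0 / np.maximum(r, eps)           # eps damps near-zero residuals
        Aw = A * w[:, None]                    # rows of A scaled by weights
        x = np.linalg.solve(A.T @ Aw, Aw.T @ b)  # weighted normal equations
    return x
```

A quick sanity check of the L1 behavior: fitting a constant to data under the L1 norm should return the median rather than the mean, so an outlier barely moves the estimate.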
{"title":"Image processing system based on object oriented design","authors":"V. Cappellini, A. del Bimbo","doi":"10.1109/MDSP.1989.96997","DOIUrl":"https://doi.org/10.1109/MDSP.1989.96997","url":null,"abstract":"Summary form only given. An innovative image processing system is proposed that is based on a conceptual model, i.e. an abstract representation of the application environment, which is believed to provide flexibility and modularity in addition to other desirable properties. An object-oriented design was used. The objects in the system are instances of classes that capture the abstractions and encapsulate both status and behavior. Status is captured through attribute values, and behavior through procedures. Classes are related to each other through standard abstraction techniques: specialization (class-subclass relation) and aggregation (class-component relation). The system works by sending messages (the equivalent of procedure calls) between objects. The object responds to messages by using the required procedure to perform operations. The system is intended to accept raw images and sequences of images.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117337495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
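The object model the abstract describes (status as attributes, behavior as procedures, specialization as subclassing, aggregation as component objects, messages as procedure calls) can be illustrated minimally as below. All class and method names are invented for illustration; the paper's actual class hierarchy is not given in the abstract.

```python
class Image:                        # base abstraction
    def __init__(self, pixels):
        self.pixels = pixels        # status captured through attribute values

    def histogram(self):            # behavior captured through a procedure
        counts = {}
        for p in self.pixels:
            counts[p] = counts.get(p, 0) + 1
        return counts

class RawImage(Image):              # specialization: class-subclass relation
    pass

class ImageSequence:                # aggregation: class-component relation
    def __init__(self, frames):
        self.frames = frames        # component Image objects

    def histograms(self):
        # "sending a message" to each component object, i.e. a procedure call
        return [f.histogram() for f in self.frames]
```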
{"title":"Array designs for imaging","authors":"J. Fitch","doi":"10.1109/MDSP.1989.97070","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97070","url":null,"abstract":"Summary form only given. Two techniques have been used to design telescope arrays for imaging applications. The first technique is applicable to arrays with a relatively small (approximately 10) number of apertures and is essentially an exhaustive search with a simple inline test that allows the search space to be pruned by an order of magnitude. In the second technique, arrays of a large number of apertures are designed by combining the results from several arrays with fewer apertures. The criterion is that the best array maximizes the distance from the origin to the position of the first zero in the transfer function (TF). This criterion has been selected to accommodate reconstruction of image phases from phase-difference averages, a process that is sensitive to zeros in the TF. For telescopes with a large number of individually steerable mirrors, the dominant cost moves away from the fabrication of a mirror and towards the cost of beam combination systems and civil engineering. In order to reduce these costs, a fractal-based approach that encourages modular and replicated subsystems has been adopted.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131087104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Analysis and application of adaptive noise reduction using sparse filters","authors":"James Normile, Yung-Fu Cheng, Delores M. Etter","doi":"10.1109/MDSP.1989.97093","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97093","url":null,"abstract":"Summary form only given. The analysis of a sparse adaptive filtering technique and its application to the problem of system identification and noise reduction are discussed. In conventional adaptive filtering, modeling of systems whose impulse responses have clusters of nonzero coefficients, separated by samples that are small or zero, requires that the adaptive filter be sufficiently long to match the system. Consequently, in its converged state, the adaptive filter has many impulse response samples which are close to zero. These small coefficients contribute to residual filter misadjustment. Additionally, the convergence rate of the filter is determined by the total length. A sparse method that circumvents these problems by avoiding the calculations associated with the near-zero coefficients has been developed. As a result, the final mean square error attained is reduced, as is the convergence time.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115479286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
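The idea of skipping computation on near-zero taps can be sketched with an LMS identifier restricted to an assumed support set. The abstract does not give the authors' rule for selecting the active coefficients, so `support` here is supplied by hand; with `support=None` this reduces to ordinary full-length LMS.

```python
import numpy as np

def sparse_lms(x, d, n_taps=64, mu=0.01, support=None):
    """Adaptive FIR identification of d from x, updating only the taps
    in `support` so near-zero coefficients cost no computation."""
    idx = np.arange(n_taps) if support is None else np.asarray(support)
    w = np.zeros(n_taps)                 # adaptive filter coefficients
    err = np.zeros(len(d))
    buf = np.zeros(n_taps)               # buf[k] holds x[n-k]
    for n in range(len(d)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w[idx] @ buf[idx]            # filter output over active taps only
        err[n] = d[n] - y
        w[idx] += 2 * mu * err[n] * buf[idx]  # LMS update on active taps only
    return w, err
```

With fewer adapting taps the gradient noise is lower, which is the mechanism behind the reduced misadjustment and faster convergence the abstract claims.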
{"title":"Edge detection by 2D recursive least squares and Markov random fields","authors":"R. Cristi","doi":"10.1109/MDSP.1989.97001","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97001","url":null,"abstract":"Summary form only given, as follows. An algorithm is presented for smoothing and segmenting images with regions characterized by constant intensity levels and/or textures. It is based on a doubly stochastic model of the data, where the local behavior is modeled by autoregressive equations with piecewise constant parameters, while the regions are modeled by a Markov random field (MRF). The edges of the image, in terms of boundaries between regions, are associated with the reinitialization of the covariance matrix of the recursive-least-squares (RLS) estimator. With this approach it is shown that for any given set of edges gamma a likelihood function P( gamma mod y) can be computed, with y denoting the noisy observations. Using this fact, a suboptimal algorithm for edge detection is devised which locally maximizes the likelihood function by operating sequentially on the observations. The main advantage seems to be that the algorithm is robust with respect to the observation noise, in the sense that the edges of very small regions (unlikely in the MRF model) are not detected.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125955125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time implementation of the method of principal components applied to dual-polarized radar returns","authors":"J. R. Orlando, S. Haykin","doi":"10.1109/MDSP.1989.97021","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97021","url":null,"abstract":"Summary form only given. Experiments performed with dual-polarized Ku-band radar systems have shown that there are distinct differences between the information contained in the like- and cross-polarized returns from the ice floes, particularly between those returns from new and old ice. In order to present the two different images on one monochrome display, it is necessary to combine them. The process can be expedited by using singular-value decomposition (SVD) to determine the eigenvectors, since, in doing so, it is not necessary to compute the covariance matrix explicitly. For the special case of transforming two input images into one output image, the SVD can be computed in a straightforward manner using the rotation matrix of Hestenes (1958). By performing the image transformation using parallel processors, an efficient pipelined architecture for computing the method of principal components can be realized. Such an architecture has been simulated on the Warp systolic computer and applied to the like- and cross-polarized radar images.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126718431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
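The covariance-free principal-components step can be sketched as follows: for two co-registered images, the SVD of the 2xN centered data matrix yields the covariance eigenvectors directly, without forming the covariance matrix. This is a plain NumPy illustration, not the Hestenes-rotation systolic implementation the authors describe.

```python
import numpy as np

def principal_component_image(img_a, img_b):
    """Combine two co-registered images into one by projecting each pixel
    pair onto the first principal axis, obtained via SVD of the 2xN data
    matrix (its left singular vectors are the covariance eigenvectors)."""
    X = np.stack([img_a.ravel(), img_b.ravel()]).astype(float)
    X -= X.mean(axis=1, keepdims=True)      # center each image
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    pc1 = U[:, 0] @ X                       # projection on 1st principal axis
    return pc1.reshape(img_a.shape)
```

Applied to like- and cross-polarized returns, the single output image retains the direction of maximum joint variance, which is what makes it suitable for a monochrome display.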
{"title":"Artificial intelligence search application to isothermal contour detection in a thermal image","authors":"D. Lee, J. A. Pearce","doi":"10.1109/MDSP.1989.96995","DOIUrl":"https://doi.org/10.1109/MDSP.1989.96995","url":null,"abstract":"Summary form only given. The problem of isothermal contour detection has been reduced to the problem of finding an optimal path in a weighted tree. The properties of the contour are embedded in the structure of the tree. Graph searching techniques are then used to find the optimal solution. To search the optimal isothermal contours, a heuristic search algorithm has been adopted. The algorithm has been implemented in C on the MicroVAX II workstation. The information about the problem is given by specifying the start node, the characteristics of the goal node, and the rules for expanding a node and for computing its cost. The program has been tested by adding different amounts of Gaussian noise to a picture. It is able to draw acceptable isothermal contours even in noise-added thermal images, but the computation time for detecting an isothermal contour depends on the noise.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"04 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127134888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
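The search setup the abstract describes (a start node, a goal test, node-expansion rules, and a cost rule) matches a generic best-first, A*-style search. The sketch below uses that generic formulation; the cost and heuristic functions for isothermal contours are not given in the abstract, so the test drives it with a toy weighted graph instead.

```python
import heapq

def best_first_search(start, is_goal, expand, h):
    """Expand nodes in order of cost-so-far g plus heuristic h and return
    the first goal path found; with an admissible h the path is optimal."""
    frontier = [(h(start), 0.0, start, [start])]
    seen = set()
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if is_goal(node):
            return path, g
        if node in seen:
            continue
        seen.add(node)
        for child, step_cost in expand(node):   # problem-specific expansion rule
            if child not in seen:
                g2 = g + step_cost              # problem-specific cost rule
                heapq.heappush(frontier, (g2 + h(child), g2, child, path + [child]))
    return None, float("inf")
```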
{"title":"Intensity-dependent spread-a theory of human vision and a machine vision filter with interesting properties","authors":"T. Cornsweet","doi":"10.1109/MDSP.1989.97031","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97031","url":null,"abstract":"Summary form only given. Extensive psychophysical evidence indicates that the spatial and temporal filtering properties of the human visual system depend on local retinal image illuminance; as illuminance decreases, signals are integrated over larger areas and longer times. A model that reproduces the spatial effect is as follows. There are three layers: an array of photodetectors, a spreading network, and an array of output channels. The output of each photodetector spreads its signals in the network, in a way to be described, and the signal leaving each point in the output array is the sum of signals arriving at that point. If the point spread function were constant, this system would simply act as a spatial filter, convolving the spread function with the input.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123220528","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
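The three-layer model can be sketched directly: each photodetector deposits its signal through a point spread whose width depends on its own intensity, and each output point sums whatever arrives. The Gaussian spread shape, the sigma-versus-intensity law, and the gain `k` below are assumptions for illustration; the abstract leaves the spread function unspecified.

```python
import numpy as np

def intensity_dependent_spread(img, k=50.0):
    """Each pixel deposits a unit-volume Gaussian whose width grows as
    local intensity falls (dim input -> wide spread); the output image is
    the superposition of all the spreads."""
    h, w = img.shape
    out = np.zeros((h, w), dtype=float)
    ys, xs = np.mgrid[0:h, 0:w]
    for i in range(h):
        for j in range(w):
            # assumed law: sigma ~ sqrt(k / intensity), clipped away from 0
            sigma = np.sqrt(k / max(img[i, j], 1e-6))
            g = np.exp(-((ys - i) ** 2 + (xs - j) ** 2) / (2 * sigma ** 2))
            out += img[i, j] * g / g.sum()   # unit-volume spread from (i, j)
    return out
```

Because each pixel's intensity sets its own spread width, the operator is input-dependent and therefore not a convolution; with a constant sigma it would reduce to the ordinary spatial filter the abstract mentions.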
{"title":"A low bit rate hybrid coding scheme based on new classes of block overlap transforms","authors":"A. Tabatabai","doi":"10.1109/MDSP.1989.97130","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97130","url":null,"abstract":"Summary form only given. An approach to the development of low-bit-rate video coding algorithms operating at rates below 2 Mb/s, for providing videophone/videoconferencing services over emerging ISDN networks, is reported. Two new classes of block overlap transform have been applied and their performance compared. A lapped orthogonal transform (LOT), which can also be viewed as an efficient quadrature mirror filter bank implementation in which the analysis and synthesis filters have identical finite impulse responses, has been applied. Although experimental results have shown a reduction in blocking effect, they have also shown an increase in the so-called mosquito effect (i.e. image degradation visible in the moving area of the picture). To reduce the latter effect and also implementation complexity, a new block overlap transform with short overlap (e.g. typically one to two adjacent pixels) has been applied. This is a simple nonorthogonal transform that uses discrete-cosine transform basis functions in combination with appropriately designed window functions.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126477070","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Restoration of multiple misregistered images","authors":"C. Srinivas, M. Srinath","doi":"10.1109/MDSP.1989.97121","DOIUrl":"https://doi.org/10.1109/MDSP.1989.97121","url":null,"abstract":"Summary form only given. A stochastic modeling approach to the restoration of misregistered corrupted images of the same scene has been adopted. If a wide-sense stationarity assumption is made on the 2-D continuous version of the image of the scene, it can be seen that individual discrete image samples from this continuous image exhibit similar statistical properties in the spatial domain, i.e. the (auto) covariance matrix defined on each discrete image is identical. The temporal correlation between a pair of discrete images is dictated by the temporal displacement between images and the stochastic model in the continuous domain. This temporal correlation can be expressed in terms of the misregistration of a pair of discrete images and the correlation matrix of the discrete image. Hence, the stochastic model-based approach defines a parametric form of the spatio-temporal correlation matrix, in terms of the spatial model parameters over a single image lattice and the displacement values between different frames.<<ETX>>","PeriodicalId":340681,"journal":{"name":"Sixth Multidimensional Signal Processing Workshop,","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1989-09-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125265894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}