Six methods for estimating the standard deviation of additive white noise in images are surveyed and evaluated experimentally by application to a set of images showing different degrees of contrast, edge detail, texture, etc. The results show that, on average, the most reliable estimate is obtained by prefiltering the image to suppress the image structure and then computing the standard deviation from the filtered data.
{"title":"Estimation of Noise in Images: An Evaluation","authors":"Olsen S.I.","doi":"10.1006/cgip.1993.1022","DOIUrl":"10.1006/cgip.1993.1022","url":null,"abstract":"<div><p>Six methods for estimating the standard deviation of white additive noise in images are surveyed and evaluated experimentally by application to a set of images showing different degrees of contrast, edge details, texture, etc. The results show that on average, the most reliable estimate is obtained by prefiltering the image to suppress the image structure and then computing the standard deviation value from the filtered data.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 4","pages":"Pages 319-323"},"PeriodicalIF":0.0,"publicationDate":"1993-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1022","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115035317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A fundamental issue in texture analysis is that of deciding what textural features are important in texture perception, and how they are used. Experiments on human preattentive vision have identified several low-level features (such as the orientation of blobs and the size of line segments) that are used in texture perception. However, the question of what higher level features of texture are used has not been adequately addressed. We designed an experiment to help identify the relevant higher order features of texture perceived by humans. We used 20 subjects, who were asked to perform an unsupervised classification of 30 pictures from Brodatz's album on texture. Each subject was asked to group these pictures into as many classes as desired. Both hierarchical cluster analysis and nonparametric multidimensional scaling (MDS) were applied to the pooled similarity matrix generated from the subjects' groupings. A surprising outcome is that the MDS solutions fit the data very well. The stress in the two-dimensional case is 0.10, and the stress in the three-dimensional case is 0.045. We rendered the original textures in these coordinate systems and interpreted the (rotated) axes. It appears that the axes in the 2D case correspond to periodicity versus irregularity, and directionality versus nondirectionality. In the 3D case, the third dimension represents the structural complexity of the texture. Furthermore, the clusters identified by the hierarchical cluster analysis remain virtually intact in the MDS solution. The results of our experiment indicate that people use three high-level features for texture perception. Future studies are needed to determine the appropriateness of these high-level features for computational texture analysis and classification.
{"title":"Identifying High Level Features of Texture Perception","authors":"Rao A.R., Lohse G.L.","doi":"10.1006/cgip.1993.1016","DOIUrl":"https://doi.org/10.1006/cgip.1993.1016","url":null,"abstract":"<div><p>A fundamental issue in texture analysis is that of deciding what textural features are important in texture perception, and how they are used. Experiments on human preattentive vision have identified several low-level features (such as orientation of blobs and size of line segments), which are used in texture perception. However, the question of what higher level features of texture are used has not been adequately addressed. We designed an experiment to help identify the relevant higher order features of texture perceived by humans. We used 20 subjects, who were asked to perform an unsupervised classification of 30 pictures from Brodatz′s album on texture. Each subject was asked to group these pictures into as many classes as desired. Both hierarchical cluster analysis and nonparametric multidimensional scaling (MDS) were applied to the pooled similarity matrix generated from the subjects′ groupings. A surprising outcome is that the MDS solutions fit the data very well. The stress in the two-dimensional case is 0.10, and the stress in the three-dimensional case is 0.045. We rendered the original textures in these coordinate systems, and interpreted the (rotated) axes. It appears that the axes in the 2D case correspond to periodicity versus irregularity, and directionality versus nondirectionality. In the 3D case, the third dimension represents the structural complexity of the texture. Furthermore, the clusters identified by the hierarchical cluster analysis remain virtually intact in the MDS solution. The results of our experiment indicate that people use three high-level features for texture perception. Future studies are needed to determine the appropriateness of these high-level features for computational texture analysis and classification.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 3","pages":"Pages 218-233"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1016","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72281746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper describes a parallel algorithm for computing the visible portion of a simple planar polygon with N vertices from a given point on or inside the polygon. The algorithm accomplishes this in O(k log N) time using O(N/log N) processors, where k is the link-diameter of the polygon in consideration. The link-diameter of a polygon is the maximum number of straight line segments needed to connect any two points within the polygon, where all line segments lie completely within the polygon. The algorithm can also be used to compute the visible portion of the plane given a point outside the polygon; in that case, however, the parameter k in the asymptotic bounds is the link-diameter of a different polygon. The algorithm is optimal for sets of polygons that have a constant link-diameter. It is a rather simple algorithm with a very small run-time constant, making it fast and practical to implement. The interprocessor communication needed involves only local neighbor communication and scan operations (i.e., parallel prefix operations). Thus the algorithm can be implemented not only on an EREW PRAM, but also on a variety of more practical machine architectures, such as hypercubes, trees, butterflies, and shuffle-exchange networks. The algorithm was implemented on the Connection Machine as well as the MasPar MP-1, and various performance tests were conducted.
{"title":"A Parallel Algorithm for the Visibility of a Simple Polygon Using Scan Operations","authors":"Chen L.T., Davis L.S.","doi":"10.1006/cgip.1993.1014","DOIUrl":"https://doi.org/10.1006/cgip.1993.1014","url":null,"abstract":"<div><p>This paper describes a parallel algorithm for computing the visible portion of a simple planar polygon with <em>N</em> vertices from a given point on or inside the polygon. The algorithm accomplishes this in <em>O</em>(<em>k</em> log <em>N</em>) time using <em>O</em>(<em>N</em>/log <em>N</em>) processors, where <em>k</em> is the <em>link-diameter</em> of the polygon in consideration. The link-diameter of a polygon is the maximum number of straight line segments needed to connect any two points within the polygon, where all line segments lie completely within the polygon. The algorithm can also be used to compute the visible portion of the plane given a point outside of the polygon. Except in this case, the parameter <em>k</em> in the asymptotic bounds would be the link diameter of a different polygon. The algorithm is optimal for sets of polygons that have a constant link diameter. It is a rather simple algorithm, and has a very small run time constant, making it fast and practical to implement. The interprocessor communication needed involves only local neighbor communication and scan operations (i.e., parallel prefix operations). Thus the algorithm can be implemented not only on an EREW PRAM, but also on a variety of other more practical machine architectures, such as hypercubes, trees, butterflies, and shuffle exchange networks. The algorithm was implemented on the Connection Machine as well as the MasPar MP- 1, and various performance tests were conducted.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 3","pages":"Pages 192-202"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72281744","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, new methods for the detection of line targets in digital images using multiple-way Analysis of Variance (ANOVA) methods based on the Græco-Latin square (GLS) are developed and demonstrated. After a presentation of the underlying statistical theory on which the GLS is based, the philosophy of using ANOVA methods in pattern recognition problems is illustrated by one-way and two-way models. The GLS detectors are then described in detail and their performance demonstrated. The detectors are not only capable of detecting lines of different directions, but their complexity can also be used to estimate and remove some types of unwanted image structure. Also proposed is an adaptive ANOVA method for line detection, which uses information contained in the GLS statistics to eliminate unnecessary estimation of some of the structure parameters, thereby further improving the power of the detector. The problem of false alarms in regions of the image containing sharp gray-level discontinuities is also addressed, and adjustments are made to the algorithms for their suppression.
{"title":"Line Detection in Noisy and Structured Backgrounds Using Græco-Latin Squares","authors":"Haberstroh R., Kurz L.","doi":"10.1006/cgip.1993.1012","DOIUrl":"https://doi.org/10.1006/cgip.1993.1012","url":null,"abstract":"<div><p>In this paper new methods for detection of line targets in digital images using multiple-way Analysis of Variance (ANOVA) methods based on the Græco-Latin square (GLS) are developed and demonstrated. After presentation of the underlying statistical theory upon which the GLS is based, the philosophy of using ANOVA methods in pattern recognition problems is illustrated by one-way and two-way models. The GLS detectors are then described in detail and their performance demonstrated. The detectors are not only capable of detecting lines of different direction, but their complexity also can be used to estimate and remove some types of unwanted image structure. Also proposed is an adaptive ANOVA method for line detection, which uses information contained in the GLS statistics to eliminate unnecessary estimation of some of the structure parameters and again improve the power of the detector. The problem of false alarms in regions of the image containing sharp gray-level discontinuities also is addressed, and adjustments are made to the algorithms for their suppression.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 3","pages":"Pages 161-179"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1012","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72281742","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper deals with the detection of generalized digital straight line segments, or digital bars, using the Hough transform (HT). Straight line segments are classified according to the nature of the image space. Digital straight line segments, assumed to be those obtainable by grid-intersect quantization of ideal straight line segments, are generalized to represent bars of non-unitary width, and their mapping into a slope/intercept parameter space is characterized. The shortcomings of the discrete parameter space implied by most HTs are identified, and two post-HT techniques (connectedness analysis and a merging stage) that alleviate these shortcomings are discussed. A simple technique for connectedness analysis of the evidence produced by the HT, which can confirm the presence of straight features in the image and determine their respective endpoints, is described. The complete technique for the detection of digital bars is exemplified on an actual image, and its implementation on linear arrays of transputers is also discussed.
{"title":"Effective Detection of Digital Bar Segments with Hough Transform","authors":"Costa L.D., Sandler M.B.","doi":"10.1006/cgip.1993.1013","DOIUrl":"https://doi.org/10.1006/cgip.1993.1013","url":null,"abstract":"<div><p>This paper deals with the detection of generalized digital straight line segments, or digital bars, using Hough transform (HT). Straight line segments are classified according to the nature of the image space. Digital straight line segments, assumed to be those which can be obtained by grid-intersect quantization of ideal straight line segments, are generalized to represent bars of non-unitary width, and their mapping into a slope/intercept parameter space is characterized. The shortcomings of having discrete parameter space implied by most HTs are identified and two post-HT techniques (connectedness analysis and merging stage) to alleviate such shortcomings are discussed. A simple technique for connectedness analysis of the evidence produced by the HT, which can confirm the presence of straight features in the image and determine their respective endpoints, is described. The complete technique for detection of digital bars is exemplified for an actual image and its implementation in linear arrays of transputers is also discussed.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 3","pages":"Pages 180-191"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1013","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72281743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The extraction of binary character/graphics images from gray-scale document images with background pictures, shadows, highlights, smears, and smudges is a common, critical image processing operation, particularly for document image analysis, optical character recognition, check image processing, image transmission, and videoconferencing. After a brief review of previous work, with emphasis on five published extraction techniques, viz., a global thresholding technique, the YDH technique, a nonlinear adaptive technique, an integrated function technique, and a local contrast technique, this paper presents two new extraction techniques: a logical level technique and a mask-based subtraction technique. With experiments on images of a typical check and a poor-quality text document, the paper systematically evaluates and analyses both the new and the published techniques with respect to six aspects, viz., speed, memory requirement, stroke width restriction, number of parameters, parameter setting, and human subjective evaluation of the result images. The experiments and evaluations show that one new technique is superior to the rest, suggesting its suitability for high-speed, low-cost applications.
{"title":"Extraction of Binary Character/Graphics Images from Grayscale Document Images","authors":"Kamel M., Zhao A.","doi":"10.1006/cgip.1993.1015","DOIUrl":"https://doi.org/10.1006/cgip.1993.1015","url":null,"abstract":"<div><p>The extraction of binary character/graphics images from gray-scale document images with background pictures, shadows, highlight, smear, and smudge is a common critical image processing operation, particularly for document image analysis, optical character recognition, check image processing, image transmission, and videoconferencing. After a brief review of previous work with emphasis on five published extraction techniques, viz., a global thresholding technique, YDH technique, a nonlinear adaptive technique, an integrated function technique, and a local contrast technique, this paper presents two new extraction techniques: a logical level technique and a mask-based subtraction technique. With experiments on images of a typical check and a poor-quality text document, this paper systematically evaluates and analyses both new and published techniques with respect to six aspects, viz., speed, memory requirement, stroke width restriction, parameter number, parameter setting, and human subjective evaluation of result images. Experiments and evaluations have shown that one new technique is superior to the rest, suggesting its suitability for high-speed low-cost applications.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 3","pages":"Pages 203-217"},"PeriodicalIF":0.0,"publicationDate":"1993-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1015","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72281745","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comment on \"Generation of Noise in Binary Images\"","authors":"Liu Y.K.","doi":"10.1006/cgip.1993.1011","DOIUrl":"10.1006/cgip.1993.1011","url":null,"abstract":"","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 2","pages":"Page 160"},"PeriodicalIF":0.0,"publicationDate":"1993-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1011","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127342390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an algorithm to compute an approximation to the general sweep boundary of a 2D curved moving object that changes its shape dynamically while traversing a trajectory. In effect, we make polygonal approximations to the trajectory and to the object shape at appropriate instances along the trajectory so that the approximated polygonal sweep boundary is within a given error bound ϵ > 0 of the exact sweep boundary. The algorithm interpolates intermediate polygonal shapes between any two consecutive instances and constructs polygons that approximate the sweep boundary of the object. Previous algorithms for sweep boundary computation have been concerned mainly with moving objects of fixed shape; even so, they have involved a fair amount of symbolic and/or numerical computation, which has limited their practical use in graphics modeling systems as well as in many other applications that require fast sweep boundary computation. Although the algorithm presented here does not generate the exact sweep boundaries of objects, it does yield quite reasonable polygonal approximations to them, and our experimental results show that it is fast enough to be of practical use.
{"title":"Approximate General Sweep Boundary of a 2D Curved Object","authors":"Ahn J.W., Kim M.S., Lim S.B.","doi":"10.1006/cgip.1993.1008","DOIUrl":"10.1006/cgip.1993.1008","url":null,"abstract":"<div><p>This paper presents an algorithm to compute an approximation to the general sweep boundary of a 2D curved moving object which changes its shape dynamically while traversing a trajectory. In effect, we make polygonal approximations to the trajectory and to the object shape at every appropriate instance along the trajectory so that the approximated polygonal sweep boundary is within a given error bound ϵ > 0 from the exact sweep boundary. The algorithm interpolates intermediate polygonal shapes between any two consecutive instances, and constructs polygons which approximate the sweep boundary of the object. Previous algorithms on sweep boundary computation have been mainly concerned about moving objects with fixed shapes; nevertheless, they have involved a fair amount of symbolic and/or numerical computations that have limited their practical uses in graphics modeling systems as well as in many other applications which require fast sweep boundary computation. Although the algorithm presented here does not generate the exact sweep boundaries of objects, it does yield quite reasonable polygonal approximations to them, and our experimental results show that its computation is reasonably fast to be of a practical use.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 2","pages":"Pages 98-128"},"PeriodicalIF":0.0,"publicationDate":"1993-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1008","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124570660","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting dominant points is an important step in object recognition. Corner detection and polygonal approximation are two major approaches to dominant point detection. In this paper, we propose the curvature-based polygonal approximation method, which combines corner detection and polygonal approximation techniques to detect dominant points. The detection method consists of three procedures: (1) extract the break points that do not lie on a straight line, (2) detect the potential corners, and (3) perform polygonal approximation by partitioning the curves between two consecutive potential corners. Both quantitative and qualitative evaluations have been conducted. Experimental results show that the combined method is superior to conventional methods and detects dominant points properly.
{"title":"Detecting the Dominant Points by the Curvature-Based Polygonal Approximation","authors":"Wu W.Y., Wang M.J.J.","doi":"10.1006/cgip.1993.1006","DOIUrl":"10.1006/cgip.1993.1006","url":null,"abstract":"<div><p>Detecting dominant points is an important step for object recognition. Corner detection and polygonal approximation are two major approaches for dominant point detection. In this paper, we propose the <em>curvature-based polygonal approximation method</em> which combines the corner detection and polygonal approximation techniques to detect the dominant points. This detection method consists of three procedures: (1) extract the break points that do not lie on a straight line, (2) detect the potential corners, and (3) perform polygonal approximation by partitioning the curves between two consecutive potential corners. Both quantitative and qualitative evaluations have been conducted. Experimental results show that the combined methods are superior to the conventional methods, and the dominant points can be properly detected by the combined methods.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 2","pages":"Pages 79-88"},"PeriodicalIF":0.0,"publicationDate":"1993-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1006","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129911872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present an algorithm for Bayesian estimation of temporally active and inactive spatial regions of video sequences. The algorithm aids in the use of conditional replenishment for video compression in many applications which feature a background/foreground format. For the sake of compatibility with common block-type coders, the binary-valued segmentation is constrained to be constant on square blocks of 8 × 8 or 16 × 16 pixels. Our approach favors connectivity at two levels of scale, with two intended effects. The first is at the pixel level, where a Gibbs distribution is used for the active pixels in the binary field of suprathreshold interframe differences. This increases the value of the likelihood ratio for blocks with spatially contiguous active pixels. The final segmentation also assigns higher probability to patterns of active blocks which are connected, since in general, macroscopic entities are assumed to be many blocks in size. Demonstrations of the advantage of the Bayesian approach are given through several simulations with standard sequences.
{"title":"Bayesian Block-Wise Segmentation of Interframe Differences in Video Sequences","authors":"Sauer K., Jones C.","doi":"10.1006/cgip.1993.1009","DOIUrl":"10.1006/cgip.1993.1009","url":null,"abstract":"<div><p>We present an algorithm for Bayesian estimation of temporally active and inactive spatial regions of video sequences. The algorithm aids in the use of conditional replenishment for video compression in many applications which feature a background/foreground format. For the sake of compatibility with common block-type coders, the binary-valued segmentation is constrained to be constant on square blocks of 8 × 8 or 16 × 16 pixels. Our approach favors connectivity at two levels of scale, with two intended effects. The first is at the pixel level, where a Gibbs distribution is used for the active pixels in the binary field of suprathreshold interframe differences. This increases the value of the likelihood ratio for blocks with spatially contiguous active pixels. The final segmentation also assigns higher probability to patterns of active blocks which are connected, since in general, macroscopic entities are assumed to be many blocks in size. Demonstrations of the advantage of the Bayesian approach are given through several simulations with standard sequences.</p></div>","PeriodicalId":100349,"journal":{"name":"CVGIP: Graphical Models and Image Processing","volume":"55 2","pages":"Pages 129-139"},"PeriodicalIF":0.0,"publicationDate":"1993-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1006/cgip.1993.1009","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116075703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}