2015 21st Korea-Japan Joint Workshop on Frontiers of Computer Vision (FCV): Latest Publications

Variation of edge detection uncertainty on fish-eye images
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103713
Kenji Terabayashi, T. Oiwa, K. Umeda
This paper reports that the uncertainty of edges detected in a fish-eye image depends on the direction from which the edges are observed. In fish-eye cameras, the extent of observation space corresponding to a single pixel changes greatly with the pixel's location. This extent is defined as "spatial uncertainty" in this paper and is formulated for typical projection models of fish-eye cameras. Experimental results show that the uncertainty of edge detection in fish-eye images increases as the spatial uncertainty increases.
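As a concrete illustration (not taken from the paper), the sketch below uses the equidistant projection model r = f·θ, one of the typical fish-eye models, to compute the angular footprint of a single pixel in the radial and tangential directions. The focal length and pixel radii are hypothetical; the growing gap between the two footprints away from the image center is the direction-dependent effect the abstract describes.

```python
import numpy as np

def spatial_uncertainty_equidistant(r_px, f_px):
    """Angular footprint (radians) of one pixel at radial distance r_px from
    the image center, under the equidistant model r = f * theta. The growing
    gap between the radial and tangential footprints away from the center is
    the direction-dependent 'spatial uncertainty' effect."""
    theta = r_px / f_px            # incidence angle of the viewing ray
    radial = 1.0 / f_px            # d(theta)/d(r) is constant when r = f * theta
    # one pixel of azimuth at image radius r_px spans 1/r_px radians of phi,
    # which covers sin(theta)/r_px radians measured on the viewing sphere
    tangential = np.sin(theta) / r_px
    return radial, tangential

f = 300.0                          # focal length in pixels (hypothetical)
for r in (30.0, 150.0, 450.0):     # pixel radii (hypothetical)
    rad, tan = spatial_uncertainty_equidistant(r, f)
    print(f"r={r:5.0f}px  radial={rad:.2e}  tangential={tan:.2e}  ratio={tan/rad:.3f}")
```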
Citations: 0
Robust detection of mosaic masking region
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103722
Jung-Jae Yu, S. Han
In this paper, a novel method for automatically detecting mosaic masking regions in an input image is proposed. A mosaic masking region can serve as an important clue for recognizing commercial pornographic images. The proposed method consists of three steps: the first extracts SRE features using a new cross-shaped feature characteristic; the second estimates the parameters of a mosaic candidate region; and the third performs SRD verification using the luminance distribution within a mosaic masking region. The proposed method is fast, is robust to the blurring caused by image resizing and low-quality video compression, and can be used in computer vision applications for blocking adult videos.
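The SRE features and SRD verification are specific to the paper and are not reproduced here. As a loose stand-in for the candidate-detection stage only, the sketch below scores an image for the regular axis-aligned gradient grid that mosaic masking produces, using autocorrelation of the column gradient energy; the function name, block-size range, and input file are all assumptions.

```python
import cv2
import numpy as np

def mosaic_candidate_score(gray, min_block=4, max_block=32):
    """Mosaic masking produces axis-aligned gradient lines at a regular
    pitch, so the autocorrelation of the column-wise gradient energy peaks
    at the block size. Returns the candidate block size and its normalized
    autocorrelation strength."""
    gx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)).sum(axis=0)
    gx = gx - gx.mean()
    ac = np.correlate(gx, gx, mode='full')[gx.size - 1:]
    ac /= ac[0] + 1e-9                       # normalize by the zero-lag energy
    lags = np.arange(min_block, max_block + 1)
    pitch = int(lags[np.argmax(ac[min_block:max_block + 1])])
    return pitch, float(ac[pitch])

gray = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)   # hypothetical input
pitch, strength = mosaic_candidate_score(gray)
print(f"candidate mosaic block size: {pitch}px (strength {strength:.2f})")
```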
Citations: 1
Color palette generation for image classification by bag-of-colors
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103734
Ayaka Kojima, T. Ozeki
There are a large number of colors available to represent images on computers (e.g. 256 × 256 × 256 = 16,777,216 colors in an RGB color space). Because so many colors are too many to handle, image processing generally reduces them by quantization. Uniform color quantization, however, often yields colors that do not match the real world, so typical colors should be learned from real-world images to generate a practical color palette. Bag-of-visual-words models based only on local features of grayscale pixel values provide the state-of-the-art technology in image classification, retrieval, and recognition, so adding color information to the local features is expected to improve performance. However, increasing the number of features extracted from an image costs memory and computation time, and the increase of features also affects recognition performance. The aim of this paper is to generate an appropriate color palette for bag-of-colors image classification with little computation time and as few colors as possible, improving classification accuracy.
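A common baseline for this idea, and a reasonable guess at the pipeline (the paper's exact palette-generation procedure is not specified here), is to learn the palette with k-means in RGB space and describe each image by its histogram of palette assignments. Everything in the sketch, including the cluster count and the toy training images, is an assumption.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_palette(images, n_colors=64):
    """Learn a palette of typical colors from real-world images with k-means
    in RGB space; the cluster centers are the palette colors."""
    pixels = np.concatenate([im.reshape(-1, 3) for im in images]).astype(np.float32)
    return KMeans(n_clusters=n_colors, n_init=4, random_state=0).fit(pixels)

def bag_of_colors(image, km):
    """Describe an image by its normalized histogram of palette assignments."""
    labels = km.predict(image.reshape(-1, 3).astype(np.float32))
    hist = np.bincount(labels, minlength=km.n_clusters).astype(np.float32)
    return hist / hist.sum()

# usage with hypothetical training images (H x W x 3 uint8 arrays)
train = [np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8) for _ in range(5)]
km = learn_palette(train, n_colors=16)
descriptor = bag_of_colors(train[0], km)
```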
Citations: 2
Study on performance of MPEG-7 visual descriptors for deformable object retrieval
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103701
Jung Hyun, Hae-Kwang Kim, Weon Gun Oh
This paper presents the results of a study of MPEG-7 visual descriptors for deformable object retrieval. A database of 819 handbag images with shape masks is constructed, covering variations such as morphing, illumination changes, viewpoint changes, and color changes. All four MPEG-7 color descriptors (Dominant Color, Color Structure, Color Layout, and Scalable Color) are tested. For texture, the Homogeneous Texture and Edge Histogram descriptors are tested; for shape, the contour-based and region-based descriptors are tested. The retrieval rate of the descriptors and the correlation of each pair of descriptors are studied. The results show that the Scalable Color descriptor is the best in terms of retrieval rate and that the color descriptors are relatively highly correlated among themselves.
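Descriptor extraction itself is defined by the MPEG-7 standard and is not reimplemented here. The sketch below illustrates one plausible evaluation protocol only: a leave-one-out top-k retrieval rate over precomputed descriptor matrices, plus a pairwise-distance correlation between descriptors. The descriptor dimension, the label layout (819 images of 91 bags), and the random data are hypothetical stand-ins.

```python
import numpy as np

def retrieval_rate(desc, labels, k=10):
    """Fraction of the top-k nearest neighbors (L2, leave-one-out) that share
    the query's object label, one simple reading of 'retrieval rate'."""
    d = np.linalg.norm(desc[:, None, :] - desc[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)               # exclude the query itself
    nn = np.argsort(d, axis=1)[:, :k]
    return float((labels[nn] == labels[:, None]).mean())

def descriptor_correlation(desc_a, desc_b):
    """Correlation of two descriptors via their pairwise-distance matrices,
    one way to read 'correlation of each pair of descriptors'."""
    da = np.linalg.norm(desc_a[:, None] - desc_a[None], axis=2).ravel()
    db = np.linalg.norm(desc_b[:, None] - desc_b[None], axis=2).ravel()
    return float(np.corrcoef(da, db)[0, 1])

# hypothetical stand-in: 819 images of 91 bags, 32-dim descriptors
desc = np.random.rand(819, 32)
labels = np.repeat(np.arange(91), 9)
print(retrieval_rate(desc, labels))
```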
Citations: 2
Development of deep learning-based facial expression recognition system
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103729
Heechul Jung, Sihaeng Lee, Sunjeong Park, Byungju Kim, Junmo Kim, Injae Lee, C. Ahn
Deep learning is considered a breakthrough in the field of computer vision, since most of the world records on recognition tasks are being broken by it. In this paper, we apply such deep learning techniques to recognizing facial expressions that represent human emotions. Our facial expression recognition system proceeds as follows. First, a face is detected in the input image using Haar-like features. Second, a deep network recognizes the facial expression from the detected face; in this step, two different deep networks can be used, a deep neural network or a convolutional neural network. We compared the two types of deep network experimentally, and the convolutional neural network performed better than the deep neural network.
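A minimal sketch of that two-stage pipeline is shown below, using OpenCV's stock Haar cascade for step one and a small PyTorch CNN for step two. The network architecture, the 48 × 48 crop size, and the seven emotion classes are assumptions; the paper's exact configuration is not given here.

```python
import cv2
import torch
import torch.nn as nn

# step 1: face detection with Haar-like features (OpenCV's stock cascade)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

class ExpressionCNN(nn.Module):
    """Small CNN stand-in for the paper's unspecified architecture;
    seven outputs for the usual basic-emotion classes (an assumption)."""
    def __init__(self, n_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 5), nn.ReLU(), nn.MaxPool2d(2))
        self.classifier = nn.Linear(32 * 9 * 9, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def recognize(frame_gray, model):
    """Step 2: classify the expression of every detected face."""
    faces = cascade.detectMultiScale(frame_gray, 1.3, 5)
    preds = []
    for (x, y, w, h) in faces:
        crop = cv2.resize(frame_gray[y:y+h, x:x+w], (48, 48))
        t = torch.from_numpy(crop).float().div(255).view(1, 1, 48, 48)
        preds.append(model(t).argmax(1).item())
    return preds
```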
Citations: 51
Omni-directional 3D measurement using double fish-eye stereo vision
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103698
Y. Iguchi, J. Yamaguchi
In this paper, the authors describe omni-directional 3D measurement using double fish-eye stereo vision. The vision system consists of two stereo vision units placed back to back. In the method, the fish-eye image is transformed into a panoramic image, and parallax is detected by simple template matching on the transformed image. With this method, three-dimensional measurement of all-directional space is possible even though the system configuration is simple. The paper explains the method and shows an example experiment.
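The sketch below shows the two ingredients named in the abstract: a remap from fish-eye to a longitude/latitude panorama, and a single-point parallax search by template matching along a panorama row. It assumes an equidistant fish-eye model (r = f·θ), which the abstract does not specify, and all parameters are hypothetical.

```python
import cv2
import numpy as np

def fisheye_to_panorama(img, f, out_w=720, out_h=180):
    """Unwrap a fish-eye image into a longitude/latitude panorama,
    assuming the equidistant model r = f * theta."""
    cx, cy = img.shape[1] / 2.0, img.shape[0] / 2.0
    lon, lat = np.meshgrid(np.linspace(-np.pi, np.pi, out_w),
                           np.linspace(0, np.pi / 2, out_h))
    r = f * lat                                   # theta equals the latitude
    map_x = (cx + r * np.cos(lon)).astype(np.float32)
    map_y = (cy + r * np.sin(lon)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

def parallax(pano_a, pano_b, x, y, size=16, search=64):
    """Parallax of one point by simple template matching along a row."""
    tmpl = pano_a[y:y+size, x:x+size]
    x0 = max(0, x - search)
    strip = pano_b[y:y+size, x0:x + size + search]
    res = cv2.matchTemplate(strip, tmpl, cv2.TM_CCOEFF_NORMED)
    return int(res.argmax()) - (x - x0)           # horizontal shift in pixels
```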
Citations: 5
Fast text line detection by finding linear connected components on Canny edge image
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103743
Jung Hyun, Hae-Kwang Kim, Weon Gun Oh
This paper presents a new method of text region detection based on Canny edge detection and connected components. A Canny edge image is computed from a gray image obtained from the original color image. The Canny image is partitioned into n × n blocks, and each n × n block is divided into smaller m × m blocks. If an m × m block contains enough edge pixels, it is set as a text candidate block. The number of text candidate blocks is counted in each n × n block, and if that number is sufficient, the n × n block is set as a candidate text block. Text regions are detected only within the candidate text n × n blocks. Connected components are obtained from the edge pixels in the candidate text blocks of the Canny edge image, sorted by size, and grouped into several groups. From each group, possible candidate text lines of connected components are detected, and the connected components in neighboring groups are added to the candidate text lines. The performance of the proposed method is compared with the SWT (Stroke Width Transform) and Tesseract text region detection methods. The experimental results show that the proposed method is faster than SWT at some cost in accuracy, and slower than Tesseract but with better precision.
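The block-filtering stage of the abstract maps directly to code. The sketch below implements it with hypothetical block sizes and edge-count thresholds (the paper's values for n, m, and both thresholds are not given here), stopping before the connected-component grouping.

```python
import cv2
import numpy as np

def candidate_text_blocks(gray, n=64, m=8, m_thresh=6, n_thresh=12):
    """Canny edges, n x n blocks subdivided into m x m sub-blocks; a
    sub-block with enough edge pixels is a text candidate, and an n x n
    block with enough candidate sub-blocks becomes a candidate text block
    in which text lines are then searched for."""
    edges = cv2.Canny(gray, 100, 200)
    h, w = edges.shape
    mask = np.zeros((h // n, w // n), bool)
    for by in range(h // n):
        for bx in range(w // n):
            block = edges[by*n:(by+1)*n, bx*n:(bx+1)*n]
            # edge-pixel count of every m x m sub-block at once
            counts = block.reshape(n // m, m, n // m, m).sum(axis=(1, 3)) // 255
            mask[by, bx] = (counts >= m_thresh).sum() >= n_thresh
    return mask

gray = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)    # hypothetical input
print(candidate_text_blocks(gray).sum(), 'candidate text blocks')
```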
Citations: 1
A method for character and photograph segmentation using dynamic thresholding
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103716
Ryuhei Noguchi, J. Hayashi
Converting books into digital books is popular because it makes them portable and reduces storage space. A book is converted into text data for content search and for data-size compression. However, special characters are used on illustration pages and covers, and such characters may not be readable as text data by conventional character recognition. In this study, we focus on the fact that the gradient of the brightness value is flat in character areas. We describe how to estimate a character area from the amount of change in the projection profile obtained by dynamically varying the binarization threshold.
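A minimal sketch of the measurement the abstract describes: sweep the binarization threshold and accumulate how much the horizontal projection profile changes between consecutive thresholds. The threshold range and the decision rule (how the per-row change profile separates character rows from photograph rows) are assumptions; the paper's exact criterion is not reproduced.

```python
import cv2
import numpy as np

def projection_change(gray, thresholds=range(32, 224, 16)):
    """Sweep the binarization threshold and accumulate, per image row, how
    much the horizontal projection profile changes between consecutive
    thresholds. Character rows have flat brightness, so their projection is
    comparatively stable across thresholds, while photograph rows keep
    changing; the paper's exact decision rule is not reproduced here."""
    prev = None
    change = np.zeros(gray.shape[0], np.float32)
    for t in thresholds:
        _, binary = cv2.threshold(gray, t, 1, cv2.THRESH_BINARY_INV)
        proj = binary.sum(axis=1).astype(np.float32)  # per-row ink count
        if prev is not None:
            change += np.abs(proj - prev)
        prev = proj
    return change  # low change suggests character rows (assumption)
```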
Citations: 1
Impulse noise reduction using distance weighted average filter
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103733
Seungin Baek, Soowoong Jeong, Jongsoo Choi, Sangkeun Lee
The switching median filter is known as one of the effective algorithms for impulse noise reduction. In this paper, we present an improved switching median filter that weights neighboring pixels by their distance. Specifically, the proposed method generates a flag map using a boundary discriminative noise detection (BDND) detector, then performs noise reduction guided by the local noise density. When the local noise density is low, a corrupted pixel is replaced with the median value of the uncorrupted neighboring pixels. When the density is high, the noise search window grows until predefined conditions are met, and the noisy pixel is then corrected with the weighted average of the uncorrupted values. Experimental results show that the proposed method outperforms the existing methods by about 0.5-3.7 dB on average.
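The high-density branch is sketched below, assuming the flag map already exists (the BDND detector is not reimplemented). The growing window and the weighted average of uncorrupted values follow the abstract, while the window limits, the minimum neighbor count, and the inverse-distance weighting function are assumptions.

```python
import numpy as np

def distance_weighted_denoise(img, flag, max_radius=5, min_good=3):
    """Correct pixels flagged as impulse noise (flag == True, e.g. from a
    BDND detector, not reimplemented here). The search window grows until
    enough uncorrupted neighbors are found; the replacement is their
    inverse-distance weighted average."""
    out = img.astype(np.float32).copy()
    h, w = img.shape
    for y, x in zip(*np.nonzero(flag)):
        for r in range(1, max_radius + 1):
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            good = ~flag[y0:y1, x0:x1]
            if good.sum() >= min_good:
                yy, xx = np.mgrid[y0:y1, x0:x1]
                d = np.hypot(yy - y, xx - x)[good]   # never 0: center is flagged
                v = img[y0:y1, x0:x1][good].astype(np.float32)
                out[y, x] = np.average(v, weights=1.0 / d)
                break
    return out.astype(img.dtype)
```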
Citations: 1
Smoke detection for static cameras
Pub Date : 2015-05-11 DOI: 10.1109/FCV.2015.7103719
A. Filonenko, Danilo Cáceres Hernández, K. Jo
This paper describes smoke detection for static cameras. Background subtraction is used to find moving objects, and color characteristics are used to distinguish smoke regions from other scene elements. Separate pixels are merged into blobs by morphological operations and connected-component labeling. The image is then refined by boundary roughness and edge density to reduce the number of false detections, and the results of the current frame are compared with those of the previous frame to check the behavior of objects in the time domain.
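A per-frame sketch of the first three stages (background subtraction, a color test, and morphology plus connected-component labeling) is given below. The MOG2 subtractor, the HSV low-saturation test, and every threshold are assumptions; the boundary-roughness and temporal checks from the paper are omitted.

```python
import cv2
import numpy as np

backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

def smoke_blobs(frame, sat_max=60, val_min=80, min_area=100):
    """One frame of the pipeline: background subtraction, a grayish-color
    test (smoke has low saturation), morphology, and connected-component
    labeling. Returns the stats rows of the surviving blobs."""
    fg = backsub.apply(frame)
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    grayish = (hsv[..., 1] < sat_max) & (hsv[..., 2] > val_min)
    mask = np.where((fg > 0) & grayish, 255, 0).astype(np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    return [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > min_area]
```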
Citations: 11