
2013 8th Iranian Conference on Machine Vision and Image Processing (MVIP): Latest Publications

Breast cancer detection using spectral probable feature on thermography images
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779961
Rozita Rastghalam, H. Pourghassem
Thermography is a noninvasive, non-radiating, fast, and painless imaging technique that can detect breast tumors much earlier than traditional mammography. In this paper, a novel breast cancer detection algorithm based on spectral probable features is proposed to separate healthy and pathological cases during breast cancer screening. A gray-level co-occurrence matrix is built from the image spectrum to obtain a spectral co-occurrence feature. However, this feature alone is not sufficient. To extract directional and probable features from the image spectrum, the matrix is optimized and defined as a feature vector. Through asymmetry analysis, the left and right breast feature vectors are compared; greater similarity between the two vectors indicates healthy breasts. The method is tested on breast thermograms produced by several thermography centers, and the algorithm is evaluated with different similarity measures such as Euclidean distance, correlation, and chi-square. The obtained results show the effectiveness of the proposed algorithm.
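As a rough illustration of the spectral co-occurrence idea described above (not the authors' code), the sketch below computes a log-magnitude spectrum for each breast half, quantizes it, builds a gray-level co-occurrence matrix from it with scikit-image (0.19 or later), reduces the matrix to a compact feature vector, and compares the left and right vectors with the Euclidean, correlation, and chi-square measures named in the abstract; all function names and parameter choices are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def spectral_cooccurrence_features(gray, levels=32):
    """Feature vector for one breast half of a grayscale thermogram."""
    # Log-magnitude spectrum of the region.
    spectrum = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(gray))))
    # Quantize the spectrum to `levels` gray levels so a GLCM can be built from it.
    q = np.uint8(np.round((levels - 1) * (spectrum - spectrum.min())
                          / (spectrum.max() - spectrum.min() + 1e-12)))
    # Co-occurrence matrix of the spectrum, one-pixel offset, four directions.
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    # Collapse the matrix into a compact directional feature vector.
    props = ["contrast", "correlation", "energy", "homogeneity"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

def asymmetry_scores(left_half, right_half):
    """Compare left/right feature vectors; low distance suggests a healthy pair."""
    fl = spectral_cooccurrence_features(left_half)
    fr = spectral_cooccurrence_features(right_half)
    return {
        "euclidean": float(np.linalg.norm(fl - fr)),
        "correlation": float(np.corrcoef(fl, fr)[0, 1]),
        "chi_square": float(np.sum((fl - fr) ** 2 / (np.abs(fl) + np.abs(fr) + 1e-12))),
    }
```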
Citations: 4
A new feature extraction method from dental X-ray images for human identification
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780018
Faranak Shamsafar
Using dental radiography is an alternative approach to identifying a deceased person, especially in cases where other biometric traits cannot be used. This paper proposes a new method for extracting features from dental radiography images to identify people. First, dental work is segmented in the X-ray images using image processing techniques. Then, a radius vector function and a support function are extracted for each segmented region. These functions are independent of image translation. The presented algorithm also modifies both functions to be invariant under image rotation, and normalizing the functions resolves the problems caused by image scale variations. Image translation, rotation, and scale variations are the basic challenges when dental features are compared in the spatial domain. Experiments show suitable recognition accuracy for the proposed approach, which does not require teeth alignment at the matching stage.
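The radius vector function is the most concrete piece of the pipeline above; a minimal sketch of it, under assumed names, is given below: the distance from a segmented region's centroid to its boundary is sampled over a uniform angular grid, divided by its mean for scale invariance, and circularly shifted to a canonical start for rotation invariance (translation invariance comes from working relative to the centroid). The paper's support function is not reproduced here.

```python
import numpy as np

def radius_vector_function(boundary_xy, n_angles=360):
    """boundary_xy: (N, 2) array of boundary points of one segmented dental-work region."""
    centroid = boundary_xy.mean(axis=0)               # translation invariance
    d = boundary_xy - centroid
    angles = np.arctan2(d[:, 1], d[:, 0])
    radii = np.hypot(d[:, 0], d[:, 1])
    # Resample the radius onto a uniform angular grid.
    order = np.argsort(angles)
    grid = np.linspace(-np.pi, np.pi, n_angles, endpoint=False)
    rvf = np.interp(grid, angles[order], radii[order], period=2 * np.pi)
    rvf = rvf / (rvf.mean() + 1e-12)                  # scale invariance
    return np.roll(rvf, -int(np.argmax(rvf)))         # rotation: start at the largest radius
```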
Citations: 11
Robust watershed segmentation of moving shadows using wavelets
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6780015
E. Shabaninia, A. Naghsh-Nilchi
Segmentation of moving objects in a video sequence is a primary step in many computer vision tasks. However, shadows extracted along with the objects can cause large errors in object localization and recognition. We propose a novel moving-shadow detection method using wavelets and the watershed segmentation algorithm, which can effectively separate the cast shadows of moving objects in a scene obtained from a video sequence. The wavelet transform is used to de-noise the foreground image and enhance its edges, and to obtain an enhanced version of the gradient image. Then, the watershed transform is applied to the gradient image to segment the different parts of the object, including shadows. Finally, a post-processing step marks segmented parts whose chromaticity is close to the background reference as shadows. Experimental results on two datasets demonstrate the efficiency and robustness of the proposed approach.
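A rough sketch of that processing chain, assuming a PyWavelets/scikit-image implementation, is given below: wavelet soft-thresholding of the grayscale foreground, a Sobel gradient, watershed segmentation, and a post-processing pass that labels regions whose chromaticity stays close to the background reference as shadow. The wavelet, marker count, and thresholds are illustrative, not the paper's values.

```python
import numpy as np
import pywt
from skimage.color import rgb2gray
from skimage.filters import sobel
from skimage.segmentation import watershed

def detect_shadow_regions(frame_rgb, background_rgb, chroma_tol=0.03):
    """Return a boolean mask of watershed regions whose chromaticity matches the background."""
    gray = rgb2gray(frame_rgb)

    # Wavelet soft-thresholding as a light denoising / edge-preserving step.
    coeffs = pywt.wavedec2(gray, "db2", level=2)
    coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, 0.02, mode="soft") for d in lvl)
                            for lvl in coeffs[1:]]
    denoised = pywt.waverec2(coeffs, "db2")[:gray.shape[0], :gray.shape[1]]

    # Watershed segmentation of the gradient of the enhanced image.
    gradient = sobel(denoised)
    labels = watershed(gradient, markers=200)

    # Chromaticity (r, g) = (R, G) / (R + G + B); cast shadows keep the background chroma.
    def chroma(img):
        img = img.astype(float)
        return img[..., :2] / (img.sum(axis=2, keepdims=True) + 1e-6)

    frame_c, bg_c = chroma(frame_rgb), chroma(background_rgb)
    shadow_mask = np.zeros(gray.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        if np.abs(frame_c[region] - bg_c[region]).mean() < chroma_tol:
            shadow_mask[region] = True
    return shadow_mask
```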
Citations: 3
Texture classification using dominant gradient descriptor
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779958
Maryam Mokhtari, Parvin Razzaghi, S. Samavi
Texture classification is an important part of many object recognition algorithms. In this paper, a new approach to texture classification is proposed. Recently, the local binary pattern (LBP) has been widely used in texture classification, but conventional LBP does not consider directional statistical features or color information. To extract the color information of textures, we use color LBP. To capture directional statistical features, we propose the concept of the histogram of dominant gradient (HoDG): the image is divided into blocks, the dominant gradient orientation of each block is extracted, and the histogram of the blocks' dominant gradients describes the edges and orientations of the texture image. By coupling color LBP with HoDG, a new rotation-invariant texture classification method is presented. Experimental results on the CUReT database show that the proposed method is superior to comparable algorithms.
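The HoDG descriptor is simple enough to sketch directly; the version below (with assumed block size and bin count) splits the image into blocks, takes each block's dominant gradient orientation as the magnitude-weighted orientation bin with the largest vote, and histograms those dominant orientations over all blocks.

```python
import numpy as np

def hodg(gray, block=16, n_bins=18):
    """Histogram of dominant gradient orientations over non-overlapping blocks."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientations folded into [0, pi)
    h, w = gray.shape
    dominant = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            m = mag[y:y + block, x:x + block].ravel()
            a = ang[y:y + block, x:x + block].ravel()
            # Dominant orientation = bin with the largest magnitude-weighted vote.
            votes, edges = np.histogram(a, bins=n_bins, range=(0, np.pi), weights=m)
            dominant.append(edges[np.argmax(votes)])
    hist, _ = np.histogram(dominant, bins=n_bins, range=(0, np.pi))
    return hist / (hist.sum() + 1e-12)
```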
Citations: 5
Nonrigid registration of breast MR images using residual complexity similarity measure
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779987
Azam Hamidi Nekoo, A. Ghaffari, E. Fatemizadeh
Eliminating motion artifacts in breast MR images is a significant pre-processing issue before the images can be used for diagnostic applications. Breast MR images are affected by slowly varying intensity distortions caused by contrast-agent enhancement, so a nonrigid registration algorithm that accounts for this effect is needed. Traditional similarity measures such as the sum of squared differences and cross correlation ignore this distortion, and efficient registration is therefore not obtained. Residual complexity is a similarity measure that handles spatially varying intensity distortions by maximizing the sparseness of the residual image. In this research, nonrigid registration results based on residual complexity, sum of squared differences, and cross-correlation similarity measures are compared, showing that residual complexity is more robust and accurate than the other similarity measures for breast MR images.
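For reference, a hedged sketch of the residual complexity measure in the form proposed by Myronenko and Song is shown below next to the plain sum of squared differences: the residual between the two images is decorrelated with a 2-D DCT and a sparseness-promoting log penalty is summed over its coefficients. The regularization constant alpha is a free parameter here, and this is only the similarity term, not the full registration procedure.

```python
import numpy as np
from scipy.fft import dctn

def residual_complexity(fixed, moving, alpha=0.05):
    """RC similarity: small when the residual is sparse in the DCT domain."""
    residual = fixed.astype(float) - moving.astype(float)
    q = dctn(residual, norm="ortho")        # decorrelating transform of the residual
    return float(np.sum(np.log(q ** 2 / alpha + 1.0)))

def sum_squared_differences(fixed, moving):
    """Baseline measure that the paper compares against."""
    return float(np.sum((fixed.astype(float) - moving.astype(float)) ** 2))
```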
Citations: 0
Fusion of SPECT and MRI images using back and fore ground information
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779984
Behzad Nobariyan, S. Daneshvar, M. Hosseinzadeh
Perceiving and diagnosing disorders from single photon emission computed tomography (SPECT) images alone is difficult because these images contain no anatomical information. Previous studies have therefore tried to enrich SPECT images using magnetic resonance imaging (MRI) and image fusion methods, so that the fused image carries both functional and anatomical information. MRI shows brain tissue anatomy with high spatial resolution but no functional information, whereas SPECT shows brain function with low spatial resolution; fusing SPECT and MRI images yields a high-spatial-resolution image. A fused image of the desired quality must keep both spatial and spectral distortions low. Substitution methods such as IHS preserve spatial information, while multi-resolution fusion methods such as the wavelet transform preserve spectral information. In this article we present a method that preserves both spatial and spectral information well and minimizes the distortions of the fused image relative to other methods.
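As a point of comparison for the substitution family mentioned above (not the authors' back/foreground method), the sketch below shows plain IHS-style fusion with scikit-image: the pseudo-colored SPECT image is converted to an HSV representation, its intensity channel is replaced with the registered, rescaled MRI, and the result is converted back to RGB, keeping the SPECT color (spectral) content while borrowing MRI spatial detail.

```python
import numpy as np
from skimage.color import rgb2hsv, hsv2rgb

def ihs_fusion(spect_rgb, mri_gray):
    """spect_rgb: float RGB in [0, 1]; mri_gray: co-registered MRI slice, any float range."""
    hsv = rgb2hsv(spect_rgb)
    # Substitute the value/intensity channel with the MRI, rescaled to [0, 1].
    mri = (mri_gray - mri_gray.min()) / (mri_gray.max() - mri_gray.min() + 1e-12)
    hsv[..., 2] = mri
    return hsv2rgb(hsv)
```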
Citations: 3
Geometric modeling of the wavelet coefficients for image watermarking
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779944
Mohammad Hamghalam, S. Mirzakuchaki, M. Akhaee
In this paper, a robust image watermarking method based on geometric modeling is presented. Eight samples of the wavelet approximation coefficients in each image block are used to construct two line segments in 2-D space, and the angle formed between these line segments is changed to embed data. Geometric tools are used to resolve the tradeoff between the transparency and robustness of the watermark data. Because the data are embedded in the angle between two line segments, the proposed scheme is highly robust against gain attacks. In addition, because the low-frequency components of the image blocks are used for data embedding, high robustness against noise and compression attacks is achieved. Experimental results confirm the validity of the theoretical analyses given in the paper and show the superiority of the proposed method against common attacks such as Gaussian filtering, median filtering, and scaling.
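A simplified, hypothetical sketch of the angle-embedding step is given below: eight approximation coefficients of a block are read as four 2-D points defining two line segments, and the signed angle between the segments is quantized (QIM-style, even multiples of delta for bit 0, odd multiples for bit 1) by rotating the second segment about its start point. The quantization step, wavelet, and block layout are assumptions; a detector would simply recompute the angle and read the parity of the nearest multiple of delta.

```python
import numpy as np
import pywt

def embed_bit_in_block(coeff8, bit, delta=np.pi / 16):
    """Quantize the angle between two segments built from eight coefficients."""
    p = coeff8.reshape(4, 2).astype(float)             # four 2-D points -> two segments
    v1, v2 = p[1] - p[0], p[3] - p[2]
    cross = v1[0] * v2[1] - v1[1] * v2[0]
    angle = np.arctan2(cross, np.dot(v1, v2))          # signed angle between the segments
    # QIM on the angle: move it to the nearest lattice point of parity `bit`.
    target = delta * (2 * np.round((angle - bit * delta) / (2 * delta)) + bit)
    rot = target - angle
    c, s = np.cos(rot), np.sin(rot)
    p[3] = p[2] + np.array([c * v2[0] - s * v2[1], s * v2[0] + c * v2[1]])  # rotate segment 2
    return p.reshape(-1)

def embed_watermark(image, bits, wavelet="haar"):
    """Embed one bit per run of eight approximation coefficients (assumes enough of them)."""
    cA, details = pywt.dwt2(image.astype(float), wavelet)
    flat = cA.ravel().copy()
    for i, bit in enumerate(bits):
        flat[8 * i: 8 * i + 8] = embed_bit_in_block(flat[8 * i: 8 * i + 8], bit)
    return pywt.idwt2((flat.reshape(cA.shape), details), wavelet)
```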
Citations: 3
Facial expression recognition using sparse coding
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779968
Maryam Abdolali, M. Rahmati
In this paper a sparse coding approach to facial expression recognition is proposed. Because the frequency and orientation representations of Gabor filters resemble those of the human visual system, Gabor filters are used in the dictionary-construction step. It is shown that not all Gabor filters in a typical Gabor bank are necessary or efficient for facial expression recognition. We also propose a voting scheme in the test phase of the algorithm to find the best-matching expression. The well-known JAFFE database is used to evaluate the proposed method, and our experimental results on this database are encouraging.
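A compact sketch of that pipeline, with assumed parameters and labels, follows: magnitude statistics from a reduced Gabor bank serve as features, the training images (stacked as unit-norm rows) form the sparse-coding dictionary, and a test image's OMP coefficients are summed per class to cast the vote.

```python
import numpy as np
from skimage.filters import gabor
from sklearn.decomposition import SparseCoder

FREQS = (0.1, 0.25)
THETAS = (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)

def gabor_features(gray):
    """Mean and std of the Gabor magnitude response over a small filter bank."""
    feats = []
    for f in FREQS:
        for t in THETAS:
            real, imag = gabor(gray, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.std()])
    return np.asarray(feats)

def predict_expression(test_img, train_imgs, train_labels, n_nonzero=5):
    # Training features, one unit-norm dictionary atom per training image.
    D = np.stack([gabor_features(im) for im in train_imgs])
    D = D / (np.linalg.norm(D, axis=1, keepdims=True) + 1e-12)
    coder = SparseCoder(dictionary=D, transform_algorithm="omp",
                        transform_n_nonzero_coefs=n_nonzero)
    code = coder.transform(gabor_features(test_img)[None, :])[0]
    # Voting: each selected atom votes for its label, weighted by |coefficient|.
    labels = np.asarray(train_labels)
    votes = {lab: np.abs(code[labels == lab]).sum() for lab in np.unique(labels)}
    return max(votes, key=votes.get)
```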
Citations: 7
An intelligent and real-time system for plate recognition under complicated conditions
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779988
Mohammad Salahshoor, A. Broumandnia, M. Rastgarpour
A Vehicle Plate Recognition (VPR) algorithm for images and videos usually consists of three steps: 1) extracting the plate region (plate localization), 2) segmenting the plate's characters, and 3) recognizing each character. This paper presents new real-time methods for each step: a Detector for the Blue Area (DBA) to locate the plate, Averaging of White Pixels in Objects (AWPO) for character segmentation, and, after training, Euclidean distance and template matching for character recognition. The system was tested on 250 vehicle images with different backgrounds and non-uniform conditions. The proposed system is robust against challenges such as illumination and distance changes, different angles between the camera and the vehicle, and the presence of shadows, scratches, and dirt on the plates. The accuracy rates for the three stages are 91.6%, 89%, and 95.09%, respectively, and the end-to-end recognition time per plate is 2.3 seconds.
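A schematic sketch of the three stages, using OpenCV with assumed thresholds, is shown below; the projection-profile segmentation stands in for the AWPO step, which the abstract names but does not define, and the template dictionary (character name mapped to a binary 32x32 array) is assumed.

```python
import cv2
import numpy as np

def locate_plate_by_blue_area(bgr):
    """Find the blue strip and return a crop extending to its right (plate body)."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    blue = cv2.inRange(hsv, (100, 120, 60), (130, 255, 255))      # illustrative blue range
    cnts, _ = cv2.findContours(blue, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    x, y, w, h = cv2.boundingRect(max(cnts, key=cv2.contourArea))
    return bgr[y:y + h, x:x + 9 * w]                              # plate width guessed from strip width

def segment_characters(plate_bgr, min_width=5):
    """Cut the binarized plate at columns with almost no white (character) pixels."""
    gray = cv2.cvtColor(plate_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    profile = binary.sum(axis=0)
    gaps = profile < 0.05 * profile.max()
    chars, start = [], None
    for i, is_gap in enumerate(np.append(gaps, True)):
        if not is_gap and start is None:
            start = i
        elif is_gap and start is not None:
            if i - start >= min_width:
                chars.append(binary[:, start:i])
            start = None
    return chars

def recognize(char_img, templates):
    """Nearest template by Euclidean distance; `templates` maps name -> 32x32 float array."""
    patch = cv2.resize(char_img, (32, 32)).astype(float) / 255.0
    return min(templates, key=lambda k: np.linalg.norm(patch - templates[k]))
```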
Citations: 5
HaFT: A handwritten Farsi text database
Pub Date: 2013-09-01 DOI: 10.1109/IRANIANMVIP.2013.6779956
Reza Safabaksh, A. Ghanbarian, Golnaz Ghiasi
Standard databases allow different researchers to evaluate and compare various pattern recognition techniques; they are therefore essential to the advance of research. Handwriting databases exist for various languages, but there is no large standard database of handwritten text for evaluating writer identification and verification algorithms in Farsi. This paper introduces a large handwritten Farsi text database called HaFT. The database contains 1800 gray-scale images of unconstrained text written by 600 writers. Each participant provided three separate eight-line samples of their handwriting, each written at a different time on a separate sheet. HaFT is presented in several versions, each including different lengths of text and identical or different writing instruments. A new measure, called CVM, is defined that effectively reflects the size of the handwriting and thus the content volume of a given text image. The database is designed for training and testing Farsi writer identification and verification using handwritten text, and it can also be used for training and testing handwritten Farsi text segmentation and recognition algorithms. HaFT is available for research use.
Citations: 10