
2010 6th Iranian Conference on Machine Vision and Image Processing: Latest Publications

Stationary image resolution enhancement on the basis of contourlet and wavelet transforms by means of the artificial neural network
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941154
S. M. Entezarmahdi, M. Yazdi
In this paper, two transform-based super-resolution methods are presented for enhancing the resolution of a stationary image. In the first method, a neural network is trained on wavelet-transform coefficients of a lower-resolution version of a given image; this network is then used to estimate the wavelet detail subbands of the image itself. Using these estimated subbands as the wavelet details and the given image as the approximation image, a super-resolution image is produced via the inverse wavelet transform. In the second method, the wavelet transform is replaced by the contourlet transform and the same procedure is applied. The two methods are compared with each other and with bicubic interpolation on different types of images. The experimental results demonstrate the superior performance of the proposed methods compared with conventional stationary-image resolution-enhancement methods.
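For readers who want a concrete picture of the wavelet variant, a minimal sketch follows, assuming PyWavelets and scikit-learn are available; the 'haar' wavelet, the MLPRegressor network, and the per-coefficient training pairs are illustrative assumptions rather than the authors' configuration.

```python
# Sketch: estimate missing wavelet detail subbands of an image with a small
# neural network, then reconstruct a higher-resolution image via the inverse DWT.
# Assumptions (not from the paper): MLPRegressor as the network, 'haar' wavelet,
# and coefficient-wise regression learned from a downsampled copy of the image.
import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

def train_detail_estimator(image):
    """Learn a mapping from an approximation subband to its detail subbands."""
    # Decompose a lower-resolution copy so that input/target pairs are known.
    low = image[::2, ::2]
    cA, (cH, cV, cD) = pywt.dwt2(low, 'haar')
    X = cA.reshape(-1, 1)                               # approximation coefficients
    y = np.stack([cH, cV, cD], axis=-1).reshape(-1, 3)  # corresponding details
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    model.fit(X, y)
    return model

def super_resolve(image, model):
    """Treat the given image as the approximation band and predict its details."""
    details = model.predict(image.reshape(-1, 1)).reshape(*image.shape, 3)
    cH, cV, cD = details[..., 0], details[..., 1], details[..., 2]
    return pywt.idwt2((image.astype(float), (cH, cV, cD)), 'haar')

# Usage: img is a 2-D grayscale array with even dimensions.
# model = train_detail_estimator(img); hi_res = super_resolve(img, model)
```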
Citations: 7
A new scheme of face image encoding through wireless fading channels using WBCT and Block thresholding
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941168
M. Owjimehr, M. Yazdi, A. Z. Asli
Transmission of face image data over wireless fading channels is widely used in face recognition and automatic surveillance applications, and many techniques can be used for this purpose. However, because of noise and channel fading, perfect recovery cannot be achieved, so efficient techniques are needed for image recovery and denoising. The wavelet and contourlet transforms, combined with denoising schemes such as hard thresholding to estimate the true coefficients from the noisy ones, have already been used. In this paper, we propose to use the Wavelet-Based Contourlet Transform (WBCT) combined with block thresholding to denoise and recover transmitted face images more efficiently. The simulation results show that, for general face images, WBCT is quite competitive with the contourlet and wavelet transforms both in terms of SNR and in visual quality.
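A minimal sketch of block thresholding applied to ordinary wavelet detail subbands follows; a standard DWT stands in for the WBCT, and the block size, threshold constant, and 'db4' wavelet are assumptions, not the paper's settings.

```python
# Sketch: block thresholding of noisy wavelet detail coefficients.
# Each block is kept or shrunk as a whole according to its energy (a
# James-Stein style rule), rather than thresholding coefficients one by one.
# The wavelet, block size, and lambda are assumptions, not the paper's WBCT setup.
import numpy as np
import pywt

def block_threshold(band, sigma, block=4, lam=4.5):
    out = np.zeros_like(band)
    thr = lam * block * block * sigma ** 2
    for i in range(0, band.shape[0], block):
        for j in range(0, band.shape[1], block):
            blk = band[i:i + block, j:j + block]
            energy = np.sum(blk ** 2)
            # Shrink the whole block toward zero when its energy is small.
            out[i:i + block, j:j + block] = blk * max(0.0, 1.0 - thr / (energy + 1e-12))
    return out

def denoise(noisy, sigma, wavelet='db4', level=3):
    coeffs = pywt.wavedec2(noisy, wavelet, level=level)
    new_coeffs = [coeffs[0]]  # keep the approximation band untouched
    for (cH, cV, cD) in coeffs[1:]:
        new_coeffs.append(tuple(block_threshold(b, sigma) for b in (cH, cV, cD)))
    return pywt.waverec2(new_coeffs, wavelet)
```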
Citations: 1
Incorporating efficiency and human judgment in image retrieval for trademark matching
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941135
A. Chalechale, A. Faramarzi
Several studies in the literature compare different trademark-retrieval approaches. Most of these comparisons are based on objective tests, i.e., the efficiency and effectiveness of the approaches are measured and compared. In this paper, we conduct a novel subjective test in which human perception is incorporated into the evaluation process. Five known methods from the image-retrieval literature are implemented and compared for closeness to human perception and for their search time. Specifically, the 1) similarity, 2) symmetry, and 3) area of the trademarks retrieved by the five methods are evaluated and scored by human judges. Experimental results show that the correlation method is closest to human perception in all three respects. The experiments also show that the EPNH method is more efficient (much shorter search time) than the correlation method, while the semantic power of the two is comparable.
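As an illustration of the correlation-based similarity that the paper evaluates, a minimal sketch follows; the fixed resize resolution and zero-mean normalization are assumptions, and the EPNH descriptor is not reproduced here.

```python
# Sketch: rank database trademarks against a query by normalized correlation.
# Resizing to a fixed size and zero-mean normalization are assumptions; the
# paper's exact correlation variant is not specified in the abstract.
import numpy as np
import cv2

def normalized_correlation(a, b, size=(128, 128)):
    a = cv2.resize(a, size).astype(float)
    b = cv2.resize(b, size).astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def retrieve(query, database):
    """Return database indices sorted from most to least similar."""
    scores = [normalized_correlation(query, img) for img in database]
    return np.argsort(scores)[::-1]
```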
Citations: 0
Region and content based image retrieval using advanced image processing techniques
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941152
T. Sedghi, Majid Fakheri, M. Shayesteh
The focus of this paper is to enhance retrieval performance and to provide a better similarity-distance computation. We develop a modified clustering algorithm for image retrieval in which a hierarchical algorithm is used to determine the initial number of clusters and the cluster centres. Experimental results show that the proposed method yields higher retrieval accuracy than several conventional methods. Our work improves both image segmentation and retrieval accuracy.
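A minimal sketch of the hierarchical-then-k-means idea, assuming scikit-learn; the distance threshold and the choice of feature vectors are illustrative assumptions.

```python
# Sketch: seed k-means with clusters found by a hierarchical (agglomerative)
# pass, so the number of clusters and the initial centres are not chosen at random.
# The distance threshold and feature vectors are illustrative assumptions.
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans

def hierarchical_kmeans(features, distance_threshold=5.0):
    # Hierarchical pass: let the dendrogram cut decide how many clusters exist.
    agg = AgglomerativeClustering(n_clusters=None,
                                  distance_threshold=distance_threshold)
    labels = agg.fit_predict(features)
    k = labels.max() + 1
    centres = np.stack([features[labels == c].mean(axis=0) for c in range(k)])
    # K-means pass: refine the centres starting from the hierarchical result.
    km = KMeans(n_clusters=k, init=centres, n_init=1, random_state=0)
    return km.fit_predict(features), km.cluster_centers_
```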
Citations: 3
An incremental evolutionary method for optimizing dynamic image retrieval systems
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941133
M. Nikzad, H. Moghaddam
This paper introduces a new incremental evolutionary optimization method based on the evolutionary group algorithm (EGA). The EGA was originally presented as an approach to overcoming the time-consuming drawbacks of general evolutionary algorithms in large-scale content-based image retrieval (CBIR) optimization tasks. Here, we address another challenging limitation of typical evolutionary learning and optimization systems: learning in scale-varying and dynamic environments. We therefore present a new EGA-based strategy enhanced with the ability of incremental learning. Evaluation results on scale-varying and simulated dynamic CBIR systems show that the proposed method continuously maintains good performance in the presence of environmental or scale changes.
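A minimal sketch of the incremental principle, using a plain evolutionary loop rather than the EGA's group structure; the fitness callables, selection, and mutation scheme are illustrative assumptions.

```python
# Sketch: a plain evolutionary loop that supports incremental re-optimization.
# 'fitness' is any callable scoring a parameter vector on the *current* system;
# calling evolve() again after the environment changes reuses the old population
# instead of restarting from scratch.
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness, population, generations=50, sigma=0.1):
    pop = np.array(population, dtype=float)
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]                              # maximize fitness
        parents = pop[order[: len(pop) // 2]]                         # truncation selection
        children = parents + rng.normal(0.0, sigma, parents.shape)    # Gaussian mutation
        pop = np.vstack([parents, children])
    return pop

# Usage (fitness_v1/fitness_v2 are hypothetical scoring functions):
# pop = rng.random((20, 8))          # initial random population
# pop = evolve(fitness_v1, pop)      # optimize for the current CBIR setup
# pop = evolve(fitness_v2, pop)      # environment changed: continue, don't restart
```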
Citations: 1
A new scheme for evaluation of air-trapping in CT images
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941149
M. Hosseini, H. Soltanian-Zadeh, S. Akhlaghpoor, A. Behrad
Air trapping is an abnormal retention of air in the lungs after expiration, observed in all types of bronchiolar and obstructive lung diseases such as chronic obstructive pulmonary disease (COPD), asthma, and bronchiolitis obliterans syndrome. Air trapping is often diagnosed incidentally on computed tomography (CT) scans, but this assessment relies on physicians and is therefore subjective and experience-dependent. In this paper, we present a novel method for evaluating air trapping in the lungs for the detection of COPD in CT images. The proposed method measures the volumetric variation of the lungs from inspiration to expiration. To this end, trachea CT images at full inspiration and full expiration are compared, and the volumetric variations are used to classify the subjects. In the evaluated cases, the proposed method is able to estimate air trapping in the lungs from CT images without human intervention. As a computer-aided diagnosis (CAD) system, this method may assist radiologists in measuring and evaluating air trapping for the detection of COPD.
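A minimal sketch of the volumetric comparison step, assuming pre-segmented lung masks; the HU threshold and the classification cut-off are illustrative assumptions, not values from the paper.

```python
# Sketch: compare lung air volume at full inspiration and full expiration.
# Voxels below an HU threshold inside the lung mask are counted as air-filled;
# a small relative decrease at expiration suggests air trapping.
# The -856 HU threshold and the 0.5 cut-off are illustrative assumptions.
import numpy as np

def air_volume(ct_hu, lung_mask, voxel_volume_mm3, hu_threshold=-856):
    air_voxels = np.logical_and(lung_mask, ct_hu < hu_threshold)
    return air_voxels.sum() * voxel_volume_mm3

def air_trapping_score(insp_hu, insp_mask, exp_hu, exp_mask, voxel_volume_mm3):
    v_in = air_volume(insp_hu, insp_mask, voxel_volume_mm3)
    v_ex = air_volume(exp_hu, exp_mask, voxel_volume_mm3)
    return (v_in - v_ex) / v_in   # fraction of air expelled; low values suggest trapping

# Usage: score = air_trapping_score(ct_in, mask_in, ct_ex, mask_ex, vox_mm3)
# subject_flagged = score < 0.5   # illustrative cut-off only
```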
Citations: 7
A novel mapping for fingerprint image enhancement
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941164
M. Baghelani, Jafar Karami Eshkaftaki, A. Ebrahimi
Human fingerprints contain ridges and valleys that together form distinctive patterns. These details, called minutiae, remain permanent throughout a lifetime and can therefore be used as identification marks for fingerprint verification. A fingerprint image may be of such poor quality that it cannot be used directly in the recognition process and must first be pre-processed. This paper proposes a novel mapping that can be used in place of traditional pre-processing algorithms. The proposed mapping changes the overall configuration of the fingerprint image and maps it to another image that is more convenient for the common recognition steps. The algorithm is tested on the FVC2002 fingerprint database, and the results are satisfactory.
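The abstract does not detail the proposed mapping; for context only, a minimal sketch of the conventional mean/variance normalization that such a mapping is positioned against is given below. It should not be read as the paper's method.

```python
# Sketch of a conventional fingerprint pre-processing step (mean/variance
# normalization), shown only as an example of the "traditional pre-processing"
# the paper's mapping is meant to replace; the paper's own mapping is not
# described in the abstract and is not reproduced here.
import numpy as np

def normalize_fingerprint(img, target_mean=128.0, target_var=2000.0):
    img = img.astype(float)
    mean, var = img.mean(), img.var()
    dev = np.sqrt(target_var * (img - mean) ** 2 / (var + 1e-12))
    out = np.where(img > mean, target_mean + dev, target_mean - dev)
    return np.clip(out, 0, 255).astype(np.uint8)
```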
Citations: 0
Online signature verification using combination of two classifiers
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941153
M. Saeidi, R. Amirfattahi, A. Amini, M. Sajadi
The objective of signature verification is to distinguish forged signatures from genuine ones. An online signature is one that is registered through an electronic device such as a digitizer and stored on a computer as a time sequence. In addition to location information, such signatures carry timing information such as speed and acceleration. In this paper, after pre-processing steps such as normalization of signature size, smoothing, and elimination of rotation using algorithms based on extremum matching of signals and an ant colony approach, the time duration of the signatures is equalized. Similarities between signatures are then determined using extended regression, and finally a support vector machine (SVM) is used to distinguish forged signatures from genuine ones. The proposed online verification system is tested on the SVC2004 signature set from the first international signature verification competition, and the results are compared with those of the competition participants. The proposed method exhibits an equal error rate (EER) of 4.3% for the skilled-forgery group.
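A minimal sketch of the pipeline after pre-processing: length equalization by resampling, correlation-based similarity against enrolled references (plain Pearson correlation standing in for the paper's extended regression), and an SVM classifier; the feature design and reference set are assumptions.

```python
# Sketch of the verification pipeline: equalize signature length by resampling,
# score similarity against enrolled reference signatures, classify with an SVM.
# Plain per-channel correlation stands in for the paper's "extended regression".
import numpy as np
from sklearn.svm import SVC

def resample(sig, n=256):
    """sig: array of shape (T, d) with columns such as x, y, pressure."""
    t_old = np.linspace(0.0, 1.0, len(sig))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, sig[:, k])
                            for k in range(sig.shape[1])])

def similarity_features(sig, references, n=256):
    """Mean per-channel correlation with each enrolled reference signature."""
    s = resample(sig, n)
    feats = []
    for ref in references:
        r = resample(ref, n)
        feats.append(np.mean([np.corrcoef(s[:, k], r[:, k])[0, 1]
                              for k in range(s.shape[1])]))
    return np.array(feats)

# Training (illustrative): X = rows of similarity_features(...) for genuine and
# forged samples, y = 1 for genuine, 0 for forgery.
# clf = SVC(kernel='rbf', gamma='scale').fit(X, y)
```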
Citations: 6
Farsi/Arabic text extraction from video images by corner detection
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941145
Mohieddin Moradi, S. Mozaffari, A. Orouji
Video text information plays an important role in semantic-based video analysis, indexing, and retrieval. In this paper, we propose a novel Farsi text-detection approach based on intrinsic characteristics of Farsi text lines that is more robust to complex backgrounds and various font styles. First, an edge-detection operator extracts all possible edges in the vertical, horizontal, 45-degree, and 135-degree directions. Then, to extract text strokes, pre-processing steps such as dilation and erosion are applied according to the font size. Afterwards, a corner map is extracted by finding the crossing points of the edges. Histogram analysis is performed to discard non-text corners and to find the actual font size. Once the actual font size is known, the input image is rescaled and a new corner map is extracted. Finally, the detected candidate text areas undergo empirical-rule analysis to identify text areas and projection-profile analysis for verification and text-line extraction. Experimental results demonstrate that the proposed method is robust to font size, font colour, and background complexity.
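A minimal sketch of the corner-map stage, assuming OpenCV; Harris corners, the kernel size, and the "wider than tall" rule are illustrative assumptions, and the font-size histogram analysis and empirical rules are not reproduced.

```python
# Sketch: build a corner map, merge dense corner runs with a horizontal closing,
# and keep connected components shaped like text lines as candidate boxes.
# Thresholds and the use of Harris corners are assumptions, not the paper's setup.
import cv2
import numpy as np

def candidate_text_boxes(gray, corner_quality=0.01, kernel_size=(15, 3)):
    corners = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    corner_map = (corners > corner_quality * corners.max()).astype(np.uint8)
    # Merge nearby corners horizontally so dense corner runs (text lines) connect.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, kernel_size)
    merged = cv2.morphologyEx(corner_map * 255, cv2.MORPH_CLOSE, kernel)
    n, _, stats, _ = cv2.connectedComponentsWithStats(merged)
    boxes = []
    for i in range(1, n):  # component 0 is the background
        x, y, w, h, area = stats[i]
        if w > 2 * h and area > 50:   # crude "wider than tall" text-line rule
            boxes.append((x, y, w, h))
    return boxes
```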
Citations: 36
A novel approach for fast and robust multiple license plate detection
Pub Date : 2010-10-01 DOI: 10.1109/IRANIANMVIP.2010.5941136
Mahdi Yazdian Dehkordi, M. Nikzad, Vahid Reza Ekhlas, Z. Azimifar
License plate detection (LPD) is the most difficult, critical, and time-consuming task in license plate recognition (LPR) systems. In this paper, a novel texture-based method is proposed for fast and robust LPD. First, a new filter called the Peak-Valley filter is applied to the lines of the image. This filter extracts the significant gray-level changes as consecutive peaks and valleys while simultaneously removing undesirable small variations. Second, a sequential Peak-Valley partitioning segments the transitions into groups. A neural network is then employed to find the true candidate lines, and finally the candidate lines are aggregated to form the plate regions. In our experiments, the proposed method correctly detects all plates present in an image regardless of their style and without scanning the whole image. The experimental results show that this approach is suitable for real-time application in complex outdoor scenes.
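A minimal sketch of the Peak-Valley idea along image lines; the contrast and transition-count thresholds are assumptions, and the neural-network verification stage is not reproduced.

```python
# Sketch: along each image line, keep only gray-level swings larger than a
# contrast threshold (peaks/valleys), then mark lines whose transition count is
# high as plate candidates and group consecutive candidates into bands.
import numpy as np

def peak_valley_rows(gray, contrast=40, min_transitions=20):
    gray = gray.astype(int)
    diffs = np.abs(np.diff(gray, axis=1))
    transitions = (diffs > contrast).sum(axis=1)   # strong swings per line
    return np.where(transitions >= min_transitions)[0]

def plate_bands(gray, min_height=8):
    """Group consecutive candidate lines into horizontal bands likely to hold plates."""
    rows = peak_valley_rows(gray)
    bands, start = [], None
    for r in rows:
        if start is None:
            start = prev = r
        elif r == prev + 1:
            prev = r
        else:
            if prev - start + 1 >= min_height:
                bands.append((start, prev))
            start = prev = r
    if start is not None and prev - start + 1 >= min_height:
        bands.append((start, prev))
    return bands
```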
Citations: 3