
Latest publications from the 2010 International Conference on Digital Image Computing: Techniques and Applications (DICTA 2010)

Asymmetric, Non-unimodal Kernel Regression for Image Processing
Damith J. Mudugamuwa, W. Jia, Xiangjian He
Kernel regression has been previously proposed as a robust estimator for a wide range of image processing tasks, including image denoising, interpolation and super-resolution. In this article, we propose a kernel formulation that relaxes the usual symmetric and unimodal properties to effectively exploit the smoothness characteristics of natural images. The proposed method extends the kernel support along similar image characteristics to further increase the robustness of the estimates. Application of the proposed method to image denoising yields significant improvement over previously reported regression methods and produces results comparable to state-of-the-art denoising techniques.
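The asymmetric, non-unimodal kernel itself is not given in the abstract; the sketch below shows only the standard symmetric, unimodal baseline that such a formulation relaxes, i.e. zeroth-order (Nadaraya-Watson) kernel regression with a Gaussian kernel applied as a denoiser. The window radius and bandwidth are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's asymmetric kernel): zeroth-order
# Nadaraya-Watson kernel regression with a symmetric Gaussian kernel,
# i.e. the classic baseline that the proposed method relaxes.
import numpy as np

def gaussian_kernel_denoise(img, radius=3, h=1.5):
    """Denoise a 2-D grayscale image by local Gaussian-weighted averaging."""
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    w = np.exp(-(xs**2 + ys**2) / (2.0 * h**2))     # symmetric, unimodal kernel
    w /= w.sum()
    padded = np.pad(img.astype(float), radius, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    # Accumulate the weighted, shifted copies of the image (plain convolution).
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += w[dy, dx] * padded[dy:dy + H, dx:dx + W]
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = np.tile(np.linspace(0, 1, 64), (64, 1))
    noisy = clean + 0.1 * rng.standard_normal(clean.shape)
    print("MSE before:", np.mean((noisy - clean) ** 2))
    print("MSE after: ", np.mean((gaussian_kernel_denoise(noisy) - clean) ** 2))
```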
{"title":"Asymmetric, Non-unimodal Kernel Regression for Image Processing","authors":"Damith J. Mudugamuwa, W. Jia, Xiangjian He","doi":"10.1109/DICTA.2010.34","DOIUrl":"https://doi.org/10.1109/DICTA.2010.34","url":null,"abstract":"Kernel regression has been previously proposed as a robust estimator for a wide range of image processing tasks, including image denoising, interpolation and super resolution. In this article we propose a kernel formulation that relaxes the usual symmetric and unimodal properties to effectively exploit the smoothness characteristics of natural images. The proposed method extends the kernel support along similar image characteristics to further increase the robustness of the estimates. Application of the proposed method to image denoising yields significant improvement over the previously reported regression methods and produces results comparable to the state-of the-art denoising techniques.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128704761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accurate Silhouettes for Surveillance - Improved Motion Segmentation Using Graph Cuts
Daniel Chen, S. Denman, C. Fookes, S. Sridharan
Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique to segment moving objects from the background; however, such algorithms can be prone to poor segmentation, particularly in noisy or low-contrast conditions. In this paper, the work of [1], which combines motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph-cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand-segmented ground truth data, and an improvement in performance over both the motion segmentation alone and the baseline system of [1] is shown.
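The abstract does not detail the two implementations; the sketch below only illustrates the general combination they build on, under assumed parameters: a motion mask from background subtraction is relaxed into "probable" foreground/background labels and refined with OpenCV's graph-cut based GrabCut. The dilation size, iteration count and helper name refine_motion_mask are illustrative, not from the paper or from [1].

```python
# Illustrative sketch only: motion mask relaxed into soft labels, then refined
# with a graph-cut segmentation (OpenCV GrabCut) to recover a cleaner silhouette.
import cv2
import numpy as np

def refine_motion_mask(frame_bgr, raw_motion_mask):
    """raw_motion_mask: uint8 {0, 255} foreground mask from motion detection."""
    # Relax the hard motion mask into "probably foreground / probably background".
    mask = np.where(raw_motion_mask > 0,
                    cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    # Pixels far from any detected motion are marked as certain background.
    sure_bg = cv2.dilate(raw_motion_mask, np.ones((15, 15), np.uint8)) == 0
    mask[sure_bg] = cv2.GC_BGD
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(frame_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD),
                    255, 0).astype(np.uint8)

# Typical use with a background subtractor (requires some motion in the frame):
# subtractor = cv2.createBackgroundSubtractorMOG2()
# motion = subtractor.apply(frame)
# silhouette = refine_motion_mask(frame, motion)
```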
{"title":"Accurate Silhouettes for Surveillance - Improved Motion Segmentation Using Graph Cuts","authors":"Daniel Chen, S. Denman, C. Fookes, S. Sridharan","doi":"10.1109/DICTA.2010.69","DOIUrl":"https://doi.org/10.1109/DICTA.2010.69","url":null,"abstract":"Silhouettes are common features used by many applications in computer vision. For many of these algorithms to perform optimally, accurately segmenting the objects of interest from the background to extract the silhouettes is essential. Motion segmentation is a popular technique to segment moving objects from the background, however such algorithms can be prone to poor segmentation, particularly in noisy or low contrast conditions. In this paper, the work of [1] combining motion detection with graph cuts, is extended into two novel implementations that aim to allow greater uncertainty in the output of the motion segmentation, providing a less restricted input to the graph cut algorithm. The proposed algorithms are evaluated on a portion of the ETISEO dataset using hand segmented ground truth data, and an improvement in performance over the motion segmentation alone and the baseline system of [1] is shown.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"362 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115943557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
On the Estimation of Extrinsic and Intrinsic Parameters of Optical Microscope Calibration
Doreen Altinay, A. Bradley, A. Mehnert
This paper compares several camera calibration methods for the estimation of specific extrinsic and intrinsic parameters. Good estimates of the chosen parameters, rotation and radial lens distortion, are essential to increase the accuracy of quantitative measurements and to accurately stitch single field-of-view-based images together. The parameters are obtained using two selected methods at different objective magnifications on a microscope system, using a fixed-grid calibration pattern. We evaluate the two methods and show that the rotation angles from one of them are consistent with a simple homography, while the other estimates a consistently smaller angle. The radial distortion estimates are both very small and correspond to a distortion of less than one pixel.
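Neither of the two compared methods is specified in the abstract. As one common way to obtain the parameters in question (per-view rotation and radial lens distortion) from images of a fixed grid pattern, the sketch below uses OpenCV's standard chessboard calibration; the grid size, square size and function name are assumptions.

```python
# A minimal sketch of Zhang-style calibration via OpenCV, returning the
# intrinsic matrix, the leading radial distortion coefficient k1, and the
# per-view rotation angles (not the paper's two specific methods).
import cv2
import numpy as np

def calibrate_from_grid(images, grid_size=(9, 6), square_mm=1.0):
    objp = np.zeros((grid_size[0] * grid_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:grid_size[0], 0:grid_size[1]].T.reshape(-1, 2) * square_mm
    obj_pts, img_pts = [], []
    for img in images:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, grid_size)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
    if not obj_pts:
        raise RuntimeError("calibration pattern not found in any image")
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_pts, img_pts, gray.shape[::-1], None, None)
    # dist = [k1, k2, p1, p2, k3]: k1 dominates the radial distortion;
    # each rvec is a Rodrigues rotation vector whose norm is the rotation angle.
    angles_deg = [float(np.degrees(np.linalg.norm(r))) for r in rvecs]
    return K, float(dist.ravel()[0]), angles_deg
```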
{"title":"On the Estimation of Extrinsic and Intrinsic Parameters of Optical Microscope Calibration","authors":"Doreen Altinay, A. Bradley, A. Mehnert","doi":"10.1109/DICTA.2010.43","DOIUrl":"https://doi.org/10.1109/DICTA.2010.43","url":null,"abstract":"This paper compares several camera calibration methods on the estimation of specific extrinsic and intrinsic parameters. Good estimates of the chosen parameters, rotation and radial lens distortion are essential to increase the accuracy of quantitative measurements and to accurately stitch single field-of-view-based images together. The parameters are obtained using two selected methods on different objective magnifications on a microscope system using a fixed grid calibration pattern. We evaluate two methods and show that the rotation angles from one of the methods is consistent with a simple homography while the other estimates a consistently smaller angle. The radial distortion estimates are both very small and relate to a distortion of less than one pixel.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117218953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Selection of Image Parameters as the First Step towards Creating a CBIR System for the Solar Dynamics Observatory
J. Banda, R. Angryk
This work describes the attribute evaluation stage of the ambitious goal of creating a large-scale content-based image retrieval (CBIR) system for solar phenomena in NASA images from the Solar Dynamics Observatory mission. This mission, with its Atmospheric Imaging Assembly (AIA), is generating eight 4096 x 4096 pixel images every 10 seconds, leading to a data transmission rate of approximately 700 Gigabytes per day from the AIA component alone (the entire mission is expected to send about 1.5 Terabytes of data per day, for a minimum of 5 years). We investigate unsupervised and supervised methods of selecting image parameters and assess their importance for distinguishing between different types of solar phenomena, using correlation analysis and three supervised attribute evaluation methods. By selecting the most relevant image parameters (out of the twelve tested), we expect to save 540 Megabytes per day in storage costs for each parameter that we remove. In addition, we applied several image filtering algorithms to these images in order to investigate the enhancement of our classification results. We confirm our experimental results by running multiple classifiers for comparative analysis on the selected image parameters and filters.
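As a rough illustration of the two kinds of attribute evaluation mentioned (unsupervised correlation analysis and supervised relevance ranking), the sketch below uses a correlation threshold and mutual information as stand-ins; the paper's three supervised evaluators and its exact selection criteria are not reproduced, and all names and thresholds are assumptions.

```python
# Hypothetical sketch: flag redundant image parameters via pairwise correlation,
# then rank the rest by a supervised relevance score (mutual information here).
# X is an (n_samples, 12) array of extracted image parameters, y the phenomenon labels.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def evaluate_parameters(X, y, corr_threshold=0.9):
    corr = np.abs(np.corrcoef(X, rowvar=False))          # 12 x 12 correlation matrix
    redundant = {j for i in range(corr.shape[0])
                   for j in range(i + 1, corr.shape[1])
                   if corr[i, j] > corr_threshold}        # drop the later of each highly correlated pair
    relevance = mutual_info_classif(X, y, random_state=0)  # supervised relevance ranking
    keep = [i for i in np.argsort(relevance)[::-1] if i not in redundant]
    return keep, relevance
```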
{"title":"Selection of Image Parameters as the First Step towards Creating a CBIR System for the Solar Dynamics Observatory","authors":"J. Banda, R. Angryk","doi":"10.1109/DICTA.2010.94","DOIUrl":"https://doi.org/10.1109/DICTA.2010.94","url":null,"abstract":"This work describes the attribute evaluation sections of the ambitious goal of creating a large-scale content-based image retrieval (CBIR) system for solar phenomena in NASA images from the Solar Dynamics Observatory mission. This mission, with its Atmospheric Imaging Assembly (AIA), is generating eight 4096 pixels x 4096 pixels images every 10 seconds, leading to a data transmission rate of approximately 700 Gigabytes per day from only the AIA component (the entire mission is expected to be sending about 1.5 Terabytes of data per day, for a minimum of 5 years). We investigate unsupervised and supervised methods of selecting image parameters and their importance from the perspective of distinguishing between different types of solar phenomena by using correlation analysis, and three supervised attribute evaluation methods. By selecting the most relevant image parameters (out of the twelve tested) we expect to be able to save 540 Megabytes per day of storage costs for each parameter that we remove. In addition, we also applied several image filtering algorithms on these images in order to investigate the enhancement of our classification results. We confirm our experimental results by running multiple classifiers for comparative analysis on the selected image parameters and filters.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115415765","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 26
A Novel Algorithm for Text Detection and Localization in Natural Scene Images
Sezer Karaoglu, Basura Fernando, A. Trémeau
Text data in an image present useful information for annotation, indexing and structuring of images. The information gathered from images can be applied to devices for impaired people, navigation, tourist assistance or georeferencing businesses. In this paper, we propose a novel algorithm for text detection and localization in outdoor/indoor images that is robust against different font sizes and styles, uneven illumination, shadows, highlights, over-exposed regions, low-contrast images, specular reflections and many other distortions that make the text localization task harder. A binarization algorithm based on the difference of gamma correction and morphological reconstruction is realized to extract the connected components of an image. These connected components are classified as text and non-text using a Random Forest classifier. After that, text regions are localized by a novel merging algorithm for further processing.
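A hedged sketch of the pipeline's general shape follows: a gamma-correction-difference binarization with morphological reconstruction, connected-component extraction, and simple geometric features that a Random Forest classifier could then label as text or non-text. The specific operators, thresholds and features of the paper are not reproduced; everything below is an assumed approximation.

```python
# Sketch of candidate extraction only: binarize via the difference of a
# gamma-corrected image and its original, clean up with morphological
# reconstruction, then describe connected components with simple features.
import cv2
import numpy as np
from skimage.morphology import reconstruction

def candidate_components(gray, gamma=2.2):
    g = gray.astype(np.float32) / 255.0
    diff = np.abs(g - g ** gamma)                          # difference of gamma correction
    seed = np.clip(diff - 0.1, 0, None)                    # seed must stay below the mask
    filled = reconstruction(seed, diff, method='dilation') # morphological reconstruction
    binary = (filled > filled.mean()).astype(np.uint8)
    n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
    # Simple geometric features per component (area, aspect ratio, fill ratio);
    # a trained RandomForestClassifier could label these as text / non-text.
    feats = [(s[cv2.CC_STAT_AREA],
              s[cv2.CC_STAT_WIDTH] / max(s[cv2.CC_STAT_HEIGHT], 1),
              s[cv2.CC_STAT_AREA] / max(s[cv2.CC_STAT_WIDTH] * s[cv2.CC_STAT_HEIGHT], 1))
             for s in stats[1:]]                           # skip the background component
    return np.array(feats, dtype=np.float32), labels
```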
{"title":"A Novel Algorithm for Text Detection and Localization in Natural Scene Images","authors":"Sezer Karaoglu, Basura Fernando, A. Trémeau","doi":"10.1109/DICTA.2010.115","DOIUrl":"https://doi.org/10.1109/DICTA.2010.115","url":null,"abstract":"Text data in an image present useful information for annotation, indexing and structuring of images. The gathered information from images can be applied for devices for impaired people, navigation, tourist assistance or georeferencing business. In this paper we propose a novel algorithm for text detection and localization from outdoor/indoor images which is robust against different font size, style, uneven illumination, shadows, highlights, over exposed regions, low contrasted images, specular reflections and many distortions which makes text localization task harder. A binarization algorithm based on difference of gamma correction and morphological reconstruction is realized to extract the connected components of an image. These connected components are classified as text and non test using a Random Forest classifier. After that text regions are localized by a novel merging algorithm for further processing.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114814639","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 17
A Two-Stage Correlation Method for Stereoscopic Depth Estimation
Nils Einecke, J. Eggert
The computation of stereoscopic depth is an important field of computer vision. Although a large variety of algorithms has been developed, the traditional correlation-based versions of these algorithms remain prevalent. This is mainly due to their easy implementation and handling, but also to their linear computational complexity compared to more elaborate algorithms based on diffusion processes, graph cuts or bilateral filtering. In this paper, we introduce a new two-stage matching cost for the traditional approach: the summed normalized cross-correlation (SNCC). This new cost function performs a normalized cross-correlation in the first stage and aggregates the correlation values in a second stage. We show that this new measure can be implemented efficiently and that it leads to a substantial improvement in the performance of the traditional stereo approach because it is less sensitive to high-contrast outliers.
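The abstract describes the cost concretely enough to sketch: stage one computes a patch-wise normalized cross-correlation per pixel and disparity candidate, and stage two sums (here, averages) those correlation scores over a larger window before a winner-take-all disparity choice. The patch and aggregation window sizes below are assumptions, and the border handling is deliberately crude.

```python
# Two-stage matching cost in the spirit of SNCC: small-window NCC, then
# aggregation of the correlation values over a larger window, then argmax.
import numpy as np
from scipy.ndimage import uniform_filter

def sncc_disparity(left, right, max_disp=32, patch=3, aggr=11):
    left, right = left.astype(np.float64), right.astype(np.float64)

    def local_stats(img):
        mu = uniform_filter(img, patch)
        var = uniform_filter(img * img, patch) - mu * mu
        return mu, np.sqrt(np.maximum(var, 1e-8))

    mu_l, sd_l = local_stats(left)
    cost = np.full((max_disp,) + left.shape, -1.0)
    for d in range(max_disp):
        shifted = np.roll(right, d, axis=1)          # crude shift; left border is invalid
        mu_r, sd_r = local_stats(shifted)
        cross = uniform_filter(left * shifted, patch) - mu_l * mu_r
        ncc = cross / (sd_l * sd_r)                  # stage 1: patch-wise NCC
        cost[d] = uniform_filter(ncc, aggr)          # stage 2: summed (averaged) NCC
    return cost.argmax(axis=0)                       # winner-take-all disparity
```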
{"title":"A Two-Stage Correlation Method for Stereoscopic Depth Estimation","authors":"Nils Einecke, J. Eggert","doi":"10.1109/DICTA.2010.49","DOIUrl":"https://doi.org/10.1109/DICTA.2010.49","url":null,"abstract":"The computation of stereoscopic depth is an important field of computer vision. Although a large variety of algorithms has been developed, the traditional correlation-based versions of these algorithms are prevalent. This is mainly due to easy implementation and handling but also to the linear computational complexity, as compared to more elaborated algorithms based on diffusion processes, graph-cut or bilateral filtering. In this paper, we introduce a new two-stage matching cost for the traditional approach: the summed normalized cross-correlation (SNCC). This new cost function performs a normalized cross-correlation in the first stage and aggregates the correlation values in a second stage. We show that this new measure can be implemented efficiently and that it leads to a substantial improvement of the performance of the traditional stereo approach because it is less sensitive to high contrast outliers.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127245317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 68
Classification of Melanoma Lesions Using Wavelet-Based Texture Analysis
R. Garnavi, M. Aldeen, J. Bailey
This paper presents a wavelet-based texture analysis method for classification of melanoma. The method applies a tree-structured wavelet transform to different color channels (red, green, blue and luminance) of dermoscopy images, and employs various statistical measures and ratios on the wavelet coefficients. Feature extraction and a two-stage feature selection method, based on entropy and correlation, were applied to a training set of 103 images. The resultant feature subsets were then fed into four different classifiers: support vector machine, random forest, logistic model tree and hidden naive Bayes, to classify melanoma in a test set of 102 images, which resulted in an accuracy of 88.24% and an ROC area of 0.918. The comparative study carried out in this paper shows that the proposed feature extraction method outperforms three other wavelet-based approaches.
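In the spirit of the described features, the sketch below computes a tree-structured (wavelet packet) decomposition of one channel and simple statistics of the coefficients; the paper's exact measures, ratios, channel handling and two-stage feature selection are not reproduced, and the wavelet, level and statistics chosen are assumptions.

```python
# Sketch: tree-structured wavelet decomposition of one image channel and
# simple per-subband statistics as a texture feature vector.
import numpy as np
import pywt

def wavelet_texture_features(channel, wavelet='db2', level=2):
    wp = pywt.WaveletPacket2D(data=channel.astype(float), wavelet=wavelet,
                              mode='symmetric', maxlevel=level)
    feats = []
    for node in wp.get_level(level, order='natural'):
        c = node.data
        feats += [np.mean(np.abs(c)),     # mean coefficient magnitude
                  np.mean(c * c),         # subband energy
                  np.std(c)]              # spread of the coefficients
    return np.array(feats)

# Per image: concatenate the feature vectors of the R, G, B and luminance
# channels, then feed them to a classifier such as an SVM or a random forest.
```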
{"title":"Classification of Melanoma Lesions Using Wavelet-Based Texture Analysis","authors":"R. Garnavi, M. Aldeen, J. Bailey","doi":"10.1109/DICTA.2010.22","DOIUrl":"https://doi.org/10.1109/DICTA.2010.22","url":null,"abstract":"This paper presents a wavelet-based texture analysis method for classification of melanoma. The method applies tree-structured wavelet transform on different color channels of red, green, blue and luminance of dermoscopy images, and employs various statistical measures and ratios on wavelet coefficients. Feature extraction and a two-stage feature selection method, based on entropy and correlation, were applied to a train set of 103 images. The resultant feature subsets were then fed into four different classifiers: support vector machine, random forest, logistic model tree and hidden naive bayes to classify melanoma in a test set of 102 images, which resulted in an accuracy of 88.24% and ROC area of 0.918. Comparative study carried out in this paper shows that the proposed feature extraction method outperforms three other wavelet-based approaches.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126144151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 15
Blind Restoration of Fluorescein Angiography Images
U. Qidwai, U. Qidwai
In this paper, a new technique is presented to enhance the blurred images obtained from Fluorescein Angiography (FA) of the retina. One of the main steps in inspecting the eye (especially the deeper image of the retina) is to look into the eye using a slit-lamp apparatus that shines a monochromatic light onto the retinal surface and captures the reflection in the camera as the retinal image. When further probing is required, such imaging is preceded by injecting a specialized dye into the eye's blood vessels. This dye shines out more prominently in the imaging system and reveals the temporal as well as spatial behavior of the blood vessels, which, in turn, is useful in the diagnosis process. While in most cases the image produced is quite clean and easily used by ophthalmologists, there are still many cases in which these images come out very blurred due to disease in the eye, such as cataract; in such cases, having an enhanced image can enable doctors to start the appropriate treatment for the underlying disease. The proposed technique utilizes a blind deconvolution approach based on maximum likelihood estimation. Further post-processing steps are also proposed to locate the macula in the image, which is the zero-center of the image formed on the retina. The post-processing steps include thresholding, region growing, and morphological operations.
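Maximum likelihood deconvolution under Poisson noise is commonly realized by Richardson-Lucy iterations; the sketch below shows that estimator with a fixed, assumed Gaussian PSF. A truly blind variant would also update the PSF at each iteration, which is omitted here, and none of the parameter values come from the paper.

```python
# Sketch of maximum-likelihood (Richardson-Lucy) deconvolution with a known,
# assumed PSF; a blind variant would alternate updates of image and PSF.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, iterations=20):
    estimate = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1, ::-1]
    for _ in range(iterations):
        denom = fftconvolve(estimate, psf, mode='same')       # predicted blurred image
        ratio = blurred / np.maximum(denom, 1e-12)             # correction factor
        estimate *= fftconvolve(ratio, psf_flip, mode='same')  # multiplicative update
    return estimate

def gaussian_psf(size=9, sigma=2.0):
    ax = np.arange(size) - size // 2
    g = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return g / g.sum()
```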
{"title":"Blind Restoration of Fluorescein Angiography Images","authors":"U. Qidwai, U. Qidwai","doi":"10.1109/DICTA.2010.35","DOIUrl":"https://doi.org/10.1109/DICTA.2010.35","url":null,"abstract":"In this paper, a new technique is presented to enhance the blurred images obtained from Fluoresce in Angiography (FA) of the retina. One of the main steps in inspecting the eye (especially the deeper image of retina) is to look into the eye using a slit-lamp apparatus that shines a monochromatic light on to the retinal surface and captures the reflection in the camera as the retinal image. When further probing is required, such imaging is preceded by injecting a specialized dye in the eye blood vessels. This dye shines out more prominently in the imaging system and reveals the temporal as well as special behavior of the blood vessels, which, in turn, is useful in the diagnosis process. While most of the cases, the image produced is quite clean and easily used by the ophthalmologists, there are still many cases in which these images come out to be very blurred due to the disease in the eye such as cataract etc… in such cases, having an enhanced image can enable the doctors to start the appropriate treatment for the underlying disease. The proposed technique utilizes the Blind Deconvolution approach using Maximum Likelihood Estimation approach. Further post-processing steps have been proposed as well to locate the Macula in the image which is the zero-center of the image formed on the retina. The post-processing steps include thresholding, Region Growing, and morphological operations.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"4 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126630330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Segmenting Characters from License Plate Images with Little Prior Knowledge
W. Jia, Xiangjian He, Qiang Wu
In this paper, to enable a fast and robust system for automatically recognizing license plates with various appearances, new, simple but efficient algorithms are developed to segment characters from extracted license plate images. Our goal is to segment characters properly from a license plate image region. Unlike existing methods for segmenting degraded machine-printed characters, our algorithms are based on very weak assumptions and use no prior knowledge about the format of the plates, in order to be applicable to a wider range of applications. Experimental results demonstrate the promising efficiency and flexibility of the proposed scheme.
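The algorithms themselves are not detailed in the abstract; the sketch below is a simple baseline built on similarly weak assumptions: Otsu binarization of the plate region followed by splitting characters at the valleys of the vertical projection profile. The noise threshold and function name are illustrative only.

```python
# Baseline sketch (not the paper's algorithm): binarize the plate region and
# cut characters where the vertical projection profile drops to zero.
import cv2
import numpy as np

def segment_characters(plate_gray):
    _, binary = cv2.threshold(plate_gray, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    column_mass = (binary > 0).sum(axis=0)        # vertical projection profile
    in_char, start, boxes = False, 0, []
    for x, m in enumerate(column_mass):
        if m > 0 and not in_char:
            in_char, start = True, x
        elif m == 0 and in_char:
            in_char = False
            if x - start > 2:                     # ignore very thin runs (noise)
                boxes.append((start, x))
    if in_char:
        boxes.append((start, len(column_mass)))
    return [binary[:, a:b] for a, b in boxes]     # one binary image per character
```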
{"title":"Segmenting Characters from License Plate Images with Little Prior Knowledge","authors":"W. Jia, Xiangjian He, Qiang Wu","doi":"10.1109/DICTA.2010.48","DOIUrl":"https://doi.org/10.1109/DICTA.2010.48","url":null,"abstract":"In this paper, to enable a fast and robust system for automatically recognizing license plates with various appearances, new and simple but efficient algorithms are developed to segment characters from extracted license plate images. Our goal is to segment characters properly from a license plate image region. Different from existing methods for segmenting degraded machine-printed characters, our algorithms are based on very weak assumptions and use no prior knowledge about the format of the plates, in order for them to be applicable to wider applications. Experimental results demonstrate promising efficiency and flexibility of the proposed scheme.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"184 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122072999","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Multiscale Visual Object Detection for Unsupervised Ubiquitous Projection Based on a Portable Projector-Camera System
Thitirat Siriborvornratanakul, Masanori Sugimoto
Ubiquitous projection is a recent effort that tries to close the gap between the physical world and the virtual world by using a mobile projector. Using a projector and a camera together creates a conflict between their preferred lighting conditions, making it difficult to implement a robust visual detector while retaining ubiquity. In this paper, we focus on techniques of visual object detection for a portable projector-camera system. The goal is to create a visual detector that requires no guidance from a user and is robust to different lighting conditions. Our investigation involves the multiscale concept, using Canny edge detection as a representative detector. Five image simplification filters applied to the multiscale detection are examined for both accuracy and speed. In addition, preprocessing using histogram equalization and post-processing are applied to ensure robustness in a real-world scenario, and to guarantee that the detection will always successfully detect objects using a constant set of parameters defined offline. Finally, we show that using multiscale detection in a parallel manner can speed up detection without affecting its accuracy.
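The sketch below combines the named ingredients (histogram equalization as preprocessing, an image simplification filter, and Canny edge detection at several scales) in one assumed way, by taking the union of the edge maps across scales; the paper's five filters, its post-processing and its parallel scheme are not reproduced, and all parameter values are assumptions.

```python
# Illustrative multiscale Canny detection: equalize, simplify at each scale
# with a bilateral filter, detect edges, and merge the edge maps.
import cv2
import numpy as np

def multiscale_canny(gray, scales=(1, 2, 4), low=50, high=150):
    eq = cv2.equalizeHist(gray)                   # preprocessing for lighting robustness
    edges = np.zeros_like(eq)
    for s in scales:
        small = cv2.resize(eq, None, fx=1.0 / s, fy=1.0 / s,
                           interpolation=cv2.INTER_AREA)
        simplified = cv2.bilateralFilter(small, 7, 50, 50)   # one possible simplification filter
        e = cv2.Canny(simplified, low, high)
        e = cv2.resize(e, (eq.shape[1], eq.shape[0]),
                       interpolation=cv2.INTER_NEAREST)
        edges = cv2.bitwise_or(edges, e)          # union of edge maps across scales
    return edges
```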
{"title":"Multiscale Visual Object Detection for Unsupervised Ubiquitous Projection Based on a Portable Projector-Camera System","authors":"Thitirat Siriborvornratanakul, Masanori Sugimoto","doi":"10.1109/DICTA.2010.109","DOIUrl":"https://doi.org/10.1109/DICTA.2010.109","url":null,"abstract":"Ubiquitous projection is the recent effort that tries to close the gap between the physical world and the virtual world by using a mobile projector. Using a projector and camera together has a conflict of preferred light conditions, making it difficult to implement a robust visual detector while retaining ubiquity. In this paper, we focus on techniques of visual object detection for a portable projector-camera system. The goal is to create a visual detector that requires no guidance from a user and is robust to different light conditions. Our investigation involves the multiscale concept using Canny edge detection as a representative detector. Five image simplification filters applied to the multiscale detection are examined for both accuracy and speed. In addition, preprocessing using histogram equalization and post processing are applied to ensure robustness in a real-world scenario, and to guarantee that the detection will always successfully detect objects using a constant set of parameters defined offline. Finally, we showed that using multiscale detection in a parallel manner can speed up the detection while not affecting the accuracy of the detection.","PeriodicalId":246460,"journal":{"name":"2010 International Conference on Digital Image Computing: Techniques and Applications","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2010-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129581374","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5