Latest Publications: 14th International Conference on Image Analysis and Processing (ICIAP 2007)
Score-level fusion of fingerprint and face matchers for personal verification under "stress" conditions
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.114
G. Marcialis, F. Roli
Fusion of multiple face and fingerprint matchers, based on different biometrics, for personal authentication has been investigated in recent years. However, the performance achievable when the expected degree of subject cooperation differs from the real one has not yet been sufficiently studied. In this paper, we investigate the performance of several score-level fusion rules when the test set is acquired under non-cooperative ("stress") conditions. Results show that fusion increases the robustness of the system under strong changes in the subject's degree of cooperation.
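The fixed score-level fusion rules typically studied in this line of work (sum, product, min, max of normalized matcher scores) can be sketched as follows. This is an illustrative sketch, not the paper's implementation; the function name and the toy scores are invented:

```python
import numpy as np

def fuse_scores(face_scores, finger_scores, rule="sum"):
    """Combine two matchers' normalized scores with a fixed fusion rule."""
    s = np.vstack([face_scores, finger_scores])
    if rule == "sum":
        return s.mean(axis=0)   # average of the two matchers' scores
    if rule == "product":
        return s.prod(axis=0)
    if rule == "max":
        return s.max(axis=0)
    if rule == "min":
        return s.min(axis=0)
    raise ValueError(f"unknown rule: {rule}")

# Toy normalized scores in [0, 1] for three verification attempts.
face = np.array([0.9, 0.2, 0.6])
finger = np.array([0.7, 0.4, 0.1])
fused = fuse_scores(face, finger, "sum")   # averages: 0.8, 0.3, 0.35
```

A threshold on the fused score then yields the accept/reject decision; the point of the paper is that such fused scores degrade more gracefully than either matcher alone when the test conditions are non-cooperative.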
Cited by: 23
Rigid Image Registration based on Pixel Grouping
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.111
Demetrios Gerogiannis, Christophoros Nikou, A. Likas
We propose a pixel similarity-based algorithm enabling accurate rigid registration between single and multimodal images. The method relies on the partitioning of a reference image by a Gaussian mixture model (GMM). This partition is then projected onto the image to be registered. The main idea is that a Gaussian component in the reference image corresponds to a Gaussian component in the image to be registered. If the images are correctly registered, the total distance between the corresponding components is minimal. An advantage of the proposed method is that it can handle multidimensional (vector-valued) images, where histogram-based methods such as the widely used mutual information are not tractable due to the high dimension of the data. Experimental results also indicate that, even for images with low SNR, the proposed algorithm compares favorably to the histogram-based mutual information method that is widely used in a variety of applications.
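The core idea — score an alignment by the total distance between corresponding intensity clusters of the two images — can be sketched with a tiny 1-D k-means standing in for the paper's GMM, and sorted-center matching standing in for its projection step. All names and the clustering shortcut are illustrative assumptions, not the authors' algorithm:

```python
import numpy as np

def kmeans_1d(x, k, iters=50):
    """Tiny 1-D k-means, a stand-in for the paper's GMM partition."""
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))  # deterministic init
    for _ in range(iters):
        labels = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

def partition_distance(ref, mov, k=3):
    """Total distance between matched (sorted) cluster centers of the two
    images' intensities; lower values suggest better alignment."""
    return float(np.abs(kmeans_1d(ref, k) - kmeans_1d(mov, k)).sum())
```

In a registration loop, `partition_distance` would be evaluated for candidate rigid transforms of the moving image, and the transform minimizing the total component distance would be kept.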
Cited by: 6
Automatic Detection of Facial Landmarks from AU-coded Expressive Facial Images
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.30
Y. Gizatdinova, Veikko Surakka
The present aim was to develop a fully automatic, feature-based method for expression-invariant detection of facial landmarks from still facial images. It is a continuation of our earlier work, where we found that certain muscle contractions had a deteriorating effect on feature-based landmark detection, especially in the lower face. Taking this crucial facial behavior into account, we introduced improvements to the method that allow facial landmarks to be detected fully automatically from expressive images of high complexity. In the method, information on local oriented edges is utilized to compose edge maps of the image at two levels of resolution. The landmark candidates resulting from this step are further verified by edge orientation matching. We use knowledge of face geometry to find the proper spatial arrangement of the candidates. The results demonstrate a high overall performance of the method when tested on a wide range of facial displays.
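The first step the abstract describes — composing edge maps from local oriented edges — can be sketched as quantizing gradient orientations into a small number of bins. The bin count, threshold, and function name are illustrative choices, not values from the paper:

```python
import numpy as np

def oriented_edge_map(img, n_orient=8, thresh=0.1):
    """Label each pixel with a quantized gradient-orientation bin
    (0..n_orient-1), or -1 where the gradient is too weak to be an edge."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    angle = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = (angle // (2 * np.pi / n_orient)).astype(int) % n_orient
    bins[mag < thresh] = -1
    return bins

# A vertical step edge: strong horizontal gradients, orientation bin 0.
step = np.tile([0.0, 0.0, 1.0, 1.0], (4, 1))
```

Running the same extraction on a downsampled copy of the image would give the second, coarser resolution level mentioned in the abstract; landmark candidates are then regions with characteristic orientation patterns.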
Cited by: 17
Integrated Edge and Corner Detection
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.80
S. Coleman, B. Scotney, D. Kerr
Corner detection is used in many computer vision applications that require fast and efficient feature matching. For tasks such as robot localisation and navigation, the use of corners for matching is preferred over edges or other, larger, features. In recent years, finite-element-based methods have been used to develop gradient operators for edge detection with improved angular accuracy over standard techniques. We extend this work to corner detection, enabling edge and corner detection to be integrated. We demonstrate that accuracy is comparable to well-known existing corner detectors and that computation time can be significantly reduced, making the approach appropriate for real-time computer vision and robotics.
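For context, a standard gradient-based corner measure of the kind the paper compares against is the structure-tensor (Harris) response. The sketch below uses plain `np.gradient` where the paper derives finite-element operators, so it illustrates the baseline idea only:

```python
import numpy as np

def box_sum(a, r=1):
    """Sum of each (2r+1) x (2r+1) neighbourhood, with zero padding."""
    p = np.pad(a, r)
    out = np.zeros_like(a, dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def corner_response(img, k=0.05):
    """Harris-style structure-tensor response: positive near corners,
    negative along edges, ~0 in flat regions."""
    gy, gx = np.gradient(img.astype(float))
    sxx = box_sum(gx * gx)
    syy = box_sum(gy * gy)
    sxy = box_sum(gx * gy)
    return sxx * syy - sxy ** 2 - k * (sxx + syy) ** 2

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0   # a bright square with four corners
r = corner_response(img)
```

Because the same gradient images feed both the edge magnitude and the structure tensor, edge and corner detection naturally share computation, which is the integration the paper exploits with its own operators.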
Cited by: 8
Effective color space representation for wavelet based compression of HDR images
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.57
M. Okuda, N. Adami
Effective color image and video coding usually couples a compression method with the most suitable color space representation. This work extends a previously proposed high dynamic range (HDR) image coding method, which combines a logarithmic color adaptation module with a JPEG2000 codec. Here we propose to replace the original preprocessing stage with a more suitable one, based on the LogLuv color space representation, in order to take full advantage of wavelet-based coding. The experimental comparisons confirm that the proposed method improves compression performance and simplifies the overall coding scheme.
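The motivation for LogLuv-style preprocessing is that HDR luminance spans many orders of magnitude, which a log mapping compresses into a range that wavelet coders handle well. The sketch below shows only this log-luminance round trip (with Rec. 709 luminance weights); it is a much-simplified stand-in for the full LogLuv representation, which also encodes chromaticity:

```python
import numpy as np

def luminance(rgb):
    """Rec. 709 luminance from linear RGB (last axis of length 3)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def encode_log_luminance(y, eps=1e-6):
    """Map HDR luminance into a log domain before wavelet coding."""
    return np.log2(y + eps)

def decode_log_luminance(le, eps=1e-6):
    return np.exp2(le) - eps

# Five orders of magnitude of luminance survive the round trip.
y = np.array([1e-2, 1.0, 1e3])
```

After encoding, the log-luminance channel would be fed to the JPEG2000 codec in place of the original preprocessing output.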
Cited by: 8
Automatic extraction of LIDAR data classification rules
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.31
P. Zingaretti, E. Frontoni, G. Forlani, C. Nardinocchi
LIDAR (Light Detection And Ranging) data are a primary data source for digital terrain model (DTM) generation and 3D city models. This paper presents an AdaBoost algorithm for identifying rules that classify raw LIDAR data, mainly as buildings, ground, and vegetation. First, raw data are filtered, interpolated over a grid, and segmented. Then, geometric and topological relationships among the regions resulting from segmentation constitute the input to the tree-structured classification algorithm. Results obtained on data sets gathered over the town of Pavia (Italy) are compared with those obtained by a rule-based approach previously presented by the authors for the classification of the regions.
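AdaBoost over simple threshold rules ("stumps") is the classic way boosting turns per-region features into classification rules. A minimal sketch follows; the 1-D toy feature and all names are illustrative, and this is generic AdaBoost, not the paper's exact tree-structured variant:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=5):
    """Minimal AdaBoost over one-feature threshold stumps (labels in ±1)."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                      # sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                       # feature index
            for t in np.unique(X[:, j]):         # candidate threshold
                for s in (1, -1):                # polarity
                    pred = s * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, s)
        err, j, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)    # avoid log(0)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] >= t, 1, -1)
        w = w * np.exp(-alpha * y * pred)        # reweight mistakes up
        w /= w.sum()
        stumps.append((alpha, j, t, s))
    return stumps

def predict_adaboost(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.where(score >= 0, 1, -1)

# Toy 1-D feature: regions with value below 2 are class -1, the rest +1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
labels = np.array([-1, -1, 1, 1])
model = train_adaboost(X, labels, n_rounds=3)
```

Each learned stump `(alpha, feature, threshold, polarity)` is a human-readable rule of the form "if feature j ≥ t then class s", weighted by `alpha` — which is what makes boosting attractive for rule extraction.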
Cited by: 13
Cooperative Object Tracking with Multiple PTZ Cameras
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.46
I. Everts, N. Sebe, Graeme A. Jones
Research in visual surveillance systems is shifting from using a few stationary, passive cameras to employing large heterogeneous sensor networks. One particularly promising type of sensor is the pan-tilt-zoom (PTZ) camera, which can cover a potentially much larger area than passive cameras and can obtain much higher-resolution imagery through its zoom capability. In this paper, a system that can track objects with multiple calibrated PTZ cameras in a cooperative fashion is presented. Tracking and calibration results are combined with several image processing techniques in a statistical segmentation framework, through which the cameras can hand targets over to each other. A prototype system that operates in real time is presented.
Cited by: 69
Sight enhancement through video fusion in a surveillance system
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.117
A. Masini, Francesco Branchitta, M. Diani, G. Corsini
In this paper, we consider the problem of fusing two video streams acquired by an RGB camera and a sensor operating in the long-wave infrared (LWIR). The application of interest is area surveillance, and the fusion process aims at enhancing the human perception of the monitored scene. We propose a fusion procedure in which the background and the moving objects are separated and fused by means of different strategies. With respect to standard video fusion techniques, this approach has the advantage of reducing the computational load and mitigating rapid brightness variations in the fused video. It is also less sensitive to noise. We discuss experimental results obtained on a typical area surveillance scenario and demonstrate the effectiveness of the proposed method. For this purpose, the analysis is carried out both subjectively, in terms of the visual quality of the fused video stream, and objectively, in terms of standard image quality indexes. The computational load is also evaluated.
Cited by: 12
Transformation invariant SOM clustering in Document Image Analysis
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.126
S. Marinai, E. Marino, G. Soda
In this paper, we propose combining the self-organizing map (SOM) and the tangent distance for effective clustering in document image analysis. The proposed model (SOM_TD) is used for character and layout clustering, with applications to word retrieval and page classification. By using the tangent distance, the SOM clustering can be made more tolerant of small local transformations of the input patterns.
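The one-sided tangent distance underlying such a scheme measures the distance from a pattern to the affine manifold spanned by a prototype's tangent vectors (its small transformations), rather than to the prototype point itself. A minimal sketch, with a toy 2-D prototype and a single hand-picked tangent vector:

```python
import numpy as np

def tangent_distance(x, p, T):
    """One-sided tangent distance: smallest Euclidean distance from x to
    the affine manifold {p + T @ a} spanned by tangent vectors T (d x k)."""
    a, *_ = np.linalg.lstsq(T, x - p, rcond=None)  # best coefficients a
    return float(np.linalg.norm(x - p - T @ a))

# Prototype at the origin; one tangent vector allows horizontal shifts,
# so horizontally shifted copies of the prototype cost nothing.
p = np.zeros(2)
T = np.array([[1.0], [0.0]])
```

In a SOM, this distance would replace the Euclidean distance when selecting the best-matching unit, so that a slightly transformed character still activates the same map node.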
Cited by: 1
Becoming Visually Familiar
Pub Date : 2007-09-10 DOI: 10.1109/ICIAP.2007.37
M. C. Santana, O. Déniz-Suárez, J. Lorenzo-Navarro, D. Hernández-Sosa
Automatic face recognition has mainly been tackled by matching a new image to a set of previously computed identity models. The literature describes approaches where those identity models are based on a single sample or on a set of them. However, face representation remains a topic of great debate in the psychology literature, with some results suggesting the use of an average image. In this paper, instead of restricting our system to a fixed, precomputed classifier, the system learns iteratively from the experience extracted from each meeting. The experiments presented introduce an exemplar-average-based approach. The results show performance similar to an approach based on multiple exemplars per identity, but with reduced storage and processing costs. The process runs autonomously, using an automatic face detection system that meets people, except for the human supervision needed to confirm or correct each meeting classification suggested by the system.
Cited by: 0