
Proceedings 11th International Conference on Image Analysis and Processing: Latest Publications

Detection of blocking artifacts of compressed still images
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957077
G. Triantafyllidis, D. Tzovaras, M. Strintzis
A novel frequency domain technique for image blocking artifact detection is presented. The algorithm detects the regions of the image which present visible blocking artifacts. This detection is performed in the frequency domain and uses the estimated relative quantization error calculated when the DCT coefficients are modeled by a Laplacian probability function. Experimental results illustrating the performance of the proposed method are presented and evaluated.
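The abstract gives no implementation details, but the general idea (per-block DCT analysis with a Laplacian model of the coefficient statistics) can be illustrated with a minimal Python sketch. The quantization step, the visibility threshold, and the simple error ratio below are illustrative assumptions, not the estimator used in the paper.

```python
import numpy as np
from scipy.fftpack import dct

def block_dct(img, bs=8):
    """Split a grayscale image into bs x bs blocks and return their 2-D DCTs."""
    h, w = img.shape
    h, w = h - h % bs, w - w % bs
    blocks = img[:h, :w].reshape(h // bs, bs, w // bs, bs).transpose(0, 2, 1, 3)
    return dct(dct(blocks, axis=-1, norm='ortho'), axis=-2, norm='ortho')

def relative_quantization_error(coeffs, q_step):
    """Rough per-block estimate of the relative quantization error of the AC
    coefficients, assuming a zero-mean Laplacian with ML-fitted scale."""
    ac = coeffs.reshape(coeffs.shape[0], coeffs.shape[1], -1)[..., 1:]  # drop the DC term
    scale = np.mean(np.abs(ac), axis=-1) + 1e-9      # ML estimate of the Laplacian scale b
    mse_quant = (q_step ** 2) / 12.0                 # error power of a uniform quantizer (assumed)
    signal_power = 2.0 * scale ** 2                  # variance of a Laplacian(b)
    return mse_quant / signal_power                  # large values -> blocking likely visible

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(128, 128)).astype(float)  # stand-in for a decoded JPEG
    err_map = relative_quantization_error(block_dct(image), q_step=16.0)
    print("blocks flagged as artifact-prone:", int(np.sum(err_map > 0.5)))  # 0.5 is an illustrative threshold
```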
Citations: 4
Automatic graph extraction from color images
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957026
T. Lourens, HIroshi G. Okuno, H. Kitano
An approach to symbolic contour extraction is described that consists of three stages: enhancement, detection, and extraction of contours and corners. Contours and corners are enhanced by models of monkey cortical complex and endstopped cells. Detection of corners and local contour maxima is performed by selection of local maxima in both contour and corner enhanced images. These maxima form the anchor points of a greedy contour following algorithm that extracts the contours. This algorithm is based on the idea of spatially linking neurons along a contour that fire in synchrony to indicate an extracted contour. The extracted contours and detected corners represent the symbolic representation of the image. The advantage of the proposed model over other models is that the same low constant thresholds for corner and local contour maxima detection are used for different images. Closed contours are guaranteed by the contour following algorithm to yield a fully symbolic representation which is more suitable for reasoning and recognition. In this respect our methodology is unique, and clearly different from the standard (edge) contour detection methods. The results of the extracted contours (when displayed as being detected) show similar or better results compared to the SUSAN and Canny-CSS detectors.
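As a rough illustration of the final stage only, the sketch below greedily follows a contour from an anchor point by repeatedly stepping to the strongest unvisited neighbour. The cortical-cell enhancement stage is not reproduced; the random response map and the threshold are stand-ins.

```python
import numpy as np

def greedy_follow(strength, seed, threshold=0.2, max_len=500):
    """From a seed (anchor) pixel, repeatedly step to the strongest unvisited
    8-neighbour whose contour response exceeds the threshold."""
    h, w = strength.shape
    path, visited = [seed], {seed}
    y, x = seed
    for _ in range(max_len):
        best, best_val = None, threshold
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy or dx) and 0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited:
                    if strength[ny, nx] > best_val:
                        best, best_val = (ny, nx), strength[ny, nx]
        if best is None:          # no neighbour above threshold: stop following
            break
        path.append(best)
        visited.add(best)
        y, x = best
    return path

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    response = rng.random((64, 64))   # stand-in for a contour-enhanced image
    anchor = tuple(np.unravel_index(np.argmax(response), response.shape))
    print("contour length from strongest anchor:", len(greedy_follow(response, anchor)))
```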
Citations: 2
Recognition of shape-changing hand gestures based on switching linear model
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.956979
Mun-Ho Jeong, Y. Kuno, N. Shimada, Y. Shirai
We present a method to track and recognise shape-changing hand gestures simultaneously. The switching linear model using the active contour model corresponds well to temporal shapes and motions of hands. Inference in the switching linear model is computationally intractable and therefore the learning process cannot be performed via the exact EM (expectation maximization) algorithm. However, we present an approximate EM algorithm using a collapsing method in which some Gaussians are merged into a single Gaussian. Tracking is performed through the forward algorithm based on Kalman filtering and the collapsing method. We also present the regularized smoothing, which plays a role in reducing jump changes between the training sequences of state vectors to cope with complex-variable hand shapes. The recognition process is performed by the selection of a model with the maximum likelihood from some learned models while tracking is being performed. Experiments for several shape-changing hand gestures are demonstrated.
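The collapsing method mentioned here is, in its general form, a moment-matching merge of several Gaussians into one. A minimal sketch of that operation follows; the example weights, means, and covariances are placeholders rather than the hand-tracking state used in the paper.

```python
import numpy as np

def collapse_gaussians(weights, means, covs):
    """Merge a mixture of Gaussians into a single Gaussian with the same
    overall mean and covariance (moment matching)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    means = np.asarray(means, dtype=float)   # shape (K, d)
    covs = np.asarray(covs, dtype=float)     # shape (K, d, d)
    mean = np.einsum('k,kd->d', w, means)
    diffs = means - mean
    # total covariance = within-component part + between-component spread
    cov = np.einsum('k,kij->ij', w, covs) + np.einsum('k,ki,kj->ij', w, diffs, diffs)
    return mean, cov

if __name__ == "__main__":
    m, c = collapse_gaussians(
        weights=[0.3, 0.7],
        means=[[0.0, 0.0], [2.0, 1.0]],
        covs=[np.eye(2), 0.5 * np.eye(2)],
    )
    print("collapsed mean:", m)
    print("collapsed covariance:\n", c)
```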
Citations: 6
A modified fuzzy ART for image segmentation
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.956992
L. Cinque, G. Foresti, A. Gumina, S. Levialdi
This paper presents a clustering approach for image segmentation based on a modified fuzzy ART model. The goal of the proposed approach is to find a simple model able to instance a prototype for each cluster in order to avoid complex post-processing phases. Some results and comparisons with other models present in the literature, like SOM and original fuzzy ART are presented. Qualitative and quantitative evaluations confirm the validity of our approach.
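The paper's specific modification is not described in the abstract, but the standard fuzzy ART operations it builds on (complement coding, the choice function, the vigilance test, and fast learning) can be sketched as follows; the vigilance and choice parameters are illustrative values.

```python
import numpy as np

def fuzzy_art(samples, rho=0.75, alpha=0.001, beta=1.0):
    """Standard fuzzy ART clustering on features scaled to [0, 1]."""
    X = np.hstack([samples, 1.0 - samples])   # complement coding
    weights = []                              # one weight vector per category
    labels = []
    for x in X:
        # rank existing categories by the choice function |x ^ w| / (alpha + |w|)
        order = sorted(range(len(weights)),
                       key=lambda j: -np.minimum(x, weights[j]).sum() / (alpha + weights[j].sum()))
        for j in order:
            match = np.minimum(x, weights[j]).sum() / x.sum()
            if match >= rho:                  # vigilance test passed: resonance
                weights[j] = beta * np.minimum(x, weights[j]) + (1 - beta) * weights[j]
                labels.append(j)
                break
        else:                                 # no category matched: create a new one
            weights.append(x.copy())
            labels.append(len(weights) - 1)
    return np.array(labels), weights

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    pixels = rng.random((200, 3))             # stand-in for normalized RGB pixel features
    labels, prototypes = fuzzy_art(pixels)
    print("number of clusters found:", len(prototypes))
```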
Citations: 9
Using feature-vector based analysis, based on principal component analysis and independent component analysis, for analysing hyperspectral images
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957027
H. Muhammed, P. Ammenberg, E. Bengtsson
A pixel in a hyperspectral image can be considered as a mixture of the reflectance spectra of several substances. The mixture coefficients correspond to the (relative) amounts of these substances. The benefit of hyperspectral imagery is that many different substances can be characterised and recognised by their spectral signatures. Independent component analysis (ICA) can be used for the blind separation of mixed statistically independent signals. Principal component analysis (PCA) also gives interesting results. The next step is to interpret and use the ICA or PCA results efficiently. This can be achieved by using a new technique called feature-vector based analysis (FVBA), which produces a number of component-feature vector pairs. The obtained feature vectors and the corresponding components represent, in this case, the spectral signatures and the corresponding image weight coefficients (the relative concentration maps) of the different constituting substances.
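A minimal sketch of the decomposition step described above, using scikit-learn's PCA and FastICA on a synthetic cube, is given below; the pairing of components and feature vectors (FVBA itself) is the paper's contribution and is not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

# Synthetic stand-in for a hyperspectral cube: height x width x bands
rng = np.random.default_rng(3)
H, W, B = 32, 32, 40
cube = rng.random((H, W, B))

# Every pixel spectrum becomes one observation (pixels x bands)
X = cube.reshape(-1, B)

# PCA: components_ approximate dominant spectral signatures; the transformed
# values give one weight (abundance-like) map per component
pca = PCA(n_components=3)
pca_weights = pca.fit_transform(X)        # (pixels, 3)
pca_signatures = pca.components_          # (3, bands)

# ICA: the columns of the mixing matrix play the role of spectral signatures
ica = FastICA(n_components=3, max_iter=500, random_state=0)
ica_weights = ica.fit_transform(X)        # (pixels, 3)
ica_signatures = ica.mixing_.T            # (3, bands)

# Reshape the weights back into images: one "concentration map" per component
pca_maps = pca_weights.reshape(H, W, 3)
ica_maps = ica_weights.reshape(H, W, 3)
print(pca_signatures.shape, ica_signatures.shape, pca_maps.shape, ica_maps.shape)
```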
Citations: 14
Quantitative assessment of qualitative color perception in image database retrieval
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957044
M. Albanesi, S. Bandelli, Marco Ferretti
We propose a multiresolution indexing algorithm based on color histogram which exploits the wavelet decomposition and a customized quantization for content-based image retrieval. The aim is to extract automatically the chromatic content of the images and to represent it with simple, robust, efficient and low computational cost descriptors. The proposed method has been integrated for a complete CBIR system, where the classification of images is performed on a qualitative subjective color perception. The system allows testing the semantic and chromatic class homogeneity previously defined by a human observer. Experimental results have been evaluated by the quantitative assessment parameters (averaged precision and recall). Multiresolution proved to be a valid framework to introduce spatiality in color histogram indexing, to dramatically decrease the computational complexity and to validate the qualitative subjective classification.
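A rough sketch of a multiresolution colour-histogram descriptor in this spirit, assuming PyWavelets and a plain uniform quantization (the paper's customized quantization and similarity measure are not reproduced):

```python
import numpy as np
import pywt

def multires_color_index(rgb, levels=3, bins=8):
    """Build one coarse color histogram per wavelet resolution level.
    rgb: float array of shape (H, W, 3) with values in [0, 1]."""
    descriptors = []
    channels = [rgb[..., c] for c in range(3)]
    for _ in range(levels):
        # Keep only the approximation (low-pass) band of each channel
        channels = [pywt.dwt2(ch, 'haar')[0] for ch in channels]
        approx = np.stack(channels, axis=-1)
        # Uniform quantization into bins^3 color cells, then a normalized histogram
        q = np.clip((approx / (approx.max() + 1e-9) * bins).astype(int), 0, bins - 1)
        codes = q[..., 0] * bins * bins + q[..., 1] * bins + q[..., 2]
        hist = np.bincount(codes.ravel(), minlength=bins ** 3).astype(float)
        descriptors.append(hist / hist.sum())
    return np.concatenate(descriptors)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    image = rng.random((128, 128, 3))
    index = multires_color_index(image)
    print("descriptor length:", index.shape[0])   # levels * bins^3 values
```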
Citations: 12
A World Wide Web region-based image search engine
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957041
Y. Kompatsiaris, Evangelia Triantafyllou, M. Strintzis
The development of an intelligent image content-based search engine for the World-Wide Web is presented. This system will offer a new form of media representation and access of content available in the WWW. Information web crawlers continuously traverse the Internet and collect images that are subsequently indexed based on integrated feature vectors. As a basis for the indexing, the K-means algorithm is used, modified so as to take into account the coherence of the regions. Based on the extracted regions, characteristic features are extracted using color texture and shape/region boundary information. These features along with additional information such as the URL location and the date of index procedure are stored in a database. The user can access and search this indexed content through the Web with an advanced and user-friendly interface. The output of the system is a set of links to the content available in the WWW, ranked according to their similarity to the image submitted by the user. Experimental results demonstrate the performance of the system.
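As a loose illustration of the indexing front end, the sketch below runs scikit-learn's KMeans on colour plus normalized pixel coordinates, a crude stand-in for the coherence-aware modification described in the abstract, and builds a tiny per-region descriptor.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_regions(rgb, k=4, spatial_weight=0.5):
    """Cluster pixels on (R, G, B, x, y); the coordinate features encourage
    spatially coherent regions."""
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.column_stack([
        rgb.reshape(-1, 3),
        spatial_weight * (xs.ravel() / w)[:, None],
        spatial_weight * (ys.ravel() / h)[:, None],
    ])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

def region_descriptor(rgb, labels, region):
    """Mean colour and relative size of one region: a tiny example of the
    per-region feature vector that a search index would store."""
    mask = labels == region
    return np.concatenate([rgb[mask].mean(axis=0), [mask.mean()]])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    image = rng.random((64, 64, 3))
    labels = segment_regions(image)
    print("region 0 descriptor:", region_descriptor(image, labels, 0))
```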
Citations: 21
Feature based merging of application specific regions
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.956985
A. Rydberg, G. Borgefors
Over-segmentation is a common problem for all kinds of segmentation tasks. Automated segmentation of natural scenes is no exception. This paper proposes a solution to the over-segmentation problem, with the emphasis on satellite images of farmland. In many cases, an agricultural field can be considered as a flat region having a rather large area, a compact shape, and straight region boundaries because it is a man-made object. Our approach for dividing farmland into individual field units uses region shape, as well as spectral information, when merging over-segmented regions. The results from the presented method are compared to two different methods of segmentation as well as interpreted field boundaries. The results show that task-specific knowledge adds important information to the decision step for the merging procedure of regions. About 70% of the edges are classified within one pixel away from the ground truth edges using our methods.
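A toy sketch of a merge test that combines spectral similarity with the shape cues named in the abstract (area and compactness) is shown below; the thresholds and the region representation are illustrative assumptions, not the paper's.

```python
import numpy as np

def compactness(area, perimeter):
    """4*pi*area / perimeter^2: 1 for a disc, lower for ragged regions."""
    return 4.0 * np.pi * area / (perimeter ** 2 + 1e-9)

def should_merge(region_a, region_b,
                 spectral_tol=0.1, min_area=500, min_compactness=0.3):
    """Merge two over-segmented regions if their mean spectra are close and at
    least one of them is too small or too ragged to be a plausible field."""
    spectral_dist = np.linalg.norm(np.asarray(region_a['mean_spectrum']) -
                                   np.asarray(region_b['mean_spectrum']))
    too_small = min(region_a['area'], region_b['area']) < min_area
    too_ragged = min(compactness(region_a['area'], region_a['perimeter']),
                     compactness(region_b['area'], region_b['perimeter'])) < min_compactness
    return spectral_dist < spectral_tol and (too_small or too_ragged)

if __name__ == "__main__":
    a = {'mean_spectrum': [0.31, 0.42, 0.55], 'area': 120, 'perimeter': 60}
    b = {'mean_spectrum': [0.30, 0.44, 0.53], 'area': 900, 'perimeter': 130}
    print("merge a and b:", should_merge(a, b))
```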
Citations: 1
Recognition driven burst transmissions in distributed third generation surveillance systems
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957057
F. Oberti, G. Ferrari, C. Regazzoni
A general architecture for distributed third-generation surveillance systems is discussed. In particular an approach for selecting the optimal distribution of intelligence (task allocation) is presented. The introduction of recognition tasks which can cause the interruption of the processing and transmission flow is discussed. Experimental results over a simulated system illustrate the presented approach for optimal distribution of intelligence.
Citations: 2
CONTEXT: a technique for image retrieval integrating CONtour and TEXTure information
Pub Date: 2001-09-26 DOI: 10.1109/ICIAP.2001.957013
Riccardo Distasi, M. Nappi, M. Tucci, S. Vitulano
Many intrinsically 2-dimensional visual signals can be effectively encoded in a 1D form. This simpler representation is well-suited to both pattern recognition and image retrieval tasks. In particular, this paper deals with contour and texture, combined together in order to obtain an effective technique for content-based image indexing. The proposed method, named CONTEXT, represents CONtours and TEXTures by a vector containing the location and energy of the signal maxima. Such a representation has been utilized as the feature extraction engine in an image retrieval system for image databases. The homogeneous treatment reserved to both contour and texture information makes the algorithm elegant and easy to implement and extend. The data used for experimentally assessing CONTEXT were contours and textures from various application domains, plus a database of medical images. The experiments reveal a high discriminating power which in turn yields a high perceived quality of the retrieval results.
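A minimal sketch of the 1-D descriptor idea, encoding a signal by the positions and energies of its local maxima with scipy.signal.find_peaks; the energy definition and the fixed descriptor length are assumptions rather than the paper's exact formulation.

```python
import numpy as np
from scipy.signal import find_peaks

def maxima_descriptor(signal, n_peaks=8):
    """Represent a 1-D signal by the (normalized position, energy) of its
    strongest local maxima, zero-padded to a fixed length."""
    peaks, props = find_peaks(signal, height=0.0)
    if len(peaks) == 0:
        return np.zeros(2 * n_peaks)
    heights = props['peak_heights']
    order = np.argsort(heights)[::-1][:n_peaks]   # keep the strongest maxima
    positions = peaks[order] / len(signal)        # normalized locations
    energies = heights[order] ** 2                # a simple energy measure (assumed)
    desc = np.zeros(2 * n_peaks)
    desc[:len(order)] = positions
    desc[n_peaks:n_peaks + len(order)] = energies
    return desc

if __name__ == "__main__":
    t = np.linspace(0, 1, 256)
    trace = np.sin(12 * np.pi * t) + 0.3 * np.sin(40 * np.pi * t)  # stand-in 1-D contour/texture trace
    d1 = maxima_descriptor(trace)
    d2 = maxima_descriptor(np.roll(trace, 10))
    print("descriptor distance between signal and shifted copy:", np.linalg.norm(d1 - d2))
```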
Citations: 10