
Latest publications from the 2005 IEEE International Conference on Multimedia and Expo

Separable bilateral filtering for fast video preprocessing
Pub Date : 2005-07-06 DOI: 10.1109/icme.2005.1521458
Tuan Q. Pham, L. Vliet
Bilateral filtering is an edge-preserving filtering technique that employs both geometric closeness and photometric similarity of neighboring pixels to construct its filter kernel. Multi-dimensional bilateral filtering is computationally expensive because the adaptive kernel has to be recomputed at every pixel. In this paper, we present a separable implementation of the bilateral filter. The separable implementation offers equivalent adaptive filtering capability at a fraction of the execution time of the traditional filter. Because of this efficiency, the separable bilateral filter can be used for fast preprocessing of images and videos. Experiments show that better image quality and higher compression efficiency are achievable if the original video is preprocessed with the separable bilateral filter.
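As a sketch of the idea, a separable bilateral filter can be implemented by running a 1-D bilateral filter along the rows and then along the columns of the image; the NumPy code below is a minimal illustration (the parameter values are illustrative, not the paper's):

```python
import numpy as np

def bilateral_1d(line, sigma_s, sigma_r, radius):
    """Bilateral-filter a 1-D signal: the weight of each neighbor combines
    a spatial Gaussian (geometric closeness) with a range Gaussian
    (photometric similarity to the center pixel)."""
    out = np.empty(line.shape[0], dtype=float)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-(offsets ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(line.astype(float), radius, mode='edge')
    for i in range(line.shape[0]):
        window = padded[i:i + 2 * radius + 1]
        rng = np.exp(-((window - line[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * rng
        out[i] = np.sum(w * window) / np.sum(w)
    return out

def separable_bilateral(img, sigma_s=2.0, sigma_r=20.0, radius=4):
    """Approximate 2-D bilateral filtering by filtering rows, then columns;
    the adaptive range kernel is recomputed per pixel, but only over
    2*(2*radius+1) neighbors instead of a full (2*radius+1)^2 window."""
    rows = np.apply_along_axis(bilateral_1d, 1, img, sigma_s, sigma_r, radius)
    return np.apply_along_axis(bilateral_1d, 0, rows, sigma_s, sigma_r, radius)
```

With a small range sigma relative to an edge step, the filter smooths flat regions while leaving the step essentially untouched, which is the property exploited for preprocessing before compression.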
Citations: 271
Segmentation of 3D Objects Using Pulse-Coupled Oscillator Networks
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521383
Eva Ceccarelli, A. Bimbo, P. Pala
Along with image and video libraries, archives of 3D models have recently gained increasing attention. Accordingly, there is an increasing demand for solutions enabling retrieval of 3D models based on global properties as well as properties of object parts. In particular, retrieval based on object parts relies on segmentation of 3D objects into their constituent parts. This is a challenging task, as the identification of object parts should conform to human perceptual judgement. Therefore, the definition of models and solutions that enable decomposition of 3D objects into perceptually relevant parts is a fundamental step towards effective retrieval based on object parts. However, only a few approaches have been proposed to support segmentation of 3D meshes into perceptually relevant parts. In this paper, we propose a model based on pulse-coupled oscillator networks. Preliminary experiments are reported to demonstrate the validity and potential of the proposed solution.
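The abstract does not give the network equations, but the flavor of oscillator-based segmentation can be sketched with a simplified Kuramoto-style phase model in which only units with similar feature values are coupled; units that synchronize then mark the same segment. Everything below (the coupling rule, the constants, the feature threshold) is an illustrative assumption, not the authors' model:

```python
import numpy as np

def oscillator_phases(features, k=2.0, tol=1.0, steps=600, dt=0.05):
    """Evolve one phase oscillator per mesh element; oscillators whose
    feature values differ by less than `tol` pull each other into phase,
    so each similarity group phase-locks while unrelated groups drift."""
    f = np.asarray(features, dtype=float)
    coupled = (np.abs(f[:, None] - f[None, :]) < tol).astype(float)
    rng = np.random.default_rng(1)
    phase = rng.uniform(0.0, 2.0 * np.pi, f.shape[0])
    for _ in range(steps):
        diff = phase[None, :] - phase[:, None]   # diff[i, j] = phase_j - phase_i
        phase = phase + dt * k * (coupled * np.sin(diff)).sum(axis=1)
    return phase % (2.0 * np.pi)
```

After the phases settle, elements can be labeled by clustering their final phase values; in the toy example two feature groups end up internally synchronized.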
Citations: 6
An integrated approach for generic object detection using kernel PCA and boosting
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521600
Saad Ali, M. Shah
In this paper, we present a novel framework for generic object class detection by integrating kernel PCA with AdaBoost. The classifier obtained in this way is invariant to changes in appearance, illumination conditions and surrounding clutter. A nonlinear shape subspace is learned for positive and negative object classes using kernel PCA. Features are derived by projecting example images onto the learned subspaces. Base learners are modeled using a Bayes classifier. AdaBoost is then employed to discover the features that are most relevant for the object detection task at hand. The proposed method has been successfully tested on a wide range of object classes (cars, airplanes, pedestrians, motorcycles, etc.) using standard data sets and has shown good performance. Using a small training set, the classifier learned in this way was able to generalize the intra-class variation while still maintaining a high detection rate. In most object categories, we achieved detection rates above 95% with minimal false alarm rates. We demonstrate the comparative performance of our method against current state-of-the-art approaches.
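A minimal NumPy sketch of the kernel-PCA feature step described above: build an RBF Gram matrix over the training images, double-center it, eigendecompose, and project examples onto the leading components to obtain the features that would then be fed to the boosted classifier. The kernel choice, `gamma`, and the number of components are illustrative assumptions; the boosting stage is omitted:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between the rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_fit(X, n_components=2, gamma=0.5):
    """Learn a kernel-PCA subspace: eigendecompose the centered Gram matrix."""
    K = rbf_kernel(X, X, gamma)
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one     # double-centering
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # unit-norm projections
    return X, alphas, K.mean(axis=0), K.mean(), gamma

def kpca_project(model, Xnew):
    """Project (possibly new) images onto the learned nonlinear subspace;
    the projections serve as detection features."""
    X, alphas, col_mean, total_mean, gamma = model
    K = rbf_kernel(Xnew, X, gamma)
    Kc = K - K.mean(axis=1, keepdims=True) - col_mean + total_mean
    return Kc @ alphas
```

On two well-separated clusters, the first kernel-PCA coordinate separates the classes, which is the property the boosted Bayes learners would exploit.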
Citations: 8
Context-Aware Dynamic Presentation Synthesis for Exploratory Multimodal Environments
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521596
H. Sridharan, Ankur Mani, H. Sundaram, J. Brungart, David Birchfield
In this paper, we develop a novel real-time, interactive, automatic multimodal exploratory environment that dynamically adapts the presented media to the user context. This paper makes two key contributions: (a) the development of a multimodal user-context model, and (b) modeling the dynamics of the presentation to maximize coherence. We develop a novel user-context model comprising interests, media history, interaction behavior and tasks, which evolves based on the specific interaction. We also develop novel metrics between media elements and the user context. The presentation environment dynamically adapts to the current user context. We develop an optimal media selection and display framework that maximizes coherence while constrained by the user context, user goals and the structure of the knowledge in the exploratory environment. The experimental results indicate that the system performs well. The results also show that user-context models significantly improve presentation coherence.
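As an illustration of coherence-maximizing media selection under a user-context constraint, the toy sketch below greedily picks the item whose context relevance plus coherence with the items already chosen is highest. The scoring functions (topic overlap) are invented stand-ins for the paper's metrics, not the authors' formulation:

```python
def relevance(item, interests):
    """Hypothetical user-context score: topic overlap with user interests."""
    return len(item["topics"] & interests)

def coherence(item, chosen):
    """Hypothetical coherence score: mean topic overlap with items shown so far."""
    if not chosen:
        return 0.0
    return sum(len(item["topics"] & c["topics"]) for c in chosen) / len(chosen)

def select_media(candidates, interests, k=3, w=0.5):
    """Greedy selection: at each step add the candidate maximizing
    relevance-to-context plus (weighted) coherence with the selection."""
    chosen, pool = [], list(candidates)
    for _ in range(min(k, len(pool))):
        best = max(pool, key=lambda it: relevance(it, interests) + w * coherence(it, chosen))
        chosen.append(best)
        pool.remove(best)
    return [it["id"] for it in chosen]
```

Greedy maximization is a common stand-in for an exact optimum here; the point is only that both the context term and the coherence term shape the ordering.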
Citations: 2
A Constraint-Based Approach for the Authoring of Multi-Topic Multimedia Presentations
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521489
E. Bertino, E. Ferrari, A. Perego, Diego Santi
Synchronized multimedia applications play an important role in a digital library environment, since they allow one to efficiently disseminate knowledge among differently skilled users through an approach that is more direct than classic 'static' documents. In this paper, we propose a new authoring approach based on an innovative presentation structure and a new class of content-based constraints. Thanks to a flexible heuristic process, these features allow the author to easily combine several multimedia objects into a multi-topic presentation, whose contents can be freely chosen by end users according to their preferences or skills.
Citations: 15
Fast Search Method for Image Vector Quantization Based on Equal-Average Equal-Variance and Partial Sum Concept
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521702
Z. Pan, K. Kotani, T. Ohmi
The encoding process of image vector quantization (VQ) is computationally heavy because it performs many k-dimensional Euclidean distance computations. In order to speed up VQ encoding, it is most important to avoid as many unnecessary exact Euclidean distance computations as possible by first using features of a vector to estimate how large the distance is, so as to reject most unlikely codewords. The mean, the variance, the L2 norm and the partial sum of a vector have been proposed as effective features in previous works on fast VQ encoding. Recently, previous work (Z. Lu et al., 2003) used three features (the mean, the variance and the L2 norm) together to derive the EEENNS search method, which is very search-efficient but still has obvious computational redundancy. This paper aims at further improving the EEENNS method by introducing the partial-sum feature in place of the L2-norm feature so as to prune more of the search space. Mathematical analysis and experimental results confirm that the proposed method is more search-efficient than (Z. Lu et al., 2003).
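The mean-based (equal-average) rejection rule can be sketched concretely. By the Cauchy-Schwarz inequality, k*(mean(x) - mean(c))^2 <= ||x - c||^2, so once the codebook is sorted by codeword mean, the search can expand outward from the codeword whose mean is closest to the input's mean and stop a direction as soon as the mean bound alone exceeds the best distance found so far. This NumPy sketch implements only that mean test, not the full EEENNS variance/partial-sum machinery:

```python
import numpy as np

def vq_encode_fast(vectors, codebook):
    """Nearest-codeword search with equal-average rejection: full k-D
    distances are computed only for codewords whose mean passes the bound
    k*(mean_x - mean_c)^2 < current best squared distance."""
    k = codebook.shape[1]
    order = np.argsort(codebook.mean(axis=1))
    cb, means = codebook[order], codebook.mean(axis=1)[order]
    labels = np.empty(len(vectors), dtype=int)
    for n, x in enumerate(vectors):
        mx = x.mean()
        j = min(max(int(np.searchsorted(means, mx)), 0), len(cb) - 1)
        best, dmin2 = j, np.sum((x - cb[j]) ** 2)
        lo, hi = j - 1, j + 1
        while lo >= 0 or hi < len(cb):
            if lo >= 0:
                if k * (mx - means[lo]) ** 2 >= dmin2:
                    lo = -1                      # all lower means are rejected too
                else:
                    d2 = np.sum((x - cb[lo]) ** 2)
                    if d2 < dmin2:
                        best, dmin2 = lo, d2
                    lo -= 1
            if hi < len(cb):
                if k * (mx - means[hi]) ** 2 >= dmin2:
                    hi = len(cb)                 # all higher means are rejected too
                else:
                    d2 = np.sum((x - cb[hi]) ** 2)
                    if d2 < dmin2:
                        best, dmin2 = hi, d2
                    hi += 1
        labels[n] = order[best]
    return labels
```

Because the bound never exceeds the true distance, the result is identical to exhaustive search; only the number of full distance evaluations shrinks.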
Citations: 1
Spatiotemporal saliency for human action recognition
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521452
A. Oikonomopoulos, I. Patras, M. Pantic
This paper addresses the problem of human action recognition by introducing a sparse representation of image sequences as a collection of spatiotemporal events that are localized at points that are salient both in space and time. We detect the spatiotemporal salient points by measuring changes in the information content of pixel neighborhoods not only in space but also in time. We introduce an appropriate distance metric between two collections of spatiotemporal salient points that is based on the Chamfer distance and an iterative linear time warping technique that deals with time expansion or time compression issues. We propose a classification scheme that is based on relevance vector machines and on the proposed distance measure. We present results on real image sequences from a small database depicting people performing 19 aerobic exercises.
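The Chamfer distance between two collections of spatiotemporal salient points can be computed directly; a symmetric variant is sketched below (the paper's exact normalization and the time-warping step may differ):

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between two point sets whose rows are
    salient points (e.g. (x, y, t)): mean nearest-neighbour distance from
    a to b plus mean nearest-neighbour distance from b to a."""
    d = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(-1))
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Being a sum of set-to-set averages, the measure tolerates point sets of different sizes, which is why it suits sparse salient-point representations.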
Citations: 35
Aggregating signatures of MPEG-4 elementary streams
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521390
Yongdong Wu
A complete MPEG-4 stream consists of many elementary streams (ESs), which may be generated by different authors. In the scenario of this paper, each author signs his own authentic elementary stream independently, and an untrusted distributor then aggregates these signatures into a single one. Based on this unique signature, a client is able to verify the received MPEG-4 stream with the certificates of all the authors rather than the certificate of the distributor. In addition, no author can deny what he has signed, even if he is willing to admit a signature on another ES. This aggregated signature scheme is efficient in terms of transmission overhead and verification time, since only one signature is processed on the client side.
Citations: 1
Watermarking based Image Authentication using Feature Amplification
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521497
Shuiming Ye, E. Chang, Qibin Sun
In a typical content- and watermarking-based image authentication approach, a feature is extracted from the given image and then embedded back into the image using a watermarking method. Since the entropy of the feature might be higher than the capacity of the watermarking scheme, or the feature may be represented in a continuous domain, it has to be further quantized before embedding. The loss of information during quantization potentially degrades the overall performance of the authentication scheme. This paper proposes a simple but effective approach that avoids feature quantization by means of an additive feature: the feature is first added into the image before watermark embedding, and later subtracted from the watermarked image. In our experiments, the proposed approach obtains a larger achievable robustness/sensitivity region and has a smaller fuzzy region of authenticity than the typical approach.
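The add-embed-subtract pipeline can be sketched with a toy LSB embedder. The local-mean feature and the LSB watermark below are purely hypothetical stand-ins for the paper's feature and watermarking scheme; the point is only the order of operations, which lets the feature stay unquantized:

```python
import numpy as np

def extract_feature(img):
    """Hypothetical content feature: a 4x4 block-mean map, upsampled back
    to image size (a stand-in for the paper's feature)."""
    h, w = img.shape
    f = img.reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    return np.repeat(np.repeat(f, 4, axis=0), 4, axis=1)

def embed_lsb(img, bits):
    """Toy watermark embedder: write one bit into each pixel's LSB."""
    return (img & ~1) | bits

def embed_with_feature(img, bits):
    """Additive-feature authentication embedding: add the feature before
    watermarking, embed, then subtract the feature again - at no point is
    the feature itself quantized to fit the watermark capacity."""
    f = np.rint(extract_feature(img.astype(float))).astype(img.dtype)
    boosted = img + f            # amplify the feature into the image
    marked = embed_lsb(boosted, bits)
    return marked - f            # remove it after embedding
```

A verifier that recomputes the same feature and re-adds it recovers the embedded bits exactly, because addition and subtraction of `f` cancel around the embedder.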
Citations: 3
Infolink: Analysis of Dutch Broadcast News and Cross-Media Browsing
Pub Date : 2005-07-06 DOI: 10.1109/ICME.2005.1521738
Jeroen Morang, R. Ordelman, F. D. Jong, A. V. Hessen
In this paper, a cross-media browsing demonstrator named InfoLink is described. InfoLink automatically links the content of Dutch broadcast news videos to related information sources in parallel collections containing text and/or video. Automatic segmentation, speech recognition and available metadata are used to index and link items. The concept is visualized using SMIL scripts for presenting the streaming broadcast news video and the information links.
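A presentation of this kind can be expressed in SMIL roughly as follows: the news video plays in one region while time-aligned links to related sources appear in a side region. The file names, region sizes and timings below are invented for illustration; the demonstrator's actual scripts are not shown in the abstract:

```xml
<smil xmlns="http://www.w3.org/2001/SMIL20/Language">
  <head>
    <layout>
      <root-layout width="640" height="480"/>
      <region id="video" left="0" top="0" width="480" height="360"/>
      <region id="links" left="480" top="0" width="160" height="360"/>
    </layout>
  </head>
  <body>
    <par>
      <video src="news-item.rm" region="video"/>
      <!-- a link shown in parallel with the news segment it was generated for -->
      <a href="related-article.html">
        <img src="link-thumb.jpg" region="links" begin="0s" dur="30s"/>
      </a>
    </par>
  </body>
</smil>
```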
Citations: 20