
2010 2nd International Conference on Image Processing Theory, Tools and Applications — Latest Publications

Intravascular Ultrasound image segmentation: A helical active contour method
M. Jourdain, J. Meunier, J. Sequeira, G. Cloutier, J. Tardif
During an Intravascular Ultrasound (IVUS) examination, a catheter with an ultrasound transducer is introduced into the body through a blood vessel and then pulled back to image a sequence of vessel cross-sections. An IVUS exam produces several hundred noisy images that are often hard to analyze; hence, powerful automatic analysis tools would facilitate the interpretation of structures in IVUS images. In this paper we present a new IVUS segmentation method based on an original active contour model. The contour has a helical geometry and evolves as a spiral that is deformed until it reaches the artery lumen boundaries. Despite the use of a simple statistical model and a very sparse initialization of the snake, the algorithm converges to satisfactory solutions comparable to those of much more sophisticated segmentation methods. To validate the method, we compared our results to manually traced contours and obtained a Hausdorff distance < 0.61 mm (n = 540 images), indicating the robustness of the method.
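For illustration, the Hausdorff distance used here for validation can be computed on two contours sampled as 2-D point sets. This is a generic sketch, not the authors' code; the function name and point-set representation are assumptions:

```python
import numpy as np

def hausdorff_distance(a, b):
    """Symmetric Hausdorff distance between two 2-D point sets a (N,2) and b (M,2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)  # (N, M) pairwise distances
    # Worst-case nearest-neighbour distance, taken in both directions.
    return max(d.min(axis=1).max(), d.min(axis=0).max())

# Example: an automatically segmented contour vs. a manually traced one
auto = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
manual = np.array([[0.0, 0.1], [1.0, 0.0], [1.1, 1.0]])
print(hausdorff_distance(auto, manual))  # prints the worst-case contour deviation
```

A small value means every point of each contour lies close to the other contour, which is why it is a natural robustness measure for segmentation.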
DOI: 10.1109/IPTA.2010.5586803 | Published: 2010-07-07
Cited by: 8
Implementation of a reticle seeker missile simulator for jamming effect analysis
Ga-Young Kim, Byoung-Ik Kim, Tae-Wuk Bae, Young-Choon Kim, Sang-Ho Ahn, K. Sohng
In this paper, we implement a reticle seeker missile simulator in MATLAB/Simulink to analyze the jamming effect on spin-scan and conscan reticle seekers. The DIRCM (Directed Infrared Countermeasures) system uses pulsed flashes of infrared (IR) energy, whose frequency and intensity influence the missile guidance system. Our simulation results show that the jamming effect is significant when the jammer frequency and the reticle frequency are similar, and we present the 3D trajectory of the missile motion under jamming.
DOI: 10.1109/IPTA.2010.5586729 | Published: 2010-07-07
Cited by: 22
Multi-level visual alphabets
Menno Israël, J. Schaar, E. V. D. Broek, M. D. Uyl, P. V. D. Putten
A central debate in visual perception theory is the argument for indirect versus direct perception, i.e., the use of intermediate, abstract, and hierarchical representations versus direct semantic interpretation of images through interaction with the outside world. We present a content-based representation that combines both approaches. The previously developed Visual Alphabet method is extended with a hierarchy of representations, each level feeding into the next, based on features that are not abstract but directly relevant to the task at hand. Exploratory benchmark experiments on face images investigate and explain the impact of key parameters such as pattern size, number of prototypes, and the distance measures used. Results show that adding a middle layer improves performance by encoding the spatial co-occurrence of lower-level pattern prototypes.
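A minimal sketch of the middle-layer idea: label each image patch with its nearest low-level prototype, then let the middle layer encode the spatial co-occurrence of labels of horizontally adjacent patches. All function names are hypothetical, and patches are assumed to be flattened feature vectors; this is an illustration of the mechanism, not the paper's implementation:

```python
import numpy as np

def assign_prototypes(patches, prototypes):
    """Label each patch (row vector) with the index of its nearest prototype."""
    d = np.linalg.norm(patches[:, None, :] - prototypes[None, :, :], axis=2)
    return d.argmin(axis=1)

def cooccurrence(labels_grid, k):
    """Histogram of prototype-label pairs over horizontally adjacent patches."""
    h = np.zeros((k, k), dtype=int)
    left, right = labels_grid[:, :-1].ravel(), labels_grid[:, 1:].ravel()
    for a, b in zip(left, right):
        h[a, b] += 1
    return h
```

The co-occurrence matrix `h` then serves as the mid-level feature fed to the next layer of the hierarchy.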
DOI: 10.1109/IPTA.2010.5586757 | Published: 2010-07-07
Cited by: 1
Stroke feature extraction for lettrine indexing
Giap Nguyen, Mickaël Coustaty, J. Ogier
Image features must commonly be extracted to understand and process an image, and each process requires a suitable feature extraction method. In this paper, we describe a stroke feature extraction method to be used for lettrine indexing. A lettrine is a decorated letter that appears at the beginning of a chapter or paragraph in ancient books; it is principally composed of strokes. The method is developed to characterize lettrines for content-based indexing using these particular components; we thus use strokes instead of pixels as the elementary component. This study is innovative and the first results are interesting. Indexing tests using this method will be performed in the NaviDoMass project.
DOI: 10.1109/IPTA.2010.5586747 | Published: 2010-07-07
Cited by: 9
Spectrogram image encoding based on dynamic Hilbert curve routing
ChingShun Lin, Daren Wang
In this paper we propose an image-based biological classification system that can identify different creatures via their sounds. The overall system involves the relative spectral transform-perceptual linear prediction for spectrogram image extraction, cosine similarity measure for feature matching, dynamic Hilbert curve for spectrogram routing, and Gaussian mixture model for 1-D spectrogram classification. As an example of our approach, results for honk, dolphin, and whale classification are presented. This method works well on a wide variety of bio-sounds, especially for the highly self-repeated ones. Applications of this approach include biological signal analysis and spectrogram library establishment.
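The routing step relies on the Hilbert space-filling curve, which traverses a 2^n × 2^n grid visiting only neighbouring cells, so nearby 1-D indices stay nearby in 2-D (and vice versa when unrolling a spectrogram image). A standard index-to-coordinate conversion, not the paper's dynamic variant, can be sketched as:

```python
def hilbert_d2xy(order, d):
    """Convert 1-D Hilbert index d to (x, y) on a 2**order x 2**order grid."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:           # rotate the quadrant so sub-curves join up
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Walking `d = 0, 1, 2, ...` yields a path through the spectrogram in which consecutive samples are always grid neighbours, which is what makes the 2-D image usable as a 1-D sequence for the Gaussian mixture model stage.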
DOI: 10.1109/IPTA.2010.5586805 | Published: 2010-07-07
Cited by: 2
Temporal transcoding of H.264/AVC video to the scalable format
H. Al-Muscati, F. Labeau
In this work, a novel video transcoder that converts a video sequence encoded with the H.264/AVC standard into a temporally scalable H.264/SVC stream is implemented using a pixel-domain heterogeneous architecture. The input H.264/AVC stream is fully decoded by the transcoder. Macroblock coding modes are extracted from the input stream and reused to encode the output stream. A set of new motion vectors is computed from the input stream's coded motion vectors and mapped to either the hierarchical B-frame or zero-delay referencing structures employed by H.264/SVC. These new motion vectors are then subjected to a 3-pixel refinement. As a result, a significant decrease in computational complexity is achieved while maintaining close to optimum compression efficiency.
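The 3-pixel refinement step can be illustrated as a small SAD (sum of absolute differences) search around an initial motion vector. Block size, frame layout, and the SAD cost are illustrative assumptions; the transcoder itself works on decoded macroblocks:

```python
import numpy as np

def refine_mv(ref, cur, top, left, bsize, mv, radius=3):
    """Refine an initial motion vector mv=(dy, dx) for the bsize x bsize block of
    cur at (top, left) by minimizing SAD against ref in a (2*radius+1)^2 window."""
    block = cur[top:top + bsize, left:left + bsize].astype(np.int64)
    best, best_sad = mv, None
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ry, rx = top + mv[0] + dy, left + mv[1] + dx
            if ry < 0 or rx < 0 or ry + bsize > ref.shape[0] or rx + bsize > ref.shape[1]:
                continue  # candidate window falls outside the reference frame
            cand = ref[ry:ry + bsize, rx:rx + bsize].astype(np.int64)
            sad = np.abs(block - cand).sum()
            if best_sad is None or sad < best_sad:
                best, best_sad = (mv[0] + dy, mv[1] + dx), sad
    return best
```

Starting from a mapped motion vector, this local search recovers most of the accuracy of a full motion estimation at a fraction of the cost, which is the source of the complexity reduction.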
DOI: 10.1109/IPTA.2010.5586733 | Published: 2010-07-07
Cited by: 18
Temporal error concealment algorithm for H.264/AVC using omnidirectional motion similarity
Changki Min, S. Jin, Hyeongchul Oh, Sang-Jun Park, Jechang Jeong
H.264/AVC is the newest of several video compression standards; its main goals are efficient compression performance and network-friendly video coding. However, if an error occurs while transmitting compressed video, error concealment is needed to prevent error propagation and improve video quality. In this paper, we propose a temporal error concealment algorithm that provides high performance for H.264/AVC. When an error occurs in an inter-coded frame, the proposed algorithm exploits the high similarity between the motion vectors (MVs) of the erroneous macroblock (MB) and its neighboring MBs to select a group of candidate MVs. Next, a weighted overlapped boundary matching algorithm that uses the credibility of the information selects the best MV from the candidate group. Experimental results show that the proposed algorithm improves PSNR by up to 3.02 dB compared with the boundary matching algorithm (BMA).
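The core of candidate selection can be sketched as follows; the paper's weighted overlapped variant with credibility terms is reduced here to plain boundary matching over the top edge of the lost macroblock, so the code is a simplified illustration, not the proposed algorithm:

```python
import numpy as np

def conceal_mv(ref, cur, top, left, bsize, candidates):
    """Pick the candidate MV (dy, dx) whose referenced block in ref best matches
    the available pixels bordering the lost block in cur (top edge only)."""
    above = cur[top - 1, left:left + bsize].astype(np.int64)  # row just above the lost MB

    def cost(mv):
        ry, rx = top + mv[0], left + mv[1]
        cand_top = ref[ry, rx:rx + bsize].astype(np.int64)    # top row of referenced block
        return np.abs(above - cand_top).sum()                 # boundary distortion

    return min(candidates, key=cost)
```

The full method extends this cost to all four edges, overlaps the comparison windows, and weights each edge by the credibility of the neighbouring data before picking the minimum.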
DOI: 10.1109/IPTA.2010.5586725 | Published: 2010-07-07
Cited by: 1
SPIDAR calibration using Support Vector Regression
Pierre Boudoin, H. Maaref, S. Otmane, M. Mallem
This paper presents a study of the SPIDAR, a tracking and haptic device, aimed at improving its positional accuracy. First, we propose a new semi-automatic initialization technique for this device using an optical tracking system. Then, we propose to use Support Vector Regression (SVR) to calibrate the SPIDAR in order to reduce location errors. This calibration yields very good results, reducing the mean error by more than 50%.
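SVR itself requires a quadratic-programming solver, so as a rough, hypothetical illustration of the kernel-regression calibration idea, the sketch below uses RBF kernel ridge regression — a closed-form stand-in with the same kernel machinery, not the paper's SVR — to map measured device positions to reference (optically tracked) positions:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

class KernelCalibrator:
    """Learns a map from measured positions X to true positions Y."""
    def fit(self, X, Y, gamma=1.0, lam=1e-6):
        self.X, self.gamma = X, gamma
        K = rbf_kernel(X, X, gamma)
        # Ridge-regularized least squares in the kernel feature space.
        self.alpha = np.linalg.solve(K + lam * np.eye(len(X)), Y)
        return self
    def predict(self, X):
        return rbf_kernel(X, self.X, self.gamma) @ self.alpha
```

In use, a set of (measured, reference) position pairs collected with the optical tracker trains the calibrator; at run time, every raw SPIDAR reading is passed through `predict` to obtain the corrected position.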
DOI: 10.1109/IPTA.2010.5586748 | Published: 2010-07-07
Cited by: 4
Bayesian regularized nonnegative matrix factorization based face features learning
Xueyi Zhao
This paper proposes a novel technique for learning face features based on Bayesian regularized non-negative matrix factorization with Itakura-Saito (IS) divergence (B-NMF). We show that the proposed technique not only explicitly incorporates a 'Bayesian regularized prior' imposed on the feature learning, but also possesses scale invariance, which allows lower-energy components in the learning process to be treated with the same importance as high-energy components. Real tests have been conducted and the results obtained are very encouraging.
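The IS-divergence factorization underlying the method can be sketched with the standard multiplicative updates for plain IS-NMF; the paper's Bayesian regularized prior is omitted here, so this shows only the baseline the paper builds on:

```python
import numpy as np

def is_nmf(V, k, iters=200, seed=0):
    """Factor a strictly positive matrix V ~ W @ H (rank k) by minimizing the
    Itakura-Saito divergence with multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 0.1
    H = rng.random((k, n)) + 0.1
    for _ in range(iters):
        Vh = W @ H
        H *= (W.T @ (V / Vh**2)) / (W.T @ (1.0 / Vh))
        Vh = W @ H
        W *= ((V / Vh**2) @ H.T) / ((1.0 / Vh) @ H.T)
    return W, H

def is_divergence(V, Vh):
    """IS divergence: sum of V/Vh - log(V/Vh) - 1; note it is scale invariant."""
    r = V / Vh
    return float(np.sum(r - np.log(r) - 1.0))
```

The scale invariance mentioned in the abstract is visible in `is_divergence`: scaling both V and Vh by the same factor leaves the ratio r, and hence the cost, unchanged, so low-energy components carry the same weight as high-energy ones.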
DOI: 10.1109/IPTA.2010.5586732 | Published: 2010-07-07
Cited by: 0
Comparison of feature selection schemes for color texture classification
A. Porebski, N. Vandenbroucke, L. Macaire
In this paper, we propose to compare the performances of two sequential feature selection schemes used for supervised color texture classification. We focus this study on the sequential forward selection (SFS) scheme and the more complex sequential forward floating selection (SFFS) scheme, which avoids the "nesting effect". These schemes retain Haralick features extracted from chromatic co-occurrence matrices of images coded in different color spaces. We experimentally study the contribution of these two feature selection schemes on three benchmark color texture databases.
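The SFS scheme reduces to a greedy loop over a scoring callback; here the score (classification rate of the selected Haralick features in the paper) is abstracted as an arbitrary function, so this skeleton is generic rather than the authors' pipeline:

```python
def sequential_forward_selection(features, score, k):
    """Greedy SFS: repeatedly add the feature that maximizes the score of the
    current subset, until k features are chosen."""
    selected = []
    remaining = list(features)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

SFFS extends this loop with a conditional backward step: after each addition it tries removing previously selected features whenever that improves the score, which is what avoids the nesting effect of plain SFS.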
DOI: 10.1109/IPTA.2010.5586760 | Published: 2010-07-07
Cited by: 20