
Latest publications: 2014 IEEE International Conference on Image Processing (ICIP)

Identifying regions of interest for discriminating Alzheimer's disease from mild cognitive impairment
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025003
Helena Aidos, J. Duarte, A. Fred
Alzheimer's disease (AD) is one of the most common types of dementia affecting elderly people, with no known cure. Early diagnosis of this disease is very important to improve patients' quality of life and slow down the disease progression. Over the years, researchers have proposed several techniques to analyze brain images, like FDG-PET, to automatically find changes in brain activity. This paper compares regions of voxels identified by an expert with regions of voxels found automatically, in terms of the corresponding classification accuracies based on three well-known classifiers. The automatic identification of regions is performed by segmenting FDG-PET images and extracting features that represent each of those regions. Experimental results show that the regions found automatically are very discriminative, outperforming the expert-defined regions.
Citations: 8
Single image dehazing based on fast wavelet transform with weighted image fusion
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025921
H. Zhang, Xuan Liu, Zhitong Huang, Yuefeng Ji
Due to bad weather conditions, images captured in outdoor environments are often degraded. In this paper, a novel single image dehazing method is proposed to enhance the visibility of such degraded images. Since haze spreads widely through a scene, the estimated transmission should change smoothly over the scene. The fast wavelet transform (FWT) is introduced to estimate this smooth transmission in our work. To preserve more details and correct color distortion, a solution based on a weighted image fusion strategy is provided. Compared with state-of-the-art single image dehazing methods, our method based on FWT with weighted image fusion (FWTWIF) produces similar or even better results with lower complexity. Comparative experiments at the end of the paper verify the visibility restoration and efficiency of our method.
Citations: 15
Action recognition based on kinematic representation of video data
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025306
Xin Sun, Di Huang, Yunhong Wang, Jie Qin
The local space-time feature is an effective way to represent video data and achieves state-of-the-art performance in action recognition. However, in the majority of cases, it only captures the static or dynamic cues of the image sequence. In this paper, we propose a novel kinematic descriptor, namely Static and Dynamic fEature Velocity (SDEV), which models how both static and dynamic information change over time for action recognition. It is not only discriminative by itself, but also complementary to existing descriptors, leading to a more comprehensive representation of actions when combined with them. Evaluations on two public databases, UCF Sports and Olympic Sports, clearly illustrate the competitiveness of SDEV.
Citations: 4
Image segmentation by image foresting transform with geodesic band constraints
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025880
Caio de Moraes Braz, P. A. Miranda
In this work, we propose a novel boundary constraint, which we denote the Geodesic Band Constraint (GBC), and we show how it can be efficiently incorporated into a subclass of the Generalized Graph Cut (GGC) framework. We include a proof that the new algorithm attains a global minimum of an energy function subject to the new boundary constraints. The Geodesic Band Constraint helps regularize the boundary and, consequently, improves the segmentation of objects with more regular shapes, while keeping the low computational cost of the Image Foresting Transform (IFT). It can also be combined with the Geodesic Star Convexity prior and with polarity constraints, at no additional cost. The method is demonstrated on thoracic CT studies of the liver and MR images of the breast.
Citations: 14
Screen-camera calibration using a thread
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025698
Songnan Li, K. Ngan, Lu Sheng
In this paper, we propose a novel screen-camera calibration algorithm which aims to locate the position of the screen in the camera coordinate system. The difficulty comes from the fact that the screen is not directly visible to the camera. Rather than using an external camera or a portable mirror as in previous studies, we propose to use a more accessible and cheaper calibrating object: a thread. The thread is manipulated so that our algorithm can infer the perspective projections of the four screen corners on the image plane. The 3-dimensional (3D) position of each screen corner is then determined by minimizing the sum of squared projection errors. Experiments show that, compared with previous studies, our method generates similar calibration results without the additional hardware.
Citations: 2
Bandwidth efficient mobile cloud gaming with layered coding and scalable phong lighting
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026212
Seong-Ping Chuah, Ngai-Man Cheung
In mobile cloud gaming, one of the main challenges is to deliver high-quality game images over wireless networks under stringent delay requirements. To reduce the bit-rate of game images, we propose Layered Coding, which leverages the graphics rendering capability of modern mobile devices to reduce the transmission bit-rate. Specifically, we render a low-quality local game image, or base layer, on the power-constrained mobile client. Instead of sending the high-quality game image, the cloud server sends enhancement layer information, which the client utilizes to improve the quality of the base layer. Central to the proposed layered coding is the design of base layer (BL) rendering. We discuss BL design and propose a computationally scalable Phong lighting scheme that can be used in BL rendering. We performed experiments to compare our layered coding with the state of the art, which uses H.264/AVC inter-frame coding to compress game images. With game sequences of different model complexity and motion, our results suggest that layered coding requires a substantially lower data rate. We have made game video test sequences available to stimulate future research.
Citations: 5
A meta-algorithm for classification by feature nomination
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026050
Rituparna Sarkar, K. Skadron, S. Acton
With the increasing complexity of datasets, it becomes impractical to use a single feature to characterize all constituent images. In this paper we describe a method that automatically selects the image features that are relevant and efficacious for classification, without requiring modifications to the feature extraction methods or the classification algorithm. We first describe a method for designing class-distinctive dictionaries using a dictionary learning technique, which yields class-specific sparse codes and a linear classifier parameter. Then, we apply information-theoretic measures to identify the feature most informative for a test image and use only that feature to obtain the final classification result. When at least one of the features classifies the query accurately, our algorithm chooses the correct feature in 88.9% of the trials.
Citations: 3
Edge enhancement of depth based rendered images
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7026103
M. S. Farid, M. Lucenteforte, Marco Grangetto
Depth image based rendering is a well-known technology for generating virtual views between a limited set of views acquired by a camera array. Intermediate views are rendered by warping image pixels based on their depth. Nonetheless, depth maps are usually imperfect, as they need to be estimated through stereo matching algorithms; moreover, for representation and transmission, depth values are quantized. Such depth representation errors translate into warping errors when generating intermediate views, thus impacting the rendered image quality. We observe that depth errors become critical when they affect object contours, since in that case they cause significant structural distortion in the warped objects. This paper presents an algorithm to improve the visual quality of the synthesized views by enforcing the shape of the edges in the presence of erroneous depth estimates. We show that it is possible to significantly improve the visual quality of the interpolated view by enforcing prior knowledge on the admissible deformations of edges under projective transformation. Both visual and objective results show that the proposed approach is very effective.
Citations: 16
Feature-based registration for correlative light and electron microscopy images
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025724
D. Nam, J. Mantell, Lorna Hodgson, D. Bull, P. Verkade, A. Achim
In this paper we present a feature-based registration algorithm for largely misaligned bright-field light microscopy and transmission electron microscopy images. We first detect cell centroids using a gradient-based single-pass voting algorithm. Images are then aligned by finding the flip, translation and rotation parameters that maximize the overlap between pseudo-cell centers. We demonstrate the effectiveness of our method by comparing it to manually aligned images. Combining registered light and electron microscopy images can reveal details of cellular structure with both spatial context and high-resolution information.
Citations: 6
Joint sparsity-based robust visual tracking
Pub Date : 2014-10-01 DOI: 10.1109/ICIP.2014.7025998
B. Bozorgtabar, Roland Göcke
In this paper, we propose a new object tracker in a particle filter framework utilising a joint sparsity-based model. Based on the observation that a target can be reconstructed from several dynamically updated templates, we jointly analyse the representation of the particles under a single regression framework with a shared underlying structure. Two convex regularisations are combined in our model to enable sparsity as well as to facilitate coupling information between particles. Unlike previous methods, which consider a model commonality between particles or regard them as independent tasks, we simultaneously take into account a structure-inducing norm and an outlier-detecting norm. Such a formulation is shown to be more flexible in handling various types of challenges, including occlusion and cluttered backgrounds. To derive the optimal solution efficiently, we propose to use a Preconditioned Conjugate Gradient method, which is computationally affordable for high-dimensional data. Furthermore, an online updating scheme is included in the dictionary learning, which makes the proposed tracker less vulnerable to outliers. Experiments on challenging video sequences demonstrate the robustness of the proposed approach in handling occlusion, pose and illumination variation, and show that it outperforms state-of-the-art trackers in tracking accuracy.
Citations: 0