
Latest publications: 2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA)

Video Tracking of Insect Flight Path: Towards Behavioral Assessment
Yufang Bao, H. Krim
In this paper, we propose a cohort of new methods that cooperate to improve the detection and tracking of mosquitos in a 2D video clip. A commonly recognized challenge in biotechnology research is evaluating the effect of a repellent, which entails tracking the unpredictable flight paths of insects that may be flying swiftly or moving slowly. The work presented in this paper provides an efficient tool for tracking small insects with unpredictable movement patterns by proposing a new dual foreground and background modeling/updating system for target detection and tracking. The proposed processing elements take advantage of the similarity between frames and use the estimated speeds to collectively capture the relevant information, contributing in concert to fast and accurate measurement toward the goal of evaluating mosquito behavior in response to a repellent.
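The paper's dual foreground/background system is not reproduced here; as a point of reference only, a minimal background-subtraction baseline for detecting and linking small moving targets in a 2D clip might look like the sketch below. OpenCV's MOG2 subtractor and a greedy nearest-centroid linker are stand-ins, not the authors' method, and the input file name and all thresholds are illustrative assumptions.

```python
# Minimal baseline sketch: background subtraction + greedy centroid linking.
# NOT the dual foreground/background model from the paper; it only illustrates
# the generic detect-then-link pipeline the abstract refers to.
import cv2
import numpy as np

cap = cv2.VideoCapture("mosquito.mp4")          # hypothetical input clip
bg = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16,
                                        detectShadows=False)
tracks = []            # each track is a list of (frame_index, x, y)
frame_idx = 0
max_jump = 40.0        # assumed max displacement (pixels) between frames

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                       # foreground mask
    mask = cv2.medianBlur(mask, 3)               # suppress salt noise
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4 API
    centroids = []
    for c in contours:
        if 2 <= cv2.contourArea(c) <= 200:       # small, insect-sized blobs
            m = cv2.moments(c)
            if m["m00"] > 0:
                centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    # greedy nearest-neighbour association with existing tracks
    for cx, cy in centroids:
        best, best_d = None, max_jump
        for t in tracks:
            _, px, py = t[-1]
            d = np.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = t, d
        if best is not None:
            best.append((frame_idx, cx, cy))
        else:
            tracks.append([(frame_idx, cx, cy)])
    frame_idx += 1

cap.release()
print("recovered", len(tracks), "candidate flight paths")
```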
Citations: 0
Image Super-Resolution based on multi-pairs of dictionaries via Patch Prior Guided Clustering
Dongfeng Mei, Xuan Zhu, Cheng Yue, Qingwen Cao, Lei Wang, Longfei Zhang, Q. Song
Image super-resolution based on learned dictionaries has recently attracted enormous interest. Learning-based methods usually train a single pair of dictionaries from low-resolution and high-resolution image patches, ignoring the fact that patches have different structures. In this paper, we propose to train a set of multi-pairs of dictionaries for different categories of patches, clustered by a Gaussian mixture model, instead of a global dictionary trained on all patches. The multi-pairs of dictionaries obtained via patch prior guided clustering express the structural information of image patches well. Extensive experimental results show strong robustness in super-resolution. Compared with state-of-the-art SR methods, our method produces more pleasing image edge structures and textures.
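A minimal sketch of the clustering idea follows, assuming per-cluster ridge regressors as a simplified stand-in for the paper's learned dictionary pairs; patch sizes, the number of classes and the regression stand-in are illustrative assumptions, not the authors' code.

```python
# Simplified sketch of class-specific LR->HR patch mappings: a GMM clusters LR
# patches (the "patch prior guided clustering"), and one ridge regressor per
# cluster stands in for a learned dictionary pair.
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.linear_model import Ridge

def extract_patches(img, size=6, step=3):
    """Collect flattened size x size patches on a regular grid."""
    h, w = img.shape
    out = []
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            out.append(img[y:y + size, x:x + size].ravel())
    return np.asarray(out)

def train(lr_imgs, hr_imgs, n_classes=8):
    # LR images are assumed pre-upsampled (e.g. bicubic) to the HR grid so
    # that corresponding patches align.
    lr = np.vstack([extract_patches(i) for i in lr_imgs])
    hr = np.vstack([extract_patches(i) for i in hr_imgs])
    gmm = GaussianMixture(n_components=n_classes, covariance_type="diag",
                          random_state=0).fit(lr)
    labels = gmm.predict(lr)
    regressors = {}
    for k in range(n_classes):
        idx = labels == k
        if idx.any():
            regressors[k] = Ridge(alpha=0.1).fit(lr[idx], hr[idx])
    return gmm, regressors

def upscale_patches(lr_patches, gmm, regressors):
    """Map each LR patch with the regressor of its GMM cluster."""
    labels = gmm.predict(lr_patches)
    hr_patches = lr_patches.copy()
    for k, reg in regressors.items():
        idx = labels == k
        if idx.any():
            hr_patches[idx] = reg.predict(lr_patches[idx])
    return hr_patches
```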
Citations: 0
Interval-valued JPEG decompression for artifact suppression
V. Itier, Florentin Kucharczak, O. Strauss, W. Puech
JPEG is the most widely used image compression algorithm, but block-wise DCT compression methods produce artifacts due to coefficient quantization. JPEG decompression can be seen as a reconstruction problem constrained by quantization. In this context, we propose to handle this problem using interval-valued arithmetic. Our method produces an interval-valued image that includes the non-compressed original image. The resulting convex set allows constrained Total Variation (TV) reconstruction to be applied in order to reduce JPEG artifacts (blocking, grainy effects and high-frequency noise). Experiments show a visual improvement of JPEG decoding as assessed by a no-reference quality metric. In addition, this metric provides the stopping criterion of the TV algorithm and gives evidence of the JPEG decompression improvement.
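To make the quantization constraint concrete, the sketch below projects a candidate image's block DCT coefficients back into the intervals implied by their quantized values, the kind of convex-set projection one could alternate with a TV minimizer. The TV solver itself is omitted; `q_coeffs` and `q_table` are assumed to come from the JPEG decoder, and scipy's orthonormal DCT is used for brevity rather than the level-shifted, scaled DCT of the actual JPEG standard.

```python
# Sketch of the quantization-constraint projection behind interval-valued JPEG
# decoding: every 8x8 DCT coefficient of a candidate image is clipped back
# into the interval [(q - 0.5) * Q, (q + 0.5) * Q] implied by its quantized
# value q and quantization step Q.
import numpy as np
from scipy.fft import dctn, idctn

def project_to_quantization_set(estimate, q_coeffs, q_table):
    """estimate: HxW image; q_coeffs: (H//8, W//8, 8, 8) quantized coefficients;
    q_table: 8x8 quantization table."""
    out = estimate.copy()
    h, w = estimate.shape
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = out[y:y + 8, x:x + 8]
            coef = dctn(block, norm="ortho")
            q = q_coeffs[y // 8, x // 8]
            lo = (q - 0.5) * q_table
            hi = (q + 0.5) * q_table
            coef = np.clip(coef, lo, hi)          # project onto the interval set
            out[y:y + 8, x:x + 8] = idctn(coef, norm="ortho")
    return out
```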
Citations: 5
3D lymphoma detection in PET-CT images with supervoxel and CRFs
Jierui Zha, P. Decazes, Jérôme Lapuyade, A. Elmoataz, S. Ruan
In this paper we present a lymphoma detection method for PET-CT images that combines supervoxels and conditional random fields (CRFs). Positron emission tomography (PET) is often used to analyze diseases such as cancer, and it is usually combined with computed tomography (CT), which provides the accurate anatomical location of lesions. Most lymphoma detection methods in PET are based on machine learning techniques that require a large training database, which is difficult to acquire in the medical field. In our previous work, an approach combining an anatomical atlas obtained from CT with CRFs in PET was proposed and shown to give good results; however, it is very time consuming because every voxel in 3D is fully connected. To cope with this problem, we propose a method that combines supervoxels and CRFs to accelerate the process. Our method consists of three steps. First, we apply supervoxel segmentation to the PET image to group voxels into supervoxels. Then, an anatomical atlas is applied to the CT to remove organs exhibiting hyper-fixation in PET. Finally, CRFs detect lymphoma regions in PET. The results obtained show good performance in terms of both speed and lymphoma detection.
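The supervoxel stage can be sketched as follows, assuming scikit-image's SLIC (version ≥ 0.19, which accepts `channel_axis=None` for a grayscale 3D volume) and a simple mean-uptake threshold per supervoxel as a stand-in for the CRF inference; the CT atlas step is omitted and the threshold value is an illustrative assumption.

```python
# Sketch of the supervoxel stage: SLIC groups PET voxels, then supervoxels
# whose mean uptake exceeds a threshold are kept as lymphoma candidates.
import numpy as np
from skimage.segmentation import slic

def candidate_regions(pet_volume, n_supervoxels=2000, uptake_threshold=2.5):
    """pet_volume: 3D array of SUV values. Returns a boolean candidate mask."""
    labels = slic(pet_volume, n_segments=n_supervoxels, compactness=0.1,
                  channel_axis=None)              # grayscale 3D volume
    mask = np.zeros(pet_volume.shape, dtype=bool)
    for lab in np.unique(labels):
        region = labels == lab
        if pet_volume[region].mean() > uptake_threshold:
            mask |= region                        # keep high-uptake supervoxel
    return mask
```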
Citations: 1
An Image Compression Scheme Based on Block Truncation Coding Using Real-time Block Classification and Modified Threshold for Pixels Grouping
Zheng Hui, Quan Zhou
Block truncation coding (BTC) is a simple and efficient digital image compression algorithm; its essence is to encode non-overlapping sub-blocks of the input image with a pair of low/high quantization levels and a distribution (bitmap) matrix. Absolute Moment Block Truncation Coding (AMBTC) is a widely used modified version of BTC. Building on BTC, we propose a new approach that uses an adjusted threshold to classify each sub-block. Then, for the blocks flagged for modification, we apply a new BTC-based modification method that searches for an optimized threshold to replace the sub-block mean used for pixel grouping in AMBTC. Experimental results show that, compared with AMBTC, the proposed scheme improves the reconstructed image quality by 0.5~0.8 dB in peak signal-to-noise ratio (PSNR).
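For reference, a baseline AMBTC codec for a single block is sketched below: the block mean acts as the pixel-grouping threshold, and the two quantization levels preserve the first absolute moment. The paper replaces this fixed mean threshold with a searched, optimized one for blocks flagged for modification; that search is not shown here.

```python
# Baseline AMBTC for one block: mean-threshold bitmap plus low/high levels.
import numpy as np

def ambtc_encode(block):
    """block: 2D array (e.g. 4x4). Returns (low, high, bitmap)."""
    mean = block.mean()
    bitmap = block >= mean                 # pixel grouping by the block mean
    n_hi = bitmap.sum()
    high = block[bitmap].mean() if n_hi else mean
    low = block[~bitmap].mean() if n_hi < bitmap.size else mean
    return low, high, bitmap

def ambtc_decode(low, high, bitmap):
    return np.where(bitmap, high, low)

block = np.array([[12, 240, 230, 20],
                  [15, 250, 235, 25],
                  [10, 245, 228, 30],
                  [18, 238, 232, 22]], dtype=float)
low, high, bmp = ambtc_encode(block)
print(np.round(ambtc_decode(low, high, bmp)))
```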
Citations: 1
A Method for Automatic Tracking of Cell Nuclei in 2D Epifluorescence Microscopy Image Sequences
Alexandr Yu. Kondrati'ev, H. Yaginuma, Y. Okada, D. Sorokin
Automated segmentation and tracking of cells in live-cell microscopy image sequences is a pressing problem in many areas of biological research. Despite the existence of different cell tracking approaches, a universal solution still does not exist due to the high variety of fluorescence microscopy image data obtained with different techniques, in which cells have completely different visual appearances. Moreover, cells can significantly change their shape even within a single image sequence. In this work, we propose a cell tracking algorithm designed for detecting and tracking cell nuclei in 2D image sequences obtained by epifluorescence microscopy, where the cell appearance changes drastically during mitosis. We use a marker-controlled watershed algorithm combined with blob detection for nuclei segmentation, followed by a generalized nearest neighbor approach for nuclei tracking. We also employ a special mitosis detection algorithm to process cell division events. Our approach was quantitatively evaluated for segmentation and tracking accuracy using real image data annotated by human experts. The evaluation procedure followed the protocol used in the Cell Tracking Challenge. The proposed approach outperforms an existing semiautomatic method in both segmentation and tracking accuracy.
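A minimal sketch of the two stages named above (marker-controlled watershed per frame, nearest-neighbour linking across frames) might look like the following, assuming scikit-image ≥ 0.19; the thresholds and distances are illustrative and mitosis handling is omitted.

```python
# Sketch: per-frame nucleus segmentation + nearest-neighbour linking.
import numpy as np
from scipy import ndimage as ndi
from scipy.spatial import cKDTree
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.measure import regionprops

def segment_nuclei(frame):
    """frame: 2D fluorescence image. Returns centroids of segmented nuclei."""
    mask = frame > threshold_otsu(frame)
    distance = ndi.distance_transform_edt(mask)
    coords = peak_local_max(distance, min_distance=7, labels=mask)
    markers = np.zeros(mask.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    labels = watershed(-distance, markers, mask=mask)   # marker-controlled
    return np.array([p.centroid for p in regionprops(labels)])

def link_frames(centroids_a, centroids_b, max_dist=15.0):
    """Greedy nearest-neighbour association between consecutive frames."""
    if len(centroids_a) == 0 or len(centroids_b) == 0:
        return []
    tree = cKDTree(centroids_b)
    dist, idx = tree.query(centroids_a)
    return [(i, j) for i, (d, j) in enumerate(zip(dist, idx)) if d <= max_dist]
```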
Citations: 1
[Copyright notice]
{"title":"[Copyright notice]","authors":"","doi":"10.1109/ipta.2018.8608135","DOIUrl":"https://doi.org/10.1109/ipta.2018.8608135","url":null,"abstract":"","PeriodicalId":272294,"journal":{"name":"2018 Eighth International Conference on Image Processing Theory, Tools and Applications (IPTA)","volume":"350 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134408636","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparative study of visual saliency maps in the problem of classification of architectural images with Deep CNNs
A. M. Obeso, J. Benois-Pineau, Kamel Guissous, V. Gouet-Brunet, M. García-Vázquez, A. A. Ramírez-Acosta
Incorporating Human Visual System (HVS) models into the building of classifiers has become an intensively researched field in visual content mining. Among the variety of HVS models, we are interested in so-called visual saliency maps. In contrast to scan-paths, they model instantaneous attention, assigning a degree of interestingness/saliency for humans to each pixel in the image plane. In various visual content understanding tasks, these maps have proved efficient at stressing the contribution of areas of interest in the image plane to classifier models. In previous work, saliency layers were introduced into Deep CNNs, showing that they reduce training time while reaching similar accuracy and loss values in optimal models. For large image collections, efficient building of saliency maps relies on predictive models of visual attention. These are generally bottom-up and not adapted to specific visual tasks, unless they are built for specific content, such as the "urban images"-targeted saliency maps we also compare in this paper. In the present research we propose a "bootstrap" strategy for building visual saliency maps for particular visual data mining tasks. A small collection of images relevant to the visual understanding problem is annotated with gaze fixations. The propagation to a large training dataset is then ensured and compared with the classical GBVS model and a recent saliency method for urban image content. The classification results within the Deep CNN framework are promising compared to purely automatic visual saliency prediction.
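As a very small illustration of the "saliency layer" idea referred to above, the sketch below weights an input image by a normalized saliency map before it is fed to a classifier; the map source (GBVS, gaze-derived, or the urban-content method) is assumed to be given, and this is not the paper's exact architecture.

```python
# Minimal illustration: use a saliency map as a multiplicative input weighting.
import numpy as np

def saliency_weighted_input(image, saliency, floor=0.3):
    """image: HxWxC float array; saliency: HxW map on any positive scale.
    Salient pixels keep full weight; non-salient context is attenuated to a
    floor rather than erased (floor is an illustrative choice)."""
    s = saliency.astype(float)
    s = (s - s.min()) / (s.max() - s.min() + 1e-8)   # normalize to [0, 1]
    weight = floor + (1.0 - floor) * s               # weights in [floor, 1]
    return image * weight[..., None]
```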
Citations: 10
A new enhancement algorithm for the low illumination image based on fog-degraded model
Feiyan Cheng, Junsheng Shi, Lijun Yun, Zhenhua Du, Zhijian Xu, Xiaoqiao Huang, Zaiqing Chen
A novel enhancement algorithm is presented to solve the over-exposure of bright areas that occurs in low-illumination image enhancement. A model is proposed that compresses the gain of bright regions, and a complementary map containing the bright-region information is generated. A segmentation method is proposed to detect the bright regions of the low-illumination image. Meanwhile, to avoid color distortion, a brightness-transfer fusion strategy is applied to the bright areas of low-illumination images. Experiments show that the new algorithm achieves a higher average gradient, higher information entropy and close structural similarity compared with the original algorithm, so it handles the bright regions of low-illumination images better, both subjectively and objectively.
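The "fog-degraded model" family of low-light methods commonly rests on the observation that an inverted low-light image resembles a hazy one, so a dehazing step can be applied and the result inverted back. The sketch below shows that generic baseline with a dark-channel transmission estimate; it is not the bright-region gain-compression model of the paper, and the window size and omega are illustrative.

```python
# Generic fog-model baseline for low-light enhancement: invert, dehaze
# (dark channel prior), invert back.
import cv2
import numpy as np

def dark_channel(img, size=15):
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (size, size))
    return cv2.erode(img.min(axis=2), kernel)    # per-pixel min over channels

def enhance_low_light(bgr, omega=0.8, t0=0.1):
    inv = 1.0 - bgr.astype(np.float64) / 255.0   # inverted image looks hazy
    dark = dark_channel(inv)
    # atmospheric light: mean of the brightest 0.1% dark-channel pixels
    n = max(1, int(dark.size * 0.001))
    idx = np.unravel_index(np.argsort(dark, axis=None)[-n:], dark.shape)
    A = inv[idx].mean(axis=0)
    trans = np.maximum(1.0 - omega * dark_channel(inv / A), t0)
    recovered = (inv - A) / trans[..., None] + A  # haze imaging model inverse
    enhanced = 1.0 - np.clip(recovered, 0, 1)     # invert back to low-light domain
    return (enhanced * 255).astype(np.uint8)
```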
Citations: 0
Image Registration Algorithm Based on Super pixel Segmentation and SURF Feature Points
Weiyi Wei, Chengfeng A, Yufei Zhao, Guicang Zhang
In current image registration technology, feature point detection and matching have limited accuracy. Based on an analysis of SURF feature point detection and information entropy for image registration, an image registration algorithm based on SURF feature points is proposed. First, the image is divided into superpixels and the information entropy of each image region is calculated. Redundant feature points are then eliminated using the amount of information in each region, which alleviates the dense distribution of the SURF operator and reduces the number of feature points. Experimental results show that the improved algorithm improves the accuracy of image feature point pairs and effectively improves registration quality.
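A sketch of the filtering idea follows: segment the image into superpixels, score each region by grey-level entropy, and keep only keypoints falling in informative regions. ORB stands in for SURF here (SURF lives in OpenCV's non-free contrib build); the superpixel count and entropy threshold are illustrative assumptions.

```python
# Sketch: superpixel entropy filtering of detected keypoints.
import cv2
import numpy as np
from skimage.segmentation import slic

def region_entropy(gray, labels, lab):
    """Shannon entropy (bits) of the grey-level histogram of one superpixel."""
    vals = gray[labels == lab]
    hist, _ = np.histogram(vals, bins=32, range=(0, 256))
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def filtered_keypoints(gray, n_segments=300, min_entropy=3.0):
    """gray: uint8 grayscale image. Returns keypoints/descriptors kept after
    discarding those in low-entropy (uninformative) superpixels."""
    labels = slic(gray, n_segments=n_segments, compactness=0.1,
                  channel_axis=None)
    keep = {lab for lab in np.unique(labels)
            if region_entropy(gray, labels, lab) >= min_entropy}
    orb = cv2.ORB_create(nfeatures=2000)          # stand-in for SURF
    kps, desc = orb.detectAndCompute(gray, None)
    if desc is None:
        return [], None
    out_kps, out_desc = [], []
    for kp, d in zip(kps, desc):
        x, y = int(round(kp.pt[0])), int(round(kp.pt[1]))
        if labels[y, x] in keep:
            out_kps.append(kp)
            out_desc.append(d)
    return out_kps, np.array(out_desc)
```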
Citations: 4