
Latest publications in IET Image Process.

Optimized deep learning model for mango grading: Hybridizing lion plus firefly algorithm
Pub Date : 2021-03-04 DOI: 10.1049/IPR2.12163
M. Tripathi, Dhananjay D. Maktedar
This paper presents an automated mango grading system with four stages: (1) pre-processing, (2) feature extraction, (3) optimal feature selection and (4) classification. Initially, the input image is subjected to the pre-processing phase, where reading, sizing, noise removal and segmentation take place. Subsequently, features are extracted from the pre-processed image. To make the system more effective, the optimal features are selected from the extracted features using a new hybrid optimization algorithm termed the lion assisted firefly algorithm (LA-FF), which combines the lion algorithm (LA) and the firefly algorithm (FF). The optimal features are then passed to the classification process, where an optimized deep convolutional neural network (CNN) is deployed. As a major contribution, the configuration of the CNN is fine-tuned by selecting the optimal count of convolutional layers, which markedly enhances the classification accuracy of the grading system. The LA-FF algorithm is used to fine-tune the convolutional layers in the deep CNN so that the classifier is optimized. Grading is evaluated on healthy-diseased, ripe-unripe and big-medium-very big cases with respect to type I and type II measures, and the performance of the proposed grading model is compared against other state-of-the-art models.
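To make the layer-count tuning concrete, here is a minimal Python sketch of the idea: score each candidate number of convolutional layers with a validation fitness and keep the best. A plain exhaustive loop stands in for the paper's LA-FF optimiser, and all names, channel widths and the random validation data are illustrative assumptions.

```python
# Sketch: choose the conv-layer count that maximises validation accuracy,
# standing in for the LA-FF search. In the paper, candidate counts are
# evolved with lion + firefly moves; here a plain loop is used instead.
import torch
import torch.nn as nn

def build_cnn(num_conv_layers: int, in_ch: int = 3, num_classes: int = 6) -> nn.Module:
    layers, ch = [], in_ch
    for _ in range(num_conv_layers):
        layers += [nn.Conv2d(ch, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2)]
        ch = 16
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, num_classes)]
    return nn.Sequential(*layers)

def fitness(num_layers: int, val_x: torch.Tensor, val_y: torch.Tensor) -> float:
    model = build_cnn(num_layers)  # in practice the model is trained first, then evaluated
    with torch.no_grad():
        preds = model(val_x).argmax(dim=1)
    return (preds == val_y).float().mean().item()

# Exhaustive stand-in for the LA-FF search over a layer-count range.
val_x, val_y = torch.randn(8, 3, 64, 64), torch.randint(0, 6, (8,))
best = max(range(2, 7), key=lambda n: fitness(n, val_x, val_y))
print("selected conv-layer count:", best)
```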
Citations: 8
Generative adversarial network for low-light image enhancement
Pub Date : 2021-01-20 DOI: 10.1049/IPR2.12124
Fei Li, Jiangbin Zheng, Yuan-fang Zhang
Low-light image enhancement is rapidly gaining research attention due to the increasing demands of extreme visual tasks in various applications. Although numerous methods exist to enhance image quality in low light, it remains unclear how to trade off between human observation and computer-vision processing. In this work, an effective generative adversarial network structure is proposed comprising both a densely residual block (DRB) and an enhancing block (EB) for low-light image enhancement. Specifically, the proposed end-to-end image enhancement method, consisting of a generator and a discriminator, is trained using the hyper loss function. The DRB adopts residual and dense skip connections to connect and enhance the features extracted from different depths in the network, while the EB receives unique multi-scale features to ensure feature diversity. Additionally, increasing the feature sizes allows the discriminator to further distinguish between fake and real images at the patch level. The merits of the loss function in recovering both contextual and local details are also studied. Extensive experimental results show that the method is capable of dealing with extremely low-light scenes, and the realistic feature generator outperforms several state-of-the-art methods in a number of qualitative and quantitative evaluation tests.
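A minimal PyTorch sketch of a densely residual block as described: dense skip connections concatenate all earlier feature maps into each convolution, and a residual connection adds the block input back to the output. The depth and channel counts are assumptions, not the paper's exact configuration.

```python
# Sketch of a densely residual block (DRB): each conv sees the concatenation
# of all earlier feature maps (dense skips); the block input is added back
# at the end (residual connection). Sizes are illustrative.
import torch
import torch.nn as nn

class DenselyResidualBlock(nn.Module):
    def __init__(self, channels: int = 32, depth: int = 3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d(channels * (i + 1), channels, kernel_size=3, padding=1)
            for i in range(depth)
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]
        for conv in self.convs:
            feats.append(self.act(conv(torch.cat(feats, dim=1))))  # dense skips
        return x + feats[-1]  # residual connection

block = DenselyResidualBlock()
print(block(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```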
Citations: 7
Localised edge-region-based active contour for medical image segmentation
Pub Date : 2021-01-20 DOI: 10.1049/IPR2.12126
Huaxiang Liu, Jiangxiong Fang, Zijian Zhang, Yongzheng Lin
{"title":"Localised edge-region-based active contour for medical image segmentation","authors":"Huaxiang Liu, Jiangxiong Fang, Zijian Zhang, Yongzheng Lin","doi":"10.1049/IPR2.12126","DOIUrl":"https://doi.org/10.1049/IPR2.12126","url":null,"abstract":"","PeriodicalId":13486,"journal":{"name":"IET Image Process.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85762177","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
An efficient framework for deep learning-based light-defect image enhancement
Pub Date : 2021-01-13 DOI: 10.1049/IPR2.12125
Chengxu Ma, Daihui Li, Shangyou Zeng, Junbo Zhao, Hongyang Chen
The enhancement of light-defect images such as extremely low-light, low-light and dim-light images has long been a research hotspot. Most existing methods excel only under specific illuminations, leaving much room for improvement in processing light-defect images with different illuminations. Therefore, this study proposes an efficient deep learning-based framework to enhance various light-defect images. The proposed framework estimates the reflectance and illumination components. In the reflectance part, a generator guided by an attention mechanism repairs light defects in the dark. In addition, a colour loss function is designed to address colour distortion in the enhanced images. Finally, the illumination map of the light-defect image is adjusted adaptively. Extensive experiments demonstrate that the method not only handles images with different illuminations but also enhances images with clearer details and richer colours. Its superiority is further proved by comparison with state-of-the-art methods under both visual quality comparison and quantitative comparison on various datasets and real-world images.
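As an illustration of what such a colour loss might look like, the sketch below penalises the angle between per-pixel RGB vectors of the enhanced and reference images so that hue is preserved. This is one plausible formulation, not necessarily the paper's exact loss.

```python
# Hedged sketch of a colour-consistency loss: penalise the angle between
# per-pixel RGB vectors of the enhanced and reference images.
import torch
import torch.nn.functional as F

def colour_loss(enhanced: torch.Tensor, reference: torch.Tensor) -> torch.Tensor:
    # Tensors are (N, 3, H, W); compare per-pixel RGB directions.
    cos = F.cosine_similarity(enhanced, reference, dim=1, eps=1e-6)
    return (1.0 - cos).mean()

enh, ref = torch.rand(2, 3, 32, 32), torch.rand(2, 3, 32, 32)
print(colour_loss(enh, ref).item())
```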
Citations: 2
Level set method with Retinex-corrected saliency embedded for image segmentation
Pub Date : 2021-01-12 DOI: 10.1049/IPR2.12123
Dongmei Liu, F. Chang, Huaxiang Zhang, Li Liu
Segmenting natural images with high intensity inhomogeneity and complex background scenes using the level set method can be very challenging. A new synthesis level set method for robust image segmentation is proposed in this work, based on the combination of Retinex-corrected saliency region information and edge information. First, Retinex theory is introduced to correct the saliency information extraction. Second, the Retinex-corrected saliency information is embedded into the level set method because of its advantageous quality of making a foreground object stand out relative to the background. Combined with the edge information, the segmentation boundary becomes more precise and smooth. Experiments indicate that the proposed segmentation algorithm is efficient, fast, reliable, and robust.
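A minimal sketch of a single-scale Retinex correction that could serve as the pre-step for saliency extraction described above; the Gaussian scale sigma is an assumed parameter.

```python
# Single-scale Retinex: estimate illumination with a Gaussian blur and
# keep the log-domain reflectance, rescaled for downstream saliency.
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(image: np.ndarray, sigma: float = 30.0) -> np.ndarray:
    image = image.astype(np.float64) + 1.0            # avoid log(0)
    illumination = gaussian_filter(image, sigma=sigma)
    reflectance = np.log(image) - np.log(illumination)
    reflectance -= reflectance.min()                  # rescale to [0, 1]
    return reflectance / (reflectance.max() + 1e-12)

gray = np.random.rand(128, 128) * 255
corrected = single_scale_retinex(gray)
print(corrected.min(), corrected.max())
```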
Citations: 2
Image stitching method by multi-feature constrained alignment and colour adjustment
Pub Date : 2021-01-07 DOI: 10.1049/IPR2.12120
Xingsheng Yuan, Yongbin Zheng, Wei Zhao, Jiongming Su, Jianzhai Wu
{"title":"Image stitching method by multi-feature constrained alignment and colour adjustment","authors":"Xingsheng Yuan, Yongbin Zheng, Wei Zhao, Jiongming Su, Jianzhai Wu","doi":"10.1049/IPR2.12120","DOIUrl":"https://doi.org/10.1049/IPR2.12120","url":null,"abstract":"","PeriodicalId":13486,"journal":{"name":"IET Image Process.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2021-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82518376","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
Unsupervised automated retinal vessel segmentation based on Radon line detector and morphological reconstruction
Pub Date : 2021-01-05 DOI: 10.1049/IPR2.12119
M. Tavakoli, A. Mehdizadeh, Reza Pourreza-Shahri, J. Dehmeshki
Retinal blood vessel segmentation and analysis is critical for the computer-aided diagnosis of diseases such as diabetic retinopathy. This study presents an automated unsupervised method for segmenting the retinal vasculature based on hybrid methods. The algorithm initially applies a pre-processing step using morphological operators to enhance the vessel tree structure against a non-uniform image background. The main processing applies the Radon transform to overlapping windows, followed by vessel validation, vessel refinement and vessel reconstruction to achieve the final segmentation. The method was tested on three publicly available datasets and a local database comprising a total of 188 images. Segmentation performance was evaluated using three measures: accuracy, receiver operating characteristic (ROC) analysis, and the structural similarity index. ROC analysis yielded area-under-curve values of 97.39%, 97.01%, and 97.12% for DRIVE, STARE, and CHASE-DB1, respectively, and the corresponding accuracies were 0.9688, 0.9646, and 0.9475. Finally, the average structural similarity index values were 0.9650 (DRIVE), 0.9641 (STARE), and 0.9625 (CHASE-DB1). These results are comparable with the best published to date, exceeding them on several of the datasets; a similar pattern holds for accuracy.
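The windowed Radon step can be sketched as follows: for each overlapping patch, the peak of the Radon transform indicates a dominant line, i.e. a candidate vessel. The window size, step, angle grid and line-strength threshold are illustrative assumptions, and the validation and refinement stages are omitted.

```python
# Sketch of windowed Radon line detection: slide an overlapping window,
# take the Radon transform of each patch, and flag patches whose sinogram
# peak is strong relative to the patch energy (a crude line-strength test).
import numpy as np
from skimage.transform import radon

def detect_lines(image: np.ndarray, win: int = 32, step: int = 16, thresh: float = 0.5):
    angles = np.arange(0.0, 180.0, 5.0)
    hits = []
    for r in range(0, image.shape[0] - win + 1, step):
        for c in range(0, image.shape[1] - win + 1, step):
            patch = image[r:r + win, c:c + win]
            sinogram = radon(patch, theta=angles, circle=False)
            if sinogram.max() > thresh * patch.sum():
                idx = np.unravel_index(sinogram.argmax(), sinogram.shape)
                hits.append((r, c, angles[idx[1]]))  # window origin + line angle
    return hits

img = np.zeros((64, 64)); img[16, :] = 1.0   # one horizontal "vessel"
print(detect_lines(img)[:3])
```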
Citations: 11
A hybrid feature descriptor with Jaya optimised least squares SVM for facial expression recognition
Pub Date : 2021-01-05 DOI: 10.1049/IPR2.12118
Nikunja Bihari Kar, D. Nayak, Korra Sathya Babu, Yudong Zhang
Facial expression recognition has been a long-standing problem in the field of computer vision. This paper proposes a simple new scheme for effective recognition of facial expressions based on a hybrid feature descriptor and an improved classifier. Inspired by the success of the stationary wavelet transform in many computer vision tasks, the stationary wavelet transform is first applied to the pre-processed face image. A pyramid of histograms of orientation gradient features is then computed from the low-frequency stationary wavelet transform coefficients to capture more prominent details of facial images. The key idea of this hybrid feature descriptor is to exploit both spatial and frequency domain features while remaining robust against illumination and noise. The relevant features are subsequently determined using linear discriminant analysis. A new least squares support vector machine parameter tuning strategy is proposed using a contemporary optimisation technique called Jaya optimisation for the classification of facial expressions. Experimental evaluations are performed on the Japanese Female Facial Expression (JAFFE) and Extended Cohn–Kanade (CK+) datasets, and results based on a 5-fold stratified cross-validation test confirm the superiority of the proposed method over state-of-the-art approaches.
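A brief sketch of the hybrid descriptor idea: take the low-frequency band of a one-level stationary wavelet transform and compute a small pyramid of HOG features on it. The wavelet, pyramid grids and HOG parameters are assumptions for illustration; the paper's exact settings may differ.

```python
# Sketch of the hybrid SWT + pyramid-of-HOG descriptor: extract the
# low-frequency SWT band, then concatenate HOG features computed on
# coarse and fine cell grids (two pyramid levels).
import numpy as np
import pywt
from skimage.feature import hog

def swt_phog(face: np.ndarray) -> np.ndarray:
    # Low-frequency approximation from a 1-level stationary wavelet transform.
    (approx, _details), = pywt.swt2(face, wavelet="haar", level=1)
    feats = []
    for cells in (4, 8):  # two pyramid levels: coarse and fine grids
        ppc = face.shape[0] // cells
        feats.append(hog(approx, orientations=8,
                         pixels_per_cell=(ppc, ppc), cells_per_block=(1, 1)))
    return np.concatenate(feats)

face = np.random.rand(64, 64)
print(swt_phog(face).shape)  # the descriptor would then feed an LS-SVM
```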
Citations: 4
An exclusive-disjunction-based detection of neovascularisation using multi-scale CNN
Pub Date : 2021-01-03 DOI: 10.1049/ipr2.12122
Geetha Pavani Pappu, B. Biswal, M. Sairam, P. Biswal
In this article, an exclusive-disjunction-based method for detecting neovascularisation (NV), the formation of new blood vessels on the retinal surface, is presented. These vessels, being thin and fragile, rupture easily, leading to permanent blindness. The proposed algorithm consists of two stages. In the first stage, retinal images are classified into non-NV and NV using a multi-scale convolutional neural network. In the second stage, 13 relevant features are extracted from the vascular map of NV images, and the pixel locations of new blood vessels are obtained using a directional matched filter together with the Difference of Laplacian of Gaussian operator, followed by an exclusive disjunction function with adaptive thresholding of the vascular map. At the same time, the pixel locations of the optic disc (OD) are detected using intensity distribution and variations in the retinal images. Finally, the pixel locations of the new blood vessels and the OD are compared for classification: if the pixel locations of new blood vessels fall inside the OD, they are labelled as NV on the OD; otherwise they are labelled as NV elsewhere. The proposed algorithm achieved an accuracy of 99.5%, specificity of 97.5%, sensitivity of 98.9%, and area under the curve of 94.2% when tested on 155 non-NV and 115 NV images.
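A hedged sketch of the exclusive-disjunction step: adaptively threshold two vessel-response maps and XOR them so that only pixels present in exactly one map, the candidate new vessels, survive. Which two maps are combined and the mean-plus-k-std threshold rule are assumptions for illustration.

```python
# Exclusive disjunction (XOR) of two adaptively thresholded response maps:
# pixels present in exactly one binarised map are kept as candidates.
import numpy as np

def adaptive_threshold(response: np.ndarray, k: float = 1.0) -> np.ndarray:
    t = response.mean() + k * response.std()   # simple mean + k*std rule
    return response > t

def xor_detection(map_a: np.ndarray, map_b: np.ndarray) -> np.ndarray:
    return np.logical_xor(adaptive_threshold(map_a), adaptive_threshold(map_b))

a, b = np.random.rand(64, 64), np.random.rand(64, 64)
print(xor_detection(a, b).sum(), "candidate pixels")
```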
Citations: 4
Error feedback denoising network
Pub Date : 2021-01-02 DOI: 10.1049/ipr2.12121
R. Hou, Fang Li
Recently, deep convolutional neural networks have been successfully used for image denoising due to their favourable performance. This paper applies the error feedback mechanism to image denoising and proposes an error feedback denoising network. Specifically, a down-and-up projection sequence is used to estimate the noise feature; through the residual connection, clean structures are removed from the noise features. The essential difference between the proposed network and other existing feedback networks is the projection sequence: the error feedback projection sequence is down-and-up, which is more suitable for image denoising than the existing up-and-down order. Moreover, a compression block is designed to improve the expressive ability of the general 1 × 1 convolutional compression layer. The advantage of the well-designed down-and-up block is that it needs fewer network parameters than other feedback networks while enlarging the receptive field. The error feedback denoising network is applied to denoising and JPEG image deblocking. Extensive experiments verify the effectiveness of the down-and-up block and demonstrate that the network is comparable with the state of the art. The source code for reproducing the results is available at: https://github.com/Houruizhi/EFDN.
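A minimal PyTorch sketch of one down-and-up projection step for noise estimation: project features down with a strided convolution, project back up with a transposed convolution, and feed forward the reconstruction error. Kernel and channel sizes are assumptions.

```python
# Down-and-up projection: what the round trip through the low-resolution
# space fails to reconstruct is fed forward as the error (noise estimate).
import torch
import torch.nn as nn

class DownUpProjection(nn.Module):
    def __init__(self, ch: int = 64):
        super().__init__()
        self.down = nn.Conv2d(ch, ch, kernel_size=4, stride=2, padding=1)
        self.up = nn.ConvTranspose2d(ch, ch, kernel_size=4, stride=2, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        low = self.down(x)    # down-projection to half resolution
        recon = self.up(low)  # up-projection back to input size
        return x - recon      # feedback error: what the projection missed

proj = DownUpProjection()
print(proj(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 64, 32, 32])
```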
Citations: 0