An eigenvector approach for obtaining scale and orientation invariant classification in convolutional neural networks

Swetha Velluva Chathoth, Asish Kumar Mishra, Deepak Mishra, Subrahmanyam Gorthi R. K. Sai
Journal: Advances in computational intelligence, Vol. 2, No. 1
DOI: 10.1007/s43674-021-00023-7
Published: 2021-12-17 (Journal Article)
Open-access PDF: https://link.springer.com/content/pdf/10.1007/s43674-021-00023-7.pdf
Article page: https://link.springer.com/article/10.1007/s43674-021-00023-7
Citations: 2

Abstract

Convolutional neural networks are well known for their efficiency in detecting and classifying objects once adequately trained. Although they achieve shift invariance to a limited extent, many existing CNN architectures do not guarantee appreciable rotation and scale invariance, making them sensitive to rotation and scale variations in the input image or feature maps. Many past attempts have sought to build rotation and scale invariance into CNNs. This paper proposes an efficient approach for incorporating rotation and scale invariance into CNN-based classification, based on the eigenvectors and eigenvalues of the image covariance matrix. Without requiring any training-data augmentation or CNN architectural changes, the proposed method, ‘Scale and Orientation Corrected Networks (SOCN)’, achieves better rotation- and scale-invariant performance. SOCN introduces a scale and orientation correction step applied to images before baseline CNN training and testing. Being a generalized approach, SOCN can be combined with any baseline CNN to improve its rotation and scale invariance. We demonstrate the proposed approach’s scale- and orientation-invariant classification ability in several real cases, ranging from scale- and orientation-invariant character recognition to orientation-invariant image classification, using different suitable baseline architectures. Although simple, SOCN outperforms current state-of-the-art scale- and orientation-invariant classifiers while requiring minimal training and testing time.
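The abstract describes correcting each image's scale and orientation using the eigenvectors and eigenvalues of the image covariance matrix before feeding it to a baseline CNN. The sketch below illustrates one plausible reading of such a correction step: it treats pixel intensities as weights over pixel coordinates, computes the 2×2 weighted coordinate covariance, aligns the principal eigenvector with the horizontal axis, and rescales by the spread along that axis. The function name, the `target_scale` parameter, and the specific normalization are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np
from scipy import ndimage


def socn_style_correction(img, target_scale=0.25):
    """Normalize a grayscale image's orientation and scale from the
    intensity-weighted covariance of its pixel coordinates.

    Illustrative sketch only; the paper's SOCN pipeline may differ in
    how the covariance, reference angle, and target scale are defined.
    """
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    weights = img.astype(float)
    total = weights.sum()

    # Intensity-weighted centroid of the image.
    cy = (ys * weights).sum() / total
    cx = (xs * weights).sum() / total

    # 2x2 covariance matrix of (x, y) pixel coordinates, weighted by intensity.
    dy, dx = ys - cy, xs - cx
    cov = np.array([
        [(dx * dx * weights).sum(), (dx * dy * weights).sum()],
        [(dx * dy * weights).sum(), (dy * dy * weights).sum()],
    ]) / total

    # eigh returns eigenvalues in ascending order; the last eigenvector
    # is the principal axis of the intensity distribution.
    eigvals, eigvecs = np.linalg.eigh(cov)
    vx, vy = eigvecs[:, -1]
    angle = np.degrees(np.arctan2(vy, vx))

    # Rotate so the principal axis becomes horizontal, then rescale so the
    # spread along it (sqrt of the largest eigenvalue) matches a fixed target.
    rotated = ndimage.rotate(img, angle, reshape=True, order=1)
    zoom = target_scale * min(h, w) / np.sqrt(eigvals[-1])
    corrected = ndimage.zoom(rotated, zoom, order=1)
    return corrected, angle, zoom
```

Because the same canonical pose is recovered regardless of how the input was rotated or resized, a baseline CNN trained on corrected images sees a much narrower pose distribution, which is the invariance mechanism the abstract attributes to SOCN.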
