An effective image annotation using self-attention based stacked bidirectional capsule network

Computer Standards & Interfaces · IF 3.1 · Q1 (Computer Science, Hardware & Architecture) · Vol. 93, Article 103973 · Pub Date: 2025-04-01 (Epub 2025-01-25) · DOI: 10.1016/j.csi.2025.103973
Vikas Palekar, Sathish Kumar L
Citations: 0

Abstract

This paper presents an advanced hybrid deep learning (DL) system for accurate image annotation. In the first step, input images are pre-processed using three techniques: i) cross-guided bilateral filtering, ii) image resizing and iii) colour conversion to the green channel. After pre-processing, key features such as shape, wavelet and texture features are extracted using three models: the modified Walsh-Hadamard transform, the extended discrete wavelet transform and the gray-level run-length matrix (GLRLM). Once the features are extracted, an optimal subset is selected using the Chaotic Coati Optimization (CCO) algorithm to address feature dimensionality issues. Image annotation is then performed using the self-attention-based Stacked Bidirectional Capsule Network (SA_SBiCapNet) model: a stacked bidirectional long short-term memory (BiLSTM) network combined with a capsule network annotates the given images. The accuracy rate of the proposed method is 0.99 (99 %). The proposed method thus uses a hybrid DL model to perform effective image annotation.
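The abstract names the gray-level run-length matrix (GLRLM) as one of the three feature extractors and green-channel conversion as a pre-processing step, but gives no implementation details. The sketch below shows a standard horizontal (0-degree) GLRLM in plain NumPy alongside the green-channel step; the function names and the horizontal-only run direction are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def green_channel(rgb):
    """Pre-processing step iii: keep only the green channel
    of an H x W x 3 RGB image."""
    return rgb[..., 1]

def glrlm_horizontal(img, levels):
    """Gray-level run-length matrix for horizontal (0-degree) runs.

    img    : 2-D integer array with values in [0, levels)
    levels : number of quantized gray levels
    Returns an array of shape (levels, longest_run) where entry
    (g, r-1) counts maximal horizontal runs of length r at level g.
    """
    h, w = img.shape
    counts = np.zeros((levels, w), dtype=np.int64)
    for row in img:
        start = 0
        for j in range(1, w + 1):
            # a run ends at the row boundary or where the level changes
            if j == w or row[j] != row[start]:
                counts[row[start], j - start - 1] += 1
                start = j
    used = np.nonzero(counts.any(axis=0))[0]  # trim empty run-length columns
    return counts[:, : used[-1] + 1] if used.size else counts[:, :1]
```

Texture descriptors such as short-run emphasis or run-length non-uniformity are then computed as weighted sums over this matrix; a full pipeline would typically quantize the green channel to a small number of levels (e.g. 8 or 16) before building the GLRLM.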
Source journal: Computer Standards & Interfaces (Engineering & Technology – Computer Science: Software Engineering)
CiteScore: 11.90
Self-citation rate: 16.00%
Articles published: 67
Review time: 6 months
About the journal: The quality of software, well-defined interfaces (hardware and software), the process of digitalisation, and accepted standards in these fields are essential for building and exploiting complex computing, communication, multimedia and measuring systems. Standards can simplify the design and construction of individual hardware and software components and help to ensure satisfactory interworking. Computer Standards & Interfaces is an international journal dealing specifically with these topics. The journal • provides information about activities and progress on the definition of computer standards, software quality, interfaces and methods, at national, European and international levels • publishes critical comments on standards and standards activities • disseminates users' experiences and case studies in the application and exploitation of established or emerging standards, interfaces and methods • offers a forum for discussion of actual projects, standards, interfaces and methods by recognised experts • stimulates relevant research by providing a specialised refereed medium.