MDS-ViTNet: Improving Saliency Prediction for Eye-Tracking with Vision Transformer

Doklady Mathematics, Vol. 110 (Suppl. 1), pp. S230–S235 · Published: 2025-03-22 · DOI: 10.1134/S1064562424602117 · Impact Factor: 0.6 · JCR Q3 (Mathematics) · CAS Region 4 (Mathematics) · Full text: https://link.springer.com/article/10.1134/S1064562424602117
I. Polezhaev, I. Goncharenko, N. Iurina
{"title":"MDS-ViTNet: Improving Saliency Prediction for Eye-Tracking with Vision Transformer","authors":"I. Polezhaev,&nbsp;I. Goncharenko,&nbsp;N. Iurina","doi":"10.1134/S1064562424602117","DOIUrl":null,"url":null,"abstract":"<p>In this paper, we present a novel methodology we call MDS-ViTNet (Multi Decoder Saliency by Vision Transformer Network) for enhancing visual saliency prediction or eye-tracking. This approach holds significant potential for diverse fields, including marketing, medicine, robotics, and retail. We propose a network architecture that leverages the Vision Transformer, moving beyond the conventional ImageNet backbone. The framework adopts an encoder-decoder structure, with the encoder utilizing a Swin transformer to efficiently embed most important features. This process involves a Transfer Learning method, wherein layers from the Vision Transformer are converted by the Encoder Transformer and seamlessly integrated into a CNN Decoder. This methodology ensures minimal information loss from the original input image. The decoder employs a multi-decoding technique, utilizing dual decoders to generate two distinct attention maps. These maps are subsequently combined into a singular output via an additional CNN model. Our trained model MDS-ViTNet achieves state-of-the-art results across several benchmarks. Committed to fostering further collaboration, we intend to make our code, models, and datasets accessible to the public.</p>","PeriodicalId":531,"journal":{"name":"Doklady Mathematics","volume":"110 1 supplement","pages":"S230 - S235"},"PeriodicalIF":0.6000,"publicationDate":"2025-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://link.springer.com/content/pdf/10.1134/S1064562424602117.pdf","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Doklady Mathematics","FirstCategoryId":"100","ListUrlMain":"https://link.springer.com/article/10.1134/S1064562424602117","RegionNum":4,"RegionCategory":"数学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MATHEMATICS","Score":null,"Total":0}
Citations: 0

Abstract

In this paper, we present a novel methodology we call MDS-ViTNet (Multi Decoder Saliency by Vision Transformer Network) for enhancing visual saliency prediction for eye-tracking. This approach holds significant potential for diverse fields, including marketing, medicine, robotics, and retail. We propose a network architecture that leverages the Vision Transformer, moving beyond the conventional ImageNet backbone. The framework adopts an encoder-decoder structure, with the encoder utilizing a Swin transformer to efficiently embed the most important features. This process involves a transfer-learning method, wherein layers from the Vision Transformer are converted by the encoder transformer and seamlessly integrated into a CNN decoder. This methodology ensures minimal information loss from the original input image. The decoder employs a multi-decoding technique, utilizing dual decoders to generate two distinct attention maps. These maps are subsequently combined into a single output via an additional CNN model. Our trained MDS-ViTNet model achieves state-of-the-art results across several benchmarks. Committed to fostering further collaboration, we intend to make our code, models, and datasets accessible to the public.
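The architecture the abstract describes reduces to a compact pattern: a shared encoder feeds two parallel decoders, and a small fusion CNN merges their two attention maps into one saliency prediction. Below is a minimal PyTorch sketch of that multi-decoder idea, not the authors' implementation: the paper uses a Swin transformer encoder, for which a small convolutional stack stands in here so the example stays self-contained, and all module and variable names are illustrative.

```python
# Minimal sketch of the multi-decoder saliency pattern from the abstract:
# shared encoder -> two decoder branches -> fusion CNN. Hypothetical names;
# the paper's actual encoder is a Swin transformer.
import torch
import torch.nn as nn

class StandInEncoder(nn.Module):
    """Placeholder for the paper's Swin transformer encoder."""
    def __init__(self, out_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)  # (B, out_ch, H/4, W/4)

class SaliencyDecoder(nn.Module):
    """One CNN decoder branch; upsamples features back to input size."""
    def __init__(self, in_ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(in_ch, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),  # single-channel attention map
        )
    def forward(self, feats):
        return self.net(feats)

class MultiDecoderSaliencyNet(nn.Module):
    """Encoder -> two decoders -> fusion CNN, as the abstract describes."""
    def __init__(self):
        super().__init__()
        self.encoder = StandInEncoder()
        self.decoder_a = SaliencyDecoder()
        self.decoder_b = SaliencyDecoder()
        # Small CNN that combines the two attention maps into one output.
        self.fusion = nn.Sequential(
            nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, x):
        feats = self.encoder(x)
        map_a = self.decoder_a(feats)
        map_b = self.decoder_b(feats)
        return self.fusion(torch.cat([map_a, map_b], dim=1))

model = MultiDecoderSaliencyNet()
saliency = model(torch.randn(1, 3, 224, 224))
print(saliency.shape)  # torch.Size([1, 1, 224, 224])
```

In this sketch the two decoder branches share an architecture but not weights, so their outputs can still diverge during training; in the paper the dual decoders are what produce the "two distinct attention maps" that the extra CNN then fuses.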

Source Journal

Doklady Mathematics

CiteScore: 1.00
Self-citation rate: 16.70%
Articles per year: 39
Review time: 3–6 weeks
About the Journal

Doklady Mathematics is a journal of the Presidium of the Russian Academy of Sciences. It contains English translations of papers published in Doklady Akademii Nauk (Proceedings of the Russian Academy of Sciences), which was founded in 1933 and is published 36 times a year. Doklady Mathematics covers mathematics, mathematical physics, computer science, control theory, and computers. It publishes brief scientific reports on previously unpublished, significant new research in mathematics and its applications. The main contributors are Members and Corresponding Members of the RAS and scientists from the former Soviet Union and other countries, among them outstanding Russian mathematicians.
Latest Articles from This Journal

FoCAT: Foundation Model for Estimating the Conditional Average Treatment Effect
JDCEMB: Joint Distillation and Contrastive Learning for Embeddings in Task-Oriented Dialogue Systems
Competing Risks Survival Models for Churn Prediction
Employing Synthetic Canopy Height Model Data to Enhance Tree Identification in High-Resolution Satellite Imagery
RuWikiBench: Evaluating Large Language Models Through Replication of Encyclopedia Articles