Out-of-focus artifacts mitigation and autofocus methods for 3D displays

IF 3.8 | CAS Tier 3, Computer Science | JCR Q2, Computer Science, Information Systems | Visual Informatics | Pub Date: 2024-12-20 | DOI: 10.1016/j.visinf.2024.12.001
T. Chlubna, T. Milet, P. Zemčík
{"title":"Out-of-focus artifacts mitigation and autofocus methods for 3D displays","authors":"T. Chlubna ,&nbsp;T. Milet ,&nbsp;P. Zemčík","doi":"10.1016/j.visinf.2024.12.001","DOIUrl":null,"url":null,"abstract":"<div><div>This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses a common problem that visualized content is often out of focus, which adversely affects perceived 3D content. The method outperforms existing focusing method, having the error lower by almost 30%. The existing and novel focusing is extended with depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated according to human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters in the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes despite theoretical hypotheses. The overall proposal of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 31-42"},"PeriodicalIF":3.8000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Informatics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468502X2400069X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Citations: 0

Abstract

This paper proposes a novel content-aware method for automatically focusing a scene on a 3D display. The method addresses a common problem: visualized content is often out of focus, which degrades the perceived 3D content. The proposed method outperforms the existing focusing method, reducing the focusing error by almost 30%. Both the existing and the novel focusing methods are extended with a depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed that reduces out-of-focus artifacts while preserving the scene's appearance. A user study was conducted to evaluate the proposed methods and to identify the crucial parameters of the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and showed that the depth-of-field artifact mitigation might not be suitable for all scenes, despite the theoretical hypotheses. The overall contribution of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
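The abstract does not detail the focusing algorithm, so the following is only a minimal, hypothetical sketch of the kind of content-aware autofocus heuristic it describes: choosing a focal depth from a per-pixel depth map so that the scene content sits as close to the display's focal plane as possible. The function name autofocus_depth, the weighted-median criterion, and the example values are illustrative assumptions, not the authors' method.

```python
# Hypothetical content-aware autofocus sketch (NOT the paper's algorithm):
# pick the focal depth that minimizes the weighted mean absolute defocus
# over a scene depth map, i.e. the weighted median of the depths.

def autofocus_depth(depth_map, weights=None):
    """Return a focal depth for the scene.

    depth_map: 2D list of per-pixel depths.
    weights:   optional 2D list of per-pixel importance weights
               (e.g. higher for salient content); uniform if omitted.
    """
    samples = [d for row in depth_map for d in row]
    if weights is None:
        w = [1.0] * len(samples)
    else:
        w = [v for row in weights for v in row]

    # The weighted median minimizes the weighted sum of absolute deviations,
    # so the chosen focal plane is as close as possible to the (weighted)
    # bulk of the scene content.
    order = sorted(range(len(samples)), key=lambda i: samples[i])
    total = sum(w)
    acc = 0.0
    for i in order:
        acc += w[i]
        if acc >= total / 2:
            return samples[i]
    return samples[order[-1]]


# Usage: a 3x3 depth map (in meters) whose content clusters around 1.2 m,
# with one distant outlier that should not pull focus away.
depth = [[1.1, 1.2, 1.3],
         [1.2, 1.2, 5.0],
         [0.9, 1.2, 1.25]]
print(autofocus_depth(depth))  # -> 1.2
```

In this sketch the weights are what would make the choice "content-aware": they could emphasize salient or foreground regions, whereas uniform weights reduce the criterion to a plain median of the depth map.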
Source journal
Visual Informatics (Computer Science - Computer Graphics and Computer-Aided Design)
CiteScore: 6.70
Self-citation rate: 3.30%
Articles published: 33
Review time: 79 days
Latest articles in this journal
- Visual comparative analytics of multimodal transportation
- Out-of-focus artifacts mitigation and autofocus methods for 3D displays
- Transforming cinematography lighting education in the metaverse
- Editorial Board
- ArtEyer: Enriching GPT-based agents with contextual data visualizations for fine art authentication