{"title":"Out-of-focus artifacts mitigation and autofocus methods for 3D displays","authors":"T. Chlubna , T. Milet , P. Zemčík","doi":"10.1016/j.visinf.2024.12.001","DOIUrl":null,"url":null,"abstract":"<div><div>This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses a common problem that visualized content is often out of focus, which adversely affects perceived 3D content. The method outperforms existing focusing method, having the error lower by almost 30%. The existing and novel focusing is extended with depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated according to human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters in the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes despite theoretical hypotheses. The overall proposal of this paper is a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.</div></div>","PeriodicalId":36903,"journal":{"name":"Visual Informatics","volume":"9 1","pages":"Pages 31-42"},"PeriodicalIF":3.8000,"publicationDate":"2024-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Visual Informatics","FirstCategoryId":"94","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2468502X2400069X","RegionNum":3,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Abstract
This paper proposes a novel content-aware method for automatic focusing of the scene on a 3D display. The method addresses a common problem that visualized content is often out of focus, which adversely affects the perceived 3D content. The method outperforms the existing focusing method, achieving an error almost 30% lower. Both the existing and the proposed focusing methods are extended with a depth-of-field enhancement of the scene to mitigate out-of-focus artifacts. The relation between the total depth range of the scene and the visual quality of the result is discussed and evaluated in human perception experiments. A space-warping method for synthetic scenes is proposed to reduce out-of-focus artifacts while maintaining the scene appearance. A user study was conducted to evaluate the proposed methods and identify the crucial parameters in the scene-focusing process on the 3D stereoscopic display by Looking Glass Factory. The study confirmed the efficiency of the proposals and discovered that the depth-of-field artifact mitigation might not be suitable for all scenes, despite the theoretical hypotheses. Overall, this paper proposes a set of methods that can be used to produce the best user experience with an arbitrary scene displayed on a 3D display.
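The abstract does not detail the focusing algorithm itself. Purely as an illustrative sketch, and not the paper's actual method, the following Python snippet shows one simple way a content-aware autofocus could be approached: picking a focal depth from a scene depth map so that as much content as possible falls within the display's limited in-focus depth budget. The depth map, the depth-budget parameter, and the window-counting heuristic are all assumptions made here for illustration.

```python
# Hypothetical illustration (not the paper's algorithm): choose a focal depth
# from a scene depth map so that the largest share of scene pixels lies within
# the display's in-focus depth budget.
import numpy as np


def estimate_focal_depth(depth_map: np.ndarray, depth_budget: float) -> float:
    """Return the candidate depth whose surrounding window of width
    `depth_budget` covers the most scene pixels (assumed heuristic)."""
    depths = np.sort(depth_map.ravel())
    # For every candidate focal depth, count pixels within +/- budget/2.
    lo = np.searchsorted(depths, depths - depth_budget / 2, side="left")
    hi = np.searchsorted(depths, depths + depth_budget / 2, side="right")
    best = np.argmax(hi - lo)
    return float(depths[best])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic depth map: a near object cluster plus a far background.
    depth = np.concatenate([rng.normal(1.0, 0.05, 5000),
                            rng.normal(4.0, 0.30, 15000)]).reshape(100, 200)
    print("estimated focal depth:", estimate_focal_depth(depth, depth_budget=0.5))
```

In this toy example the heuristic locks focus onto the dominant background cluster; a content-aware method such as the one the paper describes would presumably weight scene content more carefully, but the sketch conveys the basic idea of selecting a focal plane from scene depth statistics.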