Ziyi Cao, Tiansong Li, Guofen Wang, Haibing Yin, Hongkui Wang, Li Yu
TRRHA: A two-stream re-parameterized refocusing hybrid attention network for synthesized view quality enhancement
DOI: 10.1016/j.displa.2024.102843
Journal: Displays, Volume 85, Article 102843 (JCR Q1, Computer Science, Hardware & Architecture; Impact Factor 3.7)
Publication date: 2024-10-09 (Journal Article)
URL: https://www.sciencedirect.com/science/article/pii/S0141938224002075
Citations: 0
Abstract
In multi-view video systems, the decoded texture video and its corresponding depth video are used to synthesize virtual views from different perspectives via depth-image-based rendering (DIBR) in 3D High Efficiency Video Coding (3D-HEVC). However, distortion in the compressed multi-view video and the disocclusion problem in DIBR can easily cause visible holes and cracks in the synthesized views, degrading their visual quality. To address this problem, a novel two-stream re-parameterized refocusing hybrid attention (TRRHA) network is proposed to significantly improve the quality of synthesized views. First, a global multi-scale residual information stream extracts global context through a refocusing attention module (RAM), which detects contextual features and adaptively learns channel and spatial attention to focus selectively on different areas. Second, a local feature pyramid attention information stream captures complex local texture details through a re-parameterized refocusing attention module (RRAM), which covers multi-scale texture details with different receptive fields and adaptively adjusts channel and spatial weights to handle information at different sizes and levels. Finally, an efficient feature fusion module is proposed to fuse the extracted global and local information streams. Extensive experimental results show that the proposed TRRHA significantly outperforms state-of-the-art methods. The source code will be available at https://github.com/647-bei/TRRHA.
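The paper's code is not yet released, so the exact RAM design is unknown. As a rough illustration of the channel-plus-spatial ("hybrid") attention idea the abstract describes, here is a minimal NumPy sketch: channel weights come from global average pooling and spatial weights from a channel-wise average, each squashed with a sigmoid. The learned projection layers a real attention module would use are omitted, and all names are placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hybrid_attention(x):
    """Apply channel attention, then spatial attention, to a (C, H, W) map.

    This is a schematic of the general channel/spatial attention recipe,
    not the paper's RAM: the trainable layers are left out.
    """
    # Channel attention: one weight per channel from its global average.
    channel_w = sigmoid(x.mean(axis=(1, 2)))      # shape (C,)
    x = x * channel_w[:, None, None]
    # Spatial attention: one weight per pixel from the channel-wise average.
    spatial_w = sigmoid(x.mean(axis=0))           # shape (H, W)
    return x * spatial_w[None, :, :]

feat = np.random.rand(8, 4, 4)
out = hybrid_attention(feat)
print(out.shape)  # (8, 4, 4)
```

Because both weight maps lie in (0, 1), the module can only rescale (attenuate) features; in a trained network the learned layers decide where that attenuation falls.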
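"Re-parameterized" in the RRAM's name presumably refers to structural re-parameterization: training with parallel convolution branches and merging them into a single kernel for inference. The merge is exact because convolution is linear. The single-channel sketch below, assuming a generic 3x3 plus 1x1 two-branch block (not the paper's actual architecture), shows the equivalence:

```python
import numpy as np

def conv2d(x, k):
    """Valid-mode single-channel 2-D cross-correlation (stride 1, no padding)."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def reparameterize(k3, k1):
    """Fold a parallel 1x1 branch into a 3x3 kernel.

    Zero-pad the 1x1 kernel to 3x3 and add it: by linearity,
    conv(x, k3) + conv(x, pad(k1)) == conv(x, k3 + pad(k1)).
    """
    k1_padded = np.zeros((3, 3))
    k1_padded[1, 1] = k1[0, 0]
    return k3 + k1_padded

rng = np.random.default_rng(42)
x = rng.random((5, 5))
k3 = rng.random((3, 3))
k1 = rng.random((1, 1))
two_branch = conv2d(x, k3) + conv2d(x, np.pad(k1, 1))
single = conv2d(x, reparameterize(k3, k1))
print(np.allclose(two_branch, single))  # True
```

At inference time only the merged kernel is kept, so the multi-branch capacity used during training costs nothing at deployment.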
Journal Introduction
Displays is the international journal covering the research and development of display technology, the effective presentation and perception of information, and applications and systems including the display-human interface.
Technical papers on practical developments in display technology provide an effective channel to promote greater understanding and cross-fertilization across the diverse disciplines of the Displays community. Original research papers solving ergonomics issues at the display-human interface advance the effective presentation of information. Tutorial papers covering fundamentals, intended for display technologists and human-factors engineers new to the field, will also occasionally be featured.