
IVMSP 2013: Latest Publications

3D activity measurement for stereoscopic video
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611906
Kwanghyun Lee, Haksub Kim, Sanghoon Lee
One of the most challenging issues in the 3D visual research field is how to quantify the visualization displayed over a virtual 3D space. To find an effective method of quantification, it is necessary to measure various important elements related to the different depths of 3D objects. In this paper, we propose a new framework to quantify 3D visual information, termed 3D activity, by measuring natural scene statistics (NSS). In simulation, we verify the effectiveness of 3D activity in quantifying the degree of freedom of 3D space in two respects: disparity and motion.
Citations: 0
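The abstract does not specify which natural scene statistics are computed; a common NSS ingredient in this line of work is the mean-subtracted contrast-normalized (MSCN) coefficient map. The sketch below computes MSCN coefficients for a luminance image purely as an illustration of that building block; the function name and parameters are assumptions, not the authors' 3D-activity measure.

```python
import numpy as np

def mscn_coefficients(img, window=7, sigma=7.0 / 6.0, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients, a
    standard first step for natural-scene-statistics features:
    normalize each pixel by a Gaussian-weighted local mean and
    standard deviation."""
    ax = np.arange(window) - window // 2
    g = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    g /= g.sum()

    def gauss_filter(x):
        # Separable Gaussian filtering via two 1-D convolutions
        # (zero padding at the borders).
        tmp = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, x)
        return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, tmp)

    mu = gauss_filter(img)
    var = gauss_filter(img * img) - mu * mu
    sigma_map = np.sqrt(np.maximum(var, 0.0))
    return (img - mu) / (sigma_map + eps)
```

For natural images the MSCN coefficients are approximately zero-mean and unit-variance away from the borders, which is what makes their distribution a usable statistic.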
3D scene correction using disparities with its projections
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611929
M. Grum, A. Bors
In this paper we present a new approach for modeling and correcting scenes containing multiple 3D objects from images taken from various viewpoints. For 3D scene initialization we consider implicit radial basis functions (RBF) estimated from the voxel model produced by the space-carving algorithm. 3D scenes are corrected using image-content disparities within their image projections, as well as inconsistencies with silhouettes extracted from the images. While image-content disparities are suitable for textured regions, the silhouettes can be applied to regions of uniform colour, which can be accurately segmented.
Citations: 0
User-feedback and optimization for multi-view calibration
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611941
O. Schreer, M. Bertzen, N. Atzpadin, C. Riechert, W. Waizenegger, I. Feldmann
Multi-view camera calibration is an essential task in the field of 3D reconstruction, particularly for immersive media applications such as 3D video communication. Although the problem of multi-view calibration is basically solved, there is still room to improve the calibration process and to increase accuracy during the acquisition of calibration patterns. It is commonly known that robust and accurate calibration requires feature points that are evenly distributed in 3D space, covering the whole volume of interest. In this paper, we propose a user-guided calibration based on a graphical user interface, which drastically simplifies the correct acquisition of calibration patterns. Based on an optimized selection of patterns and their corresponding feature points, multi-view calibration becomes much faster in terms of both data acquisition and computational effort, while reaching the same accuracy as standard unguided acquisition of calibration patterns.
Citations: 0
3D depth analysis of human faces
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611920
J. Heo
We provide an important analysis of the depth variation of human faces. Through an extensive analysis of 3D face shapes, we claim that the 3D depth information (z) of faces does not vary significantly and can be synthesized from another person's depth or from generic depth information. We also show that gender- and ethnicity-specific average depth models can approximate the 3D shape of an input face image more accurately, achieving better generalization of 3D face modeling and reconstruction than a global average depth model.
Citations: 0
Depth map up-sampling using cost-volume filtering
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611912
Ji-Ho Cho, Satoshi Ikehata, H. Yoo, M. Gelautz, K. Aizawa
Depth maps captured by active sensors (e.g., ToF cameras and Kinect) typically suffer from poor spatial resolution, a considerable amount of noise, and missing data. To overcome these problems, we propose a novel depth map up-sampling method which increases the resolution of the original depth map while effectively suppressing aliasing artifacts. Assuming that a registered high-resolution texture image is available, the cost-volume filtering framework is applied to this problem. Our experiments show that cost-volume filtering can generate the high-resolution depth map accurately and efficiently while preserving discontinuous object boundaries, which is often a challenge for various state-of-the-art algorithms.
Citations: 8
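The cost-volume filtering idea can be sketched compactly: build one cost slice per depth hypothesis, smooth each slice, then take a per-pixel winner. A plain box filter stands in here for the guided filter driven by the registered texture image, so `box_filter`, `upsample_depth`, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_filter(x, r):
    """(2r+1) x (2r+1) box filter with edge replication."""
    k = 2 * r + 1
    pad = np.pad(x, r, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def upsample_depth(depth_lr, hi_shape, hypotheses, radius=1):
    """Cost-volume up-sampling sketch: nearest-neighbour up-sample
    the coarse depth, build one cost slice per depth hypothesis,
    smooth each slice (the box filter is a stand-in for guided
    filtering on the texture image), and pick the cheapest
    hypothesis per pixel (winner-take-all)."""
    H, W = hi_shape
    fy, fx = H // depth_lr.shape[0], W // depth_lr.shape[1]
    depth_up = np.repeat(np.repeat(depth_lr, fy, axis=0), fx, axis=1)
    volume = np.stack([box_filter(np.abs(depth_up - h), radius)
                       for h in hypotheses])
    return np.asarray(hypotheses, dtype=float)[np.argmin(volume, axis=0)]
```

Because the winner is chosen per pixel after filtering, depth edges snap to whichever hypothesis dominates the local window instead of being blurred across the discontinuity.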
Novel motion prediction for multi-view video coding using global disparity
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611924
Jung-Hak Nam, I. Bajić, D. Sim
In this paper, we present an efficient motion and disparity prediction method for multi-view video coding based on the High Efficiency Video Coding (HEVC) standard. The proposed method exploits inter-view candidates for effective prediction of the motion or disparity vector to be coded. The inter-view candidates include not only motion vectors of adjacent views, but also global disparities across views. We found that motion vectors coded earlier in an adjacent view are helpful in predicting the current motion vector, reducing the number of bits spent on motion vector information. In addition, the proposed disparity prediction with global disparity is effective for inter-view prediction. To evaluate the proposed algorithm, we implemented the proposed correspondence prediction method on a multi-view platform based on HEVC. We found that the proposed algorithm yields a coding gain of around 2.9% in the high-efficiency-configuration random access mode.
Citations: 0
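A minimal sketch of the inter-view candidate idea, assuming a block-grid motion field and an H.264/HEVC-style component-wise median predictor; the abstract does not give the exact candidate construction, so every name and interface below is illustrative.

```python
def mv_predictor(adj_view_mvs, bx, by, global_disparity_blocks, spatial_cands):
    """Extend the spatial candidate list with an inter-view
    candidate: the motion vector of the block in the adjacent view
    displaced by the global disparity, then take a component-wise
    median as the predictor.

    adj_view_mvs: row-major grid of (mvx, mvy) tuples from the
    adjacent view; global_disparity_blocks: global disparity in
    block units (both are assumed data layouts)."""
    cands = list(spatial_cands)
    gx = bx + global_disparity_blocks
    if 0 <= by < len(adj_view_mvs) and 0 <= gx < len(adj_view_mvs[0]):
        cands.append(adj_view_mvs[by][gx])  # inter-view candidate
    xs = sorted(c[0] for c in cands)
    ys = sorted(c[1] for c in cands)
    return xs[len(xs) // 2], ys[len(ys) // 2]
```

When the inter-view candidate agrees with the true motion, the residual vector to code shrinks, which is where the bit savings come from.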
Crosstalk reduction in stereoscopic displays: A combined approach of disparity adjustment and crosstalk cancellation
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611908
Hosik Sohn, Yong Ju Jung, Seong-il Lee, Yong Man Ro
This paper proposes a novel crosstalk reduction method for stereoscopic 3D displays that combines disparity adjustment with subtractive crosstalk cancellation. Specifically, we propose a disparity adjustment method that minimizes both the perceived crosstalk and the negative effects of crosstalk cancellation on image quality. In addition, we provide a contrast reduction method optimized for subtractive crosstalk cancellation. Experimental results show that the proposed method provides higher image quality than existing crosstalk cancellation methods while successfully reducing the perceived crosstalk.
Citations: 0
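A minimal sketch of subtractive cancellation plus contrast reduction, assuming a simple linear leakage model in which each eye perceives its own view plus a fraction `alpha` of the other view. Compressing both views into `[alpha*255, 255]` is a stand-in for the paper's optimized contrast reduction; it guarantees the subtraction never clips below zero.

```python
import numpy as np

def cancel_crosstalk(left, right, alpha=0.05):
    """Subtractive crosstalk cancellation sketch for 8-bit views.
    Step 1 (contrast reduction): raise the black level so values
    live in [alpha*255, 255]. Step 2: subtract the predicted
    leakage (alpha times the opposite view)."""
    lo = alpha * 255.0
    Lc = lo + (1.0 - alpha) * np.asarray(left, dtype=float)
    Rc = lo + (1.0 - alpha) * np.asarray(right, dtype=float)
    # Subtract the leakage each eye will receive from the other view.
    return Lc - alpha * Rc, Rc - alpha * Lc
```

The worst case for clipping is a black pixel paired with a white pixel in the other view; with the raised black level that case lands exactly at 0 instead of going negative.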
Planar urban scene reconstruction from spherical images using facade alignment
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611923
Hansung Kim, A. Hilton
We propose a plane-based urban scene reconstruction method using spherical stereo image pairs. We assume that the urban scene consists of axis-aligned, approximately planar structures (Manhattan world). Captured spherical stereo images are converted into six central-point perspective images by cubic projection and facade alignment. Facade alignment automatically identifies the principal plane directions in the scene, allowing the cubic projection to preserve the plane structure. Depth information is recovered by stereo matching between images, and independent 3D rectangular planes are constructed by plane fitting aligned with the principal axes. Finally, planar regions are refined by expansion, intersection detection, and visibility-based cropping. The reconstructed model efficiently represents the structure of the scene, and texture mapping allows natural walk-through rendering.
Citations: 11
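Under the Manhattan-world assumption, plane fitting aligned with the principal axes reduces to choosing a coordinate axis as the normal. A toy sketch with an assumed interface; it illustrates only the axis-aligned fitting step, not the cubic projection, stereo matching, or refinement stages.

```python
import numpy as np

def fit_axis_aligned_plane(points):
    """Fit an axis-aligned plane to a near-planar 3D region:
    the normal is the coordinate axis along which the points vary
    least, and the plane offset is the mean coordinate along that
    axis. Returns (axis_index, offset)."""
    pts = np.asarray(points, dtype=float)
    axis = int(np.argmin(pts.var(axis=0)))  # least-variance axis = normal
    return axis, float(pts[:, axis].mean())
```

For a facade whose points scatter widely in x and y but barely in z, this returns axis 2 and the facade's mean depth, i.e. the plane z = offset.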
A new common-hole filling algorithm for virtual view synthesis with a probability mask
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611935
M. Ko, Dongwook Kim, Jisang Yoo
In this paper, a new common-hole filling algorithm with a probability mask for virtual view synthesis is proposed. The proposed algorithm combines the strengths of the spiral weighted average algorithm and the gradient searching algorithm: the spiral weighted average algorithm preserves the boundary of each object well by using depth information, while the gradient searching algorithm preserves details. We also reduce the flickering artifacts around the filled common-hole regions by using a probability mask. The experimental results show that the proposed algorithm performs much better than conventional algorithms.
Citations: 3
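The weighted-average side of common-hole filling can be sketched as an outward ring search with inverse-distance weights. The depth-based background preference, the gradient search, and the probability mask of the actual algorithm are all omitted here, and the function name is an assumption.

```python
import numpy as np

def ring_fill(img, hole_mask):
    """For every hole pixel, walk square rings of increasing radius
    and fill it with the inverse-distance weighted average of the
    first ring that contains known (non-hole) pixels."""
    out = img.astype(float).copy()
    H, W = img.shape
    for y, x in zip(*np.where(hole_mask)):
        for r in range(1, max(H, W)):
            vals, wts = [], []
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    if max(abs(dy), abs(dx)) != r:
                        continue  # visit the ring boundary only
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < H and 0 <= nx < W and not hole_mask[ny, nx]:
                        vals.append(img[ny, nx])
                        wts.append(1.0 / (dy * dy + dx * dx) ** 0.5)
            if vals:  # stop at the first ring with known pixels
                out[y, x] = np.average(vals, weights=wts)
                break
    return out
```

Stopping at the first non-empty ring keeps the fill local, which is what preserves object boundaries better than a global average would.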
Effect of absence on visual perception and discomfort
Pub Date : 2013-06-10 DOI: 10.1109/IVMSPW.2013.6611915
Bochao Zou, Yue Liu, Yongtian Wang, Tao Huang, Q. Zhu
Prior research on binocular mismatches in stereoscopic images has mostly focused on optical errors (magnification, shift, rotation, distortion) and photometric asymmetries (colour, luminance, definition). In this paper, a type of binocular mismatch, the effect of absence, is investigated to determine whether the partial loss of an object in either image of a stereo pair influences human fusion and depth perception when a certain disparity is provided; two situations are considered: with overlap and without overlap. Implications of the effect of absence for the fusion mechanism and visual comfort are also discussed. Experimental results support the conclusion that the effect of absence can cause misperception and contribute to visual discomfort.
Citations: 0