
Latest publications from the 2012 International Conference on Virtual Reality and Visualization

Pose Measurement of a GEO Satellite Based on Natural Features
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.16
Xiaodong Du, Bin Liang, Wenfu Xu, Xueqian Wang, Jianghua Yu
In order to perform an on-orbit servicing mission, the robotic system must first approach and dock with the target autonomously, for which the measurement of relative pose is key. This is a challenging task since existing GEO satellites are generally non-cooperative, i.e. no artificial marker is mounted to aid the measurement. In this paper, a method based on natural features is proposed to estimate the pose of a GEO satellite in the R-bar final-approach phase. The adapter ring and the bottom edges of the satellite are chosen as the objects to be recognized. From the circular feature, the relative position can be resolved, but two candidate orientations are obtained. The vanishing points formed by the bottom edges are applied to resolve this orientation duality, so that the on-board camera requires no specific motions. The corresponding algorithms for image processing and pose estimation are presented. Computer simulations verify the proposed method.
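The orientation duality from the circular feature can be resolved with a single dot-product test once the bottom-edge vanishing point yields an estimate of the satellite's axis. A minimal sketch of that disambiguation step (function name and inputs are hypothetical, not the paper's implementation):

```python
import numpy as np

def resolve_orientation_duality(normal_a, normal_b, vp_direction):
    """Pick the circle-normal candidate most consistent with the axis
    direction recovered from the bottom-edge vanishing point."""
    normal_a = normal_a / np.linalg.norm(normal_a)
    normal_b = normal_b / np.linalg.norm(normal_b)
    vp_direction = vp_direction / np.linalg.norm(vp_direction)
    # The true adapter-ring normal should align with the satellite axis
    # implied by the vanishing point; the spurious solution should not.
    score_a = abs(np.dot(normal_a, vp_direction))
    score_b = abs(np.dot(normal_b, vp_direction))
    return normal_a if score_a >= score_b else normal_b
```

Because the test is a single comparison per frame, it adds no meaningful cost to the pose pipeline and needs no camera motion.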
Citations: 9
Automatic Image Annotation Based on Sparse Representation and Multiple Label Learning
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.11
Feng Tian, Sheng Xu-kun, Shang Fu-hua, Zhou Kai
Automatic image annotation has emerged as an important research topic due to its potential applications in both image understanding and web image search. Because of the inherent ambiguity of image-label mapping, systematically developing robust, high-performance annotation models remains a challenge. In this paper, we present an image annotation framework based on Sparse Representation and Multi-Label Learning (SCMLL), which aims to take full advantage of sparse image representation and the multi-label learning mechanism to address the annotation problem. We first treat each image as a sparse linear combination of other images, and then consider the component images as the nearest neighbors of the target image, based on a sparse representation computed by L1 minimization. Based on statistical information gathered from the label sets of these neighbors, a multiple-label learning algorithm based on the maximum a posteriori (MAP) principle is presented to determine the tags for the unlabeled image. Experiments on a well-known data set demonstrate that the proposed method is beneficial for the image annotation task and outperforms most existing image annotation algorithms.
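The sparse-neighbor step can be sketched with a plain ISTA solver for the L1-minimization, followed by label voting weighted by the recovered coefficients. This is an illustrative reconstruction, not the authors' code; `annotate` and its simple score-based tagging stand in for the paper's MAP formulation:

```python
import numpy as np

def sparse_coefficients(D, y, lam=0.1, n_iter=500):
    """Solve min_w 0.5*||D w - y||^2 + lam*||w||_1 with ISTA,
    where columns of D are the other images' feature vectors."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    w = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ w - y)
        z = w - grad / L
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return w

def annotate(D, y, label_matrix, top_k=3):
    """Score labels of the sparse 'neighbor' images, weighted by their
    (non-negative) coefficients, and return the top-k label indices."""
    w = np.maximum(sparse_coefficients(D, y), 0.0)
    scores = label_matrix.T @ w            # one score per label
    return np.argsort(scores)[::-1][:top_k]
```

With an orthonormal dictionary the ISTA fixed point is the usual soft-thresholded projection, which makes the sketch easy to sanity-check.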
Citations: 2
Assistive Learning for Hearing Impaired College Students using Mixed Reality: A Pilot Study
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.20
Xun Luo, Mei Han, Tao Liu, Weikang Chen, Fan Bai
High-quality college education for hearing impaired students is a challenging task. The most common practices nowadays rely heavily on specially trained instructors, in-class and after-class tutors, as well as accessible infrastructure such as speech-to-text services. Such approaches require significant manpower investments from educators, staff, and volunteers, yet remain highly susceptible to quality-control and wide-deployment issues. With a proven record in education, mixed reality has the potential to serve as a useful assistive learning technology for hearing impaired college students. However, the fundamental technical and theoretical questions for this proposed endeavor remain largely unanswered, which motivated us to conduct this pilot study to explore its feasibility. We designed and implemented a mixed reality system that simulated in-class assistive learning, and tested it at China's largest higher-education institute for the hearing impaired. Fifteen hearing impaired college students took part in the experiments and studied a subject that is not part of their regular curriculum. Results showed that the mixed reality techniques were effective for in-class assistance, with moderate side effects. As a first step, this study validated the hypothesis that mixed reality can be used as an assistive learning technology for hearing impaired college students. It also opened the avenue to our planned next phases of mixed reality research for this purpose.
Citations: 10
Real-time Continuous Geometric Calibration for Projector-Camera System under Ambient Illumination
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.15
Yuqi Li, Niguang Bao, Qingshu Yuan, Dongming Lu
This paper presents a fast continuous geometric calibration method for a projector-camera system under ambient light. Our method estimates an appropriate exposure time to prevent features in the captured image from degrading, and adopts the ORB descriptor to match feature pairs in real time. The adaptive exposure method has been verified with different exposure values and proved to be effective. We also implement our real-time continuous calibration method on a dual-projection display. The calibration process completes smoothly within 5 frames.
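The adaptive-exposure idea can be sketched as scaling the exposure time so the bright tail of the intensity histogram sits just under saturation, assuming a roughly linear sensor response. The function name, target value, and percentile are illustrative assumptions, not values from the paper:

```python
import numpy as np

def adapt_exposure(image, current_exposure_ms, target=220.0, percentile=99):
    """Scale the exposure time so the bright tail of the histogram sits
    just below saturation (assumes a roughly linear sensor response)."""
    bright = np.percentile(image, percentile)   # near-maximum intensity
    bright = max(bright, 1e-6)                  # guard against all-dark frames
    return current_exposure_ms * target / bright
```

Keeping the brightest features below 255 is what preserves them for the subsequent ORB matching step.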
Citations: 4
GPU Based Compression and Rendering of Massive Aircraft CAD Models
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.8
Tan Dunming, Zhao Gang, Yu Lu
The total size of a massive aircraft CAD model is usually up to several GB, which exceeds not only the storage capacity of memory but also the rendering ability of the graphics card. In this paper, we present compression and rendering methods that exploit up-to-date GPU techniques. To fit into memory, vertex data are compressed from float to byte type using bounding-box information and then decompressed on the GPU. Index data use short or byte type according to the vertex count, while normal data are discarded and regenerated by the GPU during rendering. For real-time rendering, vertex buffer objects are used instead of traditional display lists for efficiency, and GPU occlusion queries cull occluded parts to lower the rendering load. Furthermore, deliberately designed GPU shaders are applied to optimize the traditional rendering pipeline. The experimental results show that with the GPU-based methods, compression rates reach 5.3, massive CAD models such as a regional jet can be compressed to within 178 MB and fit into the memory of a personal computer, and rendering frame rates reach up to 40 with a cheap graphics card. This proves that our method maximizes GPU capabilities to accelerate the real-time rendering of massive aircraft CAD models.
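The float-to-byte vertex compression with bounding-box information amounts to uniform quantization per axis. A CPU sketch of both directions (in the paper the decompression runs on the GPU; function names are illustrative):

```python
import numpy as np

def quantize_vertices(vertices):
    """Compress float vertex positions to uint8 using the bounding box:
    each axis is rescaled to [0, 255] and rounded."""
    vmin = vertices.min(axis=0)
    vmax = vertices.max(axis=0)
    scale = np.where(vmax > vmin, vmax - vmin, 1.0)   # avoid zero extent
    q = np.round((vertices - vmin) / scale * 255.0).astype(np.uint8)
    return q, vmin, scale

def dequantize_vertices(q, vmin, scale):
    """GPU-side decompression, emulated on the CPU: byte -> float."""
    return q.astype(np.float32) / 255.0 * scale + vmin
```

The worst-case error per axis is half a quantization step, i.e. `scale / 510`, which is why the bounding box must be stored alongside the byte data.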
Citations: 1
Automatic generation of large scale 3D cloud based on weather forecast data
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.19
W. Wenke, Guo Yumeng, Xiong Min, Li Sikun
3D cloud scene generation is widely used in computer graphics and virtual reality. Most existing methods for 3D cloud visualization first model the cloud based on its physical mechanism, and then solve the cloud's illumination model to generate 3D scenes. However, such methods cannot show the real weather conditions. Moreover, the existing cloud visualization methods based on weather forecast data cannot be applied to large-scale 3D cloud scenes because the illumination model is complicated to solve. Borrowing the idea of a particle system, this paper proposes an algorithm for automatic generation of large-scale 3D clouds based on weather forecast data. The algorithm treats each grid point in the data as a particle whose optical parameters are determined by the input data. Multiple forward scattering is used to calculate the incident color of each particle, and first-order scattering is used to determine the color incident on the observer. Experimental results demonstrate that our algorithm can not only generate realistic 3D cloud scenes from weather forecast data, but also achieve interactive frame rates on data containing millions of grid points.
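The per-ray accumulation over depth-sorted particles can be illustrated with standard front-to-back alpha compositing. This sketch omits the multiple-forward-scattering term the paper uses to compute each particle's incident color, and all names are illustrative:

```python
import numpy as np

def composite_particles(colors, alphas, depths):
    """Front-to-back alpha compositing of cloud particles along one view
    ray: colors (n,3), alphas (n,), depths (n,)."""
    order = np.argsort(depths)             # nearest particle first
    out = np.zeros(3)
    transmittance = 1.0
    for i in order:
        out += transmittance * alphas[i] * colors[i]
        transmittance *= 1.0 - alphas[i]
        if transmittance < 1e-3:           # early ray termination
            break
    return out, transmittance
```

Early ray termination is one reason front-to-back ordering scales to millions of particles: once the ray is opaque, the remaining particles are skipped.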
Citations: 7
A Network-Oriented Application of Satellite Remote Sensing Circulation Architecture
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.17
Lingda Wu, Rui Cao, Y. Bian, Jie Jiang
In order to provide rapid, easy, and circulatory capabilities for multipurpose applications of remote sensing images, a network-oriented satellite remote sensing circulation architecture is proposed, and its key components are discussed. Client-side planning and block-data handling supplement the complete circulation architecture; conforming to it, clients can distribute the remote sensing images they obtain and in turn act as servers, realizing circulated utilization of remote sensing images.
Citations: 0
Interactive Continuous Erasing and Clustering in 3D
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.21
Shen Enya, Wang Wen-ke, Li Si-kun, Cai Xun
As an important visualization technique, volume rendering is widely used in many fields. However, occlusion is one of the key problems that hamper traditional volume rendering. To see important features in a dataset, users have to modify the transfer functions by trial and error, which is time-consuming and indirect. In this paper, we provide interactive continuous erasing, which lets users quickly isolate the features they are interested in, and an interactive clustering method to view classified features. The first method maps the user's direct operations on the screen to the 3D data space in real time, and then changes the rendering results according to the mode the user selects. Users can operate directly on the 3D rendering results on the screen and filter out any uninteresting parts as they wish. The second method uses a Gaussian Mixture Model (GMM) to cluster the raw data into different parts. We verify the general practicality of our methods on various datasets from different areas.
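The GMM clustering step can be illustrated with a plain EM fit of a one-dimensional Gaussian mixture to scalar voxel values. This is an illustrative sketch, not the authors' implementation; the quantile-based initialization is an added assumption:

```python
import numpy as np

def gmm_em_1d(x, k=2, n_iter=100):
    """Fit a 1-D Gaussian mixture to scalar voxel values with EM and
    return hard cluster assignments plus the component means."""
    mu = np.quantile(x, np.linspace(0.0, 1.0, k))   # spread initial means
    var = np.full(k, x.var() + 1e-6)
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each sample
        d = x[:, None] - mu[None, :]
        p = pi * np.exp(-0.5 * d**2 / var) / np.sqrt(2.0 * np.pi * var)
        r = p / (p.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: re-estimate weights, means, and variances
        nk = r.sum(axis=0) + 1e-12
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu[None, :])**2).sum(axis=0) / nk + 1e-6
    return r.argmax(axis=1), mu
```

The resulting hard assignments are what a viewer would use to toggle or color the classified features.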
Citations: 0
A Motion Parallax Rendering Approach to Real-time Stereoscopic Visualization for Aircraft Virtual Assembly
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.9
Junjie Xue, Gang Zhao, Dunming Tan
As a cue to depth perception, motion parallax can bring stereoscopic visualization closer to natural human vision. Stereoscopic visualization with motion parallax rendering can also lessen fatigue when people are immersed in virtual scenes for a long time. This paper presents a three-stage approach to real-time stereoscopic visualization (SV) with motion parallax rendering (MPR), consisting of head motion sensing, head-camera motion mapping, and stereo pair generation. The theory and algorithm for each stage are presented. This paper also reviews the head tracking technologies and stereoscopic rendering methods most used in virtual and augmented reality. A demo application is developed to show the efficiency and adaptability of the algorithms. The experimental results show that our algorithm for SV with MPR is robust and efficient, and that aircraft virtual assembly environments with motion parallax rendering can guarantee better interaction experiences and higher assembly efficiency.
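The head-camera motion mapping stage boils down to offsetting two virtual cameras from the tracked head position along the head's right axis. A minimal sketch, assuming a fixed interpupillary distance (the function name and the 64 mm default are illustrative, not from the paper):

```python
import numpy as np

def stereo_eyes(head_pos, view_dir, up, ipd=0.064):
    """Left/right virtual camera positions from the tracked head position,
    each offset half the interpupillary distance along the right axis."""
    view_dir = view_dir / np.linalg.norm(view_dir)
    right = np.cross(view_dir, up)         # head's right axis
    right = right / np.linalg.norm(right)
    offset = right * (ipd / 2.0)
    return head_pos - offset, head_pos + offset
```

Re-evaluating these eye positions every frame from the sensed head pose is what produces the motion parallax in the generated stereo pair.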
Citations: 1
Non-uniform Illumination Representation based on HDR Light Probe Sequences
Pub Date : 2012-09-14 DOI: 10.1109/ICVRV.2012.18
Jian Hu, Tao Yu, L. Wang, Zhong Zhou, Wei Wu
This paper presents a method to represent the complicated illumination of the real world using HDR light probe sequences. The illumination representations proposed in this paper employ a non-uniform structure instead of a uniform light field to simulate lighting with spatial and angular variation, which turns out to be more efficient and accurate. The captured illumination is divided into direct and indirect parts that are modeled separately. Both integrate easily with a global illumination algorithm: the direct part is organized as a set of clusters on a virtual plane, which successfully solves the lighting occlusion problem, while the indirect part is represented as a bounding mesh with an HDR texture. This paper demonstrates the technique of capturing real illumination for virtual scenes, and also compares it against renderings using traditional image-based lighting.
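Organizing the direct part as clusters on a virtual plane can be illustrated with intensity-weighted k-means over light-sample positions. This is a sketch under that assumption; the paper's actual clustering criterion is not specified here, and all names are hypothetical:

```python
import numpy as np

def cluster_lights(positions, intensities, k=4, n_iter=50):
    """Group direct-light samples on the virtual plane into k clusters
    (plain k-means); each cluster becomes one representative light."""
    # Deterministic init: spread seeds across the sample array.
    centers = positions[np.linspace(0, len(positions) - 1, k).astype(int)]
    labels = np.zeros(len(positions), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(positions[:, None] - centers[None, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                # Weight each sample by its radiance so bright samples
                # pull the representative light toward them.
                centers[j] = np.average(positions[mask], axis=0,
                                        weights=intensities[mask])
    return centers, labels
```

Reducing thousands of probe samples to a handful of representative lights is what keeps the direct part cheap inside a global illumination loop.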
{"title":"Non-uniform Illumination Representation based on HDR Light Probe Sequences","authors":"Jian Hu, Tao Yu, L. Wang, Zhong Zhou, Wei Wu","doi":"10.1109/ICVRV.2012.18","DOIUrl":"https://doi.org/10.1109/ICVRV.2012.18","url":null,"abstract":"This paper presents a method to represent the complicated illumination in the real world by using HDR light probe sequences. The illumination representations proposed in this paper employ non-uniform structure instead of uniform light field to simulate lighting with spatial and angular variation, which turns out to be more efficient and accurate. The captured illuminations are divided into direct and indirect parts that are modeled respectively. Both integrated with global illumination algorithm easily, the direct part is organized as an amount of clusters on a virtual plane, which can solve the lighting occlusion problem successfully, while the indirect part is represented as a bounding mesh with HDR texture. This paper demonstrates the technique that captures real illuminations for virtual scenes, and also shows the comparison with the renderings using traditional image based lighting.","PeriodicalId":421789,"journal":{"name":"2012 International Conference on Virtual Reality and Visualization","volume":"356 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2012-09-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133698328","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Journal
2012 International Conference on Virtual Reality and Visualization