
Latest publications from the 2017 International Conference on Virtual Reality and Visualization (ICVRV)

Multi-events Driven Emotion Dynamic Generation Using Hawkes Process
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00034
Xiang Nan, Zhang Mingmin, Long Jianwu
Multi-event driven emotion generation is an important research topic in the affective computing field. However, because events are of different types and occur at variable times, computing the emotion-state intensity is challenging. Existing solutions to this problem do not take the temporal influence of different event types into consideration. To solve this problem, we propose a Hawkes-process-based multi-event driven emotion generation method. First, we appraise every event and generate the related emotional reaction; second, we treat the emotion generation process over a given period as a point process and train the parameters of the Hawkes process by maximum likelihood estimation on real individual emotional reactions; third, we use the Hawkes process to simulate the accumulated emotional reactions. The experimental results show that our method can generate multi-event driven emotions more accurately and efficiently.
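The abstract does not give the exact parameterization; a minimal sketch of the standard exponentially decaying Hawkes conditional intensity, with illustrative parameters `mu`, `alpha`, `beta` (the paper fits its parameters by maximum likelihood):

```python
import numpy as np

def hawkes_intensity(t, event_times, mu=0.5, alpha=0.8, beta=1.2):
    """Conditional intensity lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))
    over past events t_i < t. Here events stand for appraised stimuli and the
    intensity stands in for the accumulated emotional reaction."""
    event_times = np.asarray(event_times, dtype=float)
    past = event_times[event_times < t]
    return mu + alpha * np.exp(-beta * (t - past)).sum()

# With events at t=1 and t=2, the intensity shortly after t=2 exceeds the
# baseline mu, then relaxes back toward it as the excitation decays.
print(hawkes_intensity(2.5, [1.0, 2.0]))
```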
Citations: 0
Efficiency Group Interaction Between Participants and Large Display
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00063
Hao Jiang, Chang Gao, Tianlu Mao, Hui Li, Zhaoqi Wang
In recent years there has been increasing interest in human-computer interaction research on large display systems. We contribute a design of an efficient group interaction technique and its application to a large display, using optical technology to track markers on users' hands. Participants were asked to execute predefined interaction actions based on the contents of the application screen. With the help of well-designed interaction processes and algorithms, our system can receive response signals from multiple users and trigger the corresponding actions in real time. Moreover, we designed a performance experiment to evaluate the recognition and response results in a practical application. The experimental results demonstrate that, while integrating group interaction into a large display system, this framework can effectively support user participation.
Citations: 0
Development of Battlefield Situation Display System Based on ArcGIS Engine Software
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00117
Jiaqi Wang, Yongting Wang, Feng Yang
Modern warfare is characterized by a large amount of information, and military operators need assistance to analyze the situation accurately. Based on ArcGIS Engine, this paper designs and implements a modular two-dimensional battlefield situation display system. The function of each module in the system is discussed and analyzed, and some key technologies of the system are introduced. The demonstration results validate the situation display system.
Citations: 0
End-to-End Cascade CNN for Simultaneously Face Detection and Alignment
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00016
Sanyuan Zhao, Hongmei Song, Weilin Cong, Q. Qi, Hui Tian
Real-world face detection and alignment demand an advanced discriminative model to address challenges posed by pose, lighting, and expression. Recent studies have utilized the relation between face detection and alignment to make models computationally efficient, but they ignore the connection between the cascaded CNNs. In this paper, we combine detection, calibration, and alignment in each cascade structure and propose end-to-end cascade training with Online Hard Example Mining (OHEM), which accelerates convergence. Experiments on FDDB and AFLW demonstrate considerable improvements in accuracy and speed.
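The abstract does not show the training loop; a minimal sketch of the selection step OHEM performs each mini-batch — keep only the highest-loss examples for backpropagation (`keep_ratio` is an illustrative hyperparameter, not from the paper):

```python
import numpy as np

def ohem_select(losses, keep_ratio=0.7):
    """Online Hard Example Mining: given per-sample losses for a mini-batch,
    return the (sorted) indices of the hardest examples, i.e. the k largest
    losses; only these contribute to the gradient step."""
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(len(losses) * keep_ratio))
    hard_idx = np.argsort(losses)[::-1][:k]   # indices of the k largest losses
    return np.sort(hard_idx)

# Keep the hardest half of a batch of four samples.
print(ohem_select([0.1, 0.9, 0.05, 0.7], keep_ratio=0.5))
```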
Citations: 1
Semantic Scene Reconstruction Using the DenseCRF Model
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00121
Zhixin Ma, Chong Cao, Xukun Shen
With the rapid growth of the virtual reality industry, fast and accurate algorithms for scene reconstruction and understanding have become a research focus in related fields. Traditional methods treat 3D modeling and scene understanding as two separate problems and solve them independently. In this paper, we propose a new method to reconstruct semantic 3D models from multi-view images. This method not only contains information about points in 3D space, but also builds up their relationship with pixels in the images. We conduct experiments on four challenging real-world datasets to test the effectiveness of the proposed method. The reconstruction can be directly applied to virtual reality applications, such as roaming in 3D scenes.
Citations: 0
Image Tactile Perception with an Improved JSEG Algorithm
Pub Date : 2017-10-01 DOI: 10.23940/IJPE.18.01.P9.7788
Yang Wenzhen, Luo Jiali, Li Xin, Wu Xinli, Jiang Zhaona, Pan Zhigeng
Image tactile interaction is a new form of interaction between humans and digital images, allowing users to explore digital images by touch. To improve the authenticity of image tactile perception, this paper proposes a tactile perception model based on image region features. Addressing the over-segmentation and computational complexity of the JSEG algorithm, we propose an improved JSEG algorithm that effectively reduces the computational cost and partitions the image into regions that better match subjective visual judgment; it can be used for region-based image tactile generation. The experimental results show that the proposed algorithm correctly distinguishes image regions and improves the accuracy of image tactile perception.
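As a rough illustration of the J-measure that the original JSEG algorithm is built on (not the paper's improved variant): J = (S_T − S_W) / S_W over a window of quantized colour-class labels, where S_T is the total variance of pixel positions and S_W the sum of within-class position variances; spatially separated classes give a high J, mixed classes a low one.

```python
import numpy as np

def j_value(class_map):
    """J = (S_T - S_W) / S_W for a window of class labels, as in JSEG:
    S_T is the total variance of pixel positions, S_W the summed variance
    of positions within each quantized colour class."""
    class_map = np.asarray(class_map)
    ys, xs = np.indices(class_map.shape)
    pos = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    s_t = ((pos - pos.mean(axis=0)) ** 2).sum()
    s_w = 0.0
    for c in np.unique(class_map):
        p = pos[class_map.ravel() == c]
        s_w += ((p - p.mean(axis=0)) ** 2).sum()
    return (s_t - s_w) / s_w

# Two homogeneous halves (separated classes) score higher than a checkerboard.
left_right = np.array([[0, 0, 1, 1]] * 4)
checker = np.indices((4, 4)).sum(axis=0) % 2
print(j_value(left_right) > j_value(checker))
```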
Citations: 6
A Novel Dynamic Mesh Sequence Compression Framework for Progressive Streaming
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00019
Bailin Yang, Zhaoyi Jiang, Yan Tian, Jiantao Shangguan, Chao Song, Yibo Guo, Mingliang Xu
In this work, a novel three-dimensional (3D) mesh sequence compression framework suitable for progressive streaming is described. The proposed approach first applies a temporal frame-clustering algorithm based on the curvature of pivot vertex trajectories. Then, a decorrelation method is used to remove redundancy among the x, y, and z coordinates. Next, to reduce the amount of mesh sequence data, the vertex motion trajectory data in each cluster are compressed using principal component analysis (PCA). Further, the coefficients of the x, y, and z coordinates obtained from the principal components are treated as mesh signals, which are processed by a spectral graph wavelet transform (SGWT). Finally, the resulting wavelet coefficients are encoded using CSPECK. By transmitting data on different bit-planes from the encoder, a 3D mesh sequence is encoded into a multi-resolution sequence. Experimental results show that the proposed method realizes progressive streaming of mesh sequences and outperforms state-of-the-art methods in terms of storage space requirements and reconstruction error.
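The PCA compression step can be sketched as follows; the clustering, SGWT, and CSPECK stages are omitted, and the component count is illustrative. Trajectories that lie near a low-dimensional subspace are represented by a few coefficients per vertex plus a shared basis.

```python
import numpy as np

def pca_compress(trajectories, n_components=2):
    """Compress vertex trajectories (n_vertices x n_frames) by projecting
    onto the top principal components; returns the per-vertex coefficients
    and the reconstruction, for checking the approximation error."""
    X = np.asarray(trajectories, dtype=float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # SVD of the centred data; rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    coeffs = Xc @ Vt[:n_components].T          # compressed representation
    recon = coeffs @ Vt[:n_components] + mean  # low-rank reconstruction
    return coeffs, recon

# Synthetic rank-1 trajectory data: one component reconstructs it exactly.
rng = np.random.default_rng(0)
base = rng.normal(size=(1, 20))
X = np.vstack([base * s for s in np.linspace(0.5, 1.5, 50)])
coeffs, recon = pca_compress(X, n_components=1)
print(np.allclose(recon, X))
```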
Citations: 1
A Realistic Modeling and Real Time Rendering Method of Fruit Decay Based on Interactive Design
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00033
Sheng Wu, Teng Miao, Boxiang Xiao, Xinyu Guo
This paper presents a general 3D approach for simulating the decay process in fruits, providing a visual model for the digital design of fruit. A global decay parameter and a decay resistance parameter let users control the dynamic simulation of decay. The decay resistance parameter is set for every point on the 3D fruit model by an interactive design method similar to traditional drawing tools; the resistance parameters finally form a texture of the decay region on the fruit surface. Given the resistance parameters, the degree of decay, which controls the shape and appearance of the decayed surface, can be computed by tuning the global decay parameter. For the shape of the decay region, an exponential function is given for calculating the depression displacement caused by decay. To render wrinkles in the decay region, a noise normal map is used to perturb the normal vectors of the fruit model. We verified the method by simulating a rotting apple; the results show that a dynamic, real-time, realistic simulation can be obtained with our flexible, fast, and general method. We believe this approach is suitable as a visualization model for fruit digital design.
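The abstract states only that an exponential function maps decay to a depression displacement; the exact form, the resistance combination, and the constants below are assumptions for illustration, not the paper's formulas.

```python
import math

def decay_degree(global_decay, resistance):
    """Hypothetical per-vertex decay degree: the global decay parameter
    attenuated by the painted per-vertex resistance, clamped to [0, 1]."""
    return max(0.0, min(1.0, global_decay * (1.0 - resistance)))

def depression_displacement(d, max_depth=1.0, k=3.0):
    """Hypothetical exponential depression: depth approaches max_depth
    as the decay degree d in [0, 1] grows; k sets how fast."""
    return max_depth * (1.0 - math.exp(-k * d))

# A vertex with resistance 0.25 under global decay 0.8 has degree 0.6
# and a correspondingly deep depression.
d = decay_degree(0.8, resistance=0.25)
print(round(depression_displacement(d), 4))
```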
Citations: 0
Color-Guided Coarse Registration Method Based on RGB-D Data
Pub Date : 2017-10-01 DOI: 10.1109/icvrv.2017.00108
Benyue Su, Wei Han, Yusheng Peng, Min Sheng
This paper proposes a coarse registration method based on RGB-D data. Feature points are obtained from a mixed feature, and their corresponding points are searched in the target point cloud according to a feature descriptor. The feature points are divided into several partitions, and a rigid transformation is calculated between the corresponding point pairs in each partition; the optimal rigid transformation is then chosen from among the per-partition transformations. The mixed feature is constructed from the geometric and color information of neighborhood points, and the feature descriptor is built from the mixed feature and normalized RGB values. The experimental results demonstrate that the method is effective for RGB-D data.
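The abstract does not specify how the per-partition rigid transformation is solved; a standard choice is the SVD-based (Kabsch/Umeyama) least-squares solution for corresponding point pairs, sketched here:

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) with Q ≈ R @ P + t, computed
    via the SVD-based Kabsch solution; P and Q are 3 x N arrays of
    corresponding points (e.g. one partition's point pairs)."""
    cp, cq = P.mean(axis=1, keepdims=True), Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known rotation about z and a known translation.
rng = np.random.default_rng(1)
P = rng.normal(size=(3, 10))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([[1.0], [2.0], [3.0]])
R, t = rigid_transform(P, R_true @ P + t_true)
print(np.allclose(R, R_true) and np.allclose(t, t_true))
```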
Citations: 0
Web Virtual Reality Oriented Collision Detection
Pub Date : 2017-10-01 DOI: 10.1109/ICVRV.2017.00118
Peipei Yang, Haoxiang Wang, Yuchen Liu
This paper proposes an optimized method of collision detection in web virtual reality (VR) environments. The proposed solution consists of two layers. Experimental results demonstrate the effectiveness of the method and its significant improvement in efficiency (measured by average frames per second) while guaranteeing proper collision-detection accuracy.
Citations: 0