H. Yamachi, Yasuyuki Souma, Y. Kambayashi, Y. Tsujimura, Tomoaki Iida
Title: Evaluation of a Technique for Collision and Object Detection with the Z-buffer in Cyber Space
DOI: 10.1109/CW.2011.28 (https://doi.org/10.1109/CW.2011.28)
Published in: 2011 International Conference on Cyberworlds
Publication date: 2011-10-04
Citations: 3
Abstract
We propose a new technique to detect objects and collisions between geometric objects in cyber space. The technique uses the depth values of the Z-buffer produced when rendering a scene, and it employs orthographic projection for collision detection. Our method uses two sets of depth values. One is obtained by rendering the cyber space from the sensor object toward a target point; this set does not include the depth values of the sensor object itself. The other is obtained by rendering only the sensor object in the reverse direction. From these two depth value sets we obtain, for each pixel, the distance between the sensor object and the other objects. The technique requires only one or two rendering passes, and it is independent of the complexity of the objects' shapes, deformation, or motion. In this paper we evaluate the efficiency of this method on the GPUs we currently use.
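The per-pixel distance computation the abstract describes can be sketched as follows. This is a hypothetical illustration, not the authors' code: the array names, the depth conventions (scene depth measured from the near plane, sensor depth measured from the far plane of the same orthographic view volume), and the `depth_extent` parameter are all assumptions made for the sketch.

```python
import numpy as np

def per_pixel_gap(d_scene, d_sensor_rev, depth_extent):
    """Signed per-pixel gap between the sensor surface and the scene.

    Assumed conventions (not from the paper):
      d_scene      -- depth of the nearest non-sensor object per pixel,
                      measured from the near plane (forward render,
                      sensor object excluded).
      d_sensor_rev -- depth of the sensor object's surface per pixel,
                      measured from the far plane (sensor rendered alone
                      in the reverse direction).
      depth_extent -- total depth of the shared orthographic view volume.
    A gap <= 0 at any pixel indicates contact/penetration.
    """
    # Convert the reverse-rendered sensor depth into the forward frame:
    sensor_front = depth_extent - d_sensor_rev
    return d_scene - sensor_front

def collides(d_scene, d_sensor_rev, depth_extent, eps=0.0):
    """Report a collision if the minimum per-pixel gap drops to eps or below."""
    gap = per_pixel_gap(d_scene, d_sensor_rev, depth_extent)
    return bool(np.min(gap) <= eps)

# Toy 2x2 example with a view volume 10 units deep.
d_scene = np.array([[4.0, 6.0],
                    [5.0, 7.0]])        # nearest obstacle per pixel
d_sensor_rev = np.full((2, 2), 7.0)     # sensor surface, reverse render
print(collides(d_scene, d_sensor_rev, 10.0))  # sensor front at depth 3 -> min gap 1 -> False
```

Because both buffers come straight from Z-buffer renders, the cost of this comparison depends only on the image resolution, which is consistent with the abstract's claim that the method is independent of object complexity.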