G. Marino, D. Vercelli, F. Tecchia, P. Gasparello, M. Bergamasco
Complex Virtual Environment applications may require computational resources exceeding the capabilities of a single machine. Our system, called "XVR Network Renderer", allows the rendering load to be distributed throughout a cluster of machines operating concurrently. The proposed solution consists of a set of software modules structured as a single-master, multiple-slave architecture. XVR is a development environment that allows rapid development of Virtual Environment applications. The master software intercepts all the OpenGL API calls performed by any XVR application, without requiring any code to be added or modified. The graphical commands are then re-executed individually by the slave clients. Each slave is typically configured to manage only a subset of the whole virtual context. Our system exploits its tight integration with the underlying XVR scene-graph manager to its advantage, providing features beyond the mere visualization of a high-resolution OpenGL context, such as head tracking, GLSL shaders, and the ability to insert (and intercept) "placemarkers" inside the broadcast OpenGL data stream. Finally, the system can be configured to work with a wide range of complex visualization setups, automatically handling stereoscopy, perspective correction, overlapping images, and other common problems, without ever changing the code of the original application. In this work we describe the proposed architecture and discuss the results of our performance analysis.
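The master-slave command-streaming idea can be sketched in miniature. This is an illustrative sketch only, with hypothetical names, not the actual XVR wire protocol: the master serializes each intercepted graphics call as a length-prefixed record, in-band "placemarker" records travel alongside the drawing commands, and each slave re-executes the stream it receives.

```python
import json

# Hypothetical sketch of a command stream between a master and its slaves.
# Record names and framing are illustrative, not the real XVR Network Renderer API.

def encode_command(name, *args):
    """Serialize one intercepted call as a length-prefixed JSON record."""
    payload = json.dumps({"cmd": name, "args": args}).encode()
    return len(payload).to_bytes(4, "big") + payload

def decode_stream(data):
    """Yield (command, args) pairs back out of a concatenated byte stream."""
    offset = 0
    while offset < len(data):
        size = int.from_bytes(data[offset:offset + 4], "big")
        record = json.loads(data[offset + 4:offset + 4 + size])
        yield record["cmd"], record["args"]
        offset += 4 + size

stream = b"".join([
    encode_command("glClearColor", 0.0, 0.0, 0.0, 1.0),
    encode_command("placemarker", "frame-start"),  # in-band marker a slave can intercept
    encode_command("glDrawArrays", "GL_TRIANGLES", 0, 36),
])
commands = list(decode_stream(stream))
```

A slave configured for a sub-region of the display would replay only the commands relevant to its viewport, applying its own projection offsets.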
{"title":"Description and Performance Analysis of a Distributed Rendering Architecture for Virtual Environments","authors":"G. Marino, D. Vercelli, F. Tecchia, P. Gasparello, M. Bergamasco","doi":"10.1109/ICAT.2007.58","DOIUrl":"https://doi.org/10.1109/ICAT.2007.58","url":null,"abstract":"Complex Virtual Environments applications may require computational resources exceeding the capabilities of a single machine. Our system, called \"XVR Network Renderer\" , allows rendering load to be distributed throughout a cluster of machines operating concurrently. The proposed solution consists in a set of software modules structured as a single-master multiple-slaves architecture. XVR is a development environment that allows rapid development of Virtual Environments applications. The master software intercepts all the OpenGL API calls performed by any XVR application, without requiring any code to be added or modified. The graphical commands are then re-executed individually by the slave clients. Each slave is typically configured to manage only a subset of the whole virtual context. Our system exploits the tight integration with the underlying XVR scene-graph manager at its own advantage, providing additional features other than the mere visualization of a high resolution OpenGL context, such as head tracking, GLSL shaders, and the ability to insert (and intercept) \"placemarkers\" inside the broadcast OpenGL data stream. Finally, the system can be configured to work with a wide range of complex visualization setups, automatically handling stereoscopy, correct perspective correction, overlapping images and other common problems, without ever changing the code of the original application. In this work we describe the proposed architecture and we discuss the results of our performance analysis.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133408300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper proposes a new virtual reality-based skill transfer and human resource development system for casting design, composed of explicit and tacit knowledge transfer systems using synchronized multimedia and a knowledge internalization system using a portable virtual environment. In the proposed system, the educational content is displayed in an immersive virtual environment, whereby a trainee may experience work at a virtual operation site. Provided that the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of knowledge and also enables the trainee to gain tacit knowledge before undergoing on-the-job training at a real operation site.
{"title":"Virtual Reality-Based Casting Skill Transfer and Human Resource Development","authors":"K. Watanuki","doi":"10.1109/ICAT.2007.60","DOIUrl":"https://doi.org/10.1109/ICAT.2007.60","url":null,"abstract":"This paper proposes a new virtual reality-based skill transfer and human resource development system for casting design, which is composed of the explicit and tacit knowledge transfer systems using synchronized multimedia and the knowledge internalization system using portable virtual environment. In our proposed system, the education content is displayed in the immersive virtual environment, whereby a trainee may experience work in the virtual site operation. Provided that the trainee has gained explicit and tacit knowledge of casting through the multimedia-based knowledge transfer system, the immersive virtual environment catalyzes the internalization of knowledge and also enables the trainee to gain tacit knowledge before undergoing on- the-job training at a real-time operation site.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129687586","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hansung Kim, R. Sakamoto, I. Kitahara, N. Orman, T. Toriyama, K. Kogure
We propose an advanced visual hull technique to compensate for outliers using the reliabilities of the silhouettes. The proposed method consists of a foreground extraction technique based on the generalized Gaussian family model and a compensated shape-from-silhouette algorithm. They are connected by the intra-/inter-silhouette reliabilities to compensate for carving errors from defective segmentation or partial occlusion which may occur in a real environment. The 3D reconstruction process is implemented on a graphics processing unit (GPU) to accelerate the processing speed by using the huge computational power of modern graphics hardware. Experimental results show that the proposed method provides reliable silhouette information and an accurate visual hull in real environments at a very high speed on a common PC.
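The core idea of compensating the carving step can be sketched as a reliability-weighted vote. This is an illustrative simplification, not the authors' exact formulation: a voxel survives when the weighted fraction of silhouettes that see it as foreground exceeds a threshold, so a single defective silhouette (e.g. one with a segmentation hole) cannot carve away a correct voxel on its own, as it would under a plain intersection.

```python
import numpy as np

def compensated_visual_hull(inside_votes, reliabilities, threshold=0.75):
    """Reliability-weighted space carving sketch.

    inside_votes:   (num_views, num_voxels) booleans from each silhouette.
    reliabilities:  per-view weights in [0, 1]; low for suspect silhouettes.
    """
    w = np.asarray(reliabilities, dtype=float)
    votes = np.asarray(inside_votes, dtype=float)
    # Weighted foreground vote per voxel, normalized by total reliability.
    score = (votes * w[:, None]).sum(axis=0) / w.sum()
    return score >= threshold

# Three views of three voxels; the third view has a hole in its silhouette
# (it wrongly misses voxel 0), so it is assigned low reliability.
votes = [[1, 1, 0],
         [1, 1, 0],
         [0, 1, 0]]
hull = compensated_visual_hull(votes, reliabilities=[1.0, 1.0, 0.3])
```

Here voxel 0 survives despite the defective view, voxel 1 is confirmed by all views, and voxel 2 is correctly carved away; a strict silhouette intersection would have lost voxel 0.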
{"title":"Compensated Visual Hull for Defective Segmentation and Occlusion","authors":"Hansung Kim, R. Sakamoto, I. Kitahara, N. Orman, T. Toriyama, K. Kogure","doi":"10.1109/ICAT.2007.29","DOIUrl":"https://doi.org/10.1109/ICAT.2007.29","url":null,"abstract":"We propose an advanced visual hull technique to compensate for outliers using the reliabilities of the silhouettes. The proposed method consists of a foreground extraction technique based on the generalized Gaussian family model and a compensated shape-from-silhouette algorithm. They are connected by the intra-/inter-silhouette reliabilities to compensate for carving errors from defective segmentation or partial occlusion which may occur in a real environment. The 3D reconstruction process is implemented on a graphics processing unit (GPU) to accelerate the processing speed by using the huge computational power of modern graphics hardware. Experimental results show that the proposed method provides reliable silhouette information and an accurate visual hull in real environments at a very high speed on a common PC.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116726817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Sugimoto, Kazuki Kodama, A. Nakamura, Minoru Kojima, M. Inami
In this paper, we introduce a two-dimensional display-based tracking system. The system consists of a regular display device and simple photo sensors. It measures the position and direction of a receiver using fiducial graphics. The result of the measurement is acquired in the same coordinate system as the graphics; thus, the system no longer requires the measurement devices to be calibrated to the display devices. This is beneficial for mixed reality applications that synthesize virtual and real environments.
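One standard way a display can localize a photo sensor in its own coordinate system is temporal Gray-code position encoding: the display flashes a per-column binary pattern over successive frames, and the brightness sequence the sensor records decodes directly into a display coordinate. This is a sketch of that general technique under our own assumptions; the paper's fiducial graphics may differ.

```python
def gray_encode(n):
    """Binary-reflected Gray code of n."""
    return n ^ (n >> 1)

def column_code(x, bits=10):
    """Bit sequence (LSB first) a sensor at display column x would
    observe over `bits` pattern frames."""
    g = gray_encode(x)
    return [(g >> i) & 1 for i in range(bits)]

def decode_column(observed):
    """Recover the column from an observed bit sequence."""
    g = sum(bit << i for i, bit in enumerate(observed))
    # Invert the Gray code: binary = XOR of all right shifts of the code.
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

# The sensor's observation decodes back to its display-space column.
x = decode_column(column_code(517))
```

Because the decoded coordinate is already in display space, no extra calibration between the tracker and the display is needed, which mirrors the paper's key benefit.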
{"title":"A Display-Based Tracking System: Display-Based Computing for Measurement Systems","authors":"M. Sugimoto, Kazuki Kodama, A. Nakamura, Minoru Kojima, M. Inami","doi":"10.1109/ICAT.2007.50","DOIUrl":"https://doi.org/10.1109/ICAT.2007.50","url":null,"abstract":"In this paper, we introduce a two dimensional display-based tracking system. The system consists of a regular display device and simple photo sensors. It measures the position and direction of a receiver using fiducial graphics. The result of the measurement can be acquired in the same coordinate system as the graphics. Thus, this system no longer needs the measurement devices to be calibrated to the display devices. This is beneficial for mixed reality applications that synthesize virtual and real environments.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128877412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a support system for pool games based on computer-vision augmented reality technology. The main purpose of the system is to present visual aids drawn on a pool table through the LCD display of a camera-mounted handheld device, without any artificial markers. Since a pool table is rectangular and a pool ball is spherical, and each has a specific color, these features serve as substitutes for artificial markers. Using these natural features, the registration of visual aids such as the shooting direction and ball behavior is achieved. The supporting information is computed from the rules of pool games and includes the next shot, obtained by simulating ball behavior. Experimental results show that the accuracy of the estimated ball positions is sufficient for computing the supporting information.
{"title":"AR Display of Visual Aids for Supporting Pool Games by Online Markerless Tracking","authors":"Hideaki Uchiyama, H. Saito","doi":"10.1109/ICAT.2007.35","DOIUrl":"https://doi.org/10.1109/ICAT.2007.35","url":null,"abstract":"This paper presents a supporting system for pool games by computer vision based augmented reality technology. Main purpose of this system is to present visual aids drawn on a pool table through LCD display of a camera mounted handheld device without any artificial marker. Since a pool table is rectangle, a pool ball is sphere and each has a specific color, these serve as a substitute for artificial markers. Using these natural features, the registration of visual aids such as shooting direction and ball behavior is achieved. Also, our supporting information is computed based on the rules of pool games and includes the next shooting way by simulating ball behavior. Experimental results represent that the accuracy of ball positions is enough for computing our supporting information.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127930652","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a method for improving the accuracy of marker-based tracking using a 2D marker for augmented reality. We focus on the fact that tracking becomes unstable when the view direction of the camera is almost perpendicular to the marker plane. In particular, tracking of the Z axis, which is perpendicular to the marker plane (X-Y), becomes unstable. To improve tracking accuracy in this case, we search for the rotation parameters that best fit the projected pattern using a particle filter. With the particle filtering technique, our method can correctly estimate the rotation parameters of the camera, which are important for tracking the 3D coordinate system, and thereby improve its accuracy. The method also reduces jitter between frames, a significant problem in AR. In our experiments, we demonstrate that the method improves the tracking accuracy of the 3D coordinate system compared with using ARToolKit alone.
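The diffuse-weight-resample loop of a particle filter can be shown on a single rotation parameter. This is a minimal sketch under our own assumptions (random-walk motion model, Gaussian fitness), not the paper's full camera-rotation estimator, which scores particles against the projected marker pattern.

```python
import math
import random

def particle_filter_step(particles, observation, noise=0.05):
    """One predict-weight-resample cycle for a scalar rotation angle."""
    # 1. Diffuse particles (random-walk motion model).
    moved = [p + random.gauss(0.0, noise) for p in particles]
    # 2. Weight each particle by how well it explains the observation
    #    (Gaussian fitness with an assumed sigma of 0.1 rad).
    weights = [math.exp(-((p - observation) ** 2) / (2 * 0.1 ** 2)) for p in moved]
    total = sum(weights)
    weights = [w / total for w in weights]
    # 3. Resample in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

random.seed(0)
particles = [random.uniform(-math.pi, math.pi) for _ in range(500)]
for noisy_angle in [0.52, 0.50, 0.53, 0.51]:   # jittery per-frame measurements
    particles = particle_filter_step(particles, noisy_angle)
estimate = sum(particles) / len(particles)
```

After a few frames the particle cloud concentrates near the true angle, and the averaged estimate varies less frame-to-frame than the raw measurements, which is the jitter-reduction effect the paper targets.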
{"title":"Improvement of Accuracy for 2D Marker-Based Tracking Using Particle Filter","authors":"Yuko Uematsu, H. Saito","doi":"10.1109/ICAT.2007.16","DOIUrl":"https://doi.org/10.1109/ICAT.2007.16","url":null,"abstract":"This paper presents a method for improving accuracy of marker-based tracking using a 2D marker for augmented reality. We focus on that tracking becomes unstable when the view direction of the camera is almost perpendicular to a marker plane. Especially, tracking of Z axis which is perpendicular to the marker plane (X-Y) becomes unstable. For improving tracking accuracy in this case, we search rotation parameters that are the fittest to projected pattern based on the particle filter. By using particle filtering technique, then, our method can correctly estimate rotation parameters of the camera which are important to track the 3D coordinate system and improve the accuracy of the 3D coordinate system. This method can reduce jitters between frames, which is a big problem in AR. In the experiment, we demonstrate that our method can improve the tracking accuracy of the 3D coordinate system compared with just using ARToolkit.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133516471","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we propose a new technique that recovers camera parameters from images of the orthogonal directions of a moving object; no metric information on the model plane is required. Our approach requires only four observations, each of which provides a vanishing point from its two parallel lines; these four vanishing points provide three independent constraints. Using these points and the proposed closed-form solution, the camera's intrinsic parameters can be calculated. Our technique remains valid even when the calibration object moves along the X-Y axes. Finally, computer simulations are carried out to demonstrate the effectiveness of the algorithm.
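A minimal instance of the vanishing-point constraint can be worked out in closed form. Assuming square pixels, zero skew, and a known principal point (cx, cy), one pair of vanishing points of orthogonal directions determines the focal length. This is a simplified special case of the constraints such calibration methods build on, not the authors' full solution.

```python
import math

def focal_from_orthogonal_vps(vp1, vp2, cx, cy):
    """Focal length from vanishing points of two orthogonal directions,
    assuming square pixels, zero skew, and known principal point."""
    u1, v1 = vp1[0] - cx, vp1[1] - cy
    u2, v2 = vp2[0] - cx, vp2[1] - cy
    # Orthogonality of the 3D directions gives u1*u2 + v1*v2 + f^2 = 0.
    f_squared = -(u1 * u2 + v1 * v2)
    if f_squared <= 0:
        raise ValueError("vanishing points inconsistent with orthogonal directions")
    return math.sqrt(f_squared)

# Synthetic check: with f = 800 and principal point (320, 240), the orthogonal
# directions (1,0,1) and (-1,0,1) have vanishing points (320+800, 240) and
# (320-800, 240); the formula recovers f exactly.
f = focal_from_orthogonal_vps((1120, 240), (-480, 240), cx=320, cy=240)
```

With more vanishing-point pairs, the same kind of constraint generalizes to the full image of the absolute conic, from which all intrinsic parameters can be extracted.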
{"title":"A Novel Camera Calibration Technique Based on a Rotating Planar Complex Object with a Fixed Point","authors":"Mustafizur Rahman, Gang Xu","doi":"10.1109/ICAT.2007.31","DOIUrl":"https://doi.org/10.1109/ICAT.2007.31","url":null,"abstract":"In this paper, we propose the new technique that recovers camera parameters from images of orthogonal directions of a moving object, no metric information on the model plane requires. Our approach only requires four observations and each of them provides a vanishing point from its two parallel lines; such four vanishing points provide three independent constraints. Using these points and proposed close form solution, camera intrinsic parameters can be calculated. Our technique is valid though the calibrating object moves along X-Y axes. At the end, computer simulation technique are implemented to demonstrate the effectiveness of the algorithm.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127075317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ronald Sidharta, Atsushi Hiyama, T. Tanikawa, M. Hirose
In our previous paper, we proposed an augmented reality display based on the Pepper's ghost configuration that was able to display two-dimensional images on different image planes at different physical depths. In this paper, we propose the next generation of the display. Our latest display can show images at different physical depths simultaneously, and is thus able to display virtual objects with real depth, binocular parallax, and motion parallax without the use of special glasses. Using the Pepper's ghost setup, we are able to display real-world objects and virtual objects in the same space. Furthermore, since the rendered virtual objects have real physical depth, our system does not suffer from the accommodation-convergence mismatch problem. We describe the hardware setup and software system, followed by two user evaluation experiments that assess the performance of our system.
{"title":"Volumetric Display for Augmented Reality","authors":"Ronald Sidharta, Atsushi Hiyama, T. Tanikawa, M. Hirose","doi":"10.1109/ICAT.2007.17","DOIUrl":"https://doi.org/10.1109/ICAT.2007.17","url":null,"abstract":"In our previous paper, we proposed an augmented reality display based on the pepper's ghost configuration that was able to display two-dimensional images on different image plane at different physical depths. In this paper, we propose the next generation of the display. Our latest display is able to display images at different physical depths simultaneously, thus it is able to display virtual objects with real depth, binocular parallax and motion parallax without the use of special glasses. Using the pepper's ghost setup, we are able to display real world objects and virtual objects in the same space. Furthermore, since the rendered virtual objects have real physical depth, our system does not suffer from accommodation and convergence mismatch problem. We will describe the hardware setup, software system, and follow with two user evaluation experiments that evaluated the result of our system.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127456416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Young-Bum Kim, Seung-Hoon Han, Sun-jeong Kim, Eun-Ju Kim, C. Song
In this paper we show how a motion capture system and a feedback mechanism can be integrated into a virtual ping-pong game to create a multi-player platform. To trace the motion of each player, optical markers are attached to different places on each player's paddle. For tactile feedback, we designed a controller for a DC (direct current) motor, which is also attached to the paddle. This controller communicates with the game server through wireless Bluetooth technology. When the game server detects a collision between the paddle and the ball, the controller receives a message from the game server and then triggers the paddle's DC motor to vibrate, depending on the position of the impact on the paddle. During an exhibition, many people responded positively to the game.
{"title":"Multi-Player Virtual Ping-Pong Game","authors":"Young-Bum Kim, Seung-Hoon Han, Sun-jeong Kim, Eun-Ju Kim, C. Song","doi":"10.1109/ICAT.2007.34","DOIUrl":"https://doi.org/10.1109/ICAT.2007.34","url":null,"abstract":"In this paper we show how a motion capture system and feedback mechanism can be integrated into a virtual ping- pong game to create a multi-player platform. To trace the motion of each player, optical markers are attached to different places on each player's paddle. For tactile feedback, we designed a controller for a DC (Direct Current) motor, which is also attached to the paddle. This controller communicates with the game server through wireless Bluetooth technology. When the game server detects a collision between the paddle and ball, the controller receives the message from the game server and then triggers one of the respective paddle's DC motors to vibrate depending on the position of the impact on the paddle. During an exhibition many people positively responded to the game.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116456963","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ami Kadowaki, Junta Sato, Yuichi Bannai, Ken-ichi Okada
Trials on the transmission of olfactory information together with audio/visual information are currently being conducted in the field of multimedia. However, continuous emission of a scent creates problems of human adaptation to the lingering olfactory stimuli: during long movie scenes, viewers cannot continuously detect an emitted scent. To overcome this problem, we applied pulse ejection, emitting scent repeatedly for short periods of time so that the olfactory stimuli do not remain in the air and cause adaptation. This study presents the decision procedure for the required ejection interval Δt, taking the olfactory characteristics of the subjects into account. The developed method provided users with an olfactory experience over a long duration while avoiding adaptation.
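The scheduling idea can be sketched with a toy model. The recovery model below is an assumption of ours for illustration; the paper's actual decision procedure derives Δt from measured per-subject olfactory characteristics.

```python
def ejection_times(scene_length, pulse_width, recovery_time):
    """Pulse start times (seconds) so that consecutive stimuli are separated
    by the user's recovery time, keeping the scent detectable throughout a
    scene without driving adaptation. All arguments are integer seconds in
    this toy model."""
    interval = pulse_width + recovery_time   # the ejection interval, Delta-t
    return list(range(0, scene_length, interval))

# A 60 s scene with 1 s pulses and an assumed 9 s recovery time
# yields a pulse onset every 10 s.
schedule = ejection_times(60, 1, 9)
```

The key trade-off is that a shorter Δt risks adaptation (the stimulus never clears), while a longer Δt leaves gaps where the viewer perceives no scent at all; the paper's procedure picks Δt per subject to balance the two.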
{"title":"Presentation Technique of Scent to Avoid Olfactory Adaptation","authors":"Ami Kadowaki, Junta Sato, Yuichi Bannai, Ken-ichi Okada","doi":"10.1109/ICAT.2007.8","DOIUrl":"https://doi.org/10.1109/ICAT.2007.8","url":null,"abstract":"Trials on the transmission of olfactory information together with audio/visual information are currently being conducted in the field of multimedia. However, continuous emission of a scent creates problems of human adaptation to the lingering olfactory stimuli. During long movie scenes, viewers can not detect an emitted scent continuously. To overcome this problem we applied pulse ejection to repeatedly emit scent for short periods of time to ensure the olfactory stimuli do not remain in the air to cause adaptation. This study presents the decision procedure for the ejection interval Deltat required while considering the olfactory characteristics of subjects. The developed method provided the user with an olfactory experience over a long duration, avoiding adaptation.","PeriodicalId":110856,"journal":{"name":"17th International Conference on Artificial Reality and Telexistence (ICAT 2007)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2007-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121919263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}