M. Roussou, G. Drettakis, N. Tsingos, A. R. Martinez, Emmanuel Gallo
In this paper we describe a project that adopts a user-centered approach in the design of virtual environments (VEs) with enhanced realism and interactivity, guided by real-world applications in the areas of urban planning/architecture and cultural heritage education. With regard to realism, we introduce an image-based 3D capture process, where realistic models are created from photographs and subsequently displayed in a VR system using a high-quality, view-dependent algorithm. The VE is further enhanced using advanced vegetation and shadow display algorithms as well as 3D sound. A high degree of interactivity is added, allowing users to build and manipulate elements of the VEs according to their needs, as specified through a user task analysis and scenario-based approach which is currently being evaluated. This work is developed as part of the EU-funded research project CREATE.
{"title":"A user-centered approach on combining realism and interactivity in virtual environments","authors":"M. Roussou, G. Drettakis, N. Tsingos, A. R. Martinez, Emmanuel Gallo","doi":"10.1109/VR.2004.7","DOIUrl":"https://doi.org/10.1109/VR.2004.7","url":null,"abstract":"In this paper we describe a project that adopts a user-centered approach in the design of virtual environments (VEs) with enhanced realism and interactivity, guided by real-world applications in the areas of urban planning/architecture and cultural heritage education. In what concerns realism, we introduce an image-based 3D capture process, where realistic models are created from photographs and subsequently displayed in a VR system using a high-quality, view-dependent algorithm. The VE is further enhanced using advanced vegetation and shadow display algorithms as well as 3D sound. A high degree of interactivity is added, allowing users to build and manipulate elements of the VEs according to their needs, as specified through a user task analysis and scenario-based approach which is currently being evaluated. This work is developed as part of the Ell-funded research project CREATE.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117238266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual reality (VR) offers many benefits to technical education, including the delivery of information through multiple active channels, the addressing of different learning styles, and experiential learning. This poster presents work performed by the authors to apply VR to engineering education in three broad project areas: virtual chemical plants, virtual laboratory accidents, and a virtual UIC campus. The first area provides guided exploration of domains otherwise inaccessible, such as the interior of operating reactors and microscopic reaction mechanisms. The second promotes safety by demonstrating the consequences of not following proper lab safety procedures. The third provides valuable guidance for (foreign) visitors. All programs developed are available on the Web for free download by any interested parties.
{"title":"The application of virtual reality to (chemical engineering) education","authors":"John T. Bell, H. Fogler","doi":"10.1109/VR.2004.75","DOIUrl":"https://doi.org/10.1109/VR.2004.75","url":null,"abstract":"Virtual reality, VR, offers many benefits to technical education, including the delivery of information through multiple active channels, the addressing of different learning styles, and experiential-based learning. This poster presents work performed by the authors to apply VR to engineering education, in three broad project areas: virtual chemical plants, virtual laboratory accidents, and a virtual UIC campus. The first area provides guided exploration of domains otherwise inaccessible, such as the interior of operating reactors and microscopic reaction mechanisms. The second promotes safety by demonstrating the consequences of not following proper lab safety procedures. And the third provides valuable guidance for (foreign) visitors. All programs developed are available on the Web, for free download to any interested parties.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130362283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Stéphane Redon, Young J. Kim, Ming C. Lin, Dinesh Manocha, Jim Templeman
We present a fast algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment. We model the avatar as an articulated body using line-skeletons with constant offsets, and the virtual environment as a collection of polygonized objects. Given the position and orientation of the avatar at discrete time steps, we use an arbitrary in-between motion to interpolate the path of each link between discrete instances. We bound the swept space of each link using a swept volume (SV) and compute a bounding volume hierarchy to cull away links that are not in close proximity to the objects in the virtual environment. We generate the SVs of the remaining links and use them to check for possible interferences and to estimate the time of collision between the surface of each SV and the objects in the virtual environment. Furthermore, we use graphics hardware to perform collision queries on the dynamically generated swept surfaces. Our overall algorithm requires no precomputation and is applicable to general articulated bodies. We have implemented the algorithm on a 2.4 GHz Pentium IV PC with an NVIDIA GeForce FX 5800 graphics card and applied it to an avatar with 16 links moving in a virtual environment composed of hundreds of thousands of polygons. Our prototype system is able to detect all contacts between the moving avatar and the environment in 1.0 to 30 milliseconds.
{"title":"Interactive and continuous collision detection for avatars in virtual environments","authors":"Stéphane Redon, Young J. Kim, Ming C. Lin, Dinesh Manocha, Jim Templeman","doi":"10.1109/VR.2004.46","DOIUrl":"https://doi.org/10.1109/VR.2004.46","url":null,"abstract":"We present a fast algorithm for continuous collision detection between a moving avatar and its surrounding virtual environment. We model the avatar as an articulated body using line-skeletons with constant offsets and the virtual environment as a collection of polygonized objects. Given the position and orientation of the avatar at discrete time steps, we use an arbitrary in-between motion to interpolate the path for each link between discrete instances. We bound the swept-space of each link using a swept volume (SV) and compute a bounding volume hierarchy to cull away links that are not in close proximity to the objects in the virtual environment. We generate the SV's of the remaining links and use them to check for possible interferences and estimate the time of collision between the surface of the SV and the objects in the virtual environment. Furthermore, we use graphics hardware to perform collision queries on the dynamically generated swept surfaces. Our overall algorithm requires no precomputation and is applicable to general articulated bodies. We have implemented the algorithm on a 2.4 GHz Pentium IV PC with NVIDIA GeForce FX 5800 graphics card and applied it to an avatar with 16 links, moving in a virtual environment composed of hundreds of thousands of polygons. Our prototype system is able to detect all contacts between the moving avatar and the environment in 1.0 - 30 milliseconds.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114975318","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Data are reported for symptoms of virtual environment (VE) sickness that arose in 10 behavioral experiments. In total, 134 participants took part in the experiments and were immersed in VEs for approximately 150 hours. Nineteen of the participants reported major symptoms and two were physically sick. The tasks that participants performed ranged from manipulating virtual objects that they "held" in their hands, to traveling distances of 10 km or more while navigating virtual mazes. The data are interpreted within a framework provided by the virtual environment description and classification system. Environmental dimensions and visual complexity had little effect on the severity of participants' symptoms. Long periods of immersion tended to produce major ocular-motor symptoms. Nausea was affected by the type of movement made to control participants' view, and was particularly severe when participants had to spend substantial amounts of time (3%) looking steeply downwards at their virtual feet. Contrary to expectations, large rapid movements had little effect on most participants, and neither did movements that were not under participants' direct control.
{"title":"The effect of environment characteristics and user interaction on levels of virtual environment sickness","authors":"R. Ruddle","doi":"10.1109/VR.2004.76","DOIUrl":"https://doi.org/10.1109/VR.2004.76","url":null,"abstract":"Data are reported for symptoms of virtual environment (VE) sickness that arose in 10 behavioral experiments. In total, 134 participants took part in the experiments and were immersed in VEs for approximately 150 hours. Nineteen of the participants reported major symptoms and two were physically sick. The tasks that participants ' performed ranged from manipulating virtual objects that they \"held\" in their hands, to traveling distances of 10 km or more while navigating virtual mazes. The data are interpreted within a framework provided by the virtual environment description and classification system. Environmental dimensions and visual complexity had little effect on the severity of participants ' symptoms. Long periods of immersion tended to produce major ocular-motor symptoms. Nausea was affected by the type of movement made to control participants ' view, and was particularly severe when participants had to spend substantial amounts of time (3%) looking steeply downwards at their virtual feet. Contrary to expectations, large rapid movements had little effect on most participants, and neither did movements that were not under participants ' direct control.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":" 38","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120828772","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing requirements placed on distributed virtual environments (DVEs), namely supporting larger numbers of participants and providing smoother roaming and interaction, scalability is becoming a key issue. In this paper, we explore the scalability of participants and of servers, focusing mainly on three aspects: system architecture, communication model, and interest mechanism. We present our middleware platform, HIVE, which provides a variety of services such as data distribution, communication, and event notification. To achieve reusability and interoperability of DVE applications, the interface specification of the high level architecture (HLA) is employed as the reference. HIVE also contains the back-ends upon which the middleware services depend. With HIVE, users can develop scalable DVE applications easily and quickly, concentrating not on the details of distribution but on the application logic. Finally, an experimental demo built on HIVE is presented.
{"title":"HIVE: a highly scalable framework for DVE","authors":"Zonghui Wang, Xiaohong Jiang, Jiaoying Shi","doi":"10.1109/VR.2004.41","DOIUrl":"https://doi.org/10.1109/VR.2004.41","url":null,"abstract":"With the increasing requirements for distributed virtual environment (DVE): supporting larger number of participants and providing more smooth roaming and interactions, scalability is becoming a key issue. In this paper, we explore the scalability of participants and the scalability of servers, and mainly focus on three aspects: system architecture, communication model and interest mechanism. We present our middleware platform, HIVE, providing a variety of services such as data distribution, communication, event notification, etc. To achieve the reusability and interoperability of DVE applications, the interface specification of high level architecture (HLA) is employed as the reference. HIVE also contains the back-ends, which the middleware services depend upon. On HIVE, users can develop scalable DVE applications easily and quickly, concentrating on not the detail of distribution but the application logic. Finally an experimental demo on HIVE is given.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125024854","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tomohiro Amemiya, Jun Yamashita, K. Hirota, M. Hirose
In this paper, we discuss application possibilities of augmented reality technologies in the field of mobility support for the deaf-blind. We propose a navigation system called virtual leading blocks for the deaf-blind, which consists of a wearable interface for Finger-Braille, one of the commonly used communication methods among deaf-blind people in Japan, and a ubiquitous environment for barrier-free applications, which consists of floor-embedded active radio-frequency identification (RFID) tags. The wearable Finger-Braille interface, built from two Linux-based wristwatch computers, has been developed as a hybrid interface for verbal and nonverbal communication in order to inform users of their direction and position through tactile sensation. We propose the metaphor of "watermelon splitting" for navigation with this system and verify the feasibility of the proposed system through experiments.
{"title":"Virtual leading blocks for the deaf-blind: a real-time way-finder by verbal-nonverbal hybrid interface and high-density RFID tag space","authors":"Tomohiro Amemiya, Jun Yamashita, K. Hirota, M. Hirose","doi":"10.1109/VR.2004.83","DOIUrl":"https://doi.org/10.1109/VR.2004.83","url":null,"abstract":"In this paper, we discuss application possibilities of augmented reality technologies in the field of mobility support for the deaf blind. We propose the navigation system called virtual leading blocks for the deaf-blind, which consists of a wearable interface for Finger-Braille, one of the commonly used communication methods among deaf-blind people in Japan, and a ubiquitous environment for barrier-free application, which consists of floor-embedded active radio-frequency identification (RFID) tags. The wearable Finger-Braille interface using two Linux-based wristwatch computers has been developed as a hybrid interface of verbal and nonverbal communication in order to inform users of their direction and position through the tactile sensation. We propose the metaphor of \"watermelon splitting\" for navigation by this system and verify the feasibility of the proposed system through experiments.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129745683","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
C. Scharver, R. Evenhouse, Andrew E. Johnson, Jason Leigh
Repairing severe human skull injuries requires customized cranial implants, and current visualization research aims to develop a new approach to creating these implants. Following pre-surgical design techniques pioneered at the University of Illinois at Chicago (UIC) in 1996, researchers have developed an immersive cranial implant application incorporating haptic force feedback and augmented reality. The application runs on the personal augmented reality immersive system (PARIS™), allowing the modeler to see clearly both his hands and the virtual workspace. The strengths of multiple software libraries are maximized to simplify development. This research lays the foundation to eventually replace the traditional modeling and evaluation processes.
{"title":"Pre-surgical cranial implant design using the PARIS/spl trade/ prototype","authors":"C. Scharver, R. Evenhouse, Andrew E. Johnson, Jason Leigh","doi":"10.1109/VR.2004.1310075","DOIUrl":"https://doi.org/10.1109/VR.2004.1310075","url":null,"abstract":"Repairing severe human skull injuries requires customized cranial implants, and current visualization research aims to develop a new approach to create these implants. Following pre-surgical design techniques pioneered at the University of Illinois at Chicago (VIC) in 1996, researchers have developed an immersive cranial implant application incorporating haptic force feedback and augmented reality. The application runs on the personal augmented reality immersive system (PARIS/spl trade/), allowing the modeler to see clearly both his hands and the virtual workspace. The strengths of multiple software libraries are maximized to simplify development. This research lays the foundation to eventually replace the traditional modeling and evaluation processes.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128721186","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Andreas Simon, Randall C. Smith, Richard R. Pawlicki
This paper discusses the use of omnidirectional stereo for panoramic virtual environments and presents two methods for real-time rendering of omnistereo images. Conventional perspective stereo is correct everywhere in the visual field, but only in one view direction. Omnistereo is correct in every view direction, but only in the center of the visual field, degrading in the periphery. Omnistereo images make it possible to use wide field-of-view virtual environment display systems, such as the CAVE™, without head tracking, while still showing correct stereoscopic depth over the full 360° viewing circle. This allows these systems to be used as true multi-user displays, where viewers can look around and browse a panoramic scene independently. Because there is no need to rerender the image according to view direction, we can also use this technique to present static omnistereo images, generated by offline rendering or real image capture, in panoramic displays. We have implemented omnistereo in a four-sided CAVE™ and in a 240° i-Con™ curved-screen projection system. Informal user evaluation confirms that omnistereo images present a seamless image with correct stereoscopic depth in every view direction without head tracking.
{"title":"Omnistereo for panoramic virtual environment display systems","authors":"Andreas Simon, Randall C. Smith, Richard R. Pawlicki","doi":"10.1109/VR.2004.56","DOIUrl":"https://doi.org/10.1109/VR.2004.56","url":null,"abstract":"This paper discusses the use of omnidirectional stereo for panoramic virtual environments. It presents two methods for real-time rendering of omnistereo images. Conventional perspective stereo is correct everywhere in the visual field, but only in one view direction. Omnistereo is correct in every view direction, but only in the center of the visual field, degrading in the periphery. Omnistereo images make it possible to use wide field of view virtual environment display systems-like the CAVE/spl trade/-without head tracking, and still show correct stereoscopic depth over the full 360/spl deg/ viewing circle. This allows the use of these systems as true multi-user displays, where viewers can look around and browse a panoramic scene independently. Because there is no need to rerender the image according to view direction, we can also use this technique to present static omnistereo images, generated by offline rendering or real image capture, in panoramic displays. We have implemented omnistereo in a four-sided CAVE/spl trade/ and in a 240/spl deg/ i-Con/spl trade/ curved screen projection system. Informal user evaluation confirms that omnistereo images present a seamless image with correct stereoscopic depth in every view direction without head tracking.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132073604","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hiroyasu Ichida, Yuichi Itoh, Y. Kitamura, F. Kishino
We present a novel method for interactive retrieval of virtual 3D shapes using physical objects. Our method is based on simple physical 3D interaction with a set of tangible blocks. As the user connects blocks, the system automatically recognizes the shape of the constructed physical structure and picks similar 3D virtual shapes from a preset model database, in real time. Our system fully supports interactive retrieval of 3D virtual models in an extremely simple fashion, which is completely nonverbal and cross-cultural. These advantages make it an ideal interface for inexperienced users, previously barred from many applications that include 3D shape retrieval tasks.
{"title":"Interactive retrieval of 3D virtual shapes using physical objects","authors":"Hiroyasu Ichida, Yuichi Itoh, Y. Kitamura, F. Kishino","doi":"10.1109/VR.2004.47","DOIUrl":"https://doi.org/10.1109/VR.2004.47","url":null,"abstract":"We present a novel method for interactive retrieval of virtual 3D shapes using physical objects. Our method is based on simple physical 3D interaction with a set of tangible blocks. As the user connects blocks, the system automatically recognizes the shape of the constructed physical structure and picks similar 3D virtual shapes from a preset model database, in real time. Our system fully supports interactive retrieval of 3D virtual models in an extremely simple fashion, which is completely nonverbal and cross-cultural. These advantages make it an ideal interface for inexperienced users, previously barred from many applications that include 3D shape retrieval tasks.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121333688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose a system that overlays textures onto the deformable surface of an object in real time using an HMD. We assume that the surface projected onto the HMD image consists of curved surfaces that can be approximated by a 2D geometric curved surface, so that textures can be deformed using a 2D geometric transformation matrix and the deformed textures overlaid onto the HMD image. Because the system computes the transformation matrix in each frame, the textures are overlaid in real time even if the observer wearing the HMD moves or deforms the surface. In our system, we select a book as the deformable object and documents as the textures, so the observer can read digitized documents as if reading a real book.
{"title":"Texture overlay onto deformable surface using HMD","authors":"M. Emori, H. Saito","doi":"10.1109/VR.2004.74","DOIUrl":"https://doi.org/10.1109/VR.2004.74","url":null,"abstract":"We propose a system that overlays textures onto the deformable surface of an object in real time using HMD. We assume that the surface projected onto an HMD image consists of curved surfaces which can be approximated by 2D geometric curved surface, so that we can deform textures using the matrix of 2D geometric transformation and the deformed textures are overlaid onto the HMD image. The system computes the transformation matrix in each frame, the textures are overlaid in real time even if an observer with HMD moves or deforms the surface. In the system, we select a book as the object with deformable shape and documents as the textures. Therefore, the observer can read digitized documents as if he reads real books.","PeriodicalId":375222,"journal":{"name":"IEEE Virtual Reality 2004","volume":"38 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2004-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132908761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}