Use of interactive Virtual Prototypes to define product design specifications: A pilot study on consumer products
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759592
M. Bordegoni, F. Ferrise, Joseba Lizaranzu
Virtual Prototyping (VP) aims at substituting the physical prototypes currently used in industrial design practice with virtual replicas. The ultimate goal of VP is to reduce the cost and time needed to implement and test different design solutions. This paper describes a pilot study investigating how interactive Virtual Prototypes (iVPs) of consumer products, where interaction combines haptic, sound, and 3D visualization technologies, allow us to design the interaction parameters that contribute to the first impression customers form of a product when interacting with it. We selected two commercially available products and, after creating the corresponding virtual replicas, first checked the fidelity of the iVPs by comparing them with the real products when used to perform the same activities. Then, unlike the traditional use of Virtual Prototypes for product design evaluation, we used them for haptic interaction design, i.e., as a means to define design variables for the specification of new products: variations are applied to the iVP haptic parameters to test end users' preferences concerning haptic interaction with the simulated product. The iVP configuration that users liked most was then used to define the specifications for the design of the new product.
{"title":"Use of interactive Virtual Prototypes to define product design specifications: A pilot study on consumer products","authors":"M. Bordegoni, F. Ferrise, Joseba Lizaranzu","doi":"10.1109/ISVRI.2011.5759592","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759592","url":null,"abstract":"Virtual Prototyping (VP) aims at substituting physical prototypes currently used in the industrial design practice with their virtual replica. The ultimate goal of VP is reducing the cost and time necessary to implement and test different design solutions. The paper describes a pilot study that aims at understanding how interactive Virtual Prototypes (iVPs) of consumer products (where interaction is based on the combination of haptic, sound and 3D visualization technologies) would allow us to design the interaction parameters that concur in creating the first impression of the products that customers have when interacting with them. We have selected two commercially available products and, once created the corresponding virtual replica, we have first checked the fidelity of the iVPs by comparing them with the corresponding real products, when used to perform the same activities. Then, differently from the traditional use of Virtual Prototypes for product design evaluation, we have used them for haptic interaction design, i.e. as a means to define some design variables used for the specification of new products: variations are applied to iVP haptic parameters so as to test with final users their preferences concerning the haptic interaction with a simulated product. The iVP configuration that users liked most has then been used for the definition of the specifications for the design of the new product.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134221687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Empirical evaluation of augmented information presentation on small form factors - navigation assistant scenario
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759606
Subhashini Ganapathy, Glen J. Anderson, I. Kozintsev
Mobile Augmented Reality (MAR) enabled devices will be capable of presenting a large amount of information in real time, based on sensors that determine proximity, visual references, maps, and detailed information about the environment. This poses the challenge of presenting the information so that the user is not cognitively overloaded and the augmented information presented is useful and meaningful. This study examined user tolerance and identified acceptable values for four performance characteristics of the presented augmented information: information density, information accuracy, presentation delay, and error rate. Results indicate that the amount of information to present depends on the type of activity the user is engaged in. For information density, for example, participants were comfortable seeing about 7 identified items at a time: with 11 items most were overwhelmed, while 4 items were not enough. The desired density also depends on what information is shown, and participants wanted to control the type of information displayed. The findings of the study can be used as design guidelines for MAR information overlays on small screens.
{"title":"Empirical evaluation of augmented information presentation on small form factors - navigation assistant scenario","authors":"Subhashini Ganapathy, Glen J. Anderson, I. Kozintsev","doi":"10.1109/ISVRI.2011.5759606","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759606","url":null,"abstract":"Mobile Augmented Reality (MAR) enabled devices will have the capability to present a large amount of information in real time, based on sensors that determine proximity, visual reference, maps, and detailed information on the environment. This poses a challenge of presenting the information such that there is no cognitive overload for the user and the augmented information that is presented is useful and meaningful to the user. This study examined the user tolerance and identified acceptable values for the performance characteristics of the augmented information presented on - density of information, accuracy of information, delay in information presentation, and error rate. Results indicate that the amount of information presented depends on the type of activity that the user is interested in. For example, in the case of density of information - participants were interested in seeing about 7 items identified at a time. With 11 items, most were overwhelmed, but 4 items were not enough. However, desired information density depends on the information shown, and the participants wanted to control the type of information shown. The findings of the study can be used as design guidelines for MAR information overlay on small screens.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"185 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116382920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A simplified vibrotactile navigation system for sightseeing
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759597
Yuji Tamiya, T. Nojima
We propose a new sightseeing support system that allows users to focus on environmental information at tourist sites. The main aim of our project is to enable users to recognize the physical positional relation between their current position and their destination. The user sweeps our device in a 360-degree circle around the body to perceive the direction and distance to the destination through the sense of touch. When the device is pointed toward the destination, the system enables the user to estimate arrival time from the simple information it provides. Furthermore, because the system occupies neither the user's vision nor hearing, it benefits both sightseeing and safety. In this paper, we evaluate an information presentation method that uses vibration to convey the direction and distance to the destination, and we report the results of a navigation experiment using our system.
{"title":"A simplified vibrotactile navigation system for sightseeing","authors":"Yuji Tamiya, T. Nojima","doi":"10.1109/ISVRI.2011.5759597","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759597","url":null,"abstract":"We propose a new sightseeing support system that allows users to focus on environmental information at tourist sites. The main aim of our project is to enable users to recognize the physical positional relation between their current position and their destination. The user moves our device in a 360-degree circle around his body to perceive direction and distance to the destination through the sense of touch. When pointed towards the destination, our system enables the user to estimate arrival time through simple information provided by the device. Furthermore, because the system does not hinder the user's vision or hearing, from the aspect of sightseeing and safety, our approach advances tourism. In this paper, we evaluate an information presentation method that uses vibration to provide the direction and distance to the destination. We also show the results of a navigation experiment using our system.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122605777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera tracking using partially modeled 3-D objects with scene textures
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759654
Byung-Kuk Seo, Jong-Il Park, Hanhoon Park
This paper presents an efficient camera-tracking method that uses prior knowledge of a target scene: 3-D object models with scene textures. The tracking relies on partially modeled 3-D objects instead of complete, detailed models, which are difficult to build for complex scenes containing many 3-D objects. For robust and accurate tracking, scene textures are also sparsely modeled; they help reduce the uncertainty of camera poses, handle partial occlusion of visual cues, and initialize and recover the tracking. The method's effectiveness is verified by demonstrating its performance in various scenes.
{"title":"Camera tracking using partially modeled 3-D objects with scene textures","authors":"Byung-Kuk Seo, Jong-Il Park, Hanhoon Park","doi":"10.1109/ISVRI.2011.5759654","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759654","url":null,"abstract":"This paper presents an efficient camera tracking using prior knowledge of a target scene—3-D object models with scene textures. The camera tracking uses partially modeled 3-D objects instead of complete and delicate modeling, which is not easy in complex scenes with a variety of 3-D objects. For robust and accurate camera tracking, scene textures are also sparsely modeled, and they support reducing the uncertainty of camera poses; handling partial occlusions of visual cues; initializing and recovering the camera tracking. The effectiveness is verified by demonstrating its performance using various scenes.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127334819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From whereware to whence- and whitherware: Augmented audio reality for position-aware services
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759650
Michael Cohen, J. Villegas
Since audition is omnidirectional, it is especially receptive to orientation modulation. Position can be defined as the combination of location and orientation information. Location-based or location-aware services do not generally require orientation information, but position-based services are explicitly parameterized by angular bearing as well as place. “Whereware” [7] suggests using hyperlocal georeferences to give applications location-awareness; “whence- and whitherware” suggests the potential of position-awareness to enhance navigation and situation awareness, especially in real-time high-definition communication interfaces such as spatial sound augmented reality applications. Combining literal direction effects and metaphorical (remapped) distance effects in whence- and whitherware position-aware applications invites oversaturation of interface channels, encouraging interface strategies such as audio windowing, narrowcasting, and multipresence.
{"title":"From whereware to whence- and whitherware: Augmented audio reality for position-aware services","authors":"Michael Cohen, J. Villegas","doi":"10.1109/ISVRI.2011.5759650","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759650","url":null,"abstract":"Since audition is omnidirectional, it is especially receptive to orientation modulation. Position can be defined as the combination of location and orientation information. Location-based or location-aware services do not generally require orientation information, but position-based services are explicitly parameterized by angular bearing as well as place. “Whereware” [7] suggests using hyperlocal georeferences to allow applications location-awareness; “whence- and whitherware” suggests the potential of position-awareness to enhance navigation and situation awareness, especially in realtime high-definition communication interfaces, such as spatial sound augmented reality applications. Combining literal direction effects and metaphorical (remapped) distance effects in whence- and whitherware position-aware applications invites oversaturation of interface channels, encouraging interface strategies such as audio windowing, narrowcasting, and multipresence.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"89 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126409423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
AR-HUD system for tower crane on construction field
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759648
Kyeong-Geun Park, Hanna Lee, Hyungseok Kim, Jee-In Kim, Hanku Lee, M. Pyeon
Recently, safety problems on construction sites have become more serious as construction environments grow more complex. A site can give rise to accidents, for example, because of the many heavy materials and pieces of equipment on it. A tower crane is one of the heavy machines used to move heavy materials around a construction site. The tower crane driver must identify all materials on the ground while sitting atop the crane, which can be around 100 meters high. For hidden objects and small materials in particular, the driver needs help from workers on the ground, who often provide information with hand gestures or shouting. Unfortunately, because of the long distance and the small size of the gestures, the driver cannot reliably recognize these signals in real time or associate them with the exact position of events. In this work, we propose an augmented reality based guidance system for tower cranes. The system presents visual information about important events and materials on the site to the driver, aligned to their real positions in real time. Augmented reality is used to present information aligned with where the driver is looking; a head tracker links the user to the 3D viewport. Based on the tracked head position, the system visualizes safe and dangerous areas, wind direction and velocity, and material quantities on the crane cabin's window through a transparent screen. It is designed to provide drivers with the task information they need in real time, increasing the safety of the construction site.
{"title":"AR-HUD system for tower crane on construction field","authors":"Kyeong-Geun Park, Hanna Lee, Hyungseok Kim, Jee-In Kim, Hanku Lee, M. Pyeon","doi":"10.1109/ISVRI.2011.5759648","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759648","url":null,"abstract":"Recently, safety problem of construction field is becoming more serious as construction environment is getting more complex. For example, construction field can cause accidents due to many heavy materials and equipments on the field. Tower crane is the one of the heavy equipments for moving heavy materials on construction field. Tower crane driver should be able to identify all materials on ground while sitting on top of the crane which can be around 100 meters high. Especially for the invisible/hided objects and small materials, the driver needs help from individuals on the ground to get information on those materials, which often provided with hand gestures or mere shoutings. Unfortunately, those communication methods are not well recognizable in realtime on the exact position of events from the driver due to long distance and small size of gestures. In this work, we suggest an augmented reality based guidance system for tower crane. We supply visualized information to tower crane driver for important events and materials on the site at the aligned position in real-time. The augmented reality technology is adopted to present information at the aligned position where the driver is looking at. To do this, we use an head tracker to provide interaction between user and 3D viewport. From the tracked head position, the system visualizes safe/dangerous areas, wind directions/velocity, and quantities of materials on tower crane's window through a transparent screen. It is designed to provide necessary information of tasks to the tower crane drivers at real-time, to increase the safety of construction field.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130938743","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The dynamic simulator of forest evolution
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759639
Jing Fan, Xinxin Guan, Ying Tang
Forest evolution simulation and visualization are challenging tasks because of complex interactions at various time and space scales. In this paper we present a forest-evolution simulator based on an individual-based, spatially explicit forest gap model that incorporates the fine-scale processes of neighbor competition and understory recruitment. The forest's evolution over each growth cycle is visualized by using the simulation results to render the forest scene, through which users can walk interactively. We also adopt billboard rendering to enhance the navigation experience. The system is implemented in Visual C++ 6.0 and OpenGL/GLUT, and the simulation results are satisfactory.
{"title":"The dynamic simulator of forest evolution","authors":"Jing Fan, Xinxin Guan, Ying Tang","doi":"10.1109/ISVRI.2011.5759639","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759639","url":null,"abstract":"Forest evolution simulation and visualization are challenging tasks in terms of complex interactions at various time and space scales. In this paper we present a forest evolution simulator system, based on individual-based, spatially explicit forest gap model, incorporating with fine-scale process of neighbor competition and understory recruitment. The forest evolution for each growth cycle is visualized by taking advantage of the above simulation results to render forest scene. Users can walk through the forest scene interactively. We also adopt the billboard rendering technique to enhance the navigation experience effectively. The system is implemented by Visual C++ 6.0 and OpenGL/GLUT, and the simulation results are satisfying.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133892092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sharing space in mixed and virtual reality environments using a low-cost depth sensor
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759673
Evan A. Suma, D. Krum, M. Bolas
We describe an approach for enabling people to share virtual space with a user who is fully immersed in a head-mounted display. By mounting a recently developed low-cost depth sensor on the user's head, depth maps can be generated in real time based on the user's gaze direction, allowing us to create mixed-reality experiences by merging real people and objects into the virtual environment. This enables verbal and nonverbal communication between users who would normally be isolated from one another. We present the implementation of the technique, then discuss the advantages and limitations of using commercially available depth-sensing technology in immersive virtual reality applications.
{"title":"Sharing space in mixed and virtual reality environments using a low-cost depth sensor","authors":"Evan A. Suma, D. Krum, M. Bolas","doi":"10.1109/ISVRI.2011.5759673","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759673","url":null,"abstract":"We describe an approach for enabling people to share virtual space with a user that is fully immersed in a head-mounted display. By mounting a recently developed low-cost depth sensor to the user's head, depth maps can be generated in real-time based on the user's gaze direction, allowing us to create mixed reality experiences by merging real people and objects into the virtual environment. This enables verbal and nonverbal communication between users that would normally be isolated from one another. We present the implementation of the technique, then discuss the advantages and limitations of using commercially available depth sensing technology in immersive virtual reality applications.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"529 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122409001","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Workplace collaboration in a 3D Virtual Office
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759582
Geetika Sharma, Gautam M. Shroff, P. Dewan
We describe Virtual Office, an innovative application of virtual world technology for enabling informal office interactions and collaboration even when some of the participants are physically out of the office. Each instance of the system is tied to an actual physical office, so the communication and visual channels created among its users are designed to offer the same level of privacy as the corresponding real-world office. Virtual Office supports auras and automated navigation based on logical seats in the office rather than geometric distances. The system is implemented using a distributed MVC architecture employing a practical combination of (a) push and pull communication and (b) cloud-based servers. It is designed to support remote 'management by walking around' as well as virtual visits to both collaborators' and one's own offices, thereby enabling informal conversations that seamlessly bridge the physical and virtual worlds. Virtual Office also represents a new point in both Benford's and Schroeder's taxonomies of collaboration systems, which classify instant messaging, virtual worlds, and video conferencing. A detailed scenario is used to motivate our new design point and to compare it with commonly used and emerging collaboration applications, as well as established virtual worlds such as Second Life, for the specific purpose of informal office collaboration.
{"title":"Workplace collaboration in a 3D Virtual Office","authors":"Geetika Sharma, Gautam M. Shroff, P. Dewan","doi":"10.1109/ISVRI.2011.5759582","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759582","url":null,"abstract":"We describe Virtual Office, an innovative application of virtual world technology for enabling informal office interactions and collaboration even when some of the participants are physically out of office. Each instance of the system is tied to an actual physical office, so the communication and visual channels created among its users are designed to offer the level of privacy in the corresponding real-world office. VirtualOffice supports auras and automated navigation based on logical seats in the office, rather than geometric distances. The system is implemented using a distributed MVC architecture employing a practical combination of (a) push and pull communication, and (b) cloud-based servers. The system is designed to support remote ‘management by walking around as well as virtually visiting both collaborators' and ones’ own offices, thereby enabling informal conversations that seamlessly bridge the physical and virtual worlds. VirtualOffice also represents a new point in both Benford's and Schroeder's taxonomies of collaboration systems that classifies instant messaging, virtual worlds, and video conferencing. A detailed scenario is used to motivate our new design point and compare it with commonly used as well as emerging collaboration applications as well as established virtual worlds such as Second Life, for the specific purpose of informal office collaboration.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127723506","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic skeleton generation and character skinning
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759655
Wang Cheng, Ren Cheng, Xiaoyong Lei, Shuling Dai
Skinning an articulated character usually requires manual skeleton embedding and vertex-weight painting. We propose a fast, automatic method for skeleton generation and character skinning. First, we segment the given character mesh through the sequential steps of NCV (normal characteristic value) computation, segment-point refinement, and principal component analysis of segment clusters. Then, two types of joints and a skeleton are generated from the mesh segmentation result. Finally, we automatically compute the weights of the vertices influenced by the skeleton and skin the character mesh with a skeleton-driven, muscle-pushing algorithm. Experimental results show that our method achieves both high visual quality and fast speed, making it suitable for character animation and real-time VR applications.
{"title":"Automatic skeleton generation and character skinning","authors":"Wang Cheng, Ren Cheng, Xiaoyong Lei, Shuling Dai","doi":"10.1109/ISVRI.2011.5759655","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759655","url":null,"abstract":"An articulated character skinning usually requires manual skeleton embedding and vertex weight painting. We propose a fast and automatic method for skeleton generation and character skinning. First, we segment the given character mesh by the sequential steps of NCV(normal characteristic value) computation, segment points refinement, and principal component analysis of segment clusters. Then, two types of joints and a skeleton are generated based on the mesh segmentation result. Furthermore, we compute weights of vertices influenced by skeleton automatically and then skinning the character mesh by skeleton driven and muscle pushing algorithm. Experimental results show that our method achieves both high visual quality and fast speed. It could be used in character animation and VR real time applications.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132273141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}