Real-time scalable recognition and tracking based on the server-client model for mobile Augmented Reality
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759649
Jae-Deok Ha, Kyusung Cho, F. Rojas, H. Yang
Recent advances in mobile devices and vision technology have enabled mobile Augmented Reality (AR) services to run in real time using natural features. However, a user viewing augmented reality while moving about constantly encounters new and diverse target objects in different locations. Whether an AR system scales with the number of target objects is therefore an important issue for future mobile AR services, but this scalability has so far been severely limited by the small storage and memory capacity of mobile devices. In this paper, a new framework is proposed that achieves scalability for mobile augmented reality. Scalability is achieved by running a bag-of-visual-words recognition module on the server side, connected to the client over conventional Wi-Fi. On the client side, the mobile phone tracks and augments the recognized object using natural features in real time. In our experiments, a cold start of the AR service on a 10k-object database takes 0.2 seconds with 95% recognition accuracy, which is acceptable for a real-world mobile AR application.
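The abstract gives only the high-level split between server-side recognition and client-side tracking. As a minimal sketch of such a server-client exchange, assuming a pickled-descriptor protocol over TCP and a hypothetical bow_index.query() lookup that are not described in the paper, the client could ship the current frame's features and receive an object ID back before tracking locally:

```python
# Hedged sketch of the server-client split; the descriptor format, port, and
# bow_index API are assumptions, not the paper's implementation.
import pickle
import socket

def _recv_all(conn):
    """Read until the peer closes its sending side."""
    chunks = []
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            break
        chunks.append(chunk)
    return b"".join(chunks)

def serve_recognition(bow_index, host="0.0.0.0", port=5000):
    """Server side: receive query-frame descriptors, answer with the matched object ID."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((host, port))
        srv.listen(1)
        while True:
            conn, _ = srv.accept()
            with conn:
                descriptors = pickle.loads(_recv_all(conn))
                object_id = bow_index.query(descriptors)   # bag-of-visual-words lookup
                conn.sendall(pickle.dumps(object_id))

def request_recognition(descriptors, server_host, port=5000):
    """Client side: ship the current frame's features over Wi-Fi, get back the object ID,
    then keep tracking that object locally with natural features."""
    with socket.create_connection((server_host, port)) as conn:
        conn.sendall(pickle.dumps(descriptors))
        conn.shutdown(socket.SHUT_WR)                       # signal end of request
        return pickle.loads(_recv_all(conn))
```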
{"title":"Real-time scalable recognition and tracking based on the server-client model for mobile Augmented Reality","authors":"Jae-Deok Ha, Kyusung Cho, F. Rojas, H. Yang","doi":"10.1109/ISVRI.2011.5759649","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759649","url":null,"abstract":"Recent mobile device and vision technology advances have enabled mobile Augmented Reality (AR) to be serviced in real-time using natural features. However, in viewing augmented reality while moving about, the user is always encountering new and diverse target objects in different locations. Whether the AR system is scalable or not to the number of target objects is an important issue for future mobile AR services. But this scalability has been far limited due to the small capacity of internal storage and memory of the mobile devices. In this paper, a new framework is proposed that achieves scalability for mobile augmented reality. The scalability is achieved by using a bag of visual words based recognition module on the server side with connected through conventional Wi-Fi. On the client side, the mobile phone tracks and augments based on natural features in real-time. In the experiment, it takes 0.2 seconds for the cold start of an AR service initiated on a 10k object database with recognition accuracy 95%, which is acceptable for a real-world mobile AR application.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115630030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using VR to assess the impact of seemingly life like and intelligent architecture on people's ability to follow instructions from a teacher
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759595
M. N. Adi, D. Roberts
With advances in building materials, responsive buildings that can change their properties in real time according to their users or environment are becoming more common. Architecture that can think and react as if alive is becoming more feasible. As architecture begins to take on a life of its own, changing shape in response to the environment and inhabitants, we use virtual reality to ask whether this is useful. We argue that immersive simulation is a useful means to study the potential impact of life-like architecture before going to the effort and expense of building intelligent animated structures that surround people. We have previously surrounded people with an animated, life-size visualisation of moving walls as they assembled jigsaw puzzles, and found that they felt more comfortable and could concentrate better when the walls around them appeared to come to life. This paper extends that work by adding a teacher and responsive, helpful patterns on the wall. Participants were again given the task of completing a jigsaw puzzle, this time without seeing what the completed puzzle should look like. Puzzle pieces were organised into groups. The teacher explained which group to do next and where groups connected. The conditions were: static blank, static patterned, moving patterned, and helpful patterned walls. Helpful walls responded to both the teacher and the test subject through pictorial guidance. The objective measure of task performance was the number of puzzle pieces each user assembled. Subjective measures of experience were obtained through a post-experiment questionnaire and an interview. This work is of relevance to those who will use surround visualisation or moving walls to enhance places where people learn, those who design animated and life-like buildings, and in particular those who use virtual reality in this process.
{"title":"Using VR to assess the impact of seemingly life like and intelligent architecture on people's ability to follow instructions from a teacher","authors":"M. N. Adi, D. Roberts","doi":"10.1109/ISVRI.2011.5759595","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759595","url":null,"abstract":"With the advancement of building material responsive buildings that can change their properties in real time according to their users or environment are becoming more common. Architecture that can think and react as if alive is becoming more feasible. As architecture begins to take on a life of its own, changing shape in response to the environment and inhabitants, we use virtual reality to ask if this is useful. We argue that immersive simulation is a useful means to study the potential impact of life-like architecture before going to the effort and expense of building intelligent animated structures that surround people. We have previously surrounded people with an animated life size visualisation of moving walls as they assemble jigsaw puzzles, to find that they feel more comfortable and can concentrate better when the walls around them appear to come to life. This paper extends that work by adding a teacher and responsive helpful patterns on the wall. Participants were again given the task of completing a jigsaw puzzle, this time without seeing what the completed puzzle should look like. Puzzle pieces were organised into groups. The teacher explained which group to do next and where groups connected. The conditions were: static blank, static patterned, moving patterned, and helpful patterned walls. Helpful walls responded to both the teacher and the test subject, through pictorial guidance. The objective measure of task performance was the number of puzzle pieces each user assembled. Subjective measures of experience were obtained through a post experiment questionnaire and an interview. This work is of relevance to those who will use surround visualisation or moving walls to enhance places where people learn, those that design animated and life like buildings, and in particular those that use virtual reality in this process.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123691511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Vision-based cutaneous sensor to measure both tactile and thermal information for telexistence
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759612
Katsunari Sato, H. Shinoda, S. Tachi
We propose a vision-based cutaneous sensor for telexistence that can simulate the physical interaction between a human fingertip and an object. The proposed sensor comprises a finger-shaped GelForce and a thermo-sensitive paint. The finger-shaped GelForce enables us to measure tactile information in terms of the distribution of forces, which are calculated from the displacements of markers inside the sensor body. The thermo-sensitive paint is employed to measure thermal information on the basis of its color, which changes according to its temperature. In this paper, we describe the design of the proposed cutaneous sensor, construct a prototype, and discuss its effectiveness for telexistence.
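The abstract does not detail how forces or temperatures are computed. A minimal sketch, assuming a precomputed linear calibration map from marker displacements to forces (in the spirit of GelForce) and a hypothetical hue-to-temperature calibration table for the paint, might look like this:

```python
# Illustrative sketch only: the actual calibration and reconstruction are not given
# in the abstract. K and the hue/temperature tables below are made-up stand-ins.
import numpy as np

def forces_from_displacements(K, displacements):
    """Estimate a force distribution from stacked 2-D marker displacements (GelForce idea)."""
    return K @ displacements.ravel()          # (n_forces,) <- (n_forces, 2*n_markers) @ (2*n_markers,)

def temperature_from_hue(hue, hue_calib, temp_calib):
    """Map observed paint hue to temperature using a calibration table (hypothetical values)."""
    return np.interp(hue, hue_calib, temp_calib)

# Example with made-up calibration data
K = np.random.rand(8, 2 * 16) * 0.01          # 16 markers -> 8 sampled force values
disp = np.zeros((16, 2)); disp[3] = [0.4, -0.1]
print(forces_from_displacements(K, disp))
print(temperature_from_hue(0.55, hue_calib=[0.4, 0.6, 0.8], temp_calib=[25.0, 32.0, 40.0]))
```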
{"title":"Vision-based cutaneous sensor to measure both tactile and thermal information for telexistence","authors":"Katsunari Sato, H. Shinoda, S. Tachi","doi":"10.1109/ISVRI.2011.5759612","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759612","url":null,"abstract":"We have proposed a vision-based cutaneous sensor for telexistence that can simulate the physical interaction between a human fingertip and an object. The proposed sensor comprises a finger-shaped GelForce and a thermo-sensitive paint. The finger-shaped GelForce enables us to measure tactile information in terms of the distribution of forces that are calculated from the displacements of markers inside the sensor body. The thermo-sensitive paint is employed to measure thermal information on the basis of its color, which changes according to its temperature. In this study, we have described the design of the proposed cutaneous sensor, constructed its prototype, and discussed its efficiency for telexistence.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114273229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Interactive Mobile Augmented Reality system using a vibro-tactile pad
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759663
Moon-sub Jin, Jong-Il Park
The smartphone is a good platform for hand-held Augmented Reality, and as the number of smartphone users grows, Augmented Reality applications are increasing. This paper proposes an interactive Mobile Augmented Reality system using a vibro-tactile pad. The proposed system can provide vibro-tactile feedback for a realistic, immersive experience. For interactive Mobile Augmented Reality, we focus on expressing the augmented object's movements and location using vibration motors. Through a simple memory-test application, we show that the proposed system is useful for conveying intuitive information about the augmented object's movements and location.
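The pad's motor layout and drive scheme are not given in the abstract. As an illustrative sketch, assuming a small grid of motors in normalized screen coordinates, the augmented object's position could be mapped to per-motor intensities like this:

```python
# Hypothetical mapping from an augmented object's position to vibration-motor intensities;
# the motor layout and falloff below are assumptions, not the paper's design.
import numpy as np

def motor_intensities(obj_xy, motor_xy, sigma=0.2):
    """Drive each motor in proportion to its closeness to the object's normalized position."""
    d2 = np.sum((motor_xy - np.asarray(obj_xy)) ** 2, axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))     # 0..1 intensity per motor

# 2x2 motor layout in normalized screen coordinates (illustrative)
motors = np.array([[0.25, 0.25], [0.75, 0.25], [0.25, 0.75], [0.75, 0.75]])
print(motor_intensities((0.6, 0.3), motors))
```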
{"title":"Interactive Mobile Augmented Reality system using a vibro-tactile pad","authors":"Moon-sub Jin, Jong-Il Park","doi":"10.1109/ISVRI.2011.5759663","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759663","url":null,"abstract":"Smart phone is good platform for hand-held Augmented Reality. As smart phone users grow, Augmented Reality applications are increasing. This paper proposes an interactive Mobile Augmented Reality system using a vibro-tactile pad. The proposed system can provide vibro-tactile feedback for realistic immersive experience. For interactive Mobile Augmented Reality, We focus on providing expressions augmented object's movements and location information using vibration motors. Through simple memory test application, we prove that the proposed system is useful for providing intuitive knowledge for information of augmented object's movements and location.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"159 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114442232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A scent-emitting video display system
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759666
Keisuke Tomono, H. Katsuyama, A. Tomono
We propose a new method in which scents are ejected through the display screen towards the viewer in order to enhance the realism of the visual images. A thin LED display panel perforated with tiny pores was made for this experiment, and an air control system using a blower was placed behind the screen. We verified that the direction of the airflow could be controlled and that scents travelled properly through the pores to the front side of the screen. Furthermore, we evaluated the psychological effects of visuals combined with scents.
{"title":"A scent-emitting video display system","authors":"Keisuke Tomono, H. Katsuyama, A. Tomono","doi":"10.1109/ISVRI.2011.5759666","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759666","url":null,"abstract":"We propose a new method in which scents are ejected through the display screen in the direction of a viewer, in order to enhance the reality of the visual images. A thin LED display panel filled with tiny pores was made for this experiment, and an air control system using a blower was placed behind the screen. We proved that the direction of airflow was controlled and scents properly travelled through the pores to the front side of the screen. Furthermore, we evaluated the psychological effects of visuals combined with scents.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114365000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A low cost simulator to practice ultrasound image interpretation and probe manipulation: Design and first evaluation
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759598
S. Nicolau, A. Vemuri, Hurng-Sheng Wu, Min‐Ho Huang, Y. Ho, Arnaud Charnoz, A. Hostettler, L. Soler, J. Marescaux
Ultrasonography is the lowest-cost, risk-free medical imaging technique. However, reading an ultrasound (US) image and positioning the US probe well remain difficult tasks. Training in this domain is today performed on patients, which limits it to the most common cases and to clinical practice. In this paper, we present a low-cost simulator that allows practicing US image interpretation and realistic probe manipulation using CT data. More precisely, we tackle the issue of providing a cost-effective, realistic interface for probe manipulation with basic haptic feedback.
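The abstract does not specify how training images are produced from CT data. One plausible building block, shown here purely as an illustration, is re-slicing the CT volume along the tracked probe pose; the function and parameter names below are assumptions, not the authors' implementation:

```python
# Minimal sketch of extracting an oblique slice from a CT volume for a given probe pose.
import numpy as np
from scipy.ndimage import map_coordinates

def reslice(volume, origin, u_axis, v_axis, size=(256, 256), spacing=1.0):
    """Sample an oblique plane from a CT volume; origin and axes are in voxel coordinates."""
    us = (np.arange(size[0]) - size[0] / 2) * spacing
    vs = (np.arange(size[1]) - size[1] / 2) * spacing
    U, V = np.meshgrid(us, vs, indexing="ij")
    pts = origin + U[..., None] * u_axis + V[..., None] * v_axis      # (H, W, 3) voxel coords
    return map_coordinates(volume, pts.reshape(-1, 3).T, order=1).reshape(size)

vol = np.random.rand(64, 64, 64)                                      # stand-in for CT data
img = reslice(vol, origin=np.array([32.0, 32.0, 32.0]),
              u_axis=np.array([1.0, 0.0, 0.0]), v_axis=np.array([0.0, 0.7, 0.7]))
print(img.shape)
```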
{"title":"A low cost simulator to practice ultrasound image interpretation and probe manipulation: Design and first evaluation","authors":"S. Nicolau, A. Vemuri, Hurng-Sheng Wu, Min‐Ho Huang, Y. Ho, Arnaud Charnoz, A. Hostettler, L. Soler, J. Marescaux","doi":"10.1109/ISVRI.2011.5759598","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759598","url":null,"abstract":"Ultrasonography is the lowest cost no risk medical imaging technique. However, reading an ultrasound (US) image as well as performing a good US probe positioning remain difficult tasks. Education in this domain is today performed on patients, thus limiting it to the most common cases and clinical practice. In this paper, we present a low cost simulator that allows US image practice and realistic probe manipulation from CT data. More precisely, we tackle in this paper the issue of providing a cost effective realistic interface for the probe manipulation with a basic haptic feedback.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130099790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Realistic real-time rendering for large-scale forest scenes
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759637
Guanbo Bao, Hongjun Li, Xiaopeng Zhang, Wujun Che, M. Jaeger
Fast rendering of large-scale forest landscape scenes is important in many applications, such as video games, Internet graphics applications, landscape or cityscape design and visualization, and virtual forestry. A challenge in virtual reality is the realistic rendering of large-scale scenes consisting of complex plant models. A series of level-of-detail tree models is usually constructed to compress the overall forest complexity during view-dependent forest navigation. In this paper, a new leaf modeling method is presented that matches leaf models to leaf textures, so that visual quality and model complexity are well balanced. In addition, vertex buffer objects and a tree clipping operation allow a large forest containing thousands of trees to be rendered in real time. The experiments show that these techniques can easily be used in applications such as video games and interactive navigation of landscapes. Both walk-throughs and fly-overs of a forest are feasible using our techniques.
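The abstract names the ingredients (level-of-detail tree models, vertex buffer objects, tree clipping) without implementation details. The following sketch, with made-up distance thresholds, shows the kind of per-frame culling and LOD selection such a renderer typically performs:

```python
# Illustrative distance-based LOD selection and culling for a forest of tree instances;
# the paper's exact thresholds, leaf models, and VBO layout are not specified here.
import math

def select_visible_trees(trees, camera, view_dist=500.0, lod_dists=(50.0, 150.0)):
    """Return (tree, lod_level) pairs: lod 0 = full model, higher = coarser proxy."""
    visible = []
    for tree in trees:                      # tree = dict(x=..., y=...)
        d = math.hypot(tree["x"] - camera["x"], tree["y"] - camera["y"])
        if d > view_dist:
            continue                        # clipped: too far to draw at all
        lod = sum(d > t for t in lod_dists) # 0, 1, or 2 depending on distance
        visible.append((tree, lod))
    return visible

trees = [{"x": 10.0, "y": 5.0}, {"x": 300.0, "y": 40.0}, {"x": 900.0, "y": 0.0}]
print(select_visible_trees(trees, camera={"x": 0.0, "y": 0.0}))
```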
{"title":"Realistic real-time rendering for large-scale forest scenes","authors":"Guanbo Bao, Hongjun Li, Xiaopeng Zhang, Wujun Che, M. Jaeger","doi":"10.1109/ISVRI.2011.5759637","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759637","url":null,"abstract":"Fast rendering of a large-scale forest landscape scene is important in many applications, as video games, Internet graphics applications, landscape or cityscape scene design and visualization, and virtual forestry. A challenge in virtual reality is realistic rendering of large scale scenes consisting of complex plant models. A series of level of detail tree models are usually constructed to compress the overall forest complexity in view-dependent forest navigation. In this paper a new leaf modeling method is presented to have leaf models match leaf textures, so that the visual effect and model complexity can be balanced well. In addition, vertex buffer objects and tree clipping operation allow rendering a large forest containing thousands of trees in real-time. The experiments show that these techniques can be easily used in applications such as video games and interactive navigation of landscapes. Walk-through and flyover a forest are both feasible using our techniques.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132101320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of usability evaluation framework for haptic systems
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759670
Muzafar Khan, S. Sulaiman, A. M. Said, M. Tahir
This paper presents the development of a conceptual usability evaluation framework that provides a structured approach for evaluating haptic systems. The proposed framework is based on a “top-down” approach and the ISO standard for guidance on haptic and tactile interaction. Interview and card-sorting methods were used to develop the framework. The framework is expected to be equally applicable to user testing and expert evaluation, but it still needs refinement and empirical evaluation to assess the expected benefits.
{"title":"Development of usability evaluation framework for haptic systems","authors":"Muzafar Khan, S. Sulaiman, A. M. Said, M. Tahir","doi":"10.1109/ISVRI.2011.5759670","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759670","url":null,"abstract":"This paper presents the development of conceptual usability evaluation framework that provides a structured approach for evaluating haptic systems. The proposed framework is based on “top-down” approach and ISO standard for guidance on haptic and tactile interaction. Interview and card sorting methods are used for the development of this framework. The framework is expected to be equally applicable for user testing and expert evaluation but it still needs refinement and empirical evaluation to assess the expected benefits.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127820896","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marine noise simulation based on spectrum analysis
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759641
Bin Xiao, Jinshu Wang
Marine noise is a complex audio signal combining multiple sources, and its simulation is an important part of various training and research areas such as sonar operation and ship collision prevention. Based on signal spectrum analysis, spectra for the independent sources in various steady states are established, and the dynamic spectrum during transitions between states is obtained by spectrum interpolation. With an acoustic spreading-attenuation model and a Doppler effect model, the spectrum at the listening position is derived. Time-series data of the combined noise is then generated, and the simulation of the noise heard from multiple sources is finally implemented.
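The paper's exact attenuation and Doppler models are not given in the abstract. As a hedged illustration of turning a source spectrum into a time series, the sketch below uses a random-phase inverse FFT, simple 1/r spreading, and a basic Doppler frequency scaling, all of which are assumptions rather than the authors' formulation:

```python
# Sketch: synthesize a noise time series from a one-sided magnitude spectrum, with
# illustrative spherical-spreading attenuation and Doppler scaling (not the paper's models).
import numpy as np

def synthesize_noise(freqs, magnitudes, fs=8000, duration=1.0, distance=100.0,
                     source_speed=0.0, sound_speed=1500.0):
    n = int(fs * duration)
    doppler = sound_speed / (sound_speed - source_speed)        # approaching source raises pitch
    spec_freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mag = np.interp(spec_freqs, np.asarray(freqs) * doppler, magnitudes, left=0.0, right=0.0)
    mag /= max(distance, 1.0)                                    # spherical spreading ~ 1/r
    phase = np.exp(1j * np.random.uniform(0, 2 * np.pi, mag.shape))
    return np.fft.irfft(mag * phase, n=n)                        # time-domain noise signal

sig = synthesize_noise(freqs=[50, 200, 1000], magnitudes=[1.0, 0.5, 0.1])
print(sig.shape)
```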
{"title":"Marine noise simulation based on spectrum analysis","authors":"Bin Xiao, Jinshu Wang","doi":"10.1109/ISVRI.2011.5759641","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759641","url":null,"abstract":"Marine noise is a complex audio signal combined by multi-sources, marine noise simulation is one of the important parts in various training and research area such as sonar and ship collision prevention. Based on signal spectrum analysis, spectrums for independent sources in various steady statuses are established, and dynamic spectrum in transiting status is reached with spectrum interpolation. With acoustic spread attenuation model and Doppler effects model, spectrum at listening position is obtained. Time series data of combined noises is generated, and listened noise simulation for multi-sources is implemented finally.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117324147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient shape modeling using occupancy reasoning with reconstruction scheduling for interactive virtual environments
Pub Date: 2011-03-19 | DOI: 10.1109/ISVRI.2011.5759652
Yeong-Jae Choi, Yong-il Cho, Gowun Jeong, Sujung Bae, Hyunseung Yang
An interactive virtual environment (IVE) is a world in which virtual objects and real-world objects communicate interactively with each other, as in AR or AV worlds. In terms of interaction, a reasonable response time is a key requirement for real-time reaction. To achieve it, the reconstruction of real-world objects in the virtual world should be done as efficiently as possible. Conventional multiple-camera approaches to shape modeling are not suitable for a visual sensor network (VSN), because limited resources such as power, memory capacity, and computing performance were not taken into account, and the space covered by the cameras becomes much larger. In an IVE with VSNs, two problems arise in the reconstruction. First, reconstruction performance decreases as the scanning space grows. Second, the energy consumed in communicating silhouette images increases. To solve these two problems, we first build a probability map that represents the three-dimensional occupancy of multiple moving objects on the ground plane, which restricts the scanning space to the occupied regions only. Second, we schedule the reconstruction, that is, we decide which cameras reconstruct which objects. With our method, shape modeling in an IVE with VSNs can be done efficiently.
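The abstract describes the two steps (ground-plane occupancy map, then reconstruction scheduling) only at a high level. A rough sketch under assumed fusion and scheduling rules, not the authors' method, could look like this:

```python
# Hypothetical sketch: fuse per-camera occupancy grids by averaging, then assign the
# nearest cameras to each occupied cell so only those cameras transmit silhouettes.
import numpy as np

def occupancy_map(detections_per_camera, grid=(64, 64)):
    """detections_per_camera: list of per-camera ground-plane probability grids (H, W)."""
    prob = np.zeros(grid)
    for cam_map in detections_per_camera:
        prob += cam_map
    return prob / max(len(detections_per_camera), 1)

def schedule(prob, camera_positions, threshold=0.5, cams_per_object=2):
    """Return (cell, camera indices) tasks for every cell whose occupancy exceeds the threshold."""
    tasks = []
    for cell in zip(*np.nonzero(prob > threshold)):
        d = [np.hypot(cell[0] - cx, cell[1] - cy) for cx, cy in camera_positions]
        tasks.append((cell, list(np.argsort(d)[:cams_per_object])))
    return tasks

cams = [(0, 0), (63, 0), (0, 63)]
maps = [np.zeros((64, 64)) for _ in cams]
for m in maps:
    m[30:34, 30:34] = 0.9          # every camera reports an object near the centre of the plane
print(schedule(occupancy_map(maps), cams)[:1])
```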
{"title":"Efficient shape modeling using occupancy reasoning with reconstruction scheduling for interactive virtual environments","authors":"Yeong-Jae Choi, Yong-il Cho, Gowun Jeong, Sujung Bae, Hyunseung Yang","doi":"10.1109/ISVRI.2011.5759652","DOIUrl":"https://doi.org/10.1109/ISVRI.2011.5759652","url":null,"abstract":"An interactive virtual environment (IVE) is a world where virtual objects and real world objects communicate interactively with each other like in AR or AV worlds. In terms of interaction, reasonable response time is a key aspect for real-time reaction. To achieve it, reconstruction of real world objects in a virtual world should be done as efficient as possible. The conventional multiple camera based approaches for shape modeling are not proper for a visual sensor network (VSN), because limited resources such as the lack of power, memory capacity, and computing performance were not under the consideration and the space covered by cameras gets even much larger. In IVE with VSNs, two problems for the reconstruction take place as follows. First, performance for reconstruction decreases due to increasing scanning space. Second, energy consumption for communicating silhouette images increases. In order to solve those two problems, first we build up a probability map which represents three-dimensional occupancy of multiple moving objects in the ground plane. This helps restrict the scanning space only within the occupancy region. Second, we make a schedule of reconstruction, which means to decide which cameras reconstruct which objects. With our method, shape modeling in IVE with VSNs can be done in an efficient manner.","PeriodicalId":197131,"journal":{"name":"2011 IEEE International Symposium on VR Innovation","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124776556","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}