
2009 International Symposium on Ubiquitous Virtual Reality — Latest Publications

A Collaborative Virtual Reality Environment for Molecular Biology
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.14
Jun Lee, PhamSy Quy, Jee-In Kim, Lin-Woo Kang, A. Seo, Hyungseok Kim
A collaborative virtual reality environment (CVRE) can be used in molecular biology because it provides users with virtual experiences of three-dimensional molecular models in cyberspace. We therefore developed a remote collaboration system for molecular docking and crystallography using virtual reality techniques. Collaborative molecular docking tasks were carried out successfully. We also visualized and manipulated three-dimensional biomolecular models and supported discussions among remote participants on crystallography using the collaborative system.
Citations: 14
History and Future of Tracking for Mobile Phone Augmented Reality
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.11
Daniel Wagner, D. Schmalstieg
We present an overview of the history of tracking for mobile phone Augmented Reality. We review popular approaches based on marker tracking, natural feature tracking, and offloading to nearby servers. We then outline likely future work.
Citations: 60
A Comparative Study of PCA, LDA and Kernel LDA for Image Classification
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.26
Fei Ye, Zhiping Shi, Zhongzhi Shi
Although various discriminant analysis approaches have been used in Content-Based Image Retrieval (CBIR) applications, relatively little attention has been paid to kernel-based methods. Furthermore, these CBIR applications still apply discriminant analysis to face images, as face recognition does. In this paper we consider images with general semantic concepts. We use our proposed symmetrical invariant LBP (SILBP) texture descriptor to extract visual features from images. We then explore the effectiveness of Principal Component Analysis (PCA), Fisher Linear Discriminant Analysis (LDA), and Kernel LDA in providing optimal discriminative features. Building on this, we present an LDA-based framework to carry out kernel discriminant analysis in our application. By combining the effectiveness of kernel-based methods under nonlinear conditions with the simplicity of LDA, the proposed approach can improve the retrieval precision of CBIR. Experimental results validate the effectiveness of the proposed approach.
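As a rough illustration of the comparison described in this abstract, the sketch below projects image feature vectors with PCA, LDA, and an approximate kernel LDA before a simple classifier. It is only a minimal stand-in, not the authors' method: the SILBP descriptor is not reproduced here, so random placeholder features are used, and kernel LDA is approximated by an explicit Nystroem RBF feature map followed by ordinary LDA.

```python
# Illustrative comparison of PCA, LDA, and an approximate kernel LDA as
# feature projections before a k-NN classifier. The random features below
# are a placeholder for the paper's SILBP texture descriptor (assumption).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.kernel_approximation import Nystroem
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_classes, n_per_class, n_dims = 5, 40, 64            # hypothetical sizes
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, n_dims))
               for c in range(n_classes)])            # fake texture features
y = np.repeat(np.arange(n_classes), n_per_class)

projections = {
    "PCA": PCA(n_components=n_classes - 1),
    "LDA": LinearDiscriminantAnalysis(n_components=n_classes - 1),
    # Explicit RBF feature map + LDA as a rough stand-in for kernel LDA.
    "Kernel LDA (approx.)": make_pipeline(
        Nystroem(kernel="rbf", n_components=100, random_state=0),
        LinearDiscriminantAnalysis(n_components=n_classes - 1)),
}

for name, projection in projections.items():
    model = make_pipeline(projection, KNeighborsClassifier(n_neighbors=3))
    accuracy = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name:22s} mean CV accuracy: {accuracy:.3f}")
```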
Citations: 45
Mobile Visual Computing
Pub Date : 2009-07-08 DOI: 10.1109/ICSAMOS.2009.5289233
K. Pulli
Smart phones are becoming visual computing powerhouses. Using sensors such as the camera, GPS, and others, the device can provide a new user interface to the real world, augmenting the user's view of the world with additional information and controls. Combining computation with image capture allows a new kind of photography that can be more expressive than what is possible with a traditional camera. New APIs make it possible to harness more of a smart phone's computing power for visual processing than a CPU alone can provide.
Citations: 19
CAMAR Mashup: Empowering End-user Participation in U-VR Environment
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.22
Hyoseok Yoon, Woontack Woo
In this paper, we propose the concept of a Context-Aware Mobile AR (CAMAR) mashup as an enriched form of participatory user interaction in a U-VR environment. We define the CAMAR mashup and discuss how its characteristics differ from previous mashup activities in additional aspects such as context-awareness, mobility, and presentation. To elaborate on the proposed concept, an example scenario is presented and foreseeable technical challenges are discussed.
Citations: 8
About Two Physical Interaction Metaphors: Narrowing the Gap between the Real and the Virtual World
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.25
Antonio Kruger, Johannes Schoning, Frank Steinicke, Markus Lochtefeld, M. Rohs
In this paper we present ideas that can help close the gap between virtual reality and embedded interaction, which are usually assumed to be the two extremes of a dimension describing mixed reality interaction. We present our preliminary ideas on surface interaction with stereoscopic data as well as our work on mobile camera-projector units. While the first line of research tries to increase the degree of the interaction experience in virtual worlds, the second uses the physical properties of the real world to enhance the augmented reality experience. The first idea uses the physical world to constrain interaction in the virtual world. The second idea embeds virtual information into the physical world by respecting its physical properties.
Citations: 0
CAMAR Tag Framework: Context-Aware Mobile Augmented Reality Tag Framework for Dual-reality Linkage
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.21
Hyejin Kim, Wonwoo Lee, Woontack Woo
In this paper, we propose a novel tag framework for sharing information in dual-reality space, based on context-aware mobile augmented reality (CAMAR). When a user selects a target object to be tagged in dual-reality, the proposed framework and procedures create a CAMAR Tag with the user's mobile device and register it in virtual space. The CAMAR Tag can serve as a reference point, a sharing point, and a key for contextual search. We present the concept behind the CAMAR Tag and how it can be generated, implemented, and deployed in dual-reality.
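The abstract leaves the tag's data model unspecified; the sketch below is a hypothetical illustration of what a context-aware tag registered in a shared virtual space could look like. All field names, the registry class, and the search logic are assumptions made for illustration, not the CAMAR Tag framework itself.

```python
# Hypothetical sketch of a context-aware tag and a virtual-space registry.
# Field names and lookup logic are illustrative assumptions, not the
# CAMAR Tag framework described in the paper.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class CamarTag:
    target_object: str                 # real-world object the tag is anchored to
    author: str                        # user (mobile device owner) who created it
    location: tuple[float, float]      # (latitude, longitude) of the tagging event
    content: str                       # shared note attached to the tag
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class VirtualSpaceRegistry:
    """In-memory stand-in for registering tags in a shared virtual space."""
    def __init__(self) -> None:
        self._tags: list[CamarTag] = []

    def register(self, tag: CamarTag) -> None:
        self._tags.append(tag)

    def search_by_context(self, near: tuple[float, float], radius_deg: float) -> list[CamarTag]:
        # Naive contextual search: keep tags whose location lies within a box.
        lat, lon = near
        return [t for t in self._tags
                if abs(t.location[0] - lat) <= radius_deg
                and abs(t.location[1] - lon) <= radius_deg]

registry = VirtualSpaceRegistry()
registry.register(CamarTag("cafe_signboard", "alice", (35.23, 126.84), "Great espresso here"))
print(registry.search_by_context(near=(35.23, 126.85), radius_deg=0.05))
```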
Citations: 2
CAMAR 2.0: Future Direction of Context-Aware Mobile Augmented Reality
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.24
Choonsung Shin, Wonwoo Lee, Youngjung Suh, Hyoseok Yoon, Youngho Lee, Woontack Woo
With the rapid spread of ubiquitous computing (ubiComp) and mobile augmented reality, the interaction of mobile users in U-VR environments has been evolving. However, current interaction is limited to individuals' experience of given content and services. In this paper, we propose CAMAR 2.0 as a future direction for CAMAR, aiming to improve the perception and interaction of users in U-VR environments. We introduce three principles for future interaction and experience in U-VR environments. We also discuss technical challenges and promising scenarios for realizing the vision of CAMAR 2.0 in U-VR environments.
Citations: 13
Augmenting Wiki System for Collaborative EFL Reading by Digital Pen Annotations
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.10
Chih-Kai Chang
Wikis are very useful for collaborative learning because of their shared and flexible nature. Many learning activities, such as online glossaries, project reports, and dictionaries, can use a Wiki to facilitate the process. Some EFL (English as a Foreign Language) instructors have paid attention to the popularity of Wikis. Although Wikis are simple and intuitive for users with information literacy, they require a computing environment in which each learner can edit Web pages. Generally, an instructor can only conduct a Wiki-based learning activity in a computer classroom. Although giving every learner a mobile learning device (such as a PDA) can provide a ubiquitous computing environment for a Wiki-based learning activity, this paper suggests a less expensive alternative: integrating a digital pen with the Wiki. A learner can then annotate an EFL reading in his or her mother tongue with a digital pen. After everyone finishes reading, all annotations can be collected into a Wiki system for instruction; an augmented Wiki structure is thus constructed. Finally, learners' satisfaction with annotating in the prototype system is reported.
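The collection step, gathering each learner's pen annotations into a shared Wiki page, could look roughly like the sketch below. The annotation fields and the generated markup are assumptions for illustration; no particular Wiki engine or digital-pen API is implied.

```python
# Illustrative sketch: merge digital-pen annotations from several learners
# into one block of wiki-style markup for a shared reading page. Field names
# and markup conventions are assumptions, not the system in the paper.
from collections import defaultdict

annotations = [  # hypothetical pen annotations captured per learner
    {"learner": "Mei",  "paragraph": 2, "excerpt": "ubiquitous", "note": "無所不在的"},
    {"learner": "Chen", "paragraph": 2, "excerpt": "annotate",   "note": "加註解"},
    {"learner": "Mei",  "paragraph": 5, "excerpt": "facilitate", "note": "促進"},
]

def to_wiki_markup(items):
    """Group annotations by paragraph and emit simple wiki-style markup."""
    by_paragraph = defaultdict(list)
    for a in items:
        by_paragraph[a["paragraph"]].append(a)
    lines = ["== Collected annotations =="]
    for para in sorted(by_paragraph):
        lines.append(f"=== Paragraph {para} ===")
        for a in by_paragraph[para]:
            lines.append(f"* '''{a['excerpt']}''': {a['note']} (by {a['learner']})")
    return "\n".join(lines)

print(to_wiki_markup(annotations))
```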
Citations: 8
A System for Enhancing Relationship between Intimate Group Members with Story
Pub Date : 2009-07-08 DOI: 10.1109/ISUVR.2009.15
Hyung-Sang Cho, Minsoo Hahn
Modern society generates and shares a great amount of information thanks to the rapid development of information and communication infrastructure. This development, however, leaves people too busy to spend enough time communicating with their families and accelerates the separation of families in terms of physical and social location. In this paper, we describe two concepts of stories used as messages for such communication. The stories are generated from dynamic context information and story templates. We then describe the implementation of a system that enhances communication among intimate group members, such as family and friends, by providing opportunities to exchange stories so that they can keep in touch. The system helps improve individuals' Quality of Life (QoL) and social health.
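The abstract mentions that stories are generated from dynamic context information and story templates. The sketch below shows that template-filling idea in its most basic form; the template wording and context fields are hypothetical, not the templates used by the authors' system.

```python
# Minimal sketch of generating a short "story" message by filling a template
# with dynamic context values. Template text and context fields are
# hypothetical illustrations, not the templates used in the paper.
from string import Template

story_template = Template(
    "$name is at $place right now. The weather there is $weather, "
    "and $name just finished $activity. Maybe it's a good time to call."
)

def generate_story(context: dict) -> str:
    """Fill the story template from a context dictionary."""
    return story_template.substitute(context)

context = {  # dynamic context gathered from sensors/services (assumed)
    "name": "Grandma", "place": "the park",
    "weather": "sunny", "activity": "her morning walk",
}
print(generate_story(context))
```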
Citations: 0