
Latest publications from the International Conference on Societal Automation

The neuroscience social network project
Pub Date : 2013-11-19 DOI: 10.1145/2542302.2542327
Jordi Puig, A. Perkis, P. Pinel, Á. Cassinelli, M. Ishikawa
Recent advances in neuroimaging over the last 15 years have led to an explosion of knowledge in neuroscience and to the emergence of international projects and consortia. Integrating existing knowledge and enabling efficient communication between scientists are now challenging issues in the understanding of such a complex subject [Yarkoni et al., 2010]. Several Internet-based tools are now available that provide databases and meta-analyses of published results (Neurosynth, BrainMap, NIF, SumsDB, OpenfMRI...). These projects aim to provide access to activation maps and/or peak coordinates associated with semantic descriptors (cerebral mechanisms, cognitive tasks, experimental stimuli...). However, these interfaces suffer from a lack of interactivity and do not allow real-time exchange of data and knowledge between authors. Moreover, classical modes of scientific communication (articles, meetings, lectures...) do not allow members of a specific community (a large scientific structure, an international working group...) to maintain an active and up-to-date view of the field. In this context, we propose to develop an interface designed to provide a direct mapping between neuroscientific knowledge and 3D brain anatomical space.
Citations: 0
AMD "be invincible" commercial
Pub Date : 2013-11-19 DOI: 10.1145/2542398.2542494
Eszter Bohus
Unruly hordes of applications battle the forces of AMD for computational dominance.
Citations: 0
Lifetime of goodtimes: all new Toyota Corolla
Pub Date : 2013-11-19 DOI: 10.1145/2542398.2542483
S. Bradley
TVC (television commercial) for Toyota's All New 2013 Corolla.
Citations: 0
Free-hand interaction for handheld augmented reality using an RGB-depth camera
Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543667
Huidong Bai, Lei Gao, Jihad El-Sana, M. Billinghurst
In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods like keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combined with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames, mapping the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming.
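As a rough illustration of how a tracked hand skeleton can drive object manipulation of this kind, the C++ sketch below maps the displacement of a tracked palm between two frames onto the translation of a virtual object, and uses the pinch distance (thumb tip to index tip) to drive uniform zooming. The types, joint subset, and thresholds are assumptions for illustration, not the authors' implementation.

```cpp
#include <cmath>
#include <cstdio>

// Minimal 3D vector type (hypothetical; stands in for whatever math library is used).
struct Vec3 {
    float x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
};

// A tiny subset of a hand skeleton as it might come from an RGB-D hand tracker.
struct HandFrame {
    Vec3 palm;      // palm center in camera space (meters)
    Vec3 thumbTip;  // thumb fingertip
    Vec3 indexTip;  // index fingertip
};

// Virtual object state in the AR scene.
struct VirtualObject {
    Vec3 position{0.0f, 0.0f, -0.5f};
    float scale = 1.0f;
};

// Map the change between two consecutive hand frames onto object manipulation:
// palm displacement -> translation, pinch-distance ratio -> uniform zoom.
void applyHandManipulation(const HandFrame& prev, const HandFrame& curr, VirtualObject& obj) {
    // Translation: move the object by the palm displacement.
    obj.position = obj.position + (curr.palm - prev.palm);

    // Zoom: scale by the ratio of pinch distances (thumb tip to index tip).
    float prevPinch = (prev.thumbTip - prev.indexTip).length();
    float currPinch = (curr.thumbTip - curr.indexTip).length();
    if (prevPinch > 1e-4f) {
        obj.scale *= currPinch / prevPinch;
    }
}

int main() {
    // Two synthetic frames: the hand moves 2 cm to the right and the pinch widens.
    HandFrame f0{{0.00f, 0.0f, 0.4f}, {0.02f, 0.0f, 0.4f}, {0.04f, 0.0f, 0.4f}};
    HandFrame f1{{0.02f, 0.0f, 0.4f}, {0.03f, 0.0f, 0.4f}, {0.07f, 0.0f, 0.4f}};

    VirtualObject obj;
    applyHandManipulation(f0, f1, obj);
    std::printf("position = (%.3f, %.3f, %.3f), scale = %.2f\n",
                obj.position.x, obj.position.y, obj.position.z, obj.scale);
    return 0;
}
```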
Citations: 16
Bet she'an
Pub Date : 2013-11-19 DOI: 10.1145/2542398.2542431
Annabel Sebag
A sculptor decides to leave a trace of this dwindling humanity.
Citations: 0
Pond of illusion: interacting through mixed reality
Pub Date : 2013-11-19 DOI: 10.1145/2542302.2542334
Morten Nobel-Jørgensen, J. B. Nielsen, Anders Boesen Lindbo Larsen, Mikkel Damgaard Olsen, J. Frisvad, J. A. Bærentzen
Pond of Illusion is a mixed reality installation where a virtual space (the pond) is injected between two real spaces. The users are in either of the real spaces, and they can see each other through windows in the virtual space as illustrated in Figure 1(left). The installation attracts people to a large display in either of the real spaces by allowing them to feed virtual fish swimming in the pond. Figure 1(middle) shows how a Microsoft Kinect mounted on top of the display is used for detecting throw motions, which triggers virtual breadcrumbs to be thrown into the pond for feeding the nearby fish. Of course, the fish may not be available because they are busy eating what people have thrown into the pond from the other side.
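The throw detection described above could be approximated by thresholding the forward speed of the tracked hand joint across consecutive Kinect skeleton frames. The sketch below is a guess at such a scheme, not the installation's actual code; the coordinate convention, frame rate, and threshold are assumptions.

```cpp
#include <cstdio>
#include <vector>

// Hand-joint position in sensor space (meters). Following the Kinect convention,
// z is the distance from the sensor, which sits on top of the display, so a throw
// toward the virtual pond makes z decrease. (Assumed coordinate convention.)
struct Joint { float x, y, z; };

// Report a throw when the hand moves toward the screen faster than speedThreshold
// (m/s) with some upward component, as in an overhand throwing motion.
bool detectThrow(const std::vector<Joint>& frames, float frameRateHz, float speedThreshold) {
    for (size_t i = 1; i < frames.size(); ++i) {
        float forward = frames[i - 1].z - frames[i].z;  // motion toward the display (m)
        float up = frames[i].y - frames[i - 1].y;       // upward motion (m)
        float forwardSpeed = forward * frameRateHz;     // meters per second
        if (forwardSpeed > speedThreshold && up > 0.0f) {
            return true;
        }
    }
    return false;
}

int main() {
    // Synthetic hand trajectory sampled at 30 Hz: the hand accelerates toward the screen.
    std::vector<Joint> hand = {
        {0.0f, 1.00f, 2.00f},
        {0.0f, 1.02f, 1.98f},
        {0.0f, 1.05f, 1.92f},
        {0.0f, 1.09f, 1.80f},  // 0.12 m closer in one frame -> 3.6 m/s
    };
    std::printf("throw detected: %s\n", detectThrow(hand, 30.0f, 2.0f) ? "yes" : "no");
    return 0;
}
```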
Citations: 2
Coarse-grained multiresolution structures for mobile exploration of gigantic surface models
Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543669
Marcos Balsa, E. Gobbetti, F. Marton, A. Tinti
We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces in web environments and on common handheld devices. Our methods are based on compressed, streamable, coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store, and render extremely detailed 3D mesh models with unprecedented performance on WebGL-enabled browsers, as well as on hardware-constrained mobile devices.
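As a loose illustration of the coarse-grained refinement such streaming systems typically rely on (not the authors' actual data structures), the sketch below traverses a multiresolution node hierarchy and selects for rendering the coarsest nodes whose projected screen-space error falls below a pixel tolerance; finer nodes that are not yet resident would be requested from the server asynchronously while the coarser parent keeps being drawn.

```cpp
#include <cstdio>
#include <vector>

// One coarse-grained node of a hypothetical multiresolution surface hierarchy.
// Each node stores a simplified patch of triangles plus a geometric error bound.
struct MRNode {
    float geometricError;      // object-space error of this node's patch (meters)
    float distanceToViewer;    // distance from the viewer to the node's bounds (meters)
    bool resident;             // true if the compressed patch has been streamed in
    std::vector<int> children; // indices of finer-level nodes (empty for leaves)
};

// Rough conversion of object-space error to projected screen-space error in pixels,
// assuming a pinhole camera with the given vertical resolution and field-of-view factor.
float screenSpaceError(const MRNode& n, float screenHeightPx, float fovFactor) {
    if (n.distanceToViewer <= 0.0f) return 1e9f;
    return n.geometricError / n.distanceToViewer * screenHeightPx * fovFactor;
}

// Select nodes to render: stop at a node if it is accurate enough (or has no finer data),
// otherwise recurse into its children. Non-resident children trigger a fallback to the parent.
void selectNodes(const std::vector<MRNode>& tree, int idx, float tolerancePx,
                 float screenHeightPx, float fovFactor, std::vector<int>& toRender) {
    const MRNode& n = tree[idx];
    bool accurateEnough = screenSpaceError(n, screenHeightPx, fovFactor) < tolerancePx;
    bool canRefine = !n.children.empty();
    if (accurateEnough || !canRefine) {
        toRender.push_back(idx);
        return;
    }
    for (int child : n.children) {
        if (!tree[child].resident) {
            // In a streaming system an asynchronous fetch would be issued here;
            // until the data arrives, render the current (coarser) node instead.
            toRender.push_back(idx);
            return;
        }
    }
    for (int child : n.children) {
        selectNodes(tree, child, tolerancePx, screenHeightPx, fovFactor, toRender);
    }
}

int main() {
    // A tiny 3-node hierarchy: a coarse root with two finer, resident children.
    std::vector<MRNode> tree = {
        {0.50f, 10.0f, true, {1, 2}},  // root: projects to many pixels of error
        {0.05f, 10.0f, true, {}},      // child patches: acceptable error on screen
        {0.05f, 10.0f, true, {}},
    };
    std::vector<int> toRender;
    selectNodes(tree, 0, 2.0f /*px*/, 1080.0f, 1.0f, toRender);
    std::printf("rendering %zu node(s)\n", toRender.size());
    return 0;
}
```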
Citations: 10
Topics on bible visualization: content, structure, citation
Pub Date : 2013-11-19 DOI: 10.1145/2542256.2542261
Hyoyoung Kim, Jin Wan Park
Text visualization begins with understanding the text itself, which is the material of visual expression. To visualize any text data, one must first understand the characteristics of the text; the expressive approach can then be chosen based on the unique characteristics derived from it. In this research we aim to establish a theoretical foundation for approaches to text visualization by examining diverse visualization examples that are driven by different characteristics of the text. To do this, we chose the Bible, a text that is well known globally and whose digital data is easily accessible, so that many visualization examples exist, and we analyzed examples of Bible text visualization. We derived the unique characteristics of the text (content, structure, and citation) as criteria for analysis, and supported the validity of the analysis by examining at least two to three examples for each criterion. As a result, we show that the goals and expressive approaches are chosen according to the unique characteristics of the Bible text. Building on this research, we expect to develop a theoretical method for choosing materials and approaches by analyzing more diverse examples from various points of view.
Citations: 1
GPU compute for graphics
Pub Date : 2013-11-19 DOI: 10.1145/2542266.2542275
K. Hillesland
Modern GPUs support more flexible programming models through systems such as DirectCompute, OpenGL compute, OpenCL, and CUDA. Although much has been made of GPGPU programming, this course focuses on the application of compute on GPUs for graphics in particular. We will start with a brief overview of the underlying GPU architectures for compute. We will then discuss how the languages are constructed to help take advantage of these architectures and what the differences are. Since the focus is on application to graphics, we will discuss interoperability with graphics APIs and performance implications. We will also address issues related to choosing between compute and other programmable graphics stages such as pixel or fragment shaders, as well as how to interact with these other graphics pipeline stages. Finally, we will discuss instances where compute has been used specifically for graphics. The attendee will leave the course with a basic understanding of where they can make use of compute to accelerate or extend graphics applications.
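For readers unfamiliar with the compute path, the fragment below sketches what the OpenGL compute flavor mentioned above looks like in practice: a trivial GLSL compute shader that inverts an image stored in an RGBA8 texture, dispatched from C++. It assumes an OpenGL 4.3 context and function loader have already been set up elsewhere (e.g. GLFW plus GLEW) and that `tex` is a valid texture object; error checking is omitted, and the function name `invertImage` is illustrative rather than course material.

```cpp
#include <GL/glew.h>  // or any other OpenGL 4.3 function loader

// A minimal GLSL compute shader: invert the colors of an RGBA8 image in place.
// Each invocation handles one texel; work groups are 8x8 invocations.
static const char* kInvertSrc = R"(
#version 430
layout(local_size_x = 8, local_size_y = 8) in;
layout(rgba8, binding = 0) uniform image2D img;
void main() {
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);
    if (any(greaterThanEqual(p, imageSize(img)))) return;  // skip padding invocations
    vec4 c = imageLoad(img, p);
    imageStore(img, p, vec4(1.0 - c.rgb, c.a));
}
)";

// Compile, link, and dispatch the compute shader over a width x height texture.
// Assumes a current OpenGL 4.3 context and a valid texture object `tex`.
void invertImage(GLuint tex, int width, int height) {
    GLuint shader = glCreateShader(GL_COMPUTE_SHADER);
    glShaderSource(shader, 1, &kInvertSrc, nullptr);
    glCompileShader(shader);

    GLuint program = glCreateProgram();
    glAttachShader(program, shader);
    glLinkProgram(program);
    glDeleteShader(shader);

    glUseProgram(program);
    // Bind the texture's base level as a read/write image at binding point 0.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_RGBA8);

    // One work group covers 8x8 texels; round up to cover the whole image.
    glDispatchCompute((width + 7) / 8, (height + 7) / 8, 1);

    // Make the image writes visible to later texture fetches in the graphics pipeline.
    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT | GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);

    glDeleteProgram(program);
}
```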
Citations: 1
3D interactive modeling with capturing instruction interface based on area limitation
Pub Date : 2013-11-19 DOI: 10.1145/2543651.2543686
Yuuki Ueba, Nobuchika Sakata, S. Nishida
Nowadays, 3D models are used in new kinds of digital content such as video games and as data for 3D printers, and the demand for 3D modeling by ordinary people has increased. With existing hand-held 3D modeling systems, users have to estimate the unmeasured areas through a display, and they must also decide when to terminate modeling while watching the modeling process [Fudono et al. 2005]. In this paper, we propose a novel modeling system that provides route guidance by limiting the modeling area at the beginning. Owing to this area limitation, users obtain the desired 3D model by following an effective route guide for measurement, and modeling terminates automatically. Users can thus obtain the desired 3D model easily and quickly.
Citations: 0