Jordi Puig, A. Perkis, P. Pinel, Á. Cassinelli, M. Ishikawa
Recent advances in neuroimaging over the last 15 years have led to an explosion of knowledge in neuroscience and to the emergence of international projects and consortia. Integrating existing knowledge and enabling efficient communication between scientists are now key challenges in understanding such a complex subject [Yarkoni et al., 2010]. Several Internet-based tools now provide databases and meta-analyses of published results (Neurosynth, BrainMap, NIF, SumsDB, OpenfMRI...). These projects aim to provide access to activation maps and/or peak coordinates associated with semantic descriptors (cerebral mechanisms, cognitive tasks, experimental stimuli...). However, these interfaces lack interactivity and do not allow real-time exchange of data and knowledge between authors. Moreover, classical modes of scientific communication (articles, meetings, lectures...) do not give the members of a specific community (a large scientific structure, an international working group...) an active, up-to-date view of the field. With this in mind, we propose to develop an interface that provides a direct mapping between neuroscientific knowledge and 3D brain anatomical space.
"The neuroscience social network project." Jordi Puig, A. Perkis, P. Pinel, Á. Cassinelli, M. Ishikawa. International Conference on Societal Automation, 2013-11-19. doi:10.1145/2542302.2542327
Unruly hordes of applications battle the forces of AMD for computational dominance.
"AMD 'be invincible' commercial." Eszter Bohus. 2013-11-19. doi:10.1145/2542398.2542494
TVC for Toyota's All New 2013 Corolla.
"Lifetime of goodtimes: all new Toyota Corolla." S. Bradley. 2013-11-19. doi:10.1145/2542398.2542483
Huidong Bai, Lei Gao, Jihad El-Sana, M. Billinghurst
In this paper, we present a novel gesture-based interaction method for handheld Augmented Reality (AR) implemented on a tablet with an RGB-Depth camera attached. Compared with conventional device-centric interaction methods such as keypad, stylus, or touchscreen input, natural gesture-based interfaces offer a more intuitive experience for AR applications. Combined with depth information, gesture interfaces can extend handheld AR interaction into full 3D space. In our system we retrieve the 3D hand skeleton from color and depth frames and map the results to corresponding manipulations of virtual objects in the AR scene. Our method allows users to control virtual objects in 3D space using their bare hands and perform operations such as translation, rotation, and zooming.
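A minimal sketch of how tracked skeleton joints might be mapped to object manipulations. The joint names, the neutral pinch distance, and the direct one-to-one motion mapping are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pinch_scale(thumb_tip, index_tip, ref_dist=0.08):
    # Zoom factor from the thumb-index fingertip distance (metres);
    # ref_dist is an assumed "neutral" pinch distance, not a paper value.
    d = np.linalg.norm(np.asarray(thumb_tip, dtype=float)
                       - np.asarray(index_tip, dtype=float))
    return d / ref_dist

def palm_translate(palm_pos, palm_prev, obj_pos):
    # Translate the virtual object by the frame-to-frame palm motion
    # (a direct mapping; rotation handling is omitted for brevity).
    delta = np.asarray(palm_pos, dtype=float) - np.asarray(palm_prev, dtype=float)
    return np.asarray(obj_pos, dtype=float) + delta
```

In a real system the joint positions would come from the RGB-D hand tracker each frame, and the mapping would typically be filtered to suppress tracking jitter.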
"Free-hand interaction for handheld augmented reality using an RGB-depth camera." Huidong Bai, Lei Gao, Jihad El-Sana, M. Billinghurst. 2013-11-19. doi:10.1145/2543651.2543667
A sculptor decides to leave a trace of this dwindling humanity.
"Bet she'an." Annabel Sebag. 2013-11-19. doi:10.1145/2542398.2542431
Morten Nobel-Jørgensen, J. B. Nielsen, Anders Boesen Lindbo Larsen, Mikkel Damgaard Olsen, J. Frisvad, J. A. Bærentzen
Pond of Illusion is a mixed reality installation where a virtual space (the pond) is injected between two real spaces. The users are in either of the real spaces, and they can see each other through windows in the virtual space as illustrated in Figure 1(left). The installation attracts people to a large display in either of the real spaces by allowing them to feed virtual fish swimming in the pond. Figure 1(middle) shows how a Microsoft Kinect mounted on top of the display is used for detecting throw motions, which triggers virtual breadcrumbs to be thrown into the pond for feeding the nearby fish. Of course, the fish may not be available because they are busy eating what people have thrown into the pond from the other side.
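The throw detection could, for instance, threshold the wrist's forward speed between skeleton frames. The axis convention, frame rate, and speed threshold below are assumptions for illustration, not the installation's actual detector:

```python
def is_throw(wrist_positions, dt=1.0 / 30, speed_threshold=2.0):
    # Detect a throw as the wrist moving toward the screen (+z, an
    # assumed convention) faster than speed_threshold metres/second.
    # wrist_positions: consecutive (x, y, z) skeleton samples in metres.
    for prev, cur in zip(wrist_positions, wrist_positions[1:]):
        vz = (cur[2] - prev[2]) / dt
        if vz > speed_threshold:
            return True
    return False
```

A production detector would also gate on arm pose and debounce repeated triggers; this sketch only shows the core velocity test.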
"Pond of illusion: interacting through mixed reality." Morten Nobel-Jørgensen, J. B. Nielsen, Anders Boesen Lindbo Larsen, Mikkel Damgaard Olsen, J. Frisvad, J. A. Bærentzen. 2013-11-19. doi:10.1145/2542302.2542334
We discuss our experience in creating scalable systems for distributing and rendering gigantic 3D surfaces in web environments and on common handheld devices. Our methods are based on compressed, streamable, coarse-grained multiresolution structures. By combining CPU and GPU compression technology with our multiresolution data representation, we are able to incrementally transfer, locally store, and render extremely detailed 3D mesh models with unprecedented performance on WebGL-enabled browsers, as well as on hardware-constrained mobile devices.
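Coarse-grained multiresolution traversals of this kind are commonly driven by projected screen-space error: a node is refined only while its error covers more than a pixel tolerance on screen. The sketch below assumes a hypothetical node representation (dicts holding an object-space error and optional children) rather than the compressed streamable structures described above:

```python
import math

def screen_space_error(node_error, distance, fov_y=1.0, viewport_h=1080):
    # Project an object-space error (metres) to pixels at a given viewing
    # distance, using a standard perspective projection approximation.
    return node_error * viewport_h / (2.0 * distance * math.tan(fov_y / 2.0))

def select_nodes(root, distance_fn, tolerance_px=1.0):
    # Walk the hierarchy, refining a node only while its projected error
    # exceeds the pixel tolerance; leaves and accurate-enough nodes are kept.
    selected, stack = [], [root]
    while stack:
        node = stack.pop()
        err_px = screen_space_error(node["error"], distance_fn(node))
        if err_px > tolerance_px and node.get("children"):
            stack.extend(node["children"])
        else:
            selected.append(node)
    return selected
```

With coarse-grained nodes (patches of thousands of triangles rather than single vertices), this per-node decision stays cheap enough for mobile CPUs while the GPU renders each selected patch as one batch.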
"Coarse-grained multiresolution structures for mobile exploration of gigantic surface models." Marcos Balsa, E. Gobbetti, F. Marton, A. Tinti. 2013-11-19. doi:10.1145/2543651.2543669
Text visualization begins with understanding the text itself, which is the material of visual expression. To visualize any text data, one must first understand the characteristics of the text; the expressive approach can then be chosen based on the unique characteristics derived from it. In this research, we aimed to establish a theoretical foundation for text-visualization approaches by studying diverse examples of text visualization that are driven by the various characteristics of a text. To do this, we chose the Bible, a text that is well known globally, whose digital data is easily accessible, and for which many visualization examples therefore exist, and we analyzed examples of Bible text visualization. We derived the unique characteristics of the text (content, structure, and citation) as criteria for analysis, and supported the validity of the analysis by examining at least two to three examples for each criterion. As a result, we show that visualization goals and expressive approaches are decided by the unique characteristics of the Bible text. Building on this research, we expect to establish a theoretical method for choosing materials and approaches by analyzing more diverse examples from various points of view.
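As an illustration, two of the derived criteria (content and structure) can be profiled mechanically before any visual mapping is chosen. The reference format and helper names below are hypothetical, not taken from the paper:

```python
import re
from collections import Counter

def content_profile(verses):
    # Word-frequency profile: a "content" characteristic that a
    # visualization (e.g. a word cloud) could be mapped from.
    words = re.findall(r"[a-z']+", " ".join(verses).lower())
    return Counter(words)

def structure_profile(refs):
    # Verses per book: a "structure" characteristic, assuming references
    # formatted as "Book chapter:verse" strings.
    return Counter(ref.rsplit(" ", 1)[0] for ref in refs)
```

A "citation" profile would similarly count cross-references between passages, which is what arc diagrams of the Bible typically visualize.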
"Topics on bible visualization: content, structure, citation." Hyoyoung Kim, Jin Wan Park. 2013-11-19. doi:10.1145/2542256.2542261
Modern GPUs support more flexible programming models through systems such as DirectCompute, OpenGL compute, OpenCL, and CUDA. Although much has been made of GPGPU programming, this course focuses on the application of compute on GPUs for graphics in particular. We will start with a brief overview of the underlying GPU architectures for compute. We will then discuss how the languages are constructed to help take advantage of these architectures and what the differences are. Since the focus is on application to graphics, we will discuss interoperability with graphics APIs and performance implications. We will also address issues related to choosing between compute and other programmable graphics stages such as pixel or fragment shaders, as well as how to interact with these other graphics pipeline stages. Finally, we will discuss instances where compute has been used specifically for graphics. The attendee will leave the course with a basic understanding of where they can make use of compute to accelerate or extend graphics applications.
"GPU compute for graphics." K. Hillesland. 2013-11-19. doi:10.1145/2542266.2542275
Nowadays, 3D models are introduced as new digital content, for example in video games and as data for 3D printers, and demand for 3D modeling among ordinary people has increased. With existing hand-held 3D modeling systems, users have to estimate unmeasured areas through a display, and they have to terminate modeling themselves while watching the modeling process [Fudono et al. 2005]. In this paper, we propose a novel modeling system that provides route guidance based on an area limitation set at the beginning of modeling. Owing to this area limitation, users receive effective route guidance for measuring as well as automatic termination of modeling, so they can obtain the desired 3D model easily and quickly.
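The area limitation and automatic termination could be sketched as a coverage check over a user-selected grid of cells. The grid representation and the 95% completion threshold are illustrative assumptions, not the proposed system's actual criterion:

```python
def coverage(measured_cells, target_cells):
    # Fraction of the user-limited target region already scanned.
    measured = set(measured_cells) & set(target_cells)
    return len(measured) / len(target_cells)

def next_guidance(measured_cells, target_cells, done_threshold=0.95):
    # Return "done" once coverage passes the threshold (auto-termination);
    # otherwise return one still-unmeasured cell as the next route hint.
    if coverage(measured_cells, target_cells) >= done_threshold:
        return "done"
    remaining = sorted(set(target_cells) - set(measured_cells))
    return remaining[0]
```

A real system would order the remaining cells along a short scanning path rather than picking the first one, but the termination logic is the same: stop as soon as the limited area is sufficiently covered.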
"3D interactive modeling with capturing instruction interface based on area limitation." Yuuki Ueba, Nobuchika Sakata, S. Nishida. 2013-11-19. doi:10.1145/2543651.2543686