Where giant bulldozers shave the jungle, a monkey finds a razor and decides to use it.
{"title":"Shave it","authors":"N. Toriano","doi":"10.1145/2542398.2542413","DOIUrl":"https://doi.org/10.1145/2542398.2542413","url":null,"abstract":"Where giant bulldozers shave the jungle, a monkey finds a razor and decides to use it.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"371 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123491810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beautifully detailed titles showcasing the armor of The Avengers.
{"title":"The avengers: title sequence","authors":"M. Knight","doi":"10.1145/2542398.2542462","DOIUrl":"https://doi.org/10.1145/2542398.2542462","url":null,"abstract":"Beautifully detailed titles showcasing the armor of The Avengers.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121971969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Youngsam Shin, Won-Jong Lee, J. D. Lee, Shihwa Lee, Soojung Ryu, Jeongwook Kim
In this paper, we focus on the impact of the memory bandwidth limitation by analyzing bandwidth consumption in a ray tracing system, and present an energy-efficient data transmission method between the processor and the ray tracing hardware engine. To evaluate our approach, we implemented a prototype of our ray tracing architecture on an FPGA platform. Our experimental results show a 48% average reduction in system memory bandwidth.
{"title":"Energy efficient data transmission for ray tracing on mobile computing platform","authors":"Youngsam Shin, Won-Jong Lee, J. D. Lee, Shihwa Lee, Soojung Ryu, Jeongwook Kim","doi":"10.1145/2543651.2543673","DOIUrl":"https://doi.org/10.1145/2543651.2543673","url":null,"abstract":"In this paper, we focus the impact of a memory bandwidth limitation by analyzing the bandwidth consumption for ray tracing system and present an energy efficient data transmission method between processor and ray tracing hardware engine. For evaluation of our approach, we have implemented a prototype of ray tracing architecture using our approach on FPGA platform. According to our experiment result, our approach shows a 48% reduction of system memory bandwidth on average.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122525653","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A-me is a fictitious memory-evoking apparatus at the intersection of science, art, and technology. The system enables users to experience other people's memories, as well as store their own, by interacting with a volumetric representation (MR) of a human brain. The user retrieves or stores memories (audio traces) by pointing and clicking at precise voxel locations. Triggered by this exploratory action, intimate stories are slowly revealed and recomposed in the form of whispering voices. A-me is thus a public receptacle for private memories, exploring the possibility of a collective physical brain. The installation introduces an original optical see-through AR setup for neuronavigation, capable of overlaying a volume-rendered MR scan onto a physical dummy head. Implementing such a system also forced us to address technical questions about quality assessment of AR systems for brain visualization.
{"title":"A-me: augmented memories","authors":"Jordi Puig, A. Perkis, A. S. Hoel, Á. Cassinelli","doi":"10.1145/2542256.2542264","DOIUrl":"https://doi.org/10.1145/2542256.2542264","url":null,"abstract":"A-me is a fictitious memory-evoking apparatus at the intersection of science, art and technology. The system enables users to experience other people's memories as well as store their own by interacting with a volumetric representation (MR) of a human brain. The user retrieves or stores memories (audio traces) by pointing and clicking at precise voxels locations. Triggered by their exploratory action, a story is slowly revealed and recomposed in the form of whispering voices revealing intimate stories. A-me it's a public receptacle for private memories, thus exploring the possibility of a collective physical brain.\u0000 The installation introduces an original optical see-through AR setup for neuronavigation capable of overlaying a volume rendered MR scan onto a physical dummy head. Implementing such a system also forced us to address technical questions on quality assessment of AR systems for brain visualization.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121856969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The use of open-source microcontroller platforms in building design facilitates new responsive building systems and intelligent facades. Adaptive designs and intelligent spaces are at the forefront of the current architectural and artistic discourse. They engage users in interactive dialogue, allow for public-domain authoring, and are critical factors in sustainable designs where buildings monitor their own performance and respond to environmental factors. This course explores the intersection of microcontroller-based physical computing with emergent material technologies. The presenters take a step beyond the current electronic paradigm and discuss the impact of smart materials on the electronically dominated world of computing. Smart materials not only complement or replace electrically operated sensors or actuators, but can also eliminate microcontrollers altogether. Since in this arrangement the material itself takes on computational functions, sensing and actuation are processed locally and on an as-needed basis. Material-based computation can be achieved on very small scales (nanoscale) and can be truly embedded and ubiquitous within our built environment. The material response is direct and exhibits extremely high resolution. At the same time, the software-hardware integration inherent in smart-material computing limits dynamic readjustment of behavioral properties and functional configurations. In most instances, smart materials are specifically designed to perform a particular function within well-defined trigger conditions. However, these trigger properties are not easily reconfigurable once integrated into building assemblies. This course will look at various ways in which performative materials can respond in an environment that is controlled by, and interfaced with, the digital realm.
Participants will be introduced to a range of nanotech-enabled emergent and smart materials that can respond to changes in their environment. They will also learn principles of feedback-based interactions that are essential for the realization of adaptive spaces.
{"title":"Computing with matter","authors":"A. Zarzycki, Martina Decker","doi":"10.1145/2542266.2542282","DOIUrl":"https://doi.org/10.1145/2542266.2542282","url":null,"abstract":"The use of open-source microcontroller platforms in building design facilitates new responsive building systems and intelligent facades. Adaptive designs and intelligent spaces are at the forefront of the current architectural and artistic discourse. They engage users in interactive dialogue, allow for public domain authoring, and are critical factors in sustainable designs where buildings monitor their own performance and respond to environmental factors.\u0000 This course explores the intersection of microcontroller-based physical computing with emergent material technologies. The presenters take a step further beyond current electronic paradigm and discuss the impact of smart materials on the electronically dominated world of computing. Smart materials not only complement or replace the need for electrically operated sensors or actuators, but can also eliminate microcontrollers altogether. Since in this arrangement the material itself takes on computational functions, sensing and actuation are processed locally and on an as-needed basis. Material-based computation can be achieved on very small scales (nanoscale) and can be truly embedded and ubiquitous within our built environment. The material response is direct and exhibits an extremely high-resolution.\u0000 At the same time, the software-hardware integration inherent in smart-material computing sets limitations for dynamic readjustment of behavioral properties and functional configurations. In most instances, smart materials are specifically designed to perform a particular function within well-defined trigger conditions. 
However, these trigger properties are not easily re-configurable once integrated into building assemblies.\u0000 This course will look at various ways in which performative materials can respond in an environment that is controlled by, and interfaced with the digital realm. Participants will be introduced to a range of nanotech-enabled emergent and smart materials that can respond to changes in their environment. They will also learn principles of feedback-based interactions that are essential for the realization of adaptive spaces.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123400039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Rising hope","authors":"M. Vitanov","doi":"10.1145/2542398.2542454","DOIUrl":"https://doi.org/10.1145/2542398.2542454","url":null,"abstract":"The story of the once fastest horse of the world.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125594535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Konstantin Owetschkin, L. Meyer, C. Geiger, Daniel Drochtert
This paper presents the design of "Mobile Virtual Arrow", a mobile-phone-based virtual reality simulator for traditional intuitive archery. We attached a mobile phone to a real archery bow to act as a magic window into a virtual outdoor 3D world. With our mobile 3D simulator we want to provide a believable archery experience and support users in practicing the motion sequence of traditional archery in a virtual environment. To provide realistic haptic feedback, we used a real bow as the interaction device and equipped it with a dedicated damping system extended by electronics. Integrated sensors detect the drawing and releasing of the bow and aiming at a virtual target, while the smartphone's own sensor data moves the user's point of view according to their real movements. First feedback from users indicates a positive and believable user experience.
{"title":"Mobile virtual archery","authors":"Konstantin Owetschkin, L. Meyer, C. Geiger, Daniel Drochtert","doi":"10.1145/2543651.2543684","DOIUrl":"https://doi.org/10.1145/2543651.2543684","url":null,"abstract":"This paper presents the design of \"Mobile Virtual Arrow\", a mobile phone based virtual reality simulator for traditional intuitive archery. We attached a mobile phone at a real archery bow to act as a magic window into a virtual outdoor 3D world. With our mobile 3D simulator we want to provide a believable archery experience and support users in practicing the motion sequence of traditional archery in a virtual environment. To provide a realistic haptic feedback we used a real bow as the interaction device and equipped it with a dedicated damping system extended by electronics. Integrated sensors help to detect drawing and releasing of the bow, aiming at a virtual target and moving the user's point of view according to the real user movements by using the sensor data of the smartphone. First feedback from users indicate a positive and believable user experience.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"122 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116608597","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The appearance of outdoor scenes changes dramatically with lighting and weather conditions, time of day, and season. We relate visual changes to scene attributes, which are human-nameable concepts used for high-level description of scenes. They carry semantic meaning and are more flexible than a categorical representation of scenes. While the discriminative scene attributes proposed in [Patterson and Hays 2012] distinguish scenes from each other, we focus on transient attributes, which describe changes in appearance within each scene under real-world conditions.
{"title":"Exploring outdoor appearance changes with transient scene attributes","authors":"Pierre-Yves Laffont, James Hays","doi":"10.1145/2542302.2542309","DOIUrl":"https://doi.org/10.1145/2542302.2542309","url":null,"abstract":"The appearance of outdoor scenes changes dramatically with lighting and weather conditions, time of day, and season. We relate visual changes to scene attributes, which are human-nameable concepts used for high-level description of scenes. They carry semantic meaning and are more flexible than a categorical representation of scenes. While the discriminative scene attributes proposed in [Patterson and Hays 2012] distinguish scenes from each other, we focus on transient attributes which describe changes in appearance within each scene under real-world conditions.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"2018 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132998034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The year: 1715, the place: the Caribbean. A dangerous time and place to be, where pirates rule.
{"title":"Assassin's Creed 4 Black Flag announcement trailer","authors":"Eszter Bohus","doi":"10.1145/2542398.2542491","DOIUrl":"https://doi.org/10.1145/2542398.2542491","url":null,"abstract":"The year: 1715, the place: the Caribbean. A dangerous time and place to be, where pirates rule over.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133013388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Wolf children","authors":"S. Kure","doi":"10.1145/2542398.2542434","DOIUrl":"https://doi.org/10.1145/2542398.2542434","url":null,"abstract":"Collaboration between 2D and 3D animation.","PeriodicalId":126796,"journal":{"name":"International Conference on Societal Automation","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114971323","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}