
SIGGRAPH Asia 2015 Posters: Latest Publications

Waving tentacles 8×8: controlling a SMA actuator by optical flow
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820931
Akira Nakayasu
When we see the wriggling movement and shape of tentacles, such as those of a sea anemone under the sea, we sense the existence of a primitive life form. The goal of this research is to realize kinetic or interactive artworks that express, for example, the waving tentacles of sea anemones. Soft actuators that bend in multiple directions have already been developed; however, each has a complex structure or is expensive. Expressing waving tentacles requires a large number of actuators, so we developed a low-cost actuator with a simple structure. Previously, we introduced three motion patterns for controlling an SMA actuator that can bend in three directions, together with an experimental system of 9 actuators [Nakayasu 2014]. In this paper, we introduce an experimental system of 64 actuators that react to hand movement via an optical flow algorithm.
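As an illustration of the control idea, the following minimal sketch (not the authors' implementation) pools dense optical flow from a camera into an 8×8 grid and treats each cell's mean flow magnitude as the drive level for the corresponding actuator; `send_to_actuators` is a hypothetical placeholder for the SMA driver interface.

```python
# Sketch: map camera optical flow onto an 8x8 SMA actuator grid (illustrative).
import cv2
import numpy as np

GRID = 8  # 8x8 actuator array

def flow_to_grid(prev_gray, curr_gray, grid=GRID):
    """Pool dense Farneback optical flow into a grid of mean flow magnitudes."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)        # per-pixel flow magnitude
    h, w = mag.shape
    h, w = h - h % grid, w - w % grid         # crop so the image tiles evenly
    blocks = mag[:h, :w].reshape(grid, h // grid, grid, w // grid)
    return blocks.mean(axis=(1, 3))           # (grid, grid) activation levels

def send_to_actuators(levels):
    # Hypothetical stand-in for the SMA driver (e.g., PWM values over serial).
    print(np.round(levels / (levels.max() + 1e-6), 2))

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)
    ok, frame = cap.read()
    prev = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    while ok:
        ok, frame = cap.read()
        if not ok:
            break
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        send_to_actuators(flow_to_grid(prev, curr))
        prev = curr
    cap.release()
```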
Citations: 4
Using unity for immersive natural hazards visualization
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820970
F. Woolard, M. Bolger
Maps exist as two-dimensional representations of spatial information, generally designed for a single specific purpose. Our work focuses on the representation of data relevant to natural hazards scenarios. Although visualization choices can be made on maps, their fundamental representation is recognizably the same as it was hundreds of years ago. Video representations improve on this by incorporating temporal information about disasters in a linear manner. Video still has restrictions, though, as it requires predetermined decisions about viewpoint and about what information is presented at any time-point in the narrative. The current work aims to incorporate the strengths of these methods and expand on their impact. We create a highly customizable visualization tool that combines the Unity 3D game engine with scientific layers of information about natural hazards. We discuss the development of proof-of-concept work in the bushfire hazard domain.
Citations: 1
Automatic facial animation generation system of dancing characters considering emotion in dance and music
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820935
Wakana Asahina, N. Okada, Naoya Iwamoto, Taro Masuda, Tsukasa Fukusato, S. Morishima
In recent years, many 3D character dance animation movies have been created by amateur users with 3DCG animation editing tools (e.g., MikuMikuDance). However, most of them are created manually, so an automatic facial animation system for dancing characters would be useful for creating dance movies and visualizing impressions effectively. We therefore address the challenging problem of estimating a dancing character's emotion (which we call "dance emotion"). In previous work considering music features, DiPaola et al. [2006] proposed a music-driven, emotionally expressive face system. To detect the mood of the input music, they used a hierarchical framework (the Thayer model) and generated facial animation that matches the music's emotion. However, their model cannot express subtle emotions lying between two categories, because the input music is sharply divided into a few moods using a Gaussian mixture model. In addition, they determine more detailed moods based on psychological rules that use score information, so their method requires MIDI data. In this paper, we propose a "dance emotion model" that visualizes a dancing character's emotion as facial expression. Our model is built from frame-by-frame coordinates in an emotion space, obtained through a perceptual experiment using a music and dance motion database, without MIDI data. Moreover, by considering displacement in the emotion space, we can express not only a given emotion but also its subtleties. As a result, our system achieves higher accuracy than the previous work. Facial expression results can be created quickly by inputting audio data and synchronized motion. Its utility is shown through the comparison with previous work in Figure 1.
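As a rough illustration of how a frame-by-frame position in a Thayer-style valence/arousal plane could drive a face, the sketch below bilinearly blends four corner expressions into blendshape weights; the corner expressions, channel names, and blending scheme are illustrative assumptions, not the authors' model.

```python
# Sketch: valence/arousal coordinates -> blended facial blendshape weights.
import numpy as np

# Hypothetical blendshape weights for four corner emotions
# (channel order: brow_raise, brow_furrow, smile, frown, eye_open)
CORNERS = {
    "excited": np.array([0.8, 0.0, 0.9, 0.0, 1.0]),  # +valence, +arousal
    "angry":   np.array([0.0, 0.9, 0.0, 0.7, 0.9]),  # -valence, +arousal
    "sad":     np.array([0.0, 0.3, 0.0, 0.8, 0.4]),  # -valence, -arousal
    "relaxed": np.array([0.3, 0.0, 0.5, 0.0, 0.6]),  # +valence, -arousal
}

def blend_expression(valence, arousal):
    """Bilinearly blend corner expressions; valence and arousal lie in [-1, 1]."""
    u = (valence + 1.0) / 2.0   # 0 = negative, 1 = positive valence
    v = (arousal + 1.0) / 2.0   # 0 = calm, 1 = energetic
    return ((1 - u) * (1 - v) * CORNERS["sad"]
            + u * (1 - v) * CORNERS["relaxed"]
            + (1 - u) * v * CORNERS["angry"]
            + u * v * CORNERS["excited"])

# Per-frame emotion trajectory -> per-frame blendshape weights
trajectory = [(0.2, 0.9), (0.25, 0.85), (0.4, 0.6)]   # (valence, arousal)
weights = [blend_expression(v, a) for v, a in trajectory]
print(np.round(weights[0], 2))
```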
Citations: 3
Timeline visualization of semantic content
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820974
Douglas J. Mason
People interact with large corpora of documents every day, whether Googling the internet, reading a book, or checking their email. Much of this content has a temporal component: a website was published on a particular date, your email arrived yesterday, and Chapter 2 comes after Chapter 1. As we read this content, we create an internal map that correlates what we read with its place in time and with other parts that we have read. The quality of this map is critical to understanding the structure of any large corpus and to locating salient information.
Citations: 2
Guided path tracing using clustered virtual point lights
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820955
Binh-Son Hua, Kok-Lim Low
Monte Carlo path tracing has become increasingly popular in movie production. It is a general and unbiased rendering technique that can easily handle diffuse and glossy surfaces. To trace light paths, most existing path tracers rely on surface BRDFs for directional sampling. This works well for glossy appearance, but tends to be ineffective for diffuse surfaces, because in such cases the rendering integral is mostly driven by the incoming radiance distribution, not the BRDF. Therefore, with the same number of samples, sampling the incoming radiance distribution is more effective for diffuse scenes. [Vorba et al. 2014] addressed this sampling problem by using photons to estimate incoming radiance distributions, which can then be compactly represented by Gaussian mixture functions.
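The sketch below illustrates the general idea of guiding directional sampling with clustered virtual point lights at a diffuse shading point: each cluster is weighted by a rough estimate of its contribution, one cluster is drawn from the resulting discrete distribution, and the direction toward it is returned with its selection probability. This is a simplified illustration under assumed inputs, not the method of this poster or of [Vorba et al. 2014].

```python
# Sketch: pick a sampling direction toward a VPL cluster, proportional to an
# estimated cos/r^2-weighted contribution (illustrative point-light clusters).
import numpy as np

rng = np.random.default_rng(0)

def sample_guided_direction(x, n, cluster_pos, cluster_power):
    """x: shading point (3,), n: unit normal (3,), clusters: (K, 3) and (K,)."""
    d = cluster_pos - x                        # vectors to cluster centers
    dist2 = np.sum(d * d, axis=1)
    w = cluster_power * np.maximum(d @ n, 0.0)
    w = w / (dist2 * np.sqrt(dist2) + 1e-9)    # cos(theta) / r^2 falloff estimate
    if w.sum() <= 0.0:                         # all clusters below the horizon
        w = np.ones_like(w)
    pdf = w / w.sum()
    k = rng.choice(len(pdf), p=pdf)
    return d[k] / np.sqrt(dist2[k]), pdf[k]    # unit direction, pick probability

# Toy usage: one shading point, three clusters
x, n = np.zeros(3), np.array([0.0, 0.0, 1.0])
pos = np.array([[1.0, 0.0, 1.0], [-2.0, 0.0, 2.0], [0.0, 3.0, 0.5]])
power = np.array([5.0, 20.0, 1.0])
print(sample_guided_direction(x, n, pos, power))
```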
Citations: 4
An interactive 3D social media browsing system in a tech-art gallery
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820953
Shih-Wei Sun, Jheng-Wei Peng, Wei-Chih Lin, Ying-Ting Chen, Wen-Huang Cheng, K. Hua
Using mobile devices to capture photos is a very common behavior in our daily life. With so many photos captured by the members of a social network, [Yin et al. 2014] proposed to utilize the social context from mobile devices, e.g., geo-tags from the GPS sensor, to help a user capture better photos with a mobile device. Using the geo-tags of photos and the analysis of image content to construct a 3D model of a scene has been developed since the Photo Tourism project [Snavely et al. 2006]. The scene reconstruction scheme proposed by [Snavely et al. 2008] can visualize photos collected from social members in a 3D environment. In addition, [Szeliski et al. 2013] indicated that navigating images from social media sites in a 3D geo-located context is a natural interaction. Therefore, for multimedia visualization with a natural and immersive 3D user experience in a tech-art gallery, we propose a 3D social media browsing system that allows users to use motion-sensing devices to interactively navigate social photos in a virtual 3D scene constructed from a real physical space.
Citations: 1
Biped control using multi-segment foot model based on the human feet
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820943
Seokjae Lee, Jehee Lee
Physical simulation has developed rapidly, and recent work shows natural-looking simulated motion driven by motion capture data, as well as robust adaptation to external perturbations using manually designed balance controllers. However, developing a general controller that can simulate unpredictable or complex motion is still challenging.
Citations: 0
Intuitive 3D cubic style modeling system
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820956
Chen-Chi Hu, Tze-Hsiang Wei, Yu-Sheng Chen, Yi-Chieh Wu, Ming-Te Chi
Modeling is a key application in 3D fabrication. Although numerous powerful 3D-modeling software packages exist, few people can freely build their desired model because of insufficient background knowledge in geometry and the difficulty of manipulating the complexities of the modeling interface; the learning curve is steep for most people. For this study, we chose a cubic model, a model assembled from small cubes, to reduce the learning curve of modeling. We propose an intuitive modeling system designed for elementary school students. Users sketch a rough 2D contour, and the system then enables them to generate the thickness and shape of a 3D cubic model.
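As a minimal sketch of the cube-assembly step, assuming the sketched contour arrives as a closed 2D polygon, the code below rasterizes the polygon onto a coarse grid and extrudes it by a uniform thickness to obtain unit-cube positions; the grid resolution and thickness are illustrative parameters, not values from the poster.

```python
# Sketch: closed 2D contour -> list of unit-cube grid positions (illustrative).
import numpy as np
from matplotlib.path import Path

def contour_to_cubes(contour_xy, grid=16, thickness=4):
    """contour_xy: (N, 2) closed polygon in [0, 1]^2 -> list of (i, j, k) cubes."""
    path = Path(contour_xy)
    centers = (np.arange(grid) + 0.5) / grid            # cell-center coordinates
    xx, yy = np.meshgrid(centers, centers)
    inside = path.contains_points(np.column_stack([xx.ravel(), yy.ravel()]))
    inside = inside.reshape(grid, grid)                  # inside[row = y, col = x]
    return [(i, j, k)
            for i in range(grid) for j in range(grid) if inside[j, i]
            for k in range(thickness)]                   # extrude by `thickness`

# Example: a simple T-shaped contour
t_shape = np.array([[0.1, 0.9], [0.9, 0.9], [0.9, 0.7], [0.6, 0.7],
                    [0.6, 0.1], [0.4, 0.1], [0.4, 0.7], [0.1, 0.7]])
print(len(contour_to_cubes(t_shape)), "unit cubes")
```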
Citations: 6
3D visualization of aurora from optional viewpoint at optional time
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820967
Akira Takeuchi, Hiromitsu Fujii, A. Yamashita, Masayuki Tanaka, R. Kataoka, Y. Miyoshi, M. Okutomi, H. Asama
Three-dimensional analysis of the aurora is significant because the shape of the aurora depends on the solar wind, which affects electrical equipment such as satellites. Our research group set up two fish-eye cameras in Alaska, U.S.A. and reconstructed the aurora's shape from a pair of stereo images [Fujii et al. 2014]. However, the feature-based matching method cannot accurately detect sufficiently dense feature points, since they are hard to detect in aurora images, most parts of which have low contrast. In this paper, we both increase the number of detected feature points and improve accuracy. Applying this method, the 3D shape of the aurora can be visualized from an arbitrary viewpoint at an arbitrary time.
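The stereo geometry behind such a reconstruction can be illustrated with midpoint triangulation: given the two camera centers and the viewing rays of a matched aurora feature, the 3D point is estimated as the midpoint of the closest points on the two rays. The calibration and dense matching steps of the poster are not reproduced in this sketch.

```python
# Sketch: midpoint triangulation of one matched feature from two calibrated rays.
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Closest-point triangulation of rays c1 + t1*d1 and c2 + t2*d2 (d unit-length)."""
    b = c2 - c1
    k = d1 @ d2
    denom = 1.0 - k * k
    if abs(denom) < 1e-12:                 # near-parallel rays: no stable solution
        return None
    t1 = (b @ d1 - k * (b @ d2)) / denom
    t2 = (k * (b @ d1) - b @ d2) / denom
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

# Toy usage: two cameras 100 km apart observing a common point at ~110 km altitude
c1, c2 = np.zeros(3), np.array([100.0, 0.0, 0.0])
p_true = np.array([60.0, 20.0, 110.0])
d1 = (p_true - c1) / np.linalg.norm(p_true - c1)
d2 = (p_true - c2) / np.linalg.norm(p_true - c2)
print(triangulate_midpoint(c1, d1, c2, d2))   # recovers approximately [60, 20, 110]
```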
Citations: 1
Generating face ink portrait from face photograph
Pub Date : 2015-11-02 DOI: 10.1145/2820926.2820933
P. Chiang, Kuo-Hao Chang, Tung-Ju Hsieh
Chinese ink portraiture requires sophisticated skills, and training in Chinese ink painting takes a long time. In this research, a Chinese portrait generation system is proposed to allow the user to convert face images into Chinese ink portraits. We search the input face image using an Active Shape Model (ASM) and extract facial features. As a result, a feature-preserving ink-diffused image is generated. To produce a feature-preserving Chinese ink portrait, we use artistic ink brush strokes to enhance the face contour constructed from the facial features. The generated portraits can be used to replace faces in an ink painting.
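As an illustrative simplification of the ink look (omitting the ASM-based feature extraction and brush-stroke placement described above), the sketch below composites a blurred tone wash, standing in for ink diffusion, with dark contour strokes taken from Canny edges of the photograph.

```python
# Sketch: approximate an ink-style portrait as a diffused wash plus edge strokes.
import cv2
import numpy as np

def ink_portrait(path_in, path_out="ink.png"):
    gray = cv2.cvtColor(cv2.imread(path_in), cv2.COLOR_BGR2GRAY)
    wash = cv2.GaussianBlur(gray, (0, 0), sigmaX=6)           # diffused ink wash
    wash = cv2.normalize(wash, None, 120, 255, cv2.NORM_MINMAX)
    edges = cv2.Canny(gray, 60, 140)                          # contour "strokes"
    strokes = cv2.dilate(edges, np.ones((2, 2), np.uint8))    # thicken the strokes
    out = np.where(strokes > 0, 40, wash).astype(np.uint8)    # dark ink over wash
    cv2.imwrite(path_out, out)
    return out
```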
Citations: 0