Minkyu Kim, Jinhong Park, Sunho Ki, Youngduke Seo, Chulho Shin
As display resolutions on mobile platforms keep increasing, maintaining an acceptable frame rate (e.g., 60 Hz) becomes more challenging than ever: the elevated computing demand on both the CPU and the GPU shortens battery life and raises device surface temperature. Because power consumption grows almost linearly with GPU workload, the heavy workload imposed by a fixed high resolution should be spent only when it yields a human-perceptible benefit. Recent techniques reduce the workload at runtime, for example by skipping frames or lowering the rendering resolution, yet they still produce noticeable artifacts because they do not carefully examine whether the resulting frame sequences are perceived as containing artifacts. One recent study relies on the user's viewing distance [Nixon et al. 2014], but it can malfunction under environmental and biometric limitations.
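A controller of this kind can be illustrated as a simple feedback loop that trades rendering resolution against measured frame time. The sketch below is a minimal illustration only, not the authors' perception-aware method: the thresholds, the scale step, and the names update_render_scale and render_scale are assumptions made for this example.

```c
/* Minimal sketch of a frame-time-driven dynamic resolution controller.
 * This is NOT the authors' perceptual method; thresholds, the scale step,
 * and the surrounding render loop are illustrative assumptions only. */
#include <stdio.h>

#define TARGET_FRAME_MS   16.7f   /* ~60 Hz frame budget */
#define HEADROOM_FRAME_MS 12.0f   /* comfortably under budget: try upscaling */
#define MIN_SCALE          0.5f
#define MAX_SCALE          1.0f
#define SCALE_STEP         0.05f

static float render_scale = MAX_SCALE;  /* fraction of native resolution */

/* Called once per frame with the measured GPU frame time. */
void update_render_scale(float gpu_frame_ms)
{
    if (gpu_frame_ms > TARGET_FRAME_MS && render_scale > MIN_SCALE) {
        render_scale -= SCALE_STEP;      /* over budget: render fewer pixels */
    } else if (gpu_frame_ms < HEADROOM_FRAME_MS && render_scale < MAX_SCALE) {
        render_scale += SCALE_STEP;      /* spare headroom: restore quality */
    }
    if (render_scale < MIN_SCALE) render_scale = MIN_SCALE;
    if (render_scale > MAX_SCALE) render_scale = MAX_SCALE;
}

int main(void)
{
    /* Simulated frame times (ms) standing in for real GPU timer queries. */
    float samples[] = { 18.0f, 19.5f, 17.2f, 15.0f, 11.0f, 10.5f };
    for (int i = 0; i < 6; ++i) {
        update_render_scale(samples[i]);
        printf("frame %d: %.1f ms -> scale %.2f\n", i, samples[i], render_scale);
    }
    return 0;
}
```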
{"title":"Dynamic rendering quality scaling for mobile GPU","authors":"Minkyu Kim, Jinhong Park, Sunho Ki, Youngduke Seo, Chulho Shin","doi":"10.1145/2820926.2820963","DOIUrl":"https://doi.org/10.1145/2820926.2820963","url":null,"abstract":"As display resolution exponentially increases in mobile platforms, maintaining acceptable frame rate (e.g., 60Hz) becomes more challenging than ever because elevated computing demand (for both CPU and GPU) results in reduction of battery use time and increase in device surface temperature. Because power consumption increases almost linearly with GPU workload, the heavy GPU workload imposed by fixed high resolution can be allowed to be actually computed only when there are human perceptible benefits. Some recent techniques are exploited at runtime to reduce the workload such as skipping frames or lowering resolution during rendering. From those techniques, we still observe noticeable artifacts because they did not carefully investigate whether resulting sequences of frames are perceived as containing artifacts or not. Although one of recent studies relies on a user's view distance [Nixon et al. 2014], it can malfunction depending on environmental and biometric limitation.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"419 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122795926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
R. Hirayama, H. Nakayama, A. Shiraki, T. Kakue, T. Shimobaba, T. Ito
Unlike conventional 2-D displays, volumetric displays carry depth information and allow 3-D images to be observed from any surrounding viewpoint. In a previous study, we developed an algorithm that exploits the 3-D architecture of volumetric displays [Nakayama et al. 2013]. A 3-D object designed by the algorithm can exhibit multiple 2-D images in different directions simultaneously. Figure 2a shows a prototype 3-D crystal designed with the proposed algorithm. As shown in Figure 2b, three different images can be recognized when the crystal is viewed from the respective directions. In other words, the 3-D crystal independently presents three images with directivity.
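For intuition, a much simpler scheme than the authors' algorithm can already produce a voxel object whose silhouettes match two different binary images along two orthogonal directions: intersect the extrusions of the two images. The sketch below illustrates that idea only; it is not the algorithm of [Nakayama et al. 2013], and the toy resolution, patterns, and names are assumptions.

```c
/* Illustrative sketch only: build a voxel object that shows different binary
 * images along two orthogonal viewing directions by intersecting the
 * extrusions of the images. This is NOT the algorithm of [Nakayama et al.
 * 2013]; array sizes and patterns are assumptions. */
#include <stdio.h>

#define N 4  /* toy volume resolution */

/* imgXY is seen along the z axis, imgYZ along the x axis (binary patterns). */
static const int imgXY[N][N] = { {1,1,1,1}, {1,0,0,1}, {1,0,0,1}, {1,1,1,1} };
static const int imgYZ[N][N] = { {1,0,0,1}, {0,1,1,0}, {0,1,1,0}, {1,0,0,1} };

int main(void)
{
    int voxel[N][N][N];

    /* A voxel is filled only if both target images are filled at the
     * corresponding pixels, so each view's silhouette matches (or is a
     * subset of) its target image. */
    for (int x = 0; x < N; ++x)
        for (int y = 0; y < N; ++y)
            for (int z = 0; z < N; ++z)
                voxel[x][y][z] = imgXY[x][y] && imgYZ[y][z];

    /* Print the silhouette seen along z to check it against imgXY. */
    for (int x = 0; x < N; ++x) {
        for (int y = 0; y < N; ++y) {
            int lit = 0;
            for (int z = 0; z < N; ++z) lit |= voxel[x][y][z];
            printf("%d", lit);
        }
        printf("\n");
    }
    return 0;
}
```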
{"title":"3-D crystal exhibiting multiple 2-D images with directivity","authors":"R. Hirayama, H. Nakayama, A. Shiraki, T. Kakue, T. Shimobaba, T. Ito","doi":"10.1145/2820926.2820936","DOIUrl":"https://doi.org/10.1145/2820926.2820936","url":null,"abstract":"Unlike conventional 2-D displays, volumetric displays have depth information and enable the 3-D images to be observed from any surrounding viewpoint. In a previous study, we developed an algorithm for utilizing the 3-D architecture of volumetric displays [Nakayama et al. 2013]. The 3-D objects designed by the algorithm can exhibit multiple 2-D images towards different directions simultaneously. Figure 2a shows a prototype of a 3-D crystal which has been designed using the proposed algorithm. As shown in Figure 2b, we can recognize three images when we look at it from the respective directions. In other words, the 3-D crystal is independently providing three images with directivity.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123965816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pei-Hsuan Tsai, Yu-Hsuan Huang, Tzu-Chieh Yu, M. Ouhyoung
We developed Scope+, a video see-through augmented reality (AR) stereo microscope that applies computer graphics technology to medical and biological purposes (Figure 1d). Medical scientists and researchers find it inconvenient to look up information in reference books or perform other tasks while simultaneously using a microscope. To address this problem, we modify the structure of a conventional microscope and apply AR technology for a better user experience. Not only text and video information but also computer-generated content can be overlaid onto the real macro scene (Figure 2b), so users no longer need to move away from the eyepieces.
{"title":"Video see-through augmented reality stereo microscope with customized interpupillary distance design","authors":"Pei-Hsuan Tsai, Yu-Hsuan Huang, Tzu-Chieh Yu, M. Ouhyoung","doi":"10.1145/2820926.2820960","DOIUrl":"https://doi.org/10.1145/2820926.2820960","url":null,"abstract":"We developed the Scope+ system, a video see-through augmented reality (AR) stereo microscope with computer graphics technology for medical and biological purposes (Figure 1d). We found out that it was inconvenient for medical scientists or researchers to look up information in reference books or do other tasks while using microscopes simultaneously. To solve the critical problems, we modify the structure of conventional microscope and implement AR technology for a better user experience. Not only text or video information, but also computer-generated content can be overlaid onto the real macro scene (Figure 2b); therefore users don't need to move away from the eyepieces anymore.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125359079","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This study proposes a novel 3D image display that uses a transparent solid cone and a flat-panel display. With this system, an image that is actually positioned below the base of the cone appears to be located within the cone.
{"title":"3D display that uses transparent cones","authors":"K. Yanaka, Masayuki Yamada","doi":"10.1145/2820926.2820957","DOIUrl":"https://doi.org/10.1145/2820926.2820957","url":null,"abstract":"A novel 3D image display that uses a transparent solid cone and a flat panel display is proposed in this study. Under this system, an image that is actually positioned below the base of the cone appears to be located within the cone.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115320294","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Biological cells are the smallest units of life, and the functions of the human body are mainly controlled by the cells of the central nervous system. In this work, we present a brain-shaped snow globe through which it is possible to experience the mystery of the living body. We manufactured the snow globe using neuro-scientific imaging, 3D computer-aided design, and microfabrication.
{"title":"Snow globe of a neural forest","authors":"A. Sato","doi":"10.1145/2820926.2820934","DOIUrl":"https://doi.org/10.1145/2820926.2820934","url":null,"abstract":"Biological cells are the smallest units of life. The functions of the human body are mainly controlled by the cells of the central nervous system. In this work We present a brain shaped snow globe, by which it is possible to experience the mystery of the living body. We manufactured the snow globe by using neuro-scientific imaging, 3D computer aided design and microfabrication.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128859580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A hybrid image is an image whose interpretation changes as a function of viewing distance [Oliva et al. 2006]. A hybrid image $I_H$ obtained by superimposing two images, $I_1$ and $I_2$, is represented as follows:
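The equation itself is missing from this excerpt. Under the assumption that the poster follows the standard hybrid-image formulation of [Oliva et al. 2006], the low-frequency content of one image is combined with the high-frequency content of the other:

$$ I_H = G_1 \ast I_1 + \left( I_2 - G_2 \ast I_2 \right), $$

where $G_1$ and $G_2$ are low-pass (Gaussian) filters with different cutoff frequencies and $\ast$ denotes convolution.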
{"title":"A method to generate hybrid pointillistic images","authors":"Junichi Sugita, Tokiichiro Takahashi","doi":"10.1145/2820926.2820962","DOIUrl":"https://doi.org/10.1145/2820926.2820962","url":null,"abstract":"A hybrid image is an image that changes interpretation as a function of viewing distance [Oliva et al. 2006]. Hybrid image IH obtained by superimposing two images, I1 and I2, is represented as follows:","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131433344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
T. Takashina, Kotaro Aoki, Akiya Maekawa, Chihiro Tsukamoto, H. Kawai, Yoshiyuki Yamariku, Kaori Tsuruta, Marie Shimokawa, Yuji Kokumai, H. Koike
In the era of IoT, information displays for IoT devices are needed in living spaces, and an ideal realization of such a display should be 'ubiquitous.' We propose the smart curtain as an implementation of ubiquitous computing in living space. It has four advantages: (a) it acts as a boundary between two spaces, such as indoor and outdoor; (b) it has a natural layer structure consisting of a translucent curtain and a drape curtain; (c) its relatively large area offers enough space to display a large amount of information as well as imagery that creates a pleasant atmosphere; and (d) it provides context awareness by sensing the surrounding environment and the state (open/closed) of the curtain. Although its visual quality is not expected to be high, many kinds of content do not require high visual quality.
{"title":"Smart curtain as interactive display in living space","authors":"T. Takashina, Kotaro Aoki, Akiya Maekawa, Chihiro Tsukamoto, H. Kawai, Yoshiyuki Yamariku, Kaori Tsuruta, Marie Shimokawa, Yuji Kokumai, H. Koike","doi":"10.1145/2820926.2820971","DOIUrl":"https://doi.org/10.1145/2820926.2820971","url":null,"abstract":"In the era of IoT, there is the necessity of information display for IoT devices in living space. One ideal realization should be 'ubiquitous' for such a purpose. We propose smart curtain as an implementation of ubiquitous computing in living space, which has the following four advantages; (a) a role of boundary between two spaces such as indoor and outdoor, (b) natural layer structure which consists of translucent curtain and drape curtain, (c) relatively large area which can offer enough space for displaying a lot of information and image which can make good atmosphere, and (d) context awareness by sensing surrounding environment and the state (opening/closing) of curtain. Though its visual quality is not expected to be so high, there should be many kinds of contents which don't need high visual quality.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131286316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sunho Ki, Jinhong Park, Jae-Ho Nah, Minkyu Kim, Youngduke Seo, Chulho Shin
Recent mobile GPUs support OpenGL ES 3.x for high-quality graphics content. Multiple render targets (MRTs) are one of the important new features of the OpenGL ES 3.x specification. MRTs allow rendering into multiple render-target textures (called the G-buffer) at once, which enables deferred shading: after various geometry data, such as color, normal, reflection, and refraction, are rendered into the G-buffer in the first pass, screen-space lighting can be performed using the G-buffer data in the second pass. Complex lighting with deferred shading is now very common on desktop and console devices.
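As context, the sketch below shows how a G-buffer with multiple render targets is typically created in OpenGL ES 3.0, assuming a current ES 3.0 context. The attachment count, texture formats, and the function name create_gbuffer are illustrative assumptions; the code shows only the standard MRT setup, not the MRT-reuse scheme proposed in this poster.

```c
/* Minimal sketch of G-buffer creation with multiple render targets in
 * OpenGL ES 3.0. Formats, sizes, and attachment count are illustrative;
 * this is the standard MRT setup, not the paper's MRT-reuse scheme. */
#include <GLES3/gl3.h>

GLuint create_gbuffer(GLsizei width, GLsizei height, GLuint tex[3])
{
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    /* Three color attachments: albedo, normal, and material parameters. */
    const GLenum formats[3] = { GL_RGBA8, GL_RGBA8, GL_RGBA8 };
    glGenTextures(3, tex);
    for (int i = 0; i < 3; ++i) {
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexStorage2D(GL_TEXTURE_2D, 1, formats[i], width, height);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, tex[i], 0);
    }

    /* Tell the GPU that the fragment shader writes to all three targets. */
    const GLenum draw_buffers[3] = {
        GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2
    };
    glDrawBuffers(3, draw_buffers);

    /* A depth attachment would normally be added as well; omitted here.
     * The lighting pass then binds these textures and renders a
     * full-screen quad to the default framebuffer. */
    return fbo;
}
```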
{"title":"Reusing MRTs for mobile GPUs","authors":"Sunho Ki, Jinhong Park, Jae-Ho Nah, Minkyu Kim, Youngduke Seo, Chulho Shin","doi":"10.1145/2820926.2820961","DOIUrl":"https://doi.org/10.1145/2820926.2820961","url":null,"abstract":"Recent mobile GPUs support OpenGL ES 3.x for high quality graphics contents. Multiple render targets (MRTs) are one of the new important features of the OpenGL ES 3.x specification. MRTs facilitate rendering multiple render-target textures (called G-buffer) at once, so this feature enables deferred shading; after various geometry data, such as color, normal, reflection, and refraction, are rendered into the G-buffer in the first pass, we can perform screen-space lighting using the G-buffer data in the second pass. Complex lighting with deferred shading is now very common on desktop and console devices.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133636412","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pavel A. Savkin, Daiki Kuwahara, Masahide Kawai, Takuya Kato, S. Morishima
The appearance of a human face changes with aging: sagging, spots, lusters, and wrinkles appear. Facial aging simulation techniques are therefore required for long-term criminal investigation. Although the appearance of an aged face varies greatly from person to person, wrinkles are one of the most important features representing human individuality, and the individuality of wrinkles is defined by their shape and position.
{"title":"Wrinkles individuality representing aging simulation","authors":"Pavel A. Savkin, Daiki Kuwahara, Masahide Kawai, Takuya Kato, S. Morishima","doi":"10.1145/2820926.2820942","DOIUrl":"https://doi.org/10.1145/2820926.2820942","url":null,"abstract":"An appearance of a human face changes due to aging: sagging, spots, lusters, and wrinkles would be observed. Therefore, facial aging simulation techniques are required for long-term criminal investigation. While the appearance of an aged face varies greatly from person to person, wrinkles are one of the most important features which represent the human individuality. An individuality of wrinkles is defined by wrinkles shape and position.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"1209 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133322726","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, N. Miyata, M. Tada, T. Okuma, T. Kurata, M. Mochimaru, T. Igarashi
Architecture-scale design requires two different viewpoints: a small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints, but this can be inefficient and time-consuming. We present a collaborative design system, Dollhouse, to address this problem. By using our system, users can discuss the design of the space from two viewpoints simultaneously. This system also supports a set of interaction techniques to facilitate communication between these two user groups.
{"title":"Dollhouse VR: a multi-view, multi-user collaborative design workspace with VR technology","authors":"Hikaru Ibayashi, Yuta Sugiura, Daisuke Sakamoto, N. Miyata, M. Tada, T. Okuma, T. Kurata, M. Mochimaru, T. Igarashi","doi":"10.1145/2820926.2820948","DOIUrl":"https://doi.org/10.1145/2820926.2820948","url":null,"abstract":"Architecture-scale design requires two different viewpoints: a small-scale internal view, i.e., a first-person view of the space to see local details as an occupant of the space, and a large-scale external view, i.e., a top-down view of the entire space to make global decisions when designing the space. Architects or designers need to switch between these two viewpoints, but this can be inefficient and time-consuming. We present a collaborative design system, Dollhouse, to address this problem. By using our system, users can discuss the design of the space from two viewpoints simultaneously. This system also supports a set of interaction techniques to facilitate communication between these two user groups.","PeriodicalId":432851,"journal":{"name":"SIGGRAPH Asia 2015 Posters","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114214157","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}