Diverse Layout Generation for Graphical Design Magazines
Authors: Sou Tabata, Haruka Maeda, Keigo Hirokawa, Kei Yokoyama
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364549

We propose a system that automatically generates layouts for magazines requiring graphic design. When images or text are input as the content to be placed, the system automatically generates an appropriate layout that accounts for both content and design. Layout generation proceeds by randomized processing constrained by a rule set of minimum conditions that every layout must satisfy (the minimum-condition rule set), producing a large number of candidates. An evaluation of each candidate's appearance, style, design, and composition is combined with an evaluation of its diversity, and the top-scoring candidates are returned. This automation makes layout creation much more efficient for users such as graphic designers, and lets them choose from a wide range of ideas to create attractive layouts.
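The generate-evaluate-select loop the abstract describes can be sketched as a greedy top-k selection that trades quality against diversity. This is a minimal illustration, not the authors' system: `layout_score`, `diversity`, the vector representation of a layout, and the weight `alpha` are all placeholder assumptions.

```python
import random

def layout_score(layout):
    # Placeholder aesthetic score; the paper combines evaluations of
    # appearance, style, design, and composition (details not given).
    return sum(layout) / len(layout)

def diversity(layout, chosen):
    # Distance of a candidate from already-selected layouts,
    # here a simple per-element L1 distance (an assumption).
    if not chosen:
        return 1.0
    return min(sum(abs(a - b) for a, b in zip(layout, c)) for c in chosen)

def generate_candidates(n, rng):
    # Stand-in for randomized generation under the minimum-condition
    # rule set; each "layout" is just a vector of element positions.
    return [[rng.random() for _ in range(4)] for _ in range(n)]

def select_top(candidates, k, alpha=0.5):
    # Greedily pick k layouts, balancing quality against diversity
    # relative to the layouts already chosen.
    chosen = []
    pool = list(candidates)
    while pool and len(chosen) < k:
        best = max(pool, key=lambda c: alpha * layout_score(c)
                   + (1 - alpha) * diversity(c, chosen))
        chosen.append(best)
        pool.remove(best)
    return chosen

rng = random.Random(0)
top = select_top(generate_candidates(200, rng), k=5)
```

The diversity term is what keeps the returned set from collapsing onto near-identical variants of the single highest-scoring layout.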
Generation of Photorealistic QR Codes
Authors: Shih-Syun Lin, Yu-Ming Chang, T. Le, Sheng-Yi Yao, Tong-Yee Lee
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364574
Multi-directional 3D Printing with Strength Retention
Authors: Yupeng Guan, Yisong Gao, Lifang Wu, Kejian Cui, Jianwei Guo, Zechao Liu
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364559

In this poster, we propose a refined scheme and system that realizes multi-directional 3D printing with the same strength as traditional unidirectional 3D printing. With the introduction of a 10.6 μm CO2 laser, the printing system heats the interfaces of already-printed components, increasing intermolecular-penetrating diffusion while the base layers of the next component are fabricated. The interfacial bonding strength between components is thereby augmented. Tensile tests demonstrate that the interfacial bonding strength increases by more than 27%, reaching that of integrally printed parts. The improved printing system makes multi-directional 3D printing with strength retention possible.
Non-Euclidean Embeddings for Graph Analytics and Visualisation
Authors: Daniel Filonik, Tian Feng, Ke Sun, R. Nock, Alex Collins, T. Bednarz
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364585
Focus stacking by multi-viewpoint focus bracketing
Authors: Yucheng Qiu, Daisuke Inagaki, K. Kohiyama, Hiroya Tanaka, Takashi Ijiri
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364592

We present an approach for obtaining high-quality focus-stacked images. The key idea is to integrate the multi-view structure-from-motion (SfM) algorithm with the focus-stacking process: we perform focus-bracketed shooting at multiple viewpoints, generate depth maps for all viewpoints using the SfM algorithm, and compute the focus stack from the depth maps and local sharpness. Using the depth maps, we achieve focus-stacking results with fewer artifacts around object boundaries and without halo artifacts, which were difficult to avoid with previous sharpest-pixel and pyramid approaches. To demonstrate the feasibility of our approach, we performed focus stacking of small objects such as insects and flowers.
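The per-pixel selection step can be illustrated as follows: score each bracketed frame by local sharpness, penalize frames whose in-focus depth disagrees with the SfM depth map, and pick the best frame per pixel. This is a simplified sketch under stated assumptions — the sharpness measure, the penalty weight, and the per-frame `focus_depths` are placeholders, not the paper's actual formulation.

```python
import numpy as np

def local_sharpness(img):
    # Laplacian magnitude as a simple per-pixel sharpness measure.
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4 * img)
    return np.abs(lap)

def focus_stack(frames, depths, focus_depths):
    # frames: list of HxW images from one focus-bracketing sweep.
    # depths: HxW depth map (from SfM in the paper; assumed given here).
    # focus_depths: depth at which each frame is in focus (assumption).
    frames = np.stack(frames)                       # (N, H, W)
    sharp = np.stack([local_sharpness(f) for f in frames])
    # Penalize frames whose focus depth disagrees with the scene depth;
    # this is what suppresses halos near object boundaries.
    fd = np.asarray(focus_depths)[:, None, None]
    penalty = np.abs(fd - depths[None, :, :])
    score = sharp - 0.5 * penalty                   # weight is an assumption
    pick = np.argmax(score, axis=0)                 # best frame per pixel
    return np.take_along_axis(frames, pick[None], axis=0)[0]

# Synthetic usage: three frames focused at different depths.
rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) for _ in range(3)]
depths = np.full((8, 8), 1.0)
result = focus_stack(frames, depths, focus_depths=[0.5, 1.0, 1.5])
```

A pure sharpest-pixel method corresponds to dropping the penalty term entirely; the depth consistency check is the ingredient the multi-viewpoint SfM step makes available.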
Nanoscapes: Authentic Scales and Densities in Real-Time 3D Cinematic Visualizations of Cellular Landscapes
Authors: Andrew R. Lilja, Shereen R Kadir, Rowan T. Hughes, Nick Gunn, Campbell W. Strong, Benjamin J. Bailey, R. Parton, J. McGhee
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364567

3D computer-animated representations of complex biological systems and environments are often vastly oversimplified, for several key reasons: to highlight a distinct biological mechanism of interest, because of technical limitations of computer graphics (CG) hardware and software, and because of a lack of data on cellular environments. This oversimplification perpetuates a naive understanding of fundamental cellular dynamics and topologies. This work addresses these challenges through a first-person interactive virtual environment that depicts molecular scales, densities, and interactions more authentically in real time. Driven by a collaboration between scientists, CG developers, and 3D computer artists, Nanoscapes utilizes the latest real-time CG pipelines to construct a cinematic 3D environment that better communicates the complexity of the cell surface and nanomedicine delivery to the cell.
Virtual Immersive Educational Systems: Early Results and Lessons Learned
Authors: Francesco Chinello, Konstantinos Koumaditis
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364586

Higher education is embracing digital transformation at a relatively slow rate, with few fragmented solutions showcasing the capabilities of new immersive technologies such as Virtual Reality (VR). One may argue that deployment costs and the substantial design knowledge required are critical stagnation factors in creating effective Virtual Immersive Educational (VIE) systems. We attempt to address these impediments with a cost-effective and user-friendly VIE system. In this paper, we briefly report the main elements of the design, initial results, and lessons learned.
Stealth Projection: Visually Removing Projectors from Dynamic Projection Mapping
Authors: Masumi Kiyokawa, Shinichi Okuda, N. Hashimoto
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364551

In this study, we propose a stealth projection method that visually removes the ProCam system from dynamic projection mapping (DPM). In recent years, DPM has been actively studied as a way to change the appearance of moving and deforming objects through image projection. Its targets now include hand-held objects, clothes, the human body, and the face, and its expressive possibilities continue to evolve. Realizing this, however, requires high-speed, multiplexed, specialized projection systems, which end up closely surrounding the target objects. In DPM, which aims to seamlessly connect the real and virtual worlds, such complex equipment is an unnecessary visual distraction and should be removed to fully exploit DPM's potential. We therefore propose a stealth projection method that hides the ProCam system from view by combining high-speed tracking with a single IR camera and all-around projection based on aerial-image display technology.
360-Degree-Viewable Tabletop Light-Field 3D Display Having Only 24 Projectors
Authors: S. Yoshida
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364553

Conventional light-field methods for producing 3D images with circular parallax on a tabletop surface require several hundred projectors. Our novel approach produces a similar light field using only one tenth as many. In our method, two cylindrical mirrors are inserted into the projection light paths. By appropriately folding the paths with these mirrors, we can form the image for any viewpoint in an annular viewing area from rays sourced from all projectors arranged on a circle.
HaptoBOX:
Authors: Kiichiro Kigawa, Toshikazu Ohshima
SIGGRAPH Asia 2019 Posters, 2019-11-17. DOI: 10.1145/3355056.3364560

This study proposes an interface device for augmenting multi-sensory reality, built on a visually unified experience with high consistency between the real and virtual worlds using video see-through mixed reality (MR). When the user puts on an MR head-mounted display (HMD) and holds a box-shaped device, virtual objects are displayed within the box, and vibrations and reaction forces are presented in synchrony with the objects' dynamics. Inside the device, multiple built-in actuators using solenoids and eccentric motors produce actions controlled in sync with the objects' motion. The user can also hear the sound emitted by virtual objects through 3D sound localization.