
SIGGRAPH Asia 2019 Posters: Latest Publications

Diverse Layout Generation for Graphical Design Magazines
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364549
Sou Tabata, Haruka Maeda, Keigo Hirokawa, Kei Yokoyama
We propose a system that automatically generates layouts for magazines that require graphical design. In this system, when images or texts are input as the content to be placed in layouts, an appropriate layout is automatically generated in consideration of content and design. The layout generation process is performed by randomized processing in accordance with a rule set of minimum conditions that layouts must satisfy (minimum condition rule set), under which a large number of candidates are generated. An evaluation of the appearance, style, design, and composition of the candidates is combined with an evaluation of their diverseness, and the top candidates under the combined evaluation are returned. The automation makes the layout creation task performed by users such as graphic designers much more efficient. It also allows the user to choose from a wide range of ideas to create attractive layouts.
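The abstract describes a generate-and-rank pipeline: randomized candidate generation under a minimum-condition rule set, a combined quality evaluation, and a diversity term used to return varied top candidates. Below is a minimal Python sketch of that selection loop, given only to illustrate the idea; the rule check, quality score, and layout-distance function are hypothetical placeholders, not the authors' implementation.

```python
import random

def satisfies_rules(layout):
    """Hypothetical minimum-condition rule set: every box stays inside the page."""
    return all(0.0 <= v <= 1.0 for box in layout for v in box)

def random_layout(num_items):
    """One candidate: a normalized (x, y, w, h) box per content item."""
    layout = []
    for _ in range(num_items):
        w, h = random.uniform(0.1, 0.5), random.uniform(0.1, 0.5)
        x, y = random.uniform(0.0, 1.0 - w), random.uniform(0.0, 1.0 - h)
        layout.append((x, y, w, h))
    return layout

def quality(layout):
    """Hypothetical stand-in for the combined appearance/style/design/composition score."""
    return -sum(w * h for _, _, w, h in layout)  # e.g. prefer less crowded pages

def distance(a, b):
    """Hypothetical layout dissimilarity: mean absolute coordinate difference."""
    return sum(abs(p - q) for ba, bb in zip(a, b) for p, q in zip(ba, bb)) / len(a)

def generate_diverse_layouts(num_items, n_candidates=1000, top_k=5, diversity_weight=0.5):
    # 1) randomized generation under the minimum-condition rule set
    candidates = []
    while len(candidates) < n_candidates:
        layout = random_layout(num_items)
        if satisfies_rules(layout):
            candidates.append(layout)
    # 2) greedy selection combining quality with diversity against already-picked layouts
    selected = [max(candidates, key=quality)]
    while len(selected) < top_k:
        def combined(c):
            return quality(c) + diversity_weight * min(distance(c, s) for s in selected)
        selected.append(max((c for c in candidates if c not in selected), key=combined))
    return selected

print(generate_diverse_layouts(num_items=4))
```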
Citations: 1
Generation of Photorealistic QR Codes
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364574
Shih-Syun Lin, Yu-Ming Chang, T. Le, Sheng-Yi Yao, Tong-Yee Lee
Citations: 1
Multi-directional 3D Printing with Strength Retention
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364559
Yupeng Guan, Yisong Gao, Lifang Wu, Kejian Cui, Jianwei Guo, Zechao Liu
In this poster, we propose a refined scheme and system to realize multi-directional 3D printing with strength comparable to that of traditional unidirectional 3D printing. With the introduction of a 10.6 μm CO2 laser, the printing system can heat the interfaces of the already printed components and increase intermolecular-penetrating diffusion while fabricating the base layers of the next components. Therefore, the interfacial bonding strength between components is augmented. Tensile tests demonstrate that the interfacial bonding strength can be increased by more than 27%, reaching the strength of integrally printed parts. The improved printing system makes it possible to realize multi-directional 3D printing with strength retention.
Citations: 2
Non-Euclidean Embeddings for Graph Analytics and Visualisation
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364585
Daniel Filonik, Tian Feng, Ke Sun, R. Nock, Alex Collins, T. Bednarz
Citations: 1
Focus stacking by multi-viewpoint focus bracketing
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364592
Yucheng Qiu, Daisuke Inagaki, K. Kohiyama, Hiroya Tanaka, Takashi Ijiri
We present an approach to obtain high-quality focus-stacking images. The key idea is to integrate the multi-view structure-from-motion (SfM) algorithm with the focus-stacking process: we carry out focus-bracketing shooting at multiple viewpoints, generate depth maps for all viewpoints using the SfM algorithm, and compute the focus stack using the depth maps and local sharpness. By using the depth maps, we achieve focus-stacking results with fewer artifacts around object boundaries and without halo artifacts, which were difficult to avoid with previous sharpest-pixel and pyramid approaches. To illustrate the feasibility of our approach, we performed focus stacking of small objects such as insects and flowers.
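As a rough illustration of the compositing step, the sketch below builds a focus-stacked image by keeping, for each pixel, the frame with the highest local sharpness. This is the baseline sharpest-pixel rule that the poster improves on with SfM depth maps; the variance-based sharpness measure and the function names here are assumptions for illustration, not the authors' method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_sharpness(gray, window=9):
    """Local contrast proxy: variance of intensity within a small window."""
    mean = uniform_filter(gray, window)
    mean_sq = uniform_filter(gray * gray, window)
    return mean_sq - mean * mean

def focus_stack(frames):
    """frames: list of HxW float grayscale images taken at different focus distances.
    Returns a composite that keeps, per pixel, the value from the sharpest frame."""
    stack = np.stack(frames)                                # (N, H, W)
    sharpness = np.stack([local_sharpness(f) for f in frames])
    best = np.argmax(sharpness, axis=0)                     # per-pixel index of sharpest frame
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]

# usage sketch: composite = focus_stack([img_near, img_mid, img_far])
```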
Citations: 1
Nanoscapes: Authentic Scales and Densities in Real-Time 3D Cinematic Visualizations of Cellular Landscapes
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364567
Andrew R. Lilja, Shereen R Kadir, Rowan T. Hughes, Nick Gunn, Campbell W. Strong, Benjamin J. Bailey, R. Parton, J. McGhee
3D computer-animated representations of complex biological systems and environments are often vastly oversimplified. There are a number of key reasons: to highlight a distinct biological mechanism of interest; technical limitations of hardware and software computer graphics (CG) capabilities; and a lack of data regarding cellular environments. This oversimplification perpetuates a naive understanding of fundamental cellular dynamics and topologies. This work attempts to address these challenges through the development of a first-person interactive virtual environment that more authentically depicts molecular scales, densities, and interactions in real time. Driven by a collaboration between scientists, CG developers, and 3D computer artists, Nanoscapes utilizes the latest CG advances in real-time pipelines to construct a cinematic 3D environment that better communicates the complexity associated with the cellular surface and nanomedicine delivery to the cell.
Citations: 0
Virtual Immersive Educational Systems: Early Results and Lessons Learned
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364586
Francesco Chinello, Konstantinos Koumaditis
Higher education is embracing digital transformation at a relatively slow adoption rate, with only a few fragmented solutions that portray the capabilities of new immersive technologies such as Virtual Reality (VR). One may argue that deployment costs and the substantial level of design knowledge required are critical stagnation factors in creating effective Virtual Immersive Educational (VIE) systems. We attempt to address these impediments with a cost-effective and user-friendly VIE system. In this paper, we briefly report the main elements of this design, initial results, and lessons learned.
Citations: 5
Stealth Projection: Visually Removing Projectors from Dynamic Projection Mapping
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364551
Masumi Kiyokawa, Shinichi Okuda, N. Hashimoto
In the present study, we propose a stealth projection method that visually removes the ProCam system in dynamic projection mapping (DPM). In recent years, DPM has been actively studied as a way to change the appearance of moving and deforming objects by image projection. Various objects, such as an object held by the user, clothes, a human body, and a face, serve as projection targets, and the possibilities for expressing these objects have continuously evolved. However, realizing this requires high-speed, multiplexed special projection systems, and the target objects end up closely surrounded by this equipment. In DPM, which seamlessly connects the real world and the virtual world, such a conspicuous apparatus is an unnecessary visual distraction and should be hidden in order to further exploit the potential of DPM. Therefore, we propose a stealth projection method in which the ProCam system cannot be seen, combining high-speed tracking with a single IR camera and all-around projection based on aerial image display technology.
Citations: 7
360-Degree-Viewable Tabletop Light-Field 3D Display Having Only 24 Projectors
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364553
S. Yoshida
Conventional light-field methods of producing 3D images with circular parallax on a tabletop surface require several hundred projectors. Our novel approach produces a similar light field using only one tenth that number of projectors. In our method, two cylindrical mirrors are inserted into the projection light paths. By appropriately folding the paths with the mirrors, we form any viewpoint image in an annular viewing area from a group of rays sourced from all the projectors arranged on a circle.
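To make the ray-assignment idea concrete, here is a toy 2D sketch that, for a viewer on the viewing circle, picks which of the 24 projectors best supplies the ray through a given tabletop point. It deliberately ignores the cylindrical-mirror folding, and the radii are hypothetical, so it is only a geometric illustration, not the authors' optical design.

```python
import math

NUM_PROJECTORS = 24
R_PROJ = 1.0   # hypothetical radius of the projector circle (m)
R_VIEW = 1.5   # hypothetical radius of the annular viewing area (m)

# 24 projectors evenly spaced on a circle around the tabletop
PROJECTORS = [(R_PROJ * math.cos(2 * math.pi * i / NUM_PROJECTORS),
               R_PROJ * math.sin(2 * math.pi * i / NUM_PROJECTORS))
              for i in range(NUM_PROJECTORS)]

def best_projector(table_point, viewpoint):
    """Pick the projector whose ray through `table_point` continues most nearly
    toward `viewpoint` (top-down 2D approximation, mirror folding ignored)."""
    def angle_error(proj):
        # outgoing ray direction: from the projector through the tabletop point
        ray = (table_point[0] - proj[0], table_point[1] - proj[1])
        # desired direction: from the tabletop point toward the viewpoint
        to_view = (viewpoint[0] - table_point[0], viewpoint[1] - table_point[1])
        dot = ray[0] * to_view[0] + ray[1] * to_view[1]
        norm = math.hypot(ray[0], ray[1]) * math.hypot(to_view[0], to_view[1])
        return math.acos(max(-1.0, min(1.0, dot / norm)))
    return min(PROJECTORS, key=angle_error)

# usage: which projector should draw the pixel at (0.1, 0.2) for a viewer at angle 0?
print(best_projector((0.1, 0.2), (R_VIEW, 0.0)))
```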
Citations: 0
HaptoBOX:
Pub Date : 2019-11-17 DOI: 10.1145/3355056.3364560
Kiichiro Kigawa, Toshikazu Ohshima
This study proposes an interface device that augments multi-sensory reality through a visually unified experience with a high level of consistency between the real and virtual worlds, using video see-through mixed reality (MR). When the user puts on an MR head-mounted display (HMD) and holds a box-shaped device, virtual objects are displayed within the box, and vibrations and reaction forces are presented in synchrony with the dynamics of the objects. Inside the device, multiple built-in actuators based on solenoids and eccentric motors produce actions synchronized with the motion of the objects. Furthermore, the user can also hear the sound emitted from the virtual objects through 3D sound localization.
Citations: 0